Covid-19 detection with X-Ray using Keras/TensorFlow CNNs

Hey Everyone!

This is a tutorial to help you make a CNN model for Covid-19 detection on X-Ray images using Keras/TensorFlow.

The current scenario the world is going through is extremely critical and needs prime attention from every individual. Yes, you got it right! It's the infamous COVID-19 pandemic that is a hot topic nowadays.

Our frontline workers have been on their toes constantly, helping us cope with this virus.

Now that the world is slowly getting back on its feet, with several industries resuming work and adjusting to the new normal, we must make sure we keep ourselves and others safe.

To that end, we need to keep finding new ways to determine whether someone has COVID or not, to ensure they are safe to enter a bio-bubble or any workspace.

Use of X-rays

X-rays are a common way of examining the inner workings of the human body; because X-rays pass through soft tissue, they let us find hidden symptoms inside the lungs, bones, brain, etc.

COVID-19 shares symptoms with the common cold and other cough/throat issues, which makes them tricky to distinguish from one another and hence creates difficulty in diagnosis.

However, with the help of X-rays we can examine the inner structure of the lungs for signs of COVID-19 infection, so classification based on X-ray images can be quite accurate.

We are going to perform the project using the deep learning algorithm of Convolutional Neural Network (CNN).

What is CNN?

A CNN is an algorithm capable of extracting various features from the image we provide and deciding how important each feature is.

In a classification problem, for example, there are many factors in the frame, and the network learns which ones are more crucial than others.

We will look into various aspects involved in CNN while we code along. So folks! Let’s get started!

First, let us import all the required libraries:

import os
from time import time

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

from keras.models import Sequential, Model, load_model
from keras.layers import (Conv2D, SeparableConv2D, MaxPooling2D, AvgPool2D,
                          Flatten, Dense, Dropout, BatchNormalization,
                          Activation)
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from keras import applications, optimizers
from keras.optimizers import SGD, Adam
from keras.regularizers import l2
from keras.callbacks import TensorBoard
from ann_visualizer.visualize import ann_viz

 

 

We have imported the Sequential model; the convolutional layers Conv2D and SeparableConv2D; the pooling layers MaxPooling2D and AvgPool2D; Flatten, which converts the two-dimensional feature maps into a vector for the Dense layer, which is also imported to generate predictions; the Dropout layer; and BatchNormalization and Activation for effective standardization of the provided inputs. We further import Model and load_model for setting up, loading, and saving the model.

We also import the optimizers which include SGD and Adam.

Now we import one of the most crucial pieces for this project, the ImageDataGenerator. It conveniently augments your input images, expanding your dataset with different views of the same image and hence allowing better extraction of features from it.
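To get a feel for what the generator does, the two simplest transformations, the 1./255 rescaling and a horizontal flip, can be sketched in plain NumPy (a toy illustration on a made-up 2x2 image, not part of the Keras pipeline):

```python
import numpy as np

img = np.arange(12, dtype=np.float32).reshape(2, 2, 3)  # toy 2x2 RGB "image"

rescaled = img / 255.0       # what rescale=1./255 does to every pixel
flipped = img[:, ::-1, :]    # horizontal flip = reversing the width axis

# the flipped image's left column is the original's right column
print(np.array_equal(flipped[:, 0, :], img[:, 1, :]))   # → True
```

Every epoch the generator draws a fresh random combination of such transformations, so the model rarely sees the exact same pixels twice.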

 

Initializing Parameters:

height, width = 200, 200
continue_training = True
LOF, MOF, HOF, VHOF = 1, 3, 5, 7     # low order features, medium order features, high order features, very high
channels = 3
pooling_size = 2
output_classes = 2
batch_size = 3
steps_per_epoch = 1669
validation_steps = 400
epochs = 3

 

Now let us lay down several parameters: the height and width for the target size of the image, channel information, the pooling size to be used in MaxPooling (though you are free to pass it directly in the function call), the number of output classes, which is 2, i.e. normal and covid, the batch size, steps per epoch, validation steps, and the number of epochs, which is 3.

 

Understanding The Model:

def create_model():
    # build a sequential CNN: five Conv2D + MaxPooling2D blocks,
    # then Dropout, Flatten and two Dense layers
    model = Sequential()
    model.add(Conv2D(filters=16, kernel_size=2, padding="same",
                     activation="relu", input_shape=(200, 200, 3)))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=32, kernel_size=2, padding="same", activation="relu"))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=64, kernel_size=2, padding="same", activation="relu"))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=64, kernel_size=2, padding="same", activation="relu"))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=128, kernel_size=2, padding="same", activation="relu"))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(500, activation="relu"))
    model.add(Dropout(0.2))
    model.add(Dense(2, activation="softmax"))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

 

The next block of code will look familiar if you have ever trained a model with Keras. We first define the create_model() function, which builds the layers the neural network will use during training. The types of layers here include:

Conv2D

  • Conv2D: Also known as the 2D convolution layer, this layer generates a convolution kernel which is convolved with, in our case, the input images; the results of these layers are output tensors. You can see various parameters for these layers:
    • filters sets the number of filters the image passes through, which depends on the number of features we want to extract; it is usually advised to choose a power of 2.
    • kernel_size determines the dimensions of the kernel that slides over the image to produce the output tensor.
    • padding controls whether the input is padded around the borders so the kernel can extract maximum information; the parameter here can be either 'valid' or 'same'. 'same' pads so that the output dimensions match the input, while with 'valid' there is no padding, so the spatial dimensions shrink, which is the natural behaviour of convolution.
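As a quick sanity check on 'same' vs 'valid', the stride-1 output width can be computed by hand (conv_out is a small helper added here for illustration, not part of the model code):

```python
def conv_out(n, k, padding):
    """Spatial output size of a stride-1 convolution over n pixels
    with a kernel of size k."""
    if padding == 'same':
        return n            # padded so the output matches the input
    return n - k + 1        # 'valid': no padding, dimensions shrink

print(conv_out(200, 2, 'same'))    # → 200
print(conv_out(200, 2, 'valid'))   # → 199
```

This is why all the Conv2D layers in create_model() keep the spatial size unchanged: they all use padding="same".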

MaxPooling2D

  • MaxPooling2D: The job of this layer is to pool (down-sample) the input after the 2D convolution layers so it can be fed into the next layer without wasting computation or losing important information.
  • The layer down-samples the input representation by taking the maximum value inside the window defined by the pool_size parameter along each spatial dimension.
  • The output shape of a MaxPooling layer can be computed like this: output_shape = floor((input_shape - pool_size) / strides) + 1
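Plugging our numbers into the formula above (with the Keras default strides = pool_size; pool_out is a helper added for illustration) traces exactly how the spatial size shrinks through the five pooling layers:

```python
import math

def pool_out(n, pool_size, strides=None):
    """Output size of a 'valid' max-pooling over n pixels."""
    strides = strides or pool_size      # Keras default: strides = pool_size
    return math.floor((n - pool_size) / strides) + 1

# tracing the five MaxPooling2D(pool_size=2) layers in create_model():
n = 200
for _ in range(5):
    n = pool_out(n, 2)
    print(n)    # → 100, 50, 25, 12, 6
```

These sizes match the Output Shape column of the model summary printed later in the post.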

Dropout

  • Dropout layer: Sometimes the model memorizes the training data and overfits (the condition where the model performs very well on the training data but not on test data). For these situations we use dropout layers: we provide a rate, and during training the network randomly drops that fraction of neurons, reducing the model's reliance on any particular feature.

Flatten

  • Flatten layer: The flatten layer collapses the multi-dimensional feature maps produced by the convolution and pooling layers into a single one-dimensional vector, so the output can be fed into the Dense layers.

Dense

  • Dense: This is where the main computation of the neural network takes place; the output is produced by the activation function, which is applied element-wise to the sum of the 'bias' and the dot product of the 'kernel' and the 'input'.
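The Dense computation described above can be written out with NumPy (a sketch of the math only, not the Keras internals; the small shapes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)             # input vector with 4 features
kernel = rng.standard_normal((4, 3))   # weights of a Dense(3) layer
bias = rng.standard_normal(3)

# output = activation(dot(input, kernel) + bias), with element-wise ReLU
output = np.maximum(0.0, x @ kernel + bias)
print(output.shape)    # → (3,)
```

The softmax in the final Dense(2) layer works the same way, except the activation normalizes the two outputs into probabilities that sum to 1.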

 

Starting The Training:

def train_validate_model(my_model):
    classes = ['covid', 'normal']

    # augmentation for the training images
    train_datagen = ImageDataGenerator(
        rescale=1. / 255,
        horizontal_flip=True,
        vertical_flip=True,
        shear_range=0.2,
        zoom_range=0.2
    )

    # the validation images only need the rescaling, not the augmentation
    val_datagen = ImageDataGenerator(rescale=1. / 255)

    # note: the subset= argument is only valid together with validation_split;
    # our data is already split into the dataset/ and test/ directories
    training_set = train_datagen.flow_from_directory(
        'dataset/',
        target_size=(height, width),
        batch_size=batch_size,
        classes=classes,
        class_mode='categorical',
        shuffle=True
    )

    validation_set = val_datagen.flow_from_directory(
        'test/',
        target_size=(height, width),
        batch_size=batch_size,
        classes=classes,
        class_mode='categorical',
        shuffle=True
    )

    history = my_model.fit_generator(
        training_set,
        epochs=epochs,
        steps_per_epoch=steps_per_epoch,
        validation_steps=validation_steps,
        validation_data=validation_set
    )

 

The create_model function returns the model consisting of the various layers explained above. After this we move on to the main training of our model, which is done by the train_validate_model function.

In this function we use the previously mentioned ImageDataGenerator to generate various augmentations of the input data, which is useful when input data is scarce.

First, we define the two classes for our case, i.e. ['covid', 'normal']

 

Then we use the image data generator for the image augmentation. From it we extract our training and validation sets using the built-in flow_from_directory method of the object returned by ImageDataGenerator.

We provide the previously defined input size, batch size, and classes, and shuffle the data for better training. This leaves us with two sets: the training set and the validation set.

Now, using the history variable to store all the information produced during training, we start the process by calling the model.fit_generator function, with arguments including the training and validation sets, the number of epochs, steps per epoch, etc.

Evaluation:

Once training has completed we run the evaluation, and as you can see from the graphs, the chosen data and model seem to give nice results.

 

    print('Model score: ')
    score = my_model.evaluate_generator(validation_set, steps=100)
    print("Loss: ", score[0], "Accuracy: ", score[1])

    # Plot training & validation accuracy values
    plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
    plt.title('Model accuracy')
    plt.ylabel('Accuracy')
    plt.xlabel('Epoch')
    plt.legend(['Train', 'Test'], loc='upper left')
    plt.show()

    # Plot training & validation loss values
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model loss')
    plt.ylabel('Loss')
    plt.xlabel('Epoch')
    plt.legend(['Train', 'Test'], loc='upper left')
    plt.show()

    return my_model


def save(my_model):
    my_model.save('xray.h5')


def load():
    return load_model('xray.h5')

 

However, we can see that the validation accuracy started falling as the training accuracy increased, a clear case of overfitting, but with the accuracy still reasonably high we can ignore that for now.
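The standard remedy for this is early stopping: halt training once validation accuracy stops improving. Keras provides this as the EarlyStopping callback; the core idea can be sketched in plain Python (a standalone illustration, not part of the tutorial's code):

```python
def should_stop(val_accs, patience=2):
    """Return True once validation accuracy has failed to improve
    for `patience` consecutive epochs."""
    best, since_best = float('-inf'), 0
    for acc in val_accs:
        if acc > best:
            best, since_best = acc, 0
        else:
            since_best += 1
        if since_best >= patience:
            return True
    return False

# the val_accuracy values from the training log above
print(should_stop([0.9991, 1.0000, 0.8732], patience=1))   # → True
```

With patience=1 this would have stopped training after the third epoch, right where the validation accuracy dropped.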

Existing model found
Model loaded
Found 80 images belonging to 2 classes.
Found 14 images belonging to 2 classes.
Epoch 1/3
1669/1669 [==============================] - 434s 260ms/step - loss: 0.0273 - accuracy: 0.9923 - val_loss: 0.0000e+00 - val_accuracy: 0.9991
Epoch 2/3
1669/1669 [==============================] - 396s 237ms/step - loss: 0.0122 - accuracy: 0.9966 - val_loss: 5.9605e-08 - val_accuracy: 1.0000
Epoch 3/3
1669/1669 [==============================] - 386s 231ms/step - loss: 0.0036 - accuracy: 0.9990 - val_loss: 0.0026 - val_accuracy: 0.8732
Model score: 
Loss:  0.0 Accuracy:  0.8464285731315613

After the evaluation has finished we create a new function, save, which saves our model in the h5 format. This allows us to train it again from the last checkpoint or run inference with our model on various devices.

 

Also, a function called load has been created that loads this model if it already exists rather than recreating it.
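The "Existing model found / Model loaded" lines in the training log come from exactly this pattern: check for the saved file and only build from scratch when it is missing. A generic sketch of the idea (the stub lambdas below stand in for the create_model() and load() functions defined above):

```python
import os

def load_or_create(path, build_fn, load_fn):
    """Resume from a saved checkpoint if one exists, else build fresh."""
    if os.path.exists(path):
        print('Existing model found')
        return load_fn(path)
    print('No saved model found, creating one')
    return build_fn()

# usage with stubs; in the tutorial build_fn would be create_model
# and load_fn would be load_model
made = load_or_create('no_such_file.h5',
                      build_fn=lambda: 'new model',
                      load_fn=lambda p: 'loaded model')
```

Because `no_such_file.h5` does not exist, this call takes the build branch and returns the freshly built object.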

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 200, 200, 16)      208       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 100, 100, 16)      0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 100, 100, 32)      2080      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 50, 50, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 50, 50, 64)        8256      
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 25, 25, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 25, 25, 64)        16448     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 12, 12, 64)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 12, 12, 128)       32896     
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 6, 6, 128)         0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 6, 6, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 500)               2304500   
_________________________________________________________________
dropout_2 (Dropout)          (None, 500)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 1002      
=================================================================
Total params: 2,365,390
Trainable params: 2,365,390
Non-trainable params: 0
_________________________________________________________________

 

Testing The Model:

Once training is done, it is time to test our new model with the help of another handy and popular library called OpenCV.

 

OpenCV is an open-source computer vision library used for all kinds of vision-related tasks in modern computational projects. In our case we will use it to load test images as NumPy arrays and send them as tensors into our model, so we can predict whether an X-ray is of a covid patient or not. Images are read with OpenCV's imread function by providing the image's path. We use the os library in Python to collect the testing image filenames into a list, then run a for loop that feeds each filename into the reading function. We must also call the resize function to bring every image to the input size of the model, which here is 200x200.

classes = ['covid', 'normal']
import os
import cv2
import time

model = load()                     # the model trained and saved above
f = os.listdir('test/normal/')     # collect the test image filenames into a list
runTotal = len(f)
time1 = time.time()
for i in f:
    cur_img = cv2.imread('test/normal/' + str(i))
    cur_img = cv2.resize(cur_img, (200, 200))
    cur_img = cur_img / 255.0      # match the 1./255 rescaling used in training
    cur_img = np.expand_dims(cur_img, axis=0)
    print(classes[np.argmax(model.predict(cur_img))])
timetotal = time.time() - time1
fps = float(runTotal / timetotal)
print("FPS=%.2f, total frames = %.0f , time = %.4f seconds" % (fps, runTotal, timetotal))
covid
covid
covid
covid
covid
FPS=4.77, total frames = 5 , time = 1.0485 seconds

 

Using NumPy's expand_dims we add the extra batch axis that is required to run inference with any 2D CNN model. Finally, we call model.predict on the image to obtain a list of class probabilities, find the index of the highest probability, and look that index up in the class list we defined earlier. Thus we get our prediction!
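The argmax-to-label step can be seen in isolation with a hypothetical softmax output (the probabilities below are made up for illustration):

```python
import numpy as np

classes = ['covid', 'normal']

# a hypothetical model.predict output for one image: shape (1, 2)
probs = np.array([[0.13, 0.87]])

# argmax over the flattened array gives the index of the winning class
label = classes[np.argmax(probs)]
print(label)   # → normal
```

Since the output array holds exactly one row, flattening it before argmax (NumPy's default) is safe; with a batch of images you would pass axis=1 instead.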

Below is a Python script that opens an image, runs inference on it, and displays the predicted label with the cv2.putText function:

classes = ['Covid', 'Normal']

image = cv2.imread('Xray_testimage.jpeg')

# resize and rescale to the model's input shape before predicting
input_image = cv2.resize(image, (200, 200)) / 255.0
input_image = np.expand_dims(input_image, axis=0)

output = classes[np.argmax(model.predict(input_image))]

# draw the predicted label rather than a hard-coded string
image = cv2.putText(image, output, (50, 50), cv2.FONT_HERSHEY_SIMPLEX,
                    1, (255, 0, 0), 2, cv2.LINE_AA)

cv2.imshow('image', image)
cv2.waitKey()

And voila! You have successfully predicted whether an X-ray is COVID-19 affected or not with such a simple model. This is a display of the power that CNNs possess, and the use cases are limited only by your imagination.
