Skin Cancer Detection using ResNet50
In this tutorial, we’ll use ResNet50 with Keras (running on the TensorFlow backend) to classify nine classes of skin lesions, then examine the results to see how the model might be used in practice.
What is ResNet50?
Keras Applications are deep learning models that ship with pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning.
ResNet-50 is a 50-layer deep convolutional neural network (48 Convolution layers along with 1 MaxPool and 1 Average Pool layer). A residual neural network (ResNet) is a type of artificial neural network (ANN) that builds a network by stacking residual blocks on top of each other.
We can load a version of the network pre-trained on more than a million images from the ImageNet database. That pre-trained network can classify images into 1000 object categories, such as keyboards, mice, pencils, and many animals. The network’s image input size is 224 × 224 pixels.
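As a quick, concrete illustration, here is a minimal sketch of using that pre-trained ImageNet classifier directly; the file name 'elephant.jpg' is only a placeholder for any image on disk:

import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

imagenet_model = ResNet50(weights='imagenet')   # full network, classification head included

img = image.load_img('elephant.jpg', target_size=(224, 224))  # placeholder path
x = image.img_to_array(img)                     # (224, 224, 3)
x = np.expand_dims(x, axis=0)                   # add batch dimension -> (1, 224, 224, 3)
x = preprocess_input(x)                         # ImageNet-specific preprocessing

preds = imagenet_model.predict(x)
print(decode_predictions(preds, top=3)[0])      # top-3 (id, class name, probability) tuples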
Introduction
Skin cancer, the most common human malignancy, is primarily diagnosed visually: a clinical screening may be followed by dermoscopic analysis, a biopsy, and histopathological testing. Because of the fine-grained variability in the appearance of skin lesions, automated classification of skin lesions from images is a difficult problem.
About the Dataset
The dataset consists of 2357 images of malignant and benign oncological conditions, compiled by the International Skin Imaging Collaboration (ISIC). All images were sorted according to the classification provided by ISIC, and all subsets contain roughly the same number of images, with the exception of melanomas and moles, whose images are somewhat over-represented.
The dataset contains the following nine diseases (one folder per class):
- actinic keratosis
- basal cell carcinoma
- dermatofibroma
- melanoma
- nevus
- pigmented benign keratosis
- seborrheic keratosis
- squamous cell carcinoma
- vascular lesion
Dataset Link: SkinCancer-ISIC
Setup
Import Libraries
First, we will import all the required libraries to solve this task.
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten, GlobalAveragePooling2D, MaxPooling2D, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img
from tensorflow.keras.models import Sequential
import numpy as np
import pathlib as pa
import glob
import matplotlib.pyplot as plt
import Augmentor
Import Data
Here we are importing data into train and test.
train = pa.Path('/content/drive/MyDrive/Data/Skin Cancer/Train/')
test = pa.Path('/content/drive/MyDrive/Data/Skin Cancer/Test/')
Check the Number of Folders in Train
Let’s check the number of folders in the train directory.
Code
folders = glob.glob('/content/drive/MyDrive/Data/Skin Cancer/Train/*')
folders
Output
['/content/drive/MyDrive/Data/Skin Cancer/Train/pigmented benign keratosis',
 '/content/drive/MyDrive/Data/Skin Cancer/Train/seborrheic keratosis',
 '/content/drive/MyDrive/Data/Skin Cancer/Train/dermatofibroma',
 '/content/drive/MyDrive/Data/Skin Cancer/Train/nevus',
 '/content/drive/MyDrive/Data/Skin Cancer/Train/vascular lesion',
 '/content/drive/MyDrive/Data/Skin Cancer/Train/melanoma',
 '/content/drive/MyDrive/Data/Skin Cancer/Train/squamous cell carcinoma',
 '/content/drive/MyDrive/Data/Skin Cancer/Train/actinic keratosis',
 '/content/drive/MyDrive/Data/Skin Cancer/Train/basal cell carcinoma']
As you can see, there are 9 folders inside the train directory, one per class.
Total Number of Images in the Dataset
Let’s count the total number of images in the train and test directories.
Code
total_train = len(list(train.glob('*/*.jpg')))
print("Train Images :", total_train)
total_test = len(list(test.glob('*/*.jpg')))
print("Test Images :", total_test)
Output
Train Images : 2239
Test Images : 118
Import Data into a TensorFlow Dataset
Let’s load these images off disk using the image_dataset_from_directory utility.
Use 80% of the images for training and 20% for validation.
Define IMG_SIZE as 224 × 224 and BATCH_SIZE as 32.
Code
IMG_SIZE = [224, 224]
BATCH_SIZE = 32
train_ds = tf.keras.utils.image_dataset_from_directory(train, validation_split=0.2, subset='training',
                                                       shuffle=True, batch_size=BATCH_SIZE,
                                                       image_size=IMG_SIZE, seed=123)
valid_ds = tf.keras.utils.image_dataset_from_directory(train, validation_split=0.2, subset='validation',
                                                       shuffle=True, batch_size=BATCH_SIZE,
                                                       image_size=IMG_SIZE, seed=123)
test_t = tf.keras.utils.image_dataset_from_directory(test, shuffle=True, batch_size=BATCH_SIZE,
                                                     image_size=IMG_SIZE)
Output
Found 11239 files belonging to 9 classes.
Using 8992 files for training.
Found 11239 files belonging to 9 classes.
Using 2247 files for validation.
Found 118 files belonging to 9 classes.
As you can see, 11239 images belonging to 9 classes were found in the training directory. (This is more than the 2239 raw .jpg files counted earlier; the extra images were presumably generated on disk beforehand, e.g., with the Augmentor library imported at the top.) Here we are using:
- 8992 images – for training.
- 2247 images – for validation.
- 118 images – for testing.
Visualize Random Images
Let’s visualize some random images of skin cancer.
Code
names = train_ds.class_names  # class labels inferred from the folder names
fig = plt.figure(figsize=(10, 10))
for img, label in train_ds.take(1):
    for i in range(9):
        fig.add_subplot(3, 3, i + 1)
        plt.imshow(img[i].numpy().astype('uint8'))
        plt.title(names[label[i]])
        plt.axis('off')
Output
(A 3 × 3 grid of random training images, each titled with its class name.)
Configure the dataset for performance
To load images from disk without I/O becoming a bottleneck, we use buffered prefetching. To learn more about this technique, see the tf.data performance guide.
There are two important methods you should use when loading data:
- Dataset.cache: keeps images in memory after they are loaded from disk during the first epoch, which prevents the dataset from becoming a bottleneck while training your model. If your dataset is too large to fit in memory, you can also use this method to create a performant on-disk cache (see the sketch after the code below).
- Dataset.prefetch: overlaps data preprocessing and model execution during training.
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
valid_ds = valid_ds.cache().prefetch(buffer_size=AUTOTUNE)
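If your dataset were too large to fit in memory, Dataset.cache also accepts a file path and will cache to disk instead. A minimal sketch, where the cache path is just an example:

# On-disk cache variant for datasets too large for memory;
# '/tmp/skin_cancer_train.cache' is an arbitrary example path.
train_ds = train_ds.cache('/tmp/skin_cancer_train.cache').shuffle(1000).prefetch(buffer_size=AUTOTUNE)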
Data Augmentation
Overfitting tends to arise when there are only a limited number of training examples. Data augmentation produces new training data from existing examples by applying random transformations that yield believable-looking images. This exposes the model to more aspects of the data and helps it generalize more effectively.
We will Implement Data Augmentation using the following Keras preprocessing layers:
- tf.keras.layers.RandomFlip()
- tf.keras.layers.RandomRotation()
- tf.keras.layers.RandomZoom()
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])
Visualize Augmented Image
Let’s visualize what a few augmented examples look like by applying data augmentation to the same image several times:
Code
plt.figure(figsize=(10, 10))
for images, _ in train_ds.take(1):
    for i in range(9):
        augmented_images = data_augmentation(images)
        ax = plt.subplot(3, 3, i + 1)
        plt.imshow(augmented_images[0].numpy().astype("uint8"))
        plt.axis("off")
Output
(A 3 × 3 grid showing augmented variants produced from the same training image.)
Model Building
Stages Include
- Load the pre-trained model.
- Exclude the classification head of ResNet50 with include_top=False.
- Freeze the layers in the network with trainable=False.
- Compile and fit the model.
Create the base model from a pre-trained convnet
First, instantiate a ResNet50 model pre-loaded with weights trained on ImageNet. By specifying the include_top=False argument, you load a network that doesn’t include the classification layers at the top, which is ideal for feature extraction.
Here we pass an image size of 224 × 224 pixels with 3 colour channels (RGB).
Code
IMG_SHAPE = IMG_SIZE + [3]
base_model = ResNet50(input_shape=IMG_SHAPE, weights='imagenet', include_top=False)
base_model.summary()
Output
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
94773248/94765736 [==============================] - 1s 0us/step
94781440/94765736 [==============================] - 1s 0us/step
Model: "resnet50"
__________________________________________________________________________________________________
 Layer (type)                           Output Shape           Param #   Connected to
==================================================================================================
 input_1 (InputLayer)                   [(None, 224, 224, 3)]  0         []
 conv1_pad (ZeroPadding2D)              (None, 230, 230, 3)    0         ['input_1[0][0]']
 conv1_conv (Conv2D)                    (None, 112, 112, 64)   9472      ['conv1_pad[0][0]']
 conv1_bn (BatchNormalization)          (None, 112, 112, 64)   256       ['conv1_conv[0][0]']
 conv1_relu (Activation)                (None, 112, 112, 64)   0         ['conv1_bn[0][0]']
 pool1_pad (ZeroPadding2D)              (None, 114, 114, 64)   0         ['conv1_relu[0][0]']
 pool1_pool (MaxPooling2D)              (None, 56, 56, 64)     0         ['pool1_pad[0][0]']
 conv2_block1_1_conv (Conv2D)           (None, 56, 56, 64)     4160      ['pool1_pool[0][0]']
 conv2_block1_1_bn (BatchNormalization) (None, 56, 56, 64)     256       ['conv2_block1_1_conv[0][0]']
 ... (summary truncated)
The above output shows the summary of our base model.
Freeze the convolutional base
Before compiling and training the model, it’s important to freeze the convolutional base. Freezing (by setting trainable = False) prevents the weights in a given layer from being updated during training. Because ResNet50 has many layers, setting the whole model’s trainable flag to False freezes all of them.
Code
base_model.trainable = False
base_model.summary()
Output
Model: "resnet50"
__________________________________________________________________________________________________
 Layer (type)                           Output Shape           Param #   Connected to
==================================================================================================
 input_1 (InputLayer)                   [(None, 224, 224, 3)]  0         []
 conv1_pad (ZeroPadding2D)              (None, 230, 230, 3)    0         ['input_1[0][0]']
 conv1_conv (Conv2D)                    (None, 112, 112, 64)   9472      ['conv1_pad[0][0]']
 conv1_bn (BatchNormalization)          (None, 112, 112, 64)   256       ['conv1_conv[0][0]']
 conv1_relu (Activation)                (None, 112, 112, 64)   0         ['conv1_bn[0][0]']
 pool1_pad (ZeroPadding2D)              (None, 114, 114, 64)   0         ['conv1_relu[0][0]']
 pool1_pool (MaxPooling2D)              (None, 56, 56, 64)     0         ['pool1_pad[0][0]']
 conv2_block1_1_conv (Conv2D)           (None, 56, 56, 64)     4160      ['pool1_pool[0][0]']
 conv2_block1_1_bn (BatchNormalization) (None, 56, 56, 64)     256       ['conv2_block1_1_conv[0][0]']
 conv2_block1_1_relu (Activation)       (None, 56, 56, 64)     0         ['conv2_block1_1_bn[0][0]']
 ... (summary truncated)
After freezing the layers, the above output shows the summary of the model.
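This tutorial keeps the entire base frozen. For reference, a common follow-up (not part of this tutorial’s pipeline) is to fine-tune by unfreezing only the top few layers and recompiling with a much lower learning rate, after the full model below has been built and trained once. A sketch, where the 10-layer cut-off and the 1e-5 learning rate are arbitrary illustrative choices:

# Optional fine-tuning sketch (run only after the full model below is built and trained).
base_model.trainable = True
for layer in base_model.layers[:-10]:
    layer.trainable = False                      # keep all but the last 10 layers frozen

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # low LR so pre-trained weights move slowly
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    metrics=['accuracy'],
)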
The Functional API
The Keras functional API allows you to design more flexible models than the tf.keras.Sequential API allows. Models with non-linear topologies, shared layers, and even multiple inputs and outputs are all supported by the functional API.
The fundamental idea is that a deep learning model is usually a directed acyclic graph (DAG) of layers, so we use the functional API, which lets us build such graphs of layers (a toy example follows).
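As a toy illustration, independent of this tutorial’s model, here is a small non-linear topology that tf.keras.Sequential cannot express: one input splits into two parallel branches that are merged again:

from tensorflow.keras import layers, Model

inp = layers.Input(shape=(64,))
h = layers.Dense(32, activation='relu')(inp)
branch_a = layers.Dense(16, activation='relu')(h)   # two parallel branches...
branch_b = layers.Dense(16, activation='relu')(h)
merged = layers.concatenate([branch_a, branch_b])   # ...merge again: a DAG, not a chain
out = layers.Dense(1, activation='sigmoid')(merged)
toy_model = Model(inputs=inp, outputs=out)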
The pieces we define with the functional API are:
- inputs: the input of our base model, i.e., ResNet50.
- preprocess_input: preprocesses a tensor or NumPy array encoding a batch of images (a quick check follows this list).
- Dropout layer: used to reduce overfitting.
- Dense layer: the last layer of the network, i.e., the output layer.
- Model: you create a Model by specifying its inputs and outputs in the graph of layers.
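For reference, ResNet50’s preprocess_input converts images from RGB to BGR and zero-centers each channel using the ImageNet means, without rescaling to [0, 1]. A quick check on an all-zero array (illustrative only):

import numpy as np

dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
print(preprocess_input(dummy)[0, 0, 0])
# roughly [-103.94 -116.78 -123.68]: the ImageNet channel means, in BGR order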
Code
inputs = base_model.input                       # reuse the base model's input tensor
x = data_augmentation(inputs)                   # random flip / rotation / zoom
x = preprocess_input(x)                         # ResNet50-specific preprocessing
x = base_model(x, training=False)               # keep BatchNorm layers in inference mode
global_average_layer = GlobalAveragePooling2D()
x = global_average_layer(x)                     # (None, 7, 7, 2048) -> (None, 2048)
x = Dropout(0.2)(x)                             # regularization against overfitting
outputs = Dense(len(folders), activation='softmax')(x)  # one unit per class
model = Model(inputs, outputs)
model.summary()
Output
Model: "model"
_________________________________________________________________
 Layer (type)                                       Output Shape           Param #
=================================================================
 input_1 (InputLayer)                               [(None, 224, 224, 3)]  0
 sequential (Sequential)                            (None, 224, 224, 3)    0
 tf.__operators__.getitem (SlicingOpLambda)         (None, 224, 224, 3)    0
 tf.nn.bias_add (TFOpLambda)                        (None, 224, 224, 3)    0
 resnet50 (Functional)                              (None, 7, 7, 2048)     23587712
 global_average_pooling2d (GlobalAveragePooling2D)  (None, 2048)           0
 dropout (Dropout)                                  (None, 2048)           0
 dense (Dense)                                      (None, 9)              18441
=================================================================
Total params: 23,606,153
Trainable params: 18,441
Non-trainable params: 23,587,712
Compile and Fit the model
Here we compile and fit the model we created earlier.
The parameters which we are using are:
- Optimizer: Adam.
- Metrics: accuracy.
- Loss: sparse categorical cross-entropy (with from_logits=False, since the output layer already applies softmax).
- Epochs: 10 (the number of complete passes over the training data).
Code
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    metrics=['accuracy']
)
history = model.fit(train_ds, epochs=10, validation_data=valid_ds)
Output
Epoch 1/10
281/281 [==============================] - 335s 1s/step - loss: 1.3812 - accuracy: 0.5060 - val_loss: 1.0996 - val_accuracy: 0.6044
Epoch 2/10
281/281 [==============================] - 83s 295ms/step - loss: 0.9919 - accuracy: 0.6390 - val_loss: 0.9408 - val_accuracy: 0.6600
Epoch 3/10
281/281 [==============================] - 83s 294ms/step - loss: 0.8663 - accuracy: 0.6826 - val_loss: 1.0250 - val_accuracy: 0.6235
Epoch 4/10
281/281 [==============================] - 82s 294ms/step - loss: 0.8015 - accuracy: 0.7062 - val_loss: 0.9269 - val_accuracy: 0.6489
Epoch 5/10
281/281 [==============================] - 82s 293ms/step - loss: 0.7546 - accuracy: 0.7241 - val_loss: 0.8514 - val_accuracy: 0.6751
Epoch 6/10
281/281 [==============================] - 82s 293ms/step - loss: 0.7296 - accuracy: 0.7320 - val_loss: 0.8030 - val_accuracy: 0.6969
Epoch 7/10
281/281 [==============================] - 82s 292ms/step - loss: 0.7005 - accuracy: 0.7417 - val_loss: 0.7529 - val_accuracy: 0.7063
Epoch 8/10
281/281 [==============================] - 82s 292ms/step - loss: 0.6684 - accuracy: 0.7542 - val_loss: 0.8744 - val_accuracy: 0.6782
Epoch 9/10
281/281 [==============================] - 82s 292ms/step - loss: 0.6579 - accuracy: 0.7582 - val_loss: 0.7404 - val_accuracy: 0.7174
Epoch 10/10
281/281 [==============================] - 82s 292ms/step - loss: 0.6418 - accuracy: 0.7681 - val_loss: 0.6663 - val_accuracy: 0.7481
As you can see, by epoch 10 the training accuracy is around 0.77 and the validation accuracy is about 0.75, and the loss decreases after almost every epoch.
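Note that the test_t dataset created earlier was never used. As a final step you might evaluate the trained model on it and plot the learning curves; a minimal sketch (exact numbers will vary from run to run):

# Evaluate the trained model on the held-out test images
test_loss, test_acc = model.evaluate(test_t)
print(f"Test loss: {test_loss:.4f}  Test accuracy: {test_acc:.4f}")

# Plot training vs. validation accuracy over the 10 epochs
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()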