Salt Identification using Keras | Python
In this article, we detect salt deposits using a U-Net fused with simple residual blocks. Studies estimate that about 70% of the Earth's surface is covered with water, of which roughly 97.5% is saltwater, so salt is abundant across the planet, and many oil and gas reservoirs sit near large salt deposits. Identifying these deposits from seismic images still cannot be done with perfect accuracy in real time, so in this article we apply a deep learning technique to segment salt deposits with high precision and consistency.
In this article, we will be using the TGS Salt Identification dataset of seismic images and salt masks, provided by TGS, the world's largest provider of geospatial data.
The dataset for salt detection can be downloaded by clicking here.
The code implementation is as follows:
- Import libraries
- Pre-Processing
- Building the Network
- Data Augmentation
- Compiling and Training
- Prediction
Happy Reading!!!
REQUIRED LIBRARIES
Import the necessary packages.
import os
import sys
import random

import pandas as pd
import numpy as np

import matplotlib.pyplot as plt
plt.style.use('seaborn-white')
import seaborn as sns
sns.set_style("white")
%matplotlib inline

from sklearn.model_selection import train_test_split

from tqdm import tqdm_notebook, tnrange
from itertools import chain
from skimage.io import imread, imshow, concatenate_images
from skimage.transform import resize
from skimage.morphology import label

from keras.models import Model, load_model
from keras.layers import Input, Dropout, BatchNormalization, Activation, Add
from keras.layers.core import Lambda
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras import backend as K

import tensorflow as tf

from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img  # , save_img
PARAMETERS
Initialization of the necessary parameters for processing and training.
im_width = 101
im_height = 101
im_chan = 1
basicpath = '../input/'
path_train = basicpath + 'train/'
path_test = basicpath + 'test/'

path_train_images = path_train + 'images/'
path_train_masks = path_train + 'masks/'
path_test_images = path_test + 'images/'
PRE-PROCESSING
Defining helper functions for resizing images between the original resolution and the target resolution used by the network. Since both sizes are set to 101 here, the functions act as pass-throughs, but they make it easy to experiment with a different target size later; a small illustration follows the definitions.
img_size_ori = 101
img_size_target = 101

def upsample(img):  # not used when sizes match
    if img_size_ori == img_size_target:
        return img
    return resize(img, (img_size_target, img_size_target), mode='constant', preserve_range=True)

def downsample(img):  # not used when sizes match
    if img_size_ori == img_size_target:
        return img
    return resize(img, (img_size_ori, img_size_ori), mode='constant', preserve_range=True)
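Because img_size_ori and img_size_target are both 101, these helpers are effectively no-ops in this article. As a minimal sketch (not part of the original pipeline), assuming you changed img_size_target to 128, upsample() would stretch each tile with skimage's resize:

import numpy as np
from skimage.transform import resize

sample = np.random.rand(101, 101)   # stand-in for one grayscale seismic tile
stretched = resize(sample, (128, 128), mode='constant', preserve_range=True)
print(stretched.shape)              # (128, 128)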
DATASET
Loading the training and test IDs from the Kaggle dataset linked in the introduction and storing them in DataFrames for ease of use. The depth information (depths.csv) is also joined to the training frame in this section.
train_df = pd.read_csv("../input/train.csv", index_col="id", usecols=[0])
depths_df = pd.read_csv("../input/depths.csv", index_col="id")
train_df = train_df.join(depths_df)
test_df = depths_df[~depths_df.index.isin(train_df.index)]

len(train_df)
4000
GRAYSCALE
Loading every training image and mask as a grayscale array, scaling pixel values to [0, 1], and computing the salt coverage of each mask along with a coverage class used later for stratified splitting. A small example of the binning follows the code.
train_df["images"] = [np.array(load_img("../input/train/images/{}.png".format(idx), grayscale=True)) / 255 for idx in tqdm_notebook(train_df.index)] train_df["masks"] = [np.array(load_img("../input/train/masks/{}.png".format(idx), grayscale=True)) / 255 for idx in tqdm_notebook(train_df.index)] train_df["coverage"] = train_df.masks.map(np.sum) / pow(img_size_ori, 2) def cov_to_class(val): for i in range(0, 11): if val * 10 <= i : return i train_df["coverage_class"] = train_df.coverage.map(cov_to_class)
PLOT
Plotting the salt coverage and coverage-class distributions for visualization.
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
sns.distplot(train_df.coverage, kde=False, ax=axs[0])
sns.distplot(train_df.coverage_class, bins=10, kde=False, ax=axs[1])
plt.suptitle("Salt coverage")
axs[0].set_xlabel("Coverage")
axs[1].set_xlabel("Coverage class")
Plotting the depth distribution of the training and test sets for visualization.
sns.distplot(train_df.z, label="Train")
sns.distplot(test_df.z, label="Test")
plt.legend()
plt.title("Depth distribution")
SPLIT
Splitting the dataset into training and validation sets, stratified on the coverage class. A quick shape check follows the split.
ids_train, ids_valid, x_train, x_valid, y_train, y_valid, cov_train, cov_test, depth_train, depth_test = train_test_split(
    train_df.index.values,
    np.array(train_df.images.map(upsample).tolist()).reshape(-1, img_size_target, img_size_target, 1),
    np.array(train_df.masks.map(upsample).tolist()).reshape(-1, img_size_target, img_size_target, 1),
    train_df.coverage.values,
    train_df.z.values,
    test_size=0.2, stratify=train_df.coverage_class, random_state=1234)
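A quick sanity check (not in the original notebook): with 4000 training images and test_size=0.2, the split should yield 3200 training and 800 validation samples.

print(x_train.shape)   # (3200, 101, 101, 1)
print(x_valid.shape)   # (800, 101, 101, 1)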
RESIDUAL AND CONVOLUTIONAL BLOCKS
Declaring the helper functions that build the convolution and residual blocks used throughout the network. Each residual block applies two convolution blocks and adds the result back to its input. A quick shape check follows the definitions.
def convolution_block(x, filters, size, strides=(1, 1), padding='same', activation=True):
    x = Conv2D(filters, size, strides=strides, padding=padding)(x)
    x = BatchNormalization()(x)
    if activation == True:
        x = Activation('relu')(x)
    return x

def residual_block(blockInput, num_filters=16):
    x = Activation('relu')(blockInput)
    x = BatchNormalization()(x)
    x = convolution_block(x, num_filters, (3, 3))
    x = convolution_block(x, num_filters, (3, 3), activation=False)
    x = Add()([x, blockInput])
    return x
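As a standalone sanity check (not from the original notebook), the blocks can be wired together on a dummy input to confirm that the spatial size is preserved, since every convolution uses 'same' padding:

from keras.layers import Input, Conv2D, Activation
from keras.models import Model

inp = Input((101, 101, 1))
x = Conv2D(16, (3, 3), padding="same")(inp)
x = residual_block(x, 16)
x = residual_block(x, 16)
x = Activation('relu')(x)
print(Model(inp, x).output_shape)   # (None, 101, 101, 16)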
BUILDING MODEL
Building the model from convolution, residual, ReLU activation, and max-pooling layers, with dropout layers interspersed to reduce overfitting of the training data. Because this is single-class (binary) segmentation, the output layer is a single 1x1 convolution with a sigmoid activation. Note that two of the decoder's transposed convolutions use valid padding so the odd spatial sizes 25 and 101 can be recovered exactly; a short check of this arithmetic follows the function.
def build_model(input_layer, start_neurons, DropoutRatio=0.5):
    # 101 -> 50
    conv1 = Conv2D(start_neurons * 1, (3, 3), activation=None, padding="same")(input_layer)
    conv1 = residual_block(conv1, start_neurons * 1)
    conv1 = residual_block(conv1, start_neurons * 1)
    conv1 = Activation('relu')(conv1)
    pool1 = MaxPooling2D((2, 2))(conv1)
    pool1 = Dropout(DropoutRatio / 2)(pool1)

    # 50 -> 25
    conv2 = Conv2D(start_neurons * 2, (3, 3), activation=None, padding="same")(pool1)
    conv2 = residual_block(conv2, start_neurons * 2)
    conv2 = residual_block(conv2, start_neurons * 2)
    conv2 = Activation('relu')(conv2)
    pool2 = MaxPooling2D((2, 2))(conv2)
    pool2 = Dropout(DropoutRatio)(pool2)

    # 25 -> 12
    conv3 = Conv2D(start_neurons * 4, (3, 3), activation=None, padding="same")(pool2)
    conv3 = residual_block(conv3, start_neurons * 4)
    conv3 = residual_block(conv3, start_neurons * 4)
    conv3 = Activation('relu')(conv3)
    pool3 = MaxPooling2D((2, 2))(conv3)
    pool3 = Dropout(DropoutRatio)(pool3)

    # 12 -> 6
    conv4 = Conv2D(start_neurons * 8, (3, 3), activation=None, padding="same")(pool3)
    conv4 = residual_block(conv4, start_neurons * 8)
    conv4 = residual_block(conv4, start_neurons * 8)
    conv4 = Activation('relu')(conv4)
    pool4 = MaxPooling2D((2, 2))(conv4)
    pool4 = Dropout(DropoutRatio)(pool4)

    # Middle
    convm = Conv2D(start_neurons * 16, (3, 3), activation=None, padding="same")(pool4)
    convm = residual_block(convm, start_neurons * 16)
    convm = residual_block(convm, start_neurons * 16)
    convm = Activation('relu')(convm)

    # 6 -> 12
    deconv4 = Conv2DTranspose(start_neurons * 8, (3, 3), strides=(2, 2), padding="same")(convm)
    uconv4 = concatenate([deconv4, conv4])
    uconv4 = Dropout(DropoutRatio)(uconv4)
    uconv4 = Conv2D(start_neurons * 8, (3, 3), activation=None, padding="same")(uconv4)
    uconv4 = residual_block(uconv4, start_neurons * 8)
    uconv4 = residual_block(uconv4, start_neurons * 8)
    uconv4 = Activation('relu')(uconv4)

    # 12 -> 25
    deconv3 = Conv2DTranspose(start_neurons * 4, (3, 3), strides=(2, 2), padding="valid")(uconv4)
    uconv3 = concatenate([deconv3, conv3])
    uconv3 = Dropout(DropoutRatio)(uconv3)
    uconv3 = Conv2D(start_neurons * 4, (3, 3), activation=None, padding="same")(uconv3)
    uconv3 = residual_block(uconv3, start_neurons * 4)
    uconv3 = residual_block(uconv3, start_neurons * 4)
    uconv3 = Activation('relu')(uconv3)

    # 25 -> 50
    deconv2 = Conv2DTranspose(start_neurons * 2, (3, 3), strides=(2, 2), padding="same")(uconv3)
    uconv2 = concatenate([deconv2, conv2])
    uconv2 = Dropout(DropoutRatio)(uconv2)
    uconv2 = Conv2D(start_neurons * 2, (3, 3), activation=None, padding="same")(uconv2)
    uconv2 = residual_block(uconv2, start_neurons * 2)
    uconv2 = residual_block(uconv2, start_neurons * 2)
    uconv2 = Activation('relu')(uconv2)

    # 50 -> 101
    deconv1 = Conv2DTranspose(start_neurons * 1, (3, 3), strides=(2, 2), padding="valid")(uconv2)
    uconv1 = concatenate([deconv1, conv1])
    uconv1 = Dropout(DropoutRatio)(uconv1)
    uconv1 = Conv2D(start_neurons * 1, (3, 3), activation=None, padding="same")(uconv1)
    uconv1 = residual_block(uconv1, start_neurons * 1)
    uconv1 = residual_block(uconv1, start_neurons * 1)
    uconv1 = Activation('relu')(uconv1)

    uconv1 = Dropout(DropoutRatio / 2)(uconv1)
    output_layer = Conv2D(1, (1, 1), padding="same", activation="sigmoid")(uconv1)

    return output_layer
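A note on the two decoder stages that use padding="valid": a Keras Conv2DTranspose with padding="same" produces an output of size input * stride, which can only reach even sizes, while padding="valid" produces (input - 1) * stride + kernel. That is how the decoder recovers the odd sizes 25 and 101 exactly. A small sketch of the arithmetic (not part of the original notebook):

def deconv_out(size, kernel=3, stride=2, padding="same"):
    # Output length of a Keras Conv2DTranspose (no output_padding specified).
    return size * stride if padding == "same" else (size - 1) * stride + kernel

print(deconv_out(6))                    # 12  -> 'same' is enough here
print(deconv_out(12, padding="valid"))  # 25  -> 'valid' recovers the odd size
print(deconv_out(25))                   # 50
print(deconv_out(50, padding="valid"))  # 101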
PREDICTION
Defining the scoring metric for the model: a competition-style Intersection over Union (IoU), averaged over a range of match thresholds, which is later used for threshold optimization. The steps in this section are:
- Computing the union of the predicted and true areas
- Excluding the background from the analysis
- Computing the IoU
- Defining the precision helper function and averaging precision over thresholds
A note on TensorFlow 2 compatibility follows the code.
def iou_metric(y_true_in, y_pred_in, print_table=False):
    labels = y_true_in
    y_pred = y_pred_in

    true_objects = 2
    pred_objects = 2

    temp1 = np.histogram2d(labels.flatten(), y_pred.flatten(), bins=([0, 0.5, 1], [0, 0.5, 1]))
    intersection = temp1[0]

    area_true = np.histogram(labels, bins=[0, 0.5, 1])[0]
    area_pred = np.histogram(y_pred, bins=[0, 0.5, 1])[0]
    area_true = np.expand_dims(area_true, -1)
    area_pred = np.expand_dims(area_pred, 0)

    # Compute the union
    union = area_true + area_pred - intersection

    # Exclude the background from the analysis
    intersection = intersection[1:, 1:]
    intersection[intersection == 0] = 1e-9
    union = union[1:, 1:]
    union[union == 0] = 1e-9

    # Compute the intersection over union
    iou = intersection / union

    # Precision helper function
    def precision_at(threshold, iou):
        matches = iou > threshold
        true_positives = np.sum(matches, axis=1) == 1   # Correct objects
        false_positives = np.sum(matches, axis=0) == 0  # Missed objects
        false_negatives = np.sum(matches, axis=1) == 0  # Extra objects
        tp, fp, fn = np.sum(true_positives), np.sum(false_positives), np.sum(false_negatives)
        return tp, fp, fn

    # Average precision over a range of IoU thresholds
    prec = []
    if print_table:
        print("Thresh\tTP\tFP\tFN\tPrec.")
    for t in np.arange(0.5, 1.0, 0.05):
        tp, fp, fn = precision_at(t, iou)
        if (tp + fp + fn) > 0:
            p = tp / (tp + fp + fn)
        else:
            p = 0
        if print_table:
            print("{:1.3f}\t{}\t{}\t{}\t{:1.3f}".format(t, tp, fp, fn, p))
        prec.append(p)

    if print_table:
        print("AP\t-\t-\t-\t{:1.3f}".format(np.mean(prec)))
    return np.mean(prec)

def iou_metric_batch(y_true_in, y_pred_in):
    y_pred_in = y_pred_in > 0.5  # added by sgx 20180728
    batch_size = y_true_in.shape[0]
    metric = []
    for batch in range(batch_size):
        value = iou_metric(y_true_in[batch], y_pred_in[batch])
        metric.append(value)
    return np.mean(metric)

def my_iou_metric(label, pred):
    metric_value = tf.py_func(iou_metric_batch, [label, pred], tf.float64)
    return metric_value
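Note that tf.py_func is TensorFlow 1.x API, which matches the Keras version used in this article. If you run the code under TensorFlow 2.x / tf.keras, an equivalent wrapper (shown only as a sketch, untested here) would use tf.py_function instead:

def my_iou_metric_tf2(label, pred):
    # tf.py_function is the TF 2.x replacement for tf.py_func.
    return tf.py_function(iou_metric_batch, [label, pred], tf.float64)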
DATA AUGMENTATION
Augmentation increases the effective number of training samples without collecting new data. Collecting and training on a genuinely larger dataset would raise the computation cost; here we simply mirror the existing images and masks horizontally, doubling the training set at negligible cost while improving the robustness and consistency of the model. A richer on-the-fly alternative is sketched after the output below.
x_train2 = np.append(x_train, [np.fliplr(x) for x in x_train], axis=0)
y_train2 = np.append(y_train, [np.fliplr(x) for x in y_train], axis=0)
print(x_train2.shape)
print(y_valid.shape)
(6400, 101, 101, 1)
(800, 101, 101, 1)
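The imports also bring in ImageDataGenerator, although the article itself only uses horizontal flips. For reference, a sketch of richer on-the-fly augmentation (not part of the original training run, with illustrative parameter values) would pass the same seed to the image and mask generators so the pairs stay aligned:

from keras.preprocessing.image import ImageDataGenerator

aug_args = dict(horizontal_flip=True, rotation_range=10, zoom_range=0.1)
image_gen = ImageDataGenerator(**aug_args)
mask_gen = ImageDataGenerator(**aug_args)

seed = 1234
image_flow = image_gen.flow(x_train, batch_size=32, seed=seed)
mask_flow = mask_gen.flow(y_train, batch_size=32, seed=seed)
train_generator = zip(image_flow, mask_flow)   # yields (images, masks) batches
# model.fit_generator(train_generator, steps_per_epoch=len(x_train) // 32, epochs=epochs)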
COMPILING
Compiling the model built in the BUILDING MODEL section with binary cross-entropy loss, the Adam optimizer, and the custom IoU metric, then displaying the model summary.
input_layer = Input((img_size_target, img_size_target, 1))
output_layer = build_model(input_layer, 16, 0.5)

model = Model(input_layer, output_layer)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=[my_iou_metric])

model.summary()
Layer (type)                    Output Shape          Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 101, 101, 1)   0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 101, 101, 16)  160         input_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 101, 101, 16)  0           conv2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 101, 101, 16)  64          activation_1[0][0]
__________________________________________________________________________________________________
...
(intermediate layers omitted: the summary continues through the 18 residual blocks of the
encoder, middle, and decoder, the four max-pooling and four transposed-convolution stages,
and the skip connections that concatenate encoder and decoder feature maps)
...
__________________________________________________________________________________________________
dropout_9 (Dropout)             (None, 101, 101, 16)  0           activation_45[0][0]
__________________________________________________________________________________________________
conv2d_46 (Conv2D)              (None, 101, 101, 1)   17          dropout_9[0][0]
==================================================================================================
Total params: 5,122,801
Trainable params: 5,113,969
Non-trainable params: 8,832
__________________________________________________________________________________________________
TRAINING
Declaring the callback functions needed for training, then launching the training run. Training stopped at epoch 47 because of the early_stopping callback: it monitors the validation IoU metric and terminates training once there has been no improvement for 20 epochs, saving time and computation. The model_checkpoint callback keeps the weights from the best epoch, and reduce_lr lowers the learning rate when the metric plateaus.
early_stopping = EarlyStopping(monitor='val_my_iou_metric', mode='max', patience=20, verbose=1)
model_checkpoint = ModelCheckpoint("./unet_best1.model", monitor='val_my_iou_metric', mode='max',
                                   save_best_only=True, verbose=1)
reduce_lr = ReduceLROnPlateau(monitor='val_my_iou_metric', mode='max', factor=0.2,
                              patience=5, min_lr=0.00001, verbose=1)

epochs = 200
batch_size = 32

history = model.fit(x_train2, y_train2,
                    validation_data=[x_valid, y_valid],
                    epochs=epochs,
                    batch_size=batch_size,
                    callbacks=[early_stopping, model_checkpoint, reduce_lr],
                    verbose=2)
Train on 6400 samples, validate on 800 samples
Epoch 1/200
 - 116s - loss: 0.4175 - my_iou_metric: 0.1365 - val_loss: 0.7320 - val_my_iou_metric: 0.0537
Epoch 00001: val_my_iou_metric improved from -inf to 0.05375, saving model to ./unet_best1.model
Epoch 2/200
 - 98s - loss: 0.2794 - my_iou_metric: 0.3313 - val_loss: 0.6903 - val_my_iou_metric: 0.3506
Epoch 00002: val_my_iou_metric improved from 0.05375 to 0.35062, saving model to ./unet_best1.model
...
Epoch 00018: ReduceLROnPlateau reducing learning rate to 0.00020000000949949026.
...
Epoch 27/200
 - 98s - loss: 0.1028 - my_iou_metric: 0.7495 - val_loss: 0.1330 - val_my_iou_metric: 0.7680
Epoch 00027: val_my_iou_metric improved from 0.75875 to 0.76800, saving model to ./unet_best1.model
...
Epoch 00032: ReduceLROnPlateau reducing learning rate to 4.0000001899898055e-05.
...
Epoch 00037: ReduceLROnPlateau reducing learning rate to 1e-05.
...
Epoch 47/200
 - 98s - loss: 0.0879 - my_iou_metric: 0.7713 - val_loss: 0.1342 - val_my_iou_metric: 0.7624
Epoch 00047: val_my_iou_metric did not improve
Epoch 00047: early stopping
PLOTTING LOSS
Plotting the training and validation IoU metric over the 47 epochs run in the TRAINING section.
plt.plot(history.history['my_iou_metric'][1:])
plt.plot(history.history['val_my_iou_metric'][1:])
plt.title('Model IoU metric')
plt.ylabel('IoU')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
Plotting the training and validation losses from the same training history.
fig, (ax_loss, ax_acc) = plt.subplots(1, 2, figsize=(15, 5))
ax_loss.plot(history.epoch, history.history["loss"], label="Train loss")
ax_loss.plot(history.epoch, history.history["val_loss"], label="Validation loss")
LOAD
Loading the best checkpoint saved during training so it can be used for evaluation. The custom IoU metric must be passed through custom_objects.
model = load_model("./unet_best1.model", custom_objects={'my_iou_metric': my_iou_metric})
HELPER FUNCTION
Building a helper function that performs a simple test-time augmentation with the trained model: it predicts on each image and on its horizontally flipped copy, flips the second prediction back, and averages the two.
def predict_result(model, x_test, img_size_target):
    x_test_reflect = np.array([np.fliplr(x) for x in x_test])
    preds_test1 = model.predict(x_test).reshape(-1, img_size_target, img_size_target)
    preds_test2_reflect = model.predict(x_test_reflect).reshape(-1, img_size_target, img_size_target)
    preds_test2 = np.array([np.fliplr(x) for x in preds_test2_reflect])
    preds_avg = (preds_test1 + preds_test2) / 2
    return preds_avg
PREDICTION
Running the prediction helper from the previous section on the validation set and storing the (down-sampled) predictions and ground-truth masks for scoring.
preds_valid = predict_result(model, x_valid, img_size_target)
preds_valid2 = np.array([downsample(x) for x in preds_valid])
y_valid2 = np.array([downsample(x) for x in y_valid])
COMPARISON
Scoring the trained model over a range of binarisation thresholds and plotting threshold versus IoU to find the best threshold; the best threshold can then be used to binarise predictions, as sketched below.
thresholds = np.linspace(0.3, 0.7, 31)
ious = np.array([iou_metric_batch(y_valid2, np.int32(preds_valid2 > threshold))
                 for threshold in tqdm_notebook(thresholds)])

threshold_best_index = np.argmax(ious)
iou_best = ious[threshold_best_index]
threshold_best = thresholds[threshold_best_index]

plt.plot(thresholds, ious)
plt.plot(threshold_best, iou_best, "xr", label="Best threshold")
plt.xlabel("Threshold")
plt.ylabel("IoU")
plt.title("Threshold vs IoU ({}, {})".format(threshold_best, iou_best))
plt.legend()
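Once threshold_best is known, it can be used to binarise the predicted probability maps before scoring or building a submission. A short usage sketch (not in the original notebook):

binary_masks = np.int32(preds_valid2 > threshold_best)
print("Best threshold: {:.3f}, IoU at that threshold: {:.3f}".format(threshold_best, iou_best))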
FINAL THOUGHTS
In this article, a salt-deposit segmentation model was built, trained, and evaluated using a robust deep learning architecture. This matters because, in practice, specialists cannot identify salt deposits from seismic images with complete accuracy, so a network that is accurate and consistent is valuable. The workflow was: initialize the dataset, parameters, and libraries; pre-process and augment the data; build the U-Net model with residual blocks; compile and train it with callback functions; and finally make predictions and tune the threshold to measure the model's IoU.
The source code for the salt detection can be found and downloaded from here.
The dataset can be downloaded from here.
To learn from my other blogs, refer here.
Thank you. Hope this article was helpful!