TensorFlow Binary Classification with examples in Python
Hello programmers, in this tutorial, we will learn Binary Classification using TensorFlow with examples.
All the code in this tutorial is run in a Google Colab notebook.
We have a dataset of cats vs. dogs for today’s binary classification.
Download and Prepare the Dataset
# Download the dataset of cats vs dogs
!wget https://storage.googleapis.com/tensorflow-1-public/course2/cats_and_dogs_filtered.zip
- Now we extract the dataset from the zip file and assign the directories for the training and validation sets.
import os
import zipfile

# Extract the archive
zip_ref = zipfile.ZipFile("./cats_and_dogs_filtered.zip", 'r')
zip_ref.extractall("tmp/")
zip_ref.close()

# Assign training and validation set directories
base_dir = 'tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')

# Directory with training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')

# Directory with training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')

# Directory with validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')

# Directory with validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
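As a quick sanity check (optional), you can count the images in each of the directories assigned above; the filtered dataset contains 1000 training and 500 validation images per class:

# Count the images in each directory to confirm the extraction worked
print('Training cats:', len(os.listdir(train_cats_dir)))
print('Training dogs:', len(os.listdir(train_dogs_dir)))
print('Validation cats:', len(os.listdir(validation_cats_dir)))
print('Validation dogs:', len(os.listdir(validation_dogs_dir)))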
Build the model
To build the model, we stack four convolution layers, each followed by a MaxPooling layer, and use the “relu” activation function. The final Dense layer has a single unit with a “sigmoid” activation, which is what makes the output suitable for binary classification.
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
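If you want to verify the layer output shapes and parameter counts, you can print a summary of the model:

# Print layer-by-layer output shapes and parameter counts
model.summary()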
Here is the part specific to binary classification:
- We use loss='binary_crossentropy' when compiling the model, since there are only two classes.
- Also, when creating train_generator and validation_generator with ImageDataGenerator and flow_from_directory, we have to pass class_mode; for binary classification, we set class_mode='binary'.
Let’s see this in code:
1. Compiling the model
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=1e-4),
              metrics=['accuracy'])
2. ImageDataGenerator
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

# Flow training images using the train_datagen generator
train_generator = train_datagen.flow_from_directory(
    train_dir,  # Source directory for training images
    target_size=(150, 150),
    batch_size=20,
    # Since we use binary_crossentropy loss, we need binary labels
    class_mode='binary')

# Flow validation images using the test_datagen generator
validation_generator = test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')
Train the model
# Train the model built above
history = model.fit(
    train_generator,
    steps_per_epoch=100,  # 2000 training images = batch_size * steps
    epochs=20,
    validation_data=validation_generator,
    validation_steps=50,  # 1000 validation images = batch_size * steps
    verbose=1)
Output:
Epoch 1/20
100/100 [==============================] - 10s 94ms/step - loss: 0.6863 - accuracy: 0.5425 - val_loss: 0.6602 - val_accuracy: 0.5980
Epoch 2/20
100/100 [==============================] - 9s 94ms/step - loss: 0.6453 - accuracy: 0.6165 - val_loss: 0.6262 - val_accuracy: 0.6600
Epoch 3/20
100/100 [==============================] - 9s 92ms/step - loss: 0.5948 - accuracy: 0.6780 - val_loss: 0.5953 - val_accuracy: 0.6760
Epoch 4/20
100/100 [==============================] - 9s 93ms/step - loss: 0.5650 - accuracy: 0.7015 - val_loss: 0.5796 - val_accuracy: 0.6780
Epoch 5/20
100/100 [==============================] - 9s 93ms/step - loss: 0.5440 - accuracy: 0.7095 - val_loss: 0.5820 - val_accuracy: 0.6920
Epoch 6/20
100/100 [==============================] - 9s 93ms/step - loss: 0.5139 - accuracy: 0.7405 - val_loss: 0.6229 - val_accuracy: 0.6490
Epoch 7/20
100/100 [==============================] - 9s 92ms/step - loss: 0.4900 - accuracy: 0.7640 - val_loss: 0.5664 - val_accuracy: 0.7160
Epoch 8/20
100/100 [==============================] - 10s 101ms/step - loss: 0.4683 - accuracy: 0.7770 - val_loss: 0.6086 - val_accuracy: 0.6930
Epoch 9/20
100/100 [==============================] - 9s 93ms/step - loss: 0.4481 - accuracy: 0.7775 - val_loss: 0.5889 - val_accuracy: 0.7120
Epoch 10/20
100/100 [==============================] - 9s 94ms/step - loss: 0.4196 - accuracy: 0.7980 - val_loss: 0.6005 - val_accuracy: 0.6980
Epoch 11/20
100/100 [==============================] - 9s 94ms/step - loss: 0.3971 - accuracy: 0.8230 - val_loss: 0.5423 - val_accuracy: 0.7310
Epoch 12/20
100/100 [==============================] - 9s 94ms/step - loss: 0.3656 - accuracy: 0.8390 - val_loss: 0.6199 - val_accuracy: 0.6930
Epoch 13/20
100/100 [==============================] - 9s 94ms/step - loss: 0.3482 - accuracy: 0.8485 - val_loss: 0.5643 - val_accuracy: 0.7330
Epoch 14/20
100/100 [==============================] - 9s 91ms/step - loss: 0.3209 - accuracy: 0.8640 - val_loss: 0.5616 - val_accuracy: 0.7450
Epoch 15/20
100/100 [==============================] - 9s 91ms/step - loss: 0.2987 - accuracy: 0.8750 - val_loss: 0.5337 - val_accuracy: 0.7430
Epoch 16/20
100/100 [==============================] - 9s 90ms/step - loss: 0.2745 - accuracy: 0.8940 - val_loss: 0.5738 - val_accuracy: 0.7470
Epoch 17/20
100/100 [==============================] - 9s 91ms/step - loss: 0.2504 - accuracy: 0.8940 - val_loss: 0.7697 - val_accuracy: 0.6950
Epoch 18/20
100/100 [==============================] - 9s 91ms/step - loss: 0.2324 - accuracy: 0.9120 - val_loss: 0.5576 - val_accuracy: 0.7570
Epoch 19/20
100/100 [==============================] - 9s 92ms/step - loss: 0.2137 - accuracy: 0.9155 - val_loss: 0.6398 - val_accuracy: 0.7400
Epoch 20/20
100/100 [==============================] - 9s 91ms/step - loss: 0.1903 - accuracy: 0.9275 - val_loss: 0.5814 - val_accuracy: 0.7350
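Because the final Dense layer uses a sigmoid activation, the trained model outputs a single probability per image. With flow_from_directory, class indices follow the alphabetical order of the subdirectory names, so cats map to 0 and dogs to 1, and a prediction above 0.5 means “dog”. Here is a minimal prediction sketch, assuming a hypothetical image file some_image.jpg:

import numpy as np
from tensorflow.keras.preprocessing import image

# Load and preprocess one image the same way the generators do
img = image.load_img('some_image.jpg', target_size=(150, 150))  # hypothetical path
x = image.img_to_array(img) / 255.0   # rescale exactly like ImageDataGenerator
x = np.expand_dims(x, axis=0)         # add the batch dimension

prediction = model.predict(x)[0][0]   # sigmoid output between 0 and 1
print('dog' if prediction > 0.5 else 'cat')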
So here we classified our dataset using binary classification. Looking at the output, the training accuracy climbs to about 93% while the validation accuracy stays around 73–75%, which means the model is overfitting rather than generalizing well. To improve this, you can apply data augmentation to the training images, as sketched below; after that, you should get better validation accuracy.
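Here is a minimal sketch of what an augmented training generator could look like, using the same ImageDataGenerator as above; the augmentation values are common starting points, not tuned for this dataset:

# Augmented training generator; validation images should still only be rescaled
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')

train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=20,
    class_mode='binary')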
Hopefully, you have learned Binary Classification using TensorFlow.