Multi-Class Image Classification Using Keras in Python

In this article, we'll work on a dataset with multiple classes using Keras in Python. We'll build a basic neural network and train it on the Fashion MNIST dataset. You can read about the dataset at the following link: Dataset-Fashion MNIST. Make sure you have installed all the required libraries. Let's start and understand how multi-class image classification can be performed.
IMPORT REQUIRED PYTHON LIBRARIES
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
LOADING THE DATASET
Now, import the fashion_mnist dataset, which ships with Keras.
fashion_mnist = keras.datasets.fashion_mnist
(images_train, labels_train), (images_test, labels_test) = fashion_mnist.load_data()
Name_of_the_classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
                       'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# summarize the loaded dataset
print('Train: X=%s, y=%s' % (images_train.shape, labels_train.shape))
print('Test: X=%s, y=%s' % (images_test.shape, labels_test.shape))
OUTPUT:
IMAGE PREPROCESSING
If you print any image, you'll see that its matrix contains pixel values between 0 and 255. Neural networks train better on small, normalized inputs, so we'll divide each element of the matrix by 255 to scale everything into the range 0–1.
print("Image matrix before preprocessing\n".center(80))
print(images_train[7])
images_train = images_train / 255.0
images_test = images_test / 255.0
print("Image matrix after preprocessing\n".center(80))
print(images_train[7])
OUTPUT:
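As a quick sanity check, the scaling step works the same on any uint8 array; a minimal NumPy-only sketch (the tiny 2×2 "image" here is made up for illustration):

```python
import numpy as np

# a fake 2x2 "image" with uint8 pixel values, standing in for one Fashion MNIST image
pixels = np.array([[0, 128], [255, 64]], dtype=np.uint8)

# dividing by a float promotes to float64 and maps 0..255 into 0..1
scaled = pixels / 255.0

print(scaled.min(), scaled.max())  # the bounds are now 0.0 and 1.0
```

The same single line handles the full (60000, 28, 28) training array, since NumPy broadcasts the division over every element.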
BUILDING THE MODEL
Mnist_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
Mnist_model.summary()
Here, I haven't created a very complex neural network. It isn't always necessary to stack many layers when a few are enough for your objective.
OUTPUT:
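The model above is a plain fully connected network. If you want a convolutional architecture instead, a minimal sketch might look like the following; the layer sizes (32 filters, 64 dense units) are illustrative choices of mine, not values from this article:

```python
import tensorflow as tf
from tensorflow import keras

# a small CNN sketch for 28x28 grayscale images; Conv2D expects a channel axis,
# so the input shape is (28, 28, 1) rather than (28, 28)
cnn_model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
cnn_model.summary()
```

Before calling fit on this variant, the image arrays would need the extra channel axis, e.g. `images_train.reshape(-1, 28, 28, 1)`.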
COMPILING AND TRAINING THE MODEL
Training my model on GPU:
with tf.device('/GPU:0'):
    Mnist_model.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])
    Mnist_model.fit(images_train, labels_train, epochs=5)
OUTPUT:
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 33s 550us/sample - loss: 0.5010 - accuracy: 0.8249
Epoch 2/5
60000/60000 [==============================] - 19s 311us/sample - loss: 0.3749 - accuracy: 0.8652
Epoch 3/5
60000/60000 [==============================] - 21s 356us/sample - loss: 0.3342 - accuracy: 0.8778
Epoch 4/5
60000/60000 [==============================] - 27s 454us/sample - loss: 0.3114 - accuracy: 0.8865
Epoch 5/5
60000/60000 [==============================] - 29s 484us/sample - loss: 0.2947 - accuracy: 0.8912

Evaluating on the test set:

test_loss, test_acc = Mnist_model.evaluate(images_test, labels_test)
print("test accuracy:", test_acc)
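We compiled with sparse_categorical_crossentropy because the labels are plain integers (0–9) rather than one-hot vectors; for a single sample, this loss is just the negative log of the probability the softmax assigned to the true class. A minimal NumPy sketch with made-up probabilities, purely for illustration:

```python
import numpy as np

# hypothetical softmax output for one sample over 3 classes
probs = np.array([0.1, 0.7, 0.2])
true_label = 1  # an integer class label, as in labels_train

# sparse categorical cross-entropy for this one sample
loss = -np.log(probs[true_label])
print(loss)  # -log(0.7) ~ 0.3567
```

If the labels had been one-hot encoded instead, the matching loss would be categorical_crossentropy.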
We get an accuracy of around 89%, which can be improved by adding more layers, dropout, and so on.
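As one concrete example of such an improvement, here is a hedged sketch of the same model with an extra Dense layer and a Dropout layer; the 0.2 rate and 64-unit layer are illustrative choices, not tuned values:

```python
from tensorflow import keras

improved_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.2),  # randomly zeroes 20% of activations during training
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
improved_model.compile(optimizer="adam",
                       loss="sparse_categorical_crossentropy",
                       metrics=["accuracy"])
```

Dropout only acts during training; at inference time the layer passes activations through unchanged, which helps reduce overfitting without affecting predictions.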
Now, let's use the test set to check how good our model is:
prediction = Mnist_model.predict(images_test)
for i in range(5):
    plt.grid(False)
    plt.imshow(tf.squeeze(images_test[i]), cmap=plt.cm.binary)
    plt.xlabel("Actual: " + Name_of_the_classes[labels_test[i]])
    plt.title("Prediction: " + Name_of_the_classes[np.argmax(prediction[i])])
    plt.show()
OUTPUT:
And our model predicts each of these test samples correctly. Hence, we have completed our multi-class image classification task successfully.
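Beyond eyeballing a few plots, the same np.argmax idea extends to the whole prediction matrix: overall accuracy is the fraction of rows whose highest-probability class matches the true label. A NumPy sketch with tiny made-up arrays (4 samples, 3 classes):

```python
import numpy as np

# fake prediction matrix (4 samples x 3 classes) and true labels, for illustration
prediction = np.array([[0.8, 0.1, 0.1],
                       [0.2, 0.5, 0.3],
                       [0.3, 0.3, 0.4],
                       [0.6, 0.2, 0.2]])
labels = np.array([0, 1, 2, 1])

predicted_classes = np.argmax(prediction, axis=1)  # most probable class per row
accuracy = np.mean(predicted_classes == labels)    # 3 of the 4 rows match
print(accuracy)  # 0.75
```

Applied to the real arrays, `np.mean(np.argmax(prediction, axis=1) == labels_test)` should agree with the accuracy reported by Mnist_model.evaluate.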
Thanks for reading and Happy Learning!