Bird detection using TensorFlow in Python

Hello Everyone,

In this machine learning tutorial, we will go through the process of understanding and developing a bird detection model using the TensorFlow deep learning library in Python. Birdwatching is a popular hobby, but identifying the many types of birds requires a lot of knowledge and literature.

To speed up this process, we will use the power of technology to create a simple bird detection model with TensorFlow that can rapidly detect different kinds of birds from images.

Understanding and using TensorFlow for bird detection is the topic of today’s lesson. We will build a Convolutional Neural Network (CNN) model with TensorFlow that learns from a previously acquired bird image dataset, and we will save the trained model for future use, which can be helpful for real-time bird detection.

Let’s dive into the process.

What is TensorFlow?

  • Firstly, let’s understand what TensorFlow is and why we’re using it for this detection task. TensorFlow is a free and open-source software library for Artificial Intelligence and Machine Learning. It can be applied to a wide range of tasks, including training neural networks and running inference with them.
  • TensorFlow provides a set of tools that make it simple to design and train models, as well as to deploy them on the cloud, in the browser, and on mobile devices.

Let’s Begin…..!!!

  • As I am working in Google Colab, I will first mount my Google Drive, which contains the required dataset saved as a zip file.
  • However, if you do not wish to use Drive to import your dataset, you can use any other available method (one alternative is sketched right after the mount command below).
  • Link for the Dataset: Click here.
from google.colab import drive
drive.mount('/content/drive')
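
If you prefer not to mount Drive, one alternative is Colab’s built-in file-upload widget. This is only a minimal sketch: it assumes the dataset archive is named archive.zip, the same file name used later in this tutorial.

from google.colab import files

uploaded = files.upload()   # opens a browser file picker; choose archive.zip here
!unzip -q archive.zip       # extract into the current working directory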

After connecting the drive in Google Colab, we will list the files and directories within it. To do so, we will simply use the “ls” command.

ls

Now that we are successfully connected to the drive and know the directories within, we will change the current working directory of the notebook environment. For that, we will use the following command to specify the location of the dataset, and then unzip the dataset file using the “!unzip file_name.zip” command.

%cd drive/MyDrive/Bird Classification/
!unzip archive.zip

Now that we have successfully unzipped the dataset file, let’s proceed to importing the necessary libraries.

Importing the Libraries for Bird Detection

import tensorflow as tf
import keras
import cv2
import os
import imageio
from tensorflow.keras.optimizers import Adam
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import h5py

We have successfully imported the basic libraries required to carry out the tasks. Now, before we get started, we need to check that our notebook is connected to a GPU. For that, we will use the following command.

import tensorflow as tf
tf.test.gpu_device_name()

If the output is a device string such as ‘/device:GPU:0’, we are connected to a GPU, which is essential for training on this many images in a reasonable time (an empty string means no GPU is attached).
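
As a side note, tf.test.gpu_device_name() is an older helper; a sketch of an equivalent check using the tf.config API looks like this:

import tensorflow as tf

# List the physical GPU devices visible to TensorFlow
gpus = tf.config.list_physical_devices('GPU')
print("GPUs available:", gpus)   # an empty list means no GPU is attached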

Next, we will provide the train and test dataset paths. Here we will need the ImageDataGenerator. The ImageDataGenerator accepts the original data, randomly transforms it, and returns only the new, transformed data. The Keras ImageDataGenerator class works by:

  • Accepting a batch of images used for training.
  • Taking this batch and applying a series of random transformations to each image in the batch (including random rotation, resizing, shearing, etc.).
  • Replacing the original batch with the new randomly transformed batch.
  • And finally Training the CNN on this randomly transformed batch (i.e., the original data itself is not used for training).
from tensorflow.keras.preprocessing.image import ImageDataGenerator
rescaled = ImageDataGenerator(rescale=1/255)   # rescale pixel values from [0, 255] to [0, 1]
train_fed = rescaled.flow_from_directory('/content/drive/MyDrive/Bird Classification/train',
                                         target_size=(128,128), batch_size=32, class_mode='categorical')
test_fed = rescaled.flow_from_directory('/content/drive/MyDrive/Bird Classification/test',
                                        target_size=(128,128), batch_size=32, class_mode='categorical')
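
Note that the generator above only rescales the pixel values. If you also want the random transformations described in the list (rotation, shearing, zooming, flips), you could extend the training generator along these lines; the augmentation values here are illustrative assumptions, not tuned settings.

train_aug = ImageDataGenerator(rescale=1/255,
                               rotation_range=20,      # random rotations up to 20 degrees
                               shear_range=0.2,        # random shearing
                               zoom_range=0.2,         # random zoom in/out
                               horizontal_flip=True)   # random left-right flips
train_fed_aug = train_aug.flow_from_directory('/content/drive/MyDrive/Bird Classification/train',
                                              target_size=(128,128), batch_size=32,
                                              class_mode='categorical')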

So far, we have our dataset ready with 54652 images belonging to 375 classes for training and 1875 images belonging to 375 classes for testing.

(NOTE : These values of images and class may vary based on the dataset provided.)
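
If you want to confirm which folder name maps to which label index, the generator exposes a class_indices dictionary; a quick sketch:

class_map = train_fed.class_indices          # dict mapping folder (species) name -> label index
print(len(class_map), "classes found")
print(list(class_map.items())[:5])           # show the first few name -> index pairs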

Also, read: Bird species detection using Keras in Python

Now, we will go ahead and define our CNN architecture. In this case, we are not going to use any predefined model; instead, we will build our own model by defining its layers.

  • We will use the Keras Conv2D layer, a 2D convolution layer (this layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs).
  • We will design our first layer with 16 filters, a (3,3) kernel, the ReLU activation function, and an input shape of (128,128,3). Similarly, we will build a 4-layer convolutional architecture.
  • This is followed by a Flatten layer and Dense layers (with a Dropout layer in between), where the final layer uses the Softmax activation function since we have a multi-class classification problem.
model = tf.keras.models.Sequential([tf.keras.layers.Conv2D (16,(3,3), activation = 'relu' , input_shape = (128,128,3)),
                                    tf.keras.layers.MaxPool2D(2,2),    #1st Layer

                                    tf.keras.layers.Conv2D (32,(3,3), activation = 'relu'),
                                    tf.keras.layers.MaxPool2D(2,2),    #2nd Layer

                                    tf.keras.layers.Conv2D (64,(3,3),activation = 'relu') , 
                                    tf.keras.layers.MaxPool2D(2,2),    #3rd Layer

                                    tf.keras.layers.Conv2D (128,(3,3),activation = 'relu') , 
                                    tf.keras.layers.MaxPool2D(2,2),    #4th Layer

                                    tf.keras.layers.Flatten(),

                                    tf.keras.layers.Dense(128, activation ='relu'),
                                    tf.keras.layers.Dropout(0.5),
                                    tf.keras.layers.Dense(375,activation = 'softmax')

])

Once we have our model ready, let’s take a look at the detailed summary of the model.

model.summary()

(Model summary: one row per layer showing its output shape and parameter count.)

Now, we will compile the model. For classification, we will use the Adam optimizer, categorical cross-entropy loss function and the accuracy metric.

from tensorflow.keras.optimizers import Adam
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
import h5py

The trained weights will be saved in HDF5 (.hdf5) format, which is why we import h5py. Next, we have to define after how many epochs without improvement training should stop. Early stopping monitors the validation loss with a patience of 8: if the validation loss does not improve for 8 epochs, training stops and the best weights are restored. We also use ReduceLROnPlateau with a patience of 6, which multiplies the learning rate by 0.1 when the validation loss plateaus.

In the ModelCheckpoint callback below, we provide the path where the model checkpoint should be saved; with save_best_only=True, only the best-performing weights are kept.

erl_stop = EarlyStopping(monitor="val_loss", patience=8, restore_best_weights=True )
mod_chk = ModelCheckpoint(filepath = '/content/drive/MyDrive/Bird Classification/my_model.hdf5', monitor='val_loss', save_best_only=True)
lr_rate = ReduceLROnPlateau( monitor = 'val_loss', patience=6, factor=0.1)

Now, we shall proceed to training the model. We pass the training dataset with shuffle=True, set the number of epochs to 20, and use the test data as the validation data.

hist = model.fit(train_fed, shuffle=True, epochs=20, validation_data=test_fed,
                           callbacks=[erl_stop,mod_chk,lr_rate], verbose=2)
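
The ModelCheckpoint callback already stores the best weights during training, but you could also save the final model explicitly once training finishes. A small sketch (the file name final_model.hdf5 is just an example):

# Save the final trained model to Drive (file name is illustrative)
model.save('/content/drive/MyDrive/Bird Classification/final_model.hdf5')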

Visualizing the Results

Finally, we shall visualize the results by plotting. Here we plot the loss against the epochs, where the x-axis represents the training epochs and the y-axis the train and test loss.

# plot the results: train and test loss per epoch
plt.plot(hist.history['loss'], color='blue', label='train')
plt.plot(hist.history['val_loss'], color='orange', label='test')
plt.grid(True)
plt.title("Train & test loss with Epochs", fontsize=18)
plt.xlabel('Training Epochs', fontsize=12)
plt.ylabel('Train & Test Loss', fontsize=12)
plt.legend()
plt.show()

Similarly, we plot the accuracy, where the x-axis represents the training epochs and the y-axis the train and test accuracy.

# plot the results: train and test accuracy per epoch
plt.plot(hist.history['accuracy'], color='blue', label='train')
plt.plot(hist.history['val_accuracy'], color='orange', label='test')
plt.grid(True)
plt.title("Train & test accuracy with Epochs\n", fontsize=18)
plt.xlabel('Training Epochs', fontsize=12)
plt.ylabel('Train & Test Accuracy', fontsize=12)
plt.legend()
plt.show()

In the end, we are printing the accuracy value that our model gives.

acc = model.evaluate(test_fed, steps=len(test_fed), verbose=2)
print('%.2f' % (acc[1]*100))

The presented approach identifies bird species using an image dataset and a deep learning algorithm for image classification.

  • There are around 375 categories and 54652 images in all.
  • The suggested model extracts CNN features from several convolutional layers.
  • These features are combined and then fed to the dense classifier layers for classification.
  • Based on the results it produces, the model achieves a high accuracy rate in predicting bird species.

This model can be enhanced by connecting it to a user-friendly website, where users can contribute bird photographs for identification and receive the predicted species in return.
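
As a rough sketch of how such an application could use the saved checkpoint, the .hdf5 file can be loaded back and run on a single photograph. This assumes the train_fed generator (for the class-name mapping) is still available and uses a hypothetical image file some_bird.jpg:

from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
import numpy as np

# Load the best checkpoint saved by ModelCheckpoint during training
saved = load_model('/content/drive/MyDrive/Bird Classification/my_model.hdf5')

img = image.load_img('some_bird.jpg', target_size=(128, 128))    # hypothetical uploaded photo
arr = image.img_to_array(img) / 255.0                             # same rescaling as training
arr = np.expand_dims(arr, axis=0)                                 # add the batch dimension

pred = saved.predict(arr)
class_names = {v: k for k, v in train_fed.class_indices.items()}  # label index -> species name
print("Predicted species:", class_names[int(np.argmax(pred))])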
