Helmet Detection using TensorFlow and Keras

In this article, we are going to build a helmet detection model in Python using the TensorFlow and Keras libraries.

The first step in creating a helmet detection classifier is to train the model on a large number of images; the more (and more varied) images we collect, the better the accuracy we can expect. It is best to gather real-world images, since ready-made datasets for this specific purpose are hard to find. You can search Google for images of people wearing helmets, but avoid very high-resolution images: the larger the data, the longer the model will take to train.

We could also use a larger, dedicated detection model such as YOLO, but in this article we will build our own classifier from scratch using a Convolutional Neural Network.

Data Preparation

First of all, import all the required libraries.

import numpy as np
from matplotlib import pyplot as plt
import os
import cv2
import random
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from keras.optimizers import RMSprop

After this, we use OpenCV to read and pre-process the images. The code below expects the dataset to be organised in an images folder with two sub-folders, neg (no helmet) and pos (helmet), from which the training data is built. Below is the Python code:

DATADIR = 'images'
CLASS = ['neg','pos']
IMG_SIZE = 50

neg = []
pos = []

# building the training data
def create_training_data():
  for cl in CLASS:
    path = os.path.join(DATADIR, cl)
    class_num = CLASS.index(cl)
    for img in os.listdir(path):
      try:
        img_array = cv2.imread(os.path.join(path, img))
        img_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
        gray_img = cv2.cvtColor(img_array, cv2.COLOR_BGR2GRAY)
        if class_num == 0:
          neg.append([gray_img, class_num])
        else:
          pos.append([gray_img, class_num])
      except Exception as e:
        # skip files that cannot be read as images
        pass
  # balance the classes: keep only as many negative samples as positive ones
  random.shuffle(neg)
  random.shuffle(pos)
  training_data = neg[:len(pos)] + pos
  return training_data

training_data = create_training_data()

After this, shuffle the training data so the positive and negative samples are mixed together.

random.shuffle(training_data)
print(len(training_data))

Then separate the features (X) and labels (y) and pre-process the data.

X = []
y = []

for features, label in training_data:
  X.append(features)
  y.append(label)

X = np.array(X).reshape(-1, IMG_SIZE, IMG_SIZE, 1)
y = np.array(y)

# normalizing the pixel values to the range [0, 1]
X = X/255.0
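Before building the model, it is worth confirming that the arrays have the expected shape and that the two classes are roughly balanced. A quick sanity check, assuming the variables above:

print(X.shape)                           # expected: (num_samples, 50, 50, 1)
print(np.unique(y, return_counts=True))  # counts of negative and positive samples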

The next step is to create a model using a Convolutional Neural Network.

model = Sequential()

model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=X.shape[1:], data_format='channels_last',))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))

model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))

model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))

model.add(Conv2D(256, kernel_size=(3, 3), activation='relu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.5))

model.add(Flatten())

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))

model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
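Now train the model on the prepared data. The plotting helper below expects the history object returned by model.fit, so keep a reference to it. A minimal sketch, where the batch size, number of epochs, validation split, and the file name used for saving are example values you can adjust:

# train on the prepared data; validation_split holds out part of X for validation
history = model.fit(X, y,
                    batch_size=32,
                    epochs=20,
                    validation_split=0.2)

# save the trained model so it can be loaded later for real-time detection
model.save('helmet_model.h5')  # example filename; use your own path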

Check how well your model performs by plotting the training and validation accuracy and loss curves.

# helper function to plot the results
def plot_result(history):
  # newer Keras versions store accuracy under 'accuracy'; older ones use 'acc'
  acc = history.history.get('accuracy', history.history.get('acc'))
  val_acc = history.history.get('val_accuracy', history.history.get('val_acc'))
  loss = history.history['loss']
  val_loss = history.history['val_loss']
  
  epochs = range(1, len(acc)+1)

  plt.plot(epochs, acc, label='Training acc')
  plt.plot(epochs, val_acc, label='Validation acc')
  plt.title('Training and validation accuracy')
  plt.xlabel('epochs')
  plt.ylabel('acc')
  plt.legend()

  plt.figure()

  plt.plot(epochs, loss, label='Training loss')
  plt.plot(epochs, val_loss, label='Validation loss')
  plt.title('Training and validation loss')
  plt.xlabel('epochs')
  plt.ylabel('loss')
  plt.legend()
  
  plt.show()
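Once training has finished, pass the history object returned by model.fit to this helper:

plot_result(history)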

[Plot: training and validation accuracy]

[Plot: training and validation loss]

Real-Time Detection using OpenCV

The first step is to load the trained model using Keras's load_model function and create an OpenCV face classifier from the haarcascade_frontalface_default.xml file.

from keras.models import load_model
import cv2
import numpy as np

model = load_model(<modelname>)   # path to your saved model file
face_clsfr = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
source = cv2.VideoCapture(0)

# class 0 = neg (no helmet), class 1 = pos (helmet), matching the training labels
labels_dict = {0: 'No Helmet', 1: 'Helmet Detected'}
color_dict = {0: (0, 0, 255), 1: (0, 255, 0)}

Now we can use OpenCV to run the detection on the live webcam feed.

while(True):
    ret, img = source.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_clsfr.detectMultiScale(gray, 1.3, 5)

    for (x, y, w, h) in faces:

        face_img = gray[y:y+h, x:x+w]
        # resize to the same size the model was trained on (IMG_SIZE = 50)
        resized = cv2.resize(face_img, (50, 50))
        normalized = resized/255.0
        reshaped = np.reshape(normalized, (1, 50, 50, 1))
        result = model.predict(reshaped)

        # the model has a single sigmoid output, so threshold it at 0.5
        label = int(result[0][0] > 0.5)

        cv2.rectangle(img, (x, y), (x+w, y+h), color_dict[label], 2)
        cv2.rectangle(img, (x, y-40), (x+w, y), color_dict[label], -1)
        cv2.putText(img, labels_dict[label], (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)

    cv2.imshow('LIVE', img)
    key = cv2.waitKey(1)

You can also use the code below to stop the loop and close the window by pressing the Esc key.

    # add this at the end of the while loop
    if key == 27:   # the Esc key
        break

cv2.destroyAllWindows()
source.release()
