Autonomous Lane Detection for Self-driving Cars in Python

Pipeline overview: original image → Canny → masked → lane detection

In this article, we will learn how to do autonomous lane detection for self-driving cars using Hough lines, masking, Canny filters, and Gaussian filters in Python. The primary libraries used in this project are OpenCV, NumPy, and Matplotlib. This technique has many real-life applications, most notably letting self-driving cars detect lanes accurately and follow them in real time.

The steps involved in this project to efficiently and accurately detect lanes are listed below:

  • Importing the required Libraries
  • Conversion of RGB to Grayscale
  • Gaussian Blur
  • Canny Filters
  • Masking
  • Combining Canny filters and masking images
  • Hough lines
  • Output image

Google Colaboratory is the preferred Python platform here because it provides a free NVIDIA Tesla K80 GPU and cloud computing. Click here to open a new Google Colab notebook.

To enable the NVIDIA-powered GPU, open the Colab notebook above and go to Edit -> Notebook settings -> Enable GPU.
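If you want to confirm that the GPU runtime is actually active (assuming the notebook is running on Colab with the GPU accelerator enabled), you can query the driver from a cell:

# Colab cell: query the NVIDIA driver to confirm a GPU is attached.
!nvidia-smi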


CODE TO DETECT LANES

The required libraries are imported.

import cv2                                    # OpenCV for image processing
import numpy as np                            # NumPy for array operations
import matplotlib.pyplot as plt               # Matplotlib for plotting
from google.colab.patches import cv2_imshow   # Colab-friendly replacement for cv2.imshow

LOADING THE LANE IMAGE

Load the lane image for autonomous detection and plot it.

# cv2.imread loads the image in BGR channel order.
image_c = cv2.imread("/content/road_124163875_1000.jpg")
plt.imshow(image_c)

RGB TO GRAYSCALE

Converting RGB to grayscale reduces each pixel from three colour channels to a single intensity value, so less data needs to be processed per pixel. This makes the pipeline faster and more efficient, and edge detection only needs intensity information anyway.

# The image was loaded in BGR order, so convert BGR -> grayscale.
image_g = cv2.cvtColor(image_c, cv2.COLOR_BGR2GRAY)
image_g.shape
(1000, 1000)
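Under the hood, this conversion is essentially a weighted sum of the three colour channels (roughly Y = 0.299 R + 0.587 G + 0.114 B). A minimal NumPy sketch of the same idea, using the image_c loaded above:

# image_c comes from cv2.imread, so its channel order is B, G, R.
b, g, r = image_c[:, :, 0], image_c[:, :, 1], image_c[:, :, 2]
# Standard luminance weighting, approximately what the grayscale conversion does.
gray_manual = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
gray_manual.shape   # one value per pixel instead of three, e.g. (1000, 1000)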

GAUSSIAN BLUR

In image processing, a Gaussian blur is the result of convolving an image with a Gaussian kernel. It smooths the image, reducing noise and fine detail that would otherwise produce spurious edges.

# 7x7 Gaussian kernel; sigma = 0 lets OpenCV derive it from the kernel size.
image_blur = cv2.GaussianBlur(image_g, (7,7), 0)
plt.imshow(image_blur)
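To see the weights that GaussianBlur applies, you can inspect the underlying one-dimensional Gaussian kernel (the 7x7 kernel is its outer product). A small sketch:

# Passing sigma = 0 makes OpenCV derive sigma from the kernel size,
# which matches the GaussianBlur call above.
kernel_1d = cv2.getGaussianKernel(7, 0)
kernel_2d = kernel_1d @ kernel_1d.T   # outer product gives the full 7x7 kernel
print(kernel_2d.round(3))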

CANNY FILTERS

Canny edge detection finds edges in a grayscale image using a multi-stage algorithm (gradient computation, non-maximum suppression, and hysteresis thresholding with a low and a high threshold). The blurred grayscale image is therefore passed as input.

# Hysteresis thresholds for Canny edge detection.
threshold_low = 10
threshold_high = 200

image_canny = cv2.Canny(image_blur, threshold_low, threshold_high)
plt.imshow(image_canny)
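The low and high thresholds above were chosen by hand for this image. A common heuristic, shown here only as a sketch and not part of the original code, is to derive them from the median intensity of the blurred image:

# Rule-of-thumb thresholds centred on the median intensity of the blurred image.
median = np.median(image_blur)
auto_low = int(max(0, 0.67 * median))
auto_high = int(min(255, 1.33 * median))
image_canny_auto = cv2.Canny(image_blur, auto_low, auto_high)
plt.imshow(image_canny_auto)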

MASKING

Masking defines the region of interest so that only that part of the image is kept. It removes the clutter that the Canny filter also picks up outside the road (sky, trees, surroundings). cv2.fillPoly() fills the chosen polygon, masking out the required part of the image, which here is the road area containing the lanes.

# Trapezoidal region of interest covering the road area in front of the car.
vertices = np.array([(20,950),(350,650),(650,650),(1000,950)])
mask = np.zeros_like(image_g)
cv2.fillPoly(mask, np.int32([vertices]), 255)   # fill the polygon with white
masked_image = cv2.bitwise_and(image_g, mask)   # keep only pixels inside the polygon
plt.imshow(masked_image)
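Before applying the mask it can also help to check the polygon visually. A quick sketch that draws the region-of-interest outline on a copy of the original image (the green colour and line thickness are arbitrary choices):

# Draw the ROI polygon on a copy of the original image to check its placement.
roi_preview = image_c.copy()
cv2.polylines(roi_preview, np.int32([vertices]), True, (0, 255, 0), 5)
plt.imshow(cv2.cvtColor(roi_preview, cv2.COLOR_BGR2RGB))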

CANNY + MASKING

Applying the mask to the Canny edge image keeps only the edges inside the region of interest, so the lane markings are isolated from the rest of the original image.

# Apply the same mask to the Canny edge image to keep only edges in the ROI.
masked_image = cv2.bitwise_and(image_canny, mask)
plt.figure()
plt.imshow(masked_image)

HOUGH LINES DETECTION AND DRAW FUNCTION

The Hough transform is a popular technique for detecting shapes that can be expressed mathematically, and it can connect distorted or broken line segments. The main parameters used with cv2.HoughLinesP are:

  • rho – distance resolution of the accumulator, in pixels
  • theta – angular resolution of the accumulator, in radians
  • threshold – the minimum number of votes (intersections) needed to accept a line
  • min_line_len – the minimum number of pixels that can make up a line
  • max_line_gap – the maximum gap in pixels between segments that may be connected into a single line

np.zeros creates a blank (all-black) image, and on this blank image we draw the lane segments detected by the Hough transform.

# Hough transform parameters (see the list above).
rho = 2
theta = np.pi/180
threshold = 40
min_line_len = 100
max_line_gap = 50
lines = cv2.HoughLinesP(masked_image, rho, theta, threshold, np.array([]), minLineLength = min_line_len, maxLineGap = max_line_gap)

# Blank image on which the detected lane segments are drawn.
line_image = np.zeros((masked_image.shape[0], masked_image.shape[1], 3), dtype = np.uint8)

# Draw every detected segment in magenta.
for line in lines:
  for x1,y1,x2,y2 in line:
    cv2.line(line_image, (x1,y1), (x2,y2), [255, 0, 255], 35)
lines
array([[[485, 919, 494, 779]], 
[[573, 660, 765, 749]], 
[[239, 751, 448, 654]], 
[[632, 685, 754, 739]], 
[[648, 706, 780, 768]], 
[[249, 747, 404, 678]], 
[[598, 675, 773, 756]], 
[[274, 728, 385, 679]], 
[[249, 742, 409, 681]], 
[[264, 728, 365, 687]], 
[[556, 650, 760, 746]], 
[[571, 660, 696, 732]], 
[[290, 712, 401, 678]], 
[[359, 699, 461, 650]], 
[[276, 728, 386, 679]], 
[[278, 720, 407, 678]]], dtype=int32)

IDENTIFICATION

The detected lane lines are now superimposed on the original image, which makes the result easier to interpret in the context of the original scene.

cv2.addWeighted blends two images as a weighted sum: output = original_image × a + line_image × b + c, where a and b are the blend weights and c is a scalar added to every pixel.

a = 1
b = 1
c = 1

image = cv2.addWeighted(image_c, a, line_image, b, c)
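To make the formula concrete, here is the same arithmetic on a single pixel value (a toy example, not part of the lane pipeline): 100 × 1 + 50 × 1 + 1 = 151.

# Toy example of the addWeighted arithmetic: original*a + overlay*b + c.
original_px = np.uint8([[100]])
overlay_px = np.uint8([[50]])
cv2.addWeighted(original_px, 1, overlay_px, 1, 1)   # array([[151]], dtype=uint8)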

LANE DETECTION

Finally, we plot the Hough lines superimposed on the original image. From this result, real-time analysis and path planning can follow.

plt.figure() 
plt.imshow(image)
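As a convenience, the whole pipeline can be wrapped into one function. The sketch below simply repeats the steps above with the same parameters; note that the region-of-interest polygon is specific to this 1000 x 1000 image, and the function name is just a placeholder.

def detect_lanes(path):
  # Load the image (BGR order) and convert to grayscale.
  image_c = cv2.imread(path)
  image_g = cv2.cvtColor(image_c, cv2.COLOR_BGR2GRAY)

  # Smooth and detect edges.
  image_blur = cv2.GaussianBlur(image_g, (7,7), 0)
  image_canny = cv2.Canny(image_blur, 10, 200)

  # Mask the region of interest (polygon tuned for this 1000x1000 image).
  vertices = np.array([(20,950),(350,650),(650,650),(1000,950)])
  mask = np.zeros_like(image_g)
  cv2.fillPoly(mask, np.int32([vertices]), 255)
  masked = cv2.bitwise_and(image_canny, mask)

  # Probabilistic Hough transform and drawing of the detected segments.
  lines = cv2.HoughLinesP(masked, 2, np.pi/180, 40, np.array([]), minLineLength = 100, maxLineGap = 50)
  line_image = np.zeros((masked.shape[0], masked.shape[1], 3), dtype = np.uint8)
  if lines is not None:
    for line in lines:
      for x1,y1,x2,y2 in line:
        cv2.line(line_image, (x1,y1), (x2,y2), [255, 0, 255], 35)

  # Superimpose the detected lines on the original image.
  return cv2.addWeighted(image_c, 1, line_image, 1, 1)

result = detect_lanes("/content/road_124163875_1000.jpg")
plt.imshow(cv2.cvtColor(result, cv2.COLOR_BGR2RGB))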
FINAL THOUGHTS

A lane departure warning system (LDWS) is an efficient technique built with the thousands of road accidents in mind that are caused largely by driver fatigue and inattention to what lies ahead. Lane detection is a core part of such a system, and here it was implemented with Hough lines, Canny filters, and Gaussian blurs.

In this article, I tried to give readers a clear understanding of autonomous lane detection for self-driving cars using Hough lines, Canny filters, masking, and Gaussian filters.

To learn more from my Machine Learning blogs, click here.

The source code for the project can be downloaded from here.

Thank you for reading the article, you can reach me at Jerrie-bright, will be happy to help.
