BRIEF and FAST feature extraction | Python

In many real-life applications, cameras are used for detection and navigation because of their robustness and accuracy. Identifying features in the camera frames is essential for efficient navigation and detection. Take the example of an autonomous drone with stereo vision cameras that has to navigate from point A to point B. The drone captures video from the Camera Right (CR) and Camera Left (CL) streams, and the keyframes from CR and CL are compared to identify the similarity between the two frames for accurate navigation. The process therefore goes as follows: extract features (FAST + BRIEF), track those features, and then separate the outliers from the inliers.

Many techniques can be used depending on the project or implementation requirement at hand; we will look into one such technique in this article. There are several feature extraction methods, such as SURF, SIFT, and ORB. Similarly, there are feature descriptor techniques like BRIEF. We will look in depth at the architecture and algorithm of the BRIEF descriptor, and at the corner detection algorithm called FAST, which will be used for feature extraction.

Happy Reading!!!


Binary Robust Independent Elementary Features (BRIEF) will serve as the feature descriptor, and Features from Accelerated Segment Test (FAST) as the feature extractor. The explanation of both BRIEF and FAST is what makes this section interesting. Let’s dive into how they work.


BRIEF stands for Binary Robust Independent Elementary Features. It uses binary strings as descriptors (a descriptor is a vector representing a feature/keypoint). In other words, instead of SIFT’s 128-dimensional vector, BRIEF uses a binary string. The neighborhood around a keypoint is called a patch. The major function of BRIEF is to take the patch around a keypoint and convert it into a binary vector, so that each keypoint is described with binary 1’s and 0’s. One major consideration about BRIEF is that it is very noise-sensitive, since it compares individual pixel intensities. Smoothing is therefore very important: the patch is pre-smoothed with a Gaussian kernel before the BRIEF tests are applied.

The question now is how to choose the n (x, y) test pairs, where n is the length of the binary vector and τ is the binary test comparing the intensities at points x and y. To generate the random (x, y) pairs, the length of the binary vector has to be fixed first. The original paper compares five different spatial arrangements for sampling the pairs: uniform (GI), Gaussian (GII), a second Gaussian scheme where y is sampled around x (GIII), a coarse polar grid (GIV), and a coarse polar grid with x fixed at the patch centre (GV).
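To make the test τ concrete, here is a minimal NumPy sketch of BRIEF-style binary tests (an illustration of the idea, not OpenCV’s implementation; all function names are my own, and a simple box blur stands in for the Gaussian smoothing): n point-pair offsets are sampled from a Gaussian around the keypoint, roughly the GII strategy, and each comparison contributes one bit to the descriptor.

```python
import numpy as np

def box_blur(img, k=5):
    # Crude stand-in for the Gaussian smoothing BRIEF needs to suppress noise
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def brief_descriptor(image, keypoint, n_bits=256, patch_size=48, seed=0):
    """Simplified BRIEF: n_bits binary intensity comparisons around one keypoint."""
    rng = np.random.default_rng(seed)
    smoothed = box_blur(image)
    half = patch_size // 2
    # Sample (x, y) test-pair offsets from a Gaussian centred on the keypoint
    pairs = rng.normal(0, patch_size / 5, size=(n_bits, 4)).astype(int)
    pairs = np.clip(pairs, -half, half - 1)
    cy, cx = keypoint
    bits = [1 if smoothed[cy + y1, cx + x1] < smoothed[cy + y2, cx + x2] else 0
            for x1, y1, x2, y2 in pairs]
    return np.array(bits, dtype=np.uint8)

img = np.zeros((100, 100))
img[40:60, 40:60] = 255          # a bright square, so the comparisons are non-trivial
desc = brief_descriptor(img, (50, 50))
print(len(desc))                 # a 256-bit binary descriptor
```

Each of the 256 bits is simply the outcome of one intensity comparison, which is why matching BRIEF descriptors reduces to counting differing bits.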


FAST stands for Features from Accelerated Segment Test. It is one of the fastest feature extraction techniques, which makes it well suited to live, real-time applications where efficient computation matters. As the first step in detecting corners, it takes a pixel p from the image and considers the circle of 16 pixels around it, called the Bresenham circle. Each pixel on the circle is labelled from 1 to 16. The test then checks whether there is a contiguous arc of N circle pixels that are all brighter (or all darker) than p by some threshold. The condition for a circle pixel x to support p being a corner is that:

  • The intensity of x should be greater than the intensity of p + threshold
  • Or the intensity of x should be less than the intensity of p – threshold

The most significant step in this method is choosing N and the threshold. N is the number of contiguous circle pixels, out of 16, that must pass the test. The larger N, the more accurate the detection, but also the more computation; N = 12 is used in most cases, so this trade-off has to be considered carefully to get the desired results. For non-maximal suppression, a score V is computed for each detected corner as the sum of absolute differences between pixel p and the surrounding circle pixels. For two adjacent keypoints, the corresponding V values are compared and the one with the lower V is discarded, thereby avoiding multiple interest points at the same corner.
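The segment test just described can be sketched in plain Python (a didactic version; in practice `cv2.FastFeatureDetector_create` is far faster). One subtlety worth seeing in code: a perfect 90° corner produces a darker arc of only 11 circle pixels, so the check at the corner below uses the FAST-9 variant (n=9) rather than N=12.

```python
import numpy as np

# (dx, dy) offsets of the 16-pixel Bresenham circle of radius 3 around p
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, n=12):
    """Segment test: n contiguous circle pixels all brighter than p+t or all darker than p-t."""
    p = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for flags in ([v > p + t for v in ring], [v < p - t for v in ring]):
        run = 0
        for f in flags + flags:   # doubling the list handles wrap-around on the circle
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

img = np.zeros((20, 20), dtype=np.uint8)
img[:10, :10] = 255                      # bright quadrant on a dark background
print(is_fast_corner(img, 9, 9, n=9))    # True: pixel at the corner of the quadrant
print(is_fast_corner(img, 5, 5))         # False: interior pixel, uniform neighbourhood
```

The interior pixel fails because every circle pixel has the same intensity as p, so neither the brighter nor the darker test ever starts a run.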

Note: BRIEF is a feature descriptor only, so for feature extraction we have to use some other extraction technique such as FAST, SURF, or SIFT. In this article, we will use the FAST corner detector for feature extraction, and BRIEF for feature description.


Let’s split the implementation into three steps. First, we input the image and create a duplicate of it with scaling and rotation applied, so we can check invariance. Then we use the FAST extractor and the BRIEF descriptor to extract and mark the features. Finally, the feature points are matched between the two images.


Importing necessary Python libraries

import cv2
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

Loading the image for implementation

image1 = cv2.imread('/content/road.jpeg')

Converting the training image from BGR to RGB, and then to grayscale.

training_image = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
training_gray = cv2.cvtColor(training_image, cv2.COLOR_RGB2GRAY)

Creating the test image for tracking by applying scale and rotation changes, to test invariance.

test_image = cv2.pyrDown(training_image)
test_image = cv2.pyrDown(test_image)
num_rows, num_cols = test_image.shape[:2]
rotation_matrix = cv2.getRotationMatrix2D((num_cols/2, num_rows/2), 30, 1)
test_image = cv2.warpAffine(test_image, rotation_matrix, (num_cols, num_rows))
test_gray = cv2.cvtColor(test_image, cv2.COLOR_RGB2GRAY)

Plotting the training and test images for visualizing.

fx, plots = plt.subplots(1, 2, figsize=(20,10))
plots[0].set_title("Training Image")
plots[0].imshow(training_image)
plots[1].set_title("Testing Image")
plots[1].imshow(test_image)


Implementing FAST feature extraction and the BRIEF feature descriptor using OpenCV’s built-in classes (note that BriefDescriptorExtractor lives in xfeatures2d, which requires the opencv-contrib package). We also draw the keypoints both with and without their size.

fast = cv2.FastFeatureDetector_create() 
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()
train_keypoints = fast.detect(training_gray, None)
test_keypoints = fast.detect(test_gray, None)
train_keypoints, train_descriptor = brief.compute(training_gray, train_keypoints)
test_keypoints, test_descriptor = brief.compute(test_gray, test_keypoints)
keypoints_without_size = np.copy(training_image)
keypoints_with_size = np.copy(training_image)
cv2.drawKeypoints(training_image, train_keypoints, keypoints_without_size, color = (0, 255, 0))
cv2.drawKeypoints(training_image, train_keypoints, keypoints_with_size, flags = cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

Visualization and plotting of the key points with and without size.

fx, plots = plt.subplots(1, 2, figsize=(20,10))
plots[0].set_title("Train keypoints With Size")
plots[0].imshow(keypoints_with_size, cmap='gray')
plots[1].set_title("Train keypoints Without Size")
plots[1].imshow(keypoints_without_size, cmap='gray')

Printing the number of keypoints detected in each image (training and query).

print("Number of Keypoints Detected In The Training Image: ", len(train_keypoints))
print("Number of Keypoints Detected In The Query Image: ", len(test_keypoints))
Number of Keypoints Detected In The Training Image: 13952 
Number of Keypoints Detected In The Query Image: 485


Matching between the training image and the scale and rotation invariance added image.

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = True)
matches = bf.match(train_descriptor, test_descriptor)
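For intuition, cv2.NORM_HAMMING simply counts the bits in which two binary descriptors differ. A quick NumPy equivalent (illustrative only, not a replacement for BFMatcher; the sample byte values are made up):

```python
import numpy as np

def hamming(desc_a, desc_b):
    # BRIEF descriptors are uint8 arrays; XOR exposes differing bits, unpackbits lets us count them
    return int(np.unpackbits(np.bitwise_xor(desc_a, desc_b)).sum())

a = np.array([0b10110010, 0b00001111], dtype=np.uint8)
b = np.array([0b10110011, 0b00001110], dtype=np.uint8)
print(hamming(a, b))   # 2: the descriptors differ in exactly two bits
print(hamming(a, a))   # 0: identical descriptors
```

This bit-counting distance is why binary descriptors like BRIEF match so quickly compared to floating-point descriptors like SIFT’s.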

Sorting the matched keypoints by distance and drawing the matches.

matches = sorted(matches, key = lambda x : x.distance)
result = cv2.drawMatches(training_image, train_keypoints, test_gray, test_keypoints, matches, None, flags = 2)

Displaying the best matching feature points, and printing the total number of matches between the training and query images.

plt.rcParams['figure.figsize'] = [14.0, 7.0]
plt.title('Best Matching Points')
plt.imshow(result)
print("\nNumber of Matching Keypoints Between The Training and Query Images: ", len(matches))

Number of Matching Keypoints Between The Training and Query Images: 239
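The introduction mentioned separating outliers from inliers. Since the matches are already sorted by distance, one simple heuristic (my own illustrative cut-off, not part of the article’s code; robust pipelines would typically use RANSAC on the keypoint geometry instead) is to keep only matches whose Hamming distance falls well below the worst one:

```python
# Stand-in values for match.distance of the sorted matches above
distances = [8, 10, 12, 15, 18, 40, 55, 61]

def keep_inliers(distances, ratio=0.5):
    """Keep matches whose distance is below ratio * the worst distance."""
    cutoff = ratio * max(distances)
    return [d for d in distances if d < cutoff]

print(keep_inliers(distances))   # [8, 10, 12, 15, 18]: the tight low-distance cluster
```

The filtered list would then feed the tracking or pose-estimation stage described at the start of the article.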


Some of the advantages of the BRIEF descriptor, when compared to other techniques, are:

  • It is faster than SIFT to compute and match, with a comparable recognition rate in many cases.
  • It relies only on simple pairwise intensity comparisons within the patch, which keeps computation and memory low.


Some of the disadvantages of the FAST corner detector are:

  • It can fail when the pixels around the keypoint p have strongly contrasting or noisy intensities
  • It is not invariant to scale changes

Thus, to handle scale changes, scale-invariant detectors such as SIFT were introduced.


In this article, we discussed the pros and cons of the BRIEF descriptor, its architecture, and the concept and workflow behind it. We also explored and implemented the FAST corner detection technique, on top of which the BRIEF descriptor was applied.

In future articles, I’ll explain other feature extraction and tracking techniques in detail. Stay tuned.

The source code for the BRIEF concept implemented here can be found here.


Thank you for reading. Hope the article is helpful to the readers.


Some of the research papers I used for understanding the core concepts of BRIEF descriptors are:

[1]. BRIEF: Binary Robust Independent Elementary Features

[2]. The Color-BRIEF Feature Descriptor

[3]. Fast Feature Detector
