
[OPENCV] Feature detectors and descriptors

For a simple example of how to set up and use them, see [OPENCV] OpenCV+Eclipse|Qt|VS2010

For many computer vision tasks, the first step is usually extracting features from images. This consists of two parts: feature detection and feature description. By feature detection we mean finding the pixels (or regions, or objects) of interest in an image, which are represented by cv::KeyPoint. They typically correspond to the corners of objects in the image. OpenCV implements many popular feature point detection methods, such as SIFT, SURF, ORB, GoodFeaturesToTrack, and so on.

After these feature points are located, the next step is to describe them. An intuitive description of a feature point could be its intensity/color value. But a single intensity/color value does not provide enough discriminative information for the feature point to be uniquely matched against feature points in another image. Usually we use the information around a feature point to represent it, i.e. we use information from many pixels to represent one feature point. Hubel & Wiesel's research on visual information processing showed that orientation is among the factors the visual system is most sensitive to, which is why most feature techniques describe feature points in terms of orientation. For example, the SIFT descriptor uses a normalized orientation histogram to describe a feature point; see the figure below. Since this post is not about how the descriptors are designed, interested readers should read the original papers.

[Figure: SIFT descriptor — gradient orientations around a keypoint binned into a normalized orientation histogram]

To provide a consistent programming interface, OpenCV designs the feature detectors and descriptor extractors following OO principles. Below is the class hierarchy of most detectors and descriptors in version 2.4.1. FeatureDetector and DescriptorExtractor are the base classes for detectors and descriptor extractors, respectively. Feature2D simplifies detection and extraction by deriving from both FeatureDetector and DescriptorExtractor. But Feature2D is still an abstract class, because it does not implement the virtual functions detectImpl() and computeImpl().

[Figure: class hierarchy of OpenCV 2.4.1 feature detectors and descriptor extractors]

Classes SURF, SIFT, BRISK and ORB (among others) are concrete classes that act as both detectors and descriptor extractors. To use them explicitly in one role or the other, aliases are defined. For example,

typedef SURF SurfFeatureDetector;
typedef SURF SurfDescriptorExtractor;

typedef SIFT SiftFeatureDetector;
typedef SIFT SiftDescriptorExtractor;

typedef ORB OrbFeatureDetector;
typedef ORB OrbDescriptorExtractor;

From the perspective of functionality, there is no distinction between SurfFeatureDetector and SurfDescriptorExtractor: they are exactly the same class under different names. The same holds for SiftFeatureDetector and SiftDescriptorExtractor, and the rest.

From this class hierarchy diagram, it can also be seen that FastFeatureDetector, GFTTDetector and DenseFeatureDetector are feature point detectors only. After obtaining the feature points (a std::vector<cv::KeyPoint>), we can use SURF, SIFT or any other descriptor extractor to compute descriptions for them; std::vector<cv::KeyPoint> is the data structure that carries the results from the detection stage to the description stage.
