A common similarity measure is the normalized cross-correlation; following the cited Wikipedia article, one form is:

    NCC = (1/n) * Σ [ (f(x, y) − mean(f)) * (t(x, y) − mean(t)) ] / (σf * σt)

Where n is the number of pixels in t(x, y) and f(x, y), mean(f) and mean(t) are the mean intensities, and σf and σt are the standard deviations. [Wiki]
Though this method is robust against linear illumination changes, the algorithm will fail when the object is partially visible or the object is mixed with other objects. Moreover, this algorithm is computationally expensive, since it needs to compute the correlation between the template image and the search image at every position.
Feature based approach: Several methods of feature based template matching are used in the image processing domain. In edge based object recognition, the object's edges serve as the features for matching; in the Generalized Hough Transform, an object's geometric features are used for matching.
In this article, we implement an algorithm that uses an object’s edge information for recognizing the object in a search image. This
implementation uses the Open-Source Computer Vision library as a platform.
Compiling the example code
We are using OpenCV 2.0 and Visual Studio 2008 to develop this code. To compile the example code, we need to install OpenCV.
OpenCV can be downloaded free from here. OpenCV (Open Source Computer Vision) is a library of programming functions for real
time computer vision. Download OpenCV and install it in your system. Installation information can be read from here.
We need to configure our Visual Studio environment. This information can be read from here.
The algorithm
Here, we are explaining an edge based template matching technique. Edges can be defined as points in a digital image at which the image brightness changes sharply or has discontinuities. Technically, edge detection is a discrete differentiation operation, computing an approximation of the gradient of the image intensity function.
There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing
based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative
expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a
computed estimate of the local orientation of the edge, usually the gradient direction. Here, we are using such a method, known as the Sobel operator. The operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction.
We are using these gradients, i.e., the derivatives in the X and Y directions, for matching.
This algorithm involves two steps. First, we need to create an edge based model of the template image, and then we use this model
to search in the search image.
Creating an edge based template model
We first create a data set or template model from the edges of the template image that will be used for finding the pose of that
object in the search image.
Here we are using a variation of Canny’s edge detection method to find the edges. You can read more on Canny’s edge detection
here. For edge extraction, Canny uses the following steps:
Step 1: Find the intensity gradient of the image
Apply the Sobel filter to the template image, which returns the gradients in the X (Gx) and Y (Gy) directions. From these gradients, we compute the edge magnitude and direction using the following formulas:

    Magnitude = sqrt(Gx^2 + Gy^2)
    Direction = arctan(Gy / Gx)
We are using an OpenCV function to find these values.