It is a summer Saturday morning in the Twin Cities of Minneapolis and St. Paul in Minnesota. The temperature will go up to 87F. Around 10:00 AM, before it gets too hot, my wife and I will go out for a leisurely 10-mile walk. Last night I had a hard time sleeping, so hopefully the mild exercise will help this evening.
In this post we continue to watch the Pluralsight course Building Image Processing Applications Using scikit-image by Janani Ravi. She uses a Jupyter notebook; in this post we will use VSCode with GitHub Copilot. I should disclose that I am a Microsoft employee and have been using VSCode for a few years. When possible I like to follow the KISS principle, so I prefer to use the minimum number of IDEs that support all the programming languages I wish to use. Why complicate life by using as many IDEs as one can find and never becoming proficient in any of them?
Our task will be to detect corners in a sample image. The sample image represents a checkerboard.
According to Wikipedia, Corner Detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image. Corner detection is frequently used in motion detection, image registration, video tracking, image mosaicing, panorama stitching, 3D reconstruction and object recognition. Corner detection overlaps with the topic of interest point detection.
For starters we need to get to a folder of interest (the location on your computer may vary) and create and save an empty Python script, which we will refer to as CornerDetection.py. As we edit the script, we will save it and then run it. This is illustrated by the following screen capture:
# **** folder of interest ****
cd C:\Documents\_Image Processing\scikit-image-building-image-processing-applications\02\demos

# **** open file of interest using VSCode ****
(base) C:\Documents\_Image Processing\scikit-image-building-image-processing-applications\02\demos>code CornerDetection.py

# **** execute python script of interest ****
(base) C:\Documents\_Image Processing\scikit-image-building-image-processing-applications\02\demos>python CornerDetection.py
Once we have an empty Python script we can start by editing it as needed.
# **** imports ****
from matplotlib import pyplot as plt
from skimage import data
from skimage.feature import corner_harris, corner_subpix, corner_peaks
from skimage.transform import warp, AffineTransform
from skimage.draw import ellipse, circle_perimeter    # circle_perimeter was: circle
We always start by including all the imports we expect to use. In this case the circle_perimeter function replaces circle. This is a difference between the Jupyter notebook used in the course and our code, due to the slightly different versions of the libraries.
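If you are unsure which helpers your installation exposes, a quick sanity check of the installed scikit-image version can save some guessing. This is my own addition, not part of the course:

# **** sanity check (my addition, not from the course): print the installed
#      scikit-image version and confirm which drawing helpers exist ****
import skimage
import skimage.draw

print(f'scikit-image version: {skimage.__version__}')
print(f'circle_perimeter available: {hasattr(skimage.draw, "circle_perimeter")}')
print(f'circle available: {hasattr(skimage.draw, "circle")}')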
# **** checkerboard ****
checkerboard = data.checkerboard()    # checkerboard is a small 2D grayscale image

# **** display checkerboard image ****
plt.figure(figsize=(6, 6))
plt.imshow(checkerboard, cmap='gray')
plt.title('Checkerboard')
plt.show()
We load the checkerboard image from the data module and display it.
Since looking for corners in this image would be too simple, we will transform the checkerboard by scaling, rotating, shearing, and translating it.
# **** transform (scale, rotate, shear, and translate image) ****
transform = AffineTransform(scale=(0.9, 0.8),
                            rotation=1,
                            shear=0.6,
                            translation=(150, -80))

# **** generate warped_checkerboard image ****
warped_checkerboard = warp(checkerboard,
                           transform,
                           output_shape=(320, 320))

# **** display warped_checkerboard image ****
plt.figure(figsize=(6, 6))
plt.imshow(warped_checkerboard, cmap='gray')
plt.title('Warped Checkerboard')
plt.show()
The resulting image is then displayed.
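As a quick side check (my own addition, not something the course does), we can undo the distortion by warping back with the inverse transform. warp treats the transform it is given as a mapping from output to input coordinates, so passing transform.inverse reverses the previous call:

# **** optional side check (not part of the course): undo the warp by
#      passing the inverse transform; anything that fell outside the
#      320 x 320 frame cannot be recovered ****
restored_checkerboard = warp(warped_checkerboard,
                             transform.inverse,
                             output_shape=checkerboard.shape)

plt.figure(figsize=(6, 6))
plt.imshow(restored_checkerboard, cmap='gray')
plt.title('Restored Checkerboard (inverse warp)')
plt.show()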
# **** detect corners in warped checkerboard image ****
corners = corner_harris(warped_checkerboard)

# **** display corners in warped checkerboard image ****
plt.figure(figsize=(6, 6))
plt.imshow(corners, cmap='gray')
plt.title('Corners in Warped Checkerboard')
plt.show()
We call corner_harris, which uses the Harris algorithm to detect corners in the warped_checkerboard image we just created. The resulting Harris response image is then displayed.
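To get a feel for what the Harris response represents, it can be instructive to compare it against other corner measures that ship with scikit-image. This is a small side experiment of mine, not part of the course; corner_shi_tomasi and corner_kitchen_rosenfeld are alternative detectors that accept the same image input:

# **** optional comparison (not part of the course): other corner
#      measures available in skimage.feature ****
from skimage.feature import corner_shi_tomasi, corner_kitchen_rosenfeld

responses = {
    'Harris': corner_harris(warped_checkerboard),
    'Shi-Tomasi': corner_shi_tomasi(warped_checkerboard),
    'Kitchen-Rosenfeld': corner_kitchen_rosenfeld(warped_checkerboard),
}

# **** display the three response images side by side ****
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (name, response) in zip(axes, responses.items()):
    ax.imshow(response, cmap='gray')
    ax.set_title(name)
    ax.axis('off')
plt.tight_layout()
plt.show()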
# **** find corners in the Harris response image -
#      the result is the coordinates of the corners ****
coords_peaks = corner_peaks(corners,
                            min_distance=10)   # min_distance is the minimum number of
                                               # pixels separating two corner peaks

# **** display coords_peaks.shape ****
print(f'coords_peaks.shape: {coords_peaks.shape}')

# **** statistical test to determine whether the corner is
#      classified as an intersection of two edges or a single peak ****
coords_subpix = corner_subpix(warped_checkerboard,
                              coords_peaks,
                              window_size=10)

# **** display coords_subpix ****
print(f'coords_subpix[0:11]: {coords_subpix[0:11]}')

# **** display coords_peaks ****
print(f'coords_peaks[0:11]: {coords_peaks[0:11]}')

# **** note that the corners are a little different
#      after we use the statistical estimation techniques ****

# **** plot the peaks and the sub-pixel estimates on the warped image ****
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(warped_checkerboard,
          interpolation='nearest',
          cmap='gray')
ax.plot(coords_peaks[:, 1], coords_peaks[:, 0], '.b', markersize=30)
ax.plot(coords_subpix[:, 1], coords_subpix[:, 0], '*r', markersize=10)

# **** display the image ****
plt.tight_layout()
plt.show()

# **** blue values are original corners
#      red values are the corners after statistical estimation ****
Finally, we check how the detected corners match the statistical estimates. We use corner_peaks to collect the coordinates of the corners in the Harris response image, and corner_subpix to refine them to sub-pixel accuracy.
The coordinates of the peaks and the sub-pixel estimates are then printed. Note how similar they are.
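To put a number on that similarity (my own addition, not part of the course), we can measure how far each sub-pixel estimate moved from its corresponding peak. corner_subpix returns NaN rows for corners it could not classify, so those are filtered out first:

# **** quantify the peak vs. sub-pixel difference (not part of the course) ****
import numpy as np

# keep only the corners corner_subpix was able to classify (non-NaN rows)
valid = ~np.isnan(coords_subpix).any(axis=1)
shifts = np.linalg.norm(coords_subpix[valid] - coords_peaks[valid], axis=1)

print(f'corners refined: {valid.sum()} of {len(coords_peaks)}')
print(f'mean shift (pixels): {shifts.mean():.3f}')
print(f'max shift (pixels): {shifts.max():.3f}')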
A figure is then created that overlays the detected corner peaks (blue dots) and the sub-pixel estimates (red stars) on the warped checkerboard.
Please take your time editing the arguments to better understand how they affect the corner detection results.
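As one possible starting point (my own sketch, not from the course), the loop below sweeps the Harris sensitivity factor k and the min_distance used by corner_peaks, and reports how many corners survive each combination:

# **** parameter sweep (my own experiment, not part of the course) ****
for k in (0.05, 0.15, 0.25):
    for min_distance in (5, 10, 20):
        response = corner_harris(warped_checkerboard, k=k)
        peaks = corner_peaks(response, min_distance=min_distance)
        print(f'k={k:.2f}  min_distance={min_distance:2d}  corners: {len(peaks)}')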
If interested in the contents of this post, I invite you to watch the course. If you wish to experiment with the code, you can find it in my GitHub repository CornerDetection.
Remember that one of the best ways to learn is to read, experiment, and repeat as needed.
Enjoy;
John