How can I find the minimum area rectangle in a binary image?

I have an EmguCV project that finds the minimum area rectangle for each contour, like below.

VB.NET with EmguCV code:
imgRects.Draw(CvInvoke.MinAreaRect(contour), New Bgr(Color.Red), 3)

How can I find the minimum area rectangle in a binary image, like the VB.NET code above?

Hi, please clarify how this relates to the OpenMV Cam?

We don’t have a min area rect method. However, if you care about rotation the find_blobs() method returns the rotation angle of the object. Otherwise what is your goal with the min area rect?

Thank you for your fast reply. In my previous project, I used Visual Studio with Emgu CV to measure the width and length of an object. In my last project, I used OpenMV with a non-distortion lens. Now I want to make an embedded machine that measures the width and length of an object and lights a display when a defect is found.

So I want to measure the minimum area rectangle. In my previous project, I used functions like the ones below.

CvInvoke.CvtColor(imgROI, imgHSV, ColorConversion.Bgr2Hsv)
CvInvoke.InRange

CvInvoke.FindContours(…

imgRects.Draw(CvInvoke.MinAreaRect(contour), New Bgr(Color.Red), 3)

I understand what you need to do. Um, so the rect we return has left/right/up/down parts. However, this isn't the min area rect… but the min area rect intersects these points.

Anyway, the easiest way to get this is to take the rotation angle from the find_blobs() function and rotate the rectangle corners by a 2D rotation matrix.

E.g.

For each corner point of the rect, treat the point as a column vector

[x
 y]

and multiply it by the 2D rotation matrix (Rotation matrix - Wikipedia), where theta is the rotation angle from find_blobs().
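As a rough illustration in plain Python (the function name and the choice of rotating about the blob centroid are my own for this sketch, not OpenMV API):

```python
import math

def rotate_rect_corners(corners, cx, cy, theta):
    # Rotate each (x, y) corner about the center (cx, cy) by theta radians
    # using the standard 2D rotation matrix:
    #   x' = cx + (x - cx) * cos(theta) - (y - cy) * sin(theta)
    #   y' = cy + (x - cx) * sin(theta) + (y - cy) * cos(theta)
    c, s = math.cos(theta), math.sin(theta)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for (x, y) in corners]

# Rotate the unit square's corners 90 degrees about its center.
rotated = rotate_rect_corners([(0, 0), (1, 0), (1, 1), (0, 1)],
                              0.5, 0.5, math.radians(90))
```

On the camera, cx/cy would come from blob.cx()/blob.cy() and theta from the blob's rotation angle.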

I understand you'd prefer working code over a description, but I'm not at home right now and I'm unable to write code for this. However, we can add this feature to the firmware. Please submit a GitHub issue for it.

I found a new update of the OpenMV IDE with new firmware. Did you add the minimum area rectangle in the latest firmware?

Not yet. I’ve finally completed enough tasks to get back to doing firmware updates however. So, the next firmware release should have this feature.

Um, note, I plan to just expose the four extreme corners of the object. The minimum rect will just then be parallel lines that pass through these corners.

In our lib a rect() is never rotated… So, to draw the minimum rect you'd have to call draw_line repeatedly, corner to corner. Anyway, do you actually need the minimum rect or just the extreme corners?

I want a bounding rotated rectangle that returns center, angle, width, height, point[0], point[1], point[2], point[3], and contour area, as in the attached picture. Thank you, and Happy New Year.

http://www.emgu.com/wiki/index.php/Minimum_Area_Rectangle_in_CSharp
https://docs.opencv.org/3.2.0/dd/d49/tutorial_py_contour_features.html

Yeah, I get what you want to do. But keep in mind you haven't presented a use case other than wanting to display this on screen, which is a rather weak need for it. We already have a method that gives you the rotation of an object.

Anyway, given what you want, I'll add the outer edge points of the object along with projected points that show the min bounding rect.

Note that I will not be adding the contour value. I made a decision a while back not to expose that explicitly because of the limited RAM on the system. Keeping an object's contour requires retaining a variable-length list of points, which takes quite a bit of RAM and is not suitable for an embedded system. I understand this prevents using a lot of the different features OpenCV has for object introspection, but that's the trade-off.

Anyway, I’m coding right now and fixing other bugs but will start on this feature on Wednesday. I should have it done before the end of the week and I’ll share a new binary with you for it.

Thanks,

Hi, okay, I did it. Attached is the new firmware. Here’s some sample code:

```python
# Single Color RGB565 Blob Tracking Example
#
# This example shows off single color RGB565 tracking using the OpenMV Cam.

import sensor, image, time, math

threshold_index = 0 # 0 for red, 1 for green, 2 for blue

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green/blue things. You may wish to tune them...
thresholds = [(30, 100, 15, 127, 15, 127), # generic_red_thresholds
              (30, 100, -64, -8, -32, 32), # generic_green_thresholds
              (0, 30, 0, 64, -128, 0)]     # generic_blue_thresholds

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change
# the camera resolution. "merge=True" merges all overlapping blobs in the image.

while(True):
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds[threshold_index]], pixels_threshold=200, area_threshold=200):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())

        img.draw_line(blob.corners()[0][0], blob.corners()[0][1], blob.corners()[1][0], blob.corners()[1][1], color=(0,255,0))
        img.draw_line(blob.corners()[1][0], blob.corners()[1][1], blob.corners()[2][0], blob.corners()[2][1], color=(0,255,0))
        img.draw_line(blob.corners()[2][0], blob.corners()[2][1], blob.corners()[3][0], blob.corners()[3][1], color=(0,255,0))
        img.draw_line(blob.corners()[3][0], blob.corners()[3][1], blob.corners()[0][0], blob.corners()[0][1], color=(0,255,0))

        img.draw_line(blob.min_corners()[0][0], blob.min_corners()[0][1], blob.min_corners()[1][0], blob.min_corners()[1][1], color=(0,0,255))
        img.draw_line(blob.min_corners()[1][0], blob.min_corners()[1][1], blob.min_corners()[2][0], blob.min_corners()[2][1], color=(0,0,255))
        img.draw_line(blob.min_corners()[2][0], blob.min_corners()[2][1], blob.min_corners()[3][0], blob.min_corners()[3][1], color=(0,0,255))
        img.draw_line(blob.min_corners()[3][0], blob.min_corners()[3][1], blob.min_corners()[0][0], blob.min_corners()[0][1], color=(0,0,255))

    print(clock.fps())
```

Drawing the lines is kind of a pain right now. I'll add two more draw methods to make this easier.
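Until those draw methods land, a small helper can do the corner-to-corner bookkeeping. This is just a sketch (rect_edges is a made-up name; it assumes corners is a list of four (x, y) tuples, like corners()/min_corners() return):

```python
def rect_edges(corners):
    # Pair each corner with the next one, wrapping the last back to the
    # first, producing four (x0, y0, x1, y1) line segments.
    n = len(corners)
    return [(corners[i][0], corners[i][1],
             corners[(i + 1) % n][0], corners[(i + 1) % n][1])
            for i in range(n)]

print(rect_edges([(0, 0), (1, 0), (1, 1), (0, 1)]))
# → [(0, 0, 1, 0), (1, 0, 1, 1), (1, 1, 0, 1), (0, 1, 0, 0)]

# On the camera you would then draw each segment, e.g.:
#   for (x0, y0, x1, y1) in rect_edges(blob.min_corners()):
#       img.draw_line(x0, y0, x1, y1, color=(0, 0, 255))
```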

So, the corners() value is the object's corners, always sorted left, top, right, bottom. min_corners() is then the min area rect. I was going to call it min_area_rect_corners(), but that was too verbose.

Note that since the min area rect is generated from just a few points, it's quite jumpy. Set your thresholds well.
firmware.zip (918 KB)

Thank you, friend, for blob.min_corners(). I will try it.

About my project: I have an egg placed on a green background. First, I will detect the green color in the Lab color space and invert the result, leaving the egg area. Then I want to find the ratio between the width and length of the egg area.

I'm not sure the egg will be aligned with the camera sensor, so I want to use the minimum area rectangle enclosing the egg area. After that I will send the width and length data through UART to an Arduino for processing.

Please advise: what OpenMV library functions do I need?

Hi, I recently added a major_axis_line() method to the firmware that will tell you the longest side length of the min area rect. So, that's basically what you need.

As for transmitting via the UART, see the UART example in our docs. You can just print the value.
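For the width/length ratio itself, one way (a sketch under assumptions: side_lengths is my own helper name, and the UART pins/baud rate shown in the comments should be checked against your board's docs) is to take the adjacent side lengths of the min-area rect corners:

```python
import math

def side_lengths(corners):
    # corners is four (x, y) points in order around the rectangle, as
    # blob.min_corners() returns; the distances between adjacent corners
    # give the two side lengths of the rotated rectangle.
    def dist(a, b):
        return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
    s0 = dist(corners[0], corners[1])
    s1 = dist(corners[1], corners[2])
    return max(s0, s1), min(s0, s1)  # (length, width)

print(side_lengths([(0, 0), (4, 0), (4, 2), (0, 2)]))  # → (4.0, 2.0)

# On the camera you could then send the values to the Arduino, e.g.:
#   from pyb import UART
#   uart = UART(3, 19200)  # pins and baud rate are assumptions
#   length, width = side_lengths(blob.min_corners())
#   uart.write("%0.1f,%0.1f\n" % (length, width))
```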

I’ll upload the new firmware tonight.

Hi, I'm still working on the min_area_rect() method. The method I used doesn't seem to work in all situations. It's very memory intensive to store the contour and then calculate the min area rect per frame from a full contour of the object. I'm trying to figure out a way to do this with just 4 points, but I haven't found anything really robust yet…

In the meantime I've upgraded the find_blobs() method to be measurably faster with better merging, and added perimeter support, floating-point centroids, a roundness calculation (eccentricity), and x/y histogram projections.

I will continue working on this until I find a solution. Since I added the perimeter code I know which pixels are on the perimeter of the object, so I could allocate a point list, which would work with the F7/H7's RAM. However, I want to get this working on the M4, which requires doing this in the least memory-intensive way possible.

EDIT: An idea… I think if I combine the 4 corners with the 4 diagonal corners I will get a much better result. I will see what I can do with a constrained list of points based on local optima of geometric identities.

Note: On the H7 I’m getting 80 FPS at 320x240 and 30 FPS on VGA grayscale. The code is fast.

Hi,

I would like to use this function with sensor.set_pixformat(sensor.GRAYSCALE).

Do you have a solution?

thank you

At the first draw_line call, I see this error message:

AttributeError: 'blob' object has no attribute 'corners'

Hi, you have to update the firmware on your camera using the firmware binary I posted above.

Yes, it works on grayscale.

The current algorithm is not as robust as it needs to be. I will post another version that should work better tonight.

Yes, it seems to be the latest firmware: 3.2.0.

No, you have to download the firmware above. I posted a binary file a few posts back; it's a pre-release firmware build.

Anyway, I’ll post something else again.

This code has the features you want:

```python
# Single Color RGB565 Blob Tracking Example
#
# This example shows off single color RGB565 tracking using the OpenMV Cam.

import sensor, image, time, math

threshold_index = 0 # 0 for red, 1 for green, 2 for blue

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green/blue things. You may wish to tune them...
thresholds = [(30, 100, 15, 127, 15, 127), # generic_red_thresholds
              (30, 100, -64, -8, -32, 32), # generic_green_thresholds
              (0, 30, 0, 64, -128, 0)]     # generic_blue_thresholds

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change
# the camera resolution. "merge=True" merges all overlapping blobs in the image.

while(True):
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds[threshold_index]], pixels_threshold=200, area_threshold=200, merge=True):
        # These values depend on the blob not being circular - otherwise they will be shaky.
        if blob.elongation() > 0.5:
            img.draw_edges(blob.min_corners(), color=(255,0,0))
            img.draw_line(blob.major_axis_line(), color=(0,255,0))
            img.draw_line(blob.minor_axis_line(), color=(0,0,255))
        # These values are stable all the time.
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        # Note - the blob rotation is unique to 0-180 only.
        img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
    print(clock.fps())
```

If you want the blob locking to be better, there's a parameter in my code that can be increased to improve the accuracy of the min area rect. Generally, unless the object is distinctive, the min area rect is jumpy. Right now the firmware bounds the rect with 20 points separated by 18 degrees each. It's not easily possible to make the number of points used to bound the object dynamic, so it's a parameter in the code.

Anyway, it works great for objects like rectangles and irregularly shaped things. It will be in the firmware from now on.
firmware.zip (919 KB)

Hi,

I have installed the firmware from this post, and when I run the example I notice that the FPS printed on the terminal is only about 15.
Is this normal? You mentioned getting about 80 FPS.
Also, I found that the minimum area rectangle is not that stable, but I can't figure out how to modify the parameters to make it more stable.
Would you mind explaining further how to modify those parameters?

Thanks

Hi, higher FPS is achieved if you lower the resolution and change the image type to one that is easier to process. If you work on a grayscale image things go faster because there's less work per pixel. Also, you have an M7, which is 2x slower than the new H7 coming out. Finally, keep in mind that streaming the image to the IDE takes CPU time; you can disable the frame buffer to see the real FPS of your app.

As for the min area rect, the method I use shoots 20 rays out from the center of the object and bounds the min area rect to the 20 boundary points they hit. If the object you are looking at is round, the rect will not be stable, since the 20 points can jiggle around. Notice that the elongation check in the code avoids drawing the min area rect if the object is not elongated. The method is more stable the more elongated and distinctive the object is.

Given that the method we use does not store a contour of the object, this is the best I can do. In the firmware I can arbitrarily raise the number of point samples, but I found 20 points to work best without causing the frame rate to cave.
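To make the trade-off concrete, here is a loose plain-Python sketch of the general idea (an illustrative approximation, not the actual firmware code): bound the object with a small, fixed number of candidate orientations instead of a full contour, and keep the orientation giving the smallest box.

```python
import math

def approx_min_area_rect(points, n_angles=20):
    # Test n_angles candidate orientations; for each, project the points
    # onto the rotated axes and measure the bounding-box area. A rect's
    # orientation repeats every 90 degrees, so only that range is swept.
    best = None
    for i in range(n_angles):
        t = (math.pi / 2) * i / n_angles
        c, s = math.cos(t), math.sin(t)
        us = [x * c + y * s for (x, y) in points]
        vs = [y * c - x * s for (x, y) in points]
        area = (max(us) - min(us)) * (max(vs) - min(vs))
        if best is None or area < best[0]:
            best = (area, t)
    return best  # (area, angle in radians)

# An axis-aligned 4x2 rectangle: the smallest box has area 8 at angle 0.
print(approx_min_area_rect([(0, 0), (4, 0), (4, 2), (0, 2)]))  # → (8.0, 0.0)
```

With only a handful of sampled directions, small per-frame changes in which boundary pixels survive thresholding shift the samples, which is exactly why the result jiggles on round objects.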

Also, keep in mind your thresholds and the effect they have on the min area rect. Unlike the normal bounding box, each and every pixel on the contour matters for the min area rect. If your color thresholds are too tight, the object's edges are most likely flickering in and out per frame, causing a lot of jiggle. Given that the OmniVision camera has a ton of shot noise, you're going to experience a lot of jiggling unless the object is elongated enough that some of the 20 points shot out from the center when bounding it are stable and in the same place.