Area Measurement

Hello. I am new to this OpenMV camera. I am currently working on a project for simple area measurement (of rectangular shapes, for example) using the OpenMV camera. Here is what I’m trying to do:

1. Place a point in each corner and detect it using the OpenMV Cam.
2. Connect these points to form a rectangular/square shape and calculate the distances between them, i.e. the length and width of the shape.
3. Compute the area from the calculated length and width.

The problem is I am having a hard time doing this. So far, I can detect the points. Can someone help me with this? It would be very much appreciated. Thank you in advance.

How big is the object? Can it be seen in the field of view? There’s a find_rects() method which can find rectangles.

Anyway, if you have the points detected, then what’s holding you up with the math? What method are you using to detect the points?

What I am trying to do is like this example:

1. I have 4 colored dots. The distance between these dots is unknown.
2. After detecting the dots, I want to connect them with lines, forming a shape (say a rectangle), while at the same time computing the distances between the dots. (Is this possible/doable?)
3. Once the distances between the dots are known, I can compute the area.

Btw, sir, I am starting with a smaller scope first (4 colored dots on a wall, for example). If I get that working, can I apply it to measure, say, a small lot area from a top view?

I understand what you are trying to do. I’m just asking what the challenge is.

If you are able to detect the dots right now, can you post the code you are using to do that? If so, are you struggling with the math/Python code to compute the area from the 4 detected dots?

Yes, sir. I’m having trouble with how to compute the distance between these dots using Python code and the OpenMV Cam. I’m really new to this.

Btw, sir, I just used color-detection code from Google to detect the dots.

Okay,

So, get the dots:

Use .cx() and .cy() for the center x/y of each dot. One note: the area formula below does need the corners listed in order around the shape (clockwise or counter-clockwise), though it doesn’t matter which corner you start from.

Once you have these 8 values (4 x/y pairs), follow this guide to get the area:

I understand you might basically be asking me to write all the code in Python for you. However, that is not the purpose of the forum. I am here to offer guidance, not to write your whole program. For the math you should import the Python math library in your script.

If you google “python general area of polygon” for example you would have found this:

```python
def PolygonArea(corners):
    # Shoelace formula: signed sum of cross products around the polygon.
    n = len(corners)  # number of corners
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += corners[i][0] * corners[j][1]
        area -= corners[j][0] * corners[i][1]
    area = abs(area) / 2.0
    return area

# example
corners = [(2.0, 1.0), (4.0, 5.0), (7.0, 8.0)]
print(PolygonArea(corners))  # 3.0
```
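One caveat worth spelling out (my addition, not part of the snippet above): the shoelace formula assumes the corners are listed in order around the polygon. If your blobs come back in arbitrary order, a common trick is to sort them by angle around their centroid. A sketch, with `order_corners` as an illustrative helper name:

```python
import math

def order_corners(corners):
    # Sort points counter-clockwise by angle around their centroid so the
    # shoelace formula walks the polygon edge by edge instead of criss-crossing.
    cx = sum(p[0] for p in corners) / len(corners)
    cy = sum(p[1] for p in corners) / len(corners)
    return sorted(corners, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

# A rectangle given in scrambled order comes back as a proper loop:
print(order_corners([(0, 0), (4, 3), (4, 0), (0, 3)]))
# [(0, 0), (4, 0), (4, 3), (0, 3)]
```

This works for convex layouts like four dots on a wall; it is not a general solution for self-intersecting point sets.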

So, just set the corners list of tuples equal to all the .cx() and .cy() values.

If you want to be clever… the easiest way of doing that is:

```python
corners = [(blob.cx(), blob.cy()) for blob in img.find_blobs(ARGUMENTS)]
```
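Since the question also asked about the distances between the dots: once you have the (cx, cy) tuples, plain Euclidean distance gives the side lengths in pixels. A minimal sketch (`distance` is just an illustrative helper name):

```python
import math

def distance(p0, p1):
    # Straight-line (Euclidean) distance between two (x, y) points, in pixels.
    return math.sqrt((p1[0] - p0[0]) ** 2 + (p1[1] - p0[1]) ** 2)

print(distance((0, 0), (3, 4)))  # 5.0
```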

This is really a big help, sir. Thank you so much.
I’ll follow all the guidance you gave sir. Thanks.

Hello. Good day, Sir.
I tried to follow the guide you gave me, Sir.
Right now, I’m not sure about the values I get in the serial terminal; they range from 14161.0 to 14932.0.
Also, is it possible to draw a line from one detected point to another, like in the image I attached? The first photo is the original picture; the second one is edited (I put in lines connecting the centers of the detected points, for illustration).

So far, this is the code, sir:

```python
import sensor, image, time, math

# Thresholds (L Min, L Max, A Min, A Max, B Min, B Max).
thresholds = [(38, 47, -20, -9, -28, -12),
              (15, 27, -19, 6, -23, 6),
              (38, 61, -27, -8, -28, -1),
              (18, 31, -20, 6, -31, 1),
              (16, 32, -16, 7, -23, 0),
              (24, 34, -13, 7, -26, -7),
              (9, 50, -12, 14, -26, -1),
              (17, 37, -18, 10, -26, -3),
              (37, 62, -14, 16, -59, -30),
              (35, 48, -2, 26, -64, -41),
              (29, 52, -12, 24, -62, -28)]  # threshold values of the points

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)
sensor.set_auto_gain(False)      # must be turned off for color tracking
sensor.set_auto_whitebal(False)  # must be turned off for color tracking
clock = time.clock()

def polygon_area(corners):
    # Shoelace formula over the corner list.
    n = len(corners)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += corners[i][0] * corners[j][1]
        area -= corners[j][0] * corners[i][1]
    area = abs(area) / 2.0
    return area

while True:
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs(thresholds, pixels_threshold=200, area_threshold=200):
        if blob.elongation() > 0.5:
            img.draw_edges(blob.min_corners(), color=(255, 0, 0))
            img.draw_line(blob.major_axis_line(), color=(0, 255, 0))
            img.draw_line(blob.minor_axis_line(), color=(0, 0, 255))
        img.draw_rectangle(blob.rect(), thickness=5)
        img.draw_cross(blob.cx(), blob.cy())
    corners = [(blob.cx(), blob.cy()) for blob in img.find_blobs(thresholds, pixels_threshold=200, area_threshold=200)]
    print(polygon_area(corners))
```

Try this (might have compile issues):

```python
import sensor, image, time, math

# Thresholds (L Min, L Max, A Min, A Max, B Min, B Max).
thresholds = [(38, 47, -20, -9, -28, -12),
              (15, 27, -19, 6, -23, 6),
              (38, 61, -27, -8, -28, -1),
              (18, 31, -20, 6, -31, 1),
              (16, 32, -16, 7, -23, 0),
              (24, 34, -13, 7, -26, -7),
              (9, 50, -12, 14, -26, -1),
              (17, 37, -18, 10, -26, -3),
              (37, 62, -14, 16, -59, -30),
              (35, 48, -2, 26, -64, -41),
              (29, 52, -12, 24, -62, -28)]  # threshold values of the points

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)
sensor.set_auto_gain(False)      # must be turned off for color tracking
sensor.set_auto_whitebal(False)  # must be turned off for color tracking
clock = time.clock()

def polygon_area(corners):
    # Shoelace formula over the corner list.
    n = len(corners)
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += corners[i][0] * corners[j][1]
        area -= corners[j][0] * corners[i][1]
    area = abs(area) / 2.0
    return area

while True:
    clock.tick()
    img = sensor.snapshot()
    # Find the blobs once per frame and reuse the list.
    blobs = img.find_blobs(thresholds, pixels_threshold=200, area_threshold=200, merge=True)
    for blob in blobs:
        img.draw_rectangle(blob.rect(), thickness=5)
        img.draw_cross(blob.cx(), blob.cy())
    corners = [(blob.cx(), blob.cy()) for blob in blobs]
    size = len(corners)
    for i in range(size):
        # Connect each corner to the next, wrapping back to the first.
        img.draw_line(corners[i][0], corners[i][1],
                      corners[(i + 1) % size][0], corners[(i + 1) % size][1],
                      thickness=5)
    print(polygon_area(corners))
```

Having that many thresholds will run very slowly; try fewer thresholds.

Regarding the area, that seems about right. You can check it by making the rectangle nearly straight-on in the field of view, then selecting that region in OpenMV IDE and seeing if the numbers are similar.

Okay, Sir. Thank you. I’ll try to reduce the number of thresholds.
For the area, Sir, is it possible to get values in centimeters, inches, or meters? I’m really not sure about the values I get in the serial monitor.

It’s in pixels, so you need to rescale the value. Look up how to scale a number from one range to another in Python. Note that it’s an area, so the units are pixels².
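Because area goes as the square of the linear scale, a centimeters-per-pixel calibration has to be squared before it is applied. A minimal sketch (the 0.05 cm/pixel figure is just an assumed example, not a measured value):

```python
def pixel_area_to_cm2(pixel_area, cm_per_pixel):
    # An area scales by the square of the linear scale factor:
    # if one pixel spans cm_per_pixel centimeters, one pixel of area
    # spans cm_per_pixel**2 square centimeters.
    return pixel_area * (cm_per_pixel ** 2)

# e.g. if one pixel spans 0.05 cm at your working distance:
print(pixel_area_to_cm2(14500.0, 0.05))  # 36.25
```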

Also, the size of each pixel in space is determined by the distance from the camera. So whatever scaling method you figure out will only be valid when the camera views the object from the same distance at which you worked out the scaling function.

To determine the scaling constant, manually calculate the area of the object in the picture by hand using a ruler, then compare that to the area you see in pixels. That gives you a ratio to multiply by to convert pixel area to real-world area.
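The calibration described above boils down to one ratio from a single reference measurement. A sketch with placeholder numbers (the 600 cm² card and the 14500-pixel reading are assumed examples, not measurements from this thread):

```python
def calibrate(known_area_cm2, measured_area_px):
    # One reference object measured by hand gives the cm^2-per-pixel^2 ratio.
    # Only valid at the same camera-to-object distance as the calibration shot.
    return known_area_cm2 / measured_area_px

ratio = calibrate(600.0, 14500.0)  # e.g. a 20 cm x 30 cm card measured with a ruler
area_cm2 = ratio * 14800.0         # a new pixel-area reading converted to cm^2
print(area_cm2)
```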