I’ve been working on an OpenMV project that uses find_blobs() to detect IR spots inside a camera view.
Right now, the built-in roi parameter only accepts a rectangular area (x, y, w, h).
However, in my case the valid detection area is not a rectangle — it’s a trapezoid or arbitrary quadrilateral, defined by points like:
(x1, y1), (x2, y2), (x3, y3), (x4, y4)
What I’d like to achieve is:
Run find_blobs() only inside this four-point ROI, not just its rectangular bounding box.
Do it as fast as possible — ideally without creating an extra binary mask or looping through all pixels manually.
Has anyone implemented something like:
A way to apply a polygon mask before calling find_blobs()?
Or a trick to filter out blobs outside the ROI polygon efficiently (maybe via centroid test)?
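For reference, this is roughly the centroid test I have in mind, as an untested sketch (the quad corners and the (220, 255) grayscale threshold are just placeholders):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

QUAD = [(20, 5), (140, 5), (155, 115), (5, 115)] # placeholder corners
IR_THRESHOLD = [(220, 255)] # placeholder threshold for bright IR spots

def point_in_poly(x, y, poly):
    # Ray casting: toggle "inside" each time a horizontal ray from (x, y) crosses an edge.
    inside = False
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        if (y0 > y) != (y1 > y):
            if x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
                inside = not inside
    return inside

while(True):
    clock.tick()
    img = sensor.snapshot()
    # Search the whole frame (or the quad's bounding rect via roi=), then keep
    # only blobs whose centroid lies inside the quad.
    for b in img.find_blobs(IR_THRESHOLD, pixels_threshold=2):
        if point_in_poly(b.cx(), b.cy(), QUAD):
            img.draw_rectangle(b.rect())
            img.draw_cross(b.cx(), b.cy())
    print(clock.fps())

The per-blob test is cheap, but find_blobs() still scans the whole rectangle, which is why I'm hoping for something faster.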
Hi, are there any areas outside the shape you want to check that would match your detection threshold? If not, then just use a rectangular ROI. If so, then you’ll need to break find_blobs() up into several smaller rects that fit the area you are working in (see the sketch below).
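For example, if the top and bottom edges of the quad are horizontal, you can slice it into a few horizontal strips, work out each strip's left/right edge, and call find_blobs() once per strip ROI. Rough, untested sketch (corner points, strip count and threshold are all placeholders):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

LT, RT, RB, LB = (20, 5), (140, 5), (155, 115), (5, 115) # placeholder corners
STRIPS = 6
IR_THRESHOLD = [(220, 255)] # placeholder threshold

def lerp(a, b, t):
    return a + (b - a) * t

while(True):
    clock.tick()
    img = sensor.snapshot()
    blobs = []
    for i in range(STRIPS):
        t0 = i / STRIPS
        t1 = (i + 1) / STRIPS
        y0 = int(lerp(LT[1], LB[1], t0))
        y1 = int(lerp(LT[1], LB[1], t1))
        # Conservative strip rect: use the narrower of the quad's width at the
        # top and bottom of this strip so the rect stays inside the quad.
        xl = int(max(lerp(LT[0], LB[0], t0), lerp(LT[0], LB[0], t1)))
        xr = int(min(lerp(RT[0], RB[0], t0), lerp(RT[0], RB[0], t1)))
        roi = (xl, y0, xr - xl, y1 - y0)
        img.draw_rectangle(roi)
        blobs += img.find_blobs(IR_THRESHOLD, roi=roi, pixels_threshold=2)
    # Note: a blob straddling two strips shows up once per strip, so you may
    # want to merge/deduplicate the results afterwards.
    print(len(blobs), clock.fps())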
A way to apply a polygon mask before calling find_blobs()?
So, using the image binary-op methods you can apply a blanking mask that zeroes out everything outside your quad on a copy of the frame buffer, and then run find_blobs() on that.
Make a mask image, draw the quad you want to keep into it in white, then AND it with a copy of the frame buffer using b_and(). This produces an image where pixels outside the quad are black (they fail) and pixels inside are unchanged (they pass). Then run find_blobs() on that.
import sensor, image, time
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False)
clock = time.clock()
ENABLE_MASKING = True
QUAD_POINTS = [(20, 5), (140, 5), (155, 115), (5, 115)] # leftTop, rightTop, rightBottom, leftBottom
SEED_POINT = (80, 60)
while(True):
    clock.tick()
    img = sensor.snapshot()
    img_display = img

    if ENABLE_MASKING:
        mask_img = img.copy()
        mask_img.clear() # Make it all black
        LINE_COLOR = 128
        LINE_THICKNESS = 1
        # Outline the quad, then flood-fill its interior with white
        mask_img.draw_line(QUAD_POINTS[0][0], QUAD_POINTS[0][1], QUAD_POINTS[1][0], QUAD_POINTS[1][1], color=LINE_COLOR, thickness=LINE_THICKNESS)
        mask_img.draw_line(QUAD_POINTS[1][0], QUAD_POINTS[1][1], QUAD_POINTS[2][0], QUAD_POINTS[2][1], color=LINE_COLOR, thickness=LINE_THICKNESS)
        mask_img.draw_line(QUAD_POINTS[2][0], QUAD_POINTS[2][1], QUAD_POINTS[3][0], QUAD_POINTS[3][1], color=LINE_COLOR, thickness=LINE_THICKNESS)
        mask_img.draw_line(QUAD_POINTS[3][0], QUAD_POINTS[3][1], QUAD_POINTS[0][0], QUAD_POINTS[0][1], color=LINE_COLOR, thickness=LINE_THICKNESS)
        mask_img.flood_fill(SEED_POINT[0], SEED_POINT[1], color=255) # fill the quad area with white
        img_masked = img.b_and(mask_img) # zero every pixel outside the quad (b_and() modifies img in place and returns it)
        img_display = img_masked
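The snippet stops right after the masking step; the blob search itself would then run on img_display at the end of the loop, along the lines of the following (the (220, 255) grayscale threshold is just a guess for bright IR spots):

    for b in img_display.find_blobs([(220, 255)], pixels_threshold=2):
        img_display.draw_rectangle(b.rect())
        img_display.draw_cross(b.cx(), b.cy())
    print(clock.fps())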
Hi,
Thanks for the helpful suggestions and guidance!
I’ve got a working solution using a mask with b_and(), and I’ve shared the code above for others to reference.