Area_Threshold vs Pixel_Threshold

Good Day!

From the find_blobs definition in image — machine vision — MicroPython 1.13 documentation, we have:

image.find_blobs(thresholds[, invert=False[, roi[, x_stride=2[, y_stride=1[, area_threshold=10[, pixels_threshold=10[, merge=False[, margin=0[, threshold_cb=None[, merge_cb=None[, x_hist_bins_max=0[, y_hist_bins_max=0]]]]]]]]]]]])

. . .

If a blob’s bounding box area is less than area_threshold it is filtered out.

If a blob’s pixel count is less than pixels_threshold it is filtered out.
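To see why these two numbers can differ a lot, here is a hypothetical illustration (not the OpenMV implementation): for a thin diagonal line of matching pixels, the bounding-box area is far larger than the pixel count, so the two thresholds filter different things.

```python
# Hypothetical sketch: compare bounding-box area vs. pixel count for a
# thin diagonal blob of 10 set pixels.
coords = [(i, i) for i in range(10)]  # 10 matching pixels on a diagonal

xs = [x for x, _ in coords]
ys = [y for _, y in coords]
w = max(xs) - min(xs) + 1
h = max(ys) - min(ys) + 1

area = w * h          # bounding-box area -> compared against area_threshold
pixels = len(coords)  # pixel count       -> compared against pixels_threshold

print(area)    # 100
print(pixels)  # 10
```

With area_threshold=10 this blob survives, but with pixels_threshold=100 it would be filtered out.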

When should area_threshold be used vs pixels_threshold, vs both? Which is faster and/or optimal?

Your sample programs use both, but our team cannot figure out why:

    for blob in img.find_blobs([threshold], pixels_threshold=100, area_threshold=100, merge=True, margin=10):

Thanks,

Using both enforces blob density, i.e. the blob is large and mostly full of matching pixels. You don’t strictly need to check both, however.
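The density point above can be sketched with plain Python (a hypothetical model, using the filtering rule from the docs: a blob is dropped if either value is below its threshold). A sparse blob can have a large bounding box yet few matching pixels, so it passes the area check but fails the pixel check:

```python
# Hypothetical sketch: a blob modeled as (bounding_box_area, pixel_count).
# Per the docs, a blob is filtered out if area < area_threshold OR
# pixels < pixels_threshold; keeping both checks enforces density.
def kept(blob, area_threshold=100, pixels_threshold=100):
    area, pixels = blob
    return area >= area_threshold and pixels >= pixels_threshold

dense_blob = (144, 120)   # 12x12 box, 120 matching pixels
sparse_blob = (400, 25)   # 20x20 box, only 25 matching pixels (speckle)

print(kept(dense_blob))   # True
print(kept(sparse_blob))  # False
```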

Thanks, that makes sense!

So when would you only use one and when do you recommend using both Area and Pixel in the search parameters?

I recommend using both to filter out noise. It’s quite hard to set up color bounds that target only the pixels you want, so these thresholds are essential for removing noise blobs. They also let you relax the color bounds.

Thanks! Much appreciated.