Image differencing three separate parts of camera view

Hey! It’s me. Back at it again with a slightly better problem.

My cohort and I are looking into using the OpenMV’s image differencing feature to determine when someone moves into or out of specified spaces. While this is easy enough to do with the whole screen (or a fraction of it via windowing), I haven’t been able to find anything in the documentation that would allow the differencing and histogram-related operations to be applied to multiple different fractions of the screen at the same time. Ideally, we’d do all of our image differencing on just one of these cameras, with three defined rectangles to run the differencing algorithm on.

Is this possible currently?

Thanks in advance. :nerd:

I think you can use the mask argument to do what you want (difference parts of the image); otherwise, I don’t see an ROI option implemented for those functions.

You need to provide a mask image, which can be 1 bit per pixel. However, loading mask images from disk is not implemented yet; you can only create them in RAM. So, do this: load an image from disk, binarize it using the binary() method with the bitmap creation options, and do that for each bitmap you need to make. You may also use the to_bitmap() method to create bitmaps. Note that these methods allocate the bitmap in RAM on the heap, so the number you can create is limited.

After you create these images use them with difference as a mask.

I will be out of internet service this weekend. Ibrahim can help with any further details.

Hey, thanks for the help so far.

I’ve created a 1-bit-per-pixel bitmap in Photoshop and am trying to load it into RAM; however, I keep getting an error saying that the image is corrupt. I tried to create the same 1-bit image in Paint and got an error saying that the image is in an unsupported format. I get the same unsupported-format error when I try to load a .jpg image taken with the example snapshot code… so I’m not sure where I’m going wrong!

Here’s the line for reference:
maskImg = image.Image("/masktest.bmp", copy_to_fb=True)

Yes, we don’t support loading 1-bit-per-pixel images. Adding support isn’t that much work in the C code, but I don’t have time for it right now. That’s why I mentioned that you have to have the camera convert the image to 1 bit per pixel itself. So save the image as a .pgm or .ppm file, and then you can use the to_bitmap() method to convert the image to a mask.
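Putting the steps above together, the workflow might look like this sketch. It assumes an OpenMV firmware build where Image.to_bitmap() exists and difference() accepts a mask= keyword, as described in this thread; the file names are hypothetical placeholders, and the OpenMV-only image module is imported inside the function since it only exists on the camera.

    # Sketch, not a definitive implementation: OpenMV-only API,
    # hypothetical file names.

    def load_mask(path):
        import image # OpenMV-only module; available on the camera.
        m = image.Image(path) # e.g. "/masktest.pgm" exported earlier.
        return m.to_bitmap(copy=True) # 1 bpp bitmap, allocated on the heap.

    def masked_difference(img, bg_path, mask):
        # Frame-difference img against the stored background image,
        # but only where the mask bits are set.
        return img.difference(bg_path, mask=mask)

On the camera you would call load_mask() once at startup and reuse the returned bitmap every frame, since each to_bitmap(copy=True) call consumes heap memory.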

Got it working, thanks!

Could you post a snippet of code for others?

Sure.

I exported a .pbm in Photoshop and used a converter to turn it into a .pgm. Then I applied the mask to the image before frame differencing.

import sensor, time, os
sensor.reset() # Setup lines mirror the OpenMV frame-differencing example.
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
if not "temp" in os.listdir(): os.mkdir("temp") # Make sure temp/ exists.
sensor.snapshot().save("temp/bg.bmp") # Seed the initial background frame.
BG_UPDATE_FRAMES = 50 # Background refresh interval; adjust as needed.
frame_count = 0
clock = time.clock()

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot().b_and("/masktest.pgm") # Take a picture and mask it.
    frame_count += 1
    if (frame_count > BG_UPDATE_FRAMES):
        frame_count = 0
        img.save("temp/bg.bmp")

    # Replace the image with the "abs(NEW-OLD)" frame difference.
    img.difference("temp/bg.bmp")

The only line you really have to change is the one where img is declared. I plan on attempting multiple differencing operations today, and I can post the results of that here later as well.

Forgot to post yesterday, but here’s what I ended up doing. I discovered that there was a way to reduce the region of interest for the histogram, so I ultimately didn’t need any masks at all.
Here’s an example line of how I implemented that:

hist = img.get_histogram(roi=(0, 215, 100, 25)) # roi is (x, y, w, h).
diffLeft = hist.get_percentile(0.99).value() - hist.get_percentile(0.90).value()
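For anyone wanting the original goal of three regions at once, the same pattern extends naturally. In the sketch below, only the "left" ROI tuple comes from the line above; the other two are hypothetical placeholders for a 320x240 (QVGA) frame, and spread() assumes the OpenMV get_histogram()/get_percentile() API used in this thread.

    # ROIs are (x, y, w, h). Adjust "middle" and "right" to your own
    # regions; they are placeholders, not values from this thread.
    ROIS = {
        "left":   (0,   215, 100, 25),
        "middle": (110, 215, 100, 25),
        "right":  (220, 215, 100, 25),
    }

    def spread(img, roi):
        # Distance between the 99th and 90th percentile brightness in
        # one region of the already-differenced image (runs on-camera).
        hist = img.get_histogram(roi=roi) # Compute the histogram once.
        return hist.get_percentile(0.99).value() - hist.get_percentile(0.90).value()

    # In the main loop, after img.difference("temp/bg.bmp"):
    #     diffs = {name: spread(img, roi) for name, roi in ROIS.items()}

A large spread in one region suggests motion there, so the three values can be thresholded independently per region.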