Find Object Position after using Frame Differencing

Hello,

I would like to use frame differencing to detect if an object has been added to my scene, and then analyze the object by focusing specifically on its position.

For example, drawing a rectangle around the object after performing frame differencing.

I’ve been looking into the find_blobs function with the “in_memory_advance_frame_difference.py” example but can’t seem to figure it out. I figured that the background after applying the difference will generally be the same colour, so I found a general threshold for that colour and applied it to find_blobs with invert=True. But it just outlines the entire frame rather than the change in the frame.

Here’s what I added to the code example:

    # Trigger when the brightest pixels of the difference image stand out
    # enough from the rest of the frame.
    diff = hist.get_percentile(0.99).l_value() - hist.get_percentile(0.90).l_value()
    triggered = diff > TRIGGER_THRESHOLD

    # LAB threshold for the near-black background; invert=True searches for
    # everything that is NOT background.
    black = [(0, 20, 0, 20, 0, 30)]
    for blob in img.find_blobs(black, invert=True, pixels_threshold=20,
                               area_threshold=200, merge=True):
        img.draw_rectangle(blob.rect())

    print(clock.fps(), triggered)

Additionally, frame differencing has switched the Frame Buffer to show the difference output. Is there a way to revert the Frame Buffer to the original image with the bounding rectangle?

Thank you,
Nicholas

Hi, run the frame differencing operation and then sample the colors in the resulting image to get the color values of whatever is there minus the background. You need to manually sample the new color values after frame differencing.
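
Something roughly like this (just a sketch, not the exact example script; the frame size, pixel format, and names are placeholders):

    # Rough sketch: print the LAB statistics of the differenced image so you can
    # see what values the new object's pixels actually take.
    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)
    sensor.skip_frames(time=2000)

    # Store a reference frame of the empty scene in an extra frame buffer.
    background = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.RGB565)
    background.replace(sensor.snapshot())

    while True:
        img = sensor.snapshot()
        img.difference(background)      # img now holds |current - background|
        stats = img.get_statistics()
        # The background ends up near zero, so anything well above it is new.
        print(stats.l_mean(), stats.l_max())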

Thank you for your quick reply.

I’m a bit confused by “get the colour values of whatever is there minus the background.” Why minus the background?

On the IDE’s Frame Buffer, after Frame Differencing, the histogram shows a range of 0-30 (roughly) for the background and 40-80 for the difference. Does that mean I do a range from 40-50 for my colour values? If so, why?

Alternatively, why can’t I just take the background colour and apply invert=True? If frame differencing always leaves the background colour in the range of 0-30 and a difference has a different colour scheme, then can’t we just look for blobs that aren’t in the background’s colour scheme? Or does the OpenMV still see the original image rather than the difference image when applying find_blobs?

Thanks,
Nicholas

Please note, I just noticed that the histogram shows the values in RGB rather than LAB. I just did the conversion for the range 0-30 and 40-80.

In my previous reply, please ignore the fact that I gave the range in RGB. The rest of my previous reply is what I’d like to know.

Thanks,
Nicholas

Yeah, using invert is probably the way to go since the background is normally black. So, matching everything that isn’t black is the best approach.

Um, in regards to the original image: it gets destroyed after you do the difference on it, so it’s not there anymore. You can, however, allocate a second extra frame buffer, store the original image in there before doing the difference, and then find blobs on the original frame buffer. The IDE will only display the main frame buffer. You can do an XOR swap operation to swap frame buffers at the end of everything, however.
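
To sketch that out (untested; the threshold, frame size, and buffer names below are just placeholders):

    # Sketch: keep a copy of the current frame in a second extra frame buffer,
    # run the difference in the main frame buffer, find blobs on the difference,
    # and draw the rectangles onto the saved copy.
    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)   # smaller frames leave RAM for the extra buffers
    sensor.skip_frames(time=2000)

    w, h = sensor.width(), sensor.height()
    background = sensor.alloc_extra_fb(w, h, sensor.RGB565)
    background.replace(sensor.snapshot())            # reference frame of the empty scene
    original = sensor.alloc_extra_fb(w, h, sensor.RGB565)

    black = [(0, 20, -10, 10, -10, 10)]              # placeholder "near black" LAB threshold

    while True:
        img = sensor.snapshot()
        original.replace(img)                        # save the original frame first
        img.difference(background)                   # img now holds the difference
        for blob in img.find_blobs(black, invert=True,
                                   pixels_threshold=100, area_threshold=100,
                                   merge=True):
            original.draw_rectangle(blob.rect())     # draw on the saved original
        # ...then XOR-swap img and original so the main frame buffer (what the
        # IDE displays) ends up holding the original frame with the rectangles.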

… Mmm, I should add a swap method. Will do that.

Thanks for your response!

Perfect! I’ve been having a go at finding blobs of the black colour with invert=True. I notice that it bounds the object, but there are a lot of noise/unwanted bounding boxes in the background.

A swap method will definitely make it a lot simpler. I had to reduce the resolution from QVGA to QQVGA in order to have a second extra frame buffer. I see the image.b_xor method, but what’s the swap? Doing just the XOR results in a completely black output in the Frame Buffer.

Thank you,
Nicholas

Hi, Google xor swap. It’s a common method to swap two values without a temporary buffer.
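
With the image methods it would look something like this (using extra_fb as a stand-in for whatever buffer holds your saved copy):

    # Classic XOR swap (a ^= b; b ^= a; a ^= b), applied to the two image buffers.
    # After these three calls, img and extra_fb have traded contents.
    img.b_xor(extra_fb)
    extra_fb.b_xor(img)
    img.b_xor(extra_fb)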

Anyway, regarding the noise: you’ll never get rid of it completely. You have to use the area and pixel thresholds to filter out the unwanted smaller blobs. You may also want to use the binary method to binarize the image and then use erode to kill noise pixels too.
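
For example (threshold values are placeholders, and this assumes img is the differenced frame):

    # Sketch: binarize the differenced image so changed pixels become white,
    # erode away isolated speckles, then find blobs with pixel/area thresholds.
    img.binary([(25, 100, -128, 127, -128, 127)])    # anything brighter than the background
    img.erode(1)                                     # remove small noise pixels
    for blob in img.find_blobs([(50, 100, -128, 127, -128, 127)],
                               pixels_threshold=200, area_threshold=200,
                               merge=True):
        img.draw_rectangle(blob.rect())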

Yes, I knew the XOR swap algorithm, but when I applied it my result was a black screen. It turns out that I wasn’t saving a copy of the frame properly: I had to use the img.replace() method, whereas I was just using a regular assignment.
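
In case it helps anyone else, the fix looked roughly like this (extra_fb is just a placeholder name for my second frame buffer):

    # What I was doing before -- this only rebinds the name, no pixels are copied,
    # so the XOR swap ends up XOR-ing the image with itself and gives a black frame:
    # extra_fb = img
    # What actually copies the frame into the extra frame buffer:
    extra_fb.replace(img)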

The swap is now working!

I’ll play around with the filters to see if I can perfect the bounding to just my object.

Thank you,
Nicholas