Moving camera background correction

First of all, I am very happy with the OpenMV H7. I am currently working on some background subtraction and I would like to ask for some advice:

  1. I want to use the camera for continuous background correction, meaning that I want to use the frame taken in the previous time step (t0-1) as the background for the current frame (t0):

extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)

triggered = False

while True:
    clock.tick()              # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()   # Take a picture and return the image.
    img.difference(extra_fb)  # Subtract the background.
    extra_fb.replace(img)     # Store img in extra_fb for the next iteration.

However, if I do that, only every second frame has the background subtracted (the displayed image alternates between black (background subtracted) and a normal image with no subtraction). Why is that?

  2. I would like to check how much the images were shifted and distorted between frames at t0 and t0-1 (in the case of a moving camera). In the end I would like to get a vector for the whole image, which I could use to modify the previous image (at t0-1) and use it as the background for the current time step (t0).

  3. Is there a good way to track moving objects with a moving camera on the OpenMV platform?

Any help would be really appreciated.

Thank you very much!

Hi, the img returned by snapshot() and by get_fb() are the same buffer…

So, in your code you take the current image, subtract the last one from it, and then save the difference image to the extra fb, which is not what you were trying to do.

Instead: take the snapshot, apply the difference to the extra_fb, do your calculations on that, and then move the new (raw) image into the extra fb.

If you want to display the difference image, you need a third buffer. This is because snapshot() displays whatever is in the frame buffer to the IDE.
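The suggested order of operations can be sketched off-device in plain Python, with small numpy arrays standing in for the frame buffers (the `step` helper is hypothetical, for illustration only, not an OpenMV API):

```python
import numpy as np

def step(frame, background):
    """One iteration of the suggested order of operations.

    `background` holds the frame from t0-1; `frame` is the frame at t0.
    The difference goes into a separate array, and the *raw* frame
    (not the difference) becomes the next background."""
    diff = np.abs(background.astype(np.int16) - frame.astype(np.int16)).astype(np.uint8)
    new_background = frame.copy()   # move the raw image, not the diff
    return diff, new_background

# Simulated 8-bit grayscale frames:
f0 = np.full((4, 4), 100, dtype=np.uint8)
f1 = np.full((4, 4), 120, dtype=np.uint8)
f2 = np.full((4, 4), 120, dtype=np.uint8)

bg = f0
d1, bg = step(f1, bg)   # |100 - 120| = 20 everywhere
d2, bg = step(f2, bg)   # |120 - 120| = 0 everywhere, because bg was f1, not d1
```

The original bug corresponds to writing `bg = d1` instead of `bg = f1`: every second iteration then differences against a difference image, which produces the alternating black/normal frames described above.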

Regarding your second question, we have phase correlation code for this. It doesn’t require background subtraction. Have you seen it? It’s under examples.
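For reference, the core idea behind phase correlation (which the OpenMV examples wrap for you) can be sketched in desktop Python with numpy. This is not the OpenMV API itself, just an illustration of how a whole-image translation vector is recovered from two frames:

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the integer (dy, dx) translation such that
    cur ~= np.roll(ref, (dy, dx), axis=(0, 1)), via the peak of the
    normalized cross-power spectrum."""
    R = np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))
    R /= np.maximum(np.abs(R), 1e-12)        # keep only the phase
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (wrap-around).
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame_prev = rng.random((32, 32))
frame_cur = np.roll(frame_prev, (3, -5), axis=(0, 1))  # simulate camera motion
shift = phase_correlation_shift(frame_prev, frame_cur)  # → (3, -5)
```

On the camera, the examples do this for you and also report rotation and scale; the recovered vector is exactly what you would need to warp the t0-1 frame before using it as a background.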

For your third question: typically, detecting the objects is step one, and then you just track them across frames given the detections. You don't factor in the camera movement unless you need very high precision.
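The detect-then-track idea can be illustrated with a minimal greedy nearest-neighbour association step (pure Python, hypothetical helper name, no OpenMV dependency):

```python
import math

def associate(tracks, detections, max_dist=50.0):
    """Greedily match each existing track (id -> last centroid) to the
    nearest unused detection centroid within max_dist pixels.
    A minimal stand-in for the association step of a detect-then-track loop."""
    assignments = {}
    used = set()
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, max_dist
        for i, (dx_, dy_) in enumerate(detections):
            if i in used:
                continue
            d = math.hypot(tx - dx_, ty - dy_)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments

tracks = {1: (10.0, 10.0), 2: (100.0, 100.0)}   # track id -> last centroid
detections = [(102.0, 98.0), (12.0, 11.0)]      # centroids found this frame
matches = associate(tracks, detections)          # → {1: 1, 2: 0}
```

Greedy matching is order-dependent and can fail for crossing targets; it is only meant to show why camera motion usually does not need to be modelled when detection itself is reliable.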

Thank you for a very detailed reply. It helped me a lot!

Regarding the 2nd question, did you mean differential translation?

I want to use the OpenMV system to steer a small car. The system would detect another small car in front of it (its shape, color, and illumination are not defined and could be anything) at a range of up to several meters (the car then occupies roughly 30×30 pixels or fewer on the camera) and track it. However, the speeds are quite high, and there are a lot of jumps and fast, sharp turns involved. Is there an example that I could follow to do this, or is it too complicated?

Thank you a lot in advance!

It depends on the signal-to-noise ratio. You can typically use faster algorithms when the thing is really easy to track.

E.g. color blob tracking is very fast but not robust to massive lighting changes.
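As a rough illustration of what blob tracking does under the hood, here is a toy, pure-Python stand-in for the idea behind OpenMV's `find_blobs()` (not its actual implementation): threshold the image, then group connected pixels into blobs with an area and a centroid.

```python
import numpy as np

def find_blobs(img, lo, hi, min_area=4):
    """Toy blob finder: threshold a grayscale image and group connected
    pixels (4-connectivity, iterative flood fill) into blobs."""
    mask = (img >= lo) & (img <= hi)
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    H, W = mask.shape
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and not seen[sy, sx]:
                stack = [(sy, sx)]
                seen[sy, sx] = True
                pixels = []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = zip(*pixels)
                    blobs.append({"area": len(pixels),
                                  "cy": sum(ys) / len(ys),
                                  "cx": sum(xs) / len(xs)})
    return blobs
```

The fragility mentioned above lives in the fixed `lo`/`hi` thresholds: a large lighting change moves the target's pixel values outside the threshold band and the blob simply disappears.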