Motion Detection

I have a project using the OpenMV camera to detect movement/motion in an area. The camera is mounted about 4 feet from the ground, pointing diagonally downwards at the floor. It will be used to detect people, bags, etc. to open a door.

I am using the motion-detection concept from the snapshot-on-movement example as a baseline, and I wanted to ask for some suggestions.

  1. The example code uses the max() statistics value to look for changes between the saved background and the current snapshot. However, this triggers very easily (shadows especially), and changing lighting conditions add another dynamic aspect to what max() limit I should use. Any suggestions on making it more robust? Should I use more parameters? How about being able to adapt my limit to different lighting conditions?

  2. I am updating my background periodically. To ensure a consistent basis, is it better to take a new background periodically or to blend it in periodically? I know there is an advanced background-blending algorithm example, but there are occasions where I don’t want the person/object to be blended into the background.

  3. Turning lights on/off triggers motion detection. Any suggestions on adapting to this?

This project is going to see heavy use with a lot of foot traffic, so I want robust motion detection in order to reliably open the door and not close it on anyone. Thank you.
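For context, the baseline I'm starting from boils down to: save a background frame, subtract it from each new snapshot, and trigger when the maximum per-pixel difference exceeds a limit. Here is a minimal plain-Python sketch of that decision logic (frames modeled as flat lists of grayscale values; on the camera these steps would be `sensor.snapshot()`, `img.difference()`, and `img.get_statistics().max()`, and `TRIGGER_THRESHOLD` is a made-up placeholder value):

```python
# Plain-Python sketch of the frame-differencing trigger.
# On OpenMV hardware the equivalent steps are sensor.snapshot(),
# img.difference(background), and img.get_statistics().max().

TRIGGER_THRESHOLD = 20  # placeholder value; tune for your scene

def max_difference(background, frame):
    """Maximum per-pixel absolute difference between two frames
    (frames are flat lists of 0-255 grayscale values)."""
    return max(abs(b - f) for b, f in zip(background, frame))

def motion_triggered(background, frame, threshold=TRIGGER_THRESHOLD):
    """True when any pixel changed by more than `threshold`."""
    return max_difference(background, frame) > threshold

# Example: a single bright pixel appearing against a flat background.
bg = [10] * 16
cur = list(bg)
cur[5] = 200  # e.g. a shadow or an object entering the scene
# motion_triggered(bg, cur) -> True
```

The weakness described above is visible here: one bright pixel (a shadow edge is enough) trips the max()-based trigger.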

  1. Yeah, so, I just picked a parameter that appeared to work. After you do difference() you have a whole image to work with, and there’s an almost endless amount of stuff you can do to compare the two. Using max() on the lighting channel of the whole image is simply a way of checking for a global lighting change.

… Anyway, I don’t really have a lot of ideas on what’s the best way to detect motion reliably. I’m happy to brainstorm with you, though.

  2. Either method is fine. This is where the flexibility of being able to code up a lot of cases in MicroPython will be useful. You might want to track some kind of score metric on the image to determine when to blend and when to just replace the background entirely.

  3. Lighting is easy to deal with: just check whether the global illumination change was massive and ignore that frame. Then update the background image.
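A sketch of that check, together with a blend-vs-replace update policy, in plain Python on flat pixel lists (on the camera the means would come from `img.get_statistics()` and the blend from `img.blend()`; the threshold and alpha values here are made-up placeholders):

```python
# Sketch of the lighting check and background-update policy.
# The mean values would come from img.get_statistics() on the camera.

LIGHT_CHANGE_THRESHOLD = 40   # placeholder: mean shift that counts as a light switch
BLEND_ALPHA = 32              # placeholder: out of 256, like img.blend(alpha=...)

def global_light_change(bg_mean, frame_mean, threshold=LIGHT_CHANGE_THRESHOLD):
    """True when the whole scene brightened/darkened at once (lights toggled)."""
    return abs(bg_mean - frame_mean) > threshold

def update_background(background, frame, replace):
    """Replace the background outright, or blend the new frame in slowly.
    The blend weight mirrors alpha-style blending: alpha/256 of the new frame."""
    if replace:
        return list(frame)
    a = BLEND_ALPHA / 256.0
    return [int((1 - a) * b + a * f) for b, f in zip(background, frame)]

# Policy: if the lights were toggled, replace the background immediately
# and skip motion detection for that frame; otherwise blend gradually.
```

Gradual blending tolerates slow lighting drift, while the replace path handles the sudden on/off case from question 3 without opening the door.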

You might wish to use the find_blobs() method on the difference image. This will let you find the location of each difference area and check its size.
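To make that concrete, here is a plain-Python illustration of what blob extraction gives you on a thresholded difference image: connected regions of changed pixels, each with a pixel count and bounding box you can filter by size. On the camera you would call find_blobs() on the difference image instead; `DIFF_MIN` and `MIN_PIXELS` are made-up placeholders.

```python
# Illustration of what find_blobs() does: group changed pixels in a
# thresholded difference image into connected regions, filtered by size.

DIFF_MIN = 20     # placeholder: per-pixel difference that counts as "changed"
MIN_PIXELS = 3    # placeholder: ignore blobs smaller than this (noise/shadows)

def find_blobs(diff, width, height):
    """Flood-fill connected regions of changed pixels.
    `diff` is a flat list of per-pixel differences; returns a list of
    (pixel_count, (min_x, min_y, max_x, max_y)) for blobs >= MIN_PIXELS."""
    seen = [False] * (width * height)
    blobs = []
    for start in range(width * height):
        if seen[start] or diff[start] < DIFF_MIN:
            continue
        stack, pixels = [start], []
        seen[start] = True
        while stack:
            i = stack.pop()
            pixels.append(i)
            x, y = i % width, i // width
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                j = ny * width + nx
                if 0 <= nx < width and 0 <= ny < height \
                        and not seen[j] and diff[j] >= DIFF_MIN:
                    seen[j] = True
                    stack.append(j)
        if len(pixels) >= MIN_PIXELS:
            xs = [i % width for i in pixels]
            ys = [i // width for i in pixels]
            blobs.append((len(pixels), (min(xs), min(ys), max(xs), max(ys))))
    return blobs
```

With per-blob sizes and positions you can ignore small shadow speckles entirely and only open the door for person-sized regions, which is much more robust than a single global max().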