I have a project using the OpenMV camera to detect motion in an area. The camera is mounted about 4 feet from the ground, pointing diagonally down at the floor. It will be used to detect people, bags, etc. in order to open a door.
I am using the motion-detection concept from the snapshot-on-movement example as a baseline, and I wanted to ask for some suggestions.
The example code uses the Max() statistics value to look for changes between the saved background and the current snapshot. However, this triggers very easily (shadows especially), and changing lighting conditions make it hard to settle on a single Max() limit. Any suggestions for making it more robust? Should I use more parameters? And how could I adapt my limit to different lighting conditions?
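One way to avoid a fixed limit is to learn the limit from the camera's own idle noise: track the per-frame max difference while nothing is happening, and trigger only when the current value is several standard deviations above that baseline. This is a minimal plain-Python sketch, not OpenMV API code; the `AdaptiveTrigger` class and its parameters are hypothetical, and on the camera you would feed it something like the `max()` statistic of the differenced frame each loop.

```python
from collections import deque
import math

class AdaptiveTrigger:
    """Hypothetical helper: adapts the trigger limit to recent
    idle-frame noise instead of using one fixed constant."""
    def __init__(self, window=50, k=4.0, min_margin=5.0):
        self.samples = deque(maxlen=window)  # max-diff values from idle frames
        self.k = k                           # std-devs above the noise floor
        self.min_margin = min_margin         # floor so the limit never collapses

    def threshold(self):
        if len(self.samples) < 5:
            return 255.0  # not enough data yet: never trigger
        n = len(self.samples)
        mean = sum(self.samples) / n
        var = sum((s - mean) ** 2 for s in self.samples) / n
        return mean + max(self.k * math.sqrt(var), self.min_margin)

    def update(self, max_diff):
        """Feed the per-frame max difference; returns True on motion."""
        triggered = max_diff > self.threshold()
        if not triggered:
            self.samples.append(max_diff)  # only learn from idle frames
        return triggered

# Simulated idle noise around 10, then a real event at 80:
trig = AdaptiveTrigger()
for v in [9, 11, 10, 12, 8, 10, 11, 9, 10, 12]:
    trig.update(v)
print(trig.update(80))  # True: a jump to 80 fires
print(trig.update(12))  # False: ordinary noise does not
```

Because the baseline only updates on idle frames, the threshold drifts with slow lighting changes but is not dragged upward by real motion events.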
I am updating my background periodically. To keep a consistent basis, is it better to take a fresh background periodically or to blend the current frame in periodically? I know there is an advanced background-blending example, but there are occasions where I don't want the person/object to be absorbed into the background.
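A common middle ground is to blend continuously but gate the blend on your own motion flag: blend slowly while the scene is idle (so gradual lighting drift is tracked), and freeze the background the moment motion is detected (so a person standing in the doorway is never learned into it). A minimal sketch of that idea, using flat lists of pixel values as stand-ins for image buffers; the function name and alpha value are illustrative, not OpenMV API:

```python
def update_background(bg, frame, motion_detected, alpha=0.05):
    """Exponential background blending, gated on motion.
    bg/frame: flat lists of pixel values (stand-ins for image buffers).
    alpha: blend rate; small values track slow lighting drift only."""
    if motion_detected:
        return bg  # freeze: never learn the moving person/object
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

bg = [100.0] * 4
brighter = [120.0] * 4                                   # lights drifted up
bg = update_background(bg, brighter, motion_detected=False)
print(bg)  # moved 5% toward the new lighting -> [101.0, 101.0, 101.0, 101.0]
bg = update_background(bg, [255.0] * 4, motion_detected=True)
print(bg)  # unchanged: the bright object was not blended in
```

Compared to a hard periodic replacement, this never risks capturing a person in the new background, because the update simply pauses whenever your detector says the scene is occupied.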
Turning the lights on/off also triggers motion detection. Any suggestions for adapting to this?
This project will see heavy foot traffic, so I want motion detection robust enough to reliably open the door and never close it on anyone. Thank you!