OpenMV H7 Plus examples not working near edge of screen

I tried several of the examples, like multi_color_blob_tracking and tf_person_detection_search_whole_window, with several different backgrounds.

I am running them full screen: VGA framesize with 640x480 windowing. The H7 accurately tracks an object/person in the middle of the screen, but as soon as it leaves the center, the object/person starts being tracked differently. In the case of multi_color_blob_tracking, smaller parts of the object are tracked, and in the case of tf_person_detection_search_whole_window, the person is less likely to be recognized.
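For reference, my setup code looks roughly like this sketch (the pixel format and skip time are just the example defaults, nothing custom; the point is the VGA framesize plus 640x480 windowing):

```python
# Rough sketch of my setup (pixel format and skip time are the example
# defaults; the point is the VGA framesize + 640x480 windowing).
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)       # 640x480
sensor.set_windowing((640, 480))       # full-frame windowing
sensor.skip_frames(time=2000)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    # ... blob tracking / person classification runs on img here ...
    print(clock.fps())
```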

This is incredibly noticeable as the object/person gets near the edge of the screen, to the point where it is unusable. Why is the same object, at the same distance and fully in frame, tracked poorly when it's not in the center of the screen?!

Hi, the tf_person_detection_search_whole_window example just shows off doing a very low-effort pyramid search using an image classification CNN. If you actually want it to search all locations by changing the search settings, it will be very slow, as it's not an SSD-type network.

… Like, the example has quite a bit of text describing what is happening… To be clear, we do not offer person tracking as a usable feature, just person classification using Google's TensorFlow model onboard.
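Condensed, the example boils down to something like this sketch (the label names and the 0.7 score threshold here are illustrative, check the example for the exact values):

```python
# Condensed version of what the example does: slide a person classification
# CNN over the image at a few scales and draw a box where it fires.
# Label order and the 0.7 threshold are illustrative.
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

net = tf.load('person_detection')      # built-in person classifier
labels = ['unperson', 'person']

while True:
    img = sensor.snapshot()
    # classify() runs the CNN on a pyramid of sliding windows over img
    for obj in net.classify(img, min_scale=0.5, scale_mul=0.5,
                            x_overlap=0.0, y_overlap=0.0):
        if obj.output()[labels.index('person')] > 0.7:
            img.draw_rectangle(obj.rect())
```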

That said, we will have actual SSD object detection coming soon thanks to EdgeImpulse. However, it is not released yet.

Is there any easy way to modify tf_person_detection_search_whole_window so that it searches the whole window using SSD? Like decreasing the image quality or loosening some search parameters? I do not need to track the person, nor do I need it to be super accurate; I just want a basic program that can tell me, in a reasonable amount of time, whether a person is in an image. If not, do you think writing/borrowing outside code could get the job done? I find it hard to believe that some version of person detection is not possible, even at low fps or image quality.

Um, yeah, just read the example code: openmv/tf_person_detection_search_whole_window.py at master · openmv/openmv · GitHub

When you set x/y overlap to something like 0.7, min_scale to 0.3, and scale_mul to around 0.75, it will search for a person in a lot of places. However, you will get sub-1 FPS.
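Roughly, that means a classify() call like this (only the four search arguments change from the example; the index-1 "person" score check is whatever the example's label list uses):

```python
# Denser search: more scales and more overlapping window positions.
# Expect well under 1 FPS on the H7 Plus with these values.
for obj in net.classify(img, min_scale=0.3, scale_mul=0.75,
                        x_overlap=0.7, y_overlap=0.7):
    if obj.output()[1] > 0.7:   # index 1 = 'person' in the example's label list
        img.draw_rectangle(obj.rect())
```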

Again, it's on the roadmap to support actual object detection. This is coming in a few months. However, all we have right now is this sliding window detector.

Is there documentation somewhere (besides in the example code) that explains how min_scale, scale_mul, and x_overlap/y_overlap interact with each other? For example, how could I separate the image into multiple ROIs in the x direction? Scale affects both x and y, and changing x_overlap does not fix that.


Is something similar to the above possible, where the orange box is the first sweep (entire frame), red is the second (halves), and black is the third (fifths)? Note: I made fifths so it is easy to see in the example; it could be fourths, eighths, etc.
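Something like this sketch is what I have in mind, just passing explicit ROIs myself (assuming net and img come from the example's setup and main loop, and that classify() accepts a roi argument as the docs suggest):

```python
# Sketch of the sweep in the picture: whole frame, then halves, then fifths,
# as explicit ROIs. 'net' and 'img' come from the example's setup/main loop.
w, h = img.width(), img.height()
for n in (1, 2, 5):                     # 1 = whole frame, 2 = halves, 5 = fifths
    strip_w = w // n
    for i in range(n):
        roi = (i * strip_w, 0, strip_w, h)
        for obj in net.classify(img, roi=roi):
            img.draw_rectangle(obj.rect())
```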

https://docs.openmv.io/library/omv.tf.html#tf.tf.classify

Just set min_scale to 0.5 and scale_mul to 0.5. Keep x/y overlap at zero.

So, the sliding window algorithm automatically takes care of doing what you want above with just the scale_mul argument. min_scale tells it how many iterations to run. Then x/y overlap tell it whether the detection areas should overlap. For example, you have no overlap in the image you posted above.
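To make the interaction concrete, here is a plain-Python illustration (desktop Python, not the firmware code) of the window schedule those three arguments produce under the behavior described above:

```python
# Plain-Python illustration of the window schedule (not the firmware code).
# With min_scale=0.5, scale_mul=0.5 and zero overlap you get one full-frame
# pass plus a 2x2 grid of half-size windows, like your picture.
def window_schedule(w, h, min_scale=0.5, scale_mul=0.5, x_overlap=0.0, y_overlap=0.0):
    scale = 1.0
    while scale >= min_scale:
        win_w, win_h = int(w * scale), int(h * scale)
        x_step = max(1, int(win_w * (1.0 - x_overlap)))
        y_step = max(1, int(win_h * (1.0 - y_overlap)))
        for y in range(0, h - win_h + 1, y_step):
            for x in range(0, w - win_w + 1, x_step):
                print("scale %.2f -> window (%d, %d) %dx%d" % (scale, x, y, win_w, win_h))
        scale *= scale_mul

window_schedule(640, 480)
```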

You may notice the -1 being passed to x/y overlap. That forces the algorithm to not try all areas; it's a special arg. When the overlap is zero it does what you posted above.