I’m using the OpenMV H7 Plus with the built-in example: TensorFlow face detection. I’m trying to output the coordinates of the face to an Arduino via UART. Often the camera detects multiple faces at once, which messes with the coordinate output. Is there a way to limit the number of detections to one?
Thanks in advance!
Yeah, you just need to filter the detections in Python. The detections come back as a plain Python list of objects, so you can write whatever Python code you want to narrow them down to one.
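As a minimal sketch of that filtering, assuming each detection is an `(x, y, w, h)` rectangle like the face-detection results (the helper name `pick_largest` is just illustrative): keep only the detection with the largest area, which is usually the closest face.

```python
# Sketch: reduce a list of (x, y, w, h) detection rects to one.
# "Largest area" is one reasonable single-face heuristic.

def pick_largest(rects):
    """Return the single detection with the largest area, or None."""
    if not rects:
        return None
    return max(rects, key=lambda r: r[2] * r[3])

# Example: two candidate faces; keep only the bigger one.
faces = [(10, 20, 40, 40), (100, 50, 60, 60)]
best = pick_largest(faces)
print(best)  # the 60x60 rectangle wins
```

In the actual OpenMV script you would call this each frame on the list the detector returns, and only write `best` (if it isn’t `None`) out over UART.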
Ah I see, is the color blob tracking the same as well?
Does the face detection algorithm create a new detection each frame or does it track an object between frames? Is there any way to separate the different detections? I’m trying to only show the object that’s closest to the center of the frame, how should I differentiate the multiple faces detected?
Our code doesn’t track objects. It’s a new detection per frame. So, you need to add your own tracker. This is highly application dependent so you should just write what you think makes sense.
Each frame returns a list of rectangles, one per detected face. To handle multiple faces, loop through the detections and pick the rectangle whose center is closest to the center of the frame.
Just think of the faces as rectangles which are spit out by the face detection code. You can then literally do whatever you want in Python to track them.
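The loop described above can be sketched like this. This is a hedged example, not the library’s API: the rects are assumed to be `(x, y, w, h)` tuples, and the QVGA frame size (320x240) is an assumption you’d replace with whatever resolution your script actually uses.

```python
# Sketch: from a per-frame list of (x, y, w, h) face rects, pick the
# one whose center is nearest the frame center. Frame size assumed.

FRAME_W, FRAME_H = 320, 240   # assumption: QVGA; match your sensor config

def closest_to_center(rects, w=FRAME_W, h=FRAME_H):
    """Return the (x, y, w, h) rect nearest the frame center, or None."""
    if not rects:
        return None
    cx, cy = w // 2, h // 2

    def dist_sq(r):
        rx = r[0] + r[2] // 2   # rect center x
        ry = r[1] + r[3] // 2   # rect center y
        return (rx - cx) ** 2 + (ry - cy) ** 2

    return min(rects, key=dist_sq)

# Example: the second face sits right on the 160,120 frame center.
faces = [(0, 0, 50, 50), (140, 100, 40, 40)]
print(closest_to_center(faces))  # (140, 100, 40, 40)
```

Once you have the winning rectangle, formatting its coordinates into a string and sending that over UART to the Arduino is a separate (and board-specific) step.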