I noticed that the face detection code on the CAM M7 doesn't detect all faces in an image at QVGA resolution when multiple faces are present, even if the faces are looking directly at the camera. I started looking at the source on GitHub and was wondering where the thresholds in cascade.h were obtained from. Did you train manually using your own dataset and AdaBoost?
Hi, the face detector is the same one used in OpenCV. That said, some loss of precision, the fact that we don't test every possible scale, and video quality all affect it. In particular, we skip scales so the detector runs faster.
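To illustrate the scale-skipping tradeoff, here's a minimal sketch of how a Viola-Jones style sliding-window detector builds its scale pyramid. The base window size and scale factors below are illustrative assumptions, not the actual values in cascade.h: a coarser scale step tests far fewer window sizes (faster), so a face whose apparent size falls between tested scales can be missed.

```python
# Sketch: scale pyramid for a sliding-window face detector.
# NOTE: base=24 and the 1.25 / 1.1 factors are assumed for illustration;
# they are not taken from cascade.h or the OpenMV firmware.

def window_scales(base=24, max_size=240, factor=1.25):
    """Return the detection window sizes actually tested."""
    sizes = []
    s = float(base)
    while s <= max_size:
        sizes.append(round(s))
        s *= factor  # coarser factor -> bigger jumps between tested sizes
    return sizes

fast = window_scales(factor=1.25)      # fewer scales: faster, lower recall
thorough = window_scales(factor=1.1)   # denser pyramid: slower, better recall
print(len(fast), "scales vs", len(thorough), "scales")
```

With these assumed numbers, the coarse pyramid tests 11 window sizes while the dense one tests 25, which is roughly the speed/recall tradeoff being described.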
Note that that code could use an upgrade. It was designed for the M4, which had much tighter constraints than the M7. We could do the whole process a lot better on the M7 now.