I’m trying to get a decent people-recognition program running on the OpenMV camera, and I was wondering if there is anything built in to transfer the image data to a NumPy array to do more accurate people recognition.
The examples that use the TensorFlow plugin and machine learning aren’t very accurate and don’t work for people more than 5 m away from the camera, and that is something this project requires.
If there is another way to get better recognition results using TensorFlow, I’d like to hear about it.
I would like to export the NumPy array to another microcontroller and run the recognition software there. I guess the OpenMV Cam doesn’t have enough RAM for more complex algorithms.
Or is there any way to use TensorBoard together with the network built into the OpenMV firmware’s TensorFlow, to see the program making decisions based on the camera images and so refine the network?
Is there a way to do this?
Hi, the easiest thing to do is to use the moving-window feature built into the classify() call and just let the network search the whole image at multiple resolutions. This takes more time, but it’s trivial for you to change. Please note that the net only sees 96x96 pixels, so it can’t really make out much at a distance. If you adjust the scale and overlap arguments you can get the net to execute, say, 9 times on the image to see people farther away (at 9x the processing time, however).
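To make that 9x figure concrete, here is a small pure-Python sketch of how a multi-scale moving-window search generates classification windows. This is not the OpenMV firmware’s actual implementation — the function name and the exact stepping rule are my own assumptions about how a min-scale/scale-multiplier/overlap scheme typically interacts — but it shows why halving the window size with 50% overlap adds 9 extra net executions:

```python
def sliding_windows(img_w, img_h, scale_mul=0.5, min_scale=0.5, overlap=0.5):
    """Enumerate (x, y, w, h) ROIs a multi-scale window search would classify.

    Starts with a window covering the whole image, then shrinks the window
    by scale_mul each pass until it would drop below min_scale. Each ROI
    would be resized to the net's input (e.g. 96x96) before classification.
    """
    rois = []
    scale = 1.0
    while scale >= min_scale:
        w = int(img_w * scale)
        h = int(img_h * scale)
        # With 50% overlap, each step moves the window half its size.
        step_x = max(1, int(w * (1.0 - overlap)))
        step_y = max(1, int(h * (1.0 - overlap)))
        y = 0
        while y + h <= img_h:
            x = 0
            while x + w <= img_w:
                rois.append((x, y, w, h))
                x += step_x
            y += step_y
        scale *= scale_mul
    return rois

# On a 240x240 frame: 1 full-frame window, plus a 3x3 grid of
# half-size windows at 50% overlap -> 10 net executions total.
rois = sliding_windows(240, 240)
print(len(rois))  # → 10
```

Each extra window is a full forward pass of the net, which is why the frame rate drops roughly in proportion to the window count.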
If you want to train your own networks you can do that too. You don’t want to use NumPy on the camera: the OpenMV Cam is a microcontroller, and it can’t trivially allocate an image in the Python heap without quickly running out of memory. You can, however, capture pictures with the camera to train your own net.
Another customer just let me know that TensorFlow Lite for Microcontrollers finally supports a previously missing operator that lets you use Keras models, so I will update the TensorFlow library tonight and post a new firmware image with this feature.