Is it possible to run object recognition and classification tasks on OpenMV Cam H7?

I see that this example https://github.com/openmv/openmv/blob/master/scripts/examples/25-Machine-Learning/tf_mobilenet_search_whole_window.py only works on the OpenMV Cam H7 Plus, which has SDRAM. Are there any other ways to run object recognition and classification tasks on the OpenMV Cam H7? If possible, please link some examples. The above code throws a MemoryError ("Out of fast Frame Buffer Stack Memory! Please reduce the resolution of the image you are running this algorithm on to bypass this issue!"). So, are there any examples for running it on the Cam H7?
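For reference, here is roughly what I am running (adapted from the linked example; the model and label file names are just what I copied to the SD card, and I already dropped the resolution as the error message suggests):

```python
import sensor, image, time, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)  # 160x120, reduced from the example's QVGA
sensor.skip_frames(time=2000)

# load_to_fb should keep the model out of the MicroPython heap
# (if I understood the docs correctly)
net = tf.load("mobilenet_v1_0.25_128_quant.tflite", load_to_fb=True)
labels = [line.rstrip('\n') for line in open("mobilenet_labels.txt")]

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    for obj in net.classify(img):
        scores = obj.output()
        print(labels[scores.index(max(scores))], clock.fps())
```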

Thanks in advance.

Yes, it's possible, but you have to train your own, smaller TensorFlow CNN. We will be standing up a pipeline inside OpenMV IDE later this year to make it easy to train networks.
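To give a feel for the scale I mean, here is a rough sketch (illustration only, not the upcoming pipeline) of the kind of tiny Keras CNN that can fit:

```python
import tensorflow as tf

# Illustration of "smaller": a few conv layers on a 96x96 grayscale input,
# only a few thousand parameters before quantization.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. person / no person
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```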

When you say a smaller network, how small should it be? I tried SqueezeNet (around 5 MB) and it threw a MemoryError; then I tried Mobilenet_V1_0.25_128_quant (around 0.5 MB), and it is still throwing an error. Is there a tutorial or example I can follow?

You can basically only use the built-in person detector.
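Something along these lines (from memory; see the tf_person_detection examples that ship with the IDE for the exact script):

```python
import sensor, image, time, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))  # square crop for the detector
sensor.skip_frames(time=2000)

net = tf.load("person_detection")  # built into the firmware, no SD card needed
labels = ["unsure", "person", "no_person"]

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    for obj in net.classify(img):
        scores = obj.output()
        print(labels[scores.index(max(scores))], clock.fps())
```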

We posted a link in the docs to a guide that Google published on how to train a net. It’s under the TensorFlow module documentation.

Generally, the net needs to be about 200 KB at most, which is smaller than any general-purpose network.
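If you are converting your own Keras model, a quick sanity check on size looks roughly like this (a sketch; `model` is whatever small network you trained):

```python
import tensorflow as tf

# "model" is your trained Keras network.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

size_kb = len(tflite_model) / 1024
print("model size: %.1f KB" % size_kb)  # aim for <= ~200 KB on the plain H7

with open("trained.tflite", "wb") as f:
    f.write(tflite_model)
```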

Yes, I realize the TensorFlow support is rather unusable right now. Our goal is to fix that later this year. Google doesn’t exactly make this easy for us either…

Can you share the model you used for person detection? I know it can be used internally, but is there any way I could get the model and use it for transfer learning? Or could you share the model architecture and the dataset it was trained on, so that I can train a similar model that would fit the requirements to run on the Cam H7? None of the smaller models I train are running on the H7.

Thanks in advance.

I would also be interested to see this model!

It’s in tensorflow lite for microcontrollers: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/person_detection

I am actually trying to train the model from scratch, but for car detection. The documentation provided by TensorFlow (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/person_detection/training_a_model.md) is out of date, but I managed to get a TFLite model.
Despite that, I am still having problems with quantization (Quantization problem while reproducing person detection example · Issue #37347 · tensorflow/tensorflow · GitHub).
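For reference, this is roughly the converter setup I am using (the saved-model path and the representative images below are placeholders for my car-detection data):

```python
import numpy as np
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("car_detection_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data_gen():
    # In my real script this yields a few hundred 96x96 grayscale training
    # images scaled to [0, 1]; random data here is just a placeholder.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # the int8 I/O step is where it breaks for me
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("car_detection_quant.tflite", "wb") as f:
    f.write(tflite_model)
```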

I am posting the model I have, in case you want to check it or have any suggestions on how to fix it.
2020_03_02-quantized-io8.zip (227 KB)

Hi, I honestly don't know how to get around the issue. We're working directly with the TensorFlow Lite people on getting more support for this. If possible, reach out to Daniel Situnayake. He left the TensorFlow Lite team, but he was the one who helped us get the person detector working originally.

Also, I think training Keras models should work with the latest update… However, we noticed a regression where TensorFlow's stack usage is very high, which causes stack overflows on our system right now.