Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Greetings,

I am intrigued by the OpenMV Cam H7. I would like to build my own model, train it, and deploy it to the OpenMV Cam H7. What would be the recommended process for doing so?

Thank you in advance,

JCP13

Hi, we are about to release TensorFlow Lite support for the system. Once this is done you can use TensorFlow for all of this.

I just have to write some examples on how to train and port a model and then I can do the release.

Great! Thank you :stuck_out_tongue:

By the way, do you have a rough ETA?

Cheers.

Not really. I can give you a link to my branch with the code on it. I don’t have a small model to test with yet, so it’s not working as far as I know. I was trying to get the initial work with MobileNet operational, but the memory manager used by Google is very inefficient, and they are rewriting it now to allow MobileNet to run on the H7.

Also, I’m going on a retreat this Friday-Sunday, so I can get back on this next week. I can give you the branch the code is on in the meantime. I believe I’m done with it.

Again, thank you for the prompt reply. Can you provide the code branch?

Thank you.

It’s here: https://github.com/kwagyeman/openmv/tree/kwabena/add_tf

There’s a new tf module. It runs TensorFlow Lite models that use depthwise convolutions and fully connected layers. ReLU and pooling are part of the depthwise layers.
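
For a feel of what that could look like in a script on the camera, here is a minimal sketch; the tf.classify() call, the .output() method on the returned objects, and the /model.tflite path are assumptions about the new module’s interface, not a confirmed API.

import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # match the channel count the model expects
sensor.set_framesize(sensor.QVGA)

while True:
    img = sensor.snapshot()
    # Assumed interface: run the quantized model over the image and get scores back.
    for obj in tf.classify("/model.tflite", img):  # hypothetical model path on the SD card
        print(obj.output())  # assumed: list of per-class scores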

I have to write examples first for how to train a model, convert it, and run it on the camera before I can release this.

Sweet! I can’t wait to try out your example :sunglasses:

Hi, I mentioned this on another post. But if you could train and quantize a tflite network, this would accelerate things for me. I have a heavy email load right now which is preventing dev work.
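
For anyone who wants to try, here is a minimal post-training quantization sketch with the TF 2.x converter API. The tiny model and the random representative dataset are placeholders (you would train a real model first), and depending on what the firmware expects you may need uint8 instead of int8 input/output types.

import numpy as np
import tensorflow as tf

# Placeholder model built from ops the module mentions (depthwise conv, fully connected).
model = tf.keras.Sequential([
    tf.keras.layers.DepthwiseConv2D(3, strides=2, activation="relu", input_shape=(96, 96, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
# ... train the model here ...

def rep_data_gen():  # representative inputs drive the quantization ranges
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8, depending on the firmware
converter.inference_output_type = tf.int8

with open("model.tflite", "wb") as f:
    f.write(converter.convert())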

Hi,
Has TensorFlow Lite support been completed?
If yes, how do I find it? If no, when will it be released?
Thanks.

The port has been completed. It will be released in the next firmware release. I was busy having a social life for a while, but I’m putting more time into OpenMV development again now.

Hi,

I hope all is well, do you have any updates on this request?

Thank you.

Ibrahim and I are finally working on the firmware again after a long while. I have to finish up DRAM support, and then this will be added to the firmware.

Thank you for the reply.
Question: I would like to use the FLIR Lepton camera for object detection. Can I leverage a pre-trained model, do I have to build one from scratch, or can I do transfer learning with a pre-trained model?

Thanks.

You basically have to do everything from scratch right now. We will be offering TensorFlow support very soon, but the tooling will still be up to you.
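
To sketch what the transfer-learning route usually looks like with standard Keras tooling (nothing OpenMV-specific; the backbone, input size, and class count are illustrative placeholders): freeze a small pre-trained backbone, train a new head on your Lepton images, then quantize and convert as usual. Whether the result fits on the H7 depends on its memory limits.

import tensorflow as tf

# Small pre-trained backbone, frozen; Lepton frames are single-channel, so you
# would replicate the thermal channel to 3 channels to use ImageNet weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), alpha=0.35, include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. object / no-object
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # your labeled Lepton images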

How do I find a tutorial about doing this with TensorFlow Lite? Thank you.

Hi, we won’t be doing a tutorial for TensorFlow Lite. We’ve found that folks who need help training models can’t actually do it on their own. We instead plan to build a system into the IDE which will make this easier. Finding time to work on all this stuff is challenging, however.

Hi,
I have updated my IDE and firmware to the latest to run tflite models on the OpenMV H7, but I am unable to load a tflite file:

  1. Converting a Keras model in h5 format to quantized tflite throws an error: OSError: Unable to read model height, width, and channels!
  2. Converting a model in pb format using the toco command, explicitly specifying height, width, and channels, again causes the error: OSError: Unable to read model height, width, and channels!
  3. Using TensorFlow 2 to create a model, saving it as a SavedModel directory, and converting it to a quantized tflite causes the camera to reset.

It would be great if you could explain how to run a custom model on OpenMV using the tf library.
Thanks.

Hi, I can’t say why your model doesn’t work.

I’ve run the code on mobilenet and the person detector model from Google.

Did you follow this tutorial? https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/examples/person_detection/training_a_model.md

In particular, you need to pass our code a quantized tflite flatbuffer model: Build and convert models | TensorFlow Lite

Our code then runs a few checks on the model; in particular, it has to be able to read the input tensor’s height, width, and channels, which is where the error below comes from.
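
If you want to double-check a model on the desktop before copying it over, here is a minimal sketch using the standard tf.lite.Interpreter API; it mirrors the idea of the firmware’s check (reading the input tensor’s shape and type) rather than reproducing its exact code.

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # path is an example
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
print("input shape:", inp["shape"])  # expect [1, height, width, channels]
print("input dtype:", inp["dtype"])  # a quantized model should report an integer type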

When you get the “Unable to read model height, width, and channels!” error, can you check the terminal and grab the text printed there? You should notice some extra text printed in the terminal, which is the true reason the model could not be loaded. This text will not be displayed in the popup box, however. TensorFlow has its own debug output channel which the IDE can’t easily grab.

The error in the serial terminal will be much more descriptive.

Mmm, I can improve this. I’ll modify the library to print text to a buffer that I can read in one go, to make the errors more understandable.

Hi,
Thanks for following up on my issue. I have already followed the guide you mentioned, and I quantized the model using the mentioned command. I am still getting the error.
Here is the error shown in terminal:

Traceback (most recent call last):
  File "<stdin>", line 12, in <module>
OSError: Unable to read model height, width, and channels!
MicroPython v1.11-omv OpenMV v3.5.0 2019-11-04; OPENMV4-STM32H743
Type "help()" for more information.

By the way, I checked the tflite files on my PC; the model files are correct, and I can run inference when I load the model file using tf.lite.Interpreter. The following code returns the details of the model file:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_tensor_details())  # prints per-tensor metadata for the model

But I get the error when trying to load the same model in OpenMV.
Thanks.

Hi, the error text would be printed above that. Please look at the text that’s printed out in your terminal. TensorFlow outputs an error via printf before the exception is caught by my code. The IDE has no way of knowing the TensorFlow error is not normal output text, so it will appear a bit further up in the log. Please scroll up and look at the grayed-out text; it will be there.

I realized that I need to modify my code error handling so this is more obvious.

If the error is something like “alloc tensors failed” then your model is too big.
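
One rough way to sanity-check that on the camera itself, assuming the model sits at /model.tflite: compare the file size against free heap. This is only a ballpark, since the real limit also depends on the tensor arena the firmware allocates.

import gc, os

gc.collect()
print("free heap bytes:", gc.mem_free())
print("model file bytes:", os.stat("/model.tflite")[6])  # stat index 6 is the file size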