
Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Fri May 10, 2019 5:44 pm
by jcp13
Greetings,

I am intrigued by the OpenMV Cam H7. I would like to build my own model, train it, and deploy it to the OpenMV Cam H7. What would be the recommended process for doing so?

Thank you in advance,

JCP13

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Sat May 11, 2019 3:43 pm
by kwagyeman
Hi, we are about to release TensorFlow Lite support for the system. Once this is done you can use TensorFlow for all of this.

I just have to write some examples on how to train and port a model and then I can do the release.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Mon May 13, 2019 6:14 pm
by jcp13
Great! Thank you :P

By the way, do you have a rough ETA?

Cheers.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Mon May 13, 2019 8:16 pm
by kwagyeman
Not really. I can give you a link to the branch the code is on. I don't have a small model to test with yet, so as far as I know it's not working. I was trying to get the initial work with MobileNet operational, but the memory manager used by Google is very inefficient, and they are rewriting it now to allow MobileNet to run on the H7.

Also, I'm going on a retreat this Friday through Sunday, so I can get back on this next week. In the meantime, I can give you the branch the code is on; I believe I'm done with it.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Tue May 14, 2019 12:21 am
by jcp13
Again, thank you for the prompt reply. Can you provide the code branch?

Thank you.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Tue May 14, 2019 12:36 am
by kwagyeman
It's here: https://github.com/kwagyeman/openmv/tree/kwabena/add_tf

There's a new tf module. It runs TensorFlow Lite models that use depthwise convolutions and fully connected layers. ReLU and pooling are part of the depthwise layers.

I have to write an example of how to train a model and convert it, plus usage examples, before I can release this.
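
For anyone unfamiliar with the terminology above: a depthwise convolution applies a separate 2D filter to each input channel, rather than mixing channels together the way a standard convolution does. A minimal pure-Python illustration of the arithmetic (a sketch for intuition only, not the OpenMV or TensorFlow implementation):

```python
def depthwise_conv2d(image, kernels):
    """Valid-padding depthwise convolution.

    image:   H x W x C nested lists (channels-last layout)
    kernels: KH x KW x C nested lists, one 2D filter per channel
    Each output channel depends only on its own input channel.
    """
    h, w, c = len(image), len(image[0]), len(image[0][0])
    kh, kw = len(kernels), len(kernels[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            pix = []
            for ch in range(c):
                acc = 0
                for u in range(kh):
                    for v in range(kw):
                        acc += image[i + u][j + v][ch] * kernels[u][v][ch]
                pix.append(acc)
            row.append(pix)
        out.append(row)
    return out
```

Because the channels never mix, a depthwise layer needs far fewer multiplies and far less memory than a full convolution, which is why MobileNet-style models built from them are a good fit for microcontrollers like the H7.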

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Tue May 14, 2019 1:04 am
by jcp13
Sweet! I can't wait to try out your example 8-)

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Wed May 15, 2019 12:42 am
by kwagyeman
Hi, I mentioned this on another post. But if you could train and quantize a tflite network, it would speed this up for me. I have a heavy email load right now which is preventing dev work.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Fri Jul 26, 2019 10:14 am
by ahpd
Hi,
Has TensorFlow Lite support been completed?
If so, where can I find it? If not, when will it be released?
Thanks.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Fri Jul 26, 2019 2:38 pm
by kwagyeman
The port has been completed. It will be released in the next firmware release. I was busy having a social life for a while, but I'm now putting more time into OpenMV development again.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Mon Sep 09, 2019 3:06 pm
by jcp13
Hi,

I hope all is well, do you have any updates on this request?

Thank you.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Tue Sep 10, 2019 12:19 am
by kwagyeman
Ibrahim and I are finally working on the firmware after a long while. I have to finish up DRAM support and then this will be added to the firmware.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Thu Sep 12, 2019 8:00 pm
by jcp13
Thank you for the reply.
Question: I would like to use the FLIR Lepton camera for object detection. Can I leverage a pre-trained model, do I have to build one from scratch, or can I do transfer learning with a pre-trained model?

Thanks.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Thu Sep 12, 2019 10:29 pm
by kwagyeman
You basically have to do everything from scratch right now. We will be offering TensorFlow support very soon but the tooling will be up to you still.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Tue Oct 22, 2019 3:50 am
by uraibeef
Where can I find a tutorial about this with TensorFlow Lite? Thank you.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Tue Oct 22, 2019 10:12 am
by kwagyeman
Hi, we won't be doing a tutorial for TensorFlow Lite. We've found that folks who need help training models usually can't do it on their own. We instead plan to build a system into the IDE which will make this easier. Finding time to work on all of this is challenging, however.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Sat Nov 16, 2019 10:47 am
by ahpd
Hi,
I have updated my IDE and firmware to the latest versions to run tflite models on the OpenMV H7, but loading a .tflite file is not possible:
1. Converting a Keras model in h5 format to quantized tflite throws an error: OSError: Unable to read model height, width, and channels!
2. Converting a model in pb format using the toco command, with height, width, and channels specified explicitly, causes the same error: OSError: Unable to read model height, width, and channels!
3. Using TensorFlow 2 to create a model, saving it as a SavedModel directory, and converting it to a quantized tflite causes the camera to reset.
It would be great if you could explain how to run a custom model on OpenMV using the tf library.
Thanks.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Sat Nov 16, 2019 1:33 pm
by kwagyeman
Hi, I can't say why your model doesn't work.

I've run the code on mobilenet and the person detector model from Google.

Did you follow this tutorial? https://github.com/tensorflow/tensorflo ... a_model.md

In particular, you need to pass our code a quantized tflite flatbuffer model: https://www.tensorflow.org/lite/microco ... conversion

Our code then does the following checks on the model:

https://github.com/openmv/tensorflow-li ... tf.cc#L107

...

When you get the "Unable to read model height, width, and channels!" error, can you check the terminal and grab the text printed there? You should see some extra text printed in the terminal, which is the true reason the model could not be loaded. This text is not displayed in the popup box, however; TensorFlow has its own debug output channel which the IDE can't easily grab.

The error in the serial terminal will be much more descriptive.

...

Mmm, I can improve this. I'll modify the library to print text to a buffer that I can read from, to make errors more understandable.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Sun Nov 17, 2019 11:14 am
by ahpd
Hi,
Thanks for following up on my issue. I have already followed the mentioned guide, and I quantized the model using the mentioned command. I'm still getting the error.
Here is the error shown in terminal:

Traceback (most recent call last):
File "<stdin>", line 12, in <module>
OSError: Unable to read model height, width, and channels!
MicroPython v1.11-omv OpenMV v3.5.0 2019-11-04; OPENMV4-STM32H743
Type "help()" for more information.
>>>

By the way, I checked the .tflite files on my PC; the model files are correct, and I can run inference when I load them with tf.lite.Interpreter. The following code returns the details of the model file:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
interpreter.get_tensor_details()

But I get the error when trying to load the same model in OpenMV.
Thanks.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Sun Nov 17, 2019 4:47 pm
by kwagyeman
Hi, the error text would be printed above that. Please look at the text that's printed out in your terminal. TensorFlow outputs an error via printf before the exception is caught by my code. The IDE has no way of knowing the TensorFlow error is not normal output text, so it will be a bit further up in the log. Please scroll up and look at the grayed-out text; it will be there.

I realize that I need to modify my code's error handling so this is more obvious.

If the error is something like "alloc tensors failed", then your model is too big.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Tue Nov 19, 2019 7:50 pm
by iabdalkader
ahpd wrote:
Sun Nov 17, 2019 11:14 am
OSError: Unable to read model height, width, and channels!
You might want to double-check that the model is not corrupted; if you reset the camera after copying a file without safely removing the disk (eject/unmount), files get corrupted.

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Sat Nov 23, 2019 9:41 am
by ahpd
Hi,
The grayed-out error says: Input model data type should be 8-bit quantized!
However, I performed the quantization by following https://www.tensorflow.org/lite/convert/quantization, which quantizes the model to 8 bits.
Thanks.
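
For context on what "8-bit quantized" means at the tensor level: post-training quantization maps each float to a uint8 code through an affine scale/zero-point pair stored in the model. A minimal sketch of that arithmetic, with made-up example values (not taken from any particular model):

```python
def quantize(x, scale, zero_point):
    # Affine quantization: real value -> uint8 code, clamped to [0, 255]
    q = round(x / scale) + zero_point
    return max(0, min(255, q))

def dequantize(q, scale, zero_point):
    # Inverse mapping: uint8 code -> approximate real value
    return (q - zero_point) * scale
```

If the converter quantized the weights but left the model's input/output tensors as float32 rather than this uint8 representation, that would explain the "should be 8-bit quantized" error above.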

Re: Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Posted: Sat Nov 23, 2019 10:57 am
by kwagyeman
Can you verify that your input is an 8-bit uint or int and your output is an 8-bit uint or int? If you are getting that error, this is probably not the case.