Process for building a custom model, training it, and deploying it to the OpenMV Cam H7

Discussion related to "under the hood" OpenMV topics.

Postby jcp13 » Fri May 10, 2019 5:44 pm

Greetings,

I am intrigued by the OpenMV Cam H7. I would like to build my own model, train it, and deploy it to the OpenMV Cam H7. What would be the recommended process for doing so?

Thank you in advance,

JCP13

Postby kwagyeman » Sat May 11, 2019 3:43 pm

Hi, we are about to release TensorFlow Lite support for the system. Once this is done you can use TensorFlow for all of this.

I just have to write some examples on how to train and port a model, and then I can do the release.
Nyamekye,
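
For a rough picture of what that train-and-port flow looks like, a minimal sketch is below, assuming a recent TensorFlow with the Keras and TFLiteConverter APIs. The layers, dataset, and file names are placeholders, not the official OpenMV example.

import tensorflow as tf

# Placeholder data: 100 random 96x96 grayscale images with 2 classes.
train_x = tf.random.uniform((100, 96, 96, 1))
train_y = tf.random.uniform((100,), maxval=2, dtype=tf.int32)

# Tiny model built from the kinds of layers a small MCU can handle.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.DepthwiseConv2D(3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_x, train_y, epochs=5)

# Export a .tflite flatbuffer (quantization is discussed later in the thread).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open('model.tflite', 'wb') as f:
    f.write(converter.convert())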

Postby jcp13 » Mon May 13, 2019 6:14 pm

Great! Thank you :P

By the way, do you have a rough ETA?

Cheers.

Postby kwagyeman » Mon May 13, 2019 8:16 pm

Not really. I can give you a link to the branch the code is on. I don't have a small model to test with yet, so it's not working as far as I know. I was trying to get the initial work with MobileNet operational, but the memory manager used by Google is very inefficient, and they are rewriting it now to allow MobileNet to run on the H7.

Also, I'm going on a retreat this Friday through Sunday, so I can get back on this next week. I can give you the branch the code is on in the meantime. I believe I'm done with it.
Nyamekye,

Postby jcp13 » Tue May 14, 2019 12:21 am

Again, thank you for the prompt reply. Can you provide the code branch?

Thank you.

Postby kwagyeman » Tue May 14, 2019 12:36 am

It's here: https://github.com/kwagyeman/openmv/tree/kwabena/add_tf

There's a new tf module. It runs TensorFlow Lite models that use depthwise convolutions and fully connected layers. ReLU and pooling are part of the depthwise layers.

I first have to write an example of how to train a model and convert it, plus example scripts, before I can release this.
Nyamekye,
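
For anyone following along, a sketch of what a script using that tf module might look like on the camera is below. The load()/classify() calls are assumed from the branch above and the existing OpenMV example style; the final released API could differ, and "model.tflite" is a placeholder file copied to the camera's flash or SD card.

# Sketch only: assumes the tf module exposes load() and classify()
# roughly as in the add_tf branch; the released API may differ.
import sensor, time, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

net = tf.load("model.tflite")  # quantized flatbuffer on flash/SD card

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # classify() runs the network on the image and returns result objects.
    for obj in net.classify(img):
        print("Scores:", obj.output())
    print(clock.fps(), "fps")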

Postby jcp13 » Tue May 14, 2019 1:04 am

Sweet! I can't wait to try out your example 8-)

Postby kwagyeman » Wed May 15, 2019 12:42 am

Hi, I mentioned this in another post, but if you could train and quantize a tflite network, it would accelerate me on this. I have a heavy email load right now which is preventing dev work.
Nyamekye,

Postby ahpd » Fri Jul 26, 2019 10:14 am

Hi,
Has TensorFlow Lite support been completed?
If yes, how do I find it? If not, when will it be released?
Thanks.

Postby kwagyeman » Fri Jul 26, 2019 2:38 pm

The port has been completed. It will be released in the next firmware release. I was busy having a social life for a while, but I'm putting more time into OpenMV development again now.
Nyamekye,

Postby jcp13 » Mon Sep 09, 2019 3:06 pm

Hi,

I hope all is well. Do you have any updates on this request?

Thank you.

Postby kwagyeman » Tue Sep 10, 2019 12:19 am

Ibrahim and I are finally working on the firmware after a long while. I have to finish up DRAM support, and then this will be added to the firmware.
Nyamekye,

Postby jcp13 » Thu Sep 12, 2019 8:00 pm

Thank you for the reply.
Question: I would like to use the FLIR Lepton camera for object detection. Can I leverage a pre-trained model, do I have to build one from scratch, or can I do transfer learning with a pre-trained model?

Thanks.

Postby kwagyeman » Thu Sep 12, 2019 10:29 pm

You basically have to do everything from scratch right now. We will be offering TensorFlow support very soon, but the tooling will still be up to you.
Nyamekye,
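
For what it's worth, the desktop-side transfer-learning part is the easy bit; the hard part is getting the result small and 8-bit quantized so it fits on the H7. A rough Keras sketch of the usual pattern is below. The input size, alpha, and class count are placeholders, and since MobileNet expects 3-channel input, single-channel Lepton frames would have to be stacked to three channels (or the base trained from scratch on one channel).

# Rough transfer-learning sketch (not an OpenMV-specific recipe).
import tensorflow as tf

# Pre-trained MobileNet feature extractor; 128x128x3 and alpha=0.25 are
# placeholder choices that keep the network small.
base = tf.keras.applications.MobileNet(
    input_shape=(128, 128, 3), include_top=False,
    weights='imagenet', alpha=0.25)
base.trainable = False  # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation='softmax'),  # placeholder class count
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(thermal_images, labels, epochs=...)  # train only the new head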

Postby uraibeef » Tue Oct 22, 2019 3:50 am

How do I find a tutorial about this with TensorFlow Lite? Thank you.

Postby kwagyeman » Tue Oct 22, 2019 10:12 am

Hi, we won't be doing a tutorial for TensorFlow Lite. We've found that when folks need help training models, they can't actually do it. We instead plan to build a system into the IDE which will make this easier. Finding time to work on all of this is challenging, however.
Nyamekye,

Postby ahpd » Sat Nov 16, 2019 10:47 am

Hi,
I have updated my IDE and firmware to the latest versions to run tflite models on the OpenMV H7.
However, I am unable to load a tflite file.
1. Converting a Keras model in h5 format to quantized tflite throws an error: OSError: Unable to read model height, width, and channels!
2. Converting a model in pb format using the toco command, while explicitly specifying the height, width, and channels, again causes an error: OSError: Unable to read model height, width, and channels!
3. Using TensorFlow version 2 to create a model, saving it as a SavedModel directory, and converting it to a quantized tflite causes the camera to reset.
It would be great if you could explain how to run a custom model on OpenMV using the tf library.
Thanks.
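
In case it helps anyone hitting the same errors: one recipe that typically yields a fully 8-bit quantized flatbuffer is TF 2.x post-training integer quantization with a representative dataset, sketched below. The representative data and input shape are placeholders, and the exact attribute names and supported input/output types vary a little between TensorFlow versions.

# Post-training full-integer quantization sketch (TF 2.x).
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a few hundred real samples, scaled like the training data.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # 'model' from training
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force every op, plus the input and output tensors, to 8-bit integers.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_quant.tflite', 'wb') as f:
    f.write(converter.convert())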

Postby kwagyeman » Sat Nov 16, 2019 1:33 pm

Hi, I can't say why your model doesn't work.

I've run the code on MobileNet and the person detector model from Google.

Did you follow this tutorial? https://github.com/tensorflow/tensorflo ... a_model.md

In particular, you need to pass our code a quantized tflite flatbuffer model: https://www.tensorflow.org/lite/microco ... conversion

Our code then does the following checks on the model:

https://github.com/openmv/tensorflow-li ... tf.cc#L107

...

When you get the "Unable to read model height, width, and channels!" error, can you check the terminal and grab the text printed there? You should notice some extra text printed in the terminal, which is the true reason the model could not be loaded. This text will not be displayed in the popup box, however. TensorFlow has its own debug output channel which the IDE can't easily grab.

The error in the serial terminal will be much more descriptive.

...

Mmm, I can improve this. I'll modify the library to print text to a buffer that I can read in one go, to make errors more understandable.
Nyamekye,

Postby ahpd » Sun Nov 17, 2019 11:14 am

Hi,
Thanks for following up on my issue. I had already followed the guide you mentioned, and I quantized the model using the mentioned command. I am still getting the error.
Here is the error shown in terminal:

Traceback (most recent call last):
File "<stdin>", line 12, in <module>
OSError: Unable to read model height, width, and channels!
MicroPython v1.11-omv OpenMV v3.5.0 2019-11-04; OPENMV4-STM32H743
Type "help()" for more information.
>>>

By the way, I checked the tflite files on a PC; the model files are correct, and I can run inference when I load the model file using tf.lite.Interpreter. The following code returns the details of the model file:

import tensorflow as tf
# Load the converted model on the PC and list its tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
interpreter.get_tensor_details()

But I get the error when trying to load the same model in OpenMV.
Thanks.

Postby kwagyeman » Sun Nov 17, 2019 4:47 pm

Hi, the error text would be printed above that. Please look at the text that's printed out in your terminal. TensorFlow outputs an error via printf before the exception is caught by my code. The IDE has no way of knowing that the TensorFlow error is not normal output text, so it will be a bit up the log. Please scroll up and look at the grayed-out text and it will be there.

I realized that I need to modify my code's error handling so this is more obvious.

If the error is something like "alloc tensors failed", then your model is too big.
Nyamekye,

Postby iabdalkader » Tue Nov 19, 2019 7:50 pm

ahpd wrote:
Sun Nov 17, 2019 11:14 am
OSError: Unable to read model height, width, and channels!
You might want to double-check that the model is not corrupted. If you reset the camera after copying a file without safely removing the disk (eject/unmount), files get corrupted.

Postby ahpd » Sat Nov 23, 2019 9:41 am

Hi,
The grayed-out error says: Input model data type should be 8-bit quantized!
However, I performed the quantization by following https://www.tensorflow.org/lite/convert/quantization, which quantizes the model to 8 bits.
Thanks.

Postby kwagyeman » Sat Nov 23, 2019 10:57 am

Can you verify that your input is an 8-bit uint or int and your output is an 8-bit uint or int? This is probably not the case if you are getting that error.
Nyamekye,
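
A quick way to check that on the PC, using the same tf.lite.Interpreter shown earlier (the file name is a placeholder):

# Verify that the converted model's input and output tensors are 8-bit
# (uint8 or int8) before copying it to the camera.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details() + interpreter.get_output_details():
    print(detail['name'], detail['shape'], detail['dtype'])
    # Expect np.uint8 or np.int8; np.float32 means the tensor was left
    # unquantized and the firmware will reject the model.
    assert detail['dtype'] in (np.uint8, np.int8)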
