TensorFlow Updates?

Hey guys,

Thanks for all the work on the OpenMV. I am working on a really cool project in which I plan to deploy an OpenMV with a custom-built TensorFlow model on a CubeSat (Cubesat? What is a CubeSat? Nanosatellite? PocketQube? | Nanosats Database) in low Earth orbit. We have already built and launched 3 such satellites, and all are functioning (https://birds3.birds-project.com/). Now we want the satellite’s camera to be a bit smarter in our future missions.

I have trained my model in TensorFlow with the Keras API and converted it to a TensorFlow Lite model. I now need to deploy that model on the OpenMV. I read here:

that the next firmware update will have TensorFlow support. However, when I checked the blog tab, it looks like the latest firmware has not been released yet. My questions would be:

  1. When will it be released? Will there be a guide for loading a custom model on the MCU?
    The reason I ask is that we need to complete the prototype by the end of September; it would be really useful if I could have a proof of concept by then.

What I can offer in return is this:
A) Space-qualify the OpenMV camera (we have all the facilities here: thermal vacuum chamber, vibration, and shock), and we can also do total ionizing dose radiation testing.
B) Write a paper that mentions OpenMV and provides an acknowledgment. I want the space community to use OpenMV in the future, much like how Arduinos and Raspberry Pis are being tested and used in space.

  2. Or is there any other way to convert the TensorFlow model to Caffe, quantize it, and then load it onto the OpenMV?

Let me know. Thanks again, big fan here.

Hi, I will merge my PR for this. Um, let me get back to you with a more detailed response. A long post requires a long response from me.

Thanks, I will be waiting

Hi Project_Sat,

The work to allow TensorFlow support on the OpenMV Cam was completed back in April. We have been working on a DRAM camera that will be really good for this. Using the internal SRAM is also possible.

Anyway, this is what I can do… I can merge my PR for this soon. Once I do that, support for the feature will be in the firmware. Then you can execute a TensorFlow Lite model that has been converted to a FlatBuffer file. There are a number of command-line steps to build a usable model file, but they are simple.

I wasn’t able to test my code without SDRAM support. So, now that we have it, I can.

As for other things you were offering, sounds good. Thanks.

Hi, I’ll be working on getting TensorFlow into the firmware release soon. Right now, the branch to merge is here:

If you rebase it on top of the main branch you should be good to go. Note that I was building MobileNet into the main firmware, which may not be the best idea. You might have to take that out.

Noted, thanks

Hi again,

I seem to be bit stuck with what to do next. So far I was able to:

  1. Train the model in TensorFlow
  2. Convert the model from TensorFlow to TensorFlow Lite
  3. Run xxd -i model.tflite > model.c
  4. Manually change the variable inside from unsigned int to const unsigned int, looking at the models trained here:
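Incidentally, steps 3 and 4 can be collapsed into one step with a short script instead of hand-editing the xxd output. A minimal sketch (the array name is just a placeholder, not anything the firmware requires):

```python
# Sketch: emit a C array the way `xxd -i` does, but already
# const-qualified, so the generated file needs no manual editing.
def to_c_array(data: bytes, name: str = "model_tflite") -> str:
    lines = []
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {chunk}")
    body = ",\n".join(lines)
    return (
        f"const unsigned char {name}[] = {{\n{body}\n}};\n"
        f"const unsigned int {name}_len = {len(data)};\n"
    )

# Usage: read the .tflite FlatBuffer and write out a C source file.
# with open("model.tflite", "rb") as f:
#     open("model.c", "w").write(to_c_array(f.read()))
```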

I looked at what you said earlier and realized I didn’t fully grasp everything, in particular what “rebase it on top of the main branch” means on GitHub.

I was wondering what my next step would be to use model.c with the STM32H7 in the OpenMV IDE. Would this only be available after the firmware is released? Is there a rough code example of the implementation that I can base my code on? I apologize if I am repeating my previous question.

Hi, the branch I pointed to has the full implementation of TensorFlow completed. You just rebase it on top of main and then the TensorFlow code will be available in the Python API. You put the network on the SD card and then load it with the TensorFlow code, and it should run.

That said, I was never able to test the code because the mobilenet model supplied by Google doesn’t fit in our RAM.

I’m starting to work on getting this back into the main OpenMV Cam firmware now. I’ve completed all the other tasks that were blocking work on this. We have a 32-bit SDRAM camera that’s more or less working now.

Thanks, will get back with the results.

Hi again,

Some updates on my progress; do let me know if I am missing something or doing something wrong. Where I need help is figuring out what my next step would be.

Step 1: Rebasing kwagyeman/openmv (branch kwabena/add_tf) on top of a fork of the main repository (openmv/openmv).

git clone --recursive https://github.com/openmv/openmv
cd openmv
git remote add tf https://github.com/kwagyeman/openmv.git
git fetch tf
git branch -a -v

Which gave me these results

 remotes/origin/HEAD -> origin/master

I then did this:

git checkout -b add_tf tf/kwabena/add_tf
git rebase master

I had one conflict, and after resolving it:

git add . 
git rebase --continue

Was then able to complete the rebase.

I am still new at this, so I am still trying to figure out what my next step would be. How can I link this with the OpenMV IDE? Do I need to rebuild from source?

On another note, how about this: I can share my CNN trained in TensorFlow and converted to TensorFlow Lite, along with the FlatBuffer file. The TensorFlow Lite model is only about 104 kB, so I am thinking it should run in SRAM. That way I can still make sure my NN runs on the OpenMV Cam, and perhaps get a basic example from you that could help me run the CNN on the module. All I need to show for my research is that the CNN runs on an MCU and does its job. That’s it; then I can proceed with my writing.
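If it helps when passing the file around: a TensorFlow Lite model is a FlatBuffer that carries the file identifier "TFL3" at byte offset 4, so a quick host-side check can confirm a file really is a .tflite model before it goes onto the SD card. A minimal sketch (the function name is my own, not part of any API):

```python
# Sketch: check the FlatBuffers file identifier ("TFL3" at bytes 4..8)
# that TensorFlow Lite models carry.
def looks_like_tflite(path_or_bytes) -> bool:
    data = path_or_bytes
    if not isinstance(data, (bytes, bytearray)):
        # Treat the argument as a file path; the header is enough.
        with open(data, "rb") as f:
            data = f.read(8)
    return len(data) >= 8 and bytes(data[4:8]) == b"TFL3"

# Usage:
# looks_like_tflite("model.tflite")
```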

Let me know your thoughts. If you want me to share the CNN, I can post it here as soon as I get the reply.

Thanks again for the time. I know you guys are busy.

Hi, just wait for me to finish the porting. I’ve already started on it and managed to get past the first set of issues with updating TensorFlow. I should be done getting the software ready to integrate into our firmware on Monday and then be done getting it merged by the end of the week. Because TensorFlow Lite for Microcontrollers has so much code in it, I have a separate repo where I compile it into a library file for integration into our firmware.

As for what you have done so far: yes, you have to compile the firmware and then flash the camera. There’s no documentation for the tf module yet, however, so you have to read the C code and fix any bugs that are there until I am finished.

Sure, will wait then.

Thanks again

Hey again,

Any updates on the implementation? I have the model ready to go.


Yes, it’s been released in the latest firmware package on GitHub. Please track the GitHub commits if you want to keep up to date on this stuff.

You have to download the latest firmware and then flash it to your camera model, along with getting the scripts to run it from GitHub. Nothing is really documented except in the scripts right now. It will take a while for us to release everything to the IDE along with documentation.

Fantastic, thanks. Will give it a shot.

Note, you need to 8-bit quantize your tflite network for it to be runnable. Google has a lot of information on this.
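For context on what the 8-bit quantization does: each float is mapped to a uint8 through a scale and a zero point, with real ≈ scale × (q − zero_point). The TensorFlow Lite converter handles this for you; this is just a sketch of the arithmetic:

```python
# Sketch of the affine mapping used by 8-bit quantization:
#   real_value ≈ scale * (quantized_value - zero_point)
def quantize(x: float, scale: float, zero_point: int) -> int:
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range

def dequantize(q: int, scale: float, zero_point: int) -> float:
    return scale * (q - zero_point)
```

The weights shrink 4x (float32 to uint8), which is what makes a net like this fit in MCU RAM, at the cost of a small accuracy drop.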

Hi kwagyeman,
I saw in a previous post that the CNN module won’t support multiple-object detection. How about TensorFlow? In my project I want to teach it about 10 simple objects initially and find the position of each object in a single frame at at least 10 FPS.

We can just do classification and image segmentation. Object detection using any of the current net architectures is not happening; they all have way too many operations. There’s no research yet into micro models except for classification, so we plan to start with that. For example, all OpenMV Cams with firmware v3.5.0 can perform person detection.

That said, object detection is not impossible. However, you’d have to make a new architecture from scratch and then train it on a dataset.