Py_tf update in development release firmware

Hi!

I noticed that a development version of the firmware has been released which includes support for generic CNNs. It might be a bit early to ask about this version as it’s still under development, but in a previous conversation you mentioned this new functionality to me, and I was eager to try it out. In my case, I’m working with the OpenMV Cam H7 Plus and have loaded the new firmware. I haven’t had much time to explore it yet, but I noticed that it doesn’t let me load any .tflite models, as I encounter the following error: AttributeError: 'module' object has no attribute 'load'.

I would like to take this opportunity to ask if there will be documentation related to the new tf update. Having an estimated date for when it might be available would be very helpful. Additionally, in the PR for the generic CNN support update, I noticed that it supports YOLOv5 models converted to tflite. Will it be possible to do the same with YOLOv8?

Thank you very much in advance, and best regards.

TensorFlow Micro support is still under heavy development at the moment, and the API and examples will both change, so I don’t recommend that you try to use it right now. I don’t have an ETA for a release, but I’m making good progress and hopefully it will be done soon. We’ll have a much cleaner API and slightly improved inference speed. FWIW, tf.load is gone; both external and built-in models are loaded with tf.Model(...) (built-in ones also return their labels: labels, model = tf.Model(..)).
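Roughly, loading will look something like this. This is only a sketch: the API is still changing, and the model path and built-in model name here are placeholders, not final names:

```python
# Sketch only -- the tf API is still under development and may change.
import tf

# External model: load a .tflite file from the filesystem (placeholder path).
model = tf.Model("/my_model.tflite")

# Built-in model: also returns its label list (placeholder name).
labels, model = tf.Model("my_builtin_model")
```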

Thank you for your response. The new version sounds very promising, and I will wait for the final release before using it. I would like to know if the current methods (tf.detect, tf.classify, tf.segment) will remain compatible in the new version, functioning as they do now. I have some applications developed using these methods and it would be very helpful to know if they will continue to work (obviously, changing from tf.load to the new tf.Model).

Best regards.

No, they were all removed and replaced with a single predict(...) function. All of the post-processing those functions used to do is now implemented in Python. Even the Micro Speech keyword module has been re-implemented in Python. The new API is easier to use, cleaner, and more flexible; it will allow you to do almost anything you can think of. There are a few more details about the changes in this PR (I will be adding to it).

Hi! The PR was merged and we’re now using the updated API. Please install the dev firmware from the IDE if you would like to test it. Here’s an example that shows how the new API can be used to do post-processing in Python:
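(Not the linked example itself — just a rough sketch of the idea. The model and labels paths are placeholders, and the exact signatures may still change while the API is in development.)

```python
# Minimal sketch of Python-side post-processing with the new API.
import sensor
import tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

labels = [line.rstrip() for line in open("/labels.txt")]
model = tf.Model("/classifier.tflite")

while True:
    img = sensor.snapshot()
    # predict() returns the raw output tensor(s); tensor 0 holds the class scores.
    scores = model.predict(img)[0]
    # Post-processing in Python: pick the highest-scoring class ourselves.
    best = max(range(len(scores)), key=lambda i: scores[i])
    print(labels[best], scores[best])
```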


Thank you very much for the heads-up and for the usage example. I already have the development version installed from the IDE, and so far it’s working on the OpenMV Cam H7 Plus. I will keep an eye out for the upcoming documentation on the methods and functions in the new ml library.

Best regards.

Quite interested in what is to come.
I currently have a very stable setup running on 4.5.5 that we are about to put into production on our machines.

I’m curious to know if I can a) continue to stay up to date with the future releases and b) get some benefit from the tf changes.

In my setup I load custom tflite files and then use the classify function on certain ROIs of frames.

  1. Will there be support from OpenMV to help with built-in models? https://wiki.st.com/stm32mcu/wiki/AI:How_to_add_AI_model_to_OpenMV_ecosystem
    I haven’t gotten around to doing this yet, but does it bring any benefits?

  2. Will we have similar functions for classify in firmware >4.5.5?

I’d like to test the dev release.
Is it enough to change to ml.load(path, load_to_fb=True) and ml.predict(img, roi=ROI)?
Will the output be an array with the prediction confidence for each category as a 0-1 float?

Hi,

With the new system, you can run ANY TensorFlow model on the camera. We removed all the previous limits; the only constraint now is how much RAM you have. On the H7 Plus and RT1062 this really opens up what is supported. On the H7 and Nicla you’re going to be limited to FOMO models and classification models, as they are the only things that fit in flash and in the limited heap.

We made the heap 4MB larger on the H7 Plus and RT1062. This allows for much larger models to be loaded and stored.

As for classify: predict(img) will just output a list of values that represent the output tensors of the model. Classification models typically have one output tensor that’s 1x1x1xC, so you’ll get a list that looks more or less the same as before. However, we support multi-output models now, so you need to index into output tensor 0 first to get the list you typically had before.
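In other words, continuing the sketch from the earlier example (model, img, and labels as defined there; this is illustrative, not final API):

```python
outputs = model.predict(img)   # one entry per output tensor
scores = outputs[0]            # classification: the single 1x1x1xC tensor of class scores
for label, score in zip(labels, scores):
    print(label, score)
```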

Anyway, the examples have been updated online. See here for how image classification has changed: openmv/scripts/examples/03-Machine-Learning/00-TensorFlow/tf_image_classification.py at master · openmv/openmv (github.com)

That said, I plan to send one more PR in before the API is finalized so that we can handle multi-input models, too. This way, we are future-proofed. This will be another breaking change, but we should be stable after this.

Note, once this is all released it means you can run models like this on the camera: “Embedded Edge Intelligence with Infineon New Products and Imagimob Studio” (tinyml.org) - You’ll likely get 1-2 FPS with the H7 Plus and RT1062. But, they will finally work onboard. I’ll probably be integrating a lot of these models into the IDE for easy access along with creating example scripts that have post processing support for various models.

Thanks for the explanation. That indeed opens up a lot of applications.

For mine I just need to make a small update for the breaking change.

Is there a big advantage to integrating the model, i.e. compiling it into the firmware? Does the performance increase?

This procedure is quite old, is it still valid?
https://wiki.st.com/stm32mcu/wiki/AI:How_to_add_AI_model_to_OpenMV_ecosystem

There is for boards that don’t have enough RAM to load the model. Built-in models are stored in flash and their data is read in place. However, all models (built-in or loaded into RAM) still need some RAM to run inference (memory for the interpreter itself and for TensorFlow’s scratch buffer, aka the arena).
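Roughly, the trade-off looks like this. Sketch only: the ml module name, the labels/model return for built-ins, and the load_to_fb flag are taken from earlier in this thread, and the built-in model name is a placeholder:

```python
import ml  # the new ml module discussed earlier in this thread

# Built-in model: weights stay in flash and are read in place, so only the
# interpreter state and TensorFlow's scratch arena come out of the heap.
labels, builtin_model = ml.Model("builtin_model_name")   # placeholder name

# Model loaded from the filesystem: the .tflite data is copied into RAM
# (or into the frame buffer with load_to_fb=True) before inference.
file_model = ml.Model("/my_model.tflite", load_to_fb=True)
```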

This is indeed very old and only applies to the ST Cube NN/AI stuff. I’m not sure if it still works; we don’t maintain or test it with releases. If it’s still useful to someone, we can add it to the CI workflow to make sure it builds.

We mainly support embedding TensorFlow Lite (.tflite) models. The way to do that is very easy, see:

I’m curious to know if I can a) continue to stay up to date with the future releases and b) get some benefit from the tf changes.

If you want to keep using a stable older firmware release that works for you, that’s very reasonable. The bare minimum you get from the updated firmware is faster inference.