Loading a tflite model into SDRAM instead of SRAM - H7 Plus
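For anyone landing here via the title: the OpenMV `tf` module can ask the firmware to place a model in the frame buffer stack, which on the H7 Plus lives in external SDRAM rather than SRAM. A minimal sketch (runs on the camera, not a PC; the model filename is an assumption, and the `load_to_fb` argument is assumed to be available in your firmware version):

```python
# OpenMV MicroPython sketch -- load a tflite model into the frame buffer
# (SDRAM on the H7 Plus) instead of heap/SRAM via load_to_fb=True.
import sensor
import tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

# "model.tflite" is a hypothetical filename on the flash filesystem.
net = tf.load("model.tflite", load_to_fb=True)

img = sensor.snapshot()
for obj in net.classify(img):
    print(obj.output())
```

Note that while the model occupies the frame buffer, that memory is unavailable for other frame-buffer allocations.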

I’ve found the latest branch Edge Impulse supports, ei-v2.5.3. It has a commit from 10 hours ago. So I just need to get guidance from EI before updating.

I have no idea if this will give you the op you need, but I’ll try to do the update.

Hey @kwagyeman

Any updates? If there is anything we can do to help with the solution process (together with @BehicMV), we’d be happy to lighten the load for you.

Oh, sorry, I changed work streams on this. I can show you how to compile the firmware though and update to the latest. Will post tomorrow. The PR for the fixes for some layers was merged.

Can’t wait for it. Thank you so much for everything you’ve done so far, we really appreciate it.

Hey, @kwagyeman

Sorry for continually disturbing you, but I am really excited about the product I am about to finish with OpenMV. It’d be amazing to be able to compile the firmware with the needed ops, as you said.

Anything new so far?

Hi @sencery, we just got the sample prototypes for the new system back and I am focused on that right now.

I already updated the firmware: Releases · openmv/openmv · GitHub

It includes all the ops we had available that matched some of what you wanted. As for updating to the latest version of the SDK, I can do this, but maybe at the end of the week.

I have to focus on the new product currently.

Hi, this PR updates to ei-2.5.3. It’s the latest from Edge Impulse: imlib/libtf: Update to the latest tensorflow API. by kwagyeman · Pull Request #1848 · openmv/openmv · GitHub

I don’t have any more fixes I can do for you. I enabled all the ops I could that you needed and updated to the latest stable version of the library.

Going with the “latest” TensorFlow for Microcontrollers library from Google would be… not wise, as it’s likely chock-full of bugs. The Edge Impulse branch works.

Hi, @kwagyeman
Thank you for your fixes. I built the PR and flashed it to the OpenMV H7 Plus board. My model has a “Dequantize” layer. The model still couldn’t be run, even with your last PR. Am I missing something, or what else can I do?

Thank you for your attention…

Hi, I updated our library to the latest of what EdgeImpulse provides for TensorFlow for Microcontrollers. The op you wanted was uncommented. So… I don’t really have any other knobs I can turn.

This is what is enabled:

Here’s the registration of the ops: tensorflow/tensorflow/lite/kernels/builtin_op_kernels.h at be8b5a90355f3a1de83a77e259eea9da7839d08d · openmv/tensorflow · GitHub

You can use github to follow the links to determine the op version: tensorflow/tensorflow/lite/kernels/dequantize.cc at be8b5a90355f3a1de83a77e259eea9da7839d08d · openmv/tensorflow · GitHub

Anyway, not sure why you have that op… We do all math in 8-bit signed values. You shouldn’t need any float layers anywhere.
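For context on what that op actually does: a TFLite “Dequantize” layer maps int8 values back to floats via `real = scale * (q - zero_point)`, which is why it only appears when some part of the graph runs in float. A minimal sketch with hypothetical scale/zero-point values:

```python
# Sketch of the TFLite Dequantize op: real = scale * (q - zero_point).
# The scale and zero_point values below are illustrative, not from a real model.

def dequantize(q_values, scale, zero_point):
    """Map int8 quantized values back to floating point."""
    return [scale * (q - zero_point) for q in q_values]

# With zero_point=-128, the int8 range [-128, 127] maps onto [0.0, 1.0].
print(dequantize([-128, 0, 127], scale=1 / 255, zero_point=-128))
```

A fully int8-quantized model (int8 inputs, outputs, and weights) avoids this op entirely, which matches the "all math in 8-bit signed values" design above.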

Hi, the problem might be caused by my firmware-building process (maybe I’m still using the old version). Could you tell me how I can build this firmware?

Also, I actually eliminated the “Dequantize” layer, but I need all the ops anyway. I was trying to verify the new version of TensorFlow with my current test model, which contains “Dequantize”, because I deleted the other incompatible models. I have to train them again after I enable all the ops on my OpenMV board.

Thank you for your attention…

The “all ops” set is just all the ops.

As for building the code: you just run the make.py script at the top level of that repo, copy the library file into the OpenMV firmware repo under omv/lib/TensorFlow, and then build the OpenMV Cam source.
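The steps above can be sketched as shell commands. This is a rough outline under stated assumptions: the repo URLs, output library filename, relative paths, and make target are guesses based on the description, not verified against the build system.

```shell
# Sketch of the build steps described above; paths and filenames are assumptions.

# 1. Build the TensorFlow for Microcontrollers library from OpenMV's fork.
git clone https://github.com/openmv/tensorflow.git
cd tensorflow
python make.py                    # top-level build script mentioned above

# 2. Copy the built library into the OpenMV firmware repo.
#    <built-library-file> is a placeholder; use whatever make.py produces.
cp <built-library-file> ../openmv/src/omv/lib/TensorFlow/

# 3. Build the OpenMV Cam firmware (target name for the H7 Plus assumed).
cd ../openmv/src
make TARGET=OPENMV4P
```

Building the firmware also requires the ARM embedded toolchain (arm-none-eabi-gcc) on the PATH.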


Thank you for your attention. I will try it.