Training and Running TF Lite Models

Around 18 months ago I tried running a TF Lite model I had trained on an OpenMV H7+ and found it didn't work.

I came to the forum asking for clarification on how the model should be quantized (int8 or uint8, for example), and what the input and output shapes should be…

There seemed to be quite a few other people asking the same questions, but the only recommendation given was to use Edge Impulse.

Has this been fixed in later versions of the firmware? Can it now run a normal TFLite model, or are there clear instructions on what needs to be done differently? The models we are making are now far too large for the free version of Edge Impulse, and the enterprise license costs are beyond our budget.

It would be great to hear this has now been resolved in some manner.

Thanks in advance.

Uh, we did fix our library in the last 18 months. It works perfectly for all models generated in Edge Impulse. As long as you limit yourself to the same operators, it should be fine.
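For context on "the same operators": Edge Impulse exports fully integer-quantized (int8) TFLite models, so a model you train yourself should go through the same kind of full-integer quantization before deployment. The sketch below shows the standard TFLite converter flow for that; the tiny Keras model and random representative dataset are placeholders, and you would substitute your own trained model and real calibration samples.

```python
import numpy as np
import tensorflow as tf

# Placeholder model -- substitute your own trained model here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Calibration samples for quantization; use real input data in practice.
    for _ in range(10):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Restrict to built-in int8 ops only -- models using other ops will
# fail to convert, which is the point for a constrained target.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# Sanity check: the quantized model's input tensor should now be int8.
interp = tf.lite.Interpreter(model_content=tflite_model)
print(interp.get_input_details()[0]["dtype"])
```

Writing `tflite_model` to a `.tflite` file then gives you something in the same format Edge Impulse produces, as long as every layer in your model maps to a built-in int8 operator.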

We fixed it 9 months ago: