I generated a transfer-learned TFLite model several months ago using Edge Impulse and it worked fine. Now I have a new one, trained on better images and generated with the same settings in Edge Impulse, but it crashes the example script tf_mobilenet_search_whole_window.py.
The terminal does not show any output, the board disconnects and shows flashing green LEDs, then reconnects to the IDE. What can I do?
The library is just bugged in the new firmware. I will have a new release by the weekend.
Downgrade back to the old firmware.
I stated in the title that I am running 4.1.1, the latest firmware that works with TFLite. The old EI model works; the new one doesn’t.
Wait, 4.1.1 was working for everyone, and then I broke it in 4.1.2, right?
We are currently investigating the reason why our workflow on EdgeImpulse (identical to the one used months ago) resulted in an invalid model, even in 4.1.1. @kurileo
I guess it may be a bug with int8 quantization, but it can be temporarily worked around by choosing a 96x96 input.
I’ve checked the old and new models with the same config and a 128x128 input, and found that Edge Impulse recently started applying int8 quantization. I’m not sure whether resize or padding ops added to the model are what caused this bug. I ran into a similar issue years ago, where adding global average pooling in pure TensorFlow led to a crash.
Besides, I noticed that MLIR was applied to the model by Edge Impulse; could you please check whether the issue is in the dialect implementation? The floating-point unit (FPU) handling may also deserve attention.
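Before pointing at the firmware, one quick host-side check (a plain-Python sketch; file names are hypothetical) is to confirm that each export is a valid TFLite flatbuffer and that the old and new files genuinely differ. TFLite flatbuffers carry the file identifier "TFL3" at byte offset 4:

```python
import hashlib

def tflite_summary(path):
    """Return (is_tflite, size_bytes, sha256_hex) for a model file.

    A TFLite flatbuffer stores the identifier b"TFL3" at byte offset 4,
    so this cheap header check catches truncated or mis-exported files
    before they ever reach the camera.
    """
    with open(path, "rb") as f:
        data = f.read()
    is_tflite = len(data) >= 8 and data[4:8] == b"TFL3"
    return is_tflite, len(data), hashlib.sha256(data).hexdigest()
```

Comparing the SHA-256 of the old and new exports tells you whether Edge Impulse actually changed the model contents, independent of what the board does with them.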
And @darrask, could you please upload a video of the blinking LED? Any signal or log would help with debugging.
Now it’s a bit confusing because with the new firmware 4.1.4, the ei-flower-openmv-v1 model that previously crashed works fine, and I cannot even reproduce the crash when downgrading to firmware 4.1.1. When it was crashing, we double-checked it with @kurileo but now it’s fine… I realize this is not very helpful.
Hi all, I finally have a break from work and will be able to focus on OpenMV. I will complete the merge of the new tensorflow code over the weekend.
Hi, has this been fixed? A model that was working for me does not work after updating to the new firmware (4.2.1).
What’s broken? The TensorFlow lib was updated. We did remove some ops, however.
So the model size was about 100 KB, and I didn’t think I had an issue with it in the past. I slimmed down the model a bit and was able to get it to work. I was a little perplexed that the H7 just rebooted instead of showing a frame buffer error message like I’ve seen before.
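A silent reboot on load is consistent with the model plus its tensor arena overflowing the board's heap. A rough host-side sanity check before copying a model over might look like this sketch (the 400 KB budget is an illustrative assumption, not the H7's actual limit):

```python
import os

# Illustrative budget only -- the real limit depends on the board,
# firmware build, and the tensor arena the interpreter allocates
# at load time, so treat this as a coarse early-warning check.
ASSUMED_BUDGET_BYTES = 400 * 1024

def fits_budget(path, budget=ASSUMED_BUDGET_BYTES):
    """Return (fits, size_bytes) for a model file against a byte budget."""
    size = os.path.getsize(path)
    return size <= budget, size
```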
Bottom line: I have a functioning model that is executing on the camera. Thanks for the quick response.
Yeah, the Google TensorFlow code is not good. It really doesn’t check array bounds correctly. There are many places in the code where it just runs past allocated RAM.
I’m having the same problem with an Edge Impulse generated MobileNet classification model. My model is only 89 KB. I created it by following the instructions in this guide. The H7 crashes and resets without any messages when it encounters the "tf.load('trained.tflite')" line.
Firmware version: 4.2.1
Any ideas how to resolve this?
Edit: I tried reverting to FW v4.1.0, and now I get an error saying
“OSError: tensorflow/lite/micro/kernels/reduce.cc Currently, only float32 input type is supported.”
I guess this ties in with what @kurileo mentioned with int8 quantization.
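For context on what int8 quantization changes: a quantized tensor stores 8-bit integers plus a scale and zero point, and kernels reconstruct real values as real = scale * (q - zero_point). A kernel built only for float32 input, like the reduce op in that error, simply has no code path for the int8 case. A minimal sketch of the mapping (the scale and zero-point values are hypothetical):

```python
def quantize(values, scale, zero_point):
    """Map float values to int8 codes, clamped to [-128, 127]."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(q_values, scale, zero_point):
    """Recover approximate floats: real = scale * (q - zero_point)."""
    return [scale * (q - zero_point) for q in q_values]
```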
Edit #2: Nevermind! I upgraded to the latest firmware release (v4.2.3) and it works now. Cheers!
Install this: Release v4.2.3 · openmv/openmv · GitHub
There are some bugs in the current release. We’ll have this automatically installed in a few days.