I generated a TFLite transfer-learned model several months ago using EdgeImpulse and it worked fine. Now I have a new one using better training images, generated using the same settings in EdgeImpulse, but it crashes the example script tf_mobilenet_search_whole_window.py
The terminal shows no output; the board disconnects and flashes its green LEDs, then reconnects to the IDE. What can I do?
Yes.
We are currently investigating the reason why our workflow on EdgeImpulse (identical to the one used months ago) resulted in an invalid model, even in 4.1.1. @kurileo
I guess it may be a bug with int8 quantization, but it can be temporarily worked around by choosing a 96x96 input.
I've checked the old and new models with the same config with 128x128 input and found that Edge Impulse recently started applying int8 quantization. I'm not sure whether resize or padding ops were added to the model and caused this bug. I hit a similar issue years ago, where adding global average pooling in pure TensorFlow led to a crash.
Besides, I noticed that MLIR was applied to the model by Edge Impulse; could you please check whether this is an issue with the dialect implementation? The floating-point unit (FPU) may also deserve attention.
Now it's a bit confusing, because with the new firmware 4.1.4 the ei-flower-openmv-v1 model that previously crashed works fine, and I cannot even reproduce the crash when downgrading to firmware 4.1.1. When it was crashing, we double-checked it with @kurileo, but now it's fine… I realize this is not very helpful.
So the model was about 100 KB, and I didn't think I had an issue with it in the past. I slimmed the model down a bit and was able to get it to work. I was a little perplexed that the H7 just rebooted instead of showing the frame-buffer error message I've seen before.
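Since model size seems to matter here, a quick desktop-side sanity check before copying the file to the camera can rule out an obviously bad or oversized export. This is a hypothetical helper, not part of the OpenMV tooling; the 400 KB threshold is an arbitrary assumption, and the only hard fact used is that TFLite FlatBuffers carry the file identifier "TFL3" at byte offset 4.

```python
def check_tflite(path, max_bytes=400 * 1024):
    """Rough pre-deployment check of a .tflite file (illustrative only)."""
    with open(path, "rb") as f:
        data = f.read()
    # TFLite FlatBuffer files store the identifier "TFL3" at offset 4.
    if data[4:8] != b"TFL3":
        return "not a TFLite flatbuffer"
    # Threshold is a made-up guess; the real limit depends on board RAM.
    if len(data) > max_bytes:
        return "model may be too large (%d bytes)" % len(data)
    return "ok (%d bytes)" % len(data)
```

It won't catch unsupported ops or quantization mismatches, but it does catch a truncated download or a wrong file, which produce equally silent failures.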
Bottom line: I have a functioning model that is executing on the camera. Thanks for the quick response.
Yeah, the Google TensorFlow code is not good. It really doesn't check array bounds correctly. There are many places in the code where they just run past allocated RAM.
I'm having the same problem with an Edge Impulse generated MobileNet classification model. My model is only 89 KB. I created it by following the instructions in this guide. The H7 crashes and resets without any message when it hits the `tf.load("trained.tflite")` line.
Firmware version: 4.2.1
Any ideas how to resolve this?
Edit: I tried reverting to FW v4.1.0, and now I get an error saying:
`OSError: tensorflow/lite/micro/kernels/reduce.cc Currently, only float32 input type is supported.`
I guess this ties in with the int8 quantization issue @kurileo mentioned.
Edit #2: Nevermind! I upgraded to the latest firmware release (v4.2.3) and it works now. Cheers!