I’m deploying a custom INT8 TFLite classifier on the OpenMV N6 (firmware 4.8.1, the latest) and hitting a hard fault / DFU reboot loop when I call ml.Model() on my model in ROMFS, followed by a “database error”. The built-in ROMFS models load fine, so the problem appears to be specific to my model. For context, I uploaded and committed the model to the N6 using the IDE’s Edit ROMFS feature. I’m seeing the same issue with models exported from Edge Impulse as well.
Model details: 3-class classifier, 64×64 grayscale input, INT8 quantized
Architecture: Conv2D(8) → MaxPooling2D → Conv2D(16) → GlobalAveragePooling2D → Dense(3)
Original .tflite size: 5.0 KB
Size after ROMFS commit: 18,480 bytes (the IDE’s NPU conversion appears to have succeeded)
Can you tell what might be causing this?
Is there a list of supported ops for NPU conversion on the N6?
Is there a way to run a ROMFS model on the CPU TFLite interpreter instead of the NPU, as a fallback?
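For reference, here is roughly what my loading code looks like, along with a small pure-Python helper for turning the raw INT8 outputs back into float scores. The model name, scale, and zero point below are placeholders for illustration, not values from my actual model:

```python
# Minimal sketch; names and quantization parameters are assumptions.
# The helper converts raw INT8 tensor values back to float scores
# using the output tensor's scale and zero point.

def dequantize(values, scale, zero_point):
    """Map raw INT8 values v to float via (v - zero_point) * scale."""
    return [(v - zero_point) * scale for v in values]

# Example: a typical quantized softmax output with scale 1/256, zero point -128.
scores = dequantize([-128, 0, 127], 1 / 256, -128)
print(scores)  # [0.0, 0.5, 0.99609375]

# On the camera (OpenMV MicroPython), the crash happens at the load step,
# roughly like this ("my_classifier" is a placeholder for the ROMFS name):
#   import ml
#   model = ml.Model("my_classifier")  # hard fault / DFU reboot occurs here
```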
You need to run Tools -> Install the Latest Dev Firmware Release for the N6 in OpenMV IDE. Then it will work. We are getting very close to releasing v5.0.0.
Thanks for the reply. I believe I had already installed the latest dev firmware, as you suggested, before loading the models through ROMFS. It shows as 4.8.1 (latest) in the screenshot below:
But after rebooting, it reverted to version 4.8.1, and I started hitting the hard fault and DFU reboot again, now even when loading the built-in ROMFS models, as shown below:
I think I installed the wrong IDE, i.e., the factory version. I uninstalled it, reinstalled the user version, and reloaded everything; loading custom models works fine now. I no longer need the appointment for this issue, as it is resolved. Can you please cancel the 3 PM PST appointment I booked for today?