Device: OpenMV N6, firmware v4.8.1, MicroPython v1.26.0-77
Issue: When I add a custom INT8-quantized YOLOv8 .tflite model to ROMFS
through the OpenMV IDE, the IDE automatically converts it to a .bin file
(3,236,416 bytes) using ST Edge AI. Calling ml.Model() on the converted
file causes a hard fault and the board resets.
The official yolov8n_192.tflite (3,283,296 bytes, kept in its original
.tflite format, not converted) loads fine with ml.Model().
My model: YOLOv8n, 10 classes, 224x224 input, full-integer INT8 quantization.
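Minimal repro I'm running on the board (file names are placeholders for my actual ROMFS entries; I'm not sure whether ml.Model() expects a bare name or a full /rom path, so both attempts are shown):

```python
import ml

# Works: the official model, stored in ROMFS as original .tflite.
model_ok = ml.Model("yolov8n_192")
print(model_ok)

# Hard faults and resets the board: my custom model, which the IDE
# converted to a .bin via ST Edge AI before writing it to ROMFS.
# "my_yolov8n_224" is a placeholder for the converted file's name.
model_bad = ml.Model("my_yolov8n_224")
print(model_bad)
```

The first print never has a chance to help debug the second call, because the fault resets the board before any traceback is produced.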
Question: How do I load a custom model that the IDE has converted to .bin
format? Alternatively, how do I stop the IDE from converting it, so the
original .tflite is kept in ROMFS?