Hi everyone,
I am working with the OpenMV Cam RT1062 and I have a few questions regarding the deployment of neural networks on this specific hardware:
- Does the RT1062 support only .tflite models, or is there support for other formats?
- Regarding quantization: does the model strictly need to be fully quantized to int8, or can the camera also run float32 models efficiently?
- If float32 is supported, is there a significant performance penalty compared to int8 on the i.MX RT1062?
I am currently training a character recognition model and want to make sure I'm exporting it in the best format for this board.
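For context, this is roughly how I'm planning to export the model with full int8 post-training quantization using the standard TFLiteConverter API. The tiny Keras model, input shape, and random calibration data below are just placeholders for illustration, not my actual training setup:

```python
import numpy as np
import tensorflow as tf

# Placeholder stand-in for my character classifier (illustrative only)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(36, activation="softmax"),  # e.g. 26 letters + 10 digits
])

def representative_data():
    # Calibration samples; in practice these would be real training images
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full int8 quantization, including input/output tensors
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("char_model.tflite", "wb") as f:
    f.write(tflite_model)
```

Is this the recommended export path for the RT1062, or should I skip the int8 conversion and try a float32 .tflite instead?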
Thanks in advance for the help!