Model compatibility for OpenMV Cam RT1062: .tflite formats and quantization (int8 vs float32)

Hi everyone,

I am working with the OpenMV Cam RT1062 and I have a few questions regarding the deployment of neural networks on this specific hardware:

  1. Does the RT1062 support only .tflite models, or is there support for other formats?

  2. Regarding quantization: Does the model strictly need to be fully quantized to int8, or can the camera also run float32 models efficiently?

  3. If float32 is supported, is there a significant performance penalty compared to int8 on the i.MX RT1062?

I am currently training a character recognition model and want to ensure I’m exporting the best format for this board.

Thanks in advance for the help!

Does the RT1062 support only .tflite models, or is there support for other formats?

Just .tflite. That's because we use Google's TensorFlow Lite for Microcontrollers runtime.

Does the model strictly need to be fully quantized to int8, or can the camera also run float32 models efficiently?

It needs to be mostly INT8. You can run full float32 models, but they will be much slower.
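For reference, a full-integer export with TensorFlow's converter looks roughly like this. This is a minimal sketch, not your actual model: the tiny network, the 28x28 input size, the 36-class output, and the random calibration data are all placeholder assumptions; in practice the representative dataset should be real samples from your training set so the quantization ranges are calibrated correctly.

```python
import numpy as np
import tensorflow as tf

# Placeholder stand-in for your character recognition model.
inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(8, 3, activation="relu")(inputs)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(36, activation="softmax")(x)  # e.g. A-Z + 0-9
model = tf.keras.Model(inputs, outputs)

def representative_dataset():
    # Calibration samples: use real training images here, not random data.
    for _ in range(10):
        yield [np.random.rand(1, 28, 28, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so every op runs as int8 on the board.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("trained.tflite", "wb") as f:
    f.write(tflite_model)
```

Setting `supported_ops` to `TFLITE_BUILTINS_INT8` makes the converter fail loudly if any op can't be quantized, which is what you want before copying the file to the camera.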

If float32 is supported, is there a significant performance penalty compared to int8 on the i.MX RT1062?

Yes, there's a significant penalty; float32 models run much slower than int8 ones.
