OpenMV Firmware v4.5.6 and up TensorFlow Porting Guide

Please don’t use that firmware, it might be broken. As for quantization, you need full int8 quantization (weights and activations), not int16. If you’re using the latest TensorFlow, see this post:
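Here’s a minimal sketch of what full int8 post-training quantization looks like with the TFLiteConverter. It assumes a Keras model saved as "model.h5" and a hypothetical representative_images() generator — swap those for your own model and real samples from your dataset.

```python
import numpy as np
import tensorflow as tf

def representative_images():
    # Hypothetical placeholder: yield ~100 real float32 samples shaped like
    # your model input so the converter can calibrate activation ranges.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

model = tf.keras.models.load_model("model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
# Force full int8: weights, activations, and the input/output tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The representative dataset matters: without it the converter can’t calibrate activation ranges and will fall back to weight-only quantization, which won’t run fully in int8 on the camera.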

For ops, you can open the model in https://netron.app/ and check whether it uses any ops that aren’t supported by TFLite Micro.
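If you’d rather check from a script than from Netron, something like the sketch below should work, assuming TensorFlow 2.7+ (where tf.lite.experimental.Analyzer is available) and a converted file named "model_int8.tflite":

```python
import tensorflow as tf

# Prints the model graph, including every builtin/custom op it uses;
# compare that list against the ops registered in TFLite Micro.
tf.lite.experimental.Analyzer.analyze(model_path="model_int8.tflite")
```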

I will test the model as soon as I can.