The OpenMV Cam H7 crashes and disconnects without any error message when loading a fully integer-quantized TFLite model. The model is 172 KB. The OpenMV test code and the TFLite model are attached to aid debugging.
This is the code used to perform full integer quantization of the TensorFlow model to TFLite:
# Convert the model to TFLite with full integer quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Representative dataset used to calibrate the quantization ranges
def representative_dataset_gen():
    for images, labels in train_ds.take(1):
        yield [images]

converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_int_quant_model = converter.convert()
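For reference, full integer quantization represents each float tensor value with an int8 code through an affine mapping, real ≈ scale * (q - zero_point). A minimal sketch of that round trip (the scale and zero_point values here are made-up illustration values, not taken from the attached model):

```python
def quantize(real, scale, zero_point):
    """Map a float value to its int8 code, clamped to [-128, 127]."""
    q = round(real / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover the approximate float value from an int8 code."""
    return scale * (q - zero_point)

scale, zero_point = 0.02, -5
q = quantize(0.5, scale, zero_point)          # round(25) + (-5) = 20
restored = dequantize(q, scale, zero_point)   # 0.02 * 25 = 0.5
```

This is why the converter needs the representative dataset above: it observes real activation values to pick a scale and zero_point per tensor.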
# Save the TFLite int quant model
tfLite_dir = "C:\\Users\\ZZZ\\Model\\"
tfLite_name = "Model_Int_Quant_V2.tflite"
tfLite_path = tfLite_dir + tfLite_name
with open(tfLite_path, "wb") as f:
    f.write(tflite_int_quant_model)
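One quick sanity check before copying the model to the camera is to confirm that the file on disk is an intact TFLite FlatBuffer: the file identifier "TFL3" sits at byte offset 4, so a truncated copy or a save bug shows up immediately. A minimal sketch (the header bytes below are illustrative, not the attached model):

```python
def looks_like_tflite(data: bytes) -> bool:
    """True if the buffer carries the TFLite FlatBuffer identifier."""
    return len(data) >= 8 and data[4:8] == b"TFL3"

# A valid model begins with a 4-byte root-table offset followed by "TFL3".
header = b"\x18\x00\x00\x00TFL3"
print(looks_like_tflite(header))        # True
print(looks_like_tflite(b"truncated"))  # False
```

Reading the file back with `open(tfLite_path, "rb")` and passing the bytes to this check rules out a corrupted transfer as the cause of the crash.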
Model_Int_Quant_V2.zip (160 KB)
TFLite_Test_V1.py (617 Bytes)