Error while using tf.load on custom tflite model

I own an OpenMV CAM H7

I built and trained a CNN model using Keras and converted it to tflite using the following code:

model = tf.keras.models.load_model('/content/model1_954.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_quant_model = converter.convert()

I copied it to the 32 GB SD card in the camera, but when I load the model using the tf.load() function, I get the following error:

OSError: tensorflow/lite/micro/kernels/fully_connected.cc Hybrid models are not supported on TFLite Micro.
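A "hybrid" model here means dynamic-range quantization: the weights are stored as int8 but the activations stay float32, and TFLite Micro's kernels refuse that combination. One way to see this on the desktop is to inspect the tensor dtypes of the converted model; this is a minimal sketch using a stand-in Keras model (the layer sizes are placeholders, not the model from the question):

```python
import numpy as np
import tensorflow as tf

# Stand-in model, large enough that the converter actually quantizes the weights
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation='relu'),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range ("hybrid") quantization
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
dtypes = {t['dtype'].__name__ for t in interpreter.get_tensor_details()}
print(dtypes)  # a mix of float32 (activations) and int8 (weights) indicates a hybrid model
```

If the set contains both float32 and int8, the model is hybrid and will be rejected by TFLite Micro's fully-connected kernel.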

Please help me resolve this issue. It is a basic classification model, and I need it to run on this camera module.

As the error says, it’s not supported.

Please use Edge Impulse to train TensorFlow models. They have the complete tool chain setup to make this easy.

TFLite Micro doesn’t support everything that desktop TFLite supports. If you look at the code in that file, you may be able to determine which layer is the problem.

I have found the solution. The code I posted earlier only quantizes the weights, not the input/activation tensors (note that `converter.target_ops` is not a valid converter attribute; the correct one is `converter.target_spec.supported_ops`, so that line had no effect).
The following approach can be used to fully quantize the model, but it requires a representative dataset; more details here: Post-training quantization | TensorFlow Lite
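A full-integer quantization sketch, following the TensorFlow Lite post-training quantization guide. The tiny inline model, layer sizes, and random calibration data are placeholders; in practice you would load model1_954.h5 and yield a few hundred real samples from your training set:

```python
import numpy as np
import tensorflow as tf

# Stand-in classifier; replace with tf.keras.models.load_model('model1_954.h5')
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(4, activation='softmax'),
])

# The converter uses these samples to calibrate activation ranges
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to int8 builtins so no float (hybrid) kernels remain in the model
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_quant_model = converter.convert()
```

With the representative dataset and int8 input/output types set, every tensor in the model is quantized, which is what TFLite Micro expects.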

Thanks

Glad to hear it’s working!

Thank you so much. This method works, too.

I also added the network created by STM32Cube.AI to the firmware. But now there are sometimes NaN, None, and unreadable values in the network’s output. Did you have the same problem?

Which quantization method did you choose?

Did you ask me, or someone else?

There’s a working example using Keras here: