RT1062 ml.Model() "Failed to allocate tensors" despite sufficient RAM

I’m having a problem loading a small TFLite model.

Firmware: 4.7.0
The TFLite model has:
Input dtype: <class 'numpy.uint8'>
Input shape: [1 48 48 1]
Output dtype: <class 'numpy.int8'>
Output shape: [1 2]
Model size: 10.57 KB
Total tensor memory needed: 17.18 KB
Free memory: 296768 bytes
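
For reference, here’s roughly how I’m loading it on the camera (a minimal sketch; the file name is a placeholder, and I’m assuming the `ml.Model` constructor from the OpenMV `ml` module):

```python
# Minimal repro sketch on the camera side (file name is a placeholder).
import ml

# load_to_fb=True places the model in the frame buffer instead of the MicroPython heap.
model = ml.Model("model_48x48_grayscale.tflite", load_to_fb=True)
print(model)  # never reached -- the constructor raises "Failed to allocate tensors"
```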

This model works fine with the desktop TFLite interpreter.

I’d appreciate it if you could look at the attached model and let me know what the issue is. Thanks.

model_48x48_grayscale.zip (7.6 KB)

There’s something odd with the model. When I try to run it on either the AE3 or the RT1062, I just get a soft reset while loading the model from flash, even with TFLite logging enabled.

Have you used this: https://netron.app/

When I opened the model, I saw a mix of float32, uint8, and other layer types, which aren’t supported.

Unfortunately, I can’t debug much more than this.

Thank you. I will look into the model further.

TensorFlow changed its quantization implementation in version 2.15+. A previous model created with TF 2.13 works, but the same model quantized with TF 2.17 did not work on the RT1062. Adding `converter._experimental_disable_per_channel_quantization_for_dense_layers = True` to the `tf.lite.TFLiteConverter.from_keras_model(model)` setup solved the issue.
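
For anyone hitting the same issue, here’s a minimal conversion sketch. The `model` variable and the representative dataset are placeholders for your own trained Keras model and calibration data; the key line is the per-channel flag near the end:

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: yield a few samples shaped like the real model input.
    for _ in range(100):
        yield [np.random.rand(1, 48, 48, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # `model` = your trained Keras model
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.int8
# TF 2.15+ quantizes Dense layers per-channel by default; disabling it
# restores the pre-2.15 per-tensor behavior that the RT1062 firmware accepts.
converter._experimental_disable_per_channel_quantization_for_dense_layers = True

with open("model_48x48_grayscale.tflite", "wb") as f:
    f.write(converter.convert())
```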

Thanks Kwabena for your awesome support!!!