Failed to allocate tensors. RT1062

Hello. I have trained my own model and converted it to tflite, but when I try to deploy it from the OpenMV IDE:

import ml

try:
    model = ml.Model("model.tflite", load_to_fb=True)
except Exception as e:
    raise Exception('Failed to load "model": ' + str(e))

I get the error: Failed to allocate tensors

My camera board is the RT1062 and my model is 28 KB.

Hi, can you post the model?

Yes. model.tflite - Google Drive

Thanks

tflm_backend: tensorflow/lite/micro/kernels/cmsis_nn/fully_connected.cc Hybrid models are not supported on TFLite Micro.
tflm_backend: Node FULLY_CONNECTED (number 0f) failed to prepare with status 1

Traceback (most recent call last):
  File "<stdin>", line 18, in <module>
  File "ml/model.py", line 36, in __init__
ValueError: Failed to allocate tensors
OpenMV v4.5.9-554.g74734385.dirty; MicroPython v1.25.0-preview.479.g1a472061e; OpenMV IMXRT1060 with MIMXRT1062DVJ6A
Type "help()" for more information.
>>> 

Your model isn't fully quantized.
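
For reference, a fully quantized model needs int8 quantization parameters on every tensor (inputs, outputs, weights, and activations). Here's a minimal sketch of what that conversion looks like with the TFLite converter; the SavedModel path, input shape, and dataset generator are illustrative, so substitute your own:

import numpy as np
import tensorflow as tf

# Calibration data drives the activation ranges; use real training
# samples rather than random data in practice.
def representative_data_gen():
    for _ in range(100):
        yield [np.random.rand(1, 28).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Restrict conversion to the int8 builtin ops so it fails loudly
# instead of silently leaving float ops in the graph.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model.tflite", "wb") as f:
    f.write(converter.convert())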

Here's what the Arm Vela compiler says about your network:

C:/GitHub/openmv-ide/build/share/qtcreator/python/win/python.exe -u -m ethosu.vela --optimise Performance --system-config RTSS_HP_SRAM_OSPI --accelerator-config ethos-u55-256 --memory-mode Shared_Sram --config C:\ProgramData\OpenMV\openmvide\firmware\OPENMV_AE3\vela.ini --verbose-performance --verbose-cycle-estimate --output-dir C:\Users\kwagy\AppData\Local\Temp\OpenMVIDE-PcKhCf C:\Users\kwagy\OneDrive\Desktop\model (1).tflite
Warning: Could not read the following attributes from CAST 'onnx_tf_prefix_/l1/neuron/Cast' CastOptions field: in_data_type, out_data_type
Warning: Unsupported TensorFlow Lite semantics for FULLY_CONNECTED 'onnx_tf_prefix_/l1/synapse_ff/MatMul2'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: serving_default_input:0, onnx_tf_prefix_/l1/synapse_ff/MatMul2
Warning: Unsupported TensorFlow Lite semantics for FILL 'onnx_tf_prefix_/l1/ConstantOfShape'. Placing on CPU instead
 - Scalar Input tensors are only valid for op type: ADD, ARG_MAX, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, QUANTIZE, SPLIT, SPLIT_V, SUB
   Op has scalar input tensor(s): onnx_tf_prefix_/l1/ConstantOfShape/value
Warning: Unsupported TensorFlow Lite semantics for FULLY_CONNECTED 'onnx_tf_prefix_/l1/synapse_rec/MatMul1'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/ConstantOfShape, onnx_tf_prefix_/l1/synapse_rec/MatMul1
Warning: Unsupported TensorFlow Lite semantics for ADD 'onnx_tf_prefix_/l1/Add'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/synapse_ff/MatMul2, onnx_tf_prefix_/l1/synapse_rec/MatMul1, onnx_tf_prefix_/l1/Add
Warning: Unsupported TensorFlow Lite semantics for GATHER 'onnx_tf_prefix_/l1/neuron/Gather;assert_equal_1/Rank'. Placing on CPU instead
 - Input(s) and Output tensors must not be dynamic
   Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze'. Placing on CPU instead
 - Input(s) and Output tensors must not be dynamic
   Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for GATHER 'onnx_tf_prefix_/l1/neuron/Gather_1;assert_equal_1/Rank'. Placing on CPU instead
 - Input(s) and Output tensors must not be dynamic
   Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather_1;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze_1'. Placing on CPU instead
 - Input(s) and Output tensors must not be dynamic
   Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather_1;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for GATHER 'onnx_tf_prefix_/l1/neuron/Gather_2;assert_equal_1/Rank'. Placing on CPU instead
 - Input(s) and Output tensors must not be dynamic
   Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather_2;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze_21'. Placing on CPU instead
 - Input(s) and Output tensors must not be dynamic
   Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather_2;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for CONCATENATION 'onnx_tf_prefix_/l1/neuron/Concat'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Unsqueeze, onnx_tf_prefix_/l1/neuron/Unsqueeze_1, onnx_tf_prefix_/l1/neuron/Concat
Warning: Unsupported TensorFlow Lite semantics for FILL 'onnx_tf_prefix_/l1/neuron/ConstantOfShape'. Placing on CPU instead
 - Scalar Input tensors are only valid for op type: ADD, ARG_MAX, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, QUANTIZE, SPLIT, SPLIT_V, SUB
   Op has scalar input tensor(s): onnx_tf_prefix_/l1/ConstantOfShape/value
Warning: Unsupported TensorFlow Lite semantics for SPLIT_V 'onnx_tf_prefix_/l1/neuron/Split'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/ConstantOfShape, onnx_tf_prefix_/l1/neuron/Split
Warning: Unsupported TensorFlow Lite semantics for SQUEEZE 'onnx_tf_prefix_/l1/neuron/Squeeze_1'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Split1, onnx_tf_prefix_/l1/neuron/Squeeze_1
Warning: Unsupported TensorFlow Lite semantics for MUL 'onnx_tf_prefix_/l1/neuron/Mul_1'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Squeeze_1, onnx_tf_prefix_/l1/neuron/Sigmoid_1, onnx_tf_prefix_/l1/neuron/Mul_1
Warning: Unsupported TensorFlow Lite semantics for SQUEEZE 'onnx_tf_prefix_/l1/neuron/Squeeze_2'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Split2, onnx_tf_prefix_/l1/neuron/Squeeze_2
Warning: Unsupported TensorFlow Lite semantics for SUB 'onnx_tf_prefix_/l1/neuron/Sub'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Constant_4, onnx_tf_prefix_/l1/neuron/Squeeze_2, onnx_tf_prefix_/l1/neuron/Sub
Warning: Unsupported TensorFlow Lite semantics for MUL 'onnx_tf_prefix_/l1/neuron/Mul_2'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Mul_1, onnx_tf_prefix_/l1/neuron/Sub, onnx_tf_prefix_/l1/neuron/Mul_2
Warning: Unsupported TensorFlow Lite semantics for SQUEEZE 'onnx_tf_prefix_/l1/neuron/Squeeze'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Split, onnx_tf_prefix_/l1/neuron/Squeeze
Warning: Unsupported TensorFlow Lite semantics for MUL 'onnx_tf_prefix_/l1/neuron/Mul'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Squeeze, onnx_tf_prefix_/l1/neuron/Sigmoid, onnx_tf_prefix_/l1/neuron/Mul
Warning: Unsupported TensorFlow Lite semantics for ADD 'onnx_tf_prefix_/l1/neuron/Add'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Mul, onnx_tf_prefix_/l1/Add, onnx_tf_prefix_/l1/neuron/Add
Warning: Unsupported TensorFlow Lite semantics for ADD 'onnx_tf_prefix_/l1/neuron/Add_1'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Mul_2, onnx_tf_prefix_/l1/neuron/Add, onnx_tf_prefix_/l1/neuron/Add_1
Warning: Unsupported TensorFlow Lite semantics for SUB 'onnx_tf_prefix_/l1/neuron/Sub_1'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Add_1, Const_1, onnx_tf_prefix_/l1/neuron/Sub_1
Warning: Unsupported TensorFlow Lite semantics for GREATER 'onnx_tf_prefix_/l1/neuron/Greater'. Placing on CPU instead
 - Scalar Input tensors are only valid for op type: ADD, ARG_MAX, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, QUANTIZE, SPLIT, SPLIT_V, SUB
   Op has scalar input tensor(s): onnx_tf_prefix_/l1/ConstantOfShape/value
Warning: Unsupported TensorFlow Lite semantics for CAST 'onnx_tf_prefix_/l1/neuron/Cast'. Placing on CPU instead
 - All required operator attributes must be specified
   Op has missing attributes: in_data_type, out_data_type
Warning: Unsupported TensorFlow Lite semantics for FULLY_CONNECTED 'StatefulPartitionedCall:0'. Placing on CPU instead
 - Input(s), Output and Weight tensors must have quantization parameters
   Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Cast, onnx_tf_prefix_/p_out/synapse/MatMul_reshape, StatefulPartitionedCall:0

Network summary for model (1)
Accelerator configuration               Ethos_U55_256
System configuration                RTSS_HP_SRAM_OSPI
Memory mode                               Shared_Sram
Accelerator clock                                 400 MHz


CPU operators = 26 (100.0%)
NPU operators = 0 (0.0%)

Neural network macs                                 0 MACs/batch

Info: The numbers below are internal compiler estimates.
For performance numbers the compiled network should be run on an FVP Model or FPGA.

Network Tops/s                                    nan Tops/s

NPU cycles                                          0 cycles/batch
SRAM Access cycles                                  0 cycles/batch
DRAM Access cycles                                  0 cycles/batch
On-chip Flash Access cycles                         0 cycles/batch
Off-chip Flash Access cycles                        0 cycles/batch
Total cycles                                        0 cycles/batch

Batch Inference time                 0.00 ms,     nan inferences/s (batch size 1)

Warning: Could not write the following attributes to RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze_21' ReshapeOptions field: new_shape
Warning: Could not write the following attributes to RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze_1' ReshapeOptions field: new_shape
Warning: Could not write the following attributes to RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze' ReshapeOptions field: new_shape
Total Heap Required: 0.00%
Success - Press Ok to close the window

I tried again, making sure every tensor and operator uses int8 precision, but I still get the same error. I am using an SNN (Spiking Neural Network); I am not sure if that affects the model.
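
One way I found to double-check the quantization on the desktop before deploying, assuming the standard tf.lite.Interpreter API: every tensor should report an int8 dtype with non-zero quantization parameters, otherwise the model is still hybrid.

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Print dtype and (scale, zero_point) for every tensor; float32 dtypes
# or (0.0, 0) quantization params mean the model is still hybrid.
for detail in interpreter.get_tensor_details():
    print(detail["name"], detail["dtype"], detail["quantization"])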

firmware.zip (1.5 MB)

Load it via Tools->Run bootloader and select firmware.bin (don't load the zip itself; unzip it first).

Attached is an RT1062 binary with tflite error messages enabled. After you get the error popup, check the greyed-out text in the log to see the full error.

I’ll see if I have time to add a tool into the IDE to help debug this on the desktop so you don’t need custom firmware.

Note: an SNN is probably unsupported.

Thanks, this firmware helped me solve some errors, but now I get the error: tflm_backend: Too many buffers (max is 11)

I guess this is because my SNN model architecture has ops like RESHAPE, GREATER, and CAST.
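
If it helps to see exactly which ops ended up in the flatbuffer, one option (assuming TensorFlow 2.9 or newer) is the TFLite model analyzer:

import tensorflow as tf

# Dumps the subgraphs, op list, and tensor layout of the .tflite file,
# which shows any CAST/GATHER/FILL ops left over from the ONNX export.
tf.lite.experimental.Analyzer.analyze(model_path="model.tflite")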

That’s probably being triggered by this: tflite-micro/tensorflow/lite/micro/memory_planner/greedy_memory_planner.cc at main · tensorflow/tflite-micro · GitHub

However, I’ve never seen this error before. Not sure how you fix this.