Here’s what the Arm Vela compiler says about your network:
C:/GitHub/openmv-ide/build/share/qtcreator/python/win/python.exe -u -m ethosu.vela --optimise Performance --system-config RTSS_HP_SRAM_OSPI --accelerator-config ethos-u55-256 --memory-mode Shared_Sram --config C:\ProgramData\OpenMV\openmvide\firmware\OPENMV_AE3\vela.ini --verbose-performance --verbose-cycle-estimate --output-dir C:\Users\kwagy\AppData\Local\Temp\OpenMVIDE-PcKhCf C:\Users\kwagy\OneDrive\Desktop\model (1).tflite
Warning: Could not read the following attributes from CAST 'onnx_tf_prefix_/l1/neuron/Cast' CastOptions field: in_data_type, out_data_type
Warning: Unsupported TensorFlow Lite semantics for FULLY_CONNECTED 'onnx_tf_prefix_/l1/synapse_ff/MatMul2'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: serving_default_input:0, onnx_tf_prefix_/l1/synapse_ff/MatMul2
Warning: Unsupported TensorFlow Lite semantics for FILL 'onnx_tf_prefix_/l1/ConstantOfShape'. Placing on CPU instead
- Scalar Input tensors are only valid for op type: ADD, ARG_MAX, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, QUANTIZE, SPLIT, SPLIT_V, SUB
Op has scalar input tensor(s): onnx_tf_prefix_/l1/ConstantOfShape/value
Warning: Unsupported TensorFlow Lite semantics for FULLY_CONNECTED 'onnx_tf_prefix_/l1/synapse_rec/MatMul1'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/ConstantOfShape, onnx_tf_prefix_/l1/synapse_rec/MatMul1
Warning: Unsupported TensorFlow Lite semantics for ADD 'onnx_tf_prefix_/l1/Add'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/synapse_ff/MatMul2, onnx_tf_prefix_/l1/synapse_rec/MatMul1, onnx_tf_prefix_/l1/Add
Warning: Unsupported TensorFlow Lite semantics for GATHER 'onnx_tf_prefix_/l1/neuron/Gather;assert_equal_1/Rank'. Placing on CPU instead
- Input(s) and Output tensors must not be dynamic
Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze'. Placing on CPU instead
- Input(s) and Output tensors must not be dynamic
Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for GATHER 'onnx_tf_prefix_/l1/neuron/Gather_1;assert_equal_1/Rank'. Placing on CPU instead
- Input(s) and Output tensors must not be dynamic
Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather_1;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze_1'. Placing on CPU instead
- Input(s) and Output tensors must not be dynamic
Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather_1;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for GATHER 'onnx_tf_prefix_/l1/neuron/Gather_2;assert_equal_1/Rank'. Placing on CPU instead
- Input(s) and Output tensors must not be dynamic
Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather_2;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze_21'. Placing on CPU instead
- Input(s) and Output tensors must not be dynamic
Op has dynamic tensor(s): onnx_tf_prefix_/l1/neuron/Gather_2;assert_equal_1/Rank
Warning: Unsupported TensorFlow Lite semantics for CONCATENATION 'onnx_tf_prefix_/l1/neuron/Concat'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Unsqueeze, onnx_tf_prefix_/l1/neuron/Unsqueeze_1, onnx_tf_prefix_/l1/neuron/Concat
Warning: Unsupported TensorFlow Lite semantics for FILL 'onnx_tf_prefix_/l1/neuron/ConstantOfShape'. Placing on CPU instead
- Scalar Input tensors are only valid for op type: ADD, ARG_MAX, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, QUANTIZE, SPLIT, SPLIT_V, SUB
Op has scalar input tensor(s): onnx_tf_prefix_/l1/ConstantOfShape/value
Warning: Unsupported TensorFlow Lite semantics for SPLIT_V 'onnx_tf_prefix_/l1/neuron/Split'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/ConstantOfShape, onnx_tf_prefix_/l1/neuron/Split
Warning: Unsupported TensorFlow Lite semantics for SQUEEZE 'onnx_tf_prefix_/l1/neuron/Squeeze_1'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Split1, onnx_tf_prefix_/l1/neuron/Squeeze_1
Warning: Unsupported TensorFlow Lite semantics for MUL 'onnx_tf_prefix_/l1/neuron/Mul_1'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Squeeze_1, onnx_tf_prefix_/l1/neuron/Sigmoid_1, onnx_tf_prefix_/l1/neuron/Mul_1
Warning: Unsupported TensorFlow Lite semantics for SQUEEZE 'onnx_tf_prefix_/l1/neuron/Squeeze_2'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Split2, onnx_tf_prefix_/l1/neuron/Squeeze_2
Warning: Unsupported TensorFlow Lite semantics for SUB 'onnx_tf_prefix_/l1/neuron/Sub'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Constant_4, onnx_tf_prefix_/l1/neuron/Squeeze_2, onnx_tf_prefix_/l1/neuron/Sub
Warning: Unsupported TensorFlow Lite semantics for MUL 'onnx_tf_prefix_/l1/neuron/Mul_2'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Mul_1, onnx_tf_prefix_/l1/neuron/Sub, onnx_tf_prefix_/l1/neuron/Mul_2
Warning: Unsupported TensorFlow Lite semantics for SQUEEZE 'onnx_tf_prefix_/l1/neuron/Squeeze'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Split, onnx_tf_prefix_/l1/neuron/Squeeze
Warning: Unsupported TensorFlow Lite semantics for MUL 'onnx_tf_prefix_/l1/neuron/Mul'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Squeeze, onnx_tf_prefix_/l1/neuron/Sigmoid, onnx_tf_prefix_/l1/neuron/Mul
Warning: Unsupported TensorFlow Lite semantics for ADD 'onnx_tf_prefix_/l1/neuron/Add'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Mul, onnx_tf_prefix_/l1/Add, onnx_tf_prefix_/l1/neuron/Add
Warning: Unsupported TensorFlow Lite semantics for ADD 'onnx_tf_prefix_/l1/neuron/Add_1'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Mul_2, onnx_tf_prefix_/l1/neuron/Add, onnx_tf_prefix_/l1/neuron/Add_1
Warning: Unsupported TensorFlow Lite semantics for SUB 'onnx_tf_prefix_/l1/neuron/Sub_1'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Add_1, Const_1, onnx_tf_prefix_/l1/neuron/Sub_1
Warning: Unsupported TensorFlow Lite semantics for GREATER 'onnx_tf_prefix_/l1/neuron/Greater'. Placing on CPU instead
- Scalar Input tensors are only valid for op type: ADD, ARG_MAX, EXPAND_DIMS, MAXIMUM, MEAN, MINIMUM, MUL, QUANTIZE, SPLIT, SPLIT_V, SUB
Op has scalar input tensor(s): onnx_tf_prefix_/l1/ConstantOfShape/value
Warning: Unsupported TensorFlow Lite semantics for CAST 'onnx_tf_prefix_/l1/neuron/Cast'. Placing on CPU instead
- All required operator attributes must be specified
Op has missing attributes: in_data_type, out_data_type
Warning: Unsupported TensorFlow Lite semantics for FULLY_CONNECTED 'StatefulPartitionedCall:0'. Placing on CPU instead
- Input(s), Output and Weight tensors must have quantization parameters
Op has tensors with missing quantization parameters: onnx_tf_prefix_/l1/neuron/Cast, onnx_tf_prefix_/p_out/synapse/MatMul_reshape, StatefulPartitionedCall:0
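Every warning above falls into one of three buckets: tensors with missing quantization parameters, dynamic tensors, or a CAST op with missing attributes. A quick way to confirm that, sketched as a small script that tallies the "Placing on CPU instead" warnings from the log lines (reading the log from a file or stdin is left to you):

```python
import re
from collections import Counter

def tally_cpu_fallbacks(log_lines):
    """Count Vela's 'Placing on CPU instead' warnings by op type and by the
    stated reason on the following '- ...' line."""
    ops, reasons = Counter(), Counter()
    expect_reason = False
    for line in log_lines:
        line = line.strip()
        m = re.match(r"Warning: Unsupported TensorFlow Lite semantics for (\S+) ", line)
        if m:
            ops[m.group(1)] += 1          # e.g. FULLY_CONNECTED, GATHER, MUL
            expect_reason = True
        elif expect_reason and line.startswith("- "):
            reasons[line[2:]] += 1        # e.g. "...must have quantization parameters"
            expect_reason = False
    return ops, reasons
```

Run against this log it shows the missing-quantization reason dominating, which points at the root cause below.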
Network summary for model (1)
Accelerator configuration               Ethos_U55_256
System configuration                RTSS_HP_SRAM_OSPI
Memory mode                               Shared_Sram
Accelerator clock                                 400 MHz

CPU operators = 26 (100.0%)
NPU operators =  0 (0.0%)

Neural network macs                                 0 MACs/batch

Info: The numbers below are internal compiler estimates.
For performance numbers the compiled network should be run on an FVP Model or FPGA.

Network Tops/s                                    nan Tops/s

NPU cycles                                          0 cycles/batch
SRAM Access cycles                                  0 cycles/batch
DRAM Access cycles                                  0 cycles/batch
On-chip Flash Access cycles                         0 cycles/batch
Off-chip Flash Access cycles                        0 cycles/batch
Total cycles                                        0 cycles/batch

Batch Inference time                 0.00 ms, nan inferences/s (batch size 1)
Warning: Could not write the following attributes to RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze_21' ReshapeOptions field: new_shape
Warning: Could not write the following attributes to RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze_1' ReshapeOptions field: new_shape
Warning: Could not write the following attributes to RESHAPE 'onnx_tf_prefix_/l1/neuron/Unsqueeze' ReshapeOptions field: new_shape
Total Heap Required: 0.00%
Success - Press Ok to close the window
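The headline here is 26 CPU operators, 0 NPU operators: nothing was mapped to the Ethos-U55. Most fallbacks cite missing quantization parameters, meaning the .tflite appears to be a float model, and Vela only maps fully integer-quantized ops onto the NPU. (The dynamic GATHER/RESHAPE tensors and the CAST with missing in_data_type/out_data_type attributes are separate export problems that quantization alone will not fix.) A minimal sketch of post-training full-integer quantization, assuming a TensorFlow SavedModel source and a representative-data generator (both names are hypothetical placeholders for your own model and calibration data):

```python
def convert_full_int8(saved_model_dir, rep_data):
    """Convert a SavedModel to a fully int8-quantized .tflite so that Vela
    can map its ops to the Ethos-U55.

    saved_model_dir -- path to your exported model (placeholder name)
    rep_data        -- generator yielding [input_batch] samples for calibration
    """
    import tensorflow as tf  # deferred import so the sketch reads standalone

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = rep_data
    # Force every op to int8: ops that cannot be quantized fail loudly at
    # conversion time instead of silently falling back to CPU at Vela time.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()
```

After re-exporting the model this way, re-running the same vela command should move the quantized ops from the CPU column to the NPU column; whatever remains on CPU then points at genuinely unsupported constructs.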