Please don’t use that firmware; it might be broken. As for quantization, you need full int8 quantization (weights and activations), not int16. If you’re using the latest TensorFlow, see this post:
While updating all of my ML tools, I upgraded to OpenMV 4.5.6.
In doing so I noticed that TFLite models created and quantized with TensorFlow 2.8.2 loaded fine, but models quantized with the latest TensorFlow 2.17 kept failing to load on OpenMV. Adding this flag to my script before the quantization step let the model load successfully:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter._experimental_disable_per_channel_quantization_for…
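For context, here is a minimal full-int8 conversion sketch. The input shape, the output filename, and the full name of the truncated flag above are assumptions; the flag appears to be _experimental_disable_per_channel_quantization_for_dense_layers in recent TF releases, so double-check it against your TF version.

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a few hundred samples matching the model's real input shape and
    # value range; (1, 96, 96, 1) here is only a placeholder.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full int8 quantization of both weights and activations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Assumed full name of the truncated flag quoted above; verify it exists in your TF version.
converter._experimental_disable_per_channel_quantization_for_dense_layers = True

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)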
For ops, you can open the model in https://netron.app/ and check whether it contains ops that are not supported by TFLite Micro.
The supported ops are the ones registered in tensorflow/lite/micro/micro_mutable_op_resolver.h in the tflite-micro repository.
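As an alternative to Netron, recent TF versions also ship an experimental analyzer that prints every op in a converted model; a quick sketch, assuming the converted file is named model_int8.tflite:

import tensorflow as tf

# Prints a per-op breakdown of the .tflite model; compare the listed op names
# against the Add...() methods declared in micro_mutable_op_resolver.h.
tf.lite.experimental.Analyzer.analyze(model_path="model_int8.tflite")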
I will test the model as soon as I can.