I trained a face-recognition model with TensorFlow image classification and exported it as TensorFlow Lite; the model size is around 12 MB. It works on the H7 Plus, but the latency is very high: about 20 s every time it detects a face and runs inference with the local model.tflite. To improve performance, I tried to quantize the tflite model, which reduced the model size to 4 MB. Unfortunately, when I use the quantized model in the IDE, it crashes with the following error:
```
Traceback (most recent call last):
  File "<stdin>", line 15, in <module>
OSError: tensorflow/lite/micro/kernels/quantize.cc:69 input->type == kTfLiteFloat32 || input->type == kTfLiteInt16 || input->type == kTfLiteInt8 was not true.
Node QUANTIZE (number 0f) failed to prepare with status 1
AllocateTensors() failed!
```
Is there any way to work around this and make a quantized TFLite model run on the H7 Plus? Thanks in advance.
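For reference, my understanding from the TensorFlow docs is that TFLite Micro targets require full-integer quantization (int8 weights *and* activations, driven by a representative dataset), not just the default dynamic-range quantization. Below is a minimal sketch of that conversion flow; the tiny Keras model, the 96x96x3 input shape, and the random calibration data are placeholders standing in for my actual face classifier and real preprocessed images:

```python
import numpy as np
import tensorflow as tf

# Placeholder stand-in for the trained face classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # Placeholder: in practice, yield ~100 real preprocessed face images
    # so the converter can calibrate activation ranges.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Restrict the graph to int8 builtin ops so the microcontroller
# kernels can execute every node.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

If full-integer conversion is indeed the expected flow, is re-exporting the model this way enough to satisfy the QUANTIZE kernel's input-type check on the H7 Plus?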