I have trained a multi-class image classification model based on MobileNet-V2 (only a Dense layer was added), performed full integer (INT8) quantization, and exported a model.tflite file, which I call with TF Classify().
The model's accuracy was quite good during training and testing. However, when tested on OpenMV, the same label is output for every object (although the probabilities differ slightly).
I looked up some materials, and one of them mentioned that TF Classify() has offset and scale parameters, which relate to compressing RGB values to [-1, 1] or [0, 1] during training, but these parameters do not appear in the official API documentation.
So are there any examples of a complete workflow, from training a model in TensorFlow to deploying it on OpenMV?
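For what it's worth, here is a minimal sketch of the training-side half of such a workflow: build the model, do full-integer (INT8) quantization with a representative dataset, and inspect the resulting input tensor's quantization scale and zero point (the "scale" and "offset" mentioned above). This is an illustrative sketch, not a confirmed OpenMV recipe: it uses a tiny stand-in network instead of MobileNet-V2, and `representative_data()` is a hypothetical placeholder that you would replace with real training images.

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in for a MobileNet-V2 backbone + Dense head (illustration only).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),        # bake "div 255" into the graph
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Hypothetical calibration generator: replace the random arrays with a few
    # hundred real training images so the INT8 ranges are calibrated correctly.
    for _ in range(8):
        yield [np.random.randint(0, 256, (1, 96, 96, 3)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8    # board feeds raw uint8 pixels
converter.inference_output_type = tf.uint8
tflite_bytes = converter.convert()

# Inspect the input tensor's (scale, zero_point): these are what the
# interpreter uses to map raw pixel values into the quantized domain.
interp = tf.lite.Interpreter(model_content=tflite_bytes)
detail = interp.get_input_details()[0]
print(detail["shape"], detail["dtype"], detail["quantization"])
```

The resulting `tflite_bytes` would then be written to a `.tflite` file and copied to the OpenMV board. The key point is that because the `Rescaling` layer is inside the graph and the converter's input type is uint8, the exported model expects raw 0-255 pixels, with no normalization done on the camera side.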
What do you mean by "div 255"? Do you mean that when I call the model in OpenMV, I should input a grayscale image?
When training the model, I transformed the image in the first layer of the model, compressing the RGB values to 0-1 (div 255). Should the div operation you mentioned be carried out on OpenMV instead?
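On the div-255 point: if the division really is the first layer of the saved graph, it is exported inside the .tflite and the board should feed raw 0-255 pixels; dividing again on the camera side would normalize twice. A small sketch (a tiny illustrative model, not the actual network) showing that a `Rescaling(1/255)` first layer makes feeding raw pixels equivalent to dividing by hand:

```python
import numpy as np
import tensorflow as tf

# Inner network that expects inputs already scaled to [0, 1].
inner = tf.keras.Sequential([
    tf.keras.Input(shape=(4, 4, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Same network with the "div 255" baked in as its first layer.
with_rescale = tf.keras.Sequential([
    tf.keras.Input(shape=(4, 4, 3)),
    tf.keras.layers.Rescaling(1.0 / 255),
    inner,
])

raw = np.random.randint(0, 256, (1, 4, 4, 3)).astype(np.float32)

# Raw 0-255 pixels through the rescaling model vs. manually divided
# pixels through the inner model: the outputs agree.
a = with_rescale(raw).numpy()
b = inner(raw / 255.0).numpy()
print(np.allclose(a, b, atol=1e-4))
```

So the question reduces to whether the exported graph actually contains that first layer; if it does, no div on the OpenMV side should be needed.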
My model has good validation accuracy in TensorFlow, but after quantization and conversion to TFLite I cannot get correct output on OpenMV. I found that the model I trained in Edge Impulse has only one channel (per tf_model.channels()), while the model I exported has three channels. Could this be related to the model's failure to output correct results?
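One way to rule the channel question in or out is to read the expected input shape straight out of the .tflite file on the desktop; the last dimension of the input tensor is the channel count (1 = grayscale, 3 = RGB), and the camera frame format on the board has to match it. A sketch of that check follows; it converts a tiny throwaway 3-channel model just so it is self-contained, but in practice you would instead open your exported file with `tf.lite.Interpreter(model_path="model.tflite")` (a placeholder path for whatever file you copied to the board):

```python
import tensorflow as tf

# Throwaway 3-channel model, converted only so this sketch runs standalone.
m = tf.keras.Sequential([
    tf.keras.Input(shape=(8, 8, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])
tfl = tf.lite.TFLiteConverter.from_keras_model(m).convert()

# For a real exported model use: tf.lite.Interpreter(model_path="model.tflite")
interp = tf.lite.Interpreter(model_content=tfl)
shape = interp.get_input_details()[0]["shape"]
channels = int(shape[-1])   # 1 = grayscale, 3 = RGB
print("input shape:", list(shape), "-> channels:", channels)
```

If the exported model reports 3 channels while the Edge Impulse one reports 1, the two are not interchangeable: a grayscale camera frame fed into a 3-channel model (or vice versa) could plausibly produce the constant-label behaviour described above.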