Hello,
I am making an image classification project using TensorFlow. The project has two classes.
I’ve trained a model whose output is the probability of each class. I have tried various CNN architectures, and after confirming that there’s no issue with the model or the TFLite file (the desktop interpreter still returns probabilities), in every case when I convert the model to TFLite and run it on the OpenMV Cam, it reports only 1.0 or 0.0 as the probability of each class.
When I built something similar with Edge Impulse I got a value between 0 and 1 for each class.
What can be the issue?
On the other hand, when I convert the model to a quantized version, it doesn’t run on the camera at all.
The camera detects the two classes pretty well.
This is the model:
trained.zip (766.0 KB)
@AlfonsoAIT - Can you confirm the model works on Edge Impulse after being quantized, using their post-training testing module? They let you verify performance after training a model. We should see the same results.
int8 models should return results correctly. We’ve verified this behavior directly using image classification models from Edge Impulse.
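For reference, the raw int8 outputs of a quantized model map back to floats via the output tensor’s scale and zero-point. A minimal sketch of that mapping (the scale and zero-point values here are hypothetical examples, not read from your model):

```python
def dequantize(q, scale, zero_point):
    """Recover the float value from a raw int8 output: real = scale * (q - zero_point)."""
    return scale * (q - zero_point)

# Hypothetical quantization params, typical for an int8 softmax output:
scale, zero_point = 1 / 256, -128
probs = [dequantize(q, scale, zero_point) for q in (-96, 64)]
print(probs)  # → [0.125, 0.75] — values strictly between 0 and 1, not just 0.0/1.0
```

If the dequantized outputs come out as fractions like these, the int8 path is working as expected.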
Hello,
We made tests using EdgeImpulse and yes, it works fine.
Now we need to generate the model outside of Edge Impulse; that’s where the question comes from. In my last post I sent you our model, which detects the classes fine but only returns 1.0 or 0.0 for each probability, never a value in between.
Our quantized model doesn’t work at all; the camera does not load it.
I have attached the quantized model. Could you check it and see why it doesn’t load?
modelquant.zip (169.7 KB)
Okay, for the unquantized model I don’t know why the output is just 0.0 and 1.0. Our code for floating-point output models does no translation on the output, so the model really is producing those values. The only thing I can think of is that it expects a different input scale than what we process by default.
Do you know how you specified the original input image color channels? Your net has a floating-point image input, so you might need to specify the pre-processing of the image:
ml.preprocessing — ML Preprocessing — MicroPython 1.23 documentation (openmv.io)
I tried -1/1, 0/255, -128/127, 0/1, and got the same result for the model each time.
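Those four ranges correspond to linearly remapping the camera’s 0..255 pixel values into a target interval before they reach the model. A quick sketch of what each option feeds the net (plain Python, just to illustrate the mapping):

```python
def scale_pixel(p, lo, hi):
    """Linearly map an 8-bit pixel value (0..255) into the range [lo, hi]."""
    return lo + (p / 255.0) * (hi - lo)

# The four ranges tried above, applied to a mid-gray pixel:
for lo, hi in [(-1.0, 1.0), (0.0, 255.0), (-128.0, 127.0), (0.0, 1.0)]:
    print((lo, hi), scale_pixel(128, lo, hi))
```

The model only produces sensible probabilities when this range matches whatever normalization was used during training.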
As for the quantized model not loading: if you open both models with modelquant.tflite in Netron (netron.app), you’ll notice they are not the same network and that things have changed between them. Something may have gone wrong during quantization.
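One common cause of a broken int8 conversion is a missing or mismatched representative dataset during post-training quantization. A hedged sketch of a full-integer conversion with the standard TFLiteConverter API (the input shape, sample count, and `saved_model_dir` are placeholders for your setup — calibration samples must use the same preprocessing range as inference):

```python
import numpy as np

# Placeholder input shape; match your model's actual input.
INPUT_SHAPE = (1, 96, 96, 1)

def representative_dataset():
    """Yield calibration samples with the SAME preprocessing as inference.
    Real images from the training set should be used here, not random data."""
    for _ in range(100):
        yield [np.random.rand(*INPUT_SHAPE).astype(np.float32)]

def convert_int8(saved_model_dir):
    """Full-integer post-training quantization to an int8 TFLite flatbuffer."""
    import tensorflow as tf  # heavy import kept local to the conversion step
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()
```

If the converter falls back to float ops (e.g. because an op has no int8 kernel or the representative dataset is missing), the resulting file can contain ops the camera’s runtime rejects, which would explain the load failure.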