Hey, I am implementing an image detection model on the Arduino Nicla Vision board (the task requires not using Edge Impulse).
I have a trained model that runs on the Arduino and reaches 94% accuracy on the test dataset. In live tests on the device, however, it does not behave as expected: some classes are never detected at all, and the overall accuracy drops below 50%.
After checking the image preprocessing, I no longer suspect it as the cause. The live tests were done in the same environment, under conditions similar to the training data.
The last candidate I can think of is that the on-device normalization transforms the input differently from what I do in Python, where I simply divide the pixel values by 255.
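To illustrate the kind of mismatch I mean: if the deployed model is int8-quantized, the runtime does not feed it `pixel / 255.0` floats directly; it maps the normalized value through a quantization scale and zero point. This is a minimal sketch with an *assumed* scale of `1/255` and zero point of `-128` (typical for TFLite int8 inputs, but my actual model's parameters may differ); if the device uses different values, or skips the `/255` step, the inputs the model sees would no longer match training:

```python
def float_normalize(pixel):
    # What I do in Python during training: scale 0..255 to 0..1.
    return pixel / 255.0

def int8_quantize(pixel, scale=1 / 255.0, zero_point=-128):
    # What a TFLite-style int8 input expects:
    # q = round(x_norm / scale) + zero_point, clamped to [-128, 127].
    # scale and zero_point here are example values, not read from my model.
    q = round(float_normalize(pixel) / scale) + zero_point
    return max(-128, min(127, q))

for p in (0, 128, 255):
    print(p, float_normalize(p), int8_quantize(p))
```

With these example parameters the mapping is lossless (0 → -128, 255 → 127), but a wrong scale or zero point on the device would silently shift or clip every input, which could explain exactly this kind of accuracy collapse.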
Is there a way to access the output of ml.Normalization()?
Or is there another possible reason to this difference in behavior?
Many thanks in advance