Normalization of images, access to output

Hey, I am implementing an image detection model on the Arduino Nicla Vision board (the task is to do it without using Edge Impulse).

I have a trained model that runs on the Arduino and reaches 94% accuracy on the test dataset.
However, when I run live tests on the Arduino it does not work as expected: some classes are not detected at all, and the overall accuracy is below 50%.

After checking the image preprocessing, I don't think preprocessing is the issue.
The live tests are done in the same environment, under similar conditions as the training data.

The remaining suspect is that the normalization step transforms the data differently than I do in Python, where I simply divide the pixel values by 255.
Is there a way to access the output of ml.Normalization()?
Or is there another possible explanation for this difference in behavior?

Many thanks in advance :slight_smile:

Hi, the code for the normalization class is here:

openmv/scripts/libraries/ml/ml/preprocessing.py at master · openmv/openmv

It doesn’t divide by 255 unless the input layer is floating point. For int8 inputs it maps 0:255 → -128:127, and for uint8 inputs it passes 0:255 through unchanged.
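To make the difference concrete, here is a minimal NumPy sketch of the three scalings described above, so you can reproduce on your PC what the board feeds the model. This is my own illustration, not the actual OpenMV code — check the linked preprocessing.py for the real implementation:

```python
import numpy as np

def normalize_like_described(img_u8, input_dtype):
    """Illustrative sketch of the scalings described above.

    img_u8: uint8 array of raw pixel values 0..255.
    input_dtype: the model's input-layer dtype.
    """
    if input_dtype == np.float32:
        # Floating-point input layer: divide by 255 -> 0.0..1.0.
        return img_u8.astype(np.float32) / 255.0
    elif input_dtype == np.int8:
        # int8 input layer: shift 0..255 down to -128..127.
        return (img_u8.astype(np.int16) - 128).astype(np.int8)
    elif input_dtype == np.uint8:
        # uint8 input layer: values pass through unchanged.
        return img_u8
    raise ValueError("unsupported input dtype")
```

Running your training-time normalization (divide by 255) next to this on the same test image is a quick way to see whether the int8 shift, rather than your preprocessing, explains the accuracy gap.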

Note that you can re-implement the class in your code and bypass it.

Most of the ML models we run don’t use floating-point input layers, so that code path is less well tested. However, assuming you are on the int8 or uint8 path, Normalization should work as expected.
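If you want to bypass the built-in class as suggested above, a standalone replacement can be as small as the sketch below. The function name is hypothetical, and how it plugs into the inference call depends on the OpenMV ml API, so check the linked preprocessing.py for the expected input/output shape before wiring it in:

```python
# Hypothetical drop-in replacement: normalize exactly as in the
# Python training pipeline (divide by 255), so live inference
# matches what the model saw during training.

def train_style_normalize(pixels):
    """Map raw 0..255 pixel values to 0.0..1.0 floats."""
    return [p / 255.0 for p in pixels]
```

Comparing this output against what the board actually passes to the model would confirm or rule out the normalization mismatch.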