Image classification with Edge Impulse - poor performance

I am trying to use transfer learning to classify images. I am gathering data from the OpenMV H7 Plus, with the image windowed in OpenMV to 142x60. I took around 200 training images for each class, then modified each training image by cropping out the feature I am trying to identify; the cropped images are around 45x45 pixels. I then import these images into my Edge Impulse project. I get good training results, e.g. 97.3% accuracy and 0.09 loss (using the transfer learning block), and model testing is near 100% accuracy. I have tried resizing the training data in Edge Impulse to both 48x48 and 96x96, with good results either way. I then export the neural network file.
When I run the NN file on the OpenMV H7 Plus over the whole 142x60 image, I am not getting good results. If I adjust the ROI in the call to tf.classify to be close to the object size, it performs better.
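
For reference, this is roughly what that adjustment looks like on my end (a sketch from memory; the model file name, labels, and ROI coordinates are placeholders):

```python
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((142, 60))   # same window used when gathering training data
sensor.skip_frames(time=2000)

net = tf.load("trained.tflite", load_to_fb=True)  # placeholder file name
labels = ["class_a", "class_b"]                   # placeholder labels

while True:
    img = sensor.snapshot()
    # Classifying the whole 142x60 frame gives poor results, but
    # shrinking the ROI to roughly the object size works better:
    for obj in tf.classify(net, img, roi=(50, 8, 45, 45)):  # hand-tuned ROI
        print(list(zip(labels, obj.output())))
```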

What am I doing wrong here?

[Image: the frame I am running tf.classify over]

[Image: example training data]

The images you train on and the images you then try to classify in real life need to be similar. The results you are getting are expected with your approach.

E.g., you need to crop the window to match the CNN input window.
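
Something along these lines, as a sketch (assuming the camera setup from your post; the model name is a placeholder and the window offset would need tuning for where the object sits):

```python
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
# Window the sensor down to a square region about the size of the
# training crops (~45x45) instead of the full 142x60 strip.
sensor.set_windowing((48, 48))   # centered by default; pass (x, y, w, h) to offset
sensor.skip_frames(time=2000)

net = tf.load("trained.tflite", load_to_fb=True)  # placeholder name

while True:
    img = sensor.snapshot()
    # The firmware scales this 48x48 frame to the network input size,
    # so what the model sees now matches what it was trained on.
    for obj in tf.classify(net, img):
        print(obj.output())
```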

Thanks for the response. A couple more questions:

If I am using images that I cropped out of the 142x60 image, and in Edge Impulse I resize all of the training images to 96x96, then on the OpenMV side, if I crop to a window that matches the size of the cropped training images (~45x45), do I also need to scale the OpenMV snapshot to 96x96? I know the resize has an effect on training, but how does that translate to tf.classify when looking at live images on the OpenMV?

You said, “E.g., you need to crop the window to match the CNN input window.”
To identify the suit of a card by training only on the cropped suit images, can I first segment out that region using blob detection, instead of cropping the window on the OpenMV, and then pass that ROI into tf.classify?
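
Roughly what I have in mind (just a sketch; the model name is a placeholder and the LAB threshold is made up and would need tuning for the actual suit color):

```python
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((142, 60))
sensor.skip_frames(time=2000)

net = tf.load("trained.tflite", load_to_fb=True)  # placeholder name
labels = ["clubs", "diamonds", "hearts", "spades"]

SUIT_THRESHOLD = (0, 40, -30, 30, -30, 30)  # made-up LAB threshold for a dark symbol

while True:
    img = sensor.snapshot()
    for blob in img.find_blobs([SUIT_THRESHOLD],
                               pixels_threshold=50,
                               area_threshold=50,
                               merge=True):
        # Pass the blob's bounding box to the classifier as the ROI.
        for obj in tf.classify(net, img, roi=blob.rect()):
            print(list(zip(labels, obj.output())))
```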

Is there a better approach to this that I am missing?

Thanks

Hmm, so, we have resizing code on the camera that scales whatever the ROI is to match the network input size. However, it sounds like you're having a lot of scaling issues. I'd fix your crops to be square, or adjust the network input size to match your crops. In general, avoid any scaling where the width and height are scaled by different factors. I'm not saying that you have to make the width and height the same; you just want the scaling ratio to stay 1:1 so nothing gets stretched.
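
For example, if you go with the blob approach, you can square up the blob's bounding box before classifying so the scaling stays uniform. A sketch (assumes `img`, `net`, and `blob` as in your snippet above):

```python
def square_roi(blob, img):
    # Expand the blob's bounding box into a square centered on the blob,
    # clamped to the image bounds, so scaling it to the network input
    # keeps the w/h ratio at 1.
    x, y, w, h = blob.rect()
    side = min(max(w, h), img.width(), img.height())
    cx, cy = x + w // 2, y + h // 2
    nx = min(max(cx - side // 2, 0), img.width() - side)
    ny = min(max(cy - side // 2, 0), img.height() - side)
    return (nx, ny, side, side)

# Usage:
# for obj in tf.classify(net, img, roi=square_roi(blob, img)):
#     print(obj.output())
```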