I am trying to use transfer learning to classify images. I am gathering data from an OpenMV H7 Plus, with the image windowed in OpenMV to 142x60. I took around 200 training images per class, then cropped each training image down to the feature I am trying to identify; the crops are around 45x45 pixels. I import these crops into an Edge Impulse project. I get good training results, e.g. 97.3% accuracy and 0.09 loss (using the transfer learning block), and Model Testing is near 100% accuracy. I have tried resizing the training data in Edge Impulse to both 48x48 and 96x96, with good results either way. I then export the neural network file.
When I run the NN file on the OpenMV H7 Plus over the whole 142x60 image, I am not getting good results. If I adjust the ROI in the call to tf.classify() to be close to the object size, it performs much better.
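For reference, here is a minimal sketch of what I mean by adjusting the ROI: sliding a window of roughly the training-crop size across the 142x60 frame and classifying each window, rather than classifying the whole frame at once. This is plain Python pseudocode of the idea, not the actual OpenMV script; `classify` stands in for the real model call (on the H7 Plus that would be something like `tf.classify(model, img, roi=roi)`), and the names `windows` and `best_detection` are just mine.

```python
# Sketch: scan a training-crop-sized ROI over the full frame instead of
# squashing the whole 142x60 frame into one classification call.
# `classify` is a placeholder for the real on-device model call.

FRAME_W, FRAME_H = 142, 60
WIN = 48      # window matches the ~45x45 training crops (resized to 48x48)
STRIDE = 16   # overlap between neighbouring windows

def windows(frame_w, frame_h, win, stride):
    """Yield (x, y, w, h) ROIs covering the frame."""
    for y in range(0, max(frame_h - win, 0) + 1, stride):
        for x in range(0, max(frame_w - win, 0) + 1, stride):
            yield (x, y, win, win)

def best_detection(classify, frame_w=FRAME_W, frame_h=FRAME_H):
    """Return the ROI whose window scores highest under the classifier."""
    return max(windows(frame_w, frame_h, WIN, STRIDE),
               key=lambda roi: classify(roi))
```

With a 48-pixel window and 16-pixel stride this yields one row of six windows across the 142x60 frame, so per-frame cost stays manageable on the H7 Plus.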
What am I doing wrong here?
Image that I am running tf.classify() over:
Example image of training data: