Problems with machine learning

Hi! I have an OpenMV Cam H7 and I've prepared a dataset with it for learning the characters [H, S, U]. It's quite a simple task for deep learning! My model consists of two convolution layers and two fully connected layers. I've trained it with Google Colab and got an evaluation accuracy of about 95% (the eval and train datasets are different). So the problem is that the model cannot classify S and mixes it up with H and U! I guess the problem is my training and so on, but I was curious whether the OpenMV Cam H7 or the firmware I'm using has any problem. By the way, I'm currently using a wide-angle lens on the OpenMV and I wonder if that is the problem!
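
Roughly, the model looks like the sketch below; the filter counts, kernel sizes and the 32x32 grayscale input size are just placeholders, not my exact values:

```python
from tensorflow.keras import layers, models

# Two convolution layers + two fully connected layers for the 3 classes (H, S, U).
# Filter counts, kernel sizes and the 32x32 grayscale input are placeholders.
model = models.Sequential([
    layers.Conv2D(8, (3, 3), activation="relu", input_shape=(32, 32, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```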

Hi, did you use our new deep learning system with Edge Impulse and our dataset editor?

We built this because people keep getting bad results when they train on synthetic data and then expect that to work in the real world.

CNNs are very good at memorization. They will overfit to whatever dataset they are given. Please do not assume the CNN is smart; it's just a powerful algorithm. In practice, you have to feed it images of exactly what it was trained on. The best way to avoid a mismatch is to collect training data from the real system, which is what our dataset editor is for.

If you trained it on pictures of letters and then expect it to work in the real world, it will not. It has no idea what you are showing it; it is just segmenting a dataset. If you want it to work well, you need to collect images from the camera of the target you are looking at and then train a net using those.

Hi, thanks for such a quick reply.
In fact, I've trained the model with a dataset which I created with OpenMV IDE's dataset editor! I haven't used Edge Impulse, though. I'm familiar with the overfitting problem, and that's why I created the dataset with OpenMV IDE. I did use augmentation, but the evaluation dataset is the raw output of the OpenMV.
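
(For reference, the augmentation was nothing exotic; something along the lines of the sketch below, where the transforms and ranges are only illustrative, not my exact settings.)

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Typical mild augmentation for grayscale letter crops; the ranges here are
# illustrative, not the exact values that were used.
augmenter = ImageDataGenerator(
    rotation_range=10,       # small rotations
    width_shift_range=0.1,   # small horizontal shifts
    height_shift_range=0.1,  # small vertical shifts
    zoom_range=0.1,          # slight zoom in/out
)

# Dummy stand-in for the exported dataset: N 32x32 grayscale images, 3 classes.
train_images = np.random.rand(16, 32, 32, 1).astype("float32")
train_labels = np.random.randint(0, 3, size=16)

batches = augmenter.flow(train_images, train_labels, batch_size=8)
x_aug, y_aug = next(batches)   # one batch of augmented images
```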

Okay, cool. Then just keep in mind the effect of two things:

  1. Background details. The CNN can learn what the background looks like versus the foreground. Make sure the background is not interesting and that the image just contains the subject you care about. E.g., train with an object on a flat, single-color table, and then evaluate with another object on the same kind of flat, single-color table.

  2. Make sure the resolution of the training data and the evaluation data is the same. Our system scales the frame buffer to fit the CNN's input window while maintaining the aspect ratio. This means, however, that if you feed it an image that doesn't have the right amount of padding pixels, you will be feeding it an image that doesn't look like the dataset. E.g., if you made the dataset without a tight crop and then feed the CNN tightly cropped images, the image that the scale-to-fit code passes to the CNN internally may end up looking cropped compared to the training data (see the sketch right after this list).
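
To illustrate what the scale-to-fit step does, here is a rough desktop-Python sketch of the idea using Pillow (this is not the camera firmware's actual code, and the file names are hypothetical). The point is that a tightly cropped letter and a loosely cropped one end up at very different sizes after resizing and padding:

```python
from PIL import Image

def scale_to_fit(img, target=(32, 32), fill=255):
    """Resize while keeping the aspect ratio, padding the rest (letterboxing)."""
    tw, th = target
    scale = min(tw / img.width, th / img.height)
    new_size = (max(1, round(img.width * scale)), max(1, round(img.height * scale)))
    resized = img.resize(new_size)
    canvas = Image.new(img.mode, target, fill)
    canvas.paste(resized, ((tw - new_size[0]) // 2, (th - new_size[1]) // 2))
    return canvas

# A tight crop of a letter and a loose crop (extra white border) of the same
# letter land in the 32x32 window at very different scales, so the cropping
# used for training and for evaluation needs to match.
tight = Image.open("letter_S_tight.png").convert("L")   # hypothetical file
loose = Image.open("letter_S_loose.png").convert("L")   # hypothetical file
scale_to_fit(tight).save("tight_32.png")
scale_to_fit(loose).save("loose_32.png")
```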

Hi
Well, my image is binary with a white background. I guess number 2 may be the case; I'll check it. My problem is that I can't make my model's input image too large (larger than about 30 pixels, I guess) or the OpenMV gives me memory errors. Is the only solution the OpenMV Cam H7 Plus?

Hi, you just need to make sure that whatever you feed the net looks exactly like the training dataset. I don't know what that means exactly in your case, but as long as the input matches what the net was trained on, it should work.

No, what I meant was that I have to resize the images because I can't make the model's input shape (50, 50), which is the average shape of the dataset. Why? When I try to do so, the model gives me a memory error because I'm running the “algorithm” on far too large an image! Is there any other way to fix the problem other than buying a new cam with more RAM?

Use the crop and then scale/resize methods. This will change the input size and free up RAM.
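
For example, something along these lines (a rough sketch, assuming a recent firmware where Image.crop() and Image.scale() accept roi / x_scale / y_scale keyword arguments; on older firmware, sensor.set_windowing() plus Image.mean_pool() gives a similar result):

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # binary letters -> grayscale is enough
sensor.set_framesize(sensor.QVGA)        # 320x240 capture
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    # Crop to the region that contains the letter (the ROI values are examples).
    img.crop(roi=(135, 95, 50, 50))
    # Shrink the 50x50 crop down to the CNN input size, e.g. 32x32.
    img.scale(x_scale=32 / 50, y_scale=32 / 50)
    # img is now a small buffer, so running the classifier on it should
    # stay within the H7's RAM limits.
```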