Any way to add images captured during the testing of my model to the dataset itself

Hi, I am fairly new to coding and ML/MV stuff, and I’ve just started using OpenMV for an object classification project. I already have a dataset of around 50 images per class, but I would like to know if it’s possible to take the image that gets captured during classification and add it to the labelled dataset to strengthen it. I am using the OpenMV dataset creator and Edge Impulse to create a transfer learning model. I apologize if what I said is not clear, as English is not my first language; feel free to ask me to clarify what I am asking for.

Yes, this is totally fine to do.

Any idea how I would go about doing so? Right now I have an item that I rotate and photograph 5 times in order to run the classification model, but I then want to take those 5 images and insert them into the dataset feature of OpenMV. I tried doing it via keyboard inputs, since Ctrl+Shift+S saves an image into the dataset, but the keyboard Python libraries are not working for me.

I don’t know how you can automate this process.

I suppose I could add automatic timed recording of images to the dataset.

Please describe the use case exactly. I would need to add this to the IDE.

My project is on identifying types of recyclable and non-recyclable commercial bottles: items like shampoo pump bottles and different size categories of water bottles and plastic bottles. I have a servo turning the bottle 5 times, taking 5 images of the bottle from different perspectives. After taking those 5 pictures, I would like to upload them into the dataset folder if the bottle is classified correctly.
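The accept/reject logic for that flow can be sketched in plain Python (the helper name and threshold below are mine, not from OpenMV; on the camera, the actual capture and inference calls would come from OpenMV's sensor and tf modules):

```python
NUM_VIEWS = 5          # one picture per servo position
CONFIDENCE_MIN = 0.7   # assumed acceptance threshold, tune to taste

def all_views_match(predictions, expected_class, threshold=CONFIDENCE_MIN):
    """Decide whether a captured set of views should be added to the dataset.

    predictions: list of (label, confidence) pairs, one per view.
    Only accept the set when every view agrees with the expected class
    at sufficient confidence.
    """
    if len(predictions) != NUM_VIEWS:
        return False
    return all(label == expected_class and conf >= threshold
               for label, conf in predictions)
```

For example, `all_views_match([("water_bottle", 0.9)] * 5, "water_bottle")` returns `True`, while a set where any view disagrees or falls below the threshold returns `False`.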


The easiest thing for me to do is to make the IDE parse the text log and look for a particular text string using an escape sequence.

The IDE actually already supports this for things like JPEG transfer with the compressed_for_ide() function.

I can actually turn this around over the weekend for you and have the IDE accept a print command from the camera to trigger what you want.

It would simply work like this: whatever is in the frame buffer would be added to a class folder in the open dataset editor. You’d send the class name in the command to save a pic.
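As a sketch of what the camera-side command could look like (the exact escape sequence the IDE parses isn't spelled out in this thread, so `SAVE_PREFIX` below is a placeholder, not the real sequence):

```python
SAVE_PREFIX = "\x1b"  # placeholder escape character, NOT the real IDE sequence

def make_save_command(class_name, prefix=SAVE_PREFIX):
    # Build the text-log command that would tell the IDE to copy the
    # current frame buffer into the open dataset editor's folder for
    # `class_name`. On the camera you would just print() this string.
    return prefix + class_name
```

On the camera, the script would do something like `print(make_save_command("water_bottle"))` after a confirmed classification; the IDE scans the text log for the sequence and saves the frame buffer contents.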

Does this meet your needs?

Yeah, sure, that would 100% help. Thank you for the fast reply.

Okay, will try to implement this when I have some time maybe over the weekend.

Hi, sorry for the late reply. Thank you so much!

Sorry, this will get done next week.

Hi, this feature has been added to the IDE. See the issue above for how to activate it. You’ll need to use the development release of the IDE.

Might be a stupid question, but how do I download the development release? I’d like to thank you again for how fast you guys got this done.

It’s under Releases on the OpenMV IDE GitHub. There’s a link to the release page on the download page of our website.

Thank you, I have it working now. Would I be able to automatically put it into a certain class, or will it only go to the class I am highlighting?

It’s only going to go to the class you are highlighting.

I could make it select other classes… but is that really something you have the ability to do, though? You’d have to find the anomaly first.

Also, I think I’m going to shorten the sequence to one character at the end. “OMV” is three characters, which is non-standard. I think it’s better to switch to one character so other systems can parse and remove the escape sequence. Pushing a new IDE build now.
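For a downstream consumer of the same serial text stream, filtering out a one-character command might look like this (the `ESC` byte and command format are hypothetical; the real sequence isn't given in the thread):

```python
ESC = "\x1b"  # hypothetical single-character command prefix

def strip_ide_commands(log_line):
    # Drop the line entirely if it is an IDE dataset-save command, so other
    # systems reading the text log only see normal print() output.
    if log_line.startswith(ESC):
        return None
    return log_line
```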

Hi, I have been testing the new feature and have run into a problem. My camera takes 5 pictures and seems to run the classification model on all 5, but the final picture isn’t being uploaded to the frame buffer (and isn’t being saved to the dataset editor). As a result, when I run the code again (the code waits for a button press before starting), the last image captured in the previous set of 5 gets pushed to the dataset editor as well as the frame buffer. I will upload a simplified version of the code if requested.

Hi, the frame buffer kind of updates when it likes. You need to call the sensor.flush() command at the end of your script and then add some delay so the script doesn’t end right after that.
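A minimal end-of-script pattern for this might look like the following (the helper name and delay value are mine; sensor.flush() is the actual OpenMV call):

```python
import time

def finish_and_flush(sensor, settle_ms=300):
    # Push the last captured frame to OpenMV IDE before the script exits.
    # sensor.flush() sends the current frame buffer to the IDE; the delay
    # afterwards keeps the script alive long enough for the transfer to
    # complete before the script ends.
    sensor.flush()
    time.sleep(settle_ms / 1000)  # on the camera: time.sleep_ms(settle_ms)
```

You would call this once, at the very end of the script, after the last snapshot and classification.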

I assume the sensor.flush() command would delete the image from the frame buffer? If so, I could work around that.

No, it doesn’t do that. It flushes the image to the IDE. It’s what snapshot() does internally before grabbing a new image.