Feasibility of using OpenMV in a bean sorter

Hello, all!

New to machine vision, but not new to electronics, coding, or electromechanics.

I’m in the early stages of helping my friend build an automated, ‘semi-industrial’ bean sorter. He grows beans with a relatively high amount of genetic variability, and he would like to separate out the different subtypes, among other sorting needs (such as sorting by size, “non-bean-ness,” etc.). This ‘type sorting’ is the first goal. An example of what I’m talking about is attached as a picture. All of those beans come from the same species, and a single bean pod may contain all of these subtypes. I do recognize that there are at least two bins in there that look virtually identical to me – I really need to ask my friend how he differentiated between those two, specifically. But the rest have (I think) pretty distinct differences to human eyes.

The general mechanical design of such a system is well-established (http://www.satake-usa.com/images/principals-optical-sorting.jpg, for example). I do not plan on reinventing the wheel.

Regarding the sensing aspects, however: I believe simpler methods, like PIRs or color sensors, will be impossible – these ain’t brightly colored M&Ms or Skittles – though I will be doing some experiments to verify that a simpler, cheaper approach won’t work. Assuming it won’t, that pretty much leaves me with machine vision. I have not purchased the OpenMV board yet, but I believe it is the best option (out of a camera with a Pi using OpenCV, the JeVois, the HiCat Livera, and the CMUcam5 Pixy – if anyone knows of others, please let me know).

I plan on sending beans down a chute, either using a gate to trigger the camera or using the camera to look for motion. When a bean is in front of a well-lit background (i.e., whatever I determine to be appropriate based on guides to machine vision lighting, such as A Practical Guide to Machine Vision Lighting), I’ll take a color picture, perform some statistical analysis on the blob that holds the bean to determine what kind of bean it is, and then either trigger a pneumatic/mechanical classifying system with the OpenMV or send a signal to an Arduino to do the same.

My questions are:

  1. Do you think this system is up for making the distinction between the types of beans in the picture? I believe this will be the hardest job this system will have to perform. Of course I plan on experimenting, but if the answer is an obvious ‘no,’ I can save $65.
  2. Would an analysis based on the RGB color channels be enough to distinguish between these beans? That was my initial guess at a process. I assumed I would ‘train’ the system by putting some n amount of the ‘same’ bean through and taking means, modes, medians, and standard deviations of the histograms (I need to brush up on the appropriate statistical tools here) and determining some boundaries for identifying the different beans. Does this seem like the correct path forward?
  3. Is the LED light on the OpenMV intended to light the space being photographed?
  4. If I were to send beans down a ramp, take a single picture of each, capture a blob, do a histogram, and classify it, can anyone guess at what speed this could occur? I’m looking for generalities like “more than 5 times a second” or “1-2 seconds” or whatever.
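To make question 2 concrete, here’s a sketch in plain Python of the kind of statistical ‘training’ I have in mind – all numbers are made up for illustration, and on the camera the per-channel means would come from its statistics functions instead:

```python
# Sketch of the statistical classification idea from question 2.
# The (R, G, B) training values are invented for illustration only.

import statistics

# Hypothetical training data: per-bean (R, G, B) channel means,
# grouped by bean type.
training = {
    "dark_red": [(96, 41, 38), (101, 44, 40), (93, 39, 35)],
    "tan":      [(172, 140, 104), (168, 135, 99), (175, 144, 110)],
}

def build_model(samples_by_type):
    """For each type, compute the mean and stdev of each channel."""
    model = {}
    for bean_type, samples in samples_by_type.items():
        channels = list(zip(*samples))  # [(all Rs), (all Gs), (all Bs)]
        model[bean_type] = [
            (statistics.mean(ch), statistics.stdev(ch)) for ch in channels
        ]
    return model

def classify(model, sample):
    """Pick the type whose channel means are closest, in stdev units."""
    best_type, best_score = None, None
    for bean_type, params in model.items():
        # Sum of squared z-scores across the three channels.
        score = sum(((x - m) / (s or 1.0)) ** 2
                    for x, (m, s) in zip(sample, params))
        if best_score is None or score < best_score:
            best_type, best_score = bean_type, score
    return best_type

model = build_model(training)
print(classify(model, (98, 42, 37)))     # falls in the dark_red cluster
print(classify(model, (170, 139, 102)))  # falls in the tan cluster
```

The boundaries could then be tightened (e.g., reject anything more than a few standard deviations from every cluster as “non-bean”).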

Thanks, all.

The OpenMV Cam is excellent at color tracking. Expect frame rates above 60 FPS.

As for doing this, we have built-in get_histogram() and get_statistics() methods which will output all the color info you’d like and more. We also have a find_blobs() method to find the bean given a set of color thresholds.

So, the OpenMV Cam does everything you want.

As for using it, the camera’s LEDs are powerful but they are just IR LEDs. You should hook up an external LED driver that the camera can control. As for scene setup, you should have the OpenMV Cam stare at a well-illuminated blank white background on startup. Let auto gain and white balance run, and then turn them off.
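A minimal MicroPython sketch of that startup sequence, to run on the camera itself – the LAB threshold values are placeholders you’d have to tune for your beans:

```python
# OpenMV MicroPython sketch: let auto gain/white balance settle against a
# well-lit white background, lock them, then look for beans as blobs.
# Runs on the camera, not on a PC. Thresholds below are placeholders.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)   # 320x240
sensor.skip_frames(time=2000)       # auto gain/WB settle on the white bg
sensor.set_auto_gain(False)         # then lock them so colors stay
sensor.set_auto_whitebal(False)     # stable while beans pass by

clock = time.clock()
# Placeholder LAB thresholds meaning "not the white background" - tune
# these with the Threshold Editor in OpenMV IDE.
bean_thresholds = [(0, 80, -128, 127, -128, 127)]

while True:
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs(bean_thresholds, pixels_threshold=200,
                               area_threshold=200, merge=True):
        # Color statistics restricted to the blob's bounding rectangle.
        stats = img.get_statistics(roi=blob.rect())
        print(clock.fps(), stats.l_mean(), stats.a_mean(), stats.b_mean())
```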

Beans coming through then will be obvious against the white background.

Note that you’ll want to configure the camera optics to zoom in on each bean so you get a lot of detail. Color tracking and stats methods aren’t particularly precise with a low pixel count. You’ll be able to do this work at a resolution of 320x240, which will give you a lot of bean to work with.
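For a rough sense of scale – all the physical numbers here are assumptions, just to show the arithmetic:

```python
# Back-of-the-envelope pixels-per-bean estimate. The field of view and
# bean size are assumed values, not measurements.
import math

res_w = 320          # QVGA width suggested above
fov_w_mm = 60.0      # assumed horizontal field of view at the chute
bean_mm = 10.0       # assumed bean length

px_per_mm = res_w / fov_w_mm   # ~5.3 px per mm at the chute
bean_px = bean_mm * px_per_mm  # bean length in pixels
# Approximate the bean as an ellipse about half as wide as it is long.
blob_px = math.pi * (bean_px / 2) * (bean_px / 4)

print(round(px_per_mm, 1), round(bean_px), round(blob_px))
```

Roughly a thousand blob pixels under those assumptions – plenty for color statistics; a much wider field of view would shrink that quickly.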

Thanks!

When you say “configure the camera optics” are you referring to the physical set up? As in, choosing a lens, how to position the camera physically, adjusting the field of view of the camera? Or are you referring to some type of digital configuration? Or both?

Basically, I would want the bean to take up as much space in the captured frame as possible. The trade-off is that I also want to move a lot of beans through the system, which means the bean would be traveling at speed in front of the lens. If I could take a high-resolution picture that is “all bean,” that’s great, but if I have trouble getting the bean in the frame as it flies by, I would need a wider field of view. When you say the stats and color tracking aren’t precise at a low pixel count, can you give me an estimate of how low is low? Is there a recommended minimum total pixel count in a blob, for instance?

Also, I’ve noticed that the ‘blob’ function captures a rectangle, which is fine with me, but does it include the non-colored aspects when calculating a histogram? For instance, if I have a white background, will it include the white background in the histogram? Or just the things it recognizes as a blob?

Thanks so much.

Woah, sorry, I forgot to answer more on this thread. I’m very sorry. You need to ping me sometimes since I get a little overloaded.

Are you still working on this and do you need help?

Hello,

I’m going back to this topic because I need the same thing to sort chickpeas … the good ones are yellow, the bad ones are green and must be removed from the lot. I can handle all the mechanical parts, but the software part is really very hard for me – my coding level is Arduino lvl 2, lol. OpenMV, OpenCV, TensorFlow – all these things are out of my league. I need someone to help me with this. I would like this project to be open source and able to serve all those who need it; commercial machines sell for several thousand euros. Is anyone motivated to help me lead this project?!

Hello buddy, my project is the same as yours. Any progress on the accuracy of your color detection? Please let me know – it would help me a lot.

Not to be rude… But your messages above basically say you want someone to build your project for free, with no interest in even putting some effort into trying to use our system, which makes this pretty simple… We have color tracking examples that literally do what you want; you just have to spend an hour or so tuning them…

Like, if you have a particular ask then I can just answer that… But when the ask is just “I have a project and I need help,” and you don’t ask for anything in particular, it’s not really possible for me to help you.

A better question would be: “I’m having issues with a function that doesn’t seem to work the way I expect after reading the docs – can you clarify how it works?” Etc.

I’m going to check all your solutions, but it’s really hard for me.

My OpenMV Cam H7 is here! :slight_smile:

Looking around a little, I think with a little time I can do something cool with this. Color tracking seems to be a solution … but I need to find the LAB thresholds … not so easy …

What should I use to find the LAB thresholds of the object I want to detect?

Any pointers to guide me?

Thanks for your time.

Oh, I found the image histogram info and image statistics info … maybe that’s the way. I will play with this tonight.

Tools → Machine Vision → Threshold Editor
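The LAB tuple the Threshold Editor gives you drops straight into find_blobs(). A minimal sketch to run on the camera – the threshold numbers below are placeholders, not real chickpea values:

```python
# OpenMV MicroPython sketch. Paste the (L_min, L_max, A_min, A_max,
# B_min, B_max) tuple from the Threshold Editor below - the numbers
# here are placeholders for a "greenish" range, not measured values.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

green_chickpea = [(30, 80, -64, -8, 0, 40)]  # placeholder thresholds

while True:
    img = sensor.snapshot()
    for blob in img.find_blobs(green_chickpea, pixels_threshold=100,
                               area_threshold=100):
        img.draw_rectangle(blob.rect())  # mark rejects while debugging
```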