Hello, all!
New to machine vision, but not new to electronics, coding, or electromechanics.
I’m in the early stages of helping my friend build an automated, ‘semi-industrial’ bean sorter. He grows beans with a relatively high amount of genetic variability and would like to separate out the different subtypes, among other sorting needs (such as sorting by size, “non-bean-ness,” etc.). This ‘type sorting’ is the first goal. An example of what I’m talking about is attached as a picture. All of those beans come from the same species, and a single bean pod may contain all of these subtypes. I do recognize that there are at least two bins in there that look virtually identical to me – I really need to ask my friend how he differentiated between those two specifically. But the rest have (I think) pretty distinct differences to human eyes.
The general mechanical design of such a system is well-established (http://www.satake-usa.com/images/principals-optical-sorting.jpg, for example). I do not plan on reinventing the wheel.
Regarding the sensing aspects, however: I believe simpler methods, like PIRs or color sensors, won’t be up to the task – these ain’t brightly colored M&Ms or Skittles – though I will be doing some experiments to verify that a simpler, cheaper approach won’t work. Assuming it won’t, that pretty much leaves me with machine vision. I have not purchased the OpenMV board yet, but I believe it is the best option (out of a camera on a Pi using OpenCV, the JeVois, the HiCat Livera, and the CMUcam5 Pixy – if anyone knows of others, please let me know).
I plan on sending beans down a chute, either using a gate to trigger the camera or using the camera itself to look for motion, so that each bean is photographed in front of a well-lit background (i.e. whatever I determine to be appropriate based on guides to machine vision lighting such as this one: http://www.ni.com/white-paper/6901/en/). The idea is to take a color picture, perform some statistical analysis on the blob that holds the bean to determine what kind of bean it is, and then either trigger a pneumatic/mechanical classifying system directly from the OpenMV or send a signal to an Arduino to do the same.
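For concreteness, here’s the rough shape of the capture loop I have in mind, written against my reading of the OpenMV MicroPython docs. I don’t have the board yet, so this is untested, and the pin name, LAB blob thresholds, and size cutoffs are placeholders I’d tune on real images:

```python
import sensor, time
from pyb import Pin

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)
sensor.set_auto_gain(False)       # lock gain and white balance so colors are
sensor.set_auto_whitebal(False)   # comparable from one shot to the next

eject = Pin("P0", Pin.OUT_PP)     # placeholder pin: drives the ejector or signals the Arduino
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    # Look for the bean as a darker blob against the bright, well-lit background.
    # The LAB threshold tuple and the size cutoffs below are placeholders.
    blobs = img.find_blobs([(0, 70, -128, 127, -128, 127)],
                           pixels_threshold=200, area_threshold=200)
    if blobs:
        bean = max(blobs, key=lambda b: b.pixels())
        stats = img.get_statistics(roi=bean.rect())  # per-channel stats inside the bean's bounding box
        print(stats.l_mean(), stats.a_mean(), stats.b_mean(),
              stats.l_stdev(), stats.a_stdev(), stats.b_stdev())
        # classification + an eject.high()/eject.low() pulse (or a UART byte to the Arduino) would go here
    print(clock.fps())  # should also answer my own question about speed further down
```

Printing the blob statistics like this is also how I’d gather training data before any classification logic goes in.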
My questions are:
- Do you think this system is up to the task of distinguishing between the types of beans in the picture? I believe this will be the hardest job the system will have to perform. Of course I plan on experimenting, but if the answer is an obvious ‘no,’ I can save $65.
- Would an analysis based on the color statistics of the RGB channels be enough to distinguish between these beans? That was my initial guess at a process. I assumed I would ‘train’ the system by putting some number n of the ‘same’ bean through, taking the means, modes, medians, and standard deviations of the histograms (I need to brush up on the appropriate statistical tools here), and determining some boundaries for identifying the different beans – a rough sketch of what I mean is below this list. Does this seem like the correct path forward?
- Is the LED light on the OpenMV intended to light the space being photographed?
- If I were to send beans down a ramp, take a single picture of each, capture a blob, do a histogram, and classify it, can anyone guess at what speed this could occur? I’m looking for generalities like “more than 5 times a second” or “1-2 seconds” or whatever.
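To make the second question above more concrete, this is the kind of decision rule I was picturing after the ‘training’ runs: nearest centroid, measured in standard deviations. The type names, centroids, and spreads are completely made up, just to show the shape of the idea:

```python
# Per-type centroid and spread of the (L, A, B) blob means collected
# from n known beans during the training runs. Numbers are invented.
TYPES = {
    "type_a": ((32.0, 8.0, 14.0), (3.0, 2.0, 2.5)),
    "type_b": ((61.0, 2.0, 22.0), (4.0, 1.5, 3.0)),
}

def classify(sample, max_dist=3.0):
    """Return the nearest bean type in stdev units, or None if nothing is close."""
    best_name, best_dist = None, None
    for name, (mean, std) in TYPES.items():
        # Distance in per-channel standard deviations from this type's centroid.
        dist = sum(((s - m) / sd) ** 2 for s, m, sd in zip(sample, mean, std)) ** 0.5
        if best_dist is None or dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist is not None and best_dist <= max_dist else None

print(classify((33.5, 7.2, 13.1)))   # -> "type_a" with these invented numbers
```

Whether a few channel statistics give enough separation for the subtler subtypes is exactly what I’d hope to learn from the histograms during those training runs.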
Thanks, all.