Measuring misplacement of SMT components

Hi, I’m upgrading the machine vision of a roughly 25 year old SMT pick and place machine. It has a decent mechanical setup, but the software and electronics are really bad.

Before I dive into learning OpenMV I thought it would be a good idea to post two sample images here and kindly ask whether OpenMV is powerful enough to detect the misalignment of the SMT component in the image. I need to know how far the component is from the center of the image and how much it is rotated. The two sample pictures were taken with different camera and light settings.

I absolutely don’t mind getting a pretty short yes/no/likely/unlikely kind of answer.

PS: I wouldn’t be surprised if aligning SMT components on a PCB is one of the earliest applications for machine vision.

Thanks a million,

As I found the promising “Displacement” feature in the OpenMV API, I went ahead and ordered an OpenMV Cam to give it a try for my project.

I will prepare a template image of the SMT component (basically a binary image of the solder pads), which will be compared to a snapshot from the camera. The snapshot will be postprocessed to also show nothing but the solder pads. I attached a sketch to visualize the procedure.

If this works well, I will need to prepare a template image for each of the different SMT packages (SOT-23, SOIC-8, QFP-44, …).

If somebody else has solved a more or less similar machine vision problem it would be great to hear what OpenMV features have been used.

Hi, sorry for not answering your post. Some questions I can’t answer via my phone, and then they just get lost to time. I will try to answer in a few hours.

No problem. The camera arrived already, but it will easily take several more days until I find time to power it up. Although I’m tempted to test it right away 🙂

In the meantime I read about the frame-differencing feature. It sounds like it could help remove the static background of my camera setup, and might be better than using black/white thresholds to separate the SMT component from the background.

Yeah, so, blob tracking via find_blobs() is basically the best feature for doing what you want. I recommend processing the image so that it looks binary, then running find_blobs() on the binary image to get the part’s location and rotation.
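For the curious, here’s roughly what a blob pass boils down to, sketched in plain Python with no OpenMV libraries (OpenMV’s find_blobs() reports the equivalent quantities via blob.cx(), blob.cy() and blob.rotation()): the centroid comes from the first-order image moments, and the orientation from the second-order central moments.

```python
import math

def blob_stats(binary):
    """binary: 2D list of 0/1 pixels containing one blob.
    Returns (cx, cy, angle) with angle in radians."""
    # First-order moments -> centroid.
    m00 = m10 = m01 = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                m00 += 1
                m10 += x
                m01 += y
    cx, cy = m10 / m00, m01 / m00
    # Second-order central moments -> orientation of the major axis.
    mu20 = mu02 = mu11 = 0.0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                mu20 += (x - cx) ** 2
                mu02 += (y - cy) ** 2
                mu11 += (x - cx) * (y - cy)
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, angle
```

A blob lying along the image diagonal comes out at 45°; on the camera you would just read the same numbers off the blob object instead.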

Frame differencing with background subtraction is the easiest way to make the part pop out from the device head: you subtract a stored picture of the head without a part on it.
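The idea is simple enough to sketch in plain Python (assuming grayscale frames as 2D lists of 0–255 values; on the camera you would use OpenMV’s frame-differencing example instead): threshold the absolute difference against the stored background, and only the part survives.

```python
def difference_mask(frame, background, threshold=30):
    """Return a binary mask: 1 where |frame - background| > threshold."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]
```

The resulting binary mask is exactly the kind of image you would then feed to blob detection.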

Keep in mind that you NEED to absolutely control the lighting in the environment.

Thanks for the feedback. I will focus on find_blobs() instead of find_displacement(). I already learned about the need to absolutely control the lighting and ran some experiments with different setups such as ring lights, dark field, bright field, …
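Once the blob is found, the correction the placement head needs is just the centroid’s offset from the image center plus the measured rotation. A minimal sketch (the function name and the pixels-per-mm calibration factor are my own illustration, not OpenMV API):

```python
def placement_error(blob_cx, blob_cy, blob_rot_deg, img_w, img_h, px_per_mm=1.0):
    """Return (dx_mm, dy_mm, dtheta_deg): how far the part sits from
    the optical center, scaled by a camera calibration factor."""
    dx = (blob_cx - img_w / 2) / px_per_mm
    dy = (blob_cy - img_h / 2) / px_per_mm
    return dx, dy, blob_rot_deg
```

The px_per_mm value has to come from calibrating your own optics, e.g. by imaging a part of known size.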

Will post more information for the community when I make progress with the OpenMV part of my project.

I need to do the exact same thing, except my project is a completely DIY machine, not a retrofit.

I would be very interested in seeing what you come up with. (er… no pun intended).