Light cone correction in OpenMV: Strategies for improving line tracking

Hello everyone,

I’m hoping someone here on the forum can help me with a problem we’re having with our OpenMV project. We are using the camera and the camera’s light shield to track a black line on a white background. Unfortunately, the light shield creates a cone of light that makes determining the black threshold much more difficult.

The problem is this: In the center of the light cone, we often have a gap in the line where the light shines directly on the surface.

If we increase the threshold, the edges of the image are also recognized as black, which leads to incorrect driving instructions to our Arduino.

We can determine the point at which the light hits the surface relatively accurately. It would therefore be helpful to find a method to adjust the brightness values outward from that point. Unfortunately, trial and error with gamma correction has not yielded the desired results, since gamma correction operates on the entire image and does not specifically address the light cone.

Do any of you have experience with light cone correction or suggestions on how we can adjust the brightness values near the light cone? Maybe there is a way to mask or specifically edit a certain area of the image to make the line more visible.

I appreciate any help or advice you can give me!

Many thanks in advance!

Best regards,
PixelBot

Since I was only allowed to add one image, here is the image with the higher threshold value.

Hi, you can use multiple thresholds. However, this won’t really help you here as the LED lights make the line look like the background.

Can you design your algorithm to handle the gap? That’s how the line-following robot we built works: it drives on dashed lines using the get_regression(robust=True) method.
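For reference, the core of that loop looks something like this (a sketch, not our robot’s exact code; the threshold and resolution are placeholders you’d tune for your setup):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

GRAYSCALE_THRESHOLD = [(0, 64)]  # placeholder range for the dark line pixels

while True:
    img = sensor.snapshot()
    # robust=True uses a Theil-Sen style fit, so outliers and gaps
    # (e.g. dashed lines) barely affect the fitted line
    line = img.get_regression(GRAYSCALE_THRESHOLD, robust=True)
    if line and line.magnitude() > 2:  # ignore weak/unreliable fits
        img.draw_line(line.line(), color=127)
        # steer the robot using line.theta() and line.rho() here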

Otherwise, I’d remove the light shield, add a diffuser, or place it somewhere else.

Hello,

Many thanks for the helpful suggestions! I have now stuck a piece of parchment paper over the LEDs, which has softened the brightest part of the image somewhat.
Unfortunately, the cone of light itself remains, which continues to present us with challenges.

The line tracking works well overall, but there are occasional gaps in the line, some so large that the camera cannot see where the line resumes.
Our goal is for the robot to navigate these areas correctly and align itself with the line before the gap occurs.
The part of our line recognition system that deals with the gaps still has difficulties.

We have already experimented with masks, but these do not seem to work.
Using the flood-fill method to color the line also slowed our program down without bringing any significant success.

I wonder if selective darkening is possible?
We are currently working on an LED ring and hope to get the problem under control with this, but it would be interesting to know if there are also programmatic solutions to counteract the light problem.

I look forward to further ideas and suggestions!

Best regards!

Here is a picture with a normal line:

Yes, so, any of the line operation functions in the API like add(), sub(), min(), max(), etc. can be used with a template image that has brighter areas.

You’d create the template image on your PC using GIMP or Paint. Then you just add() it to the image. This can brighten the corners, for example.

Line operations have been optimized, by the way. So, once you load the image into RAM they will be extremely fast and should not slow down the main processing code.

I’d capture this template image by pointing the camera at a surface without any lines. Use grayscale.

Your goal is to brighten the edges here. So, you probably want to invert the captured image, find the minimum pixel value of the inverted image, and then subtract that from all inverted pixels. That creates a template image that, when added to the normal image, brightens the edges.

Note: You can do all this on the camera via a calibration step if you like. Our API has all the functions. So, you could have the code calibrate itself via a button press on startup (or after a delay) and then apply the above process to create the template image.

It’s something like:

import sensor

# capture a reference frame of a line-free surface and copy it off the frame buffer
calibration_image = sensor.snapshot().copy()
calibration_image.to_grayscale()  # if not already grayscale
calibration_image.invert()        # dark edges become bright
# shift the template so its darkest pixel is zero; add() then only
# compensates for the uneven illumination instead of brightening everything
min_val = calibration_image.get_stats().min()
calibration_image.sub(min_val)

And then in your loop you do:

img = sensor.snapshot()
img.add(calibration_image)  # brighten the dark edges every frame

Hello,
thank you very much for all the information. The suggestions worked like a charm. We are now using the add() function.

 img.add("mask.pgm")

I still have one question: if we use it like this, will the image be constantly reloaded? Can we make this more efficient, i.e., only load it once at the start?
Many thanks in advance.
Kind regards

Hi, you cannot pass a path to the add() function anymore. The API used to support this, but we removed that functionality, mainly because there’s a massive speed hit from loading the image, using it, and then unloading it every frame; it would encourage slow code.

Use the image.Image() constructor to allocate the image on the heap once at startup, and then pass the returned image object to add().
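E.g. (a minimal sketch; it assumes the file is still named mask.pgm and keeps your sensor setup as-is):

import sensor, image

# ... sensor setup as in your existing code ...

# load the mask from flash into RAM once, at startup
calibration_image = image.Image("mask.pgm")

while True:
    img = sensor.snapshot()
    img.add(calibration_image)  # no per-frame file I/O anymore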