Horizontal line detection


I have been making good progress with the OpenMV :slight_smile:
The blob functions give you a great bunch of tools.

But I could use some suggestions for finding a horizontal line in my grayscale/binary frame.
Major problem: my horizontal line is NOT always perfectly horizontal.
So if I make my ROI too narrow, it will miss thin "horizontal" lines that have a bit of an angle.
I’m mainly using area, elongation and rotation to do some filtering.

But this is the frame I'm struggling with. Parameter-wise this is indeed a horizontal line, but as you can see, that's just because I have quite a few pixels "creating" an elongated box (the smallest rectangular box in the picture).
The big box is purely my ROI, for visualization.

Any suggestions on how to make it more robust?

Thank you
Fake line.jpg

Hi, you should use the minimum area rect method returned by find_blobs(). This returns a rectangle that is rotated to match the blob. Also, the minimum area rect has a major axis line and minor axis line which are exactly the values you want. The major axis line will be the longer line.

Oh, really?
I didn't know it would also do the rotation. I should have read the docs in more detail.

Can't wait to give it a try!!!
I think the basics for my project are coming together. The next step is the hardware part and I/O programming, but that should be straightforward (I hope).

By the way: the image.write and image.read have helped me tremendously!!! This is a super function to simulate and tweak the software.

Awesome! Thanks for reading the documentation and using them.

Just to be sure, is this the function you’re referring to?


Will test tomorrow; wondering if it will provide the data I need to do good filtering. But let's try!

For example:

if blob.elongation() > 0.5:
    img.draw_edges(blob.min_corners(), color=(255, 0, 0))
    img.draw_line(blob.major_axis_line(), color=(0, 255, 0))
    img.draw_line(blob.minor_axis_line(), color=(0, 0, 255))

The major axis line is what you want. It will return (x0, y0, x1, y1) of a line that goes through the longest axis of the object. It's based on the min_corners() method, however, so its accuracy depends on that method finding the right corners.


I ran some tests with the min-area rect, and it is indeed a nice function, as the axis does rotate (because of the corners).
I'll just have to use some math to calculate the ANGLE of the rotation if I need it further on.
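The angle math mentioned above is just atan2 over the line's endpoints. A minimal sketch, assuming (as stated earlier in the thread) that major_axis_line() returns an (x0, y0, x1, y1) tuple; the helper name is mine:

```python
import math

def line_angle_degrees(line):
    """Tilt of a line segment (x0, y0, x1, y1) relative to horizontal, in degrees."""
    x0, y0, x1, y1 = line
    # atan2 handles vertical lines without a divide-by-zero
    angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
    # Fold into [-90, 90] so a line and its reverse report the same tilt
    if angle > 90:
        angle -= 180
    elif angle < -90:
        angle += 180
    return angle

# A line rising 10 px over 100 px is tilted roughly 5.7 degrees
print(line_angle_degrees((0, 0, 100, 10)))
```

You could then accept a blob as "horizontal" only if abs(line_angle_degrees(blob.major_axis_line())) stays under some small limit.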

BUT, probably I didn't explain my challenge clearly enough: this min-area rect is not really able to solve the issue.

Objective: detect horizontal lines (some can be slightly tilted).
If I make my ROI too narrow (in height), the blob function will not find lines (thresholds would need to be lowered too much), certainly if they are a bit angled, as too few pixels fall inside the region of interest.
So I need a tall enough ROI to have enough pixels available to trigger the blob function.

By making the ROI sufficiently high, it of course will also detect other "blob areas". Most of them I can filter out.
BUT, in some minor cases I have pixels that are connected in an odd shape in the horizontal direction, making the blob trigger.
And at the same time this generates an elongated bounding box that looks extremely similar to a "horizontal line", even when using the min-area rect.

So the main issue is getting good horizontal line detection without these false detections.
Currently I am thinking of adding a regression function in my ROI. I expect that the magnitude value of the returned line object could give me an indication of how well the pixels are lined up.
But this will probably be too sensitive. So I'm still figuring out if I can come up with something else…
As far as I know this is the only way to determine whether a triggered blob is just a combination of different shapes or really a long, stretched shape of pixels.
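The "how well are the pixels lined up" idea can be tested directly: fit a least-squares line through the blob's pixel coordinates and look at the RMS distance of the pixels from that line. A thin real line gives a small residual; a fat cluster with the same bounding-box aspect gives a large one. A pure-Python sketch of the idea (not the OpenMV regression API; the function name and thresholds are mine):

```python
def line_fit_residual(points):
    """Least-squares fit y = a + b*x over (x, y) pixel coordinates,
    returning the RMS perpendicular distance of the points from the line."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    denom = n * sxx - sx * sx  # assumes the pixels don't all share one x
    b = (n * sxy - sx * sy) / denom          # slope
    a = (sy - b * sx) / n                    # intercept
    # perpendicular distance from each point to the line y = a + b*x
    norm = (1 + b * b) ** 0.5
    rss = sum(((p[1] - (a + b * p[0])) / norm) ** 2 for p in points)
    return (rss / n) ** 0.5

# Thin, genuinely horizontal line: residual stays small (~0.5 px)
thin = [(x, 10 + (x % 2)) for x in range(40)]
# Compact 40x11 blob with a similar elongated bounding box: residual ~3.2 px
blob = [(x, y) for x in range(40) for y in range(5, 16)]
print(line_fit_residual(thin), line_fit_residual(blob))
```

On the camera you wouldn't hand-roll this over every pixel; it's only meant to show that the residual (essentially what a regression's quality measure captures) separates the two cases even when area, elongation and the min-area rect do not.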

In attachment:

  1. Picture of where I get a FALSE line detection (because in this specific case the pixels are connected across the width).
  2. Picture of a detected normal line.
  3. Picture of a normal line that requires a high enough ROI, otherwise I won't be able to detect it (without lowering the pixel threshold too much and creating other noise). This I have covered; just showing an example of a failed detection if the ROI is not high enough.

Curious if magnitude could help, or if I have to do some other tricks while still keeping the process robust.
I do not want to include too many crazy filters, as that will most likely end up with a system that can only handle perfect images.
Real line3.jpg
Fake line.jpg
Real line2.jpg

Magnitude is not doing the trick… no major difference
Will have to come up with some other ideas…

Have you tried using dilate and erode? They are there to clean up these issues. Once you have a binary image you can run them on the image.
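For readers following along: erosion removes set pixels that don't have enough set neighbours, which is what breaks the thin bridges that glue separate shapes into one false "line" blob. A toy pure-Python version of the idea (my own sketch of 3x3 erosion on a 0/1 grid, not the OpenMV implementation; the threshold parameter is an assumption):

```python
def erode(grid, threshold=8):
    """3x3 erosion: keep a set pixel only if at least `threshold` of its
    8 neighbours are also set. Off-grid neighbours count as unset."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not grid[y][x]:
                continue
            neighbours = sum(
                grid[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if (ny, nx) != (y, x)
            )
            out[y][x] = 1 if neighbours >= threshold else 0
    return out

# A lone noise pixel has no neighbours and disappears; the interior of a
# solid block survives.
print(erode([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))
```

On the camera you'd just call the built-in erode/dilate on the binary image; the sketch only shows why a one-pixel-wide connection between two shapes is the first thing to go.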

Will read more in-depth on your problem later.

Erode is already included

Not really sure what’s going on still from your description. Erode seems to be the only way to fix the parts of the image from touching each other.

Hi, I appreciate your effort to help.
I'm working in an industrial environment with quite some "noise" in the image.
Some lines are not super clear, light conditions vary, …
Meaning if I erode too much, I lose valuable markers. So I'm trying to find a good balance to make it robust enough.
I will keep trying and post if I come up with some workarounds.
I need to find a way to get a better grip on the blob result to determine if it's really a "line" or just some pixels that meet all the requirements to theoretically judge it a line.

As a temporary solution I adjusted my LAB values to filter out one part of the false image.
This improves the situation.
Depending on where the robot is, I will have to change the LAB filter values.
This will get me going for the moment.
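Switching LAB filter values by location can be as simple as a lookup keyed on the robot's current zone. A sketch with made-up zone names and threshold tuples (the real values would come from tuning per location; find_blobs() accepts a list of (L_min, L_max, A_min, A_max, B_min, B_max) tuples):

```python
# Hypothetical zones and LAB threshold tuples -- illustration only,
# real values come from tuning on the robot.
LAB_THRESHOLDS = {
    "loading_bay": [(40, 80, -20, 20, -20, 20)],
    "aisle":       [(20, 60, -10, 30, -30, 10)],
}
DEFAULT_ZONE = "aisle"

def thresholds_for(zone):
    """Pick the LAB threshold list for the robot's current zone,
    falling back to a default for unknown zones."""
    return LAB_THRESHOLDS.get(zone, LAB_THRESHOLDS[DEFAULT_ZONE])

# On the camera:
# blobs = img.find_blobs(thresholds_for(current_zone), roi=my_roi)
```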

Is this the line following robot still?

If so, use histeq(). It will help stabilize the image before you binarize it.
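For the curious, histogram equalization spreads a washed-out intensity range across the full grayscale span, which is why it stabilizes thresholding under varying light. A pure-Python sketch of the standard CDF-based algorithm on a flat list of pixel values (my own toy version, not the OpenMV implementation):

```python
def histeq(pixels, levels=256):
    """Histogram-equalize a flat list of grayscale values in [0, levels-1]."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution of pixel values
    cdf, running = [0] * levels, 0
    for i in range(levels):
        running += hist[i]
        cdf[i] = running
    cdf_min = next(c for c in cdf if c > 0)
    # rescale so the occupied range maps onto the full grayscale range
    scale = (levels - 1) / max(1, len(pixels) - cdf_min)
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# Values crammed into [100, 120] get stretched across [0, 255]
print(histeq([100, 105, 110, 115, 120]))  # -> [0, 64, 128, 191, 255]
```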

Yes, it is