find_lines vs cv2.HoughLines ... which is better?

I’ve been playing around with some of the examples on /usr/

I’ve done HoughLine detection using cv2.HoughLines()… is find_lines() “better” (I’m looking at the find_lines.py example)?
I’m just trying to figure out what all libraries/ modules have been included by default or are used for a specific reason I don’t know about.

Thank you!

It’s a different implementation, optimized to run on OpenMV. Note that OpenCV libraries are not available on OpenMV.

If you were to use our find_lines function on a Pi, it would be significantly faster than the OpenCV find-lines function. We use the output from a Sobel filter to feed the Hough transform. OpenCV does this in many, many more steps.

That said, our method is naturally somewhat noisier, but not explicitly worse.

OpenCV takes a more decoupled approach to machine vision code requiring you to build up an algorithm in steps. Our goal with our find functions is to more or less do all possible work in one method call and give you the results you likely want.

thanks for all of the answers!
is there a list (I’m sure I’m missing its location) of the OpenMV functions/ modules that have been implemented?

See the docs

Thanks! I’d found that page but hadn’t scrolled down far enough!

Is there a numpy like module or function?
I haven’t found anything that jumped out at me …

In Python I’d use it for functions like numpy.array or numpy.max …

thank you!

No, there’s no numpy support. What do you need that for?

I use it during an object recognition task.
I look at flow out of a nozzle … detect the edges of the flow and “draw lines” showing the edges of the flow to see if the nozzle needs cleaning.
(high-level view of the steps: image thresholding, Canny edge detection, contour detection, HoughLinesP, find the “highest left, lowest left” and “highest right, lowest right” points on the outer edges of the flow, draw two lines, one top left to bottom left and one top right to bottom right … and then see if the flow out of the nozzle is “straight down” or “cone shaped”).

The step that finds the top and bottom points on each edge requires an array.
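For what it’s worth, that point-picking step doesn’t strictly need numpy: plain min()/max() with a key function can find the extreme points on each edge. A minimal pure-Python sketch (the edge-point values here are made up for illustration):

```python
# Edge points as (x, y) tuples -- stand-ins for real detected edges.
left_edge  = [(40, 10), (42, 55), (38, 120)]
right_edge = [(90, 12), (95, 60), (99, 118)]

# Image y grows downward, so "highest" means the smallest y value.
top_left     = min(left_edge,  key=lambda p: p[1])
bottom_left  = max(left_edge,  key=lambda p: p[1])
top_right    = min(right_edge, key=lambda p: p[1])
bottom_right = max(right_edge, key=lambda p: p[1])

print(top_left, bottom_left, top_right, bottom_right)
```

The two result pairs give you the endpoints for the left and right lines.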

Okay, I see.

So, find_line_segments does just what you need:

http://docs.openmv.io/library/omv.image.html#image.image.find_line_segments

And it returns a list of these things:

http://docs.openmv.io/library/omv.image.html#class-line-line-object

You just need to set up your thresholds and merge settings and you’ll basically get two lines on either side of the nozzle that start and end on either side of the stream until they hit the nozzle or the bottom of the image.

Anyway, there’s a find_line_segments example script for this… start with it.

Also, since you know what openCV does to find lines, here’s what we do: https://github.com/openmv/openmv/blob/master/src/omv/img/hough.c

Basically, we Sobel filter the image and use the gradient and magnitude output from each pixel to feed a Hough accumulator array. By doing this we skip the Canny process and the 0–180 degree Hough line iteration on each pixel. This method is a little more noisy but an order of magnitude faster. After that, we look for Hough peaks above your given threshold, and we also make sure to only return true peaks (i.e. peaks with no value near them that is greater than them). Next we merge all Hough lines which have similar rho and theta values. We do this because the raw output of the Hough transform produces a ton of lines; by merging lines you get cleaner output. Finally, we compute the start and end points of each infinite line.
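The merge step described above can be sketched in a few lines of plain Python. This is only an illustration of the idea, not OpenMV’s actual implementation; the margin values and the averaging strategy are assumptions:

```python
def merge_hough_lines(lines, rho_margin=10, theta_margin=10):
    """Collapse (rho, theta) pairs that are within the given margins
    of an already-merged line into a single averaged line."""
    merged = []
    for rho, theta in lines:
        for i, (mrho, mtheta) in enumerate(merged):
            if abs(rho - mrho) <= rho_margin and abs(theta - mtheta) <= theta_margin:
                merged[i] = ((rho + mrho) / 2, (theta + mtheta) / 2)
                break
        else:
            merged.append((rho, theta))  # no nearby line: keep as new
    return merged

# Two near-duplicate lines collapse into one; the third stays separate.
print(merge_hough_lines([(100, 45), (104, 47), (200, 90)]))
```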

Find line segments uses the above code and then walks each infinite line while running the Sobel algorithm on the pixels underneath to determine the start and end points of line segments on that infinite line. A better method would have been to keep track of what points contributed to the infinite line when we ran the above algorithm, but we don’t have the memory for that. Anyway, to make walking the infinite line work better I actually walk three lines in parallel to each infinite line to deal with slight curves. Then, finally, the real magic happens during merging. Walking lines with the above approach produces a lot of small unconnected line segments right next to each other; our merge algorithm joins all these small segments into one line segment that’s returned to you.

Anyway, the point of both these methods is to get you the output you want without having to deal with all the details between.

Dude.

Let me go through and read that stuff … but wow.
That sounds pretty awesome.

And that much faster? Wow. I wonder why OpenCV doesn’t do it that way …

It looks like this operates on a region of interest … cool idea.
Is there an “easy” way to have this look at two regions of interest within a single image without having to call the algorithm twice?
(i.e., the left and right side of the spray, instead of looking at the entire image? Breaking the image down into two smaller sections may be faster than finding lines/edges over a larger viewing area)

By definition when you use a region of interest the algorithm only does work on that region. So, you need to call the algorithm for each region but it does less work since each region is smaller than the whole image.
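As a sketch, splitting a frame into left/right ROI tuples (the 320×240 frame size is an assumption) and then calling the method once per region might look like this:

```python
# Build (x, y, w, h) ROI tuples covering the left and right halves.
W, H = 320, 240  # assumed frame size

left_roi  = (0, 0, W // 2, H)
right_roi = (W // 2, 0, W - W // 2, H)

# On the camera you would then call the method once per region, e.g.:
#   for l in img.find_line_segments(roi=left_roi): ...
#   for l in img.find_line_segments(roi=right_roi): ...

print(left_roi, right_roi)
```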

In the latest version of OpenMV IDE you can select an area in the frame buffer to get the region of interest coordinates.

OpenCV does this differently because their steps are decoupled. The Sobel trick to feed the Hough accumulator only works if you tie the two algorithms together. OpenCV takes the more academic view on things: it does the general operation but tries to be less specific to the application.

For another example, look how to find color blobs in OpenCV and then compare that to our find_blobs method.

That makes a lot of sense.
General vs Specialized Application.

In the image.find_line_segments method … would the syntax be something like:

for l in img.find_line_segments(roi = (1,1,200,200), threshold = 1000, theta_margin = 15, rho_margin = 15, segment_threshold = 100):
    img.draw_line(l.line(), color = (255, 0, 0))

for the roi defined by the tuple (1,1,200,200)? Or is it treated like an array [x,y,w,h]?
(sorry, this isn’t in the example)

Yes, all ROIs are (x, y, w, h). Select an area in the frame buffer of the IDE and it should tell you the ROI for that area.

I’ve been playing around with the find_line_segments() method.

Using this example loop I can return the line object:

for l in img.find_line_segments(threshold = 1000, theta_margin = 15, rho_margin = 15, segment_threshold = 100):
    img.draw_line(l.line(), color = (255, 0, 0))
    print(l)

However, I’d like to return the “longest line” found by img.find_line_segments.
I know that if I do

print(l[4])

I can get the line length for the lines that are being returned … but, I can’t do

print(max(l[4]))

because I get an error … i.e., ‘l’ is not being returned like an array (I’m used to Matlab and being able to iterate over an array…).
How should I approach something like this?

Um, see the line-following code under the color tracking examples. There’s some code there that does exactly what you want. Basically, you need to use a lambda method. Google “python get max of object in array”.

I think what you want looks like this:

max(img.find_line_segments(), key = lambda x: x.length())

Basically, the key argument of the max function tells max how to find the max of each item. It then returns the item in the list with the max key. Note that you don’t need the for loop anymore.
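Here’s a self-contained illustration of that max(..., key=...) pattern using stand-in line objects (FakeLine is invented for the example; on the camera the list would come from img.find_line_segments(), and length() is the line object’s method from the docs):

```python
class FakeLine:
    """Stand-in for the line objects returned by find_line_segments()."""
    def __init__(self, length, theta):
        self._length, self._theta = length, theta
    def length(self):
        return self._length
    def theta(self):
        return self._theta

lines = [FakeLine(30, 15), FakeLine(85, 90), FakeLine(60, 0)]

# key= tells max() how to score each item; the item with the
# largest score is returned (no for loop needed).
longest = max(lines, key=lambda l: l.length())
print(longest.length())  # 85
```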

max(img.find_line_segments(), key = lambda x: x.length())

Is this the length of x1 - x2?
Or is this the Euclidean distance from (x1, y1) to (x2, y2)?

I ask because for the lines that have been found as the “max” … the theta is always equal to zero … that seems… hinky.

Length is the Euclidean distance. If you are getting a theta of zero, that would be a horizontal line in the image somewhere.
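In other words, a segment’s length is computed from both endpoints, something like:

```python
import math

def segment_length(x1, y1, x2, y2):
    """Euclidean distance between the segment's two endpoints."""
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(segment_length(0, 0, 3, 4))  # 5.0
```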

I am getting a theta of zero for several values … but that seems to happen when there aren’t any lines in the ROI to be detected.
Is there a way to filter those out ?

I’m thinking a series of “if”s … if theta != 0 then max(img. … ) … although I’m not quite sure how to pull out the theta term for the comparison.

EDIT:
This doesn’t work:
rl = max(img.find_line_segments(roi = r_r, threshold = 1000, theta_margin = 15, rho_margin = 15, segment_threshold = 100), key = lambda x: x.length() if key = lambda x:x.theta()!=0)

I didn’t expect it to … but hey 🙂
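A working version of that idea is to filter first, then take the max. A sketch with stand-in objects (FakeLine is invented for the example; on the camera the list would come from img.find_line_segments()). max()’s default= argument avoids a ValueError when nothing passes the filter, though it’s worth verifying that your MicroPython build supports it:

```python
class FakeLine:
    """Stand-in for the line objects returned by find_line_segments()."""
    def __init__(self, length, theta):
        self._length, self._theta = length, theta
    def length(self):
        return self._length
    def theta(self):
        return self._theta

lines = [FakeLine(30, 0), FakeLine(85, 90), FakeLine(60, 45)]

# Drop theta == 0 lines first, then take the longest survivor.
rl = max((l for l in lines if l.theta() != 0),
         key=lambda l: l.length(),
         default=None)  # default= is standard Python 3.4+

print(rl.length() if rl else None)
```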