Detecting and navigating in a matrix

Hello!
I’m building a robot to move around a well-defined matrix by following perpendicular lines.
I can navigate very reliably along one line using get_regression, but I also need to detect when a perpendicular line is coming up, so I can localize the robot and turn it onto the correct line. I’ve attached a picture of this.
Can anyone tell me what would be the most efficient way to detect these two lines?

I’m thinking of mixing get_regression for the navigation and find_lines for the perpendicular lines, but find_lines seems slow and not as robust as get_regression.

Thanks!!!

Use find_line_segments and search only a small window at the front of the robot’s view.
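
Something like this, as a minimal sketch (QQQVGA grayscale assumed; the search window is just a placeholder to tune for where crossings first appear in your view):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)   # 80x60
sensor.skip_frames(time = 2000)
clock = time.clock()

SEARCH_ROI = (0, 0, 80, 15)  # thin band at the front of the robot's view

while True:
    clock.tick()
    img = sensor.snapshot()
    # Only scan the small window so the per-frame cost stays low.
    for seg in img.find_line_segments(roi=SEARCH_ROI, merge_distance=5):
        img.draw_line(seg.line(), color=127)
    print(clock.fps())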

Hey, thanks for the reply and the idea! I just tried it, and it still seems to slow the processing down by almost half.
I’m running about 73 fps at 80x60, and once I call find_line_segments it goes down to 37. But my biggest concern is that it’s not as stable as get_regression.
Am I doing anything wrong here?

No, that’s to be expected. You can also just use get_regression again with a different ROI that’s horizontal.
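
A minimal sketch of what I mean (both ROIs are illustrative for an 80x60 frame):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)  # 80x60
sensor.skip_frames(time = 2000)

# Tall, narrow band for the line you follow; wide, short band for crossings.
VERTICAL_ROI   = (30, 0, 20, 60)
HORIZONTAL_ROI = (0, 0, 80, 20)

img = sensor.snapshot()
line_v = img.get_regression([(200, 255)], roi=VERTICAL_ROI)
line_h = img.get_regression([(200, 255)], roi=HORIZONTAL_ROI)
if line_h:
    print("possible crossing: theta %d, rho %d" % (line_h.theta(), line_h.rho()))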

Oh, I didn’t know I could run it again!
Sorry, I’m very new to OpenMV!
It is, indeed, much faster and definitely more robust, but unfortunately it gets pulled too much by the main vertical line, so I’m starting to think I’m just not following the right strategy.
I guess I should have sent an image of what the camera feed looks like, so here I’m showing both lines with their ROIs.
Any advice on a new strategy or idea on how to improve that detection?
Again, thank you soooo much for your help here! This is truly an amazing product!
second get_regression.jpg

Use the rotation_corr() method with the bird’s-eye-view example to project the image as a top-down transform. This will make the lines appear straighter.

Second, use the robust argument to get the robust version of the linear regression. It’s slower, but that issue will go away.
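
A rough sketch of both suggestions together (the rotation angle is only a starting point; tune it to your camera tilt as in the bird’s-eye-view example):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)
sensor.skip_frames(time = 2000)

while True:
    img = sensor.snapshot()
    img.rotation_corr(x_rotation=30)  # project toward a top-down view
    # robust=True is slower but much less affected by outlier pixels.
    line = img.get_regression([(200, 255)], robust=True)
    if line:
        img.draw_line(line.line(), color=127)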

Hey,
Thanks for the advice on rotation_corr. Unfortunately the camera is too close to the floor, so the correction creates too much black space. But your advice got me to play with lens_corr, and I got a much better-looking image.

So I’ve been making some progress and improved the process quite a bit. I’m basically just adjusting the ROI dynamically so that my vertical line does not get tilted by incoming horizontal lines. That helped a lot. But when I try to follow what you said and do a new get_regression on a more horizontal ROI, it seems fairly biased toward finding vertical lines only.
Here is what I’m doing:

GRAYSCALE_THRESHOLD = (200, 255)
MAG_THRESHOLD = 12
MAG_THRESHOLD_HORIZONTAL = 6

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)   # 80x60
sensor.skip_frames(time = 2000)
clock = time.clock()

previousROIvertical = (0, 0, 80, 60)  # start by searching the whole frame
marginROI = 10                        # extra width around the tracked line

while True:
    clock.tick()
    print_string = ""

    # I need white lines on black, but for now I only have a white table
    # with black lines, so inverting the image does the trick.
    img = sensor.snapshot().invert()
    img = img.lens_corr(strength = 1.5, zoom = 1.0)
    img.binary([GRAYSCALE_THRESHOLD])

    # Look for the main vertical line.
    line = img.get_regression([(255, 255)], roi=previousROIvertical, robust=True)

    if line and (line.magnitude() >= MAG_THRESHOLD):
        # Compute the ROI for the next frame.
        if line.x1() < line.x2():
            previousROIvertical = (line.x1()-int(marginROI/2), 0, marginROI+line.x2()-line.x1(), 60)
        else:
            previousROIvertical = (line.x2()-int(marginROI/2), 0, marginROI+line.x1()-line.x2(), 60)

        # Draw the ROI and the fitted line.
        img.draw_rectangle(previousROIvertical, color=80)
        img.draw_line(line.line(), color=127)

        print_string = "Line Ok: %d - line t: %d, r: %d" % (line.magnitude(), line.theta(), line.rho())

    elif line and (line.magnitude() < MAG_THRESHOLD):
        print_string = "Not a good line: %d" % (line.magnitude())
        previousROIvertical = (0, 0, 80, 60)  # Reset the ROI so we search the whole frame again

    else:
        print_string = "Line Lost"
        previousROIvertical = (0, 0, 80, 60)  # Reset the ROI so we search the whole frame again

    # Look for horizontal lines in a band at the top (0,0,80,20).
    lineH = img.get_regression([(255, 255)], roi=(0, 0, 80, 20), robust=True)
    img.draw_rectangle((0, 0, 80, 20), color=80)  # Draw the horizontal-line ROI

    if lineH and (lineH.magnitude() >= MAG_THRESHOLD_HORIZONTAL):
        img.draw_line(lineH.line(), color=127)
        print_string = print_string + " | LineH Ok: %d - line t: %d, r: %d" % (lineH.magnitude(), lineH.theta(), lineH.rho())

    elif lineH and (lineH.magnitude() < MAG_THRESHOLD_HORIZONTAL):
        img.draw_line(lineH.line(), color=127)
        print_string = print_string + " | Not a good lineH: %d " % (lineH.magnitude())

    print("FPS %f, %s" % (clock.fps(), print_string))

Here is an example of what I’m getting.


I would expect the horizontal fit to follow the white line much more closely, but I can’t get it to do that at all.

Maybe my strategy here is wrong and I should be using some other function?
I have tried using find_lines and find_line_segments, but they produce much more unreliable results… or, most likely, I just haven’t tuned the parameters well enough.

Any idea how I could improve this detection?

Thanks!

Mmm, do this: zero out the ROI at the top of the image. I.e., draw a black rectangle over the area where you see the center of the vertical line at the top. This will remove it.

You should use the vertical line center to dynamically move the rectangle center.
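
Something like this, slotted into your loop after the vertical fit (the mask width is a guess to tune):

MASK_W = 12  # black band a bit wider than the line itself

line = img.get_regression([(255, 255)], roi=previousROIvertical, robust=True)
if line:
    cx = (line.x1() + line.x2()) // 2  # x center of the vertical line
    # Black out the vertical line inside the top band before the horizontal fit.
    img.draw_rectangle(cx - MASK_W // 2, 0, MASK_W, 20, color=0, fill=True)

# Now the horizontal fit in the top band can't latch onto the vertical line.
lineH = img.get_regression([(255, 255)], roi=(0, 0, 80, 20), robust=True)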

hmmm… interesting approach!
So I’m doing

    img.draw_rectangle((previousROIvertical[0]-3, previousROIvertical[1], previousROIvertical[2]+6, previousROIvertical[3]), color=0, fill=True)

Once I’m done with the vertical line, I make the rectangle a bit bigger than the ROI to be sure I don’t leave any white spots behind.

Still, get_regression seems to prefer vertical lines over horizontal ones.
Is there anything I may be missing?
Here are a couple screenshots.
zeroed v roi.jpg
zeroed v roi2.jpg

That’s definitely not expected behavior.

Upload a pic of the unmodified image and I’ll fix your code.

Thanks for your help!!!
So, here is what I have:

BINARY_VIEW = True
GRAYSCALE_THRESHOLD = (200, 255)
MAG_THRESHOLD = 12
MAG_THRESHOLD_HORIZONTAL = 5

import sensor, image, time, math, pyb

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)
sensor.set_vflip(True)
sensor.set_hmirror(True)
sensor.set_auto_gain(True)
sensor.set_auto_exposure(True)
sensor.set_auto_whitebal(True)

sensor.skip_frames(time = 2000)
clock = time.clock()

uart = pyb.UART(3, 115200)

previousROIvertical = (0,0,80,60)
previousROIhorizontal = (0,0,80,60)

marginROI = 10
marginROIhorizontal = 3

output = 0  # turn value used in the status prints below

while True:
    clock.tick()
    print_string = ""

    img = sensor.snapshot().invert()
    img = img.lens_corr(strength = 1.5, zoom = 1.0)

    if BINARY_VIEW:
        img.binary([GRAYSCALE_THRESHOLD])

    #VERTICAL LINE
    line = img.get_regression([(255, 255)], roi=previousROIvertical, robust=True)
        
    if line and (line.magnitude() >= MAG_THRESHOLD):
        #Compute ROI for next frame
        if line.x1() < line.x2():
            previousROIvertical = (line.x1()-int(marginROI/2), 0, marginROI+line.x2()-line.x1(), 60)
        else:
            previousROIvertical = (line.x2()-int(marginROI/2), 0, marginROI+line.x1()-line.x2(), 60)

        #Draw ROI
        img.draw_rectangle((previousROIvertical[0]-3, previousROIvertical[1],previousROIvertical[2]+6, previousROIvertical[3]), color=0, fill=True)        #Zero out the ROI and 3 pixels more on each side of the X axis 
        img.draw_rectangle(previousROIvertical, color=80)
        img.draw_line(line.line(), color=127)

        #Add info
        print_string = "Line Ok: %d - turn %d - line t: %d, r: %d" % (line.magnitude(), output, line.theta(), line.rho())

    elif line and (line.magnitude() < MAG_THRESHOLD):
        print_string = "Not a good line: %d - turn %d" % (line.magnitude(), output)
        previousROIvertical = (0,0,80,60)	#Reset V ROI

    else:
        print_string = "Line Lost - turn %d" % (output)
        previousROIvertical = (0,0,80,60)	#Reset V ROI


    #HORIZONTAL LINE
    lineH = img.get_regression([(255, 255)], roi=previousROIhorizontal, robust=True)

    #Draw the ROI
    img.draw_rectangle(previousROIhorizontal, color=80)

    if lineH and (lineH.magnitude() >= MAG_THRESHOLD_HORIZONTAL):
        #Compute ROI for next frame
        if lineH.y1() < lineH.y2():
            previousROIhorizontal = (0, lineH.y1()-int(marginROIhorizontal/2), 80, lineH.y2()-lineH.y1()+marginROIhorizontal)
        else:
            previousROIhorizontal = (0, lineH.y2()-int(marginROIhorizontal/2), 80, lineH.y1()-lineH.y2()+marginROIhorizontal)
        #Draw ROI
        img.draw_line(lineH.line(), color=127)

        #Add info
        print_string = print_string + " | LineH Ok: %d - line t: %d, r: %d  | %s" % (lineH.magnitude(), lineH.theta(), lineH.rho(), lineH.line())

    elif lineH and (lineH.magnitude() < MAG_THRESHOLD_HORIZONTAL):
        print_string = print_string + " | Not a good lineH: %d " % (lineH.magnitude())
        previousROIhorizontal = (0,0,80,60)
    else:
        previousROIhorizontal = (0,0,80,60)

    print("FPS %f, %s" % (clock.fps(), print_string))

And here are a couple more examples of the before and after images:
zeroed v roi3_pre.jpg
zeroed v roi3_post.jpg
I really appreciate your help looking into this!

Seems like something is wrong with robust linear regression. It gets the answer wrong with the horizontal line.

BINARY_VIEW = True
GRAYSCALE_THRESHOLD = (200, 255)
MAG_THRESHOLD = 12
MAG_THRESHOLD_HORIZONTAL = 5

import sensor, image, time, math, pyb

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)
sensor.set_vflip(True)
sensor.set_hmirror(True)
sensor.set_auto_gain(True)
sensor.set_auto_exposure(True)
sensor.set_auto_whitebal(True)

sensor.skip_frames(time = 2000)
clock = time.clock()

uart = pyb.UART(3, 115200)

previousROIvertical = (0,0,80,60)
previousROIhorizontal = (0,0,80,60)

marginROI = 10
marginROIhorizontal = 3

output = 0

while True:
    clock.tick()
    print_string = ""

    img = image.Image("pre.bmp", copy_to_fb=True).invert().to_grayscale()
    img = img.lens_corr(strength = 1.5, zoom = 1.0)

    if BINARY_VIEW:
        img.binary([GRAYSCALE_THRESHOLD])
        
    copy = img.copy()

    #VERTICAL LINE
    line = copy.get_regression([(255, 255)], roi=previousROIvertical, robust=True)
        
    if line and (line.magnitude() >= MAG_THRESHOLD):
        #Compute ROI for next frame
        if line.x1() < line.x2():
            previousROIvertical = (line.x1()-int(marginROI/2), 0, marginROI+line.x2()-line.x1(), 60)
        else:
            previousROIvertical = (line.x2()-int(marginROI/2), 0, marginROI+line.x1()-line.x2(), 60)

        #Draw ROI
        copy.draw_rectangle((previousROIvertical[0]-3, previousROIvertical[1],previousROIvertical[2]+6, previousROIvertical[3]), color=0, fill=True)        #Zero out the ROI and 3 pixels more on each side of the X axis 
        img.draw_rectangle(previousROIvertical, color=80)
        img.draw_line(line.line(), color=127)

        #Add info
        print_string = "Line Ok: %d - turn %d - line t: %d, r: %d" % (line.magnitude(), output, line.theta(), line.rho())

    elif line and (line.magnitude() < MAG_THRESHOLD):
        print_string = "Not a good line: %d - turn %d" % (line.magnitude(), output)
        previousROIvertical = (0,0,80,60)	#Reset V ROI

    else:
        print_string = "Line Lost - turn %d" % (output)
        previousROIvertical = (0,0,80,60)	#Reset V ROI


    #HORIZONTAL LINE
    lineH = copy.get_regression([(255, 255)], roi=previousROIhorizontal, robust=False)

    #Draw the ROI
    img.draw_rectangle(previousROIhorizontal, color=80)

    if lineH and (lineH.magnitude() >= MAG_THRESHOLD_HORIZONTAL):
        #Compute ROI for next frame
        if lineH.y1() < lineH.y2():
            previousROIhorizontal = (0, lineH.y1()-int(marginROIhorizontal/2), 80, lineH.y2()-lineH.y1()+marginROIhorizontal)
        else:
            previousROIhorizontal = (0, lineH.y2()-int(marginROIhorizontal/2), 80, lineH.y1()-lineH.y2()+marginROIhorizontal)
        #Draw ROI
        img.draw_line(lineH.line(), color=127)

        #Add info
        print_string = print_string + " | LineH Ok: %d - line t: %d, r: %d  | %s" % (lineH.magnitude(), lineH.theta(), lineH.rho(), lineH.line())

    elif lineH and (lineH.magnitude() < MAG_THRESHOLD_HORIZONTAL):
        print_string = print_string + " | Not a good lineH: %d " % (lineH.magnitude())
        previousROIhorizontal = (0,0,80,60)
    else:
        previousROIhorizontal = (0,0,80,60)

    print("FPS %f, %s" % (clock.fps(), print_string))

So, I switched to the regular linear regression.

Robust is not stable for a pure horizontal line…

Must be something here losing precision.

https://github.com/openmv/openmv/blob/master/src/omv/img/stats.c#L1070

I think it’s something to do with the atan2 as x gets close to zero.
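
A quick standalone illustration of that failure mode (plain Python, made-up numbers):

import math

# With the x component pinned near zero, the sign of tiny noise in y
# flips atan2 by almost 180 degrees:
for y in (0.01, -0.01):
    print("y = %6.3f -> %7.2f deg" % (y, math.degrees(math.atan2(y, 0.0001))))
# y =  0.010 ->   89.43 deg
# y = -0.010 ->  -89.43 deg
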
regular.mp4 (85.6 KB)
robust.mp4 (63.7 KB)

Aha!!!
That’s a great finding!
Thank you very much for putting the time into this. I hope this can help other people too!
Any chance this gets fixed in a new release? Please let me know if there is anything I can do to help here. I’ll be happy to do so!

So, using the regular regression makes it work a lot better, but I’m getting weird results with the magnitude. It seems like I get higher magnitude values when the line is at the top of the image, even though it is much smaller than when the line is in the middle with a lot more white pixels.
Here is an example right before and after finding the lines:
This is the lens- and color-corrected image before finding lines.
h magnitude 23_pre.jpg
So I’m getting a Magnitude value of 23 on this one, which is good!
h magnitude 23_post.jpg
But now, when the line is in the middle of the frame with a lot more “substance”…
h magnitude 10_pre.jpg
I’m only getting a magnitude of 10 :frowning:
h magnitude 10_post.jpg
I’d assume this is not really expected, right?

Magnitude is a different calculation for robust versus non-robust.

For non-robust, it’s basically how “liney” the line is, i.e. it gets higher the straighter and thinner the line is.

For robust, it’s basically the size of the line thickness: the length of the mx/my vector that gives the line its direction. That measurement makes sense for the robust operation, not for the non-robust one. So… I just picked a measurement that gives some type of quality info for non-robust.
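
In other words, for the robust fit it’s roughly:

    magnitude ≈ sqrt(mx^2 + my^2)

where (mx, my) is the direction vector of the fitted line.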

I see.
So would there be any easy way to measure the quality of the line in terms of its thickness or area?
EDIT: Just my two cents here. After trying the magnitude in different situations (obviously within my restricted experiment), I’d say it’s actually misleading, as it gives high numbers in scenes that contain no lines at all. Here is an example of a couple of small blobs producing a magnitude of 16, as opposed to some of the previous images with thick lines that were getting about 10.
misleading magnitude_pre.jpg
misleading magnitude.jpg
I think I still need a way to evaluate the line that’s as powerful as the magnitude is when using robust=True, so I’m trying to find rectangles for now, unless you have a better way to do it :slight_smile:
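
One cheap proxy for line “area” I’m also considering is the white-pixel fraction of the ROI straight from the histogram; after binary() the 255 bin holds exactly that (the thresholds below are just guesses):

def white_ratio(img, roi):
    # On a binarized grayscale image, bin 255 is the white-pixel fraction.
    return img.get_histogram(roi=roi).bins()[255]

ROI_H = (0, 0, 80, 20)
lineH = img.get_regression([(255, 255)], roi=ROI_H)
if lineH:
    ratio = white_ratio(img, ROI_H)
    # Reject fits where the band is nearly empty (stray blobs) or nearly full.
    if 0.05 < ratio < 0.5:
        print("plausible line, white ratio = %.2f" % ratio)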

Again, thank you sooo much for your help here!

Yeah, I didn’t have an algorithm template to work with when designing this. I do need to fix the robust linear regression, however.

No worries! You are doing a lot already, helping this many people!
Please let me know if there is anything I can do to help!