Line Following - Blob Spotting/Decoding robot

Update: I don’t know what I was thinking. Optical Flow only gives motion, not position. Needs to be fused with an absolute, not relative, measurement.

That said, the Optical Flow example is super cool and it was fun playing with before I realized my mistake :wink:

Oh, I was talking about cutting the image horizontally into three pieces: a top piece, a middle piece, and a bottom piece. I think you need to do that because the camera perspective skews lines that are far away. Think about how far the part of the line near the top of the image is from the part at the bottom. I don’t think you can treat them as one thing and get good results.
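To illustrate the idea, here's a rough sketch of treating the slices separately and blending their line centroids with weights (the centroid values, weights, and 160px frame width below are made up for the example, not measured from a real track):

```python
# Hypothetical x-centroids of the line's blob in each horizontal slice of a
# 160px-wide frame, with the bottom (nearest) slice weighted most heavily.
slices = [
    (80, 0.7),   # (centroid_x, weight) for the bottom slice
    (90, 0.3),   # middle slice
    (100, 0.1),  # top slice (most distorted by perspective)
]

# Weighted average so the nearby, least-distorted part of the line dominates.
weight_sum = sum(w for _, w in slices)
center_pos = sum(cx * w for cx, w in slices) / weight_sum

# Steering error: offset of the line from the image center (160 / 2 = 80).
deflection = center_pos - 80
```

This way the far-off, perspective-skewed top slice still contributes a little look-ahead without dominating the steering.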

Going to make the optical flow a lot better soon. But, other things to work on first.

Ok, I think I fixed that logic problem. I need to keep the “If lines” conditional to trap cases where no lines are found and we can’t calculate a slope. But you were right that I had the flow wrong. I think this commit fixes it:

Okay… other thoughts… I’d work on your activation function:

steer_angle = 1/(totalslope/counter)

Maybe try to determine the angle by thinking about the slope as if it were the slope of the hypotenuse of a right triangle and you’re trying to find an angle? Not exactly sure what the math should be.

Normally you’d use an inverse tangent function there, but given the distortion in slope between the part of the line close to you and the part far away, I’m thinking of just implementing a regular PID and seeing if I can tune the distortion away with damping.

Still, we’re making some progress. Now it weaves its way drunkenly through the track, but at least stays on the track.

I think it’s arctan(delta_y/delta_x) = theta, where delta_y is the average of the y deltas and delta_x is the average of the x deltas.

I’ve now updated the code with the arctan function you suggested and a basic PID loop to control the steering. Seems to work better!
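For anyone following along, here's a rough sketch of that arctan steering calculation plus a minimal PID loop in plain Python. This is my own illustration, not the code from the commit: the gains are placeholders that need tuning, and I've used atan2 rather than a bare arctan to avoid dividing by zero when the line is vertical in the image.

```python
import math

def steering_angle(dys, dxs):
    """Line angle in degrees from averaged y and x deltas, per the arctan idea."""
    avg_dy = sum(dys) / len(dys)
    avg_dx = sum(dxs) / len(dxs)
    # atan2 handles avg_dx == 0 gracefully, unlike atan(avg_dy / avg_dx).
    return math.degrees(math.atan2(avg_dy, avg_dx))

class PID:
    """Minimal PID controller; kp/ki/kd are made-up and need tuning per robot."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # Accumulate the integral term and estimate the derivative from the
        # change in error since the last update.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In use, you'd feed the angle (or the centroid offset) in as the error each frame and send the PID output to the steering servo.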

BTW, here’s the built-in line-following code with a PID motor controller added. Works great!

I’ve now updated the code to follow lines of any color, so you can switch lanes on a FormulaPi track. Here’s the code:

I need to tweak my PIDs a bit, but it works.

Seems to need a lot of PID updates. Well, the M7 will give you double the FPS, so that should help a lot.

This code with the new M7 OpenMV cam did great this weekend at the Thunderhill Self Racing Cars/DIY Robocars race. Came in 2nd, beating out NVIDIA TX2 cars and a lot of Raspberry Pi 3s. Here’s a practice lap:

The code is here: OpenMVrover/bwlinefollowingArduino.py at master · zlite/OpenMVrover · GitHub

The secret to the good performance was weighting the top third of the frame the highest. These were the ROIs that worked best for us:

ROIS = [ # [ROI, weight]
(0, 100, 160, 20, 0.1), # You’ll need to tweak the weights for your app
(0,  50, 160, 20, 0.3), # depending on how your robot is set up.
(0,   0, 160, 20, 0.7)
]

OMG that is fast.

Like a drunken sailor. You’re definitely pushing the PID loop out of its control zone. But hey, it still works. If you went slower, I bet you could lock perfectly onto the line.

Going fast was the point of the race :wink: Winning drunk is still winning!

Hello Zlite, I am working on this project with my uncle Trevor; we’ve been tackling this for a while. This was our version 1.0. I know he has revised it, but this code does follow the line well. I can help you out a bit too if you let me know what your issues are in the code. I sense it can be a little confusing for some people; I code a lot and it was even a little tricky for me. But I am more than willing to help as well. https://github.com/E-V-G-N/OpenMV-Line-Follower/blob/master/Line%20Following%20V%201.0

Thanks for sharing that! Any particular reason why you didn’t add a PID control loop to smooth it?

Yeah, we were going to go that route, but we decided not to for some reason. Like you, we are focusing on how to get it to lock onto the line. We are looking into adding a little headlight to help with the contrast and shadow issues. I just got the new board, so I am going to take a look at your code and see what I can tweak a bit for my uncle’s and my application.

Hey, did something happen when the new cameras came out? We had fully functional code and are now getting lots of undefined errors and whatnot. Here is a view of the code:

The find_blobs API was updated a while back. That code needs to be tweaked for the new find_blobs method. Only minor changes to the script are necessary.

See the line-following example code; it was updated to show how the new find_blobs method accepts its arguments.

import sensor, time, math
from pyb import Servo

# Tracks a black line. Use [(128, 255)] for tracking a white line.
GRAYSCALE_THRESHOLD = [(0, 64)]

sl = Servo(2) # P8
sr = Servo(1) # P7

# Camera setup...
sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # Use grayscale.
sensor.set_framesize(sensor.QQVGA) # Use QQVGA for speed.
sensor.skip_frames(30) # Let new settings take effect.
sensor.set_auto_gain(False) # Must be turned off for color tracking.
sensor.set_auto_whitebal(False) # Must be turned off for color tracking.
clock = time.clock() # Tracks FPS.

def linerobot(turnangle, turnvelocity):
    # Differential drive: derive a turn radius from the steering angle via the
    # chord-length formula, then scale each wheel's speed by its distance from
    # the turn center. The tiny epsilon avoids division by zero going straight.
    wheelwidth = 6
    turnradius = wheelwidth / (math.sqrt(2 - 2 * math.cos(math.radians(turnangle))) + .0000001)
    if turnangle <= 0:
        vl = turnvelocity / turnradius * (turnradius - wheelwidth / 2)
        vr = turnvelocity / turnradius * (turnradius + wheelwidth / 2)
    else:
        vl = turnvelocity / turnradius * (turnradius + wheelwidth / 2)
        vr = turnvelocity / turnradius * (turnradius - wheelwidth / 2)
    sl.speed(vl)
    sr.speed(-vr) # Right servo is mirrored, so its speed is negated.

def check_stop():
    # Look for a wide blob (a stop marker) in the top strip of the image.
    stop_roi = (0, 0, 160, 20)
    blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=stop_roi, merge=True)
    if blobs:
        # Find the blob with the most pixels.
        largest_blob = max(blobs, key=lambda b: b.pixels())
        if largest_blob.w() > 80: # Wider than half the frame -> stop marker.
            # Draw a rect around the blob.
            img.draw_rectangle(largest_blob.rect())
            img.draw_cross(largest_blob.cx(), largest_blob.cy())
            sl.speed(0)
            sr.speed(0)
            return True
    return False

# Each roi is (x, y, w, h). The line detection algorithm will try to find the
# centroid of the largest blob in each roi. The x position of the centroids
# will then be averaged with different weights where the most weight is assigned
# to the roi near the bottom of the image and less to the next roi and so on.
ROIS = [ # [ROI, weight]
        (0, 100, 160, 20, 0.7), # You'll need to tweak the weights for your app
        (0,  50, 160, 20, 0.3), # depending on how your robot is set up.
        (0,   0, 160, 20, 0.1)
       ]

# Compute the weight divisor (we're computing this so you don't have to make weights add to 1).
weight_sum = 0
for r in ROIS: weight_sum += r[4] # r[4] is the roi weight.

while(True):
    clock.tick()
    img = sensor.snapshot()
    centroid_sum = 0
    for r in ROIS:
        blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=r[0:4], merge=True) # r[0:4] is the roi tuple.
        if blobs:
            largest_blob = max(blobs, key=lambda b: b.pixels())
            centroid_sum += largest_blob.cx() * r[4] # r[4] is the roi weight.
    center_pos = (centroid_sum / weight_sum) # Determine center of line.
    print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

Will this work now? I changed it up a bit.
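One quick way to sanity-check the wheel-speed math in linerobot off the robot is a plain-Python version of the same formula. The wheel width of 6 comes from the code above; the function name and return-tuple shape are mine:

```python
import math

def wheel_speeds(turnangle, turnvelocity, wheelwidth=6):
    # Chord-length formula: radius of the turn implied by the steering angle.
    # The tiny epsilon avoids dividing by zero when the angle is zero.
    turnradius = wheelwidth / (math.sqrt(2 - 2 * math.cos(math.radians(turnangle))) + 1e-7)
    if turnangle <= 0:
        vl = turnvelocity / turnradius * (turnradius - wheelwidth / 2)
        vr = turnvelocity / turnradius * (turnradius + wheelwidth / 2)
    else:
        vl = turnvelocity / turnradius * (turnradius + wheelwidth / 2)
        vr = turnvelocity / turnradius * (turnradius - wheelwidth / 2)
    return vl, vr
```

With a zero steering angle the implied radius is enormous, so both wheels should come out at essentially the commanded velocity; with a large angle the outer wheel should run noticeably faster than the inner one.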