Linear Regression Line Following

Hi Kwabena!

I’m looking for the Linear Regression Line Following code that you recently demoed on YouTube. I’m interested in trying to build my own line-following bot; it looks like a really cool project. I don’t yet have one of the new M7 cameras, just the M4 v2 camera, so hopefully it will work on that?

Thanks,

Ted

The linear regression algorithm works on the OpenMV Cam M4.

The algorithm outputs the line fine, and I tested it in real life and it worked great on the DIY Robocars track. However, I’m still working on figuring out how to derive the control loop from it. For example, the line following script I wrote using blob tracking only works as long as the robot stays within ±45 degrees of the line it’s tracking; it can’t recover if it gets off the line. With linear regression you can find the line anywhere, at any rotation, so it’s harder to boil all that info down into a single turn-left-or-right output. But the benefit is that you can recover if you go off the line.

Anyway, the next race is next week so I should have some good code to post by then.

Hi,

If possible, in your next code include the ability to use lines of different colors.
With different colors it’s possible to set different speeds, danger areas, etc…

The linear regression code has the same threshold interface as find_blobs(). You pass it some thresholds to use, so it can easily deal with colored lines.
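
For example, a minimal sketch in RGB565 mode with a made-up LAB threshold for a yellow line (tune the values for your track); get_regression() takes a list of thresholds just like find_blobs():

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)

YELLOW_THRESHOLD = (40, 100, -30, 30, 20, 80) # (L, A, B) ranges - illustrative values only

while(True):
    img = sensor.snapshot()
    line = img.get_regression([YELLOW_THRESHOLD], robust=True)
    if line:
        img.draw_line(line.line(), color=(255, 0, 0)) # draw the fitted line in red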

How’d you do in the race? Wish I lived closer and could attend (I’m really into robotics). At least we have the Sparkfun AVC in Colorado!

3rd place. Had some throttle switch transmitter problems. Check the YouTube channel for the video. I’m working on a blog post write-up as we speak.

I’ll be at the AVC BTW.

Writeup and video: Linear Regression Line Following | OpenMV

Great write-up, thanks! (I’m still studying it, though.) I noticed from the video that your car drives consistently outside the road line; was this on purpose? Could you perhaps bias the line following so that it drives inside the line?

Anyway, I’m building one, I already have an OpenMV cam hooked up to a small vehicle. My initial attempts at line following have not been very successful, but I have high hopes, once I figure out your code.

Thanks again,

Ted

Hi, I’m just following the line on the outside of the road. As for following the inside line… the dotted one is a bit harder to lock onto and I was mainly trying to prove the algorithm. Given my lessons learned I think I can get it to follow the yellow dotted line by just applying a color filter in RGB565 mode.

That said, I’d rather get a mode working where I can do find_lines() on the image and use the left, right, and center dotted line all as supporting information to drive the robot. Not sure how to combine them, but I’ve determined how to at least use the theta() and rho() output to drive the robot, which was the hard part.
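
Roughly, that part looks something like this (a simplified sketch, not the exact code from the script; the gains are placeholders and the negative-rho case is glossed over):

THETA_GAIN = 40.0
RHO_GAIN = -1.0

def line_error(line, img):
    # theta() is 0-179; fold it so that 0 means vertical and the sign gives the tilt direction
    t = line.theta()
    if t > 90:
        t -= 180
    # for a near-vertical line, rho() is roughly its horizontal position; compare it to the image center
    r = line.rho() - (img.width() // 2)
    return (t * THETA_GAIN) + (r * RHO_GAIN)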

Following the yellow line in the center was hard because, given the code I posted, if it saw the outside line it would lock onto that, since it’s the better-quality line as far as the robot is concerned. I’m thinking a Kalman filter to reject spontaneously changing lines would help, along with some color filtering. But, if possible, I’d rather solve the problem in grayscale since that’s more robust and more general. Color filtering fails so easily.

Anyway, my goal is to make an algorithm that follows a line/edge really robustly. For the AVC competition I think I should look at edges.

Hi Kwabena,

I’m having pretty good results using your Linear Regression function to follow a single line. For head-to-head racing, however, I’d like to be able to see two lines and drive down the center (vanishing point perhaps), then move to the side for passing. I think that at the Sept 16th meetup in Oakland your car was indeed driving between the white and the yellow lines. I’m wondering how you did that.

I’ve tried using Get Lines and saving the strongest two, but it doesn’t give nearly as good a result as Linear Regression. I guess I could go with blobs as we’ve been doing with the Pixy cameras and infer two lines, but surely there’s a better way to do this. Your thoughts please?

I’ve been thinking about this issue since I need to solve it for the SparkFun race coming up in October. My basic idea is to do the following:

MIN_LINE_MAG = 8 # tune: minimum find_lines() magnitude to keep a line

img = sensor.snapshot()
lines = img.find_lines(threshold=1000)
# remove low magnitude lines and horizontal lines (theta near 90 is horizontal)
lines = [l for l in lines if (l.magnitude() >= MIN_LINE_MAG) and not (45 < l.theta() < 135)]
img.xor(img) # zero image
for l in lines: # draw lines on image - the image will be black with just the lines
    img.draw_line(l.line(), color=255)
line = img.get_regression([(255, 255)], robust=True)

The robust linear regression of a bunch of vertical lines is still a vertical line. However, the line will be centered on one line or the other, not in the middle. So, you can only use the slope. However, you can calculate the centroid of all the lines in the image.
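
Something like this, assuming lines is the filtered list from the sketch above:

if lines:
    # average the endpoints of every kept line to estimate the track center in x
    cx = sum((l.x1() + l.x2()) for l in lines) / (2 * len(lines))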

That said, it’s far easier just to follow the yellow line in the center and then just change the RHO offset from being the center of the image to the edge of the image. This should solve the problem. If you look in the code there’s a place where I’m subtracting half the resolution (horizontal) from rho. Just use a different number there.
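
In the posted script that spot is the line_to_theta_and_rho_error() helper; a hypothetical tweak to bias the car toward one side of the line would be:

def line_to_theta_and_rho_error(line, img):
    t, r = line_to_theta_and_rho(line)
    # subtracting img.width() // 2 keeps the car centered on the line;
    # a smaller (or larger) offset shifts the car toward one side of it
    return (t, r - (img.width() // 4))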

Thanks for your quick reply. I wasn’t aware of the Sparkfun race. Is this the one at the Denver Maker Faire? Am I correct in thinking you are entering the Speed Demon category, large figure 8 track with hay bale sides, no painted center line, and OBSTACLES?

One of the difficulties might be seeing past/around the obstacles. I’m running into that trying to overtake another car when we’re both following a line.

Your idea of drawing lines on the image interests me. I’ll give that a try.

We’re just going to hug the wall. We’ll use the find lines technique to get the edges of the wall and then use linear regression to follow that. As for obstacles… One of the walls you can follow doesn’t have many obstacles.

What do you think of the idea of getting a linear regression line on the left half of the image, saving its info, then getting another on the right side? I just had some success with that with two white tape lines about 14 inches apart. When it failed to find both, it just used the one it found. I expect that wider spaced lines would work too if they were visible to the cam.

Yeah, that will work too. You could also run a higher resolution then for each side of the image (the robust linear regression is O(N^2), so as long as the number of set pixels in a frame is low it can handle more). Not quite sure how to merge the turning results from both sides, however. I guess an average of the output of the code I wrote for both sides would suffice.
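
A sketch of the left/right split, assuming the binary grayscale setup from the script in this thread (whether the per-ROI rho needs an extra offset is something to verify on the camera):

w = img.width()
h = img.height()
left = img.get_regression([(255, 255)], roi=(0, 0, w // 2, h), robust=True)
right = img.get_regression([(255, 255)], roi=(w // 2, 0, w - (w // 2), h), robust=True)

errors = [line_to_theta_and_rho_error(l, img) for l in (left, right) if l]
if errors:
    # average the per-side (theta, rho) error pairs into one pair to steer on
    t = sum(e[0] for e in errors) / len(errors)
    r = sum(e[1] for e in errors) / len(errors)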

I’m using dumb guy math, just finding the midpoint between x1 left and x1 right and mapping it from the 0-80 pixel range to an angle from -45 to +45 degrees. It works. It’s an imprecise science anyway. Sometimes it “cuts the corners” a bit, but that just gets you there faster.
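
In code that’s just something like this (a hypothetical helper; 80 is the QQQVGA frame width):

def midpoint_to_angle(x1_left, x1_right, img_width=80):
    mid = (x1_left + x1_right) / 2
    # map 0..img_width to -45..+45 degrees
    return ((mid / img_width) * 90) - 45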

Firstly, I wanted to thank kwagyeman for the OpenMV project. I started Arduino projects a few months ago and always wanted to try computer vision. Until now I had always failed to get a Raspberry Pi based vision project working; installing OpenCV on a Raspberry Pi is a real challenge for a newbie like me :smiley:
By following your tutorial on DIY Robocars, I made a line following car using only the blob detection method, by building the basic rover and adjusting the threshold to follow a white line. And it works well!

Now I would like to try the linear regression method. I’m copying, pasting and combining parts of both scripts you’ve posted (linear regression and blob detection) to obtain code that could work.

I tried the following code:

BINARY_VIEW = True # binarize the image before the regression (and show the binary image in the IDE)
GRAYSCALE_THRESHOLD = (230, 255) # grayscale range treated as "line" pixels
MAG_THRESHOLD = 4 # minimum regression magnitude to accept a line
THETA_GAIN = 40.0 # weight of the line angle in the error term
RHO_GAIN = -1.0 # weight of the line offset in the error term
P_GAIN = 0.7 # PID proportional gain
I_GAIN = 0.0 # PID integral gain
I_MIN = -0.0 # integral term lower clamp
I_MAX = 0.0 # integral term upper clamp
D_GAIN = 0.1 # PID derivative gain


import sensor, image, pyb, math, time
from pyb import LED
from pyb import Pin, Timer

tim = Timer(4, freq=1000) # Frequency in Hz

cruise_speed = 60 # how fast should the car drive, range from 0 to 100
steering_direction = -1   # use this to revers the steering if your car goes in the wrong direction
steering_gain = 1.7  # calibration for your car's steering sensitivity
steering_center = 60  # set to your car servo's center point
kp = 0.8   # P term of the PID
ki = 0.0     # I term of the PID
kd = 0.4    # D term of the PID

blue_led  = LED(3)


old_error = 0
measured_angle = 0
set_angle = 90 # this is the desired steering angle (straight ahead)
p_term = 0
i_term = 0
d_term = 0
old_time = pyb.millis()
radians_degrees = 57.3 # constant to convert from radians to degrees



def constrain(value, min, max):
    if value < min :
        return min
    if value > max :
        return max
    else:
        return value

ch1 = tim.channel(1, Timer.PWM, pin=Pin("P7"), pulse_width_percent=0)
ch2 = tim.channel(2, Timer.PWM, pin=Pin("P8"), pulse_width_percent=0)

def steer(angle):
    global steering_gain, cruise_speed, steering_center
    angle = int(round((angle+steering_center)*steering_gain))
    angle = constrain(angle, 0, 180)
    angle = 90 - angle
    angle = radians_degrees * math.tan(angle/radians_degrees) # take the tangent to create a non-linear response curver
    left = (90 - angle) * (cruise_speed/100)
    left = constrain (left, 0, 100)
    right = (90 + angle) * (cruise_speed/100)
    right = constrain (right, 0, 100)
    print ("left: ", left)
    print ("right: ", right)
    # Generate a 1KHz square wave on TIM4 with each channel
    ch1.pulse_width_percent(left)
    ch2.pulse_width_percent(right)

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)
sensor.set_vflip(True)
sensor.set_hmirror(True)
sensor.skip_frames(time = 2000)
clock = time.clock()

def line_to_theta_and_rho(line):
    if line.rho() < 0:
        if line.theta() < 90:
            return (math.sin(math.radians(line.theta())),
                math.cos(math.radians(line.theta() + 180)) * -line.rho())
        else:
            return (math.sin(math.radians(line.theta() - 180)),
                math.cos(math.radians(line.theta() + 180)) * -line.rho())
    else:
        if line.theta() < 90:
            if line.theta() < 45:
                return (math.sin(math.radians(180 - line.theta())),
                    math.cos(math.radians(line.theta())) * line.rho())
            else:
                return (math.sin(math.radians(line.theta() - 180)),
                    math.cos(math.radians(line.theta())) * line.rho())
        else:
            return (math.sin(math.radians(180 - line.theta())),
                math.cos(math.radians(line.theta())) * line.rho())

def line_to_theta_and_rho_error(line, img):
    t, r = line_to_theta_and_rho(line)
    return (t, r - (img.width() // 2))

old_result = 0
old_time = pyb.millis()
i_output = 0
output = 90

while(True):
   
    clock.tick()
    img = sensor.snapshot().histeq()
    if BINARY_VIEW:
        img.binary([GRAYSCALE_THRESHOLD])
        img.erode(1)

    line = img.get_regression([(255, 255)], robust=True)
    print_string = ""

    if line and (line.magnitude() >= MAG_THRESHOLD):
        img.draw_line(line.line(), color=127)

        t, r = line_to_theta_and_rho_error(line, img)
        new_result = (t * THETA_GAIN) + (r * RHO_GAIN)
        delta_result = new_result - old_result
        old_result = new_result

        new_time = pyb.millis()
        delta_time = new_time - old_time
        old_time = new_time

        p_output = new_result
        i_output = max(min(i_output + new_result, I_MAX), I_MIN)
        d_output = (delta_result * 1000) / delta_time
        pid_output = (P_GAIN * p_output) + (I_GAIN * i_output) + (D_GAIN * d_output)

        output = 90 + max(min(int(pid_output), 90), -90)
        print_string = "Line Ok - turn %d - line t: %d, r: %d" % (output, line.theta(), line.rho())
    else:
        print_string = "Line Lost - turn %d" % output
        
        steer_angle = output
        steer (steer_angle)
#        print(str(measured_angle) + ', ' + str(set_angle) + ', ' + str(steer_angle))
        blue_led.off()

The IDE does not generate any error, and a line is drawn onto the image when connected with the USB cable. The line is well oriented, so the linear regression seems to work.
However, the motors are not moving at all. The problem does not seem to be the hardware; indeed, it works well with the blob detection code.

Does someone have any idea of what should be corrected?

Hi, the latest script is here:

As for motor control… you should just be able to run the PWM script by itself to verify the motors work. Can you do that and let me know if that is the case?
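
If you don’t have it handy, a minimal standalone test would be something like this (same Timer 4 / P7 / P8 wiring as in your script, 50% duty on both channels for a couple of seconds):

from pyb import Pin, Timer, delay

tim = Timer(4, freq=1000) # 1 kHz PWM
ch1 = tim.channel(1, Timer.PWM, pin=Pin("P7"), pulse_width_percent=50)
ch2 = tim.channel(2, Timer.PWM, pin=Pin("P8"), pulse_width_percent=50)
delay(2000) # run both motors at half speed for two seconds
ch1.pulse_width_percent(0)
ch2.pulse_width_percent(0)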

Hi,
Thank you for the updated version! I’m not sure what the “PWM script” is, but I tried the pwm_control.py script from the IDE examples and it works. I also tried different pulse_width_percent values from 30 to 100% and 0%, and it works well.
I just bought an RC car on HobbyKing so I can try the original updated code without modifying it, except for threshold or gain values. In the meantime I would still like to try with the basic rover kit comprising only two DC motors, like this one:

Okay, so if the PWM script works then you only need to modify the output part of the code which drives the servos. You’ll notice that the output method gets a steering and a throttle value. All you need to do is map the throttle to a general scaling of the PWM and the steering to a difference between the two PWM channels.
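
For example (a rough sketch, not the actual code from the blog post; constrain(), ch1, and ch2 are the ones from your script above, and the sign convention is an assumption):

def output(steering, throttle):
    # steering: -90..+90 (negative = turn left, assumed), throttle: 0..100
    mix = (steering / 90) * throttle
    left = constrain(throttle + mix, 0, 100)
    right = constrain(throttle - mix, 0, 100)
    ch1.pulse_width_percent(left)
    ch2.pulse_width_percent(right)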

If you want me to debug a small part of the script I can do that, but you can’t post the whole thing and then ask if there’s a problem. I don’t have your hardware or whatnot. You have to narrow down the error to something in particular and ask if that’s broken.