OpenMV self-driving RC Code Help

I need help modifying my code so my RC car drives between two lanes instead of just following one line. Any help is appreciated:

# Color Line Following Example with PID Steering

# For this script to work properly you should point the camera at a line at a
# 45 or so degree angle. Please make sure that only the line is within the
# camera's field of view.

import sensor, image, pyb, math, time
from pyb import LED
from pyb import Pin, Timer

tim = Timer(4, freq=1000) # Frequency in Hz

cruise_speed = 60 # how fast should the car drive, range from 0 to 100  !!!Change this parameter!!!
steering_direction = -1   # use this to reverse the steering if your car turns in the wrong direction
steering_gain = 1.7  # calibration for your car's steering sensitivity
steering_center = 60  # set to your car servo's center point
kp = 0.8   # P term of the PID      !!!Change this parameter!!!
ki = 0.0     # I term of the PID    !!!Change this parameter!!!
kd = 0.4    # D term of the PID     !!!Change this parameter!!!

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green things. You may wish to tune them...
# old thresholds = [(30, 100, 15, 127, 15, 127), # generic_red_thresholds
#              (30, 100, -64, -8, -32, 32), # generic_green_thresholds
#              (0, 15, 0, 40, -80, -20)] # generic_blue_thresholds

threshold_index = 0
# 0 for red, 1 for green, 2 for blue

thresholds = [(0, 100, -1, 127, -25, 127), # generic_red_thresholds
              (0, 100, -87, 18, -128, 33), # generic_green_thresholds
              (0, 100, -128, -10, -128, 51)] # generic_blue_thresholds
# You may pass up to 16 thresholds above. However, it's not really possible to segment any
# scene with 16 thresholds before color thresholds start to overlap heavily.

# Each roi is (x, y, w, h). The line detection algorithm will try to find the
# centroid of the largest blob in each roi. The x position of the centroids
# will then be averaged with different weights where the most weight is assigned
# to the roi near the bottom of the image and less to the next roi and so on.
ROIS = [ # [ROI, weight]
        (38, 1, 90, 38, 0.4), # add more (x, y, w, h, weight) rows here to sample several image bands
       ]

blue_led  = LED(3)

old_error = 0
measured_angle = 0
set_angle = 90 # this is the desired steering angle (straight ahead)
p_term = 0
i_term = 0
d_term = 0
old_time = pyb.millis()
radians_degrees = 57.3 # constant to convert from radians to degrees

def constrain(value, low, high):
    if value < low:
        return low
    if value > high:
        return high
    return value

ch1 = tim.channel(1, Timer.PWM, pin=Pin("P7"), pulse_width_percent=0)
ch2 = tim.channel(2, Timer.PWM, pin=Pin("P8"), pulse_width_percent=0)

def steer(angle):
    global steering_gain, cruise_speed, steering_center
    angle = int(round((angle+steering_center)*steering_gain))
    angle = constrain(angle, 0, 180)
    angle = 90 - angle
    angle = radians_degrees * math.tan(angle/radians_degrees) # take the tangent to create a non-linear response curve
    left = (90 - angle) * (cruise_speed/100)
    left = constrain (left, 0, 100)
    right = (90 + angle) * (cruise_speed/100)
    right = constrain (right, 0, 100)
    print ("left: ", left)
    print ("right: ", right)
    # Generate a 1KHz square wave on TIM4 with each channel
    ch1.pulse_width_percent(left)
    ch2.pulse_width_percent(right)

def update_pid():
    global old_time, old_error, measured_angle, set_angle
    global p_term, i_term, d_term
    now = pyb.millis()
    dt = now - old_time
    error = set_angle - measured_angle
    de = error - old_error

    p_term = kp * error
    i_term += ki * error
    i_term = constrain(i_term, 0, 100)
    d_term = (de / dt) * kd if dt > 0 else 0 # guard against dt == 0 on the first pass

    old_error = error
    output = steering_direction * (p_term + i_term + d_term)
    output = constrain(output, -50, 50)
    return output

# Compute the weight divisor (we're computing this so you don't have to make weights add to 1).
weight_sum = 0
for r in ROIS: weight_sum += r[4] # r[4] is the roi weight.

# Camera setup...
clock = time.clock() # Tracks FPS.
sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # color tracking with LAB thresholds needs an RGB565 image.
sensor.__write_reg(0x6B, 0x22)  # switches the camera into advanced calibration mode.
sensor.set_framesize(sensor.QQVGA) # use QQVGA for speed.
sensor.set_auto_gain(True)    # do some calibration at the start
sensor.skip_frames(time = 10000)  # When you're inside, set time to 2000 to do a white balance calibration. Outside, this can be 0
sensor.set_auto_gain(False)   # now turn off autocalibration before we start color tracking

while True:
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot().histeq() # Take a picture and return the image. The "histeq()" function does a histogram equalization to compensate for lighting changes
    print("FPS: ",clock.fps())
    centroid_sum = 0
    for r in ROIS:
        blobs = img.find_blobs([thresholds[threshold_index]], roi=r[0:4], merge=True) # r[0:4] is roi tuple.
        if blobs:
            # Find the index of the blob with the most pixels.
            most_pixels = 0
            largest_blob = 0
            for i in range(len(blobs)):
                if blobs[i].pixels() > most_pixels:
                    most_pixels = blobs[i].pixels()
                    largest_blob = i

            # Draw a rect around the blob.
            img.draw_rectangle(blobs[largest_blob].rect())
            centroid_sum += blobs[largest_blob].cx() * r[4] # r[4] is the roi weight.

    center_pos = (centroid_sum / weight_sum) # Determine center of line.

    # Convert the center_pos to a deflection angle. We're using a non-linear
    # operation so that the response gets stronger the farther off the line we
    # are. Non-linear operations are good to use on the output of algorithms
    # like this to cause a response "trigger".
    deflection_angle = 0
    # The 80 is from half the X res, the 60 is from half the Y res. The
    # equation below is just computing the angle of a triangle where the
    # opposite side of the triangle is the deviation of the center position
    # from the center and the adjacent side is half the Y res. This limits
    # the angle output to around -45 to 45. (It's not quite -45 and 45).
    deflection_angle = -math.atan((center_pos-80)/60)

    # Convert angle in radians to degrees.
    deflection_angle = math.degrees(deflection_angle)

    # Now you have an angle telling you how much to turn the robot by which
    # incorporates the part of the line nearest to the robot and parts of
    # the line farther away from the robot for a better prediction.
#    print("Turn Angle: %f" % deflection_angle)
    now = pyb.millis()
    if now > old_time + 20:  # pyb.millis() counts milliseconds, so 20 ms between updates runs the PID at 50 Hz
        measured_angle = deflection_angle + 90
        steer_angle = update_pid()
        old_time = now
        steer (steer_angle)
#        print(str(measured_angle) + ', ' + str(set_angle) + ', ' + str(steer_angle))

Hi, please don’t expect me to just magically debug code for a robot by posting it here.

(Sigh, not mad at you, just a lot of people keep doing this instead of critically thinking about a particular ask that I can actually help with).

Anyway, I understand what you want to do. This is very difficult unless you can get a bird's-eye view of the scene and can see far enough around edges. For example, driving between two black lines on the ground is possible; driving between two walls likely is not. The reason is that you can't see around walls, so any method you use will probably fail to determine where both sides of the track are on a turn. If you have two lines on the ground, however, then you can use the rotation correction method to transform the image to a top-down, bird's-eye view. Once you do that, you can segment the image and compute a robust linear regression on it to find how to drive. Note that the top-down transformation is very important for computing the linear regression correctly.

I didn’t mean for the question to be this general. Yes, my program currently follows one red line made with red tape (it drives on the line itself). I am looking to make it drive between two lines. Both lanes are already in the camera’s view, so that’s not a problem.

I am afraid I am not familiar with most of the terms you used, as I am still a beginner at programming. Could you please be more specific (show it in the code), or provide any details you can?

Thank you so much for your time!

Please see the Filters → rotation correction code. Run the demo on your robot until you can rotation correct the main image such that you have a top down view of the line instead of a view from an angle. Write down the rotation values you used in X/Y/Z/Zoom. Apply the method with these values in your current script.

So, this is really just adding one line of code to your current script. Your robot's behavior shouldn't change; it should still follow the one line.
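For reference, here is a sketch of where that one line might go, assuming the `img.rotation_corr()` API; the rotation values below are placeholders, and you must substitute the X/Y/Z/Zoom numbers you wrote down from the demo:

```python
# Placeholder values -- replace with the X/Y/Z/Zoom numbers you found
# while running the Filters -> rotation correction demo on your own setup.
X_ROTATION = 0
Y_ROTATION = -30   # hypothetical: tilts the angled view toward top-down
Z_ROTATION = 0
ZOOM = 1.0

img = sensor.snapshot().histeq()
# The one added line: warp the angled camera view into a bird's-eye view
# before running find_blobs() on it.
img = img.rotation_corr(x_rotation=X_ROTATION, y_rotation=Y_ROTATION,
                        z_rotation=Z_ROTATION, zoom=ZOOM)
```

This fragment only runs on the camera itself (it needs the `sensor` module), so it is shown here as a configuration sketch rather than a standalone program.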

Now, you’re using my older line-following code. Please see the updated OpenMV Minimum Viable Racer code. The new code uses something called a robust linear regression, which can follow the line much better.
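As a rough illustration of the regression approach (not the actual Minimum Viable Racer code): `img.get_regression(..., robust=True)` returns a line object whose `theta()` is 0-179 degrees, with 0 meaning a vertical line. A small helper, given the hypothetical name `theta_to_deflection` here, can map that into the signed deflection angle the PID already expects:

```python
def theta_to_deflection(theta):
    # Map theta() in 0..179 (0 = vertical line, straight ahead) to a signed
    # deflection in degrees; values above 90 wrap around to the negative side.
    # (The sign convention is an assumption; flip it if your car turns the wrong way.)
    return theta - 180 if theta > 90 else theta

# On the camera, the inner loop would then look roughly like this:
# line = img.get_regression([thresholds[threshold_index]], robust=True)
# if line:
#     img.draw_line(line.line(), color=127)
#     measured_angle = theta_to_deflection(line.theta()) + 90
```

The commented-out portion is hardware-only; the helper itself is plain Python.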

(Answering this on the phone so it’s hard for me to provide links to things).

Anyway, try to get the rotation correction stuff working first and then you can switch to the linear regression method.

Thank you so much for your help. The linear regression code is performing much better. One last question: my camera seems to perform a lot better when it is connected to my laptop (it deviates off the line less). I was told to install a WiFi shield to help solve this problem. I was wondering if there is anything I can do in the code instead (I suspect the issue has to do with color calibration).

When it’s connected to the laptop the script runs slower, which means your PID loop weights don’t work right once it starts running faster. Just tune the PID loop weights, in particular the P and D weights.
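One concrete way to see the loop-rate dependence: in the posted script the I term accumulates `ki * error` once per iteration with no `dt` factor, so its growth depends on how many iterations fit into a second, not on elapsed time. Here is a plain-Python sketch (the loop rates and gain are hypothetical numbers) comparing per-iteration accumulation against dt-scaled accumulation:

```python
def integrate_no_dt(errors, ki):
    # Accumulate ki * error once per loop iteration, as the posted
    # script's I term does: the total depends on the loop rate.
    i_term = 0.0
    for e in errors:
        i_term += ki * e
    return i_term

def integrate_with_dt(errors, ki, dt):
    # Scale each contribution by the elapsed time dt: the total now
    # depends on error-over-time, not on how fast the loop spins.
    i_term = 0.0
    for e in errors:
        i_term += ki * e * dt
    return i_term

# One second of a constant error of 1.0 at two hypothetical loop rates:
slow = [1.0] * 25   # ~25 Hz with the IDE attached
fast = [1.0] * 50   # ~50 Hz running standalone

print(integrate_no_dt(slow, 0.1), integrate_no_dt(fast, 0.1))  # I term doubles at twice the rate
print(integrate_with_dt(slow, 0.1, 1/25),
      integrate_with_dt(fast, 0.1, 1/50))  # effectively identical at either rate
```

The script's D term already divides by `dt`, which is the same idea; the practical advice stands either way: retune `kp` and `kd` at the speed the car will actually run.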