Line Following - Blob Spotting/Decoding robot

From my post on “MV Primer” in the tech discussion area of the forum - I want to start a project HERE.

“I have the idea that I would like to make a line following robot for a grid. At each grid intersection could be a symbol so it would have the idea of which node it was approaching and its pose relative to the node. The robot could take instruction to get to the next node or waypoint. In this way it could go from point to point and know where it was and where it is going”.

I have the OpenMV working and running the example scripts, and I have purchased a robot base (AS-2WD) from amazon.com. The robot base accepts 5 AA cells (5 x 1.5 V = 7.5 V) and has a switch that connects the battery + terminal to either the robot motors or the charging plug, depending on the direction of the switch.

I am thinking I will just wire a 3.7 V LiPo battery to power the OpenMV and operate the robot base from the intended 7.5 V.

Can I connect the LiPo battery and robot base negative terminals together? I would then route pins 7 & 8 to the signal pin of each servo. The servos would be powered from the common "-" terminal and the +7.5 V. When the charging switch is flipped, I believe it would simply charge the 7.5 V pack, and I would remove and charge the 3.7 V LiPo separately. Any commentary would be helpful.

Thanks,

Trev

Yep, you can always connect grounds to each other - in fact, you have to do this. That said, please connect the LiPo (+) to the VIN pin on the OpenMV Cam. Also, 7.5 V sounds kind of high for the servos, so please check the voltage rating on the servos. As for the OpenMV Cam, it should power on from the 3.7 V battery source; however, a 4.5 V source would be better.

Nyamekye,

Thank you, I got the robot to move by common grounding and running one battery to the OpenMV and the other to power the servos of the robot base (I used the servo example). I took the Python course at Codecademy - yay me! I think I am ready to try this! With the markers example, I cut some circles of red, green, and blue construction paper and played with them to see if they would register as a marker when close enough, and I did get results when placing reds, greens, and blues in close proximity to one another. I need to get inside the two functions to understand how this really works - I could see the color number identifier change as the targets were placed closer together and merged into a single blob.

I am trying to come up with a scheme that is reliable and based on "first principles". One of the previous posts suggested making a Pac-Man-like marker to take advantage of a blob's direction. I am assuming for my application that only one marker will be in the field of view at a time, but random colored objects might show up in the field of view - like people walking by with similarly colored shoes.

I was thinking of having one circle larger than the others, with a direction that always faces north; that way the robot will always know its pose. Around the periphery of the large circle I would place smaller circular markers, spaced such that they would never be interpreted as a single blob - always the same number of small circles, in a contrasting color from the larger circle. I quickly experimented and found that three small circles of the same color around the bigger circle would not appear as a single blob. I figure I can get 12 distinct markers with a larger red, green, or blue circle and three smaller circles around it in the other two contrasting colors - always four blobs visible and interpreted in the field of view. I have not figured out how many more distinct markers I could get by also incorporating the directions of Pac-Man-like smaller markers.

To verify a marker, I am thinking of checking that there are exactly four blobs in the field of view, then checking that each of the three smaller blobs is nearly 1/4 the area of the primary blob. Once those checks pass, the machine would know it has seen an intentional marker, and the marker would be uniquely identified by the list of four colors, with the first color being the larger circle. I could expand this to include variations in the directions of the smaller blobs if I needed more markers. (A rough sketch of the check I have in mind is below, after my questions.)

Lastly, even though I got the bot to move, I have no idea how to start executing a line following routine. I googled "PID control python" and quickly became overwhelmed. Some more basic questions:

  1. Question deleted - I found https://openmv.io/docs/library/omv.image.html.
  2. Are the three best colors to sense red, green, and blue?
  3. Does it make sense, from a false-positive standpoint, to check relative blob areas and the total number of blobs before reporting that a marker has been sensed?
  4. Do you have any additional guidance for the line following part of the application? (OK if not - I will just try harder.)
  5. Is there a more efficient way to work with the bot while tuning the PID parameters, other than loading the main file from the PC, then mounting the board on the bot and observing what happens? I am not sure if it is OK to have USB connected to the bot while the servos are hooked up without the 3.7 V LiPo.
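Here is the rough sketch I mentioned of the marker check (the LAB color thresholds, the blob-size cutoffs, and the 1/4-area tolerance are all guesses I would still have to tune):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)
sensor.set_auto_gain(False)       # color tracking works better with these off
sensor.set_auto_whitebal(False)

# One LAB threshold per marker color - placeholder values that need tuning.
thresholds = [(30, 100, 15, 127, 15, 127),    # red-ish
              (30, 100, -64, -8, -32, 32),    # green-ish
              (0, 30, 0, 64, -128, 0)]        # blue-ish
names = ["red", "green", "blue"]

while True:
    img = sensor.snapshot()
    blobs = img.find_blobs(thresholds, pixels_threshold=50,
                           area_threshold=50, merge=False)
    if len(blobs) == 4:  # one big blob plus exactly three satellites
        blobs.sort(key=lambda b: b.pixels(), reverse=True)
        big, small = blobs[0], blobs[1:]
        # Each satellite should be roughly 1/4 the area of the big blob.
        if all(0.15 < (b.pixels() / big.pixels()) < 0.35 for b in small):
            # Identify the marker by its color list, largest circle first.
            marker = [names[b.code().bit_length() - 1] for b in blobs]
            print("marker:", marker)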

Thanks, Trev

Good work so far.

  1. Depends on the lighting in the area. But, yes, primary colors are the best.
  2. Yes, that's a good kind of check. Of course, you'll have to figure out how to deal with no blobs being there… anyway, at this point it's application dependent. Basically, you have to test whatever you're doing a lot and refine.
  3. See the line-following script under the color tracking examples. You should be able to see what's going on. It shows you how to do P control (the P part of PID control) - the general idea is sketched below this list.
  4. It's okay to have USB connected while the LiPo is connected. The OpenMV Cam will power itself from the USB voltage source, however, since USB is the higher voltage.
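Roughly, the idea behind the P control looks like this (this is not the exact example script - the grayscale threshold, servo number, and gain below are placeholders you would tune for your bot):

import sensor
from pyb import Servo

GRAYSCALE_THRESHOLD = (0, 64)   # dark line on a light floor - tune this
STEERING_GAIN = 45              # the "P" gain - tune this too

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

steering = Servo(1)  # servo 1 is on P7 - use whichever pin your steering servo is on

while True:
    img = sensor.snapshot()
    blobs = img.find_blobs([GRAYSCALE_THRESHOLD], pixels_threshold=100,
                           area_threshold=100, merge=True)
    if blobs:
        line = max(blobs, key=lambda b: b.pixels())
        # Error = how far the line's centroid is from the image center, scaled to -1..1.
        error = (line.cx() - (img.width() / 2)) / (img.width() / 2)
        # P control: the steering output is simply proportional to the error.
        steering.angle(int(-error * STEERING_GAIN))
    else:
        steering.angle(0)  # no line seen - go straight (or stop the bot here)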

You Rock, Thanks- T.

It follows a black line now- Thanks gents!

Nice work :slight_smile: If you have any images/videos please share them here, could always use material for the website/tutorials :slight_smile:

So I got the line following code down pat - it follows a straight line. But say there was a curved line: how would I get it to follow that? And if the line ends, can I tell it to stop driving? Because as of right now it just follows a straight line and will keep going on and on.

See your other post. Thanks,

making some progress :slight_smile:

Kwabena,
Can I resolder white LEDs where the IR LEDs are without overloading the circuit? I have still been having difficulty with reliable line detection. My home has a bunch of windows and overhead lights, so there are lots of reflections off my floor. I'm thinking of trying another approach now that the Cam M7 will have AprilTags: use retroreflective tape for the line and for the AprilTag white field. I am thinking these headlights will let me turn my threshold up to (240, 255) so everything else will be ignored. I did a test in the basement by turning on the red LED, and it seems to brighten the tape a little, so I think I am on the right track.
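Roughly what I am picturing, as a sketch (the (240, 255) threshold and the blob-size cutoffs are guesses I would tune with the headlights on):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)
sensor.set_auto_gain(False)      # lock gain/exposure so the lit tape stays bright
sensor.set_auto_exposure(False)

BRIGHT = (240, 255)  # only nearly-saturated pixels, i.e. the illuminated retroreflective tape

while True:
    img = sensor.snapshot()
    blobs = img.find_blobs([BRIGHT], pixels_threshold=50, area_threshold=50, merge=True)
    if blobs:
        tape = max(blobs, key=lambda b: b.pixels())
        print("line at x =", tape.cx())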

Thanks,

Trev

Yes, you can do that. I’d recommend not modifying the OpenMV Cam itself but instead just attaching a transistor driving circuit with some white LEDs.

Trevor, I’m trying to do the same, but it’s not working as well as yours. My code is here: GitHub - zlite/OpenMVrover: Autonomous car using the OpenMV camera. Can you share yours?

Trevor is just following a line on the ground. It looks like you're trying to see parallel lines in the field of view.

Do you have some debug info I can look at? Like, a picture of what the OpenMV Cam sees (unmodified) and the output after you do the edge detection followed by the line detection?

Yes, I’m trying to implement a “vanishing point” approach to staying within the lines on either side of a track, on the assumption that roads tend to have features (lines, walls, contrast) that register as “lines” with the edge-detection libraries and can be averaged to lead to a steering angle. I’ll upload some sample shots tonight.

Here’s a screenshot of it looking at black tape on a gray floor.


BTW, a picture of the car is also attached

Okay, um, quick question. How good is the line lock? Is it jittery or pretty stable?

K, looked through your script. I think the math calculation part is done wrong. It looks like you're accessing variables that are only valid in the loop above from outside of that loop:

    for l in lines:
        img.draw_line(l, color=(127)) # Draw lines
    if lines:
        if (l[2]-l[0]) != 0: # don't allow vertical lines (infinite slope)

Doesn’t make any sense. l is only valid in the loop. I think you want to remove “if lines:”. Then the bottom part:

        if counter != 0:
            steer_angle = 1/(totalslope/counter) # 1/(average slope), to compensate for the inverse slope curve
            print (steer_angle)
#           s1.pulse_width(cruise_speed) # move forward at cruise speed
            s2.pulse_width(1500 - int(steer_angle*steering_gain)) # steer
            pyb.delay(10)

Needs to be moved up to the same indentation as “totalslope = 0”.

Anyway, it looks like even after the above fixes your code can't deal with the line being straight, because if the line is straight then counter == 0 and you never set the steering angle to 0.

So, thinking about this at a higher level… when the line is off center, perspective will cause it to look like it's curving. I think you're going to need to run find_lines on different parts of the image, like the top, middle, and bottom parts. Otherwise I don't think you're going to get usable outputs.
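Something like this is the shape of what I mean (the find_lines arguments are placeholders, and the exact find_lines API may differ on your firmware version):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    h = img.height() // 3
    # Search the top, middle, and bottom thirds of the image separately.
    for name, roi in (("top",    (0, 0,     img.width(), h)),
                      ("middle", (0, h,     img.width(), h)),
                      ("bottom", (0, 2 * h, img.width(), h))):
        lines = img.find_lines(roi=roi, threshold=1000)
        for l in lines:
            img.draw_line(l.line(), color=127)  # l.line() is the (x0, y0, x1, y1) tuple
        print(name, len(lines), "lines")

The idea being that the bottom band tells you where the line is right now and the top band tells you where it's heading.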

Great feedback. I’ll review those bugs and fix.

As for the line lock, it really depends on lighting and reflections. For example, in natural sunlight I can tune the thresholds to find lines with the sun behind the car, but looking into the sun the reflections confuse it. It's hard to find settings that work everywhere.

Interestingly, the blob-detection approach in your grayscale line following example works much better at spotting the lines under different lighting conditions.

As for the problem of driving between the lines rather than along them, I agree it would be best to divide the image into right and left sides and look for lines in each. But if I can only see one of them, my problem is that I don’t know if it’s the left or the right side.

I’m also going to experiment with optical flow, which might lead to more statistical results and less dependency on distinct features. My assumption is that the flow vector field of a road is broadly correlated with position in the road and angle. But there’s only one way to find out :wink:
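My starting point for the optical flow experiment will probably be something like this (this assumes the phase-correlation find_displacement route is the right tool and is available on my firmware - both assumptions I still need to check):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)   # find_displacement wants small power-of-2 frames
sensor.skip_frames(time=2000)

# Keep the previous frame in an extra frame buffer to compare against.
prev = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
prev.replace(sensor.snapshot())

while True:
    img = sensor.snapshot()
    # Phase correlation against the previous frame gives one global flow vector.
    d = img.find_displacement(prev)
    if d.response() > 0.1:            # a low response means the match is unreliable
        print("flow x: %0.2f  y: %0.2f" % (d.x_translation(), d.y_translation()))
    prev.replace(img)                 # the current frame becomes the previous frame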