Autonomous driving in predefined colored lanes

Hi,

First of all, happy to join the OpenMV community.
I did my first trials with the H7 version. Very user friendly IDE!!! Good product.

Topic: I’m working on an autonomous driving robot that moves from A to B using existing lanes and markings. Some areas are missing lane markings, but that is the next challenge…
I already tried some of the included examples.
Line detection and edge detection seem to provide a lot of data from the image.

Question: could you give me some hints on which basis I should start to get a successful outcome, as I’m new to vision?
(edge detection, color detection, line detection, or starting from scratch.)
I don’t expect the full solution (learning is the fun part), but a head start would be nice.
I started experimenting with “line_follower_main.py”, but of course it only detects one yellow line. Not really a lane.
The crossings (see pictures) are also an issue to overcome…
But could this be a good starting point?

See 2 images to get an impression of the environment. Goal is to stay within the yellow or green lanes.

Thank you for the suggestion.
area2.jpg
area1.jpg

So, the environment you are in is easy to work in. Fixed lighting, etc. So, it’s doable.

Um, regarding lane following, to do that well is actually quite involved. It typically requires doing some 3D rotation of the image and re-projecting it and then finding horizontal lines that can be fit to the lanes.

So, what are you looking to build? Scope-wise? Following two lines is not hard, but, what more do you want to do? I see the crosshatched area as a hard problem to solve. Are you looking to get past that? What else? Computer vision is just about using techniques to extract data from an image… but, there’s a lot of algorithm programming you have to do that there’s no magic bullet code for.
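To make the “following two lines is not hard” part concrete, here is a minimal sketch (not official OpenMV example code; the LAB color threshold below is a placeholder you would tune yourself, e.g. with the IDE’s Threshold Editor). It finds the two colored lane edges with `find_blobs()` in a strip near the bottom of the image and steers toward the midpoint between them:

```python
# Lane-keeping sketch: find two colored lane edges and compute a
# steering error from the midpoint between them.

def lane_center_error(left_cx, right_cx, img_width):
    """Signed offset of the lane midpoint from the image center.
    Negative -> lane center is left of the camera, positive -> right."""
    midpoint = (left_cx + right_cx) / 2
    return midpoint - img_width / 2

# --- camera loop (only runs on the OpenMV board) ---
try:
    import sensor
    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)     # 160x120 keeps the loop fast
    sensor.skip_frames(time=2000)

    YELLOW = (60, 100, -20, 40, 30, 80)    # placeholder LAB threshold!

    while True:
        img = sensor.snapshot()
        # Look for the lane edges in a strip near the bottom of the frame.
        blobs = img.find_blobs([YELLOW], roi=(0, 80, 160, 40),
                               pixels_threshold=20, merge=True)
        if len(blobs) >= 2:
            blobs.sort(key=lambda b: b.cx())
            err = lane_center_error(blobs[0].cx(), blobs[-1].cx(),
                                    img.width())
            # feed err into your steering controller here
except ImportError:
    pass  # not running on the camera hardware
```

The pure-math helper is split out so you can test the steering logic off-board; the same idea works for the green lanes with a second threshold tuple.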

Thanks for the input.

I understand there will be quite a few algorithms and quite a bit of software involved, but I’m fine with that, although this is my first experience with Python (I’m more of a C person).

Scope is to transport small parts from A to B and return.
So I want to include a vision system.
Regarding line/lane following, I just wondered which example would be a good starting point.
Related to the crossings, I will have to use my encoders and some assumptions. Perhaps I can even use a crossing as a reference point, but that is probably hard to detect with vision.

Is there a place where I can find the syntax to get the “line data”? For example, “finding lines” shows a lot of useful lines on the screen, but how do I use that in the program? Meaning, how do I get the data (start point, end point, angle, …)?

Regards

Yes: image — machine vision — MicroPython 1.15 documentation
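As a quick illustration of what those docs describe: the line objects returned by `img.find_lines()` expose their data through methods such as `x1()`/`y1()`, `x2()`/`y2()`, `theta()`, `rho()`, and `magnitude()`. A small sketch (the `threshold`/margin values are just example settings to tune, and the theta-folding helper is my own assumption about how you might turn the 0–179° angle into a signed deviation):

```python
# Reading line data from find_lines(): endpoints, angle, and strength.

def fold_theta(theta):
    """Fold a 0-179 degree line angle into a signed value around 0,
    so e.g. 170 becomes -10. How you map this onto steering depends
    on your camera orientation."""
    return theta if theta < 90 else theta - 180

try:
    import sensor
    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)
    sensor.skip_frames(time=2000)

    while True:
        img = sensor.snapshot()
        for l in img.find_lines(threshold=1000,
                                theta_margin=25, rho_margin=25):
            # Each line object carries its geometry as method calls.
            print("start:", l.x1(), l.y1(),
                  "end:", l.x2(), l.y2(),
                  "angle:", l.theta(), "strength:", l.magnitude())
            img.draw_line(l.line(), color=127)
except ImportError:
    pass  # not running on the camera hardware
```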

Question… googling this is pretty trivial. Is there something I need to do to make the API location more available?

I’m sorry.
Indeed, found it on the website.
Should have checked that first.