Request for Road Navigation example

Kwabena: Do you think you could port the open source Jevois Road Navigation example to OpenMV?

It’s described here:
http://jevois.org/moddoc/RoadNavigation/modinfo.html

The code is here:

I can work on implementing that, but without a nice, clean, already-working C library with no dependencies, it’s going to take a long time.

The only thing I can do quickly is make a faster line detection algorithm. This should get you most of the way there. I can do a bunch of… hacks… on the standard way lines are detected if I’m just trying to find lines for a road, and then I can just output the turn angle.

The main C library is here, which seems to be based on standard OpenCV calls:

So, I’ve been looking at this road detection code and it’s not going to be useful for ultra-curvy roads like the one your car was trying to track. But, it does seem like it will do a really good job of just making a robot drive straight on a straight path that changes direction slowly.

Basically, you’re going to have to split the image up into multiple horizontal “slices” and piecewise-approximate the driving angle. Color tracking does this by finding some centroids. But, this can also be done by finding the center point between two edges on either side of a road area. Or a dotted line in the center of the road. Or all three at once.
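To make the slicing idea concrete, here’s a minimal sketch of the piecewise step. It assumes you already have one road-center x position per horizontal slice (from centroids, edge pairs, or a dotted line); the function name and the bottom-up slice ordering are my own choices, not anything from the JeVois code:

```python
import math

def slice_steering_angles(centers_x, img_w, img_h, n_slices):
    # centers_x[0] is the slice nearest the robot (bottom of the frame).
    # Each slice's steering angle is the angle from the robot to that
    # slice's road center, measured off the image midline.
    slice_h = img_h / n_slices
    angles = []
    for i, cx in enumerate(centers_x):
        dist_ahead = (i + 0.5) * slice_h   # pixels ahead of the robot
        lateral = cx - img_w / 2.0         # offset from the midline
        angles.append(math.degrees(math.atan2(lateral, dist_ahead)))
    return angles
```

Nearer slices with the same lateral offset produce larger angles, which is what you want: an off-center road right in front of the robot demands a harder turn than one far ahead.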

Anyway, I’m thinking of adaptively thresholding the image to remove lighting issues, then Sobel filtering it and using the mag/phase info from that to populate a nice Hough line detection map. Edge responses should be very strong after doing the thresholding first, allowing me to easily extract lines. Additionally, the thresholding step should let me mark flat areas so I don’t Sobel them, which costs a lot of CPU time per pixel. Once I have the lines I’ll filter them given an allowed min and max angle you can pass to the function. Since this will be done in C, this basically just reduces the search space in the Hough map.
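Here’s a rough Python sketch of that voting scheme (the real thing would be C). The key trick is that each strong-edge pixel votes only at the angle given by its Sobel phase instead of sweeping every theta, flat pixels are skipped entirely, and the caller’s min/max angle prunes votes up front. The function name and thresholds are made up for illustration:

```python
import math

def hough_from_gradient(img, mag_thresh=64, theta_bins=180,
                        min_deg=-90, max_deg=90):
    # img is a 2-D list of grayscale values.
    h, w = len(img), len(img[0])
    acc = {}  # (theta_bin, rho) -> accumulated edge magnitude
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 3x3 Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mag = math.hypot(gx, gy)
            if mag < mag_thresh:
                continue  # flat area: no Sobel-driven vote, no more work
            theta = math.degrees(math.atan2(gy, gx))  # gradient phase
            if not (min_deg <= theta <= max_deg):
                continue  # allowed-angle range shrinks the search space
            tb = int((theta + 90) * theta_bins / 180) % theta_bins
            t = math.radians(tb * 180 / theta_bins - 90)
            rho = int(round(x * math.cos(t) + y * math.sin(t)))
            acc[(tb, rho)] = acc.get((tb, rho), 0) + mag
    return acc
```

Peaks in `acc` are the detected lines; because only one theta bin gets a vote per pixel, the accumulator stays sparse and cheap compared to a classic full-sweep Hough transform.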

Moving on, the output of this algorithm will be some lines. Probably line points (x1, y1, x2, y2). The centroid of all the x positions should provide a good “center of the road” estimate, like color tracking does. But, this time it will be more robust to lighting and not require any color config.
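The centroid step is just an average over the endpoints. A tiny sketch, with a helper name I made up:

```python
def road_center_x(lines):
    # lines is a list of (x1, y1, x2, y2) tuples from line detection.
    # Averaging every endpoint's x gives a "center of the road" estimate,
    # much like the centroid a color-tracking blob would produce.
    xs = [x for (x1, y1, x2, y2) in lines for x in (x1, x2)]
    return sum(xs) / len(xs) if xs else None
```

Two vertical lane lines at x=10 and x=30, for example, average out to a road center of 20.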

Care will have to be taken to avoid situations that mess up the centroid position, however. Like, if you design an algorithm for one line in the center, you should avoid allowing any other lines into the field of view. Or, use a Kalman filter to filter out impossible driving-vector change requests due to the centroid moving after seeing a line at the edge of the frame.
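Even without a full Kalman filter, a simple gate plus low-pass captures the idea: reject centroid jumps too big to be real driving, smooth the rest. This is a toy stand-in, not a real Kalman filter, and `max_step`/`alpha` are made-up tuning knobs:

```python
def gated_filter(prev, measurement, max_step=10.0, alpha=0.3):
    # prev is the last filtered road-center estimate (None on startup).
    if prev is None:
        return measurement
    if abs(measurement - prev) > max_step:
        return prev  # impossible change request: hold the last estimate
    return prev + alpha * (measurement - prev)  # low-pass plausible moves
```

So a stray line appearing at the frame edge, which would yank the raw centroid sideways in one frame, gets ignored, while gradual road curvature still tracks through.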

Sounds very smart. Thanks for thinking about this and explaining!