Can the OpenMV Cam M7 do this?

I just purchased my first OpenMV Cam M7, along with some other accessories. My goal is to use the OpenMV Cam to produce datasets that can easily be simulated using the Python NEAT library (NEAT-Python 0.92 documentation) to train models using genetic algorithms. This is, as you probably know, slightly different from training a static ANN with Keras/TensorFlow/etc.

What I need is not for the camera to detect lines per se, like the usual CV/ML driving algorithms do, but rather to detect (or estimate) distances to chunks of lines and return them as x,y values relative to the camera.

Note that I will need to account for parallax/perspective in the output data. The actual image shows the lines converging (perspective), but the output data I'm trying to produce is 2D, as seen from above the car (like an old-school top-down vertical-scroller video game).

Why do I want this? Because then I can easily randomize and train offline (similar to how many CV/ML car projects do it in Unity), but using a much simpler dataset like the one shown below. Because the output data is explicit (x, y) and expressed in 2D, it makes for a much simpler/smaller set of inputs to a neural network. As such, I can run through hours and hours of offline genetic-style training with simple random datasets, and hopefully produce some interesting results and models for my M7-powered Donkey Car.
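To make this concrete, here is a toy sketch (plain Python, every number invented) of the kind of randomized top-down sample I have in mind: each training example is just a short list of (x, y) line-chunk positions relative to the car at (0, 0).

```python
# Toy generator for randomized top-down track samples. Everything here
# (chunk spacing, lane width, curvature range) is a made-up placeholder.
import random

def random_track_sample(n_chunks=6, lane_half_width=50):
    """Return line chunks along a randomly curved pair of lane edges."""
    sample = []
    curve = random.uniform(-0.02, 0.02)   # random curvature of the track
    for i in range(1, n_chunks + 1):
        y = i * 30                        # distance ahead of the car
        x_center = curve * y * y          # lane center drifts with curvature
        sample.append((x_center - lane_half_width, y))  # left edge chunk
        sample.append((x_center + lane_half_width, y))  # right edge chunk
    return sample

print(random_track_sample())  # one random training sample per call
```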

Example of a genetic algorithm in use (Flappy Bird)

Oh yeah, we can do this. Actually, I plan to port one of the small neural network libraries to the camera in the future so it can drive using an NN instead of a PID loop; I'm getting tired of PID as a control system. It will be much easier to use an NN on just a few numeric inputs.

Anyway, I suppose you saw the linear regression Donkey Car project, right? The linear regression is a pretty strong classifier whose output can then be fed into an NN; there's a quick sketch of that idea below.
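For reference, a minimal sketch of that approach using get_regression() on a grayscale image; the threshold below is just a placeholder you'd tune for your track:

```python
# Minimal sketch: fit a single line to bright pixels and feed the line
# parameters (not the raw image) to an NN. Threshold is a placeholder.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    # robust=True uses an outlier-tolerant (Theil-Sen) fit.
    line = img.get_regression([(200, 255)], robust=True)
    if line:
        # theta()/rho() describe the fitted line; magnitude() is a rough
        # confidence. These few numbers are all an NN would need.
        print(line.theta(), line.rho(), line.magnitude())
```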

But anyway, to answer your question: I added a rotation correction method to the next-gen firmware (which was released online a few days ago). This method will fix the perspective issue and give you some flat lines. As for finding the chunks like you want, just use find_blobs() with an ROI that spans a horizontal cross section of the image, using white and yellow thresholds. find_blobs() will then return the centroids of each line along with bounding boxes. Repeat that for multiple horizontal cross sections, then take all that data and feed it to your NN. You should hit above 20 FPS with the M7 camera.
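Roughly like this (a sketch, not drop-in code: the rotation angle and the LAB color thresholds are placeholders you'd need to tune for your camera mount and track colors):

```python
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)
clock = time.clock()

# Placeholder LAB thresholds for white and yellow lines (tune these).
WHITE = (80, 100, -10, 10, -10, 10)
YELLOW = (60, 100, -10, 30, 30, 80)

N_SLICES = 4  # number of horizontal cross sections to sample

while True:
    clock.tick()
    img = sensor.snapshot()
    # Un-tilt the image so the road plane looks flat. The angle is a
    # guess; it depends on how the camera is mounted.
    img.rotation_corr(x_rotation=20)

    slice_h = img.height() // N_SLICES
    points = []
    for i in range(N_SLICES):
        roi = (0, i * slice_h, img.width(), slice_h)
        for b in img.find_blobs([WHITE, YELLOW], roi=roi,
                                pixels_threshold=20, merge=True):
            # Centroid relative to the bottom-center of the image,
            # i.e. roughly relative to the car.
            points.append((b.cx() - img.width() // 2,
                           img.height() - b.cy()))
    print(points, clock.fps())  # feed `points` to the NN
```

That gives you a short list of (x, y) points per frame, which lines up with the top-down representation you described.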

As for doing well at the DIY Robocars challenge, I think it's quite important NOT to feed the NN the raw image, but preprocessed data instead, so it doesn't learn useless things like the people standing around the room rather than the road (which is usually the case with most folks using NNs…). A lot of people show up hoping the NN will magically decide the road is the right thing to track, but often it ignores the road and instead learns where the non-road features are and drives by those.