I just purchased my first OpenMV Cam M7 and some other accessories. My goal is to use the OpenMV Cam to produce datasets that can easily be simulated with the python NEAT library ( http://neat-python.readthedocs.io/en/latest/ ) to train models using genetic algorithms. This is, as you probably know, slightly different from training a static ANN with Keras/TensorFlow/etc.
What I need is not for the camera to detect lines per se, like the usual CV/ML driving algorithms do, but rather to detect (or estimate) the distances to chunks of lines and return them as x,y values relative to the camera.
Note that I will need to account for parallax/perspective in the output data. The actual image shows the lines converging (perspective), but the output data I'm trying to produce is 2D, as seen from above the car (like an old-school top-down vertical-scroller video game).
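For what it's worth, one way I imagine undoing the perspective is a flat-ground inverse projection: assuming the camera's height, downward tilt, and focal length are known (all the numbers and names below are placeholders, not calibrated M7 values), each pixel below the horizon can be mapped back to a ground-plane (x, y) point relative to the car:

```python
import math

def pixel_to_ground(u, v, f=160.0, cx=160.0, cy=120.0,
                    cam_height=0.2, tilt_deg=45.0):
    """Map an image pixel (u, v) to a top-down ground point (x, y) in metres.

    Assumes a pinhole camera mounted cam_height above a flat floor and
    pitched down by tilt_deg. f/cx/cy are illustrative intrinsics -- they
    would need to be calibrated for the real lens. Returns None for
    pixels at or above the horizon (the ray never hits the ground).
    """
    theta = math.radians(tilt_deg)
    # Ray through the pixel in camera coordinates (x right, y down, z forward)
    xc, yc, zc = u - cx, v - cy, f
    # Denominator of the ray/ground-plane intersection; <= 0 means the ray
    # points at or above the horizon
    denom = yc * math.cos(theta) + zc * math.sin(theta)
    if denom <= 0:
        return None
    t = cam_height / denom
    x = t * xc                                           # lateral offset, right positive
    y = t * (zc * math.cos(theta) - yc * math.sin(theta))  # forward distance
    return (x, y)
```

Sanity check on the geometry: with a 45° tilt and the camera 0.2 m up, the centre pixel should land on the floor 0.2 m straight ahead, and pixels lower in the image should map closer to the car.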
Why do I want this? Because then I can easily randomize and train offline (similar to how many CV/ML car projects do in Unity), but with a much simpler dataset like the one shown below. Because the output data is explicit (x,y) pairs expressed in 2D, it makes for a much simpler/smaller set of inputs to a neural network. As a result, I can run hours and hours of offline genetic-style training on simple random datasets, and hopefully produce some interesting results and models for my M7-powered Donkey Car.
Example of a genetic algorithm in use (Flappy Bird):