Obstacle Detection and Avoidance

I think it would make it worse to do the blocking. Typically I do prints for testing and debugging purposes, where the frame rate isn't critical. As long as we know that this is the case, I can adjust the timing of large data prints. When I actually get this going I wouldn't be sending the data via SPI or WiFi anyway.

By the way, when will UDP be implemented? If it is already, do you have an example? While I've got you: I have the new M7 with the standard lens and just picked up the wide-angle lenses as well. Do you have the specs on these lenses (FOV, lens diameter, etc.)? Do you have any specs on the CCD as well? I'm looking for dimensions. Going to try an experiment.

Thanks
Mike

Lens specs are on the website on the lens product pages. As for UDP, that's Ibrahim's domain; he needs to be bugged about it. I'll let him know it's something to be fixed next at a higher priority.

Thanks, appreciate it. I actually have another question: when I run the edge detection example, it shows the edge of the frame as a detected edge. How do you get around that?

Um, please read up on the edge detection code. Use Canny edge detection, which does the proper thing. The fast basic one detects the edge of the frame as an edge because the convolution uses zeros for pixels off the edge of the image.
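A quick plain-Python sketch of why the fast kernel-based method does that (illustrative values only, not the OpenMV implementation): with zero padding, even a perfectly uniform scanline produces a strong gradient response at both ends of the frame.

```python
# Why zero-padding makes the frame border look like an edge: a simple
# [-1, 0, 1] gradient kernel convolved over a row of uniformly bright
# pixels still fires at both ends, because pixels off the image are
# treated as 0.

def gradient_with_zero_padding(row):
    """Absolute response of a [-1, 0, 1] kernel, padding with zeros."""
    padded = [0] + list(row) + [0]
    return [abs(padded[i + 2] - padded[i]) for i in range(len(row))]

row = [200] * 8                       # a uniformly bright scanline
resp = gradient_with_zero_padding(row)
print(resp)                           # strong response only at the two ends
```

Canny's thresholding step then decides which of those responses count as real edges, which is why it behaves better at the border.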

Sorry for not getting back here sooner, but I'm in the process of putting together a test bot and tweaking the code some more. Yep, read up on it, and for some dumb reason I thought the fast basic one was using Canny as well. Think my eyes were getting crossed. By the way, I also tested sending data over serial to one of the serial lines on a Teensy 3.5 (tested at 115200), and it works like a charm.

Mike

Finally got the code working for at least some baseline testing. The final output that I am sending to a Teensy 3.5 over the UART is the estimated angle of perceived gaps. Anyway, I thought I would share the code in case anyone is interested (note: not the most efficient). There's also an option to send the image over WiFi for testing purposes. I also have a VL53L0X sensor attached to the Teensy; I wanted to use it directly with the OpenMV but ran into memory problems.
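For anyone curious about the general idea, here is a hypothetical plain-Python sketch of the gap-to-angle step. This is not Mike's actual ObstacleAvoidanceMV.py: the per-column mask layout, the function name, and the 70.8° horizontal FOV are all assumptions (check your lens's product page for the real FOV).

```python
# Hypothetical sketch: given a per-column obstacle mask across the frame
# (1 = blocked, 0 = free), find the widest run of free columns and map
# its centre to a steering angle using the lens's horizontal FOV.

HFOV_DEG = 70.8  # assumed horizontal FOV; not taken from the spec sheet

def widest_gap_angle(obstacle_mask, hfov_deg=HFOV_DEG):
    """Angle in degrees of the widest gap (0 = straight ahead,
    negative = left), or None if every column is blocked."""
    best_start, best_len = None, 0
    start = None
    for i, blocked in enumerate(list(obstacle_mask) + [1]):  # sentinel run-ender
        if not blocked:
            if start is None:
                start = i
        elif start is not None:
            run = i - start
            if run > best_len:
                best_start, best_len = start, run
            start = None
    if best_start is None:
        return None
    n = len(obstacle_mask)
    centre = best_start + best_len / 2
    return (centre / n - 0.5) * hfov_deg

mask = [1, 1, 0, 0, 0, 0, 1, 0, 1, 1]   # 10 columns, widest gap is cols 2-5
print(widest_gap_angle(mask))            # gap centre left of middle -> negative angle
```

On the camera, the resulting angle would then be formatted and written out over the UART to the Teensy.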

Enjoy
Mike

UPDATE: Here is a picture of the bot.
20170604_211451.png
ObstacleAvoidanceMV.py (7.91 KB)

OK. Finally got everything at least baselined and tweaked to a point where it works, depending on a few factors I still have to work on. I made a little video of it:

Cheers.

PS> For the next version, how about lots more memory and a higher-res camera? :slight_smile: I know it's a wish list. :slight_smile:

Hi, it looks like I'll have the weekend free, so I'll get that code done for you. The next version will use the STM32H7, which has about the same horsepower as the Raspberry Pi 1. It will be able to work on VGA-res stuff with CV. We're also working on making the camera module switchable and offering a global-shutter camera. Resolution will not increase much beyond VGA, but to be honest, VGA is a pretty good resolution for CV work. Even the highest-end deep learning systems only work on about 200x200 pixels at a time.

If you'd like to share a write-up of the project, please send a PR to the OpenMV projects repo on GitHub. You can submit stuff you've done there.

Thank you. How detailed do you need the write up?

Whatever you feel comfortable with. You've found the repo, correct? Just add another folder at the top level and send a PR with whatever you've done.

Yep. Saw the donkey car so I figured it was the right one :slight_smile:.

9/3/17 = created pull request :slight_smile:

FANTASTIC!

How about a 360 degree scan?

Though it occasionally raises an index-out-of-range error (on the second line):

index0Pixels = all_indices(0, ObsArray3)
if index0Pixels[0] > 0 and ObsArray3[0] == 0:
    index0Pixels.insert(0, 0)
#print("Index of Zero Pixels: ", index0Pixels)
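One way to guard that second line, assuming all_indices is the usual "indices of all matching values" helper (its real definition isn't shown in the thread): when the frame contains no zero pixels at all, the list comes back empty and index0Pixels[0] raises IndexError, so a truthiness check first avoids the crash. A minimal sketch:

```python
# Guarding against the empty-list case. all_indices here is an assumed
# reconstruction of the helper used above.

def all_indices(value, seq):
    """Indices of every element in seq equal to value."""
    return [i for i, v in enumerate(seq) if v == value]

ObsArray3 = [1, 1, 1]                  # an all-blocked frame: no zero pixels
index0Pixels = all_indices(0, ObsArray3)
if index0Pixels and index0Pixels[0] > 0 and ObsArray3[0] == 0:
    index0Pixels.insert(0, 0)
print(index0Pixels)                    # [] -- no IndexError this time
```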

Just needs a change in one line:
img.find_edges(image.EDGE_CANNY).linpolar(reverse=False)

And a little fiddling to centre the image on the sensor. I need to check if linpolar allows x,y coords of a “centre” point.
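For intuition on why the centre point matters, here is a tiny plain-Python sketch of the per-pixel mapping that a linear-polar transform performs (this is just the maths, not the OpenMV linpolar() implementation): each sample is a (radius, angle) pair measured from the chosen centre, so an off-centre image shifts every sample.

```python
import math

def to_polar(x, y, cx, cy):
    """Map a pixel (x, y) to (radius, angle_deg) about centre (cx, cy)."""
    dx, dy = x - cx, y - cy
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# The same pixel lands at a different (r, theta) if the centre moves:
print(to_polar(80, 60, 80, 60))   # at the true optical centre -> (0.0, 0.0)
print(to_polar(80, 60, 70, 60))   # centre off by 10 px -> (10.0, 0.0)
```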

Have you considered doing a camera with an STM32H745 or STM32H747 dual core MCU?

I just got an Arduino Portenta H7 with the Vision/LoRa shield and am getting ready to load MicroPython on it. I am especially interested in how the two cores can communicate.

8-Dale

We're going to be doing a pro camera. I'm almost done with the spec; just very busy right now.

Hello, I want to talk to you about a computer vision project.