Obstacle Detection and Avoidance

Real newbie here with machine vision/image processing. Just received my board the other day with the hopes of using it to detect obstacles and avoid them. Right now I am using multiple sonar and IR sensors. Was doing a bunch of research on methods to use monocular vision with OpenCV and it appears totally doable but not sure how to implement on OpenMV.

For instance I found this project, Obstacle detection using OpenCV | Big Face Robotics, that uses OpenCV with a webcam and several of its functions, but I don’t think they are all currently supported. I was also interested in an approach that detects the ground (floor) plane through segmentation using the Eigen transform; I think it was the cv function svd. I guess it is essentially for texture mapping.

Again just learning here so forgive if I got my terminology wrong.

Any help in getting started would be appreciated.

thanks
Mike

Hi, so, are you looking to code this in C? Or use the python interface?

If in the python interface, you just need to blur the image via the mean() method and then call the canny method on the image to turn it into an edge map. Once you do this you can manually look at image pixels in python to implement the rest of the algorithm described in the link.

While accessing the image in python is slow… you don’t have to check every pixel, just every 10-20 pixels. So, things should run quickly. Once you’ve created a list of image heights from the floor you then just test the difference between two measurements and threshold to get an idea of where obstacles are.

Alternatively, if you know the color of the floor you can just use find_blobs() with the thresholds inverted, i.e. look for blobs that are not the floor color.
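E.g. something like this (just a rough sketch — you’d use RGB565 for this, and the LAB threshold below is a placeholder you’d tune to your actual floor color):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(30)

floor_threshold = (30, 70, -20, 20, -20, 20)   # placeholder (L, A, B) bounds for the floor

while(True):
    img = sensor.snapshot()
    # invert=True returns blobs that do NOT match the floor color.
    for blob in img.find_blobs([floor_threshold], invert=True,
                               pixels_threshold=100, area_threshold=100):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())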

Lots of different ways to tackle this… all have their trade-offs.

@kwagyeman Thanks for the reply. I finally hooked up the camera and it’s great, I love it. Examples are plentiful illustrating the use of the commands. Anyway, I want to implement everything using the python interface. I started using the median filter and edge detection methods and am beginning to get a rudimentary understanding of how everything works.

Thanks to your comments, it pointed me in the right direction for the commands :slight_smile:. Anyway, if I pick up a good book on OpenCV, would that be of help for a newbie like me? Last time I did anything with image processing it was on an IBM 360 about 35 years ago, so anything I learned has been forgotten.

Thanks
Mike

Cool, also, let me know if you need me to make anything in particular “faster”. I plan to improve a lot of the library code… but, like anything, I have finite time and focus on fighting the fires people tell me about first.

I’ll be improving the line detection code next.

Me again. I started coding the method but ran into a memory allocation error. The exact error is MemoryError: memory allocation failed, allocating 16384 bytes.

Here is the code that I am using; any suggestions would be appreciated. I also have a couple of questions: how can I overlay the captured image with the edge-detected image, or with the results of the EdgeArray?


# Untitled - By: CyberPalin - Mon May 1 2017


import sensor, image, time, array

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
sensor.set_framesize(sensor.QQVGA) # or sensor.QVGA (or others)
sensor.skip_frames(30) # Let new settings take effect.
sensor.set_gainceiling(8)
clock = time.clock() # Tracks FPS.

StepSize = 8

EdgeArray = []

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    # The only argument is the kernel size. N corresponds to a ((N*2)+1)^2
    # kernel size. E.g. 1 == 3x3 kernel, 2 == 5x5 kernel, etc. Note: You
    # shouldn't ever need to use a value bigger than 2.
    img.mean(1)

    # Use Canny edge detector
    img.find_edges(image.EDGE_CANNY, threshold=(50, 80))

    imagewidth = img.width() - 1
    imageheight = img.height() - 1

    print(imagewidth)
    print(imageheight)

    for j in range(0,imagewidth, StepSize):    #for the width of image array
        for i in range(imageheight-5,0,-1):    #step through every pixel in height of array from bottom to top
                                               #Ignore first couple of pixels as may trigger due to undistort
            if img.get_pixel(i,j) == 255:      #check to see if the pixel is white which indicates an edge has been found
                EdgeArray.append((j,i))        #if it is, add x,y coordinates to ObstacleArray
                break                          #if white pixel is found, skip rest of pixels in column
            else:                              #no white pixel found
                EdgeArray.append((j,0))        #if nothing found, assume no obstacle. Set pixel position way off the screen to indicate
                                               #no obstacle detected


    #for x in range (len(EdgeArray)-1):      #draw lines between points in ObstacleArray
    #    cv2.line(img, EdgeArray[x], EdgeArray[x+1],(0,255,0),1)
    #for x in range (len(EdgeArray)):        #draw lines from bottom of the screen to points in ObstacleArray
    #    cv2.line(img, (x*StepSize,imageheight), EdgeArray[x],(0,255,0),1)


    #print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

Hi,

Which line are you running out of memory on? EdgeArray something? It looks like you’re doing a double loop on that, so it’s growing too big. The camera has enough RAM for you to do one pass on the image, not w*h passes. For each horizontal position you should walk up from the bottom pixel until you hit a white line and record the distance walked. Your current code doesn’t do that… instead it’s trying to store most of the image in the few KB of heap space available to your MicroPython program.

As for overlaying the image… this is tricky. Because the OpenMV Cam is a microcontroller, we don’t really have enough RAM to keep two large copies of an image in memory. If you really want to do an image overlay you have to save the captured image to disk, then read it back in:

img = sensor.snapshot()
img.save("/temp/bg.bmp")
# modify image
img.blend("/temp/bg.bmp", alpha=127)

There are other functions than blend which can be used for better masking effects.
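For instance (again just a sketch — this assumes the binary-op methods accept a saved image path the same way blend() does):

img = sensor.snapshot()
img.save("/temp/bg.bmp")
img.find_edges(image.EDGE_CANNY, threshold=(50, 80))
# AND the edge map against the saved original: original pixels are kept only
# where the edge map is white (255), everything else goes to black.
img.b_and("/temp/bg.bmp")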

Yes, I know the above really isn’t ideal :wink:. That said, in mission operation you don’t need to do this so it’s not that big of a deal when you deploy the system.

Hi Nyamekye

Thanks for getting back to me. Right now I am just trying to implement the code as written in the link. But I think your way is the way to go in the end. You are right about the overlaying of images. Again just trying to see how he implemented the process, what the results are and the overlay is really just for debugging.

Running into another problem right now that has to do with python programming versus the camera image. The EdgeArray variable, I think, is a series of tuples, each one a coordinate pair. What I have to do is break each pair down into separate x0, y0, x1, y1 for draw_line, and I can not seem to get it through my head how to do that.

This is where I run into problems:

    for x in range (len(EdgeArray)-1):      #draw lines between points in ObstacleArray
        #img.draw_line(EdgeArray[x], EdgeArray[x+1],color = 0)
        #img.draw_line(EdgeArray[([x][0])], EdgeArray[([x][1])], EdgeArray[([x+1][0])], EdgeArray[([x+1][1])], color=0)
        print(EdgeArray()[[x]])

I got around the memory issue by calling sensor.snapshot() selectively and also by emptying the EdgeArray after the loop finishes by redefining it, in other words doing EdgeArray = [].

Thanks for your help
Mike

Try this out:

import sensor, image, time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # Set pixel format to GRAYSCALE.
sensor.set_framesize(sensor.QQVGA)   # Set frame size to QQVGA (160x120).
sensor.skip_frames(10)              # Wait for settings to take effect.
clock = time.clock()                # Create a clock object to track the FPS.

while(True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    img.mean(1)
    img.find_edges(image.EDGE_CANNY)
    img.erode(1, threshold = 2)
    
    EdgeArray = []
    
    for x in range(img.width()):
        
        flag = False
        for y in range(img.height()):
            z = img.height() - 1 - y
            if img.get_pixel(x, z):
                EdgeArray.append((x, z))
                flag = True
                break
                
        if not flag:
            EdgeArray.append((x, 0))
        
    old = None 
    for i in range(len(EdgeArray)):
        if old != None:
            img.draw_line((old[0], old[1], EdgeArray[i][0], EdgeArray[i][1]), color = 127)
        old = EdgeArray[i]

    print(clock.fps())

This basically does what you want. You can see the contour of the edges in light gray. The next step is then to derive some insight from the points around the bottom edge of the image.
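As an aside, since your earlier question was how to split the coordinate pairs into x0, y0, x1, y1: each EdgeArray entry is just a tuple, so you can also unpack it instead of indexing, e.g.:

for i in range(1, len(EdgeArray)):
    x0, y0 = EdgeArray[i - 1]   # previous point
    x1, y1 = EdgeArray[i]       # current point
    img.draw_line((x0, y0, x1, y1), color = 127)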

Note that scanning through the pixels one pixel at a time kills the performance. That’s why this runs at about 3 FPS on the M7. If you skip multiple horizontal pixels the speed will increase by a lot.
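Roughly, the only change needed in the scan above is the outer loop step, e.g. (the draw_line part keeps working because EdgeArray still stores real x coordinates):

StepSize = 8                                     # sample every 8th column

EdgeArray = []
for x in range(0, img.width(), StepSize):
    edge_y = 0
    for y in range(img.height() - 1, -1, -1):    # walk up from the bottom row
        if img.get_pixel(x, y):                  # first white (edge) pixel
            edge_y = y
            break
    EdgeArray.append((x, edge_y))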

For finding a symbol just use the find_apriltags() method. This finds any AprilTag in the image regardless of lighting, scale, shear, rotation, etc.
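E.g. (a minimal sketch — AprilTags are memory hungry, so QQVGA grayscale is about the right size):

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(30)
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    for tag in img.find_apriltags():             # defaults to the TAG36H11 family
        img.draw_rectangle(tag.rect(), color = 127)
        print("Tag %d at (%d, %d)" % (tag.id(), tag.cx(), tag.cy()))
    print(clock.fps())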

Nyamekye

Wow, thanks for taking the time to adjust my code. Great learning for me. Will give it a try first thing in the morning.

Thanks again
mike

Nyamekye

Been playing with it for a couple of hours now and it works great. I changed the step size to 8 and got a 7.5 FPS update rate. I also added the vertical lines in the image (actually helps). Been moving around obstacles to see what it gives me. Rather fun. Now the fun stuff starts with determining obstacle criteria based on the image data.

Will keep you posted on how it goes.

Mike

Nyamekye

Thought you would like to see some preliminary results. I took the data and put it into an Excel spreadsheet so I could determine the best approach to the calculations. Here is an image and the results. Sorry the image is so big. Next time smaller.

Cool, looks like you have a hard problem ahead of you though for actually detecting the objects. Seems like you can just threshold the difference between two points to determine object start and stop. A large negative difference is a start, and a large positive difference is a stop. Just toggle a variable when either condition happens.
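Something along these lines (a sketch — heights would be the y values out of your EdgeArray, and DIFF_THRESHOLD is a made-up constant you’d have to tune):

DIFF_THRESHOLD = 20

def find_obstacles(heights):
    obstacles = []
    in_obstacle = False
    start = 0
    for i in range(1, len(heights)):
        diff = heights[i] - heights[i - 1]
        if (not in_obstacle) and (diff < -DIFF_THRESHOLD):
            in_obstacle = True              # large negative difference -> obstacle starts
            start = i
        elif in_obstacle and (diff > DIFF_THRESHOLD):
            in_obstacle = False             # large positive difference -> obstacle stops
            obstacles.append((start, i))
    return obstacles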

That’s pretty close to what I am doing. I added a few more tests to handle places where there are spikes in the data. Got it coded up now in python and am drawing circles where the obstacle edges are. Just one question: is there an easy way to put it back in color mode and then draw circles where the bottom edge of a potential obstacle is? I was able to put it back into color mode, but when I went to draw the circles it wouldn’t work.

There is also one drawback: if it is a wall, the method doesn’t work. Planning on adding a time-of-flight sensor, or a laser light hitting an object along the line of sight of the lens :slight_smile: Haven’t got that far yet.

Thanks
Mike

If you switch the pixformat to RGB565 and then snapshot(), the image will be in color. You can then draw in color again. Note that your frame rate will suffer doing this. Only enable this for debug.
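I.e. something like this for the debug overlay (a sketch — obstacle_points is a hypothetical list of (x, y) bottom-edge points from your detection step):

sensor.set_pixformat(sensor.RGB565)              # switch to color for debugging only
sensor.skip_frames(10)                           # let the new format take effect

img = sensor.snapshot()
for (x, y) in obstacle_points:
    img.draw_circle(x, y, 4, color = (255, 0, 0))   # red circle at each obstacle edge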

Hi
I did that, but what happens is the data points are not persistent; in other words, it draws one circle, then draws the next circle, but the first circle gets erased.

Mike

Yeah, that will happen switching between color spaces.

Um, when I get around to it I’ll make canny accept an RGB565 image. Until then, please use that function with just grayscale.

Nyamekye,

Not a problem. No rush. Using it with grayscale.

Thanks
Mike

PS. Just bought the protoshield, wifi shield and wide angle lens. Having fun now.

In prep for sending data for further processing I noticed that if I do a print to the serial terminal of two arrays back to back, i.e.,

print(ObsArray3)
print(EdgeArray)

they will only partially print, and only the last print comes through:

 152, 0), (156, 68)]
 0), (156, 73)]
(156, 0)]

If I put a 20 ms delay between the two it will work. This is what should be printed:

[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
[(0, 0), (4, 0), (8, 9), (12, 0), (16, 0), (20, 0), (24, 0), (28, 0), (32, 0), (36, 0), (40, 0), (44, 0), (48, 0), (52, 0), (56, 0), (60, 0), (64, 0), (68, 0), (72, 0), (76, 0), (80, 0), (84, 0), (88, 0), (92, 0), (96, 0), (100, 0), (104, 0), (108, 113), (112, 112), (116, 0), (120, 0), (124, 0), (128, 0), (132, 0), (136, 0), (140, 0), (144, 0), (148, 0), (152, 0), (156, 0)]

Is there a way around this or is this a bug?

It’s not a bug. When printing via the USB debug feature built into the IDE, only a maximum number of bytes can be printed per unit of time. The IDE services the print buffer every 20 ms or so. Print less data, less quickly, and you’ll be able to see it all.

Um, for large debug data sets you may wish to print to an SD card, or hook up a serial-to-USB cable to the OpenMV Cam’s serial port and print via that.
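For example (a sketch — the file name is arbitrary, and the UART number/baud rate are assumptions, so check your board’s pinout):

# Log to a file on the SD card:
with open("/edges.csv", "a") as f:
    f.write(str(ObsArray3) + "\n")
    f.write(str(EdgeArray) + "\n")

# Or push the data out the hardware serial port instead:
from pyb import UART
uart = UART(3, 115200)
uart.write(str(ObsArray3) + "\n")
uart.write(str(EdgeArray) + "\n")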

To fix this issue we’d need to make the print function block when there’s no buffer space available… Thus reducing FPS…

Question, do you think that would be better or worse, making print block on text output if there’s no space available because the IDE hasn’t read the text yet from a USB buffer?