Location by OpenMV

I am planning to design a simple home-service robot that picks up garbage from the floor, so it needs an eye to see the garbage and locate it.
I wonder whether the OpenMV can be used to locate garbage on the floor, such as a lump of paper.

I am a new starter here, and I really appreciate your help.

Hi Kate,

It should be able to do this. But, can you let me know a little bit more about the situation? Like, for example, is the floor a solid color and the garbage something different? Then you can just use color tracking to find trash (track anything not equal to the floor color).
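As a plain-Python sketch of that "anything not equal to the floor color" idea (the floor color and tolerance values here are invented for illustration; on the OpenMV Cam itself you would use `img.find_blobs()` with inverted thresholds rather than looping over pixels):

```python
# Flag pixels whose color differs from the floor color by more than a
# per-channel tolerance. This just illustrates the idea on raw RGB
# tuples; it is not the OpenMV API.

FLOOR_RGB = (180, 170, 160)  # assumed floor color (hypothetical)
TOLERANCE = 40               # allowed per-channel difference (hypothetical)

def is_trash_pixel(pixel, floor=FLOOR_RGB, tol=TOLERANCE):
    """True if the pixel is 'not the floor color'."""
    return any(abs(c - f) > tol for c, f in zip(pixel, floor))

frame = [
    (182, 168, 158),  # close to the floor color
    (250, 250, 250),  # white paper -> flagged as trash
    (175, 172, 161),  # close to the floor color
]
mask = [is_trash_pixel(p) for p in frame]
print(mask)  # [False, True, False]
```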

Please let me know.

Thank you very much for helping me and answering.

About the situation, I think there would not be any limitation on color (the floor and the garbage may be the same color).

What I need the OpenMV to do is detect that there is an object on the floor (or we can say the object is above the floor and makes the floor not a flat surface).
If not by tracking color, can the OpenMV do it?
Thank you very much again~

And excuse me, I have another question about color tracking with the OpenMV: if I use color tracking to locate garbage or something on the floor, how accurately can it be located, like within 5 cm?

Thank you a lot.

Hi Kate, you can use frame differencing to detect if something changes in a static scene. If the garbage is indoors then frame differencing will work great. If there's outside light then that will complicate things. You can also mask off regions in the image with frame differencing if there are parts of the scene you don't want to look at.

Anyway, once you detect areas that have changed then you can use color tracking to find the location of all changes. As for the accuracy, that depends on the lens and the scene. The camera just sees what’s in front of it. If you’re closer to something then each pixel maps to a smaller point in the world.
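A toy sketch of the frame-differencing step in plain Python (the OpenMV Cam does this in C via `img.difference()` followed by `img.find_blobs()`; the grayscale frames and threshold below are invented for illustration):

```python
# Subtract a stored background frame from the current frame, threshold
# the per-pixel difference, and report the bounding box of the changed
# pixels -- i.e. where the new object is.

THRESH = 30  # minimum brightness change to count as "changed" (hypothetical)

def diff_bbox(background, current, thresh=THRESH):
    """Return (x, y, w, h) of the changed region, or None if unchanged."""
    xs, ys = [], []
    for y, (brow, crow) in enumerate(zip(background, current)):
        for x, (b, c) in enumerate(zip(brow, crow)):
            if abs(c - b) > thresh:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

background = [[10] * 5 for _ in range(4)]  # empty floor, 5x4 grayscale
current = [row[:] for row in background]
current[1][2] = 200                        # dropped paper shows up bright
current[2][3] = 210
print(diff_bbox(background, current))  # (2, 1, 2, 2)
```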

Hi, thanks! That is very helpful to me. "Frame differencing + color tracking", that is great!
I think I can start my design work. If I have questions during the design, I hope I can ask here and get help from you.
Thanks again.

Yes, I am here to help.

Hello kwagyeman, it's me again, Kate.
I have another question: can I use all the source code of OpenMV on an ARM development board, with just a normal camera, such as a USB camera?
I wonder whether the OpenMV source code can only run on OpenMV modules or not.
Hope I described the question clearly.
Thanks very much.

You’re looking at a significant amount of work to port our firmware to the desktop. You can test some of the algorithms a little bit in isolation but that’s not really valuable.

If you have a full Linux ARM board then I guess the OpenCV path is the way to go. But we can't provide support for you then anymore. Note that you're going to encounter the same computer vision problems with OpenCV on the Linux board as on the OpenMV Cam. More power and more resolution won't help with white balance and auto gain issues when doing color tracking.

OK, I get it.
What if I connect the OpenMV to an ARM board, and the OpenMV and the ARM board work together on the "frame differencing" and "color tracking"? If so, how do the two boards connect, by UAB or some other way? And can the video collected by the OpenMV Cam be transmitted to the ARM board through that connection?
Thanks very much again.

Sorry, I meant "USB" where I wrote "UAB" above.

What's the ARM board's job? To display video? For what purpose? Can you describe the system you have in mind?

The ARM board's job: first, obtain the video from the OpenMV, then send the video to a phone through WiFi.
Second, obtain the location of an object which is located by the OpenMV, then control the robot to move to the object.
Thanks again.

Okay, the best way to transmit video to the ARM board will then be via the OpenMV Cam's SPI bus. We can output JPEG-compressed images on the SPI bus at up to 48 MHz. The code to do this is very simple. The whole JPEG image can be transferred by doing one SPI transaction to transmit the length of bytes to send first and then another SPI transaction to transmit the image. I can provide the Python code for how to do this. The OpenMV Cam can also provide tracking information via the SPI bus.
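A sketch of that framing as the ARM-board side would see it: a fixed-size header of 4-byte big-endian integers (width, height, byte count), followed by the JPEG bytes. The exact header layout here is an assumption for illustration; `struct` lets the receiver unpack it in one call:

```python
# Pack and unpack the "header first, then image" SPI framing.
import struct

HEADER_FMT = ">III"  # width, height, size -- each a big-endian uint32
HEADER_LEN = struct.calcsize(HEADER_FMT)  # 12 bytes total

def frame_image(width, height, jpeg_bytes):
    """Build header + payload as it would appear on the wire."""
    return struct.pack(HEADER_FMT, width, height, len(jpeg_bytes)) + jpeg_bytes

def parse_frame(data):
    """ARM-board side: split a received frame back into its parts."""
    width, height, size = struct.unpack_from(HEADER_FMT, data)
    payload = data[HEADER_LEN:HEADER_LEN + size]
    return width, height, payload

wire = frame_image(320, 240, b"\xff\xd8...jpeg...\xff\xd9")
print(parse_frame(wire)[:2])  # (320, 240)
```

In practice the receiver would read the 12 header bytes in one SPI transaction, then issue a second transaction of exactly `size` bytes for the image.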

Okay, got it.
Another question: does the OpenMV Cam need an SPI bus driver? If so, do I need to program it?
And could you please send me the Python code?
Thanks really really much~

Not sure why you are using it like a webcam because it can do the processing on board… but, okay, I’ll send you the code tonight or tomorrow night.

Hi, sorry for not getting around to this until now. It helps if you remind me on the weekends. Otherwise I forget.

import sensor, image, time, pyb
from pyb import Pin, SPI

cs = Pin("P3", Pin.OUT_PP)
cs.high() # Deselect the receiver until we have a frame to send.

# The hardware SPI bus for your OpenMV Cam is always SPI bus 2.
spi = SPI(2, SPI.MASTER, baudrate=4000000, polarity=0, phase=0)

def send_image_format(img):

    # Send width
    spi.send((img.width() >> 24) & 0xFF)
    spi.send((img.width() >> 16) & 0xFF)
    spi.send((img.width() >> 8) & 0xFF)
    spi.send((img.width() >> 0) & 0xFF)

    # Send height
    spi.send((img.height() >> 24) & 0xFF)
    spi.send((img.height() >> 16) & 0xFF)
    spi.send((img.height() >> 8) & 0xFF)
    spi.send((img.height() >> 0) & 0xFF)

    # Send size
    spi.send((img.size() >> 24) & 0xFF)
    spi.send((img.size() >> 16) & 0xFF)
    spi.send((img.size() >> 8) & 0xFF)
    spi.send((img.size() >> 0) & 0xFF)


def send_image(img):
    cs.low() # Select the receiver.
    send_image_format(img) # Header first...
    spi.send(img) # ...then the JPEG data itself.
    cs.high() # Deselect the receiver.

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(10) # Let new settings take effect.
clock = time.clock() # Tracks FPS.

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.
    img = img.compressed() # comment out to send uncompressed images...
                           # you'll have to increase the baud rate however.
    send_image(img)

    print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

The code is pretty simple to follow. You can increase the SPI clock rate to make it much faster. You can also perform image manipulation, etc. before sending the image.
send_image.py (1.61 KB)