Any suggestions for detecting high-speed moving objects?

Hi everyone,
I am working on a project to deter small birds (moving at high speed) before they hit a window. This is my first time using the OpenMV, and I do not have any machine learning background. So far, I have found two algorithms that could be used in my project: frame differencing and optical flow. I decided to try frame differencing first since it has example code. However, both of them only tell whether a moving object passes by. I am looking for an algorithm or method that is easy to use and able to distinguish between a small bird and a human (maybe run frame differencing and face recognition together?). Any suggestions or example code would be helpful.

Is there a way you can constrain the variables in your project? What you want to do right now is extremely hard.

Do you mean the environmental variables? The basic goal of this project is to detect whether small birds are flying toward the device. If a human walks toward the device, it should do nothing.

Um, so, you're basically saying: I want a camera in an outdoor environment to detect birds accurately and not people.

This is really hard. Lighting, clouds, etc., all make this very hard. Yes, if our system were powerful enough to run a CNN that could produce bounding boxes, then you could somewhat tackle the problem. However, lighting and so on will hurt the CNN's performance unless you collect a lot of training data yourself.

So, is there anything about your application that could make the problem easier?

Sorry for the late reply. I spent some time talking with my project partner to see whether I could reduce the difficulty of the project.

  1. The device will be mounted at an angle (around 45 degrees), so the OpenMV's field of view will only contain the sky, trees, and birds.
  2. The device only needs to run in the daytime and can sleep at night.
  3. I ran the advanced frame differencing example, and it can output True or False. My project partner suggested adding the blob functions to the advanced frame differencing to pick up and track the area of the changed pixels. If the area of changed pixels keeps getting bigger, trigger the next action (see the sketch after this list).
  4. I will also use a PIR sensor and an ultrasonic sensor to reduce uncertain detections (like humans).
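
To make item 3 concrete, here is a rough sketch of the trigger logic I have in mind (assuming the OpenMV sensor/image API; the (20, 255) threshold and the 400-pixel minimum area are guesses I would still have to tune):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)

# Hold the background frame in RAM.
extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
extra_fb.replace(sensor.snapshot())

last_area = 0
clock = time.clock()
while(True):
    clock.tick()
    img = sensor.snapshot()
    img.difference(extra_fb) # abs(new - background); unchanged pixels -> near 0
    blobs = img.find_blobs([(20, 255)], merge = True, margin = 20)
    if blobs:
        biggest = max(blobs, key = lambda b: b.area())
        if (biggest.area() > last_area) and (biggest.area() > 400): # ~20x20 px
            pass # blob is growing -> object approaching -> trigger deterrent here
        last_area = biggest.area()
    else:
        last_area = 0
    print(clock.fps())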

This is a school project, so it does not have to be too professional or accurate. The following file shows my thoughts on the operating modes of the OpenMV in this project.

Do you think this will work, or do you have a better idea?

Thank you for taking the time to think about my project; I really appreciate it.

Hi, this will actually work well then. What you are saying greatly reduces the difficulty.

I also recommend a shade over the camera to reduce sunlight and stray light.

Hi kwagyeman,

I am trying to draw a rectangle around the area of the changed pixels. For example, if I put my hand in the OpenMV's field of view, it should put a rectangle around my hand. If I throw a tennis ball toward the OpenMV, it should put a rectangle around the ball, and the rectangle should get bigger as the OpenMV keeps tracking it. In other words, I expect the OpenMV to keep tracking the area of the changed pixels (at least 20x20 or more). I have tried the blob functions, but they do not work as I expected. Are there any tutorials or examples I can learn from? Any help would be great.

Thank you

The best method for this is mean-shift or cam-shift tracking; however, we don't have a built-in method for that.

You can do frame differencing with find_blobs(). However, this only works if the background is known. If that is the case, though, then it should work.

OK, let's pretend the background is known. I know how to use find_blobs() to find a specific color block, but I have no idea how to use it to find the difference between two images. I also searched to see whether anyone had done that, but I did not find any useful info. So far, I have added the blob code to the frame differencing example. It puts a rectangle around the entire image, and the OpenMV does not react when I put my hand in front of it. I do not know what to do next.

import sensor, image, pyb, os, time

TRIGGER_THRESHOLD = 5

BG_UPDATE_FRAMES = 40 # How many frames before blending.
BG_UPDATE_BLEND = 128 # How much to blend by... ([0-256]==[0.0-1.0]).

color_threshold = (41, 100, -128, 127, -128, 127)

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
sensor.skip_frames(time = 2000) # Let new settings take effect.
sensor.set_auto_whitebal(False) # Turn off white balance.
clock = time.clock() # Tracks FPS.

if not "temp" in os.listdir(): os.mkdir("temp") # Make a temp directory

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
sensor.snapshot().save("temp/bg.bmp")
print("Saved background image - Now frame differencing!")

triggered = False

frame_count = 0
while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.
    img2 = img

    frame_count += 1
    if (frame_count > BG_UPDATE_FRAMES):
        frame_count = 0
        img.blend("temp/bg.bmp", alpha=(256-BG_UPDATE_BLEND))
        img.save("temp/bg.bmp")


    img.difference("temp/bg.bmp")


    blobs = img2.find_blobs([color_threshold], invert = True)
    if blobs:
        for b in blobs:
            img2.draw_rectangle(b[0:4])
            img2.draw_cross(b[5], b[6])


    hist = img.get_histogram()

    diff = hist.get_percentile(0.99).l_value() - hist.get_percentile(0.90).l_value()
    triggered = diff > TRIGGER_THRESHOLD



    print(clock.fps(), triggered)

Hi, you are treating images as if they were objects that you can trivially assign. They are not.

E.g. img2 = img is just making a copy of the image reference. This is not a deep copy.
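
For example (a minimal illustration; image.copy() makes a real copy, assuming there is enough free memory for a second image):

img2 = img        # both names refer to the SAME image buffer
img2 = img.copy() # an actual copy of the pixel data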

Please see the frame differencing example scripts and then add blob tracking on the final image output.

Hi!

I deleted img2 and put the blob functions at the end. It still puts a rectangle around the entire image. I set the threshold value to cover the whole picture; I do not know if that is correct.

import sensor, image, pyb, os, time

TRIGGER_THRESHOLD = 5

BG_UPDATE_FRAMES = 40 # How many frames before blending.
BG_UPDATE_BLEND = 128 # How much to blend by... ([0-256]==[0.0-1.0]).

color_threshold = (41, 100, -128, 127, -128, 127)

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
sensor.skip_frames(time = 2000) # Let new settings take effect.
sensor.set_auto_whitebal(False) # Turn off white balance.
clock = time.clock() # Tracks FPS.

if not "temp" in os.listdir(): os.mkdir("temp") # Make a temp directory

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
sensor.snapshot().save("temp/bg.bmp")
print("Saved background image - Now frame differencing!")

triggered = False

frame_count = 0
while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    frame_count += 1
    if (frame_count > BG_UPDATE_FRAMES):
        frame_count = 0
        img.blend("temp/bg.bmp", alpha=(256-BG_UPDATE_BLEND))
        img.save("temp/bg.bmp")


    img.difference("temp/bg.bmp")

    hist = img.get_histogram()

    diff = hist.get_percentile(0.99).l_value() - hist.get_percentile(0.90).l_value()
    triggered = diff > TRIGGER_THRESHOLD

    blobs = img.find_blobs([color_threshold], invert = True)
    if blobs:
        for b in blobs:
            img.draw_rectangle(b[0:4])
            img.draw_cross(b[5], b[6])

    print(clock.fps(), triggered)

(screenshot attached: result.PNG)

Oh, you are using some super old demo code.

Can you download the latest IDE and firmware and run the Examples->Frame Differencing->Simple In Memory Frame Differencing example?

Once you have that running, you will see that the movement blobs are clearly anything non-black. Then you can just set the find_blobs() color tracking thresholds to track the non-black parts.
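
For example, something like this after the difference (the (20, 255) range and pixels_threshold are just assumed starting points to tune):

blobs = img.find_blobs([(20, 255)], pixels_threshold = 4, merge = True)
for b in blobs:
    img.draw_rectangle(b.rect())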

I see. I followed your advice and made some progress. I added the blob code to the basic frame differencing example. I still have some questions. Is it common to sometimes get multiple rectangles? How can I merge these rectangles into one big one? I am also not sure whether 11 FPS is enough to catch a thrown tennis ball. Can the MT9V034 be used for frame differencing? I tried, but it gave me some errors (like not being able to turn off the white balance, and an OS error). Or is decreasing the resolution the only way to get a higher FPS?

import sensor, image, pyb, os, time

TRIGGER_THRESHOLD = 5
color_threshold = (0, 25, -128, -4, 5, 11)


sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
sensor.skip_frames(time = 2000) # Let new settings take effect.
sensor.set_auto_whitebal(False) # Turn off white balance.
clock = time.clock() # Tracks FPS.

if not "temp" in os.listdir(): os.mkdir("temp") # Make a temp directory

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
sensor.snapshot().save("temp/bg.bmp")
print("Saved background image - Now frame differencing!")

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    # Replace the image with the "abs(NEW-OLD)" frame difference.
    img.difference("temp/bg.bmp")

    hist = img.get_histogram()
    # The code below works by comparing the 99th percentile value (i.e. the
    # non-outlier max value) against the 90th percentile value (i.e. a non-max
    # value). The difference between the two grows as more pixels change in
    # the difference image.
    diff = hist.get_percentile(0.99).l_value() - hist.get_percentile(0.90).l_value()
    triggered = diff > TRIGGER_THRESHOLD

    blobs = img.find_blobs([color_threshold], invert = False, merge=True)
    if blobs:
        for b in blobs:
            img.draw_rectangle(b[0:4])
            img.draw_cross(b[5], b[6])

    print(clock.fps(), triggered) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

(screenshots attached: result_3.PNG, result_1.PNG, result_2.PNG)

You want to merge the blobs. So, set the merge=True argument and then increase the margin= value so that nearby blobs merge into one. This is for find_blobs().
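
E.g. (margin=20 is just an assumed starting value; increase it until the fragments merge):

blobs = img.find_blobs([grey_threshold], merge = True, margin = 20)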

Regarding the global shutter sensor… yes, remove RGB565 and go to grayscale, remove the auto white balance code, and so on. Also, reduce the resolution you are processing the image at. QQVGA is the best size/speed tradeoff.
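
I.e., something like this for the setup (a sketch; the MT9V034 is a monochrome global shutter sensor, so the white balance call does not apply):

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE) # the MT9V034 is monochrome
sensor.set_framesize(sensor.QQVGA) # best size/speed tradeoff
sensor.skip_frames(time = 2000)
# No sensor.set_auto_whitebal() call - white balance is a color-sensor feature.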

I tried to use grayscale, but it does not work. It looks like the frame differencing only accepts RGB565. Am I missing anything?

import sensor, image, pyb, os, time

TRIGGER_THRESHOLD = 5
#color_threshold = (0, 25, -128, -4, 5, 11)
grey_threshold = (0, 255)

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
sensor.skip_frames(time = 2000) # Let new settings take effect.
#sensor.set_auto_whitebal(False) # Turn off white balance.
clock = time.clock() # Tracks FPS.

if not "temp" in os.listdir(): os.mkdir("temp") # Make a temp directory

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
sensor.snapshot().save("temp/bg.bmp")
print("Saved background image - Now frame differencing!")

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    # Replace the image with the "abs(NEW-OLD)" frame difference.
    img.difference("temp/bg.bmp")

    hist = img.get_histogram()
    # The code below works by comparing the 99th percentile value (i.e. the
    # non-outlier max value) against the 90th percentile value (i.e. a non-max
    # value). The difference between the two grows as more pixels change in
    # the difference image.
    diff = hist.get_percentile(0.99).l_value() - hist.get_percentile(0.90).l_value()
    triggered = diff > TRIGGER_THRESHOLD

    blobs = img.find_blobs([grey_threshold], invert = False, merge=True, margin=100)
    if blobs:
        for b in blobs:
            img.draw_rectangle(b[0:4])
            img.draw_cross(b[5], b[6])

    print(clock.fps(), triggered) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

(screenshot attached: result_4.PNG)

You set your grayscale threshold to 0-255, which matches all pixels.
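
In the difference image, unchanged pixels sit near 0, so the threshold should exclude the darkest values, e.g. (an assumed starting point to tune):

grey_threshold = (25, 255) # track only clearly-changed (bright) pixels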

I can capture my hand in grayscale by changing the threshold to (0, 40). When I switch the camera to the MT9V034 and run the same code, it gives me an error and crashes (no response). The MT9V034 can still run the helloworld example.

import sensor, image, pyb, os, time

TRIGGER_THRESHOLD = 5
#color_threshold = (0, 25, -128, -4, 5, 11)
grey_threshold = (0, 40)

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # or sensor.RGB565
sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
sensor.skip_frames(time = 2000) # Let new settings take effect.
#sensor.set_auto_whitebal(False) # Turn off white balance.
clock = time.clock() # Tracks FPS.

if not "temp" in os.listdir(): os.mkdir("temp") # Make a temp directory

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
sensor.snapshot().save("temp/bg.bmp")
print("Saved background image - Now frame differencing!")

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    # Replace the image with the "abs(NEW-OLD)" frame difference.
    img.difference("temp/bg.bmp")

    hist = img.get_histogram()
    # The code below works by comparing the 99th percentile value (i.e. the
    # non-outlier max value) against the 90th percentile value (i.e. a non-max
    # value). The difference between the two grows as more pixels change in
    # the difference image.
    diff = hist.get_percentile(0.99).l_value() - hist.get_percentile(0.90).l_value()
    triggered = diff > TRIGGER_THRESHOLD

    blobs = img.find_blobs([grey_threshold], invert = True, merge=True, margin=100)
    if blobs:
        for b in blobs:
            img.draw_rectangle(b[0:4])
            img.draw_cross(b[5], b[6])

    print(clock.fps(), triggered) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

(screenshot attached: error.PNG)

Yeah… you're using an old API for frame differencing. Please update your code using the in-memory frame differencing example. Because you are pulling the background image off the disk every frame, the code runs really slowly and causes a lot of disk I/O.
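
Roughly, the in-memory version looks like this (a sketch based on that example; the (25, 255) threshold and margin=20 are assumed values to tune):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

# Keep the background frame in a second RAM frame buffer, so there are
# no "temp/bg.bmp" reads or writes inside the loop.
extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
extra_fb.replace(sensor.snapshot())

while(True):
    clock.tick()
    img = sensor.snapshot()
    img.difference(extra_fb) # in-memory frame difference
    for b in img.find_blobs([(25, 255)], merge = True, margin = 20):
        img.draw_rectangle(b.rect())
        img.draw_cross(b.cx(), b.cy())
    print(clock.fps())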