Detect moving object

Hi,

I want to detect all moving objects in my image, for example to count the cars passing by. I also want to get the biggest moving object.
In which direction should I look? Should I go for blob detection? I don't want to use color thresholds.

Regards,

Jan

Frame differencing followed by find_blobs() will work for what you want to do. However, shadows and other things will trip you up. For quick results, try frame differencing followed by blob detection.
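To make the idea concrete: frame differencing is just per-pixel absolute difference against a stored background, followed by a threshold. Here is a minimal pure-Python sketch using 2-D lists as stand-in grayscale frames (not the OpenMV API; on the camera, img.difference() and find_blobs() do this on the framebuffer):

```python
# Frame differencing in miniature: 2-D lists stand in for grayscale frames.
def frame_diff_mask(old, new, threshold=20):
    """Return a binary mask: 1 where a pixel changed by more than threshold."""
    return [[1 if abs(n - o) > threshold else 0 for o, n in zip(row_o, row_n)]
            for row_o, row_n in zip(old, new)]

background = [[10, 10, 10], [10, 10, 10]]
current    = [[10, 90, 10], [10, 10, 10]]  # one pixel changed: a "moving object"
mask = frame_diff_mask(background, current)
```

The 1-pixels in the mask are what blob detection then groups into connected regions.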

I like optical flow for applications like this. Unlike frame differencing, OF will actually give you the direction and speed of the moving objects in addition to their mere detection. OF is built into the OpenMV APIs I believe. Just dig through the examples some.
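For intuition on what optical flow recovers, here is a toy brute-force block-matching sketch in plain Python (a conceptual illustration only, not the OpenMV implementation, which uses phase correlation internally): it searches for the displacement that best aligns two frames.

```python
# Toy optical flow via exhaustive block matching on 2-D lists.
def estimate_shift(f1, f2, max_d=2):
    """Find the (dy, dx) displacement minimizing the sum of absolute
    differences (SAD) between frame f1 and frame f2 shifted back by it."""
    h, w = len(f1), len(f1[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-max_d, max_d + 1):
        for dx in range(-max_d, max_d + 1):
            sad = 0
            # Only compare the region where both frames overlap.
            for y in range(max(0, -dy), min(h, h - dy)):
                for x in range(max(0, -dx), min(w, w - dx)):
                    sad += abs(f1[y][x] - f2[y + dy][x + dx])
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best

f1 = [[0] * 8 for _ in range(8)]
f2 = [[0] * 8 for _ in range(8)]
for y in (2, 3):
    for x in (2, 3):
        f1[y][x] = 9      # bright 2x2 patch in frame 1
        f2[y][x + 1] = 9  # same patch shifted one pixel right in frame 2
shift = estimate_shift(f1, f2)  # (dy, dx)
```

On real frames this search is done per-block, giving a direction and speed per region, which is exactly the extra information frame differencing can't provide.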

Hello, I am trying the In Memory Basic Frame Differencing example with find_blobs().
Can you help me? The differencing works, but the blob detection does not.

# In Memory Basic Frame Differencing Example

import sensor, image, pyb, os, time, math

TRIGGER_THRESHOLD = 5
thresholds = [(11, 100, -128, 127, -128, 127)]

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # grayscale is enough for differencing
sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
sensor.skip_frames(time = 5000) # Let new settings take effect.
sensor.set_auto_whitebal(False) # Turn off white balance.
clock = time.clock() # Tracks FPS.
sensor.set_auto_exposure(True, exposure_us=10000) # shutter module

extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
extra_fb.replace(sensor.snapshot())
print("Saved background image - Now frame differencing!")

while(True):
    clock.tick() # Track elapsed milliseconds between snapshot() calls.
    img = sensor.snapshot() # Take a picture and return the image.

    # Replace the image with the "abs(NEW-OLD)" frame difference.
    hah = img.difference(extra_fb)

for blob in hah.find_blobs(thresholds, pixels_threshold=1, area_threshold=1):
    # These values depend on the blob not being circular - otherwise they will be shaky.
    if blob.elongation() > 0.5:
        hah.draw_edges(blob.min_corners(), color=(255,0,0))
        hah.draw_line(blob.major_axis_line(), color=(0,255,0))
        hah.draw_line(blob.minor_axis_line(), color=(0,0,255))
    # These values are stable all the time.
    hah.draw_rectangle(blob.rect())
    hah.draw_cross(blob.cx(), blob.cy())
    # Note - the blob rotation is unique to 0-180 only.
    hah.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)

    print(clock.fps())

Hi, are the color tracking thresholds set right? Please look at the difference image and use Tools -> Machine Vision -> Threshold Editor to get the right color thresholds for the difference image. Once you have those, apply them to the script and things should start to work.

Um, from your code the indentation is wrong. The find_blobs() loop sits outside the while loop, so it never gets executed. Please check your Python indentation.

Ok, problem solved, very easy to detect all objects.
Global Shutter module: 37 fps with USB streaming, 73 fps without.

import sensor, image, pyb, os, time, math

thresholds = [(11, 100, -128, 127, -128, 127)]

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()
#sensor.set_auto_exposure(True, exposure_us=10000) # shutter module

extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
extra_fb.replace(sensor.snapshot())
print("Saved background image - Now frame differencing!")


while(True):
    clock.tick()
    img = sensor.snapshot()
    img.difference(extra_fb)

    for blob in img.find_blobs(thresholds, pixels_threshold=200, area_threshold=200):
        # These values depend on the blob not being circular - otherwise they will be shaky.
        if blob.elongation() > 0.5:
            img.draw_edges(blob.min_corners(), color=(255,0,0))
            img.draw_line(blob.major_axis_line(), color=(0,255,0))
            img.draw_line(blob.minor_axis_line(), color=(0,0,255))
        # These values are stable all the time.
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        # Note - the blob rotation is unique to 0-180 only.
        img.draw_keypoints([(blob.cx(), blob.cy(), int(math.degrees(blob.rotation())))], size=20)
    print(clock.fps())
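To answer the original question's other half, picking the biggest moving object: each blob returned by find_blobs() reports its size via pixels(), so max() with a key selects the largest. The Blob stub below is purely illustrative so the logic can run off-camera; on the device you would pass the real list from find_blobs():

```python
# Stub standing in for the blob objects find_blobs() returns on the camera.
class Blob:
    def __init__(self, pixels):
        self._pixels = pixels
    def pixels(self):
        return self._pixels

def biggest_blob(blobs):
    """Return the blob covering the most pixels, or None if the list is empty."""
    return max(blobs, key=lambda b: b.pixels(), default=None)

blobs = [Blob(50), Blob(300), Blob(120)]
big = biggest_blob(blobs)
```

In the loop above you would call this on the find_blobs() result and, for example, draw only big.rect().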

So it's 37 fps with USB on,
and without USB it will be 2 * 37 fps = 74 fps?

Um, just click the Disable button at the top right of the IDE to see the frame rate without the burden of streaming JPEG images to the PC.

Hmm, it's almost the same: an average of 50 fps in QVGA (with and without USB) and 100+ fps in QQVGA.