Detect movement and direction

Hi,
for my project I need to detect a person moving along the horizontal axis and, whenever they cross an imaginary vertical line (let's say at 1/2 of the image width) in one direction or the other, increment or decrement a counter.
In another post, frame differencing followed by blob detection was suggested for a similar problem, and by putting together parts of the sample codes I've obtained a first step of what I need.

Now my problem is to select just the one blob that is moving, and on that one work out whether blob.cx() is moving left or right and whether it crosses the imaginary line.

The problem is that img.find_blobs() returns more than one blob, but only one is moving in a significant way (see the attached video), and I have no idea how to isolate just that one and operate on its cx() value.
I haven't been able to isolate just one blob by adjusting the pixels_threshold and area_threshold parameters dynamically.

This is the merge of the sample codes I've used:

# Advanced Frame Differencing Example
#
# This example demonstrates using frame differencing with your OpenMV Cam. This
# example is advanced because it performs a background update to deal with the
# background image changing over time.

import sensor, image, pyb, os, time
high_threshold = (30, 100)
TRIGGER_THRESHOLD = 5

BG_UPDATE_FRAMES = 50 # How many frames before blending.
BG_UPDATE_BLEND = 128 # How much to blend by... ([0-256]==[0.0-1.0]).
blob_cx_trh = 100 # (currently unused)
sensor.reset() # Initialize the camera sensor.

sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
sensor.skip_frames(time = 2000) # Let new settings take effect.
sensor.set_auto_whitebal(False) # Turn off white balance.
clock = time.clock() # Tracks FPS.

# Take from the main frame buffer's RAM to allocate a second frame buffer.
# There's a lot more RAM in the frame buffer than in the MicroPython heap.
# However, after doing this you have a lot less RAM for some algorithms...
# So, be aware that it's a lot easier to run into RAM issues now. However,
# frame differencing doesn't use a lot of the extra space in the frame buffer.
# But, things like AprilTags do and won't work if you do this...
extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.RGB565)

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
extra_fb.replace(sensor.snapshot())
print("Saved background image - Now frame differencing!")

triggered = False

frame_count = 0
while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    frame_count += 1
    if (frame_count > BG_UPDATE_FRAMES):
        frame_count = 0
    
        # Blend in new frame. We're doing 256-alpha here because we want to
        # blend the new frame into the background. Not the background into the
        # new frame which would be just alpha. Blend replaces each pixel by
        # ((NEW*(alpha))+(OLD*(256-alpha)))/256. So, a low alpha results in
        # low blending of the new image while a high alpha results in high
        # blending of the new image. We need to reverse that for this update.
    
        img.blend(extra_fb, alpha=(256-BG_UPDATE_BLEND))
        extra_fb.replace(img)

    # Replace the image with the "abs(NEW-OLD)" frame difference.
    img.difference(extra_fb)

    hist = img.get_histogram()
    
    # The code below works by comparing the 99th percentile value (i.e. the
    # non-outlier max value) against the 90th percentile value (i.e. a non-max
    # value). The difference between the two values grows as more pixels
    # change in the difference image.
    
    diff = hist.get_percentile(0.99).l_value() - hist.get_percentile(0.90).l_value()
    triggered = diff > TRIGGER_THRESHOLD
    img.binary([high_threshold])


    for blob in img.find_blobs([high_threshold], pixels_threshold=10,
                               area_threshold=180, merge=False):
        print(blob)
        img.draw_keypoints([(blob.cx(), blob.cy(), 90)], size=40, color=127)

In the attachment is a screen recording of what happens.
Obviously, the only blob that I want to track is the one that is moving back and forth.

Any suggestion is welcome.

Thanks, Roberto.
20191129122519.mp4 (4.1 MB)

We now have a person detector built into the latest firmware (3.5.0) using TensorFlow; you could try it instead of blobs (try the tf_person_detection_* scripts).

Hi,
yes, I've seen the example you suggest, but as the code notes also state, it can't work in real time, or at least not as fast as I would like, and at the moment it seems too slow for my needs.
Based on the crossing of the imaginary line I need to activate or deactivate some sensors quickly.
Blob detection seems fast enough for my purpose and more versatile, because in this application the camera could be mounted on the ceiling, watching the movement of one person from above.

Thanks, Roberto.

So, is there no way to use the blobs and isolate just the moving ones?

Hi, I see your issue. Sorry for the delay, it was Thanksgiving in the USA.

Um, so, you have to write something called object filtering. Basically, you have to have a model for the world and then only track things that fit that model. You need the CV code to output a list of candidate blobs, and then, given that candidate list each frame, update a list of tracked blobs. The tracked blobs are what you actually look at.

So, for example… you'd compare the x() distance of every blob you detect each frame against the blobs you are already tracking. If the x distance is close to a previously tracked blob, you'd update that tracked blob's position with the newly detected one, assuming some score criteria are met for that detection being decent to track.

For the stationary blob… you can then filter it out by looking at the history of its movement. If you make the assumption that the x() displacement of a tracked blob should not be repeatedly zero, then you remove that blob from the tracked-blob list. This then filters out old blobs that went off screen and blobs that aren't really moving. Note that random noise will keep x() jittering by about ±1 anyway.
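
As a rough illustration, that tracker could look something like the sketch below. Everything in it (Track, update_tracks, and the three tuning constants) is invented for the example; blob.cx() is the only real OpenMV call:

MATCH_DIST = 20    # max cx distance to match a detection to a track
MIN_MOVE = 2       # avg per-frame movement below this = stationary
HISTORY_LEN = 10   # frames of cx history kept per track

class Track:
    def __init__(self, cx):
        self.cx = cx
        self.history = [cx]

    def update(self, cx):
        self.cx = cx
        self.history.append(cx)
        if len(self.history) > HISTORY_LEN:
            self.history.pop(0)

    def is_stationary(self):
        # Only judge a track once it has a full history window.
        if len(self.history) < HISTORY_LEN:
            return False
        # Sum the frame-to-frame movement; noise alone jitters cx by
        # about +/-1 per frame, so require a bit more than that.
        moved = 0
        for i in range(1, len(self.history)):
            moved += abs(self.history[i] - self.history[i - 1])
        return moved < MIN_MOVE * HISTORY_LEN

def update_tracks(tracks, candidates):
    matched = set()
    new = []
    for blob in candidates:
        # Match each detection to the nearest track within MATCH_DIST,
        # otherwise start a new track for it.
        best = None
        for t in tracks:
            d = abs(t.cx - blob.cx())
            if d < MATCH_DIST and (best is None or d < abs(best.cx - blob.cx())):
                best = t
        if best:
            best.update(blob.cx())
            matched.add(id(best))
        else:
            new.append(Track(blob.cx()))
    # Unmatched tracks log their old cx again, so blobs that left the
    # screen accumulate zero movement and eventually get pruned too.
    for t in tracks:
        if id(t) not in matched:
            t.update(t.cx)
    tracks = tracks + new
    # Drop tracks whose cx barely moved over the history window.
    return [t for t in tracks if not t.is_stationary()]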

Hi kwagyeman,
just to check whether I've understood your post: are you referring to something conceptually similar to this? (I found the example by searching for "object filter" on Google…):

# list of alphabets
alphabets = ['a', 'b', 'd', 'e', 'i', 'j', 'o']

# function that filters vowels
def filterVowels(alphabet):
    vowels = ['a', 'e', 'i', 'o', 'u']

    if(alphabet in vowels):
        return True
    else:
        return False

filteredVowels = filter(filterVowels, alphabets)

print('The filtered vowels are:')
for vowel in filteredVowels:
    print(vowel)

Thanks, Roberto.

That's more of a static filter; it doesn't update or change itself over time.

What I’m talking about is separating the code that does the detection of the blobs from the code that tracks the blobs.

The detection code just outputs a list of blob candidates. You'd noise-filter this candidate list as required, etc.

Then the tracking code takes the detection candidate list and tries to update its internal state of the world based on that list.

By writing the code in these two steps the tracking code can implement a lot of heuristics to filter out those stationary blobs you are bothered by.
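
In loop form, the two steps could look something like the sketch below. It assumes the sensor setup, extra_fb, and high_threshold from your first script, plus the hypothetical update_tracks() from my earlier post, and puts the imaginary line at half the image width:

LINE_X = sensor.width() // 2  # the imaginary vertical line
counter = 0                   # net left/right crossings
tracks = []                   # tracked blobs, carried across frames

while(True):
    img = sensor.snapshot()
    img.difference(extra_fb)   # frame difference against background
    img.binary([high_threshold])

    # Step 1 - detection: just produce a candidate list.
    candidates = img.find_blobs([high_threshold], pixels_threshold=10,
                                area_threshold=180, merge=False)

    # Step 2 - tracking: remember where each track was, update the
    # list, then check whether any surviving track crossed LINE_X.
    prev_cx = {id(t): t.cx for t in tracks}
    tracks = update_tracks(tracks, candidates)
    for t in tracks:
        p = prev_cx.get(id(t))
        if p is None:
            continue
        if p < LINE_X <= t.cx:
            counter += 1   # crossed left -> right
        elif p >= LINE_X > t.cx:
            counter -= 1   # crossed right -> left
    print("count:", counter)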

Hi,
just to have evidence of the sequence of blobs, is there any way to add a frame number inside the blob?
I've tried

 blob["Frame"] = num_frame

but it seems the blob object doesn't accept appending any value.
Thanks

MicroPython doesn’t allow you to do things like this. Objects are static and can’t be added to.

Instead, create a new tuple or object with a blob inside it and the other fields you want.
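
For example (num_frame here would be a frame counter you increment yourself):

# Carry the blob plus the extra fields in a new container instead:
tagged = (num_frame, blob)                    # as a tuple...
tagged = {"frame": num_frame, "blob": blob}   # ...or as a dict
print(tagged["frame"], tagged["blob"].cx())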