Fall detector

Hello,

I have a project using an OpenMV 7 to build a fall detector.
My fall detector is placed under the bed.
I used the Advanced Frame Differencing Example as a base to compute the difference between the background and movement, but maybe you have a better idea? :smiley:

Thanks in advance.

What are you looking for exactly? Some new object appearing on the floor? What’s the application exactly?

Here is my code:

# Advanced Frame Differencing Example (adapted)
#
# Note: You will need an SD card to run this example.
#
# This example demonstrates frame differencing with your OpenMV Cam. It is based
# on the advanced frame differencing example: the background image is blended
# with the current frame every BG_UPDATE_FRAMES frames, so slow scene changes do
# not accumulate into false detections.

THRESHOLD = (0, 100) # Unused in this script.
import sensor, image, pyb, os, time
from pyb import LED

red_led   = LED(1)
green_led = LED(2)
blue_led  = LED(3)
ir_led    = LED(4)

BG_UPDATE_FRAMES = 40 # How many frames before blending.
BG_UPDATE_BLEND = 140 # How much to blend by... ([0-256]==[0.0-1.0]).

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # or sensor.GRAYSCALE
sensor.set_framesize(sensor.QVGA) # or sensor.QQVGA (or others)
sensor.skip_frames(10) # Let new settings take effect.
sensor.set_auto_whitebal(False) # Turn off white balance.
clock = time.clock() # Tracks FPS.

thresholds = [(30, 100, 15, 127, 15, 127), # generic_red_thresholds
              (30, 100, -64, -8, -32, 32), # generic_green_thresholds
              (100, 80, -5, 5, -5, 5)] # generic_blue_thresholds

inverter = [(-15, 15, -15, 15, -15, 15)] # Unused in this script.

if "temp" not in os.listdir(): os.mkdir("temp") # Make a temp directory

print("About to save background image...")
sensor.skip_frames(60) # Give the user time to get ready.
sensor.snapshot().save("temp/bg.bmp")
print("Saved background image - Now frame differencing!")

frame_count = 0

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().

    img = sensor.snapshot() # Take a picture and return the image.
    frame_count += 1

    if frame_count > BG_UPDATE_FRAMES:
        frame_count = 0
        img.blend("temp/bg.bmp", alpha=(256-BG_UPDATE_BLEND))
        img.save("temp/bg.bmp")

    img.difference("temp/bg.bmp")
    for blob in img.find_blobs(thresholds, pixels_threshold=300, area_threshold=200):
        if blob.h() < blob.w() and blob.w() > 40:  # DETECTION: blob is wider than tall and reasonably large
            img.draw_rectangle(blob.rect(), color=(255,0,0))
            img.draw_cross(blob.cx(), blob.cy())
            print(blob.h(), blob.w(), blob.rotation())
            green_led.on()
            # BG_UPDATE_FRAMES = 0
        else :
            img.draw_rectangle(blob.rect())
            img.draw_cross(blob.cx(), blob.cy())
            green_led.off()
    # print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.

I detect new objects with frame differencing, and the application is for elderly people!
You can see in my code that I detect objects as blobs and identify a lying person by comparing the blob's width and height. (A stricter version of that check is sketched below.)
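
A minimal sketch of a stricter version of that width/height test, in the same OpenMV style as the code above. The helper name and every threshold value are assumptions to tune on real footage, not values from the post; blob.rotation() and blob.elongation() are standard OpenMV blob methods, and the sketch assumes rotation() is measured from the horizontal axis.

import math

MIN_WIDTH_PX = 40                  # assumed minimum width for a person-sized blob
MIN_ASPECT   = 1.5                 # assumed width/height ratio for a "lying" blob
MAX_TILT     = math.radians(30)    # assumed tolerance around horizontal

def looks_like_lying_person(blob):           # hypothetical helper, not in the original code
    if blob.w() < MIN_WIDTH_PX:              # ignore small blobs (noise, pets, shadows)
        return False
    if blob.w() < MIN_ASPECT * blob.h():     # require clearly wider than tall, not just wider
        return False
    r = blob.rotation()                      # major-axis angle in radians, 0..pi
    if min(r, math.pi - r) > MAX_TILT:       # reject blobs tilted far from horizontal
        return False
    return blob.elongation() > 0.5           # ~0 for round blobs, ~1 for line-like blobs

In the main loop, the DETECTION branch would then call looks_like_lying_person(blob) instead of the raw blob.h() < blob.w() and blob.w() > 40 comparison.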

Okay, so, if it’s working what’s the issue?

You asked for help improving the algorithm, but you have to give me a specific objective. I assume you're using frame differencing because it easily detects change.

I posted this because I'm looking for a way to improve the script; it works, but not very well.

I am open to any proposal to improve the code!

Okay, but I can't just tell you exactly what's wrong and improve your code without you telling me what needs fixing. You need to give me a specific problem that you are having, and I can then propose a solution. Explain the environment to me: where the camera is placed, whether the lighting changes. All these factors come into play when determining the best algorithm. Maybe frame differencing is not the right approach; for me to know, you need to tell me how you have the system set up. This involves writing a few paragraphs of text about what you are doing.

Sure, you're right.

My camera is placed under the bed. I have lighting changes (day and night), but I think I can keep the infrared LED on all the time. My system is meant to detect people on the ground.
My problem is the reliability of the system: it detects blobs with frame differencing and checks whether the blob is wider than it is tall, so it flags an elongated (horizontal) person. But I wonder if a better approach exists (see the sketch after this message)?

If you have more questions, don't hesitate!! :smiley:

Thanks for everything.
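
One thing that might help with the day/night reliability, offered as a hedged sketch rather than a definitive fix: since the scene under the bed is lit by the IR LEDs, colour carries very little information, so grayscale differencing with the sensor's auto gain, auto exposure and white balance locked keeps slow lighting drift from looking like motion. The threshold below is a placeholder to tune, and sensor.set_auto_exposure() assumes a firmware recent enough to support it.

import sensor, time
from pyb import LED

LED(4).on()                              # keep the IR LEDs on all the time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # IR-lit scenes have essentially no colour
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)            # let the auto settings settle first...
sensor.set_auto_gain(False)              # ...then freeze gain,
sensor.set_auto_exposure(False)          # exposure,
sensor.set_auto_whitebal(False)          # and white balance, so day/night drift
                                         # does not show up as "motion" in the difference image

GRAY_THRESHOLD = [(25, 255)]             # placeholder: how much a pixel must change to count

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # ...same background blend / difference / find_blobs logic as in the code above,
    # but with the single-channel GRAY_THRESHOLD instead of the RGB thresholds.

The rest of the pipeline (background blend, difference, blob filtering) stays the same; only the pixel format and the thresholds change.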

Seems like a solid algorithm for this task… So, what in particular is bad? Like the exact issue.