Distance Meter

Hi, man. I want a gauge or distance meter. :exclamation:
Is this practical with face detection? :question:
Please give me the code. :unamused:
Thanks. :slight_smile: :wink:

Please help me

Hi, can you give me some idea, in a rather longer write-up, of what you need? The camera can roughly estimate distance, but it can't truly measure it.

Hi, man. Oh yes, I want to display the approximate distance between the object and the camera during face detection. :exclamation: :question:
An approximate distance would be great. :slight_smile: :wink:
I want this distance meter on face detection. :slight_smile:

Thanks, man. Please help me. :unamused:

Please help me. :unamused:

This kind of works. You get distances which are farther as the face gets smaller… and smaller as the face gets larger.

Please put some effort into using the system. I can’t do the code for you. I was able to find a solution for this with very little effort by just googling for how to get the distance from a camera image.

# Face Detection Example
#
# This example shows off the built-in face detection feature of the OpenMV Cam.
#
# Face detection works by using the Haar Cascade feature detector on an image. A
# Haar Cascade is a series of simple area contrast checks. For the built-in
# frontalface detector there are 25 stages of checks, with each stage having
# hundreds of checks apiece. Haar Cascades run fast because later stages are
# only evaluated if previous stages pass. Additionally, your OpenMV Cam uses
# a data structure called the integral image to quickly execute each area
# contrast check in constant time (the reason feature detection is
# grayscale-only is the space requirement of the integral image).

import sensor, time, image

lens_mm = 2.8 # Standard Lens.
average_head_height_mm = 232.0 # https://en.wikipedia.org/wiki/Human_head
image_height_pixels = 240.0 # QVGA
sensor_h_mm = 2.952 # For OV7725 sensor - see datasheet.
offset_mm = 100.0 # Offset fix...

# https://photo.stackexchange.com/questions/12434/how-do-i-calculate-the-distance-of-an-object-in-a-photo
def rect_size_to_distance(r): # r == (x, y, w, h) -> r[3] == h
    return ((lens_mm * average_head_height_mm * image_height_pixels) / (r[3] * sensor_h_mm)) - offset_mm

# Reset sensor
sensor.reset()

# Sensor settings
sensor.set_contrast(1)
sensor.set_gainceiling(16)
# GRAYSCALE is required for face tracking; QVGA matches the 240-pixel image height used above.
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)

# Load Haar Cascade
# By default this will use all stages; fewer stages are faster but less accurate.
face_cascade = image.HaarCascade("frontalface", stages=25)
print(face_cascade)

# FPS clock
clock = time.clock()

while True:
    clock.tick()

    # Capture snapshot
    img = sensor.snapshot()

    # Find objects.
    # Note: Lower scale factor scales-down the image more and detects smaller objects.
    # Higher threshold results in a higher detection rate, with more false positives.
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)

    # Draw objects
    for i in range(len(objects)):
        img.draw_rectangle(objects[i])
        print("Face %d -> Distance %d mm" % (i, rect_size_to_distance(objects[i])))
        
    img.draw_cross(img.width()//2, img.height()//2, size = min(img.width()//5, img.height()//5))

    # Print FPS.
    # Note: Actual FPS is higher, streaming the FB makes it slower.
    print("FPS %f" % clock.fps())
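As a sanity check, the pinhole-camera relation used in rect_size_to_distance() can be evaluated on its own, away from the camera (the 60-pixel face height below is just an example value, not anything measured):

```python
# Pinhole-camera estimate: distance = (f * H * P) / (h_px * S) - offset, where
# f = focal length (mm), H = real object height (mm), P = image height (px),
# h_px = detected object height (px), S = sensor height (mm).
def estimate_distance_mm(h_px, lens_mm=2.8, real_height_mm=232.0,
                         image_height_px=240.0, sensor_h_mm=2.952,
                         offset_mm=100.0):
    return (lens_mm * real_height_mm * image_height_px) / (h_px * sensor_h_mm) - offset_mm

# A 60-pixel-tall face works out to roughly 780 mm from the camera.
print(round(estimate_distance_mm(60)))  # -> 780
```

Note the inverse relationship: doubling the face height in pixels roughly halves the estimated distance, which is why the detection rect alone is enough for an approximation.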




Hi man, thanks, but I did not mean that.
I want to show the distance from the target to the camera as a number drawn on the image, like this image:


The number is now drawn on the screen. It's not going to look exactly like what you want yet, since we don't support face tracking on color images. There's no technical limit here... the front end that parses images during face tracking simply doesn't convert the image to grayscale yet.

# Face Detection Example
#
# This example shows off the built-in face detection feature of the OpenMV Cam.
#
# Face detection works by using the Haar Cascade feature detector on an image. A
# Haar Cascade is a series of simple area contrast checks. For the built-in
# frontalface detector there are 25 stages of checks, with each stage having
# hundreds of checks apiece. Haar Cascades run fast because later stages are
# only evaluated if previous stages pass. Additionally, your OpenMV Cam uses
# a data structure called the integral image to quickly execute each area
# contrast check in constant time (the reason feature detection is
# grayscale-only is the space requirement of the integral image).

import sensor, time, image

lens_mm = 2.8 # Standard Lens.
average_head_height_mm = 232.0 # https://en.wikipedia.org/wiki/Human_head
image_height_pixels = 240.0 # QVGA
sensor_h_mm = 2.952 # For OV7725 sensor - see datasheet.
offset_mm = 100.0 # Offset fix...

# https://photo.stackexchange.com/questions/12434/how-do-i-calculate-the-distance-of-an-object-in-a-photo
def rect_size_to_distance(r): # r == (x, y, w, h) -> r[3] == h
    return ((lens_mm * average_head_height_mm * image_height_pixels) / (r[3] * sensor_h_mm)) - offset_mm

# Reset sensor
sensor.reset()

# Sensor settings
sensor.set_contrast(1)
sensor.set_gainceiling(16)
# GRAYSCALE is required for face tracking; QVGA matches the 240-pixel image height used above.
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)

# Load Haar Cascade
# By default this will use all stages; fewer stages are faster but less accurate.
face_cascade = image.HaarCascade("frontalface", stages=25)
print(face_cascade)

# FPS clock
clock = time.clock()

while True:
    clock.tick()

    # Capture snapshot
    img = sensor.snapshot()

    # Find objects.
    # Note: Lower scale factor scales-down the image more and detects smaller objects.
    # Higher threshold results in a higher detection rate, with more false positives.
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)

    # Draw objects
    for i in range(len(objects)):
        img.draw_rectangle(objects[i])
        img.draw_string(objects[i][0] - 16, objects[i][1] - 16, "Distance %d mm" % rect_size_to_distance(objects[i]))
        
    img.draw_cross(img.width()//2, img.height()//2, size = min(img.width()//5, img.height()//5))

    # Print FPS.
    # Note: Actual FPS is higher, streaming the FB makes it slower.
    print("FPS %f" % clock.fps())



Thank you so much, man.
That's what I wanted. :smiley:
Thank you, man. :wink:

Hi, man. I need to measure the height of a seated person and send the measurement via an ESP32 to a Raspberry Pi. :exclamation: :question:

Um, well, if you can see the face, then just calculate how far the bottom of the rect is from the bottom of the screen. This should do the trick. It's not very accurate, but it will give you a different number per person.
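A minimal sketch of that idea in plain Python (the helper name, the example rect, and the 240-pixel frame height are my assumptions; on the camera you would feed it each rect returned by find_features()):

```python
# Hypothetical helper: how far the bottom of the face rect (x, y, w, h) sits
# above the bottom of the frame, in pixels -- a rough, uncalibrated proxy for
# how high the seated person's head is.
def head_height_proxy_px(rect, frame_height=240):
    x, y, w, h = rect
    return frame_height - (y + h)

print(head_height_proxy_px((100, 40, 50, 60)))  # -> 140
```

A taller seated person's face rect ends higher in the frame, so this number grows with sitting height; calibrating pixels to millimeters would still be up to you.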

Hi man, I'm just a newbie. I tried your suggestion and it works. Now my problem is that I only want face detection in one particular place, so the only person measured is the one who is sitting. Thanks in advance. :slight_smile:

Hi, just pass roi=(x,y,w,h) to the method as an argument. This makes it operate only in that region. You can get the ROI by clicking and dragging on the frame buffer image.

Hi, man, can I ask for the code? I've tried my best but I can't make it work. Please... :pray: :unamused: Thanks, man.

img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)

Becomes:

img.find_features(face_cascade, roi=(x,y,w,h), threshold=0.75, scale_factor=1.25)

where you figure out x, y, w, and h by selecting the area in the OpenMV IDE frame buffer and writing down the ROI values.

Hi, man, thanks for the ROI. Now I can detect the face in a specific region, but I also want to measure the distance of the face from a colored marker, possibly a yellow marker, within the ROI, and output the measured distance via Bluetooth using an ESP32. Sorry, man, that I have a lot of questions. Can I have the code... please... :unamused: Thanks again...

Please don’t ask for code to copy and paste. Ask for an idea.

Anyway, just add the find_blobs() method to your code. To see how to use it, check the color tracking examples. You can determine the color thresholds using the Threshold editor under Tools->Machine Vision.

When you get the list of blobs back, just apply the standard distance formula between the centroid of each blob and the center of the face rect.
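A sketch of that step in plain Python (on the OpenMV Cam the centroid would come from the blob object returned by find_blobs(); the coordinates below are made up for illustration):

```python
import math

# Euclidean distance in pixels between a blob centroid (cx, cy) and the
# center of a face rect (x, y, w, h).
def face_to_blob_distance(rect, cx, cy):
    x, y, w, h = rect
    face_cx, face_cy = x + w / 2, y + h / 2
    return math.sqrt((cx - face_cx) ** 2 + (cy - face_cy) ** 2)

# Face centered at (30, 30), marker at (90, 110): a 60-80-100 right triangle.
print(face_to_blob_distance((10, 10, 40, 40), 90, 110))  # -> 100.0
```

This gives a pixel distance; converting it to millimeters would need the same kind of calibration as the face distance formula earlier in the thread.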

Thanks, man. I've tried the color tracking example, but it seems a colored object is not what I need as my reference, because when I tried it there were a lot of detections. What I want is a single reference that also moves with the chair during up or down adjustment. I've tried AprilTags, but my OpenMV Cam does not support them. Thanks again, man.

Um, if you can get an M7, then using an AprilTag makes this a lot easier.

Thank you very much, man. I have learned a lot... until my next question. :wink: