use of "copy_to_fb" in image.Image

a question on the use of “copy_to_fb” in image.Image.
To load a picture with a size of 38 KB, below is the code:

img_hzy = image.Image("/hmyface.pgm",copy_to_fb=True)

the IDE (version 1.0.0) showed an error:
“TypeError: function does not take keyword arguments”

thanks in advance,
Jeff

I think you should update your IDE to the latest version first!

Hi, please download and install the latest version of OpenMV IDE and then update your board firmware. copy_to_fb wasn’t working on anything but the latest version of the software.

I updated to the latest IDE 1.4 and the latest firmware (OpenMV2). Yes, copy_to_fb can be used now.
But “img.find_keypoints” always returns “None” when I run the IDE’s “face_tracking.py” example. It worked well in the previous version (IDE 1.0 with firmware v1.9):

kpts1 = img.find_keypoints(scale_factor=1.2, max_keypoints=100, roi=face)

By the way, the face detection was fine (drawing rectangles).

Mmm, find_keypoints was completely redone by Ibrahim in the latest firmware. He’ll have to look at that.

The ROI needs to be bigger than before; I expand it by 31 pixels in the example:

face = (objects[0][0]-31, objects[0][1]-31, objects[0][2]+31*2, objects[0][3]+31*2)
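Note that for a face near the frame border this expansion can push the ROI outside the image. A minimal pure-Python sketch of clamping the expanded ROI to the frame bounds (clamp_roi is a hypothetical helper for illustration, not an OpenMV API):

```python
def clamp_roi(rect, img_w, img_h, margin=31):
    """Expand rect=(x, y, w, h) by `margin` pixels on every side,
    then clip the result to the image bounds."""
    x, y, w, h = rect
    x0 = max(x - margin, 0)
    y0 = max(y - margin, 0)
    x1 = min(x + w + margin, img_w)
    y1 = min(y + h + margin, img_h)
    return (x0, y0, x1 - x0, y1 - y0)

# Example: a face rect near the top-left corner of a 320x240 frame
face = clamp_roi((10, 10, 50, 50), 320, 240)
print(face)  # (0, 0, 91, 91)
```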

If this still doesn’t detect keypoints, you could try a lower scale_factor:

scale_factor=1.1

and/or a lower threshold:

threshold=10
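To see why a lower scale_factor can help: ORB-style detectors build an image pyramid by shrinking the image by scale_factor at each level, and a smaller factor gives more levels (so more chances to find corners) before the image gets too small. A rough pure-Python sketch of that idea (the min_size cutoff here is an assumption for illustration, not the firmware’s actual value):

```python
def pyramid_levels(width, height, scale_factor, min_size=36):
    """Count how many pyramid levels fit before either dimension
    drops below min_size (illustrative cutoff, not the real one)."""
    levels = 0
    w, h = float(width), float(height)
    while w >= min_size and h >= min_size:
        levels += 1
        w /= scale_factor
        h /= scale_factor
    return levels

# A smaller scale_factor yields more levels on the same ROI:
print(pyramid_levels(120, 120, 1.5))  # 3
print(pyramid_levels(120, 120, 1.1))  # 13
```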

Here try this script:

import sensor, time, image

# Reset sensor
sensor.reset()
sensor.set_contrast(3)
sensor.set_gainceiling(16)
sensor.set_framesize(sensor.VGA)
sensor.set_windowing((320, 240))
sensor.set_pixformat(sensor.GRAYSCALE)

# Skip a few frames to let the sensor settle
sensor.skip_frames(80)

# Load Haar Cascade
# By default this will use all stages; fewer stages is faster but less accurate.
face_cascade = image.HaarCascade("frontalface", stages=25)
print(face_cascade)

# First set of keypoints
kpts1 = None

# Find a face!
while kpts1 is None:
    img = sensor.snapshot()
    img.draw_string(0, 0, "Looking for a face...")
    # Find faces
    objects = img.find_features(face_cascade, threshold=0.5, scale=1.5)
    if objects:
        # Expand the ROI by 31 pixels in every direction
        face = (objects[0][0]-31, objects[0][1]-31, objects[0][2]+31*2, objects[0][3]+31*2)
        # Extract keypoints using the detected face area as the ROI
        kpts1 = img.find_keypoints(threshold=5, scale_factor=1.1, max_keypoints=100, roi=face)
        # Draw a rectangle around the first face
        img.draw_rectangle(objects[0])

# Draw keypoints
print(kpts1)
img.draw_keypoints(kpts1, size=24)
img = sensor.snapshot()
time.sleep(2000)

# FPS clock
clock = time.clock()

while (True):
    clock.tick()
    img = sensor.snapshot()
    # Extract keypoints using the detect face size as the ROI
    kpts2 = img.find_keypoints(threshold=5, scale_factor=1.1, max_keypoints=100, normalized=True)

    if (kpts2):
        # Match the first set of keypoints against the second one
        c = image.match_descriptor(kpts1, kpts2, threshold=90)
        match = c[6] # c[6] contains the number of matches.
        print(match)
        if (match>10):
            img.draw_rectangle(c[2:6])
            img.draw_cross(c[0], c[1], size=10)
            print(kpts2, "matched:%d dt:%d"%(match, c[7]))

    # Draw FPS
    img.draw_string(0, 0, "FPS:%.2f"%(clock.fps()))
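For readability, the match tuple used in the loop above can be unpacked by name. A small pure-Python sketch, based only on the indices the script itself uses (c[0], c[1] for the center cross, c[2:6] for the bounding rect, c[6] for the match count, c[7] for dt); unpack_match is a hypothetical helper, not part of the OpenMV API:

```python
def unpack_match(c):
    """Name the fields of the 8-element match tuple used above:
    center (cx, cy), bounding rect (x, y, w, h), match count, and dt."""
    return {
        "cx": c[0], "cy": c[1],
        "rect": tuple(c[2:6]),
        "matches": c[6],
        "dt": c[7],
    }

# Example with a dummy tuple in the same layout:
m = unpack_match((160, 120, 130, 90, 60, 60, 42, -3))
print(m["matches"])  # 42
print(m["rect"])     # (130, 90, 60, 60)
```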

Yes, your script works.
After some comparison tests, my conclusion is that the key is “sensor.set_framesize(framesize)”.
The current “find_keypoints” has to work on a frame no smaller than HQVGA; it cannot work on QQVGA even with the “expand the ROI by 31 pixels” trick.
I think the “needs a bigger ROI” advice implies the same thing.

That’s true, I realized that after my first reply. This script wasn’t updated after a fix to the ORB code which made the ROI even smaller; that’s why I used a higher resolution. I’ll update this script in the next release. Also note that you should experiment with the corner-detector threshold and the matching threshold until you find the best tracking parameters for your application.