Decrease time for sensor.snapshot and find_blobs?

First off, the camera is really impressive. Nice work.

In my application, I am doing motion tracking of an IR LED. I locate the LED in the image with find_blobs and send the x and y coordinates as bytes over serial. It is critical in my project to minimize the latency between real-time movement and the bytes being sent down the USB. Currently, I am getting about 15-20 ms, at least that's the time it takes to execute my code according to clock.avg. This is fantastic and works well enough, but I want to optimize my code to push this execution time as low as possible.
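For reference, the serial step in my loop looks roughly like this (a minimal sketch; the pyb.USB_VCP interface is standard on the OpenMV Cam, but the 4-byte packet layout is just my own convention):

import pyb, ustruct

usb = pyb.USB_VCP()  # USB virtual COM port on the OpenMV Cam

def send_coords(x, y):
    # Pack x and y as two little-endian unsigned 16-bit values.
    usb.send(ustruct.pack("<HH", x, y))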

There are three basic parts to my code; here are the times required for each part according to clock.avg:

sensor.snapshot - 15 ms
find_blobs - 3 ms
serial communication - 1 ms
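(For anyone curious how I measured the individual stages: clock.avg only gives the whole-loop average, so I bracketed each stage with ticks, roughly like the sketch below. time.ticks_us and time.ticks_diff are standard MicroPython; the threshold values are just placeholders.)

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

t0 = time.ticks_us()
img = sensor.snapshot()
t1 = time.ticks_us()
blobs = img.find_blobs([(245, 255)], pixels_threshold=100, area_threshold=100)
t2 = time.ticks_us()
print("snapshot:   %d us" % time.ticks_diff(t1, t0))
print("find_blobs: %d us" % time.ticks_diff(t2, t1))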

Clearly the most important target is reducing the snapshot time, if possible. First, I am of course using GRAYSCALE. I found I could further reduce the readout time by dropping the resolution to QQVGA; oddly, dropping it further to QQQVGA actually seems to increase the readout time. I am also reducing the exposure time, which is necessary anyway for my bright LED. Reducing the exposure stops improving the sensor readout at a value of about 10 (what does this number represent?).
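For reference, my setup code looks roughly like this (just a sketch; set_auto_exposure with an exposure_us argument is the documented API, but whether the "10" I pass really means microseconds is exactly what I'm asking):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)           # grayscale readout
sensor.set_framesize(sensor.QQVGA)               # 160x120; QQQVGA was slower for me
sensor.set_auto_exposure(False, exposure_us=10)  # the "10" whose meaning I'm asking about
sensor.skip_frames(time=2000)                    # let the settings take effect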

My questions:
Are there any ways you can think of to reduce the sensor readout time further?
Does find_blobs seem like the quickest function to use for my application?

In my experience with scientific cameras, I have been able to use 'hardware binning' to reduce both the readout time and the exposure time required for a sufficient signal-to-noise ratio.
See here: Binning to increase SNR and frame rate with CCD and CMOS industrial cameras - Adimec. Is this happening to any degree when reducing the sensor resolution? Is there any way to implement it? If not, you should consider implementing it in the future. If the entire CCD could be shifted and added into one line before reading out, that would be ideal.
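In the meantime, the closest software analog I can see in the image docs is pixel pooling, which averages blocks of pixels after readout. Something like the sketch below might emulate 2x2 binning for the processing side, though it obviously cannot shorten the readout itself (I'm assuming the mean_pool method here):

import sensor
# (sensor setup as above omitted)
img = sensor.snapshot()
img.mean_pool(2, 2)  # averages each 2x2 block into one pixel, like 2x2 binning after readout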

Hi, the frame rate of the camera is locked at 120 FPS for resolutions below VGA and 60 FPS for VGA. At 120 FPS each frame comes out every 8.33 ms. Processing an image usually takes a little time on top of the capture, so you effectively wait about two frame periods (2 * 8.33 = ~16 ms), which puts your number right on the mark. I'd say you're pushing the system about as fast as it can go. We're not a scientific camera; the camera IC is the OV7725, which is just a standard cell-phone camera sensor. Nothing special.

Anyway, you should be able to get above 50 FPS easily. 60 to 80 FPS is doable with find_blobs(). What's likely slowing you down is that you're dropping frames. To avoid this, first click the disable frame buffer button in the top right-hand corner of the IDE. This greatly reduces the work the OpenMV Cam has to do JPEG-compressing frames for the IDE preview. Afterward, use QQVGA or QVGA as your resolution.

# Single Color Grayscale Blob Tracking Example
#
# This example shows off single color grayscale tracking using the OpenMV Cam.

import sensor, image, time

# Color Tracking Thresholds (Grayscale Min, Grayscale Max)
# The below grayscale threshold is set to only find extremely bright white areas.
thresholds = (245, 255)

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change
# the camera resolution. "merge=True" merges all overlapping blobs in the image.

while(True):
    clock.tick()
    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds], pixels_threshold=100, area_threshold=100, merge=True):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
    print(clock.fps())

The code above gets 85 FPS when the frame buffer is disabled.