First off, the camera is really impressive. Nice work.
In my application, I am doing motion tracking of an IR LED. I locate the LED in the image with find_blobs and send the x and y coordinates as bytes over serial. It is critical in my project to reduce the latency between real-time movement and the bytes being sent down the USB. Currently, I am getting about 15-20 ms, at least that's the time it takes to execute my code according to clock.avg. This is fantastic and works well enough, but I want to try to optimize my code to push this execution time as low as possible.
There are three basic parts to my code; here are the times required for each part according to clock.avg:
sensor.snapshot - 15 ms
find_blobs - 3 ms
serial communication - 1 ms
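For what it's worth, the serial step really can be this cheap: the two coordinates fit in four bytes if they are packed as 16-bit integers with struct. This is just a sketch of the packing side (the actual write call, uart.write or the USB VCP, depends on your setup; the little-endian 16-bit format is my assumption):

```python
import struct

def pack_coords(x, y):
    # Pack x and y as two unsigned 16-bit little-endian values (4 bytes total).
    # On the camera you would then write this buffer to the UART or USB VCP.
    return struct.pack("<HH", x, y)

payload = pack_coords(80, 60)
print(len(payload))                    # 4 bytes per coordinate pair
print(struct.unpack("<HH", payload))   # round-trips to (80, 60)
```

A fixed-size binary frame like this also makes the receiving side trivial: read exactly 4 bytes and unpack.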
Clearly the most important part is reducing the snapshot time, if possible. Firstly, I am of course using GRAYSCALE. I found I could further reduce the readout time by dropping the resolution to QQVGA. Reducing the resolution further to QQQVGA actually seems to increase the readout time. I am also reducing the exposure time, which is necessary anyway for my bright LED. Reducing the exposure stops improving the sensor readout at a value of about 10 (what does this number represent?).
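For reference, here is roughly what my setup looks like (a sketch, not a verified listing; in particular the exposure_us keyword to sensor.set_auto_exposure and the grayscale threshold values are assumptions that should be checked against the firmware docs for your board):

```python
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # one byte per pixel vs two for RGB565
sensor.set_framesize(sensor.QQVGA)       # 160x120: fewer pixels to clock out
# Fixing a short manual exposure avoids auto-exposure settling and suits a
# bright LED. (exposure_us keyword assumed -- check set_auto_exposure docs.)
sensor.set_auto_exposure(False, exposure_us=500)
sensor.skip_frames(time=200)             # let the sensor settle after config

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # Bright-only threshold; a tight threshold and small pixel minimum keep
    # find_blobs fast by rejecting everything but the LED early.
    blobs = img.find_blobs([(200, 255)], pixels_threshold=2)
    if blobs:
        b = max(blobs, key=lambda b: b.pixels())
        # send b.cx(), b.cy() over serial here
    print(clock.avg())
```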
Are there any ways you can think of to reduce the sensor readout time further?
Does find_blobs seem like the quickest function to use for my application?
In my experience with scientific cameras, I have been able to use ‘hardware binning’ to reduce the readout time and reduce the exposure time required for sufficient signal-to-noise.
See here: http://info.adimec.com/blogposts/bid/104547/Binning-to-increase-SNR-and-frame-rate-with-CCD-and-CMOS-industrial-cameras. Is this happening to any degree when reducing the resolution? Is there any way to implement it? If not, you should consider implementing it in the future. If the entire CCD could be shifted and added into one line before reading out, that would be ideal.
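Just to illustrate what I mean by binning (with the caveat that only on-chip binning speeds up readout; doing it in software after readout only shrinks the data), here is the 2x2 case as a pure-Python sketch over a toy grayscale frame:

```python
def bin2x2(frame):
    # Sum each non-overlapping 2x2 block of pixels into one output value.
    # Frame dimensions are assumed even. Done on the sensor, this raises SNR
    # and frame rate; done in software, it only reduces the pixel count.
    h, w = len(frame), len(frame[0])
    return [[frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

frame = [[1,  2,  3,  4],
         [5,  6,  7,  8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
print(bin2x2(frame))  # [[14, 22], [46, 54]]
```

Full-column binning into a single line, as I described, would be this same idea taken to the extreme in one axis.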