Doing some more work on this, with just a single position capture and working with only the main frame buffer instead of copying to different buffers, and viewing the result in the IDE instead of on a LCD screen, I came across some oddities.
The first is that triggered mode seems to cause the image shown in the IDE to be stale; each time I trigger the snapshot, the IDE shows the result from the previous trigger. This can be worked around by calling `print(img.compressed_for_ide())` after processing.
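A minimal form of the workaround in isolation (a sketch, not my full script; the explicit print pushes the just-captured frame to the IDE rather than letting it display the stale one):

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=1000)
sensor.ioctl(sensor.IOCTL_SET_TRIGGERED_MODE, True)

img = sensor.snapshot()
# ... blob-finding / processing here ...
print(img.compressed_for_ide())  # send this frame to the IDE now
```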
I also found that calling `to_rgb565()` after all the capture and blob-finding is done appears to somehow affect the trigger timing. Here is a minimal case:
```python
import sensor, image, time
from pyb import ExtInt, Pin

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=1000)
sensor.ioctl(sensor.IOCTL_SET_TRIGGERED_MODE, True)
sensor.set_auto_exposure(False, 400)

cap = 0

def callback(line):
    global cap
    cap = 1

ext = ExtInt(Pin('P7'), ExtInt.IRQ_RISING, Pin.PULL_DOWN, callback)

light = Pin('P9', Pin.OUT_PP)
light.low()

while True:
    if cap == 1:
        light.high()
        sensor.snapshot()
        img = sensor.snapshot()
        light.low()
        # find blobs here, usually
        img.to_rgb565()  # <--- affects capture timing?!?
        print(img.compressed_for_ide())
        cap = 0
```
Here is the result when `to_rgb565()` is called:

Here is the result when `to_rgb565()` is not called (this capture is about 15 ms earlier; the blobs are moving from right to left):

I find this strange because `to_rgb565()` is called after the snapshot is completed. How can it cause the framebuffer content to be from a later time? Finding blobs produces blobs that match up correctly with the displayed framebuffer in both cases. Overall it behaves as if `to_rgb565()` causes a slight delay before the capture.
I don’t think the `print(img.compressed_for_ide())` is causing this; I get the same result without it (except that on each trigger the IDE shows the capture from the previous one). I also tried copying to another buffer at the time of snapshotting, but this gave the same outcome:
```python
img = image.Image(320, 240, sensor.GRAYSCALE)  # before main loop
...
sensor.snapshot().copy(copy_to_fb=img)
...
img = img.copy(copy_to_fb=True)
img.to_rgb565()
```
By the way, the reason there are two calls to `sensor.snapshot()` is that with only one, the low exposure time produces very inconsistent and grainy results, for example:



In my early experiments this was a very disappointing discovery, until just by chance I happened to trigger two captures in quick succession and noticed the result was nice and clean. It seems the first exposure blows the cobwebs off the ADCs in the sensor somehow. In any case, doing just a single snapshot instead of two does not change the weirdness with `to_rgb565()`.
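For clarity, the double-capture pattern in isolation looks like this (the first frame is simply discarded):

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_auto_exposure(False, 400)  # short fixed exposure

sensor.snapshot()        # throwaway capture; the first exposure comes out grainy
img = sensor.snapshot()  # clean frame actually used for blob-finding
```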
While I’m here: you may have noticed in my video that the first trigger after startup is always off by quite a lot; I’m just wondering if there’s any reason for that. The second and subsequent triggers are perfectly timed.
These are not show-stoppers by any means, just a few things I found weird.