Time Lag for Image Acquisition

I stumbled across the OpenMV project, and am interested in this camera.

I need to accurately synchronize the instant an image is acquired with a digital signal. I don’t need any processing done for the moment, just the 640x480 image delivered to the host machine over USB.
Assume I have a rail with an object traveling at 1000 mm/sec that breaks an optical beam whose sensor is connected to one of the DIO pins on the board. I’m assuming I’d write a little bit of Python code here to either run an interrupt handler or just busy-wait.
When I get the digital signal, I’ll tell the camera to grab an image.
My question is: how far will the target have traveled by the time the image is actually acquired?

I’m currently using a USB camera, and its frame buffering is problematic: the delay from command to actual acquisition is variable, and I need repeatable performance. Worse, the returned image is sometimes from before the gate was broken, because of the buffering of images.
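For reference, the interrupt-or-busy-wait idea above could be sketched like this. The pin and capture objects are passed in as parameters purely so the logic can be checked off-board; on an actual OpenMV board `gate_pin` would be a `pyb.Pin` and `snap` would be `sensor.snapshot`, and the active-low wiring is an assumption.

```python
# Sketch of the "busy-wait on the gate, then grab a frame" approach.
# gate_pin and snap are injected so this runs anywhere; on the board
# you'd pass a pyb.Pin and sensor.snapshot (assumed wiring: beam
# intact -> pin reads high, beam broken -> pin reads low).

def wait_for_gate_then_snapshot(gate_pin, snap):
    """Busy-wait until the optical gate reads low, then capture immediately."""
    while gate_pin.value():   # spin while the beam is intact
        pass
    return snap()             # capture requested here; sensor latency still applies

# Off-board check with stand-in objects:
class FakePin:
    def __init__(self, reads):
        self.reads = iter(reads)
    def value(self):
        return next(self.reads)

pin = FakePin([1, 1, 0])      # beam breaks on the third poll
result = wait_for_gate_then_snapshot(pin, lambda: "frame")
print(result)                 # -> frame
```

A tight busy-wait like this reacts faster than a Python-level interrupt handler, but as discussed in the replies, the capture itself still has to wait for the sensor’s frame timing.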

The worst case is when the cam tries to capture a frame but misses the VSYNC signal by a few ms; it then has to wait for the next VSYNC. For QVGA a readout takes a bit under 16 ms (the sensor outputs 60 FPS), so say 16 ms plus the time needed to handle the IRQ and start the frame capture. I would guess around ~20 ms total; you do the distance math :slight_smile:
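Spelling out that distance math with the numbers from the original question (1000 mm/s object speed, ~20 ms worst-case trigger-to-capture delay):

```python
speed_mm_per_s = 1000.0   # object speed from the original question
delay_s = 0.020           # ~20 ms worst-case trigger-to-capture delay
travel_mm = speed_mm_per_s * delay_s
print(travel_mm)          # -> 20.0 (mm of travel before the frame even starts)
```

So in the worst case the object has moved about 20 mm before exposure begins, which is why a slave-mode/triggered sensor matters for this application.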

Hmm, I was hoping that wasn’t your answer. Are there any hobby boards out there that you are aware of that will trigger a capture with less than 1 ms of delay?

It seems that most cameras run open loop, and it’s up to the acquisition chip to decide which images to commit to memory. I’d like a quicker way to slave a snapshot to an interrupt.

No, I think you should look for a sensor that supports slave mode; that way you can start the frame capture on an interrupt.

What exactly are you trying to do? I can advise better if I know more.

I believe the original poster was describing a situation where you’d want to use the OpenMV module simply to acquire image data at a high rate onto the PC, and then do all of the signal processing on the PC (instead of on the ARM). I realize this is probably not the intended purpose of the OpenMV hardware, but I could see it being beneficial in certain situations (custom CV applications where you make your own camera hardware interface using Python and then process the image in a PC environment). In terms of maximal frame rate of raw images, what is the bottleneck on fast image acquisition in this scenario? Is it the ARM processor, the board-to-PC communication interface, or perhaps something else?

The connection to the PC is limited to 12 Mbps full-speed USB. You pretty much have to JPEG-compress images to get them to the PC at any decent data rate. Anyway, the hello world example shows roughly how fast the cam can do this for 320x240, which is about 15 FPS (with a JPEG-compressed image stream). But… if you modified the firmware you could go faster. Right now we only grab images when requested to do so… so we drop half of them at the fastest rate. If you made the image capture interrupt-driven and used a smaller resolution with a double buffer, you could get 60 FPS at 160x120, or 320x240 grayscale.
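Some rough numbers behind that 15 FPS figure. The ~1 MB/s of usable full-speed USB payload and the ~10:1 JPEG compression ratio below are ballpark assumptions, not measured values:

```python
usable_bytes_per_s = 1_000_000         # assumed usable payload on 12 Mbps full-speed USB
raw_frame = 320 * 240 * 2              # one RGB565 QVGA frame, in bytes
jpeg_frame = raw_frame / 10            # assumed ~10:1 JPEG compression

print(usable_bytes_per_s / raw_frame)  # ~6.5 FPS uncompressed: far too slow
print(usable_bytes_per_s / jpeg_frame) # ~65 FPS worth of bus bandwidth once compressed
```

Under these assumptions the bus alone could carry more than 15 compressed FPS, so the observed rate also reflects JPEG compression time and the grab-on-request model described above.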

So if I am understanding you correctly, the main bottleneck comes from the USB interface? For example, if I were to modify the firmware to simply acquire and send a grayscale image as fast as possible, how much of the limitation comes from the ARM’s time spent acquiring the image from the CMOS sensor, versus sending the image to the PC? There are many USB cameras (though to be fair these are probably USB 3.0 or FireWire) that can acquire grayscale images at around 100 FPS at resolutions around 640x480 pixels. I am sure these cameras use more specialized processors (ASICs and/or FPGAs) as well, but I was wondering how much of the limitation comes from the communication bus versus the processing controller.

We can store images in RAM at the max FPS, but unless you JPEG-compress you can’t get them out in time. The fastest external interface on the system is 48 MHz 1-bit SPI; the camera bus is 24 MHz 8-bit.
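Comparing those interfaces side by side (raw signaling rates from the figures above, ignoring protocol overhead):

```python
camera_bus_bps = 24_000_000 * 8     # 24 MHz, 8-bit parallel camera bus
spi_bps        = 48_000_000 * 1     # 48 MHz, 1-bit SPI
usb_fs_bps     = 12_000_000         # full-speed USB

print(camera_bus_bps / usb_fs_bps)  # -> 16.0: sensor produces data 16x faster than USB FS
print(camera_bus_bps / spi_bps)     # -> 4.0: and 4x faster than the SPI port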

Anyway, JPEG-compressing the image takes too much time. On grayscale images it goes about 2-3x faster, but it still takes a while.

Now… USB high-speed can be done with this processor. You need an external PHY chip and you’d have to make your own board, but it is possible.

So, it really is a limit of the external interfaces. We tried sending images that weren’t JPEG-compressed before, but it took so long that the frame rate more or less tanked. You’d need a much faster interface for uncompressed images.
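To put “much faster interface” in numbers, take the 640x480 grayscale at 100 FPS example mentioned earlier in the thread and compare it against the two USB rates (signaling rates only; real throughput is lower):

```python
frame_bytes = 640 * 480              # one grayscale VGA frame
needed_bps = frame_bytes * 100 * 8   # 100 FPS, in bits per second

print(needed_bps / 1e6)              # -> 245.76 (Mbps required)
print(needed_bps / 12_000_000)       # -> 20.48: ~20x over full-speed USB
print(needed_bps / 480_000_000)      # -> 0.512: about half of high-speed USB signaling
```

So that workload is hopeless over full-speed USB but plausibly within reach of the high-speed USB option (with an external PHY) mentioned above.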