Hi OpenMV Team,
I am working on a high-speed spot tracking system using the OpenMV N6 with the PAG7936 sensor. I need to synchronize external light sources (via GPIO) with the camera’s exposure window to ensure each frame only captures specific light spots without cross-talk.
Current performance:
- Continuous Mode: ~470 FPS (QVGA)
- Triggered Mode (csi.IOCTL_SET_TRIGGERED_MODE): drops to ~180 FPS (raw capture)
- Actual system (with find_blobs & UART): ~110 FPS
- Target: 240+ FPS
Proposed Schemes:
Scheme A: Triggered Mode Optimization
The snapshot() call in triggered mode seems to carry significant software overhead. Is there a low-level ioctl or driver tweak to reduce the trigger-to-capture latency enough to reach 240+ FPS?
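For context, here is our latency-budget math (our own numbers and assumptions, not driver measurements): at 240 FPS the frame period is ~4167 µs, so with a 2000 µs exposure everything else (trigger latency, readout, software) must fit in roughly 2.2 ms.

```python
# Back-of-envelope latency budget for triggered mode.
# All numbers below are our assumptions, not measured driver figures.

def software_budget_us(target_fps, exposure_us):
    """Time left per frame for trigger latency, readout, and software."""
    frame_period_us = 1_000_000 // target_fps
    return frame_period_us - exposure_us

# At 240 FPS with a 2000 us exposure:
print(software_budget_us(240, 2000))  # -> 2166 us left per frame

# Our measured 180 FPS implies ~5555 us per frame, i.e. ~3555 us of
# non-exposure overhead -- roughly 1.4 ms too slow for the 240 FPS target.
print(software_budget_us(180, 2000))  # -> 3555
```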
Scheme B: External Sync with Frame Indexing
If we run in Continuous Mode (400+ FPS) and use an external MCU to pulse the lights via FSYNC (P10):
- Is there a way for the Python script to read a hardware frame count or DMA tag, so that the snapshot() buffer can be aligned with the correct external light phase?
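To make the indexing idea concrete, here is the bookkeeping we have in mind. The hardware frame counter itself is the part we are asking about (`hw_count` below stands in for whatever counter or DMA tag the driver could expose, it is hypothetical); the phase/drop logic is plain Python.

```python
# Sketch of the Scheme B frame-indexing logic. The hardware frame
# counter is hypothetical -- it stands in for whatever counter or DMA
# tag the driver could expose. Only the bookkeeping is shown here.

NUM_LIGHT_PHASES = 2  # external MCU alternates two light banks

def classify_frame(hw_count, last_count):
    """Map a hardware frame counter to a light phase and flag drops.

    Returns (phase, dropped), where `dropped` is the number of frames
    lost since the previous snapshot (0 means perfectly aligned).
    """
    dropped = hw_count - last_count - 1
    phase = hw_count % NUM_LIGHT_PHASES
    return phase, dropped

# Example: counter jumps from 100 to 103 -> 2 frames were dropped,
# and frame 103 belongs to light phase 1 (odd).
print(classify_frame(103, 100))  # -> (1, 2)
print(classify_frame(101, 100))  # -> (1, 0), no drop
```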
Scheme C: Hardware Windowing (ROI)
Does the N6 driver support hardware-level cropping/windowing (e.g., down to QQVGA 160x120) to increase readout speed?
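Our back-of-envelope expectation for windowing, assuming readout time scales roughly with the number of rows read (which may well not hold for the PAG7936, hence the question):

```python
# Rough estimate of the FPS gain from hardware row cropping.
# Assumes readout time scales linearly with rows read and that any
# fixed per-frame overhead stays constant -- both are assumptions
# on our part, not PAG7936 datasheet facts.

def estimated_fps(base_fps, base_rows, cropped_rows, fixed_overhead_us=0):
    frame_us = 1_000_000 / base_fps
    readout_us = (frame_us - fixed_overhead_us) * cropped_rows / base_rows
    return 1_000_000 / (readout_us + fixed_overhead_us)

# QVGA (240 rows) at 470 FPS cropped to QQVGA (120 rows),
# optimistically ignoring fixed overhead:
print(round(estimated_fps(470, 240, 120)))  # -> 940
```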
Scheme D: Asynchronous Processing & Task Pipelining
- Background DMA: Does the CSI driver support double buffering / background DMA? We want to expose the next frame while the CPU is processing the current one.
- Task Pipelining (UART & CPU): Currently, find_blobs and the UART transmission run sequentially. Is there a way to offload the UART transmission to DMA or a separate thread (without blocking the CPU), so that the next find_blobs can start immediately?
- N6 NPU Offloading: Can the N6's internal NPU be leveraged to offload basic image tasks like blob detection? This would significantly free up the CPU for protocol handling and system logic.
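To show the decoupling we are after in the pipelining point, here is a minimal ring buffer between the blob-detection loop and the transmit side. The buffer logic is ordinary Python; whether the consumer can actually be a DMA-complete callback or a second thread on the N6 is exactly our question, so treat the surrounding context as an assumption.

```python
# Minimal single-producer queue for blob packets awaiting UART transmit.
# Intended structure: find_blobs() pushes, a DMA-complete callback or a
# second thread pops, so the main loop never blocks on uart.write().
# This sketches the desired decoupling only -- it is not a DMA driver.

class TxQueue:
    def __init__(self, depth=8):
        self.slots = [None] * depth
        self.head = 0   # next slot to write (producer)
        self.tail = 0   # next slot to transmit (consumer)
        self.depth = depth

    def push(self, packet):
        """Queue a packet; returns False (drop frame) when full."""
        if (self.head - self.tail) >= self.depth:
            return False
        self.slots[self.head % self.depth] = packet
        self.head += 1
        return True

    def pop(self):
        """Called from the transmit side; returns None when empty."""
        if self.tail == self.head:
            return None
        packet = self.slots[self.tail % self.depth]
        self.tail += 1
        return packet

q = TxQueue(depth=2)
print(q.push(b"blob0"), q.push(b"blob1"), q.push(b"blob2"))  # -> True True False
print(q.pop())  # -> b'blob0'
```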
We prioritize speed and deterministic frame-to-light alignment. Which of these approaches would you recommend for achieving 240+ FPS?
Thanks!
Current test script for reference (the loop body had lost its indentation; restored and commented):

import csi, time
from machine import Pin

csi0 = csi.CSI()
csi0.reset()
csi0.pixformat(csi.GRAYSCALE)
csi0.framesize(csi.QVGA)
csi0.auto_exposure(False, exposure_us=2000)
csi0.framerate(470)
csi0.snapshot(time=2000)  # let sensor settings settle
csi0.ioctl(csi.IOCTL_SET_TRIGGERED_MODE, True)

pins = [Pin("P0", Pin.OUT_PP), Pin("P1", Pin.OUT_PP)]
frame_count = 0
clock = time.clock()

while True:
    clock.tick()
    target_idx = frame_count % 2
    pins[target_idx].high()  # turn on LED for this phase
    img = csi0.snapshot()    # trigger + capture; exposure happens here
    pins[target_idx].low()   # turn off LED
    # Processing & communication (takes ~5 ms):
    # blobs = img.find_blobs(...)
    # uart.write(...)
    frame_count += 1
    if frame_count % 100 == 0:
        print(clock.fps())