Streaming Video Over USB is Slower than Streaming to IDE

I want to stream video over USB to my computer so that I can process the images in real time with OpenCV.

If I connect the OpenMV H7 to the IDE and stream JPEG images with the frame buffer off, I get the following frame rates:
→ VGA: 47.6 FPS
→ HD: 28.6 FPS
→ WQXGA2: 7.5 FPS

However, if I use the code to stream JPEG images, I get the following frame rates:
→ VGA: 11.7 FPS
→ HD: 4.7 FPS
→ WQXGA2: 0.8 FPS
(Please note I got the above frame rates by just modifying the sample code and adding timing. I did not do any processing or displaying, so that can’t be the reason for the slower frame rate).

I notice a similar drop in frame rates when using the sample code. Why is this happening? The OpenMV streams to the IDE over USB, so why is it so much slower when I try to read the images outside of the IDE? How do I read the OpenMV’s images at the higher frame rates outside of the IDE? Again, I’d like to do this so that I can process the images in real time with OpenCV.

Also, the code is written to stream JPEG images. How do I stream RGB565 images? It’s clearly possible since I can stream RGB565 images to the IDE when the OpenMV is connected over USB.

With the frame buffer disabled the cam does not stream anything; in fact it doesn’t even compress the frame buffer for preview, it just skips all of that. That’s why the IDE may seem “faster” compared to the other scripts, which actually fetch images. You should enable the frame buffer to compare, and that FPS is the best you can hope to achieve. Please keep in mind this was not designed as a webcam, and most of the boards have USB Full Speed, which is slow. Streaming is there mainly to preview images while writing code; the assumption is that when you’re done you save the code, disconnect the cam, and run it standalone.
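To put rough numbers on the USB Full Speed point (these are ballpark figures, not measurements from this board): FS is 12 Mbit/s on the wire, and usable CDC throughput is typically around 1 MB/s, so uncompressed frames simply don’t fit:

```python
# Back-of-envelope for USB Full Speed: 12 Mbit/s on the wire,
# roughly ~1.2 MB/s of usable CDC throughput (assumed, not measured).
usable_bytes_per_s = 1.2e6
vga_rgb565 = 640 * 480 * 2              # bytes per uncompressed VGA RGB565 frame
print(usable_bytes_per_s / vga_rgb565)  # ~2 FPS uncompressed; JPEG is what makes it faster
```

That is why the JPEG-compressed IDE preview can hit 30 FPS at VGA while raw RGB565 cannot.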

Just don’t compress the image and write it as it is, and on the other side read the expected frame size (w * h * bpp). Keep in mind this will be much slower than JPEG.
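On the host side, turning a raw RGB565 frame into something OpenCV can use might look like this (a sketch: the function name is mine, and it assumes big-endian 16-bit pixels — swap the dtype to `'<u2'` if your byte order differs):

```python
import numpy as np

def rgb565_to_bgr888(buf, width, height):
    # Interpret the raw buffer as big-endian 16-bit RGB565 pixels (assumed byte order).
    pix = np.frombuffer(buf, dtype='>u2').reshape(height, width)
    # Expand the 5/6/5-bit channels to 8 bits each.
    r = ((pix >> 11) & 0x1F).astype(np.uint8) << 3
    g = ((pix >> 5) & 0x3F).astype(np.uint8) << 2
    b = (pix & 0x1F).astype(np.uint8) << 3
    # OpenCV expects BGR channel order.
    return np.dstack((b, g, r))
```

Read exactly `width * height * 2` bytes from the serial port, then pass them through this before handing the array to OpenCV.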

It’s not streaming RGB it’s streaming JPEG compressed images to the IDE.

These are the frame rates from the IDE with the frame buffer enabled and viewing the stream (format is RGB565):
→ VGA: 30 FPS
→ HD: 15 FPS
→ WQXGA2: 4.8 FPS

As you can see, this is much, much faster than the code. Streaming to the IDE and streaming to USB serial should have the same frame rates; both are transmitting images over USB. So why is it so much slower using usb_vcp?

The OpenMV can stream RGB565 video at WQXGA2 resolution at 4.8FPS to the IDE (frame buffer enabled) when it’s connected via USB. I just want the images to be streamed to my computer at 4.8 FPS directly (serial port or otherwise) instead of the IDE so that I can process the images with OpenCV.

Okay, then use the RPC JPG streaming examples:

Now that the frame buffer is enabled you can see the difference is closer, almost half the FPS. One thing to keep in mind: when using usb_vcp the camera is not continuously capturing images like it is when running with the IDE. It waits for a command, and only then starts capturing a frame, then compressing it, then transferring it, which means it misses the next frame edge and is likely to cut the FPS in half like that. Something you can try is changing your script to capture the frame before checking the command:

    while True:
        cmd = usb.recv(4, timeout=10)
        img = sensor.snapshot().compress()  # capture every frame, even without a command
        if (cmd == b'snap'):
            usb.send(ustruct.pack("<L", img.size()))
            usb.send(img)

Alternatively, it would be interesting to see if co-routines could fix this issue. Also, you should post your script; I’ll test it and see if I can make it faster.

I’ve tried the RPC example code before. Similar issue, much slower than directly streaming to the IDE.

Tried your suggestion and played around with the timeout parameter, still going at 5FPS. I’m not sure how to use co-routines on the OpenMV. If you can make it faster that would be amazing!

OpenMV code, saved as

import sensor, image, time, ustruct
from pyb import USB_VCP

usb = USB_VCP()
sensor.reset()                       # initialize the sensor
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)
sensor.skip_frames(time = 2000)

while True:
    cmd = usb.recv(4, timeout=10)
    if (cmd == b'snap'):
        img = sensor.snapshot().compress()
        usb.send(ustruct.pack("<L", img.size()))
        usb.send(img)

Code running from my computer:

import serial, struct, time
import cv2
import numpy as np

port = '/dev/tty.usbmodem3573385831391'
serial_port = serial.Serial(port, baudrate=115200, bytesize=serial.EIGHTBITS, parity=serial.PARITY_NONE,
         xonxoff=False, rtscts=False, stopbits=serial.STOPBITS_ONE, timeout=None, dsrdtr=True)

while True:
    start = time.time()
    serial_port.write(b'snap')  # request a frame
    # Read the frame size, then the JPEG data, from the serial buffer
    size = struct.unpack('<L', serial_port.read(4))[0]
    buf = serial_port.read(size)

    # Use numpy to construct an array from the bytes
    x = np.frombuffer(buf, dtype='uint8')

    # Decode the array into an image
    img = cv2.imdecode(x, cv2.IMREAD_UNCHANGED)
    end = time.time()
You need double buffering.

Okay, enough folks are asking about this now that it will get done next. Already working on the MDMA feature to support it.

That’s true double buffering will help, in the sense that there will always be a frame ready, coros may allow faster streaming (now) since you can capture a buffer and stream the other one at the same time.
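A related trick you can do on the host side today (a hypothetical sketch, not code from this thread): request frame N+1 before decoding frame N, so the camera can capture and compress while the host is busy with OpenCV.

```python
import struct

def stream(port, handle_jpeg, n_frames):
    # Pipelined request loop: `port` is any object with write()/read(),
    # e.g. a pyserial Serial; `handle_jpeg` gets each raw JPEG buffer.
    port.write(b'snap')                            # prime the pipeline
    for _ in range(n_frames):
        size = struct.unpack('<L', port.read(4))[0]
        buf = port.read(size)
        port.write(b'snap')                        # ask for the next frame now...
        handle_jpeg(buf)                           # ...then decode/process this one
```

This doesn’t fix the missed frame edge on the camera, but it removes the host’s decode time from the request-to-request gap.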

@Jhc999 I didn’t test your script yet, but I noticed 2 things: first, you’re counting the decode time in the FPS; second, you should set a much higher baudrate. Try 10_000_000, and avoid 12000000 and 921600 because these are used for debugging.
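On the first point, a fairer measurement keeps `cv2.imdecode()` outside the timed loop and averages over many frames (a hypothetical helper, not from the script above):

```python
import time

def measure_fps(read_frame, n=30):
    # Average FPS of the transfer alone; decode the buffered frames afterwards.
    t0 = time.monotonic()
    for _ in range(n):
        read_frame()          # transfer only, no decoding
    return n / (time.monotonic() - t0)
```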

@Jhc999 So I just realized you’re actually looking at the wrong number: the FPS you see printed is how fast the camera is capturing frames; the IDE FPS displayed on the lower right is how fast the IDE can lock the frame buffer and transfer it per second. That’s what should be the guideline for any USB transfer.

That said I’m still trying a few things.

So right now we can’t yield from snapshot so coros are useless. Anyway, until double buffering is implemented you can either:

  1. use timers like the following example (I get 20 FPS for VGA and 10 FPS for HD)
import sensor, image, time, ustruct
from pyb import USB_VCP, Timer

usb = USB_VCP()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)
sensor.skip_frames(time = 2000)

img = None
ready = False
size = 0

def timer_callback(timer):
    global img
    global ready
    global size
    # DO NOT allocate memory in this handler.
    if (ready == True):
        ready = False
        usb.send(size)
        usb.send(img)

tim = Timer(2, freq=100)
tim.callback(timer_callback)

while True:
    fb = sensor.snapshot()
    if (usb.recv(4, timeout=10) == b'snap'):
        while (ready):
            pass    # wait until the timer has sent the previous frame
        img = fb.compressed(quality=50)
        size = ustruct.pack("<L", img.size())
        ready = True
  2. take a look at the module and in the tools dir; these pretty much implement the same protocol used by the IDE.