I want to stream video over USB to my computer so that I can process the images in real time with OpenCV.
If I connect the OpenMV H7 to the IDE and stream JPEG images with the frame buffer off, I get the following frame rates:
→ VGA: 47.6 FPS
→ HD: 28.6 FPS
→ WQXGA2: 7.5 FPS
However, if I use the usb_vcp.py code to stream JPEG images, I get the following frame rates:
→ VGA: 11.7 FPS
→ HD: 4.7 FPS
→ WQXGA2: 0.8 FPS
(Please note I got the above frame rates by just modifying the sample code and adding timing. I did not do any processing or displaying, so that can’t be the reason for the slower frame rate).
I notice a similar drop in frame rates when using the rpc.py sample code. Why is this happening? The OpenMV streams to the IDE over USB, so why is it so much slower when I try to read the images outside of the IDE? How do I read the OpenMV's images at the higher frame rates outside of the IDE? Again, I'd like to do this so that I can process the images in real time with OpenCV.
Also, the usb_vcp.py code is written to stream JPEG images. How do I stream RGB565 images? It’s clearly possible since I can stream RGB565 images to the IDE when the OpenMV is connected over USB.
With the frame buffer disabled the cam does not stream anything; in fact it doesn't even compress the frame buffer for preview, it just skips all of that. That's why the IDE may seem "faster" compared to the other scripts that actually fetch images. You should enable the FB to compare, and that FPS is the best you can hope to achieve… Please keep in mind this was not designed as a webcam, and most of the boards have USB FS (full speed), which is slow. Streaming is there mainly to preview images while writing code; the assumption is that when you're done you save the code, disconnect the cam, and run it standalone.
Just don’t compress the image and write it as it is, and on the other side read the expected frame size (w * h * bpp). Keep in mind this will be much slower than JPEG.
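Roughly, the device side would look like this (a minimal, untested sketch: the 'snap' command and timeout are carried over from the usb_vcp.py example, and VGA is just an assumed resolution):

import sensor
from pyb import USB_VCP

usb = USB_VCP()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)
sensor.skip_frames(time=2000)

while True:
    cmd = usb.recv(4, timeout=10)
    if cmd == b'snap':
        img = sensor.snapshot()  # no .compress(): raw RGB565
        usb.send(img)            # sends exactly w * h * 2 bytes

And on the host, read exactly w * h * 2 bytes and unpack them with numpy/OpenCV. Again a sketch: the port name is a placeholder, and the byte order within each 16-bit pixel may need swapping depending on firmware:

import numpy as np, cv2, serial

W, H = 640, 480  # must match sensor.VGA on the camera
port = serial.Serial('/dev/ttyACM0', timeout=None)
port.write(b'snap')
buf = port.read(W * H * 2)  # one raw RGB565 frame
arr = np.frombuffer(buf, dtype=np.uint8).reshape(H, W, 2)
# OpenCV's 565 conversions expect an 8-bit 2-channel image; if the colors
# come out wrong, swap the two bytes of each pixel first:
#   arr = np.ascontiguousarray(arr[:, :, ::-1])
bgr = cv2.cvtColor(arr, cv2.COLOR_BGR5652BGR)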
It's not streaming RGB; it's streaming JPEG-compressed images to the IDE.
These are the frame rates from the IDE with the frame buffer enabled and viewing the stream (format is RGB565):
→ VGA: 30 FPS
→ HD: 15 FPS
→ WQXGA2: 4.8 FPS
As you can see, this is much, much faster than the usb_vcp.py code. Streaming to the IDE and streaming over USB serial should have the same frame rates; both are transmitting images over USB. So why is it so much slower using usb_vcp?
The OpenMV can stream RGB565 video at WQXGA2 resolution at 4.8 FPS to the IDE (frame buffer enabled) when it's connected via USB. I just want the images streamed to my computer at 4.8 FPS directly (serial port or otherwise) instead of to the IDE, so that I can process the images with OpenCV.
Now that the FB is enabled you can see the difference is closer, almost half the FPS… One thing to keep in mind when using usb_vcp.py: it's not continuously capturing images like the IDE running helloworld.py, for example. usb_vcp.py waits for a command, and only then starts capturing a frame, then compressing it, then transferring it, which means it's missing the next frame edge and is likely to cut the FPS in half like that… Something you can try is changing your script to capture continuously and only send when a command arrives, as in the sketch below.
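A hedged sketch of that idea (untested; same 'snap' protocol and HD RGB565 settings as the script later in this thread):

import sensor, ustruct
from pyb import USB_VCP

usb = USB_VCP()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.HD)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()       # capture every iteration, like helloworld.py
    cmd = usb.recv(4, timeout=10) # returns b'' if nothing arrived in time
    if cmd == b'snap':
        jpg = img.compress()      # compress the most recent frame on demand
        usb.send(ustruct.pack("<L", jpg.size()))
        usb.send(jpg)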
Alternatively, it would be interesting to see if co-routines could fix this issue. Note: post your script and I'll test it to see if I can make it faster.
Tried your suggestion and played around with the timeout parameter; still going at 5 FPS. I'm not sure how to use co-routines on the OpenMV. If you can make it faster, that would be amazing!
OpenMV code, saved as main.py:
import sensor, image, time, ustruct
from pyb import USB_VCP

usb = USB_VCP()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.HD)
sensor.skip_frames(time=2000)

while True:
    # Wait for a 4-byte command from the host
    cmd = usb.recv(4, timeout=10)
    if cmd == b'snap':
        # Capture a frame, JPEG-compress it, then send size + data
        img = sensor.snapshot().compress()
        usb.send(ustruct.pack("<L", img.size()))
        usb.send(img)
Code running from my computer:
import sys, serial, struct, time, os, argparse
import cv2
import numpy as np

port = '/dev/tty.usbmodem3573385831391'
serial_port = serial.Serial(port, baudrate=115200, bytesize=serial.EIGHTBITS, parity=serial.PARITY_NONE,
                            xonxoff=False, rtscts=False, stopbits=serial.STOPBITS_ONE, timeout=None, dsrdtr=True)

while True:
    start = time.time()
    # Request a frame from the camera
    serial_port.write("snap".encode())
    serial_port.flush()
    # Read the 4-byte size header, then the JPEG data
    size = struct.unpack('<L', serial_port.read(4))[0]
    buf = serial_port.read(size)
    # Use numpy to construct an array from the bytes (np.fromstring is deprecated)
    x = np.frombuffer(buf, dtype='uint8')
    # Decode the array into an image
    img = cv2.imdecode(x, cv2.IMREAD_UNCHANGED)
    end = time.time()
    print(1/(end-start))
That's true, double buffering will help in the sense that there will always be a frame ready; coros may allow faster streaming (now), since you can capture into one buffer and stream the other one at the same time.
@Jhc999 I didn't test your script yet, but I noticed 2 things: first, you're counting the decode time in the FPS; second, you should set a much higher baudrate. Try 10_000_000, and avoid 12M and 921600 because these are used for debugging.
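On the host side, the two fixes would look roughly like this (a sketch, not a tested change; the baud value is the one suggested above and the port is the one from the earlier script):

import struct, time, serial

port = '/dev/tty.usbmodem3573385831391'
serial_port = serial.Serial(port, baudrate=10000000, timeout=None)  # not 12M or 921600

while True:
    start = time.time()
    serial_port.write(b'snap')
    size = struct.unpack('<L', serial_port.read(4))[0]
    buf = serial_port.read(size)
    print(1 / (time.time() - start))  # transfer FPS only, decode excluded
    # Decode outside the timed section:
    # img = cv2.imdecode(np.frombuffer(buf, np.uint8), cv2.IMREAD_UNCHANGED)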
@Jhc999 So I just realized you're actually looking at the wrong number. The FPS you see printed is how fast the camera is capturing frames; the IDE FPS displayed on the lower right is how fast the IDE can lock the FB and transfer it per second. That's what should be the guideline for any USB transfer.