Streaming Data to a PC at 25 FPS to a Custom Application


We have bought a couple of OpenMV boards.
We have some algorithms already written in C++ that also use OpenGL for certain things, and they work very well with a video file.

Now, we want to stream at least 25 FPS from the OpenMV at VGA quality to a PC and then feed that live stream to our custom application.

So, in this way we want to showcase the live-feed result instead of a video file.

Can you guys please help with this?
1- How can we get a stream of data from the OpenMV board and feed it to our custom application?
2- Can we stream 25 FPS at VGA from the OpenMV board? What FPS can we stream out at VGA? (No processing on the OpenMV board, just streaming.)

Hi, it’s not possible to stream 25 FPS VGA video off of the OpenMV Cam. We don’t have any I/O outputs that can run at the data rate you need. The best we can do is compress the image, but then you get about 10 FPS grayscale and 5 FPS RGB565. You may get a slightly higher FPS via SPI than USB if you have something that can take in SPI data. The main CPU gets video from the camera at 640x480 at 60 FPS on an 8-bit data bus that takes 2 clocks per pixel, which means the pixel clock is running at about 48 MHz. We don’t have any output option fast enough to get the image off the board once we pull it into RAM.
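As a back-of-the-envelope check of why raw streaming is out, the numbers work out roughly like this (a sketch; the 12 Mbit/s figure is an assumption for a USB full-speed link before protocol overhead):

```python
# Back-of-envelope bandwidth check for streaming raw VGA video to a PC.
# Frame geometry comes from the discussion above; the USB figure is an
# assumed full-speed signaling rate, before protocol overhead.

WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 2          # RGB565
TARGET_FPS = 25

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL     # bytes per raw frame
needed_bps = frame_bytes * TARGET_FPS * 8          # bits/s required

USB_FS_BPS = 12_000_000      # assumed USB full-speed raw rate

print(f"Per frame: {frame_bytes:,} bytes")
print(f"Needed:    {needed_bps / 1e6:.1f} Mbit/s")
print(f"USB FS:    {USB_FS_BPS / 1e6:.1f} Mbit/s -> "
      f"{'not enough' if needed_bps > USB_FS_BPS else 'enough'}")
```

Raw RGB565 VGA at 25 FPS needs roughly 123 Mbit/s, an order of magnitude above what the board's outputs can carry, which is why compression is the only option.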

The OpenMV Cam is not a webcam. I’m sorry that it cannot stream data to the PC fast enough; the ability to stream image data is just meant for debugging.

Hi kwagyeman,

Thanks for the update. What VGA rate can we expect if we record to the SD card and then later stream it out to the PC? Is the SD card on SPI?
Since we already bought it, semi-live would work as well.

For instance, we do some live action, record it, and then stream it to the PC later as if it were live. That will be slow, but we won’t have to swap the SD card back and forth for the demonstration…

Also, we know there are high-resolution cameras that could be used on the PC side. But we have another requirement: we have sensor data that we want to sync with the frames, so a plain store-bought webcam is not a solution for our problem.


Ah, yeah, good idea. SD card write speed is actually blazing fast: it’s 4-bit data at 25 MHz. You could get above 20 FPS with VGA. I’ll give that a try when I get home. Note that you’ll need to use a fast SD card. Also, note that I’m talking about JPEG-compressed VGA images; raw images are too large.

As for streaming the data to the PC, that is fast too. Once images are saved, you can open the MJPEG file on the PC, since the camera’s SD card appears as a USB flash drive. You just have to reset the camera to get the flash drive to appear, which you can do with OpenMV IDE… or via a Python script.
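Once the file is on the PC, splitting an MJPEG stream into individual frames is straightforward. A hedged sketch of that (this assumes a bare concatenated-JPEG stream; if the recorded file uses an AVI container, a video library is the easier route):

```python
# Split a bare MJPEG stream (concatenated JPEGs) into individual frames.
# Assumption: the file is a plain sequence of JPEGs (FFD8 ... FFD9),
# not wrapped in an AVI container. JPEG byte stuffing guarantees the
# EOI marker cannot appear inside entropy-coded data.

def split_mjpeg(data):
    frames = []
    start = 0
    while True:
        soi = data.find(b"\xff\xd8", start)       # Start Of Image marker
        if soi < 0:
            break
        eoi = data.find(b"\xff\xd9", soi + 2)     # End Of Image marker
        if eoi < 0:
            break                                 # incomplete final frame
        frames.append(data[soi:eoi + 2])
        start = eoi + 2
    return frames

# Two fake "JPEGs" back to back, just to exercise the splitter:
stream = b"\xff\xd8AAAA\xff\xd9" + b"\xff\xd8BB\xff\xd9"
print([len(f) for f in split_mjpeg(stream)])   # -> [8, 6]
```

Each returned `bytes` object is a complete JPEG that any image decoder can open directly.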

Hi, the M7 can do:

Grayscale MJPEG VGA record at 11 FPS
RGB565 MJPEG VGA record at 5 FPS (note: the camera is actually processing a Bayer image instead of RGB565 at 640x480)

RAW Grayscale VGA record at 15 FPS - e.g. ~4,608,000 B/s
RAW RGB565 VGA record at 15 FPS (note: the camera is actually processing a Bayer image instead of RGB565 at 640x480) - e.g. ~4,608,000 B/s

SD card max bandwidth is 12.5 MB/s, so I believe I’m either being limited by the SD card or waiting for the next image from the camera.
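Working backwards from those figures (my arithmetic, not a measurement):

```python
# Sanity-check the numbers above: derive the per-frame size from the
# reported B/s figure and see what frame rate 12.5 MB/s of SD bandwidth
# would allow in theory.

SD_BANDWIDTH = 12_500_000            # bytes/s (4-bit bus at 25 MHz)

raw_gray_bps = 4_608_000             # reported: RAW grayscale VGA at 15 FPS
frame_bytes = raw_gray_bps // 15     # per-frame size in bytes

print(frame_bytes == 640 * 480)                       # 1 byte/pixel VGA
print(f"SD-limited max: {SD_BANDWIDTH / frame_bytes:.1f} FPS")
```

The SD-limited ceiling comes out near 40 FPS for grayscale VGA, well above the measured 15 FPS, which points at the second explanation: waiting for the next frame from the camera rather than raw card bandwidth.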

At QVGA (320x240)

Grayscale MJPEG QVGA record at ~32 FPS
RGB565 MJPEG QVGA record at ~22 FPS

RAW Grayscale QVGA record at 30 FPS - e.g. ~2,304,000 B/s
RAW RGB565 QVGA record at 26 FPS - e.g. ~3,993,600 B/s

Unless you need the extra resolution, your goal is met at 320x240 using raw image capture. Parsing the raw data is rather easy: the format is very simple and can be loaded directly into memory by any application that can read a file.
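For example, a headerless raw capture is just fixed-size frames back to back. A minimal PC-side reader might look like this (a sketch assuming grayscale QVGA with one byte per pixel and no per-frame header; adjust the constants to your actual format):

```python
# Read fixed-size raw frames from a headerless capture.
# Assumptions: grayscale QVGA (320x240), 1 byte/pixel, frames simply
# concatenated with no per-frame header.

WIDTH, HEIGHT = 320, 240
FRAME_SIZE = WIDTH * HEIGHT          # bytes per frame

def read_raw_frames(data):
    """Yield each complete frame as a bytes object."""
    for off in range(0, len(data) - FRAME_SIZE + 1, FRAME_SIZE):
        yield data[off:off + FRAME_SIZE]

# Synthetic capture: three frames filled with different pixel values.
capture = (bytes([0]) * FRAME_SIZE +
           bytes([1]) * FRAME_SIZE +
           bytes([2]) * FRAME_SIZE)
frames = list(read_raw_frames(capture))
print(len(frames), frames[1][0])     # -> 3 1
```

For RGB565 the only changes would be `FRAME_SIZE = WIDTH * HEIGHT * 2` and a 16-bit-per-pixel interpretation on the PC side.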

Note that the camera can also stream compressed JPEG images to the PC at 25 FPS at 320x240; OpenMV IDE demonstrates this.

Interesting… I guess if the bottleneck is the SD card (which I believe it is), then one could gain some extra speed by packaging (keeping in memory) 2, 3, or 4 frames and then writing them in one go.
I encountered that problem when I was writing (non-image) data to the SD card at 1 kHz and was only getting 700 Hz. After some experiments with packaging multiple samples together and writing them at once, I was able to easily achieve 1 kHz.
If possible, can you try that please?
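The batching idea from that logging experiment, sketched in plain PC-side Python (the in-memory stream and batch size here are illustrative stand-ins, not camera code):

```python
# Sketch of write batching: accumulate several samples in RAM and flush
# them in one write, trading a little latency for fewer, larger I/O
# operations (the trick that took the logger from 700 Hz to 1 kHz).

import io

class BatchedWriter:
    def __init__(self, stream, batch_size):
        self.stream = stream
        self.batch_size = batch_size     # flush after this many samples
        self.buffer = []

    def write(self, sample):
        self.buffer.append(sample)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.stream.write(b"".join(self.buffer))  # one big write
            self.buffer.clear()

out = io.BytesIO()                       # stand-in for an SD card file
w = BatchedWriter(out, batch_size=4)
for i in range(10):
    w.write(bytes([i]))
w.flush()                                # flush the partial final batch
print(len(out.getvalue()))               # -> 10 (written in 3 stream writes)
```

The same pattern applies to any medium where per-write overhead dominates; as noted in the reply below, it does not help here because each frame is already written as one large multi-block operation.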

The frames are about 300 KB each, and they all get written in one large multi-block operation, so packaging frames is neither possible (we don’t have the RAM) nor the problem. I can try a faster SD card, however; the one I am using isn’t a high-class card.


I just tested something, and I have some questions about what I’m seeing.
I am running this code and getting ~32 FPS at VGA with RGB565.
The real question is this: the code runs on the MCU, sends back the snapshot frame, and then prints the FPS. In OpenMV IDE I have disabled the live image view.
The printout in the terminal suggests that OpenMV IDE is in fact receiving the image frame data; from the header, it prints that the image is JPEG, along with the image’s width and height.
So it sounds to me like OpenMV IDE has some bug, because when I enable the image preview in OpenMV IDE, the frame rate drops to 6 FPS.

Can you please describe what is going on?

Here is the code snapshot.

That’s just text; the image isn’t being printed. The OpenMV Cam is just printing the image’s attributes.

If you want an actual image to be sent, call compress_for_ide() on the image and print that. The FPS will fall again.

Hi, I understand that, and I was suspecting as much.
But can you clarify this: when I don’t make any change to the code and just press the Disable button to enable the preview, the preview just appears.

Does OpenMV IDE issue the compress_for_ide() call behind the scenes itself?

Um, kinda. The camera automatically compresses the frame buffer at the start of snapshot when the IDE has enabled the preview. compress_for_ide() lets you trigger this manually so you can generate compressed data that is safe to send on a serial line, for when you are trying to send the IDE image data over TTL serial. In particular, it’s made for when you are using the Open Terminal feature to open a serial terminal on any UART port connected to your PC.

You can also just call the compressed() method on an image to compress the image into a normal jpeg file.

compress_for_ide() turns a normally compressed JPEG file into a 6-bits-per-byte data stream so it can be sent over a serial connection that might drop bytes without corrupting normal debug text. Basically, it moves the JPEG data into another “band” on the serial connection, so it doesn’t destroy normal debug text if any part of the data transfer drops bytes.

The normal debug connection to the PC is over USB, where this problem doesn’t exist, so the camera just sends raw JPEG data since no bytes will be dropped over USB.

Anyway, sorry if that wasn’t explained well. Basically, we just have multiple ways to send the IDE image data depending on the particular connection situation.

I’ll try the faster SD card I have when I get a chance.

Hi, the faster SD card I tried was actually the first one, i.e. the one with the results above (another one I had lying around was a lot slower). I don’t think the current system can do much better than what I posted above for VGA.

… as an aside, it’s not really designed to excel at VGA performance anyway. That said, I believe the low frame rate isn’t due to the M7 or the SD card but to our frame-capturing architecture. The OpenMV Cam doesn’t have a FIFO buffer to store received images. Instead, when you call snapshot we wait for the next frame to come out of the camera’s image stream and then work on that frame. This means that if we take a long time to save the image to the SD card, we can easily end up at 1/2 or 1/4 of the max frame rate. Since grayscale RAW record seems to be stuck at 15 FPS, I’m guessing that is what’s happening.
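That waiting behavior quantizes the effective rate to integer divisions of the sensor rate. A small model of it (assuming a fixed 60 FPS sensor stream and the single-buffer, no-FIFO capture described above):

```python
# Model of frame-rate quantization without a capture FIFO: if the work
# per frame (e.g. the SD write) overruns the frame interval, snapshot
# must wait for the NEXT frame, so the effective rate becomes
# cam_fps / ceil(busy_time / frame_interval). Assumes a 60 FPS sensor.

import math

def effective_fps(cam_fps, busy_s):
    interval = 1.0 / cam_fps
    frames_consumed = max(1, math.ceil(busy_s / interval))
    return cam_fps / frames_consumed

# Per-frame busy times (seconds) vs. resulting rate at 60 FPS:
for busy in (0.010, 0.020, 0.045, 0.060):
    print(f"{busy * 1000:4.0f} ms busy -> {effective_fps(60, busy):.1f} FPS")
```

A per-frame save of just over three frame intervals lands exactly at 60/4 = 15 FPS, which matches the grayscale RAW result and is consistent with the capture architecture, not the SD card, being the limit.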

Anyway, so, what do you want to do? Can you accept 320x240?

Where is the firmware code, and where do we capture the frame?
Can you please point me there?
From some experiments, it looks like the issue is either the way the frame is captured or read, or else the Virtual COM Port (the bandwidth or speed at which data is transmitted).

I would like to perform some experiments…
Test 1 - Just capture/read frames from the camera. What FPS can it read at? No writing to the SD card, no writing to USB, and no capturing from the IDE.
Test 2 - Capture one frame, keep it, and don’t overwrite it. Then, in a loop, keep transmitting that one frame over USB over and over.

Test 1 will measure the frame-capture speed.
Test 2 will measure the throughput rate from the MCU to the PC.
If both perform well, then there is some other issue in the middle.
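On the PC side, Test 2 only needs to count bytes per second coming off the port. A sketch of that measurement loop (using an in-memory stream as a stand-in for the camera's virtual COM port, which you would open with a serial library such as pyserial instead):

```python
# Measure read throughput from a byte stream. In a real Test 2 you would
# open the camera's virtual COM port (e.g. with pyserial) in place of
# the io.BytesIO stand-in used here.

import io
import time

def measure_throughput(stream, chunk_size=4096, duration_s=0.1):
    """Return (bytes_read, bytes_per_second) over roughly duration_s."""
    total = 0
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        chunk = stream.read(chunk_size)
        if not chunk:              # stand-in stream is finite; a port blocks
            break
        total += len(chunk)
    elapsed = time.monotonic() - start
    return total, total / elapsed if elapsed > 0 else 0.0

fake_port = io.BytesIO(b"\x00" * 1_000_000)   # 1 MB of dummy frame data
nbytes, rate = measure_throughput(fake_port)
print(f"read {nbytes} bytes at {rate / 1e6:.1f} MB/s")
```

Comparing this measured MB/s against the frame-capture FPS from Test 1 would show directly whether the sensor read or the USB link is the limiting stage.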