RAW Image Capture - Higher Frame Rate

I am trying to capture images in .DATA format with the OpenMV Cam H7. The goal is to capture images at about 40+ frames per second at a pixel resolution of 500 x 500. However, when I run my code in the OpenMV IDE, I am only able to capture at a rate of 30 frames per second at QVGA resolution with a 200 x 200 window (according to my clock).

This is a copy of my code for reference:


import sensor
import time

# Initialize the camera
sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 1000) # time is measured in milliseconds (1000 ms = 1 second)


sensor.set_windowing(200, 200) # crop the capture to a specific W x H window of the image
clock = time.clock() # Create a clock to track FPS

# Set the file name and format for captured images
file_format = "DATA"
file_name = "captured_image"

# Set the number of images to capture
num_images = 10

# Capture and save the images
for i in range(num_images):
    clock.tick()    # Update the FPS Clock
    img = sensor.snapshot()
    img.save("%s_%d.%s" % (file_name, i, file_format))
    print("Image %d captured" % i)
    print(clock.fps())
    time.sleep(0.001) # measured in seconds
    



print("Capture completed")

Can you clarify how to increase the frame rate for RAW image capture without settling for such a low resolution?

Could you also clarify why changing the file extension to .tif or .data still saves the file as a BMP file?

Thank you!

Hi, you should read the documentation: the save format follows the filename, so if you use a filename like “data.bmp” it is saved in BMP format.
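For example, a minimal sketch of my own (not from your post; it assumes .bmp and .jpg are among the extensions img.save() understands):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 1000)

img = sensor.snapshot()
img.save("frame_0.bmp")  # ".bmp" extension -> written as a BMP file
img.save("frame_0.jpg")  # ".jpg" extension -> written as a JPEG file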

Second, you should use the ImageIO class to write the images, as creating a separate file per image is very slow. Instead, you want a format like ImageIO or MJPEG that saves the frames into a single stream, which avoids unnecessary disk activity.

# Image Writer Example
#
# USE THIS EXAMPLE WITH A uSD CARD! Reset the camera after recording to see the file.
#
# This example shows how to use the Image Writer object to record snapshots of what your
# OpenMV Cam sees for later analysis using the Image Reader object. Images written to disk
# by the Image Writer object are stored in a simple file format readable by your OpenMV Cam.

import sensor, image, pyb, time

record_time = 10000 # 10 seconds in milliseconds

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

stream = image.ImageIO("/stream.bin", "w")

# Red LED on means we are capturing frames.
pyb.LED(1).on()

start = pyb.millis()
while pyb.elapsed_millis(start) < record_time:
    clock.tick()
    img = sensor.snapshot()
    # Modify the image if you feel like here...
    stream.write(img)
    print(clock.fps())

stream.close()

# Blue LED on means we are done.
pyb.LED(1).off()
pyb.LED(3).on()

OpenMV IDE can then turn the stream.bin file into any format for you using the Tools->Video Tools->Convert Video File feature. If you want the individual images out of the file, just set the target output format to “%05d.bmp”. E.g.: How to Extract Images from a Video Using FFmpeg - Bannerbear
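If you would rather review the recording on the camera itself, something along the lines of the stock ImageIO reader example should work (this is my sketch of it; the read() keyword arguments reflect my understanding of that example and may need adjusting for your firmware):

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

stream = image.ImageIO("/stream.bin", "r")

while True:
    clock.tick()
    # Read the next recorded frame; loop back to the start when the file ends.
    img = stream.read(copy_to_fb=True, loop=True, pause=True)
    print(clock.fps())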


Please note, unless you have the H7 Plus, there is not enough RAM at larger video resolutions to enable triple buffering, which gives you the highest frame rate. You will notice a large performance difference between the H7 and H7 Plus for video recording.


Is there a way to increase the frame rate while maintaining the QVGA frame size and without windowing?

What’s the frame rate of the solution I suggested? If it’s 20 FPS, it’s because you’re dropping half the frames due to the lack of double/triple buffering (assuming the camera does 40 FPS). The only way to get that is with more RAM, or less resolution, so that 2-3 images fit in RAM.

It goes to about 10 frames per second at QVGA resolution with the grayscale pixel format. If I add a 200 x 200 window, it increases to about 34 frames per second.

This is my code:

import sensor, image, pyb, time

record_time = 20000 # 20 seconds in milliseconds

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_windowing(200, 200)
clock = time.clock()

stream = image.ImageIO("/stream.bin", "w")

# Red LED on means we are capturing frames.
pyb.LED(1).on()

start = pyb.millis()
while pyb.elapsed_millis(start) < record_time:
    clock.tick()
    img = sensor.snapshot()
    # Modify the image if you feel like here...
    stream.write(img)
    print(clock.fps())

stream.close()

# Blue LED on means we are done.
pyb.LED(1).off()
pyb.LED(3).on()

Mmm, okay, you can probably increase the resolution more.

First, find what the camera’s max FPS is for that format without recording. That will give you an idea of what the upper limit is. Note that the FPS will drop once you have to go from double/triple buffering into single-buffer mode.
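For instance, a bare capture loop (my sketch, mirroring your sensor settings) gives you that ceiling:

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

while True:
    clock.tick()
    sensor.snapshot()   # capture only -- no stream.write(), no SD card I/O
    print(clock.fps())  # this is the upper bound for your recording loop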

Then, the SD card itself will force erase times on you. This can hurt the FPS too; to get past this you want to turn the image fifo on.

Do: sensor.set_framebuffers(n)

Where n is a value greater than or equal to 4. Once you do this, the camera system will store images into a fifo in RAM. As long as you are writing data out of the fifo quickly enough, you will get the max frame rate. You need this to overcome the random periods where the SD card blocks you while it’s erasing. sensor.snapshot() is the call that removes an image from the fifo.

200 x 200 is 40,000 bytes per image, so you should be able to fit a lot of these in RAM. That said, note that SD cards transfer data faster the larger the block size. So, weirdly enough, you probably won’t see any speed impact moving data to the SD card even if the image size increases, because it’s all done in one transfer at around 200 Mb/s. I.e., the only way to get the highest bandwidth to an SD card is to do giant transfers. When the image size is smaller, the SD card actually gets less efficient.

Also, note that the camera driver will automatically crop the image size for you to fit the buffers you requested. This is an easy way to tell when you are out of RAM: once you see the image size starting to get cropped, you know you’ve hit how many buffers you can fit. This crop will override a previous crop if it needs to.
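A quick way to check for that crop (my own sketch; the 200 x 200 window and 12 buffers are just the numbers from this thread):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_framebuffers(12)     # request a 12-deep image fifo
sensor.set_windowing(200, 200)  # ask for a 200 x 200 window

img = sensor.snapshot()
if (img.width(), img.height()) != (200, 200):
    # The driver shrank the frame to fit the requested buffers in RAM.
    print("Cropped to %d x %d -- too many buffers for this resolution" % (img.width(), img.height()))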

I modified the code as you suggested.

import sensor, image, pyb, time

record_time = 20000 # 20 seconds in milliseconds

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

# 5 frames is the max that can be achieved without cropping
sensor.set_framebuffers(12) # Need to understand how this works
sensor.set_windowing(200, 200)
clock = time.clock()

stream = image.ImageIO("/stream.bin", "w")

# Red LED on means we are capturing frames.
pyb.LED(1).on()

start = pyb.millis()
while pyb.elapsed_millis(start) < record_time:
    clock.tick()
    img = sensor.snapshot()
    # Modify the image if you feel like here...
    stream.write(img)
    print(clock.fps())

stream.close()

# Blue LED on means we are done.
pyb.LED(1).off()
pyb.LED(3).on()

There was a slight increase in FPS as you suggested, and we did notice the image cropping once we increased the buffers to anything above 5. I think we got to a decent FPS for our needs.

Frame Buffers Set    FPS        Image Dimensions    Resolution
5                    18.0212    320 x 240           QVGA
7                    25.3361    272 x 204           QVGA
10                   34.5017    232 x 174           QVGA
12                   41.7042    208 x 156           QVGA
15                   51.3282    184 x 138           QVGA
10                   34.7326    200 x 200           QVGA
10                   35.6785    220 x 176           QVGA
8                    32.8934    250 x 200           QVGA

I thought I would share our results and ask whether you had any more suggestions for improvement. Thank you for your help!

Glad to hear you got something that will work. With the OpenMV Cam RT, which we are about to launch, you will get much higher performance. I was doing some SD card testing today and hit a 21 MB/s write speed to the SD card.

Hi, I had a quick question: when I used sensor.TRIPLE_BUFFER instead of sensor.set_framebuffers(), I seemed to get a higher FPS rate. I was under the impression that the OpenMV Cam H7 does not enable triple buffering and that triple buffering is not traditionally used in video recording. Is there a reason for this difference in performance? Thank you.

Hi, we support single buffering, double buffering, triple buffering, and an image fifo.

These are all controlled by passing:

sensor.set_framebuffers(1) → single buffering
sensor.set_framebuffers(2) → double buffering
sensor.set_framebuffers(3) → triple buffering
sensor.set_framebuffers(N >= 4) → image fifo

Triple buffering offers the highest performance for normal use. Image fifo mode is really only necessary for video recording; it actually has drawbacks in that images may be old by the time you use them.

Anyway, the H7 doesn’t have enough RAM to enable triple buffering all the time, so the system only turns it on for smaller resolutions. Anything larger, like QVGA, falls back to single buffering.

The reason performance goes way up with it is that we stop dropping half the frames from the camera. Essentially, with single buffering, if you are processing a frame and don’t have space for the next frame, then you have to drop it. With triple buffering our drivers are able to store the incoming frame in one of two buffers while the CPU is working on the third buffer. This ensures that nothing blocks the camera capture driver from saving an image or the processor from working on an image. When the processor wants a new frame, it just grabs one of the two buffers the camera capture driver was not using. So, the CPU always has the latest image possible.
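In other words, for real-time work you would request three buffers at a resolution small enough for them to fit; a minimal sketch of my own (QQVGA here is just an assumption of something small enough for the H7):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)  # small enough that three buffers fit in RAM
sensor.set_framebuffers(3)          # 3 -> triple buffering
sensor.skip_frames(time = 2000)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()  # always the newest completed frame, never stale
    # ... real-time processing on img goes here ...
    print(clock.fps())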

In image fifo mode pictures are stored in a fifo. If the fifo fills up, you drop frames. Also, if the fifo is very long, then the CPU may be working on an image from long ago. So, you don’t want image fifo mode for real-time video processing, just for video recording, to avoid dropping frames if the processor gets busy and can’t read an image for a while.

Hi, I didn’t check the origin of the thread contents when responding. Yes, triple buffering isn’t typically used for video recording…

It’s weird that performance goes up; image fifo mode should be superior. The only reason for the difference, I guess, is that your resolution increased, since you are using less RAM for buffers, allowing for faster SD card writes.

SD card bandwidth achieves the best performance under the image writer class. On the new OpenMV Cam RT 1060 I measured 20 MB/s with a 1080p image. However, I saw this fall as low as 5 MB/s when the wrong buffering style was used, etc.

So, to clarify: if I am doing video recording, it is better to use fifo mode, since triple buffering might drop images if the buffers are unable to continuously store incoming images?

Triple buffering never drops images on the input side. But if your processing can’t keep up with the rate of images coming in, then you’ll miss frames. However, you’ll never have to deal with old frames.

Image fifo mode makes sure you capture all frames, even old frames.

The difference becomes apparent when you have an image fifo of, say, 100 images and you add a delay between reading frames. You will then get a slow-mo-like effect with image fifo mode.
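A sketch of that effect (mine, with hypothetical numbers; the driver will crop the frames if 100 buffers don’t actually fit in RAM):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQQVGA)  # tiny frames so many buffers fit
sensor.set_framebuffers(100)         # deep fifo: frames queue up in capture order
sensor.skip_frames(time = 2000)

while True:
    img = sensor.snapshot()  # pops the OLDEST queued frame from the fifo
    time.sleep_ms(100)       # read slower than the camera captures, so what you
                             # see lags real time: a slow-mo-like effect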
