OpenMV Cam H7 Plus Frame Rate for RAW Images

Hello,

In this forum post, I see that the H7 Plus can record video at frame rates significantly higher than the H7 (link).

On the homepage for the camera, I previously saw that it could reach 75 fps for a raw RGB image. Is this frame rate still possible with the triple buffering on the H7 Plus?

I would like to capture raw images (TIF files) at the highest frame rate but want to make sure the fps is sufficient before making a purchase.

Thanks!

Yes, but you need to change the readout window via an ioctl. The OV7725 camera sensor does 75 FPS by default. The OV5640 can only do 46 FPS at its full field of view. If you reduce the readout window it can go up to 240 FPS (this reduces your field of view, though).

Note, I’m not talking about set_windowing. We have an IOCTL called READOUT_WINDOW which modifies the pixel area read out from the camera sensor’s pixel array.
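Roughly, it looks like the sketch below (the IOCTL_SET_READOUT_WINDOW constant and the window values are just an illustration of the current API; check the Readout Control examples shipped with the IDE for the exact usage on your firmware):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)

# Shrink the region of the pixel array that the sensor reads out.
# Less area to read out per frame -> higher frame rate, smaller field of view.
# The (x, y, w, h) values here are only an example.
sensor.ioctl(sensor.IOCTL_SET_READOUT_WINDOW, (0, 0, 640, 480))

sensor.skip_frames(time = 2000)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    print(clock.fps())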

Thanks for following up on this.

Is there some preliminary code to capture raw images using the IOCTL function?

Here is some preliminary code I am using, but I just want to validate:

  1. The time between frames is set to ~16 ms, yet I’m seeing frame rates of around 150+ fps. Is this reasonable?

  2. When capturing 50 frames, the frame rate is not uniform for each image being captured. Why is this?

  3. I’m seeing no difference in frame rate when changing the frame size from QVGA to VGA. What might be causing this?

import sensor, image, time, pyb

sensor.reset()                          # Reset and initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE)  # Set pixel format to grayscale.
sensor.set_framesize(sensor.QVGA)       # Set frame size.
sensor.skip_frames(time = 2000)         # Wait for settings to take effect.

for i in range(50):
    start_time = pyb.micros()           # Current time in microseconds before capturing the image.
    img = sensor.snapshot()             # Take a picture and return the image.
    end_time = pyb.micros()             # Current time in microseconds after capturing the image.
    elapsed_time = end_time - start_time  # Elapsed time in microseconds.

    if elapsed_time > 0:                # Check to avoid division by zero.
        frame_rate = 1000000 / elapsed_time  # Compute the frame rate.
        print("Image {} saved at {:.2f} frames per second.".format(i, frame_rate))  # Print a message with the frame rate.
    else:
        print("Frame captured too quickly to measure frame rate.")

    pyb.delay(16)                       # Delay 16 milliseconds before the next capture.

Please see Examples->Camera->Readout Control->100 FPS IR LED tracking for how to use the ioctl to get a very fast frame rate.

What you are doing is not correct at all. Adding delays in your code is unnecessary. The camera outputs images at a fixed rate. Adding delays just slows down your processing.
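For reference, the usual way to measure frame rate on OpenMV is with time.clock() and clock.fps(), without any artificial delays. A minimal sketch along the lines of your script:

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

clock = time.clock()                    # Tracks the time between clock.tick() calls.
for i in range(50):
    clock.tick()                        # Mark the start of this frame.
    img = sensor.snapshot()             # Capture a frame (no pyb.delay needed).
    print("Image {} captured at {:.2f} FPS.".format(i, clock.fps()))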

If you want a fixed rate at which images come out, you can set the frame rate via sensor.set_framerate() on the camera.
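For example, to cap the sensor output at 60 FPS (the value here is just an illustration):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_framerate(60)                # Ask the sensor to output frames at a fixed 60 FPS.
sensor.skip_frames(time = 2000)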