Slower than expected FPS

I would expect the simple code below to output a 120 Hz signal on pin P1, but I only get 30 Hz.
Why is this the case?
I have not connected the camera via USB and am powering it through the VIN pin.

import sensor, image, time, math
from pyb import Pin, Timer, LED

red_led   = LED(1)
green_led = LED(2)
blue_led  = LED(3)

PW_center = 1500
tim = Timer(4, freq=50)
ch1 = tim.channel(1, Timer.PWM, pin=Pin("P7"), pulse_width = PW_center)

pin1 = Pin('P1', Pin.OUT_PP, Pin.PULL_NONE)

sensor.reset()                     # Initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA) # use QQVGA for speed
sensor.skip_frames(time = 1000)    # Let new settings take effect.

while(True):
    pin1.value(not pin1.value())   # Toggle the pin once per captured frame.
    img = sensor.snapshot()

Hi, you should get about 92 FPS on the OpenMV Cam M7. We can’t capture faster than that. 30 FPS seems too low though. What board are you using?

I am using the Cam M7 and a logic analyzer to test it. The frequency is the same as it shows on the IDE with USB attached.
Could I edit the firmware to make it go close to 120 FPS at QQVGA? I don’t need USB or micropython support.

If you want a faster FPS I recommend playing with the camera register settings. It’s already quite overclocked to reach 92 FPS. To go higher you’d need to increase the camera XCLK above 54 MHz, and I don’t think that is possible without crashing the system. But, yes, go ahead and try what you like to drive the camera faster. You will also need to lower the exposure time, which will increase the FPS.

Thank you for the quick replies!

Regarding the code I posted above, why isn’t the pin toggled at 92 Hz?

Hi, are you using an old firmware? If so, you should update to the latest firmware.

I didn’t think of that. I will update the firmware and let you know.

Updating the firmware made a huge difference: the pin now toggles at 86 Hz. I’m guessing that, as kwagyeman posted, the FPS is 92, and the GPIO routine plus Python overhead drops the loop speed to 86 Hz.

I had a question regarding the OV7725. I realize that it is indeed being overclocked, so I won’t bother pushing XCLK higher. However, based on the timing diagrams I calculated the following FPS figures. Are these achievable on the M7?

I read a post on this forum which mentioned that the camera’s FPS is locked at 120. Is this limitation imposed by the OV7725 or by the STM?

// FPS calculations for OV7725 raw grayscale images

// datasheet =

PCLK = 54000000; //overclocked from 48MHz with no prescaler
tpixel_raw = 1/PCLK; //1 byte per pixel for grayscale

VGA_tline = 784*tpixel_raw; //640 + delay (144)
VGA_frameTime = 510 * VGA_tline; //480 + delay (30)
VGA_fps = 1/VGA_frameTime; // = 135 fps

QVGA_tline = 576 * tpixel_raw;
QVGA_frameTime = 278 * QVGA_tline;
QVGA_fps = 1 / QVGA_frameTime; // = 337 fps

// I am assuming that the exposure time is just the inverse of the framerate
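These figures can be checked with a short, self-contained Python sketch (my own restatement of the arithmetic above; the 784/510 and 576/278 clock and row counts are the datasheet-derived values from the post, not something I measured):

```python
# Reproduce the OV7725 FPS arithmetic from the post above.
PCLK = 54_000_000  # overclocked pixel clock in Hz; 1 byte per pixel for grayscale

def fps(clocks_per_line, lines_per_frame, pclk=PCLK):
    """Frames per second for a given line/frame geometry at pixel clock pclk."""
    return pclk / (clocks_per_line * lines_per_frame)

print(round(fps(784, 510)))  # VGA (640 + 144 blanking, 480 + 30 blanking) -> 135
print(round(fps(576, 278)))  # QVGA -> 337
```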

Um, I think I was just incorrect saying the FPS was 120. I thought Ibrahim had doubled the FPS. But, it’s not exactly like that. He made the camera use a faster readout system for resolutions below 320x240 and he increased the pixel clock speed. Both of these combined make the FPS about 92. If you decrease the camera res you’ll see the frame rate lock at 92 Hz or so.

To go faster, set the exposure to a low value. The new firmware on our github release page has this feature. You can find the example scripts under usr/examples. Note that the latest firmware has not yet been pushed to OpenMV IDE.

I would also really appreciate an explanation for where I miscalculated above (QVGA->337fps).

I see that you are using the DCMI and DMA, so reading the grayscale pixels in at high speed isn’t a problem.
Running the DCMI in continuous-grab mode along with the DMA would give the CPU close to a 3 ms window between frames (assuming 337 FPS), which seems like plenty for an STM32F7.

Is there some hardware limit that I am overlooking or is there simply too much going on in terms of software to be able to support this fps?


If you set the camera into RAW Bayer mode you may be able to approach that FPS. Your calculation from the datasheet is correct. However, the camera has an exposure-time window which slows things down. If you want to go that fast you have to kill the exposure.

Try out the below code with the frame buffer disabled. You’ll see the frame rate up at 92 FPS or so. The actual camera FPS is higher however. It’s just we are dropping frames due to the next frame coming right after the first one. If you want to capture all frames you have to do everything in C world. In particular, you have to do all your processing during the line interrupt. See sensor.c.

Note that things like USB, etc. cause the processor to miss turning on the frame capture logic in time. You’ll need to cut a lot of interrupt sources out of your special code.

# Sensor Exposure Control
# This example shows off how to control the camera sensor's
# exposure manually versus letting auto exposure control run.

# What's the difference between gain and exposure control?
# Well, by increasing the exposure time for the image you're getting more
# light on the camera. This gives you the best signal to noise ratio. You
# in general always want to increase the exposure time... except, when you
# increase the exposure time you decrease the maximum possible frame rate
# and if anything moves in the image it will start to blur more with a
# higher exposure time. Gain control allows you to increase the output per
# pixel using analog and digital multipliers... however, it also amplifies
# noise. So, it's best to let the exposure increase as much as possible
# and then use gain control to make up any remaining ground.

import sensor, image, time

# Change the exposure_us value passed to set_auto_exposure() below to
# adjust the manual exposure time.

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.BAYER)   # Set pixel format to RAW Bayer.
sensor.set_framesize(sensor.QQVGA)   # Set frame size to QQVGA (160x120).

# Print out the initial exposure time for comparison.
print("Initial exposure == %d" % sensor.get_exposure_us())

sensor.skip_frames(time = 2000)     # Wait for settings take effect.
clock = time.clock()                # Create a clock object to track the FPS.

# You have to turn automatic gain control and automatic white balance off
# otherwise they will change the image gains to undo any exposure settings
# that you put in place...
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)

# Need to let the above settings get in...
sensor.skip_frames(time = 500)

current_exposure_time_in_microseconds = sensor.get_exposure_us()
print("Current Exposure == %d" % current_exposure_time_in_microseconds)

# Auto exposure control (AEC) is enabled by default. Calling the below function
# disables sensor auto exposure control. The additionally "exposure_us"
# argument then overrides the auto exposure value after AEC is disabled.
sensor.set_auto_exposure(False, \
    exposure_us = 0)

print("New exposure == %d" % sensor.get_exposure_us())
# sensor.get_exposure_us() returns the exact camera sensor exposure time
# in microseconds. However, this may be a different number than what was
# commanded because the sensor code converts the exposure time in microseconds
# to a row/pixel/clock time which doesn't perfectly match with microseconds...

# If you want to turn auto exposure back on do: sensor.set_auto_exposure(True)
# Note that the camera sensor will then change the exposure time as it likes.

# Doing: sensor.set_auto_exposure(False)
# Just disables the exposure value update but does not change the exposure
# value the camera sensor determined was good.

while(True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    print(clock.fps())              # Note: OpenMV Cam runs about half as fast when connected
                                    # to the IDE. The FPS should increase once disconnected.

Thanks for that.
I haven’t programmed an STM before, but I found some free IDEs that seem interesting.
Is there an IDE or programmer that you recommend?
I was currently considering this:
I could use USB, but I’d like to have some debugging support.

See here: Home · openmv/openmv Wiki · GitHub

Um, Ibrahim knows how to use the hardware level debugger.

Thanks, I stumbled upon that, but I’m on a Mac that’s low on space since I already have a VM running Windows. No space for another one, unfortunately.
I’m going to try compiling and programming the firmware on Windows.
I’m going to try compiling and programming the firmware on windows.

The sensor outputs 120 FPS; you can confirm this with a scope if you (carefully) scratch open the VSYNC signal via and probe that point (see the design files). The older firmware set the sensor output to VGA (which is limited to 60 FPS) and down-scaled from VGA to lower resolutions. I changed that so the output is set to QVGA (which can go up to the maximum 120 FPS) for resolutions <= QVGA.
The reason for the 92 FPS is exactly as Kwabena explained: we’re not using interrupts and double buffering; frames are read on demand. When you call snapshot() the camera waits for the current frame to finish and the next one to start (so a few frames get dropped). Note you’ll need to heavily modify the code to get all 120 frames, but then you probably won’t have enough RAM/time to do anything useful with the extra frames.
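That on-demand behaviour can be illustrated with a toy timing model (my own sketch, not the firmware’s actual code): each snapshot() reads one full frame and then must wait for the next frame boundary before starting again, so the whole cycle rounds up to a multiple of the sensor’s frame interval.

```python
import math

FRAME_US = 8333  # sensor frame interval at ~120 FPS, in microseconds

def effective_fps(processing_us, frame_us=FRAME_US):
    # One cycle = read out one full frame, run the user's code, then wait
    # for the next frame boundary; the cycle therefore rounds up to a
    # whole number of frame intervals.
    cycle_us = math.ceil((frame_us + processing_us) / frame_us) * frame_us
    return 1_000_000 / cycle_us

print(effective_fps(0))     # ~120: with zero work between frames, nothing drops
print(effective_fps(1000))  # ~60: 1 ms of work means every other frame is dropped
```

The real pipeline is messier than this (hence the observed 86-92 FPS rather than a clean 120 or 60), but the model shows why any per-frame overhead costs whole frames rather than shaving FPS smoothly.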

Compiling the firmware on Windows works as long as you can get Make working. I found Make on Windows without MinGW to be lacking.

Alright, thanks.
Just to confirm: VGA can go up to 135 FPS, as my calculations above show, but this would require killing the exposure so that it equals 1/135. I therefore assume the 60 FPS comes from the default exposure setting.
Am I correct in assuming this?

I don’t really have any more info for you on the FPS. Just expect to need at least 5 ms of exposure per frame. Since the shutter is rolling, the exposure happens while the frame is being read out, so it’s not a huge time blocker.
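A back-of-the-envelope sketch of that constraint (assuming the 5 ms figure above is a hard floor and that, because the shutter is rolling, the exposure fully overlaps the readout):

```python
MIN_EXPOSURE_S = 0.005  # assumed 5 ms minimum usable exposure per frame

def exposure_limited_fps(readout_fps, min_exposure_s=MIN_EXPOSURE_S):
    # With a rolling shutter the exposure overlaps the readout, so the
    # frame period is set by whichever of the two takes longer.
    frame_time = max(1.0 / readout_fps, min_exposure_s)
    return 1.0 / frame_time

print(exposure_limited_fps(135))  # VGA: the 7.4 ms readout dominates, stays ~135
print(exposure_limited_fps(337))  # QVGA: the 5 ms exposure floor caps it near 200
```

Under these assumptions the 337 FPS QVGA readout figure is unreachable; the exposure floor, not the readout, becomes the limit below roughly 5 ms per frame.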

Getting a high FPS is nice, but you need to do something with the data. That is always the bottleneck.

Hello sir,
How do I calculate the FPS values for different colors, and what is the FPS value for white?

Thanks in advance,

Can you please re-phrase?