HVGA scaling question

When applying HVGA, the frame appears to be cropped rather than scaled down. For example, when changing from VGA to QVGA, the frame size and perspective stay the same. Is this expected?

I’m also curious if there is any way to gain more control over the resolution. I’m riding a fine line: enough detail to capture April tags, but not so large a resolution that it impacts the frame rate of my operations.

Perhaps there is a way to set a ratio between HVGA and VGA without cropping. Or perhaps I can change the resolution dynamically, using a higher-res image only for April tag detection and then reverting to QVGA.


You can just use the sensor.set_windowing() feature to control exact cropping on any resolution.
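As a rough sketch of what that looks like in an OpenMV script (the window coordinates here are illustrative, and this only runs on the camera hardware itself):

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)            # full 640x480 readout
sensor.set_windowing((160, 120, 320, 240))  # (x, y, w, h): crop a 320x240 region
sensor.skip_frames(time=2000)               # let the new settings settle

img = sensor.snapshot()                     # cropped frame, full per-pixel detail
```

Note that windowing crops, so it trades field of view for detail in the region you pick.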

As for how the resolution gets picked out of the sensor, it’s pretty complex, and it’s per sensor. When you set the resolution, we try to do our best job to get that resolution out of the camera sensor, given start/stop locations where you can read pixels out and how much scaling the camera can do for you. Different cameras have weird quirks, which make the behavior between resolutions very different.

I think I understand the cropping piece, but is there a way to effectively downsample the resolution without losing field of view?

Hi, the OpenMV camera module is designed to crop frames when switching resolutions to maintain aspect ratio. To gain more control over resolution, consider adjusting settings within the OpenMV firmware, or dynamically changing resolution based on the task at hand, such as using a higher resolution for April tag detection and reverting to a lower resolution for other operations.

Setting the resolution downsamples the image sensor while keeping the same field of view. This is exactly what it’s for.

However, as mentioned, the sensors have quirks. So if you lose field of view when changing the resolution, it’s because we couldn’t do a fractional downsample of the image array and had to start cropping.

Then there’s a performance aspect: on the Nicla you can keep the whole field of view when you downsample, but this results in very low FPS. So we limit the amount of scaling to keep the FPS up. However, I added a sensor ioctl for the Nicla camera that allows the maximum field of view if you don’t care about the frame rate falling.

Anyway, the current code really tries its best. Which camera sensor are you using?

I’m using the global shutter camera available on your site. Now reading the specs I think I’m limited to about HVGA anyway by the sensor capabilities.

I’d be very interested in switching to a high resolution for April tag detection and then lowering to QVGA for my blob detection, but I need to make sure I keep the same field of view; otherwise my tag references will throw off the rest of my process.

Is that as simple as changing the resolution when advancing states? I’m effectively using a state machine to manage which path to proceed down.

Yeah, you just change the res.

All the timeouts to avoid corrupt images and all the DMA logic is automatically restarted by our drivers. Just change the resolution and then call snapshot.
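A minimal sketch of that pattern inside a state machine, assuming an OpenMV board where the driver restarts DMA on a framesize change as described above (the state names and blob threshold are illustrative, and April tag detection at VGA can exceed memory on some boards, so you may need a smaller framesize there):

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

state = "FIND_TAG"

while True:
    if state == "FIND_TAG":
        sensor.set_framesize(sensor.VGA)   # more detail for April tags
        img = sensor.snapshot()            # driver restarts DMA automatically
        if img.find_apriltags():
            state = "TRACK_BLOBS"
    else:  # TRACK_BLOBS
        sensor.set_framesize(sensor.QVGA)  # lower res keeps blob tracking fast
        img = sensor.snapshot()
        blobs = img.find_blobs([(200, 255)])
```

Since each state sets its framesize before calling snapshot, the switch cost is only paid on state transitions, not every frame.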

There will be a slight performance impact, but as long as you don’t do it constantly you should be good.