H7 Resolutions?

Good Day iabdalkader and Team!

Just received your awesome H7 cams and love them; they are indeed much (2x+) faster than the M7s we’ve been testing with.

However, the reason we ‘moved up’ was the promise (as you indicated in https://forums.openmv.io/viewtopic.php?f=6&t=1258) of higher supported resolutions - especially 640x480 (VGA).

This seems to be problematic for image processing, however.

For example - the ‘single color tracking’ example program works perfectly at 320x240 (QVGA) at around 40 fps.

Changing the resolution from QVGA to VGA in the example program gives the dreaded “OSError: Image Format Not Supported” when running img.find_blobs on line 31. In https://forums.openmv.io/viewtopic.php?f=6&t=541&p=3573&hilit=OSError%3A+Image+Format+Not+Supported#p3573, your team indicated this is because the camera returns a Bayer image when it cannot support the larger frame buffer.

Given the new H7’s increased on-board memory (512 KB on the M7 vs. 1 MB on the H7), we assumed this would now be supported.

Our project requires higher resolution to track small colored objects at range in a large field of view, over the entire sensor field. Accordingly, telephoto lenses or grayscale processing will not work.

We tried limiting the ROI to just a small area in the sample program, but still receive the same OSError:

for blob in img.find_blobs(thresholds, roi=[(320//2)-(50//2), (240//2)-(50//2), 50, 50], pixels_threshold=100, area_threshold=100, merge=True):

Can you provide any guidance?


Hi, ST split the RAM on the STM32 into multiple segments that aren’t actually contiguous in the address space. So, while we have 1 MB, we can’t allocate it all to the frame buffer; the heap grew by a lot instead. Anyway, the frame buffer is 409600 bytes. So, if you need higher than 320x240, just set the camera to VGA and then set windowing on the sensor.

You should be able to hit 400x400 pixels easily. You can keep cranking it up until you get memory errors.
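To see which resolutions fit, you can do the arithmetic directly. A minimal sketch, assuming RGB565 (2 bytes per pixel) and the 409600-byte frame buffer mentioned above (the helper name `fits` is just for illustration):

```python
# Frame buffer size on the H7 per the post above, in bytes.
FB_BYTES = 409600
BPP_RGB565 = 2  # RGB565 stores each pixel in 2 bytes


def fits(w, h, bpp=BPP_RGB565):
    """Return True if a w x h image at bpp bytes/pixel fits in the frame buffer."""
    return w * h * bpp <= FB_BYTES


print(fits(320, 240))  # QVGA: 153600 bytes -> True
print(fits(640, 480))  # VGA: 614400 bytes -> False, hence the OSError
print(fits(400, 400))  # 320000 bytes -> True

# Largest square RGB565 window that fits:
side = int((FB_BYTES // BPP_RGB565) ** 0.5)
print(side)  # 452
```

So 400x400 has plenty of headroom, and in principle a square window can go up to about 452x452 before hitting the limit.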

Note, sensor.set_windowing() crops the image coming in. The per-method roi argument just reduces the work an algorithm has to do to a part of the image. If you use windowing in combination with the ROI, you can make the resolution take up as much of the frame buffer as you possibly can.
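For reference, a minimal OpenMV configuration sketch of the setup described above (this runs on the camera itself, not a PC; the 400x400 window, the LAB threshold, and the roi values are placeholder assumptions, not tuned values):

```python
# OpenMV MicroPython sketch: VGA readout cropped to a 400x400 window.
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)       # full 640x480 sensor readout...
sensor.set_windowing((400, 400))       # ...windowed down to fit the frame buffer
sensor.skip_frames(time=2000)
clock = time.clock()

thresholds = [(30, 100, 15, 127, 15, 127)]  # example LAB threshold (red-ish)

while True:
    clock.tick()
    img = sensor.snapshot()
    # The roi argument further restricts where find_blobs looks
    # within the already-cropped 400x400 window.
    for blob in img.find_blobs(thresholds, roi=(150, 150, 100, 100),
                               pixels_threshold=100, area_threshold=100,
                               merge=True):
        img.draw_rectangle(blob.rect())
    print(clock.fps())
```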

Note that find_blobs keeps a binary image of the frame buffer in RAM, along with a list data structure used to connect blobs. So you need to leave enough space for the binary image to fit, plus some space for the list structure. The list is sized to fill whatever RAM remains; if you don’t leave enough space for it, the algorithm just won’t connect all pixels in a blob that has a huge perimeter. If blobs are small this is not an issue.
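As a rough budgeting sketch of the binary-image overhead: assuming the temporary binary image packs 1 bit per pixel (an assumption on my part, not stated in the post; the real internal layout may differ), its size is small relative to the frame buffer:

```python
# ASSUMPTION: the find_blobs scratch binary image uses 1 bit per pixel.
def binary_image_bytes(w, h):
    """Bytes needed for a 1-bit-per-pixel mask, rounded up to whole bytes."""
    return (w * h + 7) // 8


print(binary_image_bytes(320, 240))  # 9600 bytes at QVGA
print(binary_image_bytes(400, 400))  # 20000 bytes at a 400x400 window
```

Even at a 400x400 window that is only ~20 KB, which is why the window size, not the binary image, is usually the binding constraint.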

Thank you!

Sad about the memory allocation madness, but understand.

Splitting the field into four window quadrants sounds great.

If the H7 sells well enough we may be able to do a V2 with DRAM. That would be in the future, however.