700x700 TIF Image Capture Using OpenMV H7 Plus

Hello,

I am working with the OpenMV H7 Plus to capture 700x700 images in TIF format at the highest frame rate that resolution allows.

I updated the code to include the IOCTL readout-window feature (although the IR example had more features than I needed).

That code can be seen below:

import sensor, image, time, os

# Check whether the SD card filesystem is accessible
# If it is, images will be saved to the SD card
# If not, images will not be saved
try:
    os.listdir('/sd')  # Raises OSError if the SD card is not available
    storage = '/sd'  # Define the storage path to the SD card
except Exception as e:
    storage = None  # No storage will be used if the SD card is unavailable
    print("SD card not available:", e)

# Initialize the camera sensor
sensor.reset()  
sensor.set_pixformat(sensor.GRAYSCALE)  # Capture images in grayscale
sensor.set_framesize(sensor.VGA)  # Use VGA resolution as the starting point

# Disable automatic gain and white balance to keep exposure consistent between frames
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)

# Disable automatic exposure - we want to manually control the exposure
sensor.set_auto_exposure(False)

# Exposure time needs to be set manually if you're adjusting for a specific frame size or light condition
# sensor.set_auto_exposure(False, exposure_us=EXPOSURE_TIME_MICROSECONDS)

# Wait for the above settings to take effect on the sensor
sensor.skip_frames(time = 2000)

# Create a clock object to measure the frames per second (FPS)
clock = time.clock()

# Use IOCTL to set the desired readout window size on the sensor
# This assumes the camera sensor can support the custom resolution
sensor.ioctl(sensor.IOCTL_SET_READOUT_WINDOW, (700, 700))

# Capture and save images in a loop
for i in range(50):
    # Begin a new time measurement
    clock.tick()
    
    # Capture an image from the camera
    img = sensor.snapshot()
    
    # If an SD card is mounted, save the image to the card with an incremental filename
    if storage:
        img.save("{}/image{}.tif".format(storage, i))
    
    # Print a message to the serial console to show that the image has been saved
    print("Image {} saved.".format(i))
    
    # Print the current FPS rate to the serial console
    print(clock.fps())

I’d like to clarify the following:

  1. Does the H7 Plus support a frame rate of >75 FPS at a resolution of 700x700? I’ve tested this but cannot confirm, as the code freezes after saving 50 images without ever displaying the frame rate.
  2. Is there a simple way to save the TIF image files directly to my laptop without the SD card? If not, can you provide the correct code for saving to the SD card?

Thanks!

Hi, we don’t support the TIF image format, so save() shouldn’t work at all when you try that. We support jpg, bmp, pgm, ppm, and png.

Also, you cannot save individual images to the SD card at a high FPS. There’s a cost to opening and closing each file. You need to use the mjpeg module to create an MJPEG object, or the ImageIO module, to write out video.

I’d recommend using MJPEG as it lowers the IO bandwidth necessary.

As for sending the images to your laptop: the easiest thing to do is to use the video recording feature in OpenMV IDE. This will save the frame buffer at 30 FPS. The H7 Plus cannot transfer images as large as you want over USB at 75 FPS. The OpenMV Cam RT1062 could technically do this; however, we don’t really have a software setup for image transfer to anything but the IDE.

That shouldn’t be an issue. I can save the images as BMP files and convert them to TIF files through Python.
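For reference, the BMP-to-TIF conversion on the laptop side is straightforward. Pillow can do it in one line (`Image.open("image0.bmp").save("image0.tif")`), but uncompressed grayscale TIFF is simple enough to write without any dependency at all. This is a minimal sketch of my own (the function name `write_gray_tiff` and the single-strip layout are my choices, not anything OpenMV-specific):

```python
import struct

def write_gray_tiff(path, width, height, pixels):
    """Write 8-bit grayscale pixels (bytes, row-major) as an
    uncompressed little-endian TIFF with a single strip."""
    assert len(pixels) == width * height
    data_offset = 8                       # pixel data goes right after the 8-byte header
    ifd_offset = data_offset + len(pixels)
    header = struct.pack("<2sHI", b"II", 42, ifd_offset)
    tags = [
        (256, width),        # ImageWidth
        (257, height),       # ImageLength
        (258, 8),            # BitsPerSample
        (259, 1),            # Compression: none
        (262, 1),            # PhotometricInterpretation: BlackIsZero
        (273, data_offset),  # StripOffsets
        (278, height),       # RowsPerStrip: whole image in one strip
        (279, len(pixels)),  # StripByteCounts
    ]
    ifd = struct.pack("<H", len(tags))
    for tag, value in tags:                # entries must stay sorted by tag number
        ifd += struct.pack("<HHII", tag, 4, 1, value)  # type 4 = LONG, count 1
    ifd += struct.pack("<I", 0)            # no next IFD
    with open(path, "wb") as f:
        f.write(header + bytes(pixels) + ifd)
```

For anything beyond 8-bit grayscale (RGB, compression), Pillow is the saner route; this only shows that the format itself is no obstacle.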

Is this the mjpeg module you’re referring to (mjpeg — mjpeg recording — MicroPython 1.20 documentation)? I can stream the video, save it, and then use ImageJ to split it into frames based on the frame rate.

I’m still very confused, though. In this post, the frame rate should reach 46 FPS at the maximum field of view; since I’m reducing it, I should see something greater than that, correct? Assuming I want to keep the high frame rate but save video to the SD card (instead of individual images to the SD card, or video over USB), how can I do this, relative to the attached post? Any sample code would be greatly appreciated!

Thanks

Hi, this does 75+ FPS:

import sensor, image, time

sensor.reset()  # Reset and initialize the sensor.
sensor.set_pixformat(sensor.JPEG)  # Read JPEG frames directly from the camera sensor
sensor.set_framesize(sensor.VGA)  # Set frame size to VGA (640x480)
sensor.ioctl(sensor.IOCTL_SET_READOUT_WINDOW, (0, 0, 1600, 1200))
sensor.skip_frames(time = 2000)

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    print(clock.fps())

This saves to disk at 75 FPS:

import sensor
import time
import machine
import mjpeg

sensor.reset()  # Reset and initialize the sensor.
sensor.set_pixformat(sensor.JPEG)  # Read JPEG frames directly from the camera sensor
sensor.set_framesize(sensor.VGA)  # Set frame size to VGA (640x480)
sensor.ioctl(sensor.IOCTL_SET_READOUT_WINDOW, (0, 0, 1600, 1200))
clock = time.clock()  # Create a clock object to track the FPS.

led = machine.LED("LED_RED")

led.on()
m = mjpeg.Mjpeg("75fps.mjpeg")

for i in range(1000):
    clock.tick()
    m.write(sensor.snapshot())
    print(clock.fps())

m.close()
led.off()

Note that the readout window is the readout window of the camera’s pixel area. This is scaled down to whatever framesize you set.

Passing (700, 700) makes no sense.

The way to think about the argument is that x/y are offsets from the camera center, and w/h define where the sensor should read out pixels from. Those pixels are then passed to a scaler in the camera, which produces an image at the framesize.

The reason the FPS increases is that when you tell the camera to read out only 1600x1200 pixels versus e.g. 2560x1440, it takes less time for a frame to be processed and exit the camera at the specified framesize.
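A rough way to picture both points is the sketch below. The helper names are mine, the 2592x1944 default assumes the H7 Plus’s OV5640 active array, and the timing model is deliberately crude (it ignores row length, blanking intervals, and the scaler), so treat it as intuition, not a spec:

```python
def readout_rect(x_off, y_off, w, h, full_w=2592, full_h=1944):
    """Map readout-window-style args (x/y offsets from the sensor center,
    plus width/height) to an absolute pixel rectangle on the array."""
    left = full_w // 2 + x_off - w // 2
    top = full_h // 2 + y_off - h // 2
    assert 0 <= left and left + w <= full_w, "window falls off the array"
    assert 0 <= top and top + h <= full_h, "window falls off the array"
    return left, top, w, h

def fps_estimate(base_fps, base_rows, new_rows):
    """Crude model: frame readout time scales with the number of rows
    read, so FPS scales roughly inversely with the window height."""
    return base_fps * base_rows / new_rows
```

Under this model, `readout_rect(0, 0, 1600, 1200)` is a centered 1600x1200 crop, and `fps_estimate(46, 1944, 1200)` lands around 74.5, which at least rhymes with going from ~46 FPS full-field to 75+ FPS with the smaller window.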

Finally, note the use of the JPEG pixel format from the camera sensor. If you don’t set that, the H7 Plus has to JPEG-compress the images itself, which limits the maximum frame rate to about 44 FPS. We have already done everything possible to optimize the onboard JPEG compression, so that is the upper limit.

Regarding the OpenMV Cam RT1062: it cannot read out JPEG images from the camera at the moment. I need to improve its camera sensor driver. Once I do, it should be able to run this code too.

Regarding 700x700: you can’t use set_windowing to crop the frame to 700x700. You’d need a custom framesize, which would require editing our firmware to get that output. Since the image has already been JPEG-compressed by the camera sensor, its size is equal to the framesize.

Thank you for providing this detailed response!

The resolution doesn’t have to be 700x700. I just need the highest frame rate at the highest possible size (I assumed 75 FPS corresponded to ~700x700, but 1600x1200 is more than suitable at this frame rate).

With mjpeg, it looks like the frames can be converted to any image type afterwards. I’ll test this with BMP and hope the high frame rate carries over.

I’ll follow up on progress and mark this as resolved as soon as I can do some initial tests.

Appreciate the help once again.

Hi, you can use OpenMV IDE’s Video Tools → Convert Video File: select the mjpeg file and set the output file name to “%d.bmp”, and FFMPEG will pull the images from the video file and save them as individual BMP files for you.

Note that this will produce 1000s of images.
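If installing FFMPEG is a problem, note that an MJPEG file is essentially concatenated baseline JPEGs, so the frames can be pulled out with a few lines of plain Python. This is just a sketch (the function name is mine; it relies on FFD8/FFD9 only appearing as the SOI/EOI markers, which holds for typical baseline JPEG streams since FF bytes inside entropy-coded data are stuffed as FF00), so prefer the IDE/FFMPEG route for anything serious:

```python
def split_mjpeg(mjpeg_bytes):
    """Split an MJPEG stream into individual JPEG frames by scanning
    for SOI (FFD8) / EOI (FFD9) marker pairs."""
    frames = []
    start = mjpeg_bytes.find(b"\xff\xd8")
    while start != -1:
        end = mjpeg_bytes.find(b"\xff\xd9", start + 2)
        if end == -1:
            break  # truncated final frame; drop it
        frames.append(mjpeg_bytes[start:end + 2])
        start = mjpeg_bytes.find(b"\xff\xd8", end + 2)
    return frames

# Usage (hypothetical filenames):
# frames = split_mjpeg(open("75fps.mjpeg", "rb").read())
# for i, jpg in enumerate(frames):
#     with open("frame{}.jpg".format(i), "wb") as f:
#         f.write(jpg)
```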

Hello,

I updated the code since some of the commands were outdated and causing errors such as:
1.) AttributeError: ‘module’ object has no attribute ‘LED’
2.) AttributeError: ‘Mjpeg’ object has no attribute ‘write’

The code can be seen below:

import sensor
import time
from pyb import LED
import mjpeg

sensor.reset()  # Reset and initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE)  # Set pixel format to GRAYSCALE
sensor.set_framesize(sensor.VGA)  # Set frame size to VGA (640x480)
sensor.ioctl(sensor.IOCTL_SET_READOUT_WINDOW, (0, 0, 1600, 1200))
clock = time.clock()  # Create a clock object to track the FPS.

led = LED(1)  # Using LED from pyb module

led.on()
m = mjpeg.Mjpeg("75fps.mjpeg")

for i in range(1000):
    clock.tick()
    m.add_frame(sensor.snapshot())
    print(clock.fps())

m.close(clock.fps())
led.off()

When trying to use the Video Tools for the conversion, I see the following:

Is there any way to default the conversion to a BMP file? I assumed FFMPEG was on the backend of the tools and built into the video conversion feature, making it unnecessary to install it separately.

Any clarity is appreciated.

Hi, write() for mjpeg is the new API; add_frame() was removed. It sounds like you went back to an older firmware image.

As for that error… Hmm, that might be related to how macOS handles things. Please select the All Files format from the save dialog. Our software doesn’t care and just passes things along. I think you’re hitting the macOS native dialog forcing you to use .mp4 when you select the .mp4 format.