Several questions about sensor.JPEG()

I have several questions regarding the sensor.JPEG() method.

  1. How does sensor.JPEG() work? Does it capture a raw RGB image and then compress it, or is it somehow able to capture the JPEG directly?

  2. At the same resolution, capturing with sensor.JPEG() is much faster than capturing with sensor.RGB565 and then calling compress() on the captured frame. Why is the sensor.JPEG() method faster?

  3. Once an image is captured by sensor.JPEG(), why does it become immutable and impossible to crop? JPEG cropping is permitted in OpenCV. Could the OpenCV cropping approach be implemented on the OpenMV?

  4. Why does the snapshot capture time for the sensor.JPEG() method plateau for higher resolution images? The plot below was generated using the OpenMV H7 Plus, and each point is averaged over 100 images. A snippet of the code used to generate the plot is included below.
    [Attached plot: snapshot time vs. frame size]

import sensor, utime

sensor.reset()
sensor.set_pixformat(sensor.JPEG)       # camera outputs JPEG directly
sensor.set_quality(80)                  # JPEG quality setting
sensor.set_framesize(sensor.WQXGA)
sensor.skip_frames(time=2000)

F1 = utime.ticks_ms() / 1000
img = sensor.snapshot()
F2 = utime.ticks_ms() / 1000
snapshot_time = F2 - F1                 # seconds per snapshot

As you can see, the time it takes to capture the JPEG rises exponentially as the image resolution increases; however, for resolutions at sensor.SXGA or greater, it caps at about 0.07 s. Why does this happen? In comparison, the time it takes to capture an RGB image on the OMV is linear with image resolution. Why is there such a sharp difference in performance?

  5. In the plot above, changing the compression quality doesn't change the snapshot time. That is, after setting sensor.set_pixformat(sensor.JPEG), a snapshot takes the same amount of time regardless of whether the JPEG quality was set to 30%, 50%, or 90%. Why does it take the same amount of time? In comparison, the time it takes to compress an RGB image on the OMV is linear with image resolution. Why is there such a sharp difference in performance?
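
For reference, a minimal sketch of the quality sweep described in 5, reusing the setup code above (the quality values and loop structure here are illustrative, not the exact benchmark code):

import sensor, utime

sensor.reset()
sensor.set_pixformat(sensor.JPEG)
sensor.set_framesize(sensor.WQXGA)
sensor.skip_frames(time=2000)

for q in (30, 50, 90):
    sensor.set_quality(q)
    sensor.skip_frames(time=500)                 # let the new quality setting take effect
    t0 = utime.ticks_ms()
    sensor.snapshot()
    dt = utime.ticks_diff(utime.ticks_ms(), t0)  # wrap-safe elapsed time in ms
    print("quality", q, "->", dt, "ms")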

1-2. The OV5640 has a JPEG compressor on-chip. sensor.JPEG puts the camera into JPEG mode, so the STM32H7 doesn't have to do any work; it just receives the already-compressed JPEG image from the camera. Note that the OV5640 is like an 800 MHz SoC internally.
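
To make the two paths concrete, here is a rough sketch of JPEG-mode capture versus raw capture followed by CPU-side compression on the H7 Plus (the resolution and quality values are arbitrary):

import sensor

# Path A: the OV5640 compresses on-chip; the STM32H7 only receives JPEG bytes.
sensor.reset()
sensor.set_pixformat(sensor.JPEG)
sensor.set_framesize(sensor.SXGA)
sensor.set_quality(80)
sensor.skip_frames(time=2000)
jpeg_frame = sensor.snapshot()                # already JPEG-encoded by the camera

# Path B: the camera sends raw RGB565 and the STM32H7 compresses it itself.
sensor.set_pixformat(sensor.RGB565)
sensor.skip_frames(time=2000)
raw_frame = sensor.snapshot()
jpeg_frame = raw_frame.compress(quality=80)   # compression now runs on the STM32H7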

  3. JPEG cropping is not actually permitted in OpenCV. What is happening there is that OpenCV loads the JPEG, decodes it to a BGR888 image, lets you crop that, and then saves it as a JPEG again. We don't currently have any code in our library for decompressing JPEG images, but this can be added. Additionally, the STM32H7 has a JPEG decompressor onboard. I've been going through our library fixing large multimedia-related issues like this, and I'll keep JPEG decompression support in mind.
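
In other words, the OpenCV "JPEG crop" amounts to a decode/slice/re-encode cycle, roughly like this on a desktop (the file names and crop coordinates here are placeholders):

import cv2

img = cv2.imread("input.jpg")          # decode the JPEG into a BGR888 array
cropped = img[100:400, 200:600]        # cropping is just array slicing on raw pixels
cv2.imwrite("cropped.jpg", cropped)    # re-encode the cropped region as a new JPEG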

  4. You need to use sensor.ioctl() with the SET_READOUT_WINDOW call to reduce the sensor readout resolution. The OV5640 has 5M pixels, and to deliver a full frame it has to process ALL of them, which limits how fast it can run. If you are willing to have the field of view cropped (which is what the Raspberry Pi camera does when you change the resolution), then you can increase the frame rate at the expense of the field of view.

E.g., if you set the readout window to 1920x1080 instead of the default 2560x1944, you will see a massive boost in speed, but you will also notice the field of view narrowing by a lot.
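
A sketch of that readout-window crop (assuming current firmware, where the ioctl constant is named sensor.IOCTL_SET_READOUT_WINDOW; the window size matches the example above):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.JPEG)
sensor.set_framesize(sensor.FHD)       # 1920x1080 output
# Read out only a 1920x1080 window of the OV5640 array instead of the full
# 5MP active area: faster readout, narrower field of view.
sensor.ioctl(sensor.IOCTL_SET_READOUT_WINDOW, (1920, 1080))
sensor.skip_frames(time=2000)
img = sensor.snapshot()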

  5. See 1-2. The STM32H7 has a hardware-optimized JPEG decompression pipeline. However, for JPEG encoding the processor has to assemble 8x8-pixel MCUs to feed to the JPEG compressor. While the hardware on the STM32H7 could JPEG-compress a 5MP image in less than 10 ms, the processor has to feed the compressor, which takes a tremendous amount of work and around 200 ms. ST didn't design in the right hardware to feed the compressor, as it was clearly added to the silicon as an afterthought.
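
One way to see that feeding cost directly is to time compress() on a raw frame captured on the H7 Plus (a rough sketch; the exact timing will depend on resolution and quality):

import sensor, utime

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.WQXGA)     # large raw frame; the H7 Plus SDRAM can hold it
sensor.skip_frames(time=2000)

raw = sensor.snapshot()
t0 = utime.ticks_ms()
raw.compress(quality=80)               # CPU assembles the MCUs and feeds the JPEG codec
print("compress() took", utime.ticks_diff(utime.ticks_ms(), t0), "ms")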

sensor.JPEG() captures images directly in JPEG format; it does not first capture a raw RGB image and then compress it. The immutability and inability to crop images captured with sensor.JPEG() is due to the JPEG compression itself.
The plateau in snapshot capture time for higher resolution images with sensor.JPEG() may be due to hardware limitations or optimizations, and the quality setting may not affect snapshot time because, from the processor's point of view, the compression work is fixed.

Our library supports JPEG cropping now. We'll decompress the JPEG, crop it, and then recompress it, all with one line of code.
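
Assuming that one-liner is exposed through the existing crop() method (an assumption on my part; the roi values here are arbitrary), usage would look something like:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.JPEG)
sensor.set_framesize(sensor.SXGA)
sensor.skip_frames(time=2000)

img = sensor.snapshot()                # JPEG frame straight from the camera
img.crop(roi=(0, 0, 640, 480))         # decompress, crop, and recompress internally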