I have several questions regarding capturing images with the sensor.JPEG pixel format.
- How does sensor.JPEG work? Does the sensor capture a raw RGB image that is then compressed, or is it somehow able to capture the JPEG directly?
- At the same resolution, capturing with sensor.JPEG is much faster than capturing with sensor.RGB565 and then calling img.compress() on the captured image. Why is the sensor.JPEG path so much faster?
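For concreteness, here is a minimal sketch of the two paths I am comparing (VGA and quality 80 are just example values):

import sensor, utime

sensor.reset()

# Path 1: capture directly in JPEG format.
sensor.set_pixformat(sensor.JPEG)
sensor.set_quality(80)
sensor.set_framesize(sensor.VGA)
sensor.skip_frames(time=2000)
t0 = utime.ticks_ms()
sensor.snapshot()
jpeg_ms = utime.ticks_diff(utime.ticks_ms(), t0)

# Path 2: capture in RGB565, then compress in software.
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)
sensor.skip_frames(time=2000)
t0 = utime.ticks_ms()
img = sensor.snapshot()
img.compress(quality=80)  # in-place JPEG compression of the raw frame
rgb_ms = utime.ticks_diff(utime.ticks_ms(), t0)

print("JPEG capture:", jpeg_ms, "ms; RGB565 + compress:", rgb_ms, "ms")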
- Once an image is captured with sensor.JPEG, why is it immutable and impossible to crop? JPEG cropping is possible in OpenCV. Could the OpenCV cropping approach be implemented on the OpenMV?
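For reference, the OpenCV cropping I have in mind decodes the JPEG into a raw pixel array, slices it, and re-encodes the result (desktop Python; the file name and crop coordinates are just placeholders):

import cv2

# Decode the JPEG into a raw BGR array.
img = cv2.imread("frame.jpg")

# Crop by slicing the decoded array: rows (y) first, then columns (x).
crop = img[100:300, 200:400]

# Re-encode the cropped region back to JPEG.
cv2.imwrite("crop.jpg", crop, [cv2.IMWRITE_JPEG_QUALITY, 80])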
- Why does the snapshot capture time with sensor.JPEG plateau at higher resolutions? The plot below was generated on an OpenMV H7 Plus, with each point averaged over 100 images. A snippet of the code used to generate the plot is included.
import sensor, utime

sensor.reset()
sensor.set_pixformat(sensor.JPEG)
sensor.set_quality(80)
sensor.set_framesize(sensor.WQXGA)
sensor.skip_frames(time=2000)

t0 = utime.ticks_ms()
img = sensor.snapshot()
t1 = utime.ticks_ms()
elapsed = utime.ticks_diff(t1, t0) / 1000  # snapshot time in seconds
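Expanded, the per-point measurement looks roughly like this (the frame-size list below is illustrative, not the exact set used for the plot; the 100-shot averaging is as described above):

import sensor, utime

# Example subset of frame sizes to sweep; the real plot used more points.
FRAMESIZES = [sensor.QVGA, sensor.VGA, sensor.SVGA, sensor.XGA,
              sensor.SXGA, sensor.UXGA, sensor.WQXGA]

sensor.reset()
sensor.set_pixformat(sensor.JPEG)
sensor.set_quality(80)

for fs in FRAMESIZES:
    sensor.set_framesize(fs)
    sensor.skip_frames(time=2000)  # let the sensor settle at the new size
    total_ms = 0
    for _ in range(100):  # each plotted point is the mean of 100 snapshots
        t0 = utime.ticks_ms()
        sensor.snapshot()
        total_ms += utime.ticks_diff(utime.ticks_ms(), t0)
    print(fs, total_ms / 100 / 1000, "s")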
As you can see, the JPEG capture time rises steeply as the image resolution increases. However, at sensor.SXGA and above, the capture time plateaus at about 0.07 s. Why does this happen? In comparison, the time it takes to capture an RGB image on the OpenMV scales linearly with image resolution. Why is there such a sharp difference in behavior?
- In the plot above, changing the compression quality doesn’t change the snapshot time. That is, after setting sensor.set_pixformat(sensor.JPEG), a snapshot takes the same amount of time regardless of whether the JPEG quality is set to 30%, 50%, or 90%. Why does it take the same amount of time? In comparison, the time it takes to compress an RGB image on the OpenMV scales linearly with image resolution. Why is there such a sharp difference in performance?
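This is roughly how I varied the quality setting (SXGA is just an example frame size; I am assuming sensor.set_quality() can be changed between captures):

import sensor, utime

sensor.reset()
sensor.set_pixformat(sensor.JPEG)
sensor.set_framesize(sensor.SXGA)
sensor.skip_frames(time=2000)

for quality in (30, 50, 90):
    sensor.set_quality(quality)
    sensor.skip_frames(time=500)  # let the new quality setting take effect
    t0 = utime.ticks_ms()
    sensor.snapshot()
    ms = utime.ticks_diff(utime.ticks_ms(), t0)
    print(quality, ms, "ms")  # the time comes out roughly the same each run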