I’ve been playing with the MJPEG example in the IDE and trying to shrink the size of the videos by modifying resolution, pixel format, and compression. I’m seeing some odd results - with several videos of the same scene for the same duration, I observe the following file sizes:
320_120_GRAY.mjpeg = 1.1M
320_120_RGB.mjpeg = 830K
320_120_RGB_COMP25.mjpeg = 1.1M
320_120_RGB_COMP50.mjpeg = 1.5M
320_120_RGB_COMP75.mjpeg = 1.9M
So as I increase the compression quality, the size increases - that makes sense. However, I would expect grayscale to be about half the size of RGB, and the compressed videos to be at least somewhat smaller than the uncompressed one.
Any ideas? Am I misunderstanding something or using the compression API incorrectly?
import sensor, time, mjpeg

m = mjpeg.Mjpeg(name)  # name and EndTime are defined earlier in my script
startTime = time.ticks()
while (time.ticks() - startTime) < EndTime:
    img = sensor.snapshot()
    m.add(img)
m.close()
Thanks in advance for the help!
The compressed video looks roughly the same aside from some distortion at the top of the frame. So perhaps I am doing something wrong there.
Hi, all of the videos are compressed. When you call add(sensor.snapshot()), it compresses the frame, so you don’t actually need to call compress() first; if you call add() with sensor.snapshot().compress(), it will just write the already-compressed frame.
I think this explains why the “compressed” videos are the same size as the videos you thought were uncompressed, and why they look the same.
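To illustrate the behavior described above, here is a toy model - the Frame and Mjpeg classes below are invented stand-ins for illustration, not the real OpenMV API:

```python
# Toy model of the add()/compress() interaction described above.
# These classes are made-up stand-ins, NOT the real OpenMV mjpeg module.

class Frame:
    def __init__(self):
        self.compressed = False

    def compress(self):
        self.compressed = True  # pretend to JPEG-encode in place
        return self

class Mjpeg:
    def __init__(self):
        self.frames_written = 0
        self.compressions_by_add = 0

    def add(self, frame):
        if not frame.compressed:         # add() compresses raw frames itself...
            frame.compress()
            self.compressions_by_add += 1
        self.frames_written += 1         # ...and just writes pre-compressed ones

m = Mjpeg()
m.add(Frame())               # raw frame: add() compresses it
m.add(Frame().compress())    # pre-compressed frame: add() only writes it
print(m.frames_written, m.compressions_by_add)  # -> 2 1
```

Either way, every frame that lands in the file is compressed - which is why calling compress() yourself doesn’t change the file size.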
As for grayscale being larger than RGB video: it should be smaller. Are you capturing for a duration or for a frame count? GS compresses faster, so if you’re capturing for a fixed duration it may save more frames. Try again with for i in range(300): and let me know.
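The duration-vs-frame-count point can be made concrete with some back-of-the-envelope arithmetic - all numbers below are invented for illustration, not measured on the camera:

```python
# Why a fixed-DURATION capture can make the grayscale file LARGER:
# grayscale compresses faster, so it records more frames in the same time.
# All numbers here are made up purely to show the effect.

duration_ms = 10_000                        # record for 10 simulated seconds
rgb_frame_ms, gray_frame_ms = 100, 60       # per-frame capture + compress time
rgb_frame_kb, gray_frame_kb = 8.0, 5.0      # compressed size of one frame

rgb_frames = duration_ms // rgb_frame_ms    # 100 frames recorded
gray_frames = duration_ms // gray_frame_ms  # 166 frames recorded

rgb_total = rgb_frames * rgb_frame_kb       # 800.0 KB
gray_total = gray_frames * gray_frame_kb    # 830.0 KB - bigger file overall!
print(rgb_total, gray_total)
```

Each grayscale frame is smaller, but the extra frames more than make up the difference - exactly the pattern in the file sizes at the top of the thread. Capturing a fixed frame count removes that variable.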
Finally, that line you see at the top of the frame is a bug: I changed the way the camera does compression in recent FW, and the image.compress() function was never updated to match.
Thanks for your reply. That makes sense about compression already happening.
With regard to GRAY vs. RGB: I was capturing for a duration, so your theory about compression time makes sense. I changed the code to implement exactly what is on this page:
I am now seeing
RGB = 764 KB, 13 seconds
GRAY = 561 KB, 10 seconds
So the grayscale is smaller, that’s good news - but the record time is also shorter, despite using the same count of 100 in the for loop.
Is there a way to guarantee constant frame rate and constant record time? This would give me a fairer comparison of video sizes.
Thanks for your help!
I guess if RGB and GS compress at different speeds, it’s impossible to get both the same video duration and the same number of frames.
I either have to choose:
- Same duration - but GS will have more frames
- Same frame count - but GS will be a shorter duration
I think I may have answered my own question!
Hi, use pyb.millis() as the time base. Doing a loop of 100 was just the easiest thing to write when we were updating the firmware in a blitzkrieg last year.
As in, track the milliseconds between frames and only start compressing the next frame once some number of milliseconds has passed. This lets you slow the FPS down and control it. For maximum speed, don’t do this.
It’s possible if you limit the FPS to the slower of the two, say 10-15 FPS: you can’t make RGB compression run any faster, so you’ll have to make GS slower.
Here’s an example of what Kwabena is saying; it’s in C, but you’ll get the idea - basically it’s just like writing a game loop:
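For anyone who’d rather see it in Python: below is a sketch of the same game-loop pacing idea (not the C example from the post above). A simulated millisecond counter stands in for pyb.millis(), and a dummy capture function stands in for snapshot + compress, so the logic runs anywhere; on the camera you’d use the real pyb.millis() and sensor.snapshot().

```python
# Fixed-FPS "game loop": only grab/compress the next frame once enough
# milliseconds have passed. A simulated clock replaces pyb.millis() here.

TARGET_FPS = 10
FRAME_MS = 1000 // TARGET_FPS   # 100 ms budget per frame

now_ms = 0                      # simulated pyb.millis() value

def capture_and_compress():
    """Stand-in for snapshot + JPEG compression; pretend it takes 30 ms."""
    global now_ms
    now_ms += 30

frame_times = []
next_frame_at = 0
while now_ms < 1000:                 # record for one simulated second
    if now_ms >= next_frame_at:      # the next frame is due
        capture_and_compress()
        frame_times.append(now_ms)
        next_frame_at += FRAME_MS    # schedule the following frame
    else:
        now_ms += 1                  # idle until the frame is due

print(len(frame_times))  # -> 10 frames in 1 s: a steady 10 FPS
```

Because the loop waits out the leftover budget after each frame, RGB and grayscale both land on the same frame rate (as long as the target FPS is no faster than the slower codec), which gives the fair size comparison asked about above.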