Hello, I'm capturing a QVGA image with the camera and I'd like to upscale it to a larger size. However, the pre-allocated image buffer is always the sensor size, so I get an error that the image won't fit the framesize. How can I work around this and pre-allocate more buffer space?
You need to use the sensor.alloc_extra_fb() function to allocate a new image buffer.
Then call the image's scale() method to upsize the image, passing the extra frame buffer as the copy= argument.
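A minimal sketch of the two steps above, assuming OpenMV's sensor/image API; the exact scale() parameter names (x_scale/y_scale) are my assumption and may differ in your firmware:

```python
# Hedged sketch, not a definitive implementation.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)        # 320x240 capture
sensor.skip_frames(time=2000)

# Step 1: allocate an extra frame buffer large enough for the
# upscaled result (2x QVGA = VGA here).
big_fb = sensor.alloc_extra_fb(640, 480, sensor.RGB565)

while True:
    img = sensor.snapshot()
    # Step 2: upscale into the extra frame buffer via copy=.
    big = img.scale(x_scale=2.0, y_scale=2.0, copy=big_fb)
```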
However, we will be deprecating this way of working soon, as it makes the code rather complicated. As you can see, it's even complicated to explain.
That said, I don't have plans to replace this feature. There shouldn't be a reason to store an upscaled image in RAM at any point. It's better for the methods that do output or processing to handle scaling themselves, e.g. adding scaling support to save() (which will happen eventually). And methods that need to handle image pyramids should be able to scale the input down themselves.
Question: why do you need an upscaled resolution in RAM?
Is there a way to do this while outputting the image to a display? Basically, I want a small image upscaled to fit the display, with a full-resolution text overlay on top of it.
What hardware are you using? If this is the Arduino Giga with a display, the write() method supports image scaling itself, so it can scale up a smaller image to be shown on the screen.
I am using a Giga R1, indeed.
Yes, then use the image scaling built into the write() method: class RGBDisplay – RGB Display Driver — MicroPython 1.20 documentation
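A hedged sketch of scaling at write() time on the Giga display, assuming OpenMV's display.RGBDisplay class; the framesize constant and the x_scale/y_scale parameter names are my assumptions. Note that the text here is drawn on the small image before scaling, so it scales up with it; drawing crisp full-resolution text would need a separate overlay pass:

```python
# Hedged sketch, not a definitive implementation.
import sensor, display

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)        # small 320x240 capture

# Assumed display setup; adjust framesize to your panel.
lcd = display.RGBDisplay(framesize=display.FWVGA, refresh=60)

while True:
    img = sensor.snapshot()
    img.draw_string(4, 4, "Hello", color=(255, 255, 255), scale=2)
    # Let write() upscale the image to the display in one step,
    # instead of storing an upscaled copy in RAM.
    lcd.write(img, x_scale=2.0, y_scale=2.0)
```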
The upcoming firmware also adds 90-degree rotation support and more. I haven't written the documentation for it yet, but if you install the latest development release via Tools, you can pass:
image.ROTATE_90 to the hint parameter to rotate the image for the screen too. This saves doing the transpose step yourself, which avoids image copies and visibly increases your FPS.
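The rotation hint mentioned above might look like this; a sketch assuming the development firmware and the same assumed display setup as before:

```python
# Hedged sketch; requires the latest development firmware.
import sensor, image, display

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

lcd = display.RGBDisplay(framesize=display.FWVGA, refresh=60)

while True:
    img = sensor.snapshot()
    # hint=image.ROTATE_90 rotates during the write itself, so no
    # separate transpose pass (and no extra image copy) is needed.
    lcd.write(img, x_scale=2.0, y_scale=2.0, hint=image.ROTATE_90)
```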