What is the best way to use two different resolutions of a snapshot? I’d like to do processing at a lower resolution and store a higher resolution if needed. The low-resolution image should be scaled, not a crop or ROI of the ‘big’ image.
The FPS on the camera has not decreased. We’ve just limited the rate at which the IDE is allowed to poll the camera’s frame buffer to 20 FPS. This reduces the load that generating frames for the PC puts on the camera’s CPU. There are ways to get around this if you need to. Otherwise, the FPS is whatever your script prints in the IDE terminal.
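For reference, here is a minimal sketch of how a script measures its own FPS with the usual sensor/clock loop (the pixel format and frame size below are just placeholders):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)  # placeholder settings
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

clock = time.clock()
while True:
    clock.tick()                     # start the frame timer for this iteration
    img = sensor.snapshot()
    print(clock.fps())               # this value, printed by the script, is the real camera FPS

The number printed here is independent of the throttled preview stream in the IDE.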
To get a scaled-down copy of the snapshot for processing, just do:
new_img = img.copy(x_scale=..., y_scale=...)
Then, run your algorithms on the smaller image. You’ll have to scale the detections back up to the larger image, though, if you want to display them on it.
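As a rough end-to-end sketch of that two-resolution flow, assuming a firmware version where copy() accepts x_scale/y_scale, and using find_blobs() purely as a stand-in for whatever processing you actually run:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)            # the 'big' image
sensor.skip_frames(time=2000)

SCALE = 0.25                                # processing resolution = 1/4 of the snapshot

while True:
    img = sensor.snapshot()                            # full-resolution frame
    small = img.copy(x_scale=SCALE, y_scale=SCALE)     # scaled copy, not a crop/ROI

    # Run the algorithm on the small copy (blob finding is only an example).
    blobs = small.find_blobs([(0, 100, -128, 127, -128, 127)])

    if blobs:
        for b in blobs:
            # Scale detections back up to full-resolution coordinates.
            img.draw_rectangle(int(b.x() / SCALE), int(b.y() / SCALE),
                               int(b.w() / SCALE), int(b.h() / SCALE))
        img.save("/detection.jpg")                     # store the high-resolution frame if needed

On memory-constrained boards the scaled copy may not fit in RAM at larger frame sizes, so you may need a smaller scale or frame size.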
Cool, thanks for your help!
And no, I don’t particularly care about the FPS value that is displayed; I just took it at face value and was wondering about it.
P.S. The 20 FPS is indeed what is printed in the IDE terminal. The stream FPS shows something like 7 FPS, which is a value I usually don’t care about. Maybe someone could copy the example script and let it run?
Thanks for your efforts!
Intriguing result, I’d say. I don’t want to abuse my own initial thread, but any ideas what might be reducing my FPS? In contrast to video streaming, my older hardware should have no effect on the raw FPS, right?
I noticed your hardware: openmv4p. Is it faster in practice?