So, you mean that instead of transposing the image we can send it to the display driver in the intended orientation? I can’t find the LCD’s write method in the GitHub repo.
Hi, I looked into doing this today and it’s quite a lot more challenging than I thought. It will take a lot longer to get this working.
However, I did sit down and work out the math on this, in case it helps: by integrating transpose into the write() backend you’d save two image buffer copies compared to what you are doing now.
Anyway, I have to put this feature on hold until other PRs are finished and merged.
In the meantime… I would just transpose the JPEG images themselves before the system receives them. This should get you to the max FPS. Please note that whatever code I write will not reach the same performance as simply having pre-transposed images.
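For what it’s worth, here is a minimal PC-side sketch of that pre-rotation step, assuming Pillow is installed (you could equally re-encode the whole AVI with ffmpeg’s transpose filter). The function name and quality setting are my own, not anything from the OpenMV code:

```python
import io
from PIL import Image  # assumes Pillow is installed on the PC

# Works on both old and new Pillow versions (Image.Transpose enum vs.
# the legacy module-level constant).
ROTATE_270 = getattr(Image, "Transpose", Image).ROTATE_270

def transpose_jpeg(jpeg_bytes, quality=90):
    """Return the JPEG rotated 90 degrees clockwise, re-encoded."""
    img = Image.open(io.BytesIO(jpeg_bytes))
    out = io.BytesIO()
    # PIL rotations are counterclockwise, so ROTATE_270 == 90 deg CW.
    img.transpose(ROTATE_270).save(out, "JPEG", quality=quality)
    return out.getvalue()
```

You would run this over every frame once, offline, so the board never pays for the transpose at playback time.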
Thanks for looking into this. I understand this would not be easy. In fact, I do not want the same FPS without the transpose. I have written an AVI parser to get JPEG frames. The AVI file contains 25 fps mjpeg video. I just want to play the video at natural speed. If I disable the Framebuffer in the OpenMV IDE, I can see the FPS reaches at 27 fps but the video is playing slower. Can you help in checking what is the bottleneck? Please find the code and the AVI file here: GitHub - metanav/avi_giga_display_openmv
The current code doesn’t hit higher than about 15 FPS here, so I’m not sure how you are seeing 27 in the IDE; I get about 15 FPS on the OpenMV Pure Thermal. If you selectively drop frames for now, you can make the playback speed match what the system is capable of until I have transpose working in write().
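A minimal sketch of that frame-dropping idea, assuming a 25 fps source; frames_due(), play(), and the show callback are my own names, not anything in the OpenMV API:

```python
FRAME_MS = 1000 // 25  # 40 ms per frame at 25 fps

def frames_due(start_ms, now_ms):
    """Index of the frame that should be on screen at time now_ms."""
    return (now_ms - start_ms) // FRAME_MS

def play(frames, show, now_ms):
    """frames: iterable of JPEG blobs; show: decodes and blits one frame;
    now_ms: a millisecond clock, e.g. time.ticks_ms on MicroPython."""
    start = now_ms()
    for i, jpg in enumerate(frames):
        if i < frames_due(start, now_ms()):
            continue  # decoding is behind the clock: drop this frame
        while frames_due(start, now_ms()) < i:
            pass  # ahead of the clock: wait until this frame is due
        show(jpg)
```

(On MicroPython you’d want time.ticks_diff() rather than raw subtraction to survive tick wraparound; this is just the shape of the loop.)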
Maybe the Pure Thermal OpenMV and the Giga R1 + Display Shield have some differences (flash vs. SD, or DSI vs. SPI?).
Serial Terminal output:
FPS when IDE frame buffer enabled:
I am wondering why the video lags when the FPS is around 27. I will try dropping frames to see if it helps. As a side note: I also wanted to play the audio in the AVI. Is there any OpenMV library support for audio on the STM32H7 that I could port to the Giga R1?
MicroPython has support for DAC control in the pyb module. You can use the timed-write feature to feed the DAC a buffer to play. You’ll want to use a timer callback (in interrupt mode) to feed the DAC a new buffer that you prepare outside the interrupt.
You want to call write_timed when you get the callback. buf needs to already exist and have been prepared, though; it should be marked as a global var in the callback.
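Roughly like this, assuming the pyb DAC on an STM32 board; the chunk size, timer number, and sample rate below are placeholder choices, and refilling buf from your AVI parser is left as a stub:

```python
try:
    import pyb  # MicroPython on STM32 only
except ImportError:
    pyb = None  # lets the sketch import (but not run) on a PC

SAMPLE_RATE = 8000   # placeholder: 8 kHz, 8-bit mono audio
CHUNK = 1000         # samples per buffer -> one buffer every 125 ms

buf = bytearray(CHUNK)  # global: refilled in the main loop, never in the IRQ

def feed_dac(timer):
    # Interrupt context: no allocation here, just hand the DAC the
    # already-prepared global buffer.
    dac.write_timed(buf, SAMPLE_RATE, mode=pyb.DAC.NORMAL)

if pyb:
    dac = pyb.DAC(1, bits=8)
    tim = pyb.Timer(4, freq=SAMPLE_RATE // CHUNK)  # fires once per chunk
    tim.callback(feed_dac)
    # main loop: copy the next audio chunk from the AVI into buf here,
    # e.g. buf[:] = next_audio_chunk()  (hypothetical parser call)
```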
Alternatively, you can use the timer to pace the video frames. In the callback you’d use micropython.schedule() to run a function outside the interrupt, and that function would update the screen.
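Something like this sketch, assuming the micropython and pyb modules are available; show_next_frame() is a hypothetical stand-in for decoding and blitting the next frame from the AVI parser:

```python
try:
    import micropython
    import pyb
except ImportError:
    micropython = pyb = None  # MicroPython-only modules

def show_next_frame(_arg):
    # Hypothetical: pull the next JPEG from the AVI parser, decode it,
    # and draw it on the display. Runs OUTSIDE the interrupt.
    pass

def tick(timer):
    # Interrupt context: just queue the heavy work for later.
    micropython.schedule(show_next_frame, 0)

if pyb:
    tim = pyb.Timer(2, freq=25)  # match the 25 fps source
    tim.callback(tick)
```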
I know you are very busy but just checking if the jpeg decoder PR will be merged sometime this week.
An off-topic question: I am going to preorder an OpenMV RT1062 and wanted to know whether the FLIR Lepton adapter module will work out of the box. Also, when will it be available for shipping?