Image not rendering correctly on Arduino Giga Display

So, you mean that instead of transposing the images we can send them to the display driver in the intended orientation? I can’t find the LCD’s write method in the GitHub repo.

Edit: I think this is the file: https://github.com/openmv/openmv/blob/master/src/omv/ports/stm32/modules/py_display.c

I’ll prioritize fixing this for you today. Getting this feature working ASAP is useful for Arduino in general.

Hi, I looked into doing this today and it’s quite a lot more challenging than I thought. It will take a lot longer to get this working.

However, I did sit down and work out the math on this to see if it will help. By integrating transpose into the write() backend, you’ll save two image buffer copies compared to what you are doing now.

Anyway, I have to put this feature on hold until other PRs are finished and merged.

In the meantime… I would just transpose the JPEG images themselves before the system receives them. This should get you up to the max FPS. Note that whatever code I write will not match the performance of simply having pre-transposed images.

Hi,

Thanks for looking into this. I understand this would not be easy. In fact, I do not need the same FPS without the transpose. I have written an AVI parser to extract JPEG frames. The AVI file contains 25 fps MJPEG video, and I just want to play it at natural speed. If I disable the frame buffer in the OpenMV IDE, the FPS reaches 27, but the video still plays slower than real time. Can you help me find the bottleneck? Please find the code and the AVI file here: GitHub - metanav/avi_giga_display_openmv

The current code doesn’t hit higher than about 15 FPS. Not sure how you are seeing 27 in the IDE; I get about 15 FPS on the OpenMV Pure Thermal. If you selectively drop frames for now, you can make the playback speed match what the system is capable of until I have transpose working in write().
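One way to drop frames while keeping the playback clock honest is simple index decimation: advance through the 25 fps source at the rate the display can actually sustain. A minimal sketch (the function name and integer-accumulator approach are illustrative, not OpenMV API):

```python
# Sketch: choose which source frames to display when the source frame rate
# (src_fps) exceeds what the decode/display loop can sustain (max_fps).
# Uses integer accumulation to avoid float drift. Intended for max_fps < src_fps.
def frames_to_show(src_fps, max_fps, n_frames):
    shown = []
    num = 0
    i = 0
    while i < n_frames:
        shown.append(i)            # display this source frame
        num += src_fps             # advance the source clock by one display tick
        i = num // max_fps         # next source frame whose time has arrived
    return shown

# 25 fps source, ~15 FPS display: show 15 of every 25 frames, evenly spread.
print(frames_to_show(25, 15, 25))
```

In the playback loop you would seek past the skipped JPEG chunks in the AVI rather than decoding them, so one second of video still takes one second of wall time.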

Maybe the OpenMV Pure Thermal and the Giga R1 + Display Shield have some differences (flash vs. SD, or DSI vs. SPI?).
Serial Terminal output:
FPS when IDE frame buffer enabled:

22.0951
21.657
21.9088
22.1551
21.7232
21.9673
21.564
21.7958
22.0327

FPS when IDE frame buffer disabled:

27.1021
27.4496
26.8727
27.2086
27.5445
27.0006
27.326
27.6423
27.1122
27.4275 

I am wondering why the video lags when the FPS is around 27. I will try dropping frames to see if it helps. As a side note: I also wanted to play the audio in the AVI. Is there any OpenMV library support for audio on the STM32H7 that I could port to the Giga R1?

MicroPython has support for DAC control in the pyb module. You can use the timed-write feature to feed the DAC a buffer to play. You’ll want to use a timer callback (in interrupt mode) to feed the DAC a new buffer that you prepare outside of the interrupt.

See this for how to feed the DAC:

https://docs.micropython.org/en/latest/pyboard/tutorial/amp_skin.html

Their example only does audio. So, to make audio output a sub-task you have to use a timer to do an interrupt callback.

Note, given how long it takes to do image stuff you’ll want to have very large audio buffers as you need to load new audio chunks between frames.
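To put a rough number on "very large", here is the arithmetic for buffer sizing (the 25 fps frame time and 4x headroom factor are illustrative assumptions, not measured values):

```python
# Rough audio buffer sizing for playback between video frames.
SAMPLE_RATE = 44100                 # Hz, matching write_timed(buf, 44100)
FRAME_TIME_MS = 40                  # one video frame at 25 fps

# Samples consumed while one frame is decoded and drawn.
samples_per_frame = SAMPLE_RATE * FRAME_TIME_MS // 1000

# Leave headroom for frames that take longer than 40 ms to decode:
# buffer several frames' worth of audio per chunk.
HEADROOM_FRAMES = 4
buf_len = samples_per_frame * HEADROOM_FRAMES

print(samples_per_frame, buf_len)   # samples per frame, samples per chunk
```

With 12-bit samples stored as 2 bytes each, that chunk is about 14 KB of RAM, which is modest next to the JPEG frame buffers.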

Thanks, @kwagyeman! The audio works using the example code!

Their example only does audio. So, to make audio output a sub-task you have to use a timer to do an interrupt callback.

Not sure how to do this.

Also, why is only one DAC available? The Giga R1 has 2 DACs.
This works:

dac = DAC("DAC2", bits=12)
dac.write_timed(buf, 44100)

This does not work:

dac = DAC("DAC1", bits=12)
dac.write_timed(buf, 44100)

ValueError: Pin(A4) doesn’t have DAC capabilities

Here’s how to do a Timer: class Timer – control hardware timers — MicroPython 1.20 documentation

You want to call write_timed when you get a callback. buf needs to already exist and have been prepared, though. buf should be declared as a global variable in the callback.

Alternatively, you can use the timer to show the video frames. In the callback you’d use micropython.schedule() to run a function outside of interrupt context, which would then update the screen.
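Putting the pieces above together, a minimal double-buffered sketch might look like this. It assumes pyb.DAC, pyb.Timer, and micropython.schedule as documented; fill_audio_buffer is a hypothetical helper that reads the next audio chunk from the AVI, and the buffer size and timer number are illustrative:

```python
# Sketch only (runs on-device, not on a PC): double-buffered DAC audio
# driven by a hardware timer. fill_audio_buffer() is a hypothetical helper.
import micropython
import pyb

micropython.alloc_emergency_exception_buf(100)

SAMPLE_RATE = 44100
CHUNK = 4096                       # samples per buffer; tune to your frame time

dac = pyb.DAC("DAC2", bits=12)
buf_a = bytearray(CHUNK * 2)       # 12-bit samples -> 2 bytes each
buf_b = bytearray(CHUNK * 2)
playing = buf_a

def start_next_chunk(_):
    # Scheduled (not interrupt) context: safe to allocate and do I/O here.
    global playing
    playing = buf_b if playing is buf_a else buf_a
    dac.write_timed(playing, SAMPLE_RATE, mode=pyb.DAC.NORMAL)
    fill_audio_buffer(buf_b if playing is buf_a else buf_a)  # refill idle buffer

def tick(timer):
    # Interrupt context: no allocation, just hand off to scheduled code.
    micropython.schedule(start_next_chunk, 0)

# Fire once per chunk: CHUNK samples at SAMPLE_RATE Hz.
tim = pyb.Timer(4, freq=SAMPLE_RATE // CHUNK)
tim.callback(tick)
```

The same tick/schedule pattern works for pacing video frames: have the scheduled function draw the next frame instead of (or as well as) swapping audio buffers.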

As for the DAC… it looks like our DAC module is restricting use of the other DAC pin: https://github.com/openmv/micropython/blob/7f615a4a0f11e640dac20b8bc748385b4f5f67f3/ports/stm32/dac.c#L310 - @iabdalkader

Hi, I managed to integrate transpose into draw_image. The results are fantastic. I see a 2x speedup.

So, once this PR is merged and I fix up the JPEG decoder PR you should have everything you need.

Hi @kwagyeman

I know you are very busy, but just checking whether the JPEG decoder PR will be merged sometime this week.

An off-topic question: I am going to preorder an OpenMV RT1062 and wanted to know whether the FLIR Lepton adapter module will work out of the box. Also, when will it be available for shipping?

Maybe, I’m fixing up the PR for it today.

And yes, the RT1060 works with the FLIR Lepton out of the box. It’s going to start shipping hopefully next week.

Posting performance numbers just for reference:

  • HD (1280x720) RGB565: old code 102 ms, new code 21 ms (4.85x speedup)
  • HD (1280x720) Grayscale: old code 44 ms, new code 14 ms (3.14x speedup)
  • SVGA (800x600) RGB565: old code 53 ms, new code 10 ms (5.3x speedup)
  • SVGA (800x600) Grayscale: old code 23 ms, new code 4 ms (5.75x speedup)
  • VGA (640x480) RGB565: old code 32 ms, new code 6 ms (5.33x speedup)
  • VGA (640x480) Grayscale: old code 15 ms, new code 2 ms (7.5x speedup)

Software decoder performance:

  • HD (1280x720): RGB565 209 ms, Grayscale 66 ms
  • SVGA (800x600): RGB565 109 ms, Grayscale 36 ms
  • VGA (640x480): RGB565 70 ms, Grayscale 23 ms

@yokonav - So, you will be able to play video at 60 FPS.

Thanks, @kwagyeman for the update.

The numbers look amazing! Can’t wait to try it out soon!

Hi @kwagyeman,

Any chance to merge it in this week?

I’ll have the remaining fixes done for the PR before the week is done. It might be a little longer before it’s merged.
