Hi,
I started a hobbyist project whose goal is to build a binocular camera and investigate algorithms for processing stereo images in MicroPython. My camera will be based on a pair of MT9V034 monochrome sensors and an Intel MAX 10 FPGA. The host board is a Portenta H7, as I already have one. My first prototype consists of the Portenta itself, a carrier board, an Arrow DECA MAX 10 evaluation kit, and some amount of wires.
As a first step I want to get an internally generated image out of the FPGA: I generated simple monochrome bars at a 320x240 frame size and output them to DCMI. I don’t want to implement my own driver at this point, so I’m trying to emulate the HM01B0 image sensor, which is already supported by OpenMV for the Portenta H7 and Vision Shield.
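As a plain-Python sketch (not the FPGA code itself), the test frame I’m generating looks roughly like this reference model; the bar count is an assumption on my side:

```python
def bar_pattern(width=320, height=240, bars=8):
    """Model of a vertical grayscale-bar test frame (8-bit mono).

    The real pattern is generated inside the FPGA; this is only a
    software reference model, and the bar count is an assumption.
    """
    step = 255 // (bars - 1)                 # gray level increment per bar
    row = [(x * bars // width) * step for x in range(width)]
    return [row[:] for _ in range(height)]   # every row is identical
```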
I connected the DCMI interface (DATA, VS, HS) and I2C to the FPGA, and I hit my first problem: for some reason I can’t see any I2C transactions when the IDE connects, and of course the IDE didn’t recognize the image sensor. Could you please give me a starting point for where in the sources I can see the connection procedure? All I could find is sensor_init(), and if I understand correctly, the I2C scan should be performed there in any case, so I should see it inside the FPGA, but I cannot see it… I think I missed something.
I’m sure about the I2C connection; it uses the same pins as the Vision Shield. Also, if I push Tools->Reset OpenMV Cam, I see something on these pins, so the physical connection is alive. After connecting I receive the following in the terminal:
Traceback (most recent call last):
File "/main.py", line 11, in <module>
Exception: IDE interrupt
MicroPython: v1.13-r63 OpenMV: v3.9.2 HAL: v1.9.0 BOARD: PORTENTA-STM32H747
Type "help()" for more information.
>>>
Not sure if it is important, but I tried to run MicroPython after that (with the sensor-related lines commented out) and it runs successfully…
Any advice would be helpful.
Thanks!
Hi, do you have a protocol analyzer? You should definitely see a bunch of I2C toggles on startup from the OpenMV Cam. Note that the camera scans the I2C bus on power-on, not when your script is run.
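If you want to confirm from a script that the emulated sensor responds (independently of the boot-time scan), a minimal sketch like this should work; the 0x24 address for the HM01B0 is an assumption on my part, so check it against your emulated register map:

```python
def find_sensor(i2c, candidates=(0x24,)):
    """Return the first responding candidate address, or None.

    `i2c` is any object with a scan() method returning 7-bit addresses
    (e.g. machine.I2C on MicroPython). 0x24 is assumed to be the
    HM01B0 address -- verify against the datasheet.
    """
    found = i2c.scan()
    for addr in candidates:
        if addr in found:
            return addr
    return None

# On the board (bus number is an assumption -- use whichever I2C
# peripheral the Vision Shield pins map to):
#   from machine import I2C
#   print(find_sensor(I2C(3)))
```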
Yes, I use something like an I2C protocol analyzer inside the FPGA. I found the I2C toggles on startup! It was my fault; before, I was looking for them at IDE connection time. Now the sensor is detected!
Thanks, now I will continue working on sensor protocol support.
Regarding stereo image processing: this is interesting to me. If you make a dev board with an FPGA that combines the output of the two cameras into a single 2x-width image, I’ll write SIMD-accelerated block matching code for you.
So, focus on making the hardware that can generate that image for the Arduino Portenta, and I’ll jump in with the stereo algorithm.
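For reference, the scalar (non-SIMD) core of block matching is just a sum-of-absolute-differences search along the scanline of a rectified pair; a minimal sketch, with block size and search range as illustrative assumptions:

```python
def sad(left, right, y, xl, xr, bs):
    # Sum of absolute differences between a bs x bs block in the left
    # image at (y, xl) and one in the right image at (y, xr).
    return sum(abs(left[y + i][xl + j] - right[y + i][xr + j])
               for i in range(bs) for j in range(bs))

def disparity(left, right, y, x, bs=3, max_d=16):
    # For a rectified pair, search horizontally in the right image for
    # the block that best matches the left block at (y, x); the returned
    # shift d is the disparity (depth ~ baseline * focal_length / d).
    best_d, best_cost = 0, None
    for d in range(min(max_d, x) + 1):
        cost = sad(left, right, y, x, x - d, bs)
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

The SIMD-accelerated version would vectorize the inner SAD loop over the block row, but the search structure stays the same.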
Thank you, that’s great! Let’s discuss hardware a bit later, as soon as I get an image on my current development boards. I need to estimate the FPGA resources required and choose a chip capacity before starting PCB development.
It is a background project, so progress will not be fast.
Some updates: I got a 2x-width combined image from the cameras, but I have two problems.
The first is camera synchronization. I am currently trying to use one camera as master and the second camera as slave, but for some reason the slave camera is not in sync with the master, so to combine the images I need to use a triple frame buffer for frame rate conversion. In a real design this is unacceptable; the frames need to be perfectly synchronized to get good results. I need to investigate this in more detail, as I possibly missed something. In any case, I can run both sensors in slave mode and control integration and readout from the FPGA instead of using one sensor as master… It will take some more effort, but may be much better for tuning image quality.
Here are the videos: https://drive.google.com/file/d/1d5Yl4n6DhAMvFX649ZaWaYzHYwkCKWYq/view?usp=sharing https://drive.google.com/file/d/1t9xe6mNGSFu72N3cMC200IfVkulIi5hu/view?usp=sharing
The second problem is noise. If I enable the test pattern on both sensors, the image is clean, but if you look at the real video, it has some random brightness distortion. Maybe the power is not clean, or maybe the reason is the ton of wires…
And of course the resolution should be improved. It is not a problem for the FPGA to generate a 1280x480 frame; I need to make my own driver to be able to set a double-width resolution in OpenMV.
Hi, you just have to edit sensor.c and a few other files to support a double wide resolution config. It’s very easy. I can do that for you in a commit if you’d like.
What resolutions do you want to support? I can make a whole bunch of 2x width resolutions.
I think you’re going to need to use the slave mode feature that the MT9V034 supports, where the FPGA controls the exposure/readout of pixels manually. The FPGA should then be able to send the combined image without any buffering.
Regarding the random brightness: more caps on the voltage rails, then add chokes feeding those voltage rails, then more caps. It’s always more caps. Camera sensors are extremely sensitive to voltage rail noise. Note that the chokes are important. You can also do an RC circuit… but that will have a voltage drop.
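A quick sanity check on the RC option’s trade-off; the component values below are purely illustrative, not a recommendation:

```python
import math

def rc_supply_filter(r_ohm, c_farad, i_load_a):
    """Cutoff frequency and DC drop of a series-R, shunt-C supply filter.

    The DC drop (Ohm's law across the series resistor) is why a choke,
    with near-zero DC resistance, is usually preferred on camera rails.
    """
    f_cutoff_hz = 1.0 / (2 * math.pi * r_ohm * c_farad)
    v_drop = i_load_a * r_ohm
    return f_cutoff_hz, v_drop

# e.g. 10 ohm + 10 uF filtering a 100 mA rail: ~1.6 kHz cutoff,
# but a full 1.0 V DC drop -- likely too much for a sensor rail.
```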
I think all resolutions from 640x480 and lower may be useful. Let’s make it three resolutions: VGA, QVGA, and QQVGA.
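For those modes the side-by-side frame would be 1280x480, 640x240, and 320x120; splitting a combined frame back into the two eyes could look like this sketch (the size table is my assumption of the 2x-width mapping):

```python
# Assumed 2x-width (side-by-side) framesizes: (combined_w, combined_h).
STEREO_FRAMESIZES = {
    "VGA":   (1280, 480),   # 2 x 640x480
    "QVGA":  (640, 240),    # 2 x 320x240
    "QQVGA": (320, 120),    # 2 x 160x120
}

def stereo_rois(combined_w, combined_h):
    # (x, y, w, h) ROIs for the left and right halves of a combined frame.
    half = combined_w // 2
    return (0, 0, half, combined_h), (half, 0, half, combined_h)
```

In an OpenMV script the halves could then, I believe, be pulled out with `img.copy(roi=...)` on the snapshot.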
About slave mode: yes, I think I should run both sensors in slave mode and perform the synchronization from the FPGA side.
The PCB I’m using now is from another project, not mine. I’m just using it for a quick start, as a proof of concept. So I don’t think I can get a good-quality image on this PCB with wires for the connections. I’m now thinking of creating my own PCB with a pair of sensors and an FPGA; it would be like an Arduino Vision Shield. Do you think such a shield would be interesting to anyone besides me?
So, again, our time is limited… but if you develop a Portenta carrier shield with the dual cameras on it and an FPGA, like a Lattice FPGA, then we’ll help produce it. A lot of people would be interested in buying this device.
I will think about it. I usually use Intel FPGAs, so it will take more time to implement on another FPGA vendor, but if I decide to switch, I’m almost sure I’ll use the new Gowin FPGAs; they are extremely cheap compared to Intel and Xilinx, and even Lattice.