Please, I need help with using two OpenMV M7 cameras for my project, "robot navigation using stereo vision".


What’s the question?

Is the OpenMV Cam compatible with stereo vision? If stereo vision is possible, how can we synchronize two OpenMV Cams?

I am facing the same problem. I am unable to reliably sync the output of the cameras. The documentation says to use the VSYNC signal, but sending the data over serial then causes dropped frames.

Yes, if you send data out the serial port over USB then the camera blocks on that data transmit. If you are trying to sync cameras you have to keep that in mind. You should use a master device that generates a trigger-capture signal for each camera.

I.e., if you have two cameras with the global shutter module you can set the camera module to triggered mode. Once you do this you can capture pictures in a triggered manner. Then, have each camera wait on an I/O pin going high before taking a picture. That per-camera I/O pin should be connected to a master device, and the master should only drive its output high to trigger the capture when it knows all the cameras are ready. Cameras report they are ready by driving an I/O pin high before they start waiting on the master trigger signal.
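A minimal camera-side sketch of that handshake, assuming the MT9V034 global shutter module and OpenMV's triggered-mode ioctl. The pin assignments (P0 for the ready line, P1 for the master trigger) and the ready/trigger protocol are assumptions for illustration, not a fixed standard:

```python
# Camera-side sketch (runs on each OpenMV Cam).
import sensor
from pyb import Pin

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# Put the global shutter sensor into triggered mode, so snapshot()
# starts the exposure only when it is called.
sensor.ioctl(sensor.IOCTL_SET_TRIGGERED_MODE, True)

ready = Pin("P0", Pin.OUT_PP)               # "I'm armed" signal to the master
trigger = Pin("P1", Pin.IN, Pin.PULL_DOWN)  # capture signal from the master

while True:
    ready.high()                 # report ready, then wait for the trigger
    while not trigger.value():
        pass
    ready.low()
    img = sensor.snapshot()      # exposure starts here on both cameras
    # ... process img, or buffer it for later transmission ...
```

The point of the ready line is that the blocking USB/serial transmit mentioned above no longer matters: a camera that is still busy transmitting simply hasn't raised its ready pin yet, so the master won't fire the next trigger.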

Hi there,

is there a way to implement stereoscopic vision the way the MT9V034 datasheet describes (see below)?
If I understand it correctly, one would need an external board with 2x MT9V034s internally wired (which you need anyway), with the serial and control pins broken out.
That should indeed consume fewer GPIOs (about 10 less), since no parallel bus is needed anymore.

The question is how much CPU time is consumed by deserialization on the H7.
You also get a higher load/slowdown by default due to the need to process two frames at the same time.

LVDS Serial (Stand-Alone/Stereo) Output

The LVDS interface allows for the streaming of sensor
data serially to a standard off-the-shelf deserializer up to
eight meters away from the sensor. The pixels (and controls)
are packetized: 12-bit packets for stand-alone mode and 18-bit
packets for stereoscopy mode.
In stereoscopic mode, the two sensors run in lock-step,
implying all state machines are in the same state at any given
time. This is ensured by the sensor-pair getting their sys-clks
and sys-resets in the same instance. Configuration writes
through the two-wire serial interface are done in such a way
that both sensors can get their configuration updates at once.



We use the 8-bit data bus for the MT9V034. There's no way to connect two cameras to the OpenMV Cam without an external FPGA in the data path that combines the two images into one.

Designing that type of hardware would be complex. Our software would give you a head start but otherwise it’s a lot of work.

Yes, I won't even try that; it would just be neat to use the internal feature of the sensor(s) like that.
I personally don't expect to see the OpenMV going stereoscopic-ish. That's not what 'it's supposed to do'…
Although, having some basic 3D translation functions would be nice :wink:

There's some other (affordable) stuff around, like the TeensyCam:

It uses 2x MT9V034 in 10-bit parallel mode with a Cortex-M4 @ 180 MHz as a dedicated controller.
The image sensors go for about 9€ each, a couple of SMDs for 4€, the Teensy for about 25€, plus PCB for 3€ = ~50€ total.
I haven't checked the code to see whether the on-chip feature is used, but in principle the solution looks OK.
But you would still have 10-bit raw data to process. In the case of the TeensyCam with USB out, that's supposed to go to a Raspberry Pi or better, where you run OpenCV, TF, …

A double ArduCam could fit your needs as well: it supports a nice bunch of cameras and platforms, and no PCB work is needed.

And last but not least, you can just use two OpenMV Cams with the sync pin as external cams :wink:
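For completeness, the master side of that sync scheme could run on any MicroPython board. A rough sketch, where the Pyboard pin names X1-X3 and the 100 µs pulse width are assumptions (any three GPIOs and a pulse the cameras can reliably see will do):

```python
# Master-side sketch: wait until every camera reports ready, then pulse
# a shared trigger line so all cameras expose together.
from pyb import Pin, udelay

readies = [Pin(name, Pin.IN, Pin.PULL_DOWN) for name in ("X1", "X2")]
trigger = Pin("X3", Pin.OUT_PP)
trigger.low()

while True:
    while not all(p.value() for p in readies):  # wait for all cameras
        pass
    trigger.high()      # rising edge starts the synchronized capture
    udelay(100)         # hold the pulse long enough to be sampled (assumed)
    trigger.low()
    # ... collect the resulting frames over serial/USB at leisure ...
```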


The OpenMV cameras look very interesting for what I want to do. I have not purchased one yet but I plan to if it will do what I need.

My goal is to trigger burst images (2-4 images) on each camera (2-4 cameras) through a digital pulse from a master. I assume the burst images can be stored on each STM32. After the burst images are collected I will download the images to the master processor. I would like less than 1 ms timing difference between the various triggered camera images. I realize the Global Shutter camera is better for this, but it has lower resolution, so I am also interested in the higher-resolution cameras.

Do you have example programs which I could look at to get started on this task?
Is there an issue with what I want to do?


You can only do this with the global shutter camera right now. With the OpenMV Cam Plus we have an OV5640 camera which has a trigger line. The driver isn't done yet, but it should be able to take triggered images with less than 1 ms difference.
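A rough sketch of what the burst capture could look like on the global shutter camera today. Assumptions: triggered mode is enabled, P1 is the trigger input, and the frames are small enough that the copies fit on the MicroPython heap (`img.copy()` allocates RAM per frame, so a few QQVGA grayscale frames is about the practical limit):

```python
# Burst-capture sketch: on one external trigger, grab N frames back to
# back and keep them in RAM for later download to the master.
import sensor
from pyb import Pin

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)   # small frames so the copies fit in RAM
sensor.skip_frames(time=2000)
sensor.ioctl(sensor.IOCTL_SET_TRIGGERED_MODE, True)

trigger = Pin("P1", Pin.IN, Pin.PULL_DOWN)  # assumed trigger input pin

while not trigger.value():
    pass                              # wait for the master's pulse

burst = [sensor.snapshot().copy() for _ in range(4)]  # back-to-back frames

# Later, outside the time-critical window, push the frames to the master;
# the blocking transmit no longer disturbs the capture timing.
for img in burst:
    print(img.compressed(quality=50))  # e.g. JPEG-compress before sending
```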