Forgive my ignorance as I'm new to all of this, but I was wondering whether it is possible to use two (or more) cameras together to track an object (or objects) in 3D space. In theory, if you know the distance and angles between the cameras, as well as the object's position relative to each camera, you can use some trigonometry to locate the object in 3D space. I don't currently own an OpenMV Cam, so I can't play around and test this theory, but it seems simple enough.
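For the planar case, the trigonometry is just intersecting two bearing rays. Here's a minimal sketch (plain Python, my own function name and coordinate conventions, not OpenMV code); extending it to full 3D means also using the vertical angle each camera measures:

```python
import math

def triangulate(baseline, alpha, beta):
    """Intersect two bearing rays in the plane.

    Camera A sits at (0, 0), camera B at (baseline, 0).
    alpha and beta are the angles (radians) each camera measures
    between the baseline and its line of sight to the target.
    Returns the target's (x, y) in the same units as the baseline.
    """
    # Ray A: y = x * tan(alpha); ray B: y = (baseline - x) * tan(beta)
    ta, tb = math.tan(alpha), math.tan(beta)
    x = baseline * tb / (ta + tb)
    y = x * ta
    return x, y

# Two cameras 1 m apart, each seeing the target at 45 degrees:
# the target is 0.5 m along the baseline and 0.5 m out from it.
print(triangulate(1.0, math.radians(45), math.radians(45)))
```

In practice you'd convert each camera's pixel coordinate to an angle using its field of view before calling something like this.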
Yes, but you need to develop a serial protocol so the two cameras can talk to each other; each camera has its own processor. As for frame sync, just connect the FSIN of one camera to a pin on the other that's set to output VSYNC.
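The "serial protocol" can be very simple, e.g. one fixed-size packed message per detection. Here's a sketch of one possible framing (plain Python `struct`; the message layout and checksum are my own invention, not an OpenMV API). On the camera you'd send these bytes out over the UART:

```python
import struct

# Hypothetical message: sync byte, blob centroid (cx, cy) as
# little-endian uint16 pixel coordinates, and a 1-byte checksum.
SYNC = 0xAA
FMT = "<BHHB"  # sync, cx, cy, checksum -> 6 bytes total

def pack_detection(cx, cy):
    chk = (SYNC + cx + cy) & 0xFF
    return struct.pack(FMT, SYNC, cx, cy, chk)

def unpack_detection(msg):
    sync, cx, cy, chk = struct.unpack(FMT, msg)
    if sync != SYNC or chk != (SYNC + cx + cy) & 0xFF:
        raise ValueError("corrupt message")
    return cx, cy

msg = pack_detection(160, 120)
print(unpack_detection(msg))
```

The sync byte and checksum let the receiver resynchronize and reject garbage if bytes get dropped on the wire, which is worth having even over a short cable.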
Fantastic! Now if only I knew how to do that lol. Thanks for the quick reply. I guess I better get to learning. Any ideas where to start?
Yeah, so, do you know how to do general serial-port communication? If so, then there are just subtle differences in the API on the camera, but not much else. Let's start there. That said, I honestly can't really write the code for you. I don't have that kind of time anymore.
I have some experience with I2C from working with Arduino and some IMU sensors, but that's it. I intend to use your cameras for a research project at my university, so it's necessary that I write and fully comprehend the code in the project; asking you to write it for me would be too much. Could I connect multiple cameras to an Arduino and retrieve all the cameras' data that way, or would the Arduino be too much of a bottleneck? I assume SPI would be better than I2C here. I can't wait to get a cam and start poking around.
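For what it's worth, a back-of-envelope check suggests the Arduino only becomes a bottleneck if you try to stream image data through it. If each camera does the detection on board and sends only results, the data rate is tiny (a sketch with assumed message size and frame rate):

```python
# Assumed numbers: ~10 bytes per detection message, 30 FPS, 2 cameras.
BYTES_PER_DETECTION = 10
FPS = 30
CAMERAS = 2

bytes_per_sec = BYTES_PER_DETECTION * FPS * CAMERAS  # 600 B/s total
# A plain UART at 115200 baud moves ~11520 B/s (10 bits per byte
# on the wire), so even serial would have plenty of headroom.
uart_capacity = 115200 // 10
print(bytes_per_sec, uart_capacity, bytes_per_sec < uart_capacity)
```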
What if we use a servo to rotate a single camera instead of using two cameras?
Of course, the camera lens would need to be a substantial distance from the servo's pivot/fulcrum so that the two positions give the effect of having two cameras.
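One caveat with this approach: it only works on targets that hold still between the two shots, and the effective stereo baseline is the chord between the two lens positions, not the arm length itself. A quick sketch of that geometry (parameter names are my own):

```python
import math

def effective_baseline(arm_length, rotation_deg):
    """Chord between the two lens positions when a camera mounted
    arm_length from the servo pivot is rotated by rotation_deg
    between the two shots: 2 * r * sin(theta / 2)."""
    half = math.radians(rotation_deg) / 2.0
    return 2.0 * arm_length * math.sin(half)

# Lens 10 cm from the pivot, rotated 60 degrees between shots:
# the two viewpoints end up 10 cm apart (chord of a 60-degree arc).
print(effective_baseline(0.10, 60.0))
```

You also have to account for the camera pointing in a different direction at each position when you do the triangulation.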
That’s fine too. You can really do whatever you want.