3D position from known object with multiple cameras

Hi all, I have a question that I’m sure is simple, but being new to MV I just don’t know the “magic words” that describe what I’m trying to do.

Let’s say I have a number of cameras in a room, and I have an object with IR LEDs of a known configuration. For the sake of argument, let’s say the LEDs are bright enough that the cameras’ exposure can be low, so that they’re effectively the only thing the cameras see. I can calibrate the cameras in any way needed.

What I’d like to be able to do is get the position of the object in 3D space, for example in a way that I could feed into Unity to attach a 3D object to it in a scene. This is similar to what some commercial motion capture systems do.

What would be the best way for me to get started with this? I feel like it’s simpler than most projects I see here, but again, assume I know nothing :slight_smile:

Getting the position of the object on each camera and sending that to the PC is simple. However, you need a master machine that combines the per-camera results into one 3D position.
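For the camera side, here’s a minimal MicroPython sketch for the OpenMV Cam, just to show the shape of it; the exposure time and blob thresholds are placeholders you’d tune for your LEDs:

```python
# OpenMV side: find the bright IR blobs and report their pixel centroids.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.set_auto_exposure(False, exposure_us=500)  # short exposure so the LEDs dominate
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    # The (220, 255) threshold assumes the LEDs nearly saturate the sensor.
    blobs = img.find_blobs([(220, 255)], pixels_threshold=2, area_threshold=2)
    # One line per frame: "cx,cy;cx,cy;..."; read it on the PC over the
    # USB serial port (or a WiFi socket on cameras that have one).
    print(";".join("%d,%d" % (b.cx(), b.cy()) for b in blobs))
```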

I don’t know the math for this. However, you should google 3D localization using markers.

This makes sense, thank you. I feel like this is essentially a combination of your amazing LED tracking YouTube demo, keypoint detection, and the 3D pose from AprilTag position example. Would you say those are good places to start?

You just need the IR tracking part on the OpenMV Cam. The cameras will stream results back to a main PC. That PC has to take the point locations from each camera and compute the actual 3D location.
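On the PC side, OpenCV’s cv2.triangulatePoints() does that combining step. Here’s a rough sketch, assuming two cameras you’ve already calibrated (intrinsics K plus pose R, t in a shared world frame) and the same LED matched in both views; all the numbers in the example are invented:

```python
# PC side: triangulate one LED's 3D position from two calibrated cameras.
import numpy as np
import cv2

def projection_matrix(K, R, t):
    """Build the 3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def triangulate(P1, P2, pt1, pt2):
    """pt1/pt2: (x, y) pixel centroids of the same LED in each camera.
    Assumes undistorted pixels (or run cv2.undistortPoints first)."""
    a = np.array(pt1, dtype=np.float64).reshape(2, 1)
    b = np.array(pt2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, a, b)   # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()           # (x, y, z) in world units

# Made-up calibration: camera 1 at the origin, camera 2 offset and rotated.
K = np.array([[600.0, 0.0, 160.0], [0.0, 600.0, 120.0], [0.0, 0.0, 1.0]])
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
R2, _ = cv2.Rodrigues(np.array([0.0, np.deg2rad(-20.0), 0.0]))
P2 = projection_matrix(K, R2, np.array([-1.0, 0.0, 0.0]))

print(triangulate(P1, P2, (170.0, 121.0), (150.0, 119.0)))
```

With more than two cameras, you’d solve the same thing as a least-squares intersection of all the rays, which also buys you some robustness when an LED is occluded in one view.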

Um, to make this easy, I’d just mount a camera overhead and get the 2D position from above with that. Then mount one camera in each corner of the room and average their results to get the height off the ground.

Actually, it’s not that simple: the X,Y position read by the “TOP” camera is not the object’s actual XY position in 3D, since it varies depending on the object’s Z value. But there are many articles that use OpenCV for this.
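To see why, here’s a toy back-projection, assuming an idealized pinhole camera looking straight down from height H (all numbers invented). The same pixel maps to a different XY depending on how high the object is, so you need the Z estimate before the top camera’s XY means anything:

```python
# Why the overhead camera's raw pixel XY is not the true XY: a pixel only
# fixes a ray, and where that ray sits in XY depends on the object's height.

def overhead_xy(px, py, z_obj, fx=600.0, fy=600.0, cx=160.0, cy=120.0, H=3.0):
    """Back-project pixel (px, py) onto the horizontal plane z = z_obj,
    for a camera at height H (meters) looking straight down."""
    depth = H - z_obj                 # camera-to-object distance along the ray
    x = (px - cx) / fx * depth
    y = (py - cy) / fy * depth
    return x, y

print(overhead_xy(200, 150, 0.0))     # object on the floor
print(overhead_xy(200, 150, 1.5))     # same pixel, object 1.5 m up: XY shrinks
```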