I am currently working on a calibration project involving a robotic arm and an M7 camera. To map pixels to the robotic arm's coordinate system, I first tried to get an understanding of the M7's field of view. This is what I obtained:
The polygon shown here represents the image boundaries of the M7. I used the single-color detection example available in the OpenMV IDE and moved a colored object along the boundaries.
What I would like to know is:
- Is this the normal behaviour of the camera?
- If not, can it be fixed?
- If it is, is there a mathematical formula to represent this capture?
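For context, here is the kind of model I suspect might describe the curved boundary: standard radial (barrel) lens distortion. This is only a sketch with made-up coefficients (k1, k2 are placeholders, not calibrated values for the M7's lens):

```python
# Sketch of a radial (Brown-Conrady style) distortion model.
# k1, k2 are hypothetical placeholder coefficients, NOT calibrated
# values for the M7 -- they just illustrate the shape of the effect.
def distort(x, y, k1=-0.3, k2=0.05):
    """Map ideal normalized image coords to distorted coords."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Under barrel distortion (negative k1), points farther from the
# optical center are pulled inward more strongly, so the straight
# edges of the ideal image rectangle bow inward -- a corner moves
# more than an edge midpoint, which would explain a polygon like mine.
corner = distort(1.0, 1.0)    # farthest from center
mid_edge = distort(1.0, 0.0)  # closer to center
```

If this is the right family of model, I assume the coefficients would have to be found by a proper calibration (e.g. a checkerboard procedure) rather than guessed.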