camera matrix

Hi,

I am currently using AR tags to determine position with an OpenMV Cam.
I want to convert the tag position into world coordinates. I have my extrinsic parameters, but I could not find a way to get the intrinsic parameters.

OpenCV has a way to get these parameters.
Is it possible to get from OpenMV as well?

Is there an alternative way to get these params?

You can calibrate the camera the same way, using OpenCV; i.e. capture images of the checkerboard squares, save them to the SD card, and run the calibrate.py script. These are my results:

1.7mm VGA

RMS: 0.441215118532
camera matrix:
[[ 292.03384947 0. 297.59432695]
[ 0. 291.68951254 221.76270058]
[ 0. 0. 1. ]]
distortion coefficients: [ -3.47292298e-01 1.90743719e-01 -2.93602264e-03 -3.37712793e-04
-6.77249634e-02]

1.7mm VGA cropped to QVGA

RMS: 1.25631067317
camera matrix:
[[ 347.04979028 0. 255.42502928]
[ 0. 344.71876155 135.07713761]
[ 0. 0. 1. ]]
distortion coefficients: [-0.19428735 -0.03901292 0.00355052 0.00290978 0.03037962]

2.8mm VGA

RMS: 1.0346942622
camera matrix:
[[ 481.83516332 0. 324.54956102]
[ 0. 479.59349073 234.00402299]
[ 0. 0. 1. ]]
distortion coefficients: [-0.45813545 0.28146574 -0.00329139 0.00571643 -0.07724835]
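As a quick sanity check on numbers like these, the camera matrix maps camera-frame 3D points to pixels via u = fx·X/Z + cx, v = fy·Y/Z + cy (ignoring distortion). A minimal sketch using the 2.8mm VGA matrix above (the helper function is mine, not part of any OpenMV API):

```python
import numpy as np

# Intrinsic matrix from the 2.8mm VGA calibration above.
K = np.array([
    [481.83516332,   0.0,        324.54956102],
    [  0.0,        479.59349073, 234.00402299],
    [  0.0,          0.0,          1.0       ],
])

def project(K, point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates
    with the pinhole model (distortion ignored):
    u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    uvw = K @ np.asarray(point_cam, dtype=float)
    return uvw[:2] / uvw[2]

# A point straight ahead on the optical axis lands at the
# principal point (cx, cy), whatever its depth.
u, v = project(K, (0.0, 0.0, 1.0))
print(u, v)  # -> 324.549..., 234.004...
```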

Hi

Thanks for your reply.

I am unable to find calibrate.py in the OpenMV repository.
Could you please point me to the link?

One additional question:
I am using QQVGA for AR tags; otherwise there is a memory issue.
Is there something else I need to do for the QQVGA frame size?

Sorry, I missed the OpenCV part even though it was highlighted.

calibrate.py is an OpenCV script.

I’m not sure; I think you may need to recalibrate. Search for “opencv calibrate camera python”. It’s the same process. Think of it as any other camera: you just need to capture the images and save them to the SD card.
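Recalibrating at the target resolution is the safest option, but if the smaller frame is produced by a clean downscale or crop of an already-calibrated resolution, the intrinsics can be adjusted arithmetically. A sketch of both adjustments (the helper names, and the assumption that QQVGA is a pure 1/4 downscale of VGA, are mine, not from this thread):

```python
def scale_intrinsics(fx, fy, cx, cy, sx, sy):
    """Rescale intrinsics when the image is resized by factors (sx, sy).
    fx and cx scale with width; fy and cy scale with height."""
    return fx * sx, fy * sy, cx * sx, cy * sy

def crop_intrinsics(fx, fy, cx, cy, x0, y0):
    """Shift the principal point when the image is cropped so that its
    new top-left corner is at pixel (x0, y0); focal lengths in pixels
    are unchanged by a crop."""
    return fx, fy, cx - x0, cy - y0

# VGA (640x480) -> QQVGA (160x120) by pure downscaling: factor 1/4,
# using the 2.8mm VGA numbers from earlier in the thread (rounded).
fx, fy, cx, cy = scale_intrinsics(481.835, 479.593, 324.550, 234.004,
                                  0.25, 0.25)
print(fx, fy, cx, cy)
```

Whether the sensor scales or crops depends on how the resolution is configured, which is why recalibrating at the resolution you actually use is the reliable route.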

Hi gents,

I’m working on a project for an autonomous robot. I want to use the OpenMV camera to give the robot knowledge of the environment in front of it. This is my approach:

  • the floor is flat and horizontal
  • the camera is installed on the robot in a fixed position, looking a few meters ahead
  • the floor has a defined texture/color

I’m going to load a series of known objects and look for them in the image; once one is found, I want to estimate where it is (coordinates relative to the robot) in order to plan the best path.

Could you provide some examples related to:

  • converting from pixels to the real world
  • finding known objects and locating them

Thanks a lot for your support
Emanuele

Hi:

  • converting from pixels to the real world

I’m not sure what you mean by this… are you asking about mapping a pixel location to a distance? If so, this is only possible with the OpenMV Cam using AprilTags. Otherwise, you have to build some kind of segmented look-up method.
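For the AprilTags route, `img.find_apriltags()` accepts `fx`, `fy`, `cx`, and `cy` keyword arguments, and the returned tag objects expose `x_translation()`, `y_translation()`, and `z_translation()`. As I understand it, that translation is expressed in multiples of the tag size, so range recovery is just a scaled Euclidean norm; the helper below is my own sketch under that assumption (check the firmware docs for the exact units):

```python
import math

def tag_distance(tx, ty, tz, tag_size_mm):
    """Euclidean distance to a tag whose pose translation (tx, ty, tz)
    is expressed in multiples of the tag edge length.
    Assumption: scale the raw translation by the physical tag size."""
    return math.sqrt(tx * tx + ty * ty + tz * tz) * tag_size_mm

# Hypothetical pose: tag 3 units ahead and 1 unit to the side,
# physical tag edge 50 mm.
d = tag_distance(1.0, 0.0, 3.0, 50.0)
print(d)  # sqrt(10) * 50 ≈ 158.1 mm
```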

  • finding known objects and locating them

The camera has a lot of different features. Which particular algorithm are you looking to run?

Let me explain my idea better. I have a rover in a garden that is perfectly flat and horizontal. The camera is installed on top of the rover, tilted down so it can see what is in front of it.

The camera is pre-loaded with templates of typical objects found in the garden, such as trees, flowers, and outdoor lamps. All of those items are obstacles for the rover.
The code should look for the templates inside the picture and, once one is found, calculate the distance from the rover to the obstacle, assuming that the obstacle is vertical and starts from the grass (floor).

The template matching code will return pixel coordinates, but with the assumptions that the floor is flat and the obstacles stand vertically on the grass, I believe I can convert to a real distance with limited computational cost.
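Under those assumptions the conversion is simple trigonometry: the image row of the obstacle’s base fixes a ray, and intersecting that ray with the floor plane gives the range. A sketch (the camera height and tilt values are hypothetical; fy and cy are borrowed from the 2.8mm VGA calibration earlier in this thread):

```python
import math

def ground_distance(v, cy, fy, cam_height_m, tilt_rad):
    """Distance along a flat, horizontal floor from the camera to the
    point where image row v hits the floor.

    Pinhole-model assumption: the ray through row v is depressed
    atan((v - cy) / fy) below the optical axis, and the optical axis
    itself is tilted tilt_rad below the horizontal."""
    ray = tilt_rad + math.atan((v - cy) / fy)
    if ray <= 0:
        raise ValueError("ray does not intersect the floor")
    return cam_height_m / math.tan(ray)

# Hypothetical rig: camera 0.20 m above the grass, tilted 30 deg down.
d = ground_distance(v=400, cy=234.004, fy=479.593,
                    cam_height_m=0.20, tilt_rad=math.radians(30.0))
print(round(d, 3))
```

Rows lower in the image (larger v) map to points closer to the rover, which is why taking the lowest pixel of the detection is the right choice.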


I found something related to the intrinsic camera matrix, but I don’t know if it is the right way to go.


Hopefully the above clarifies things.

Okay, I understand what you are trying to do. However, it is not very easy:

The camera is pre-loaded with templates of typical objects found in the garden, such as trees, flowers, and outdoor lamps. All of those items are obstacles for the rover.
The code should look for the templates inside the picture and, once one is found, calculate the distance from the rover to the obstacle, assuming that the obstacle is vertical and starts from the grass (floor).

Template matching requires a more or less fixed scene. How do you plan to do this with the API we have available?

Can you go into more detail on how you’d like to find obstacles in the scene? It’s not hard to work out a distance per se… but first you need to be able to detect objects. Yes, I understand you are looking for help on this part. So…

If you plan to avoid obstacles outside, color tracking is our best feature for that. The colors will have to be trained for the outdoor scene, but this is the most obvious way to get something working quickly to avoid things that are not the color of the ground, like trees, flowers, and outdoor lamps. The OpenMV Cam is not able to do general-purpose object recognition using algorithms like YOLO, so it cannot easily classify all the objects you mentioned previously. So, we need a… shortcut: some way to identify an object and avoid it without having to know what that object is. That’s why I mentioned color.

Can I have a database of typical objects stored on the SD card? If yes, can I check within a single frame whether any of the objects is present?


If I find one, I’ll take the lowest pixel of the rectangle where the object was found and convert it to a distance from the rover.


I’m afraid of a color-only approach, because some obstacles have a color similar to the floor and might not be detected.

Yes, your distance method will work for finding the lowest pixel. However, detecting things the way you want will need to work through CNNs. We just released this feature, so it’s not built out right now; however, it will eventually enable you to do what you want.

So… are you capable of generating a data set to train a CNN? If so, then there’s a reasonable path forward. If not, template matching will not really get you where you want to be.

Unfortunately, I’m not familiar with CNNs at all… so I have to find another way…

I am working on a similar project. Here are some good links about calibrating lens distortion. The three links below can help you calculate the lens’s radial and tangential distortion coefficients if you take some checkerboard images with the OpenMV Cam M7.

http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
http://www.cvlibs.net/software/calibration/index.php#citation
http://www.cvlibs.net/software/calibration/mistakes.php

And you’ll need this chessboard:

The Cam M7’s 2.8mm standard lens distorts heavily, so one has to calibrate for it.

You can avoid the processor-heavy task of constantly correcting the distorted image by applying the algorithm above and converting only once, when the camera finds your object of interest.

After finding the distortion coefficients, how can we relate them to cx and cy?
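The connection is that the distortion model operates in normalized coordinates centered on the principal point: a pixel is first shifted by (cx, cy) and scaled by (fx, fy), then distorted, then mapped back through the same intrinsics. A sketch of the forward OpenCV model, using the 2.8mm VGA numbers from this thread (rounded):

```python
def distort(u, v, fx, fy, cx, cy, k1, k2, p1, p2, k3):
    """Apply the OpenCV radial/tangential distortion model to an ideal
    pixel (u, v). cx and cy enter the model because distortion is
    defined in normalized coordinates centered on the principal point."""
    # Normalize: shift by the principal point, scale by focal lengths.
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    # Radial term with coefficients k1, k2, k3.
    radial = 1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    # Tangential terms with coefficients p1, p2.
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    # Map back to pixel coordinates.
    return fx * x_d + cx, fy * y_d + cy

# At the principal point (r = 0) there is no distortion by construction.
print(distort(324.550, 234.004, 481.835, 479.593, 324.550, 234.004,
              -0.4581, 0.2815, -0.0033, 0.0057, -0.0772))
```

In other words, cx and cy are not derived from the distortion coefficients; they are the center the coefficients are expressed around, and both come out of the same calibration.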