I am currently using AR tags to determine position with the OpenMV Cam.
I want to convert the tag position to world coordinates. I have my extrinsic parameters but could not find a way to get the intrinsic parameters.
OpenCV has a way to get these parameters.
Is it possible to get them from the OpenMV Cam as well?
You can calibrate the camera the same way using OpenCV, i.e. capture images of the checkerboard squares, save them to the SD card, and run the calibrate.py script. These are my results:
I’m not sure; I think you may need to recalibrate. Search for opencv + calibrate camera + python. It’s the same process as for any other camera: you just need to capture the images and save them to the SD card.
I’m working on a project for an autonomous robot. I want to use the OpenMV camera to give the robot knowledge of the environment in front of it. This is my approach:
the floor is flat and horizontal
camera is installed on the robot with fixed position looking ahead for few meters
floor has a defined texture/color
I’m going to load a series of known objects and look for them in the image; once found, I want to evaluate where they are (coordinates relative to the robot) in order to define the best path.
Not sure what you mean by this… Um, are you asking about mapping a pixel location to a distance? If so, this is only possible with the OpenMV Cam using AprilTags. Otherwise, you have to make up a segmented look-up method to do something.
finding known objects and locating them
The camera has a lot of different features. Which particular algorithm are you looking to run?
I’ll try to explain my idea better. I have a rover in a garden that is perfectly flat and horizontal. The camera is installed on top of the rover, tilted down in order to see what is in front of it.
The camera is already loaded with some templates of typical objects found in the garden, like trees, flowers, and outdoor lamps. All those items are obstacles for the rover.
The code should look for the templates inside the picture and, once one is found, calculate the distance from the rover to the obstacle, assuming that the obstacle is vertical and starts from the grass (floor).
The template matching code will return pixel coordinates, but with the assumptions that the floor is flat and that obstacles stand vertically on the grass, I believe I can convert those to real distances with limited computational cost.
I found something related to the intrinsic camera matrix, but I don’t know if it is the right way to go.
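The flat-floor idea above does work out to very little math: if the camera height, downward tilt, and focal length (from the intrinsic matrix) are known, the pixel row of the obstacle’s bottom edge maps directly to a ground distance. A minimal sketch, where all parameter values are assumptions for illustration and should come from your own mounting and calibration:

```python
import math

# Assumed parameters for illustration; calibrate and measure your own.
F_PIX = 600.0                   # focal length in pixels (from intrinsic matrix)
CY = 120.0                      # principal point row (QVGA 320x240 -> ~120)
CAM_HEIGHT = 0.20               # camera height above the grass, meters
CAM_TILT = math.radians(15.0)   # downward tilt from horizontal

def ground_distance(v: float) -> float:
    """Distance along the floor to the point imaged at pixel row v,
    assuming a flat floor and that v is the obstacle's bottom pixel."""
    # Angle of the viewing ray below horizontal: tilt plus the in-image angle.
    ray = CAM_TILT + math.atan2(v - CY, F_PIX)
    if ray <= 0:
        raise ValueError("pixel is at or above the horizon")
    return CAM_HEIGHT / math.tan(ray)

# Rows lower in the image (larger v) are closer to the rover.
print(round(ground_distance(180.0), 3))
print(round(ground_distance(120.0), 3))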
Okay, I understand what you are trying to do. However, it is not very easy to do so:
The camera is already loaded with some templates of typical objects found in the garden, like trees, flowers, and outdoor lamps. All those items are obstacles for the rover.
The code should look for the templates inside the picture and, once one is found, calculate the distance from the rover to the obstacle, assuming that the obstacle is vertical and starts from the grass (floor).
Template matching requires more or less fixed scenes. How do you plan to do this with the API we have available?
Can you go into some more detail on how you’d like to find obstacles in the scene? It’s not hard to work out a distance per se… but first you need to be able to detect objects. Yes, I understand you are looking for help on this part. So…
If you plan to avoid obstacles outside, color tracking is our best feature for that. The colors will have to be trained for the outdoor scene… but this is the most obvious way to get something working quickly to avoid things that are not the color of the ground, like trees, flowers, and outdoor lamps. The OpenMV Cam is not able to do general-purpose object recognition using algorithms like YOLO, so it cannot easily classify all the objects you mentioned previously. So, we need a… shortcut, some way to identify the object and avoid it without having to know what that object is. That’s why I mentioned color.
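To make the color-tracking idea concrete: on the OpenMV Cam itself you would call img.find_blobs() with LAB thresholds trained for your garden. The toy sketch below only simulates the same idea off-device with NumPy (threshold "not grass-colored" pixels, reduce to a bounding box); the frame, colors, and g_min parameter are all made up for illustration:

```python
import numpy as np

# Simulated RGB frame: mostly green "grass" with a brown "obstacle" patch.
frame = np.zeros((60, 80, 3), np.uint8)
frame[..., 1] = 180                   # strong green channel everywhere
frame[20:40, 30:50] = (120, 60, 20)   # obstacle region, weak green

def non_grass_bbox(img, g_min=150):
    """Bounding box (x, y, w, h) of pixels whose green channel is weak.
    Stand-in for the blob rect returned by OpenMV's img.find_blobs()."""
    mask = img[..., 1] < g_min
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    x0, x1 = int(xs.min()), int(xs.max())
    y0, y1 = int(ys.min()), int(ys.max())
    return (x0, y0, x1 - x0 + 1, y1 - y0 + 1)

print(non_grass_bbox(frame))  # -> (30, 20, 20, 20)
```

The bottom edge of that bounding box is exactly the "lowest pixel" you would then feed into the flat-floor distance calculation.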
Yes, your distance method will work for finding the lowest pixel. However, as for detecting things, the way that will need to work is through CNNs. We just released this feature, so it’s not built out right now. However, it will eventually enable you to do what you want.
So… are you capable of generating a data set to train a CNN? If so, then there’s a reasonable path forward. If not, template matching will not really get you where you want to be.
I am working on a similar project. Here are some good links about calibrating lens distortion. The three links below can help you calculate the lens’s radial and tangential distortion coefficients if you take some checkerboard images with the OpenMV Cam M7.
The Cam M7’s standard 2.8 mm lens distorts heavily, so one has to calibrate for that.
You can avoid the processor-heavy cost of constantly undistorting the whole image by applying the algorithm above and converting only once, when the camera finds your object of interest.
Hi,
I’m trying to run the calibration code, using 10 images similar to the image attached, but the function cornersFound, cornersOrg = cv.findChessboardCorners(imgGray,(nRows,nCols), None) is not finding any corners in my image, so I’m getting wrong values for the outputs.