Accuracy of April Tags position

Hi again,
For my project I would like to track AprilTags with an OpenMV H7 R2. What is the accuracy at which the positions of these tags are reported (relative to each other)?
And do I also get a distance (from the camera) value? How accurate will that be?
The distance of the tags to the camera is about 50 cm.

Hi, I can’t answer on the accuracy. That depends on your tag size and lighting. We can see 8"x8" tags about 8ft away.

We’ve had customers get up to 30ft using the H7 Plus with a higher resolution and using grayscale.

Well, my tags will be at about 50 cm. Does the size of the tag matter for the accuracy?
Will it be pixel or sub-pixel accuracy, for instance?

Just try it out and see. I really can’t tell you what the result will be. A small tag means the maximum detection distance will be low, but at 50 cm the algorithm should be fine. It will give a sub-pixel position.

Hi - I am working with 8cm tags (8cm along the black->white border around the code, 10cm in total including the width of that white border) and a 3.3mm lens on the global shutter module (MT9V034 sensor). I am measuring with the sensor running at QVGA resolution. In my case the camera is between 60cm and 80cm from the tags, so close to your 50cm.

With the axis of the lens roughly normal (to within +/- 10 degrees) to the plane of the tags (the XY plane), I have measured the relative accuracy (so that’s, as you say, the displacement between two tags) and the precision (the magnitude of the noise on that measurement) to both be ~1cm in that plane.

The ‘z’ measurement - the distance from the camera to the tags - I’m not so interested in, so I haven’t been looking too closely at it, but it is certainly not as good - maybe 5-10 cm in the worst cases.

On all three axes the accuracy and precision vary across the field of view of the camera - generally the closer to the optic axis the better. The important parameters as far as accuracy goes are the tag size, the focal length of the lens (determines ‘magnification’ and field of view), and the sensor resolution. There is a non-linear relationship between all three of these and the performance - things get worse pretty quickly.

Hope my figures give you a starting point but the best thing to do, as kwagyeman says, is to get on and make some measurements. I found the quickest way to make measurements was to print out a big array of tags rather than moving a single tag around.
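
If it helps, here is a minimal sketch of the sort of measurement script I mean (not my exact code; the sensor settings are just examples) - it prints the tag centres each frame so you can log displacements between tags in the printed array:

    # Sketch of a measurement script: print AprilTag centres each frame so that
    # displacements between tags in a printed array can be logged.
    import sensor, time

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)   # AprilTags are detected on grayscale
    sensor.set_framesize(sensor.QVGA)        # what I measured at on the global shutter module
    sensor.skip_frames(time=2000)

    clock = time.clock()
    while True:
        clock.tick()
        img = sensor.snapshot()
        for tag in img.find_apriltags():     # defaults to the TAG36H11 family
            print("id %d: cx=%d cy=%d" % (tag.id(), tag.cx(), tag.cy()))
        print(clock.fps())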

Thank you very much ecdm2! This helps me a lot, though the resolution is a bit disappointing - I think mainly because of the wide FOV. Do you know what your FOV is?
I will start experimenting as soon as possible.

Hi - With the sensor at QVGA and the lens I am using, it’s around 60deg on the long edge by 47deg on the short edge, if my maths is correct. With the camera at around 70cm I can see something like an 80x60cm rectangle on the floor.
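
For what it’s worth, the footprint just follows from the field of view and the working distance; a quick sanity check (the numbers are only my setup):

    # Quick sanity check of the visible footprint from the field of view.
    # Numbers are just my setup: ~60 x 47 deg FOV at ~70 cm working distance.
    import math

    def footprint(fov_deg, distance_cm):
        # width of the area seen on a plane at distance_cm, for one axis of the FOV
        return 2.0 * distance_cm * math.tan(math.radians(fov_deg / 2.0))

    print(footprint(60, 70))  # ~80.8 cm along the long edge
    print(footprint(47, 70))  # ~60.9 cm along the short edge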

If you look on AliExpress you can find a wide selection of different focal length M12 lenses at reasonable cost so you can fine tune your field of view. I tended to go for the ones marked as ‘low distortion’.

Looking at your previous posts, for your application, I would start with a tag of size equal to letter or A4 paper width, QQVGA sensor resolution (you can work with whole frames which is way simpler) and tune your field of view to be absolutely no larger than required.

Thanks again!
What I do not understand: if you see about 80 cm across with 320 pixels of resolution, I would expect a position accuracy of at least 2.5 mm. And @kwagyeman says it should be sub-pixel accuracy!
So why are you getting about 10 mm accuracy?
For my application I need about 1 to 2 mm accuracy.
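
(My back-of-the-envelope arithmetic is just the visible width divided by the horizontal pixel count:)

    # Back-of-the-envelope pixel scale: visible width divided by horizontal resolution.
    fov_width_mm = 800.0      # roughly 80 cm visible at the working distance
    h_resolution = 320        # QVGA horizontal pixels
    print(fov_width_mm / h_resolution)  # 2.5 mm per pixel, before any sub-pixel refinement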

Hi

In my application it isn’t that simple.

My OpenMV camera is mounted at one end of a rigid body but the point on that body that I want to know the position of is at the other end. To calculate the position of that point I also need to consider the rotation component R of the pose returned by the AprilTag algorithm - the 3D rotation R between the frame of the camera and the frame of the tag. Most of my ‘loss’ in precision, over the case you describe, comes about through what can be thought of simply as the lever between the point I am tracking and the camera. Noise on the estimate of R is amplified by that lever.

Looking at your earlier posts it looks like you may only need to worry about tracking the centre of projection of the camera in the frame of the tag (or vice versa, whichever way you want to look at it). So hopefully your lever will be vanishingly small and you can neglect R entirely.
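
To put rough numbers on that lever effect (purely illustrative values, not my measurements): a small error in the estimated rotation R shifts the computed point by roughly the lever length times the angle error.

    # Illustration of the 'lever' effect: a small error in the estimated rotation R
    # shifts a point offset from the camera by roughly (lever length) x (angle error).
    import math

    lever_cm = 30.0             # example: distance from the camera to the point being tracked
    rotation_noise_deg = 1.0    # example: plausible noise on the pose rotation estimate

    error_cm = lever_cm * math.radians(rotation_noise_deg)
    print("%.2f cm of position error from %.1f deg of rotation noise"
          % (error_cm, rotation_noise_deg))   # ~0.52 cm for these numbers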

If you are using a similar field of view to me I think that you should start by trying a 20cm wide tag at QQVGA resolution, I think that will put you in reach of your target 1 - 2mm precision. If it’s close but not quite there then narrow your field of view a bit, make your tag a bit bigger (go in 10 - 20% increments) or both.

I bought a camera and started experimenting. But when detecting AprilTags, I get the position of the tag as integers, not floats. So how can this be sub-pixel accuracy?

I apologize, we have this for find_blobs() but not for AprilTags. It’s been a todo forever: Subpixel corners for apriltag.corners() · Issue #415 · openmv/openmv · GitHub
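
For reference, this is roughly what the existing sub-pixel output looks like with find_blobs() (a quick sketch - cxf()/cyf() are the float centroid accessors, and the threshold here is just an example):

    # Sketch: find_blobs() exposes float centroids via cxf()/cyf(), which is the
    # sub-pixel output that find_apriltags() currently lacks.
    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)
    sensor.skip_frames(time=2000)

    img = sensor.snapshot()
    for blob in img.find_blobs([(200, 255)], pixels_threshold=4):  # bright blobs, example threshold
        print(blob.cxf(), blob.cyf())  # float centroid, sub-pixel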

Then I have a problem! The camera resolution for AprilTag recognition is already very, very low; I have to set it to QQVGA, so only 160x120. If I do not get sub-pixel resolution, I cannot use it for my project.
When will this be implemented? It doesn’t seem like a lot of work, does it?

Is there some other detection algorithm that I can use to detect the position of objects with sub-pixel accuracy? Blobs are not suitable because I need to be sure the object is the correct object, so some kind of identification is needed.

It’s not a lot of work to do sub-pixel. We internally have a float and store it in an int for output. You could edit our C code and implement the feature if you like.

You can increase the resolution by limiting your roi in the code. The cameras don’t have enough memory to process the whole image at high resolution, but you can first search for blobs (or tags) at low resolution and then search a limited roi at higher resolution.
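
Something like this coarse-to-fine loop is what I mean (a rough sketch, assuming a grayscale sensor, a single tag, and that the padded ROI at the higher resolution still fits in memory):

    # Sketch of the coarse-to-fine idea: find the tag over the whole frame at low
    # resolution, then re-detect inside a small ROI at a higher resolution so the
    # AprilTag workload stays within memory.
    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)

    SCALE = 4  # VGA (640x480) is 4x QQVGA (160x120) in each axis

    while True:
        # Pass 1: coarse detection over the whole frame at QQVGA.
        sensor.set_framesize(sensor.QQVGA)
        sensor.skip_frames(time=100)
        tags = sensor.snapshot().find_apriltags()
        if not tags:
            continue
        x, y, w, h = tags[0].rect()

        # Pass 2: re-detect at VGA, but only inside a padded ROI around the coarse hit.
        sensor.set_framesize(sensor.VGA)
        sensor.skip_frames(time=100)
        rx = max(0, (x - 8) * SCALE)
        ry = max(0, (y - 8) * SCALE)
        rw = min(640 - rx, (w + 16) * SCALE)
        rh = min(480 - ry, (h + 16) * SCALE)
        for tag in sensor.snapshot().find_apriltags(roi=(rx, ry, rw, rh)):
            print(tag.cx(), tag.cy())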

Any news on this?
Is sub-pixel accuracy for AprilTags possible now?

Yes, it’s been available. See the documentation.

That is good news!

But when looking at the documentation, I see:


apriltag.cx()

    Returns the centroid x position of the apriltag (int).

    You may also get this value doing [6] on the object.

apriltag.cy()

    Returns the centroid y position of the apriltag (int).

    You may also get this value doing [7] on the object.

The position is still in integers.

Sorry, apparently that has not been added.

Will send a PR in for this today.

That is bad news. I really hoped it would be available now.
How long do you think it will take before this is in the firmware?

I can get it done today. Which camera do you have? And H7 or H7 Plus?