h7p.zip (1.0 MB)
h7.zip (1.0 MB)
Attached is a firmware with the new feature. Just do cxf() or cyf() on the apriltag to get the float position.
wow, that is quick! Thx a lot. I will check it soon.
Hi kwagyeman,
It does not seem to work.
I upgraded the firmware. It indicates version 4.4.3 now.
I changed the april tag example code to:
while(True):
    clock.tick()
    img = sensor.snapshot()
    for tag in img.find_apriltags(families=tag_families): # defaults to TAG36H11 without "families".
        img.draw_rectangle(tag.rect(), color = (255, 0, 0))
        img.draw_cross(tag.cx(), tag.cy(), color = (0, 255, 0))
        print_args = (family_name(tag), tag.id(), tag.cxf())
        print("Tag Family %s, Tag ID %d, position %f" % print_args)
    print(clock.fps())
But the position is still an integer. I see numbers like 98.000002 and 95.999998.
Interesting. Well, that’s the output of the algorithm.
That’s produced by this in the code:
homography_project(det->H, 0, 0, &det->c[0], &det->c[1]);
Which does:
static inline void homography_project(const matd_t *H, float x, float y, float *ox, float *oy)
{
    float xx = MATD_EL(H, 0, 0)*x + MATD_EL(H, 0, 1)*y + MATD_EL(H, 0, 2);
    float yy = MATD_EL(H, 1, 0)*x + MATD_EL(H, 1, 1)*y + MATD_EL(H, 1, 2);
    float zz = MATD_EL(H, 2, 0)*x + MATD_EL(H, 2, 1)*y + MATD_EL(H, 2, 2);
    *ox = xx / zz;
    *oy = yy / zz;
}
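That projection is simple enough to sketch in plain Python (illustration only; the real code operates on the AprilTag library's matd_t matrices): projecting the tag-frame origin (0, 0) just reads off the third column of H and applies the perspective divide.

```python
# Plain-Python sketch of homography_project, for illustration only.
def homography_project(H, x, y):
    """Map point (x, y) through a 3x3 homography H (list of rows)."""
    xx = H[0][0] * x + H[0][1] * y + H[0][2]
    yy = H[1][0] * x + H[1][1] * y + H[1][2]
    zz = H[2][0] * x + H[2][1] * y + H[2][2]
    return xx / zz, yy / zz  # perspective divide

# Projecting (0, 0) returns the third column of H after the divide:
H = [[1.0, 0.0, 98.0],
     [0.0, 1.0, 96.0],
     [0.0, 0.0, 1.0]]
print(homography_project(H, 0.0, 0.0))  # (98.0, 96.0)
```

So the float precision of cxf()/cyf() is only as good as the H matrix it is fed.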
So, I’d have to modify the homography matrix which is computed by the apriltags code:
float theta = -entry.rotation * M_PI / 2.0;
float c = cos(theta), s = sin(theta);
// Fix the rotation of our homography to properly orient the tag
matd_t *R = matd_create(3,3);
MATD_EL(R, 0, 0) = c;
MATD_EL(R, 0, 1) = -s;
MATD_EL(R, 1, 0) = s;
MATD_EL(R, 1, 1) = c;
MATD_EL(R, 2, 2) = 1;
matd_t *RHMirror = matd_create(3,3);
MATD_EL(RHMirror, 0, 0) = entry.hmirror ? -1 : 1;
MATD_EL(RHMirror, 1, 1) = 1;
MATD_EL(RHMirror, 2, 2) = entry.hmirror ? -1 : 1;
matd_t *RVFlip = matd_create(3,3);
MATD_EL(RVFlip, 0, 0) = 1;
MATD_EL(RVFlip, 1, 1) = entry.vflip ? -1 : 1;
MATD_EL(RVFlip, 2, 2) = entry.vflip ? -1 : 1;
det->H = matd_op("M*M*M*M", quad->H, R, RHMirror, RVFlip);
As you can see… the only entropy in the above, other than rotations and mirrors, comes from the quad->H matrix, which encodes the locations of the corners of the tag.
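As a sanity check on that claim, here is a plain-Python sketch (the quad->H values are invented for illustration) showing that the rotation/mirror fix-up matrices never move the projected center: their third columns are (0, 0, ±1), so projecting (0, 0) through quad->H * R * RHMirror lands on the same point as projecting through quad->H alone.

```python
import math

def matmul(A, B):
    """3x3 matrix product, rows-of-lists representation."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def project(H, x, y):
    xx = H[0][0]*x + H[0][1]*y + H[0][2]
    yy = H[1][0]*x + H[1][1]*y + H[1][2]
    zz = H[2][0]*x + H[2][1]*y + H[2][2]
    return xx / zz, yy / zz

# An arbitrary quad homography standing in for quad->H.
quadH = [[1.2,   0.1,   98.3],
         [0.05,  1.1,   95.7],
         [0.001, 0.002, 1.0]]

theta = -1 * math.pi / 2.0  # entry.rotation = 1
c, s = math.cos(theta), math.sin(theta)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
RHMirror = [[-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]]  # hmirror on

H = matmul(matmul(quadH, R), RHMirror)
# Third columns of R and RHMirror are (0,0,1) and (0,0,-1), so the
# projected tag center is unchanged by the fix-up:
print(project(quadH, 0.0, 0.0))
print(project(H, 0.0, 0.0))
```

In other words, the center's accuracy really does come entirely from the quad corner estimation.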
…
Anyway, after digging into how to change the quad point locations, it looks like the code spends a lot of time optimizing them and subtly changing their values.
So… I don’t want to touch anything in there to try to get more accuracy on the quad since that might compromise something else.
Do you mean that there will be no subpixel support?
Well, there’s an upgrade to AprilTag 3 I need to do. However, I can’t say when I’ll get that done. It might support this.
But as for right now, yeah, I don’t feel comfortable touching that code and affecting other folks. So, as it stands, the algorithm does not actually produce sub-pixel output from its core logic.
Hmm, then I need to track my object some other way.
Suppose I attach some IR LEDs on the objects. Which detection algorithm does OpenMV support that I can use to detect IR LED dots with subpixel precision?
The blob detection algorithm features sub-pixel accuracy. It can also run at a much larger resolution.
This algorithm was designed for exactly what you want to do, so it shouldn’t be hard.
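For anyone wondering where the sub-pixel part comes from: blob centers are typically computed as an intensity-weighted centroid, which averages over many pixels of the bright dot. A minimal plain-Python sketch of the idea (not OpenMV's actual on-camera implementation; on the camera you would use find_blobs() and read the blob's float centroid):

```python
def blob_centroid(img, threshold):
    """Intensity-weighted centroid of pixels above threshold.
    img is a list of rows of grayscale values; returns float (x, y)."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v >= threshold:
                total += v
                sx += v * x
                sy += v * y
    return sx / total, sy / total

# A 5x5 frame with a bright IR "dot" whose true center sits between pixels.
frame = [[0,  0,   0,   0, 0],
         [0, 10, 100,  90, 0],
         [0, 20, 200, 180, 0],
         [0, 10, 100,  90, 0],
         [0,  0,   0,   0, 0]]
cx, cy = blob_centroid(frame, 50)
print(cx, cy)  # cx falls between pixel columns 2 and 3, cy = 2.0
```

Because the result is a weighted average over many samples, its precision can be a good fraction of a pixel even with sensor noise.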
On the other hand, I was wondering why a decimal point in your result would improve the accuracy so much.
What’s your field of view?
What’s your working distance?
How big are the tags?
What lens did you use?
What resolution?
What sensor?
What is the accuracy per pixel right now?
What accuracy do you want?
Please answer those and maybe help will arrive…
I need to detect the positions of the markers to within 1 or 2 mm. The camera is positioned such that for a 160x120 frame (sadly the max for AprilTag detection), one pixel corresponds to almost 1 centimeter. So I would like to increase the resolution and/or use a sub-pixel detection algorithm.
So you need almost 5 to 10 times better accuracy?
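A quick back-of-the-envelope with the numbers from the thread (the VGA figure assumes running blobs at 640x480; adjust for whatever resolution your sensor actually supports):

```python
# Back-of-the-envelope check using the numbers quoted above.
mm_per_px_lowres = 10.0          # ~1 cm per pixel at 160x120
target_mm = 2.0                  # wanted: 1-2 mm positional accuracy

needed_factor = mm_per_px_lowres / target_mm
print(needed_factor)             # 5.0 -> the "5 to 10 times" figure

# Blobs at VGA (4x the linear resolution) plus even a modest
# quarter-pixel centroid accuracy lands comfortably inside spec:
mm_per_px_vga = mm_per_px_lowres * 160 / 640
print(mm_per_px_vga * 0.25)      # 0.625 mm
```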
I would suggest trying the global shutter module at full resolution and doing a blob detection.
Once you find where the tag is, pass an ROI around the blob and read the tag at that specific place.
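That two-stage idea can be sketched as a small helper (names hypothetical, plain Python so it runs anywhere): take the blob's bounding box at full resolution, expand it by a safety margin, clamp it to the frame, and hand the resulting ROI to the tag decoder.

```python
def blob_to_roi(bx, by, bw, bh, margin, frame_w, frame_h):
    """Expand a blob's (x, y, w, h) bounding box by `margin` pixels
    and clamp it to the frame, giving an ROI for the tag decoder."""
    x = max(bx - margin, 0)
    y = max(by - margin, 0)
    w = min(bx + bw + margin, frame_w) - x
    h = min(by + bh + margin, frame_h) - y
    return (x, y, w, h)

print(blob_to_roi(300, 200, 40, 40, 20, 640, 480))  # (280, 180, 80, 80)
print(blob_to_roi(0, 0, 40, 40, 20, 640, 480))      # clamped at the edge
```

On the camera this would look roughly like a find_blobs() pass followed by find_apriltags() restricted to that rectangle, assuming your firmware's find_apriltags accepts an roi argument.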
Yes, that sounds like a good option.