Frame Rate April Tags

Hi,

What kind of frame rates can I expect for AprilTag pose estimation at VGA resolution with the new H7?

Cheers
Tobias

AprilTags runs at a maximum of 240x240 on the H7, at about 20 FPS.

The AprilTag algorithm has a rather large memory overhead that grows steeply with resolution (each doubling of the linear resolution quadruples the pixel count). Anyway, in order to get more RAM for the algorithm I locked the resolution to 256x256 max. This let me cut the size of the union-find table in half, allowing resolutions higher than 160x120, but it also caps the maximum resolution.

I’m trying to maximize the frame size for detecting AprilTags. How do I set it to 256x256 when the sensor.set_framesize command doesn’t have that as an option?

Alternatively, can I go with a larger frame size if I switch to Datamatrix or QR Codes?

Thanks.

Don’t run at 256x256; stay at 240x240. Set a framesize of QVGA or higher, then use the set_windowing method to set the window to 240x240.

As for QR codes, I believe they take less RAM. Remember that resolution is two-dimensional, so going up to the next resolution means 4x more RAM.
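That scaling is easy to check with quick arithmetic; the resolutions below are just standard sensor sizes used for illustration:

```python
# Doubling the linear resolution quadruples the pixel count,
# so any memory that scales with pixels grows 4x per step.
for w, h in [(80, 60), (160, 120), (320, 240), (640, 480)]:
    print("%dx%d -> %d pixels" % (w, h, w * h))
```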

With respect to windowing, I understand, but I am getting an error. I entered the following lines:

sensor.set_framesize(sensor.VGA)
sensor.set_windowing(240,240)

to create a 240x240 window centered inside a VGA frame. I get the following error.

TypeError: function takes 1 positional arguments but 2 were given

I did a search and discovered my syntax error: I needed two sets of brackets. Why, I have no idea.

sensor.set_windowing((240,240))

I found something disturbing. If I use windowing, I cannot decode April Tags. Why is this?

sensor.set_framesize(sensor.VGA)
sensor.set_windowing((240,240))

Scott

Hi, it requires the extra brackets because the Python code expects a single tuple argument of length 2 or 4.
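A rough stand-in (not the actual firmware code) shows why the call takes one tuple rather than separate arguments: a 2-tuple means a centered window of that size, while a 4-tuple gives an explicit x, y, w, h region:

```python
def set_windowing(roi):
    # Hypothetical sketch of the argument parsing; the real
    # implementation lives in the OpenMV firmware.
    if len(roi) == 2:          # (w, h): window centered in the frame
        w, h = roi
        return ("centered", w, h)
    if len(roi) == 4:          # (x, y, w, h): explicit window
        x, y, w, h = roi
        return ("explicit", x, y, w, h)
    raise TypeError("roi must be a tuple of length 2 or 4")

print(set_windowing((240, 240)))   # one tuple argument works
```

Calling set_windowing(240, 240) passes two positional arguments instead of one tuple, which is exactly the TypeError you saw.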

When you increase the AprilTag resolution, the algorithm compensates for the larger memory requirement by shrinking a temporary heap used to store edge-point matches. Because this memory pool is smaller, the algorithm can run out of RAM and reallocs fail. When this happens, the point structures it was iterating on are lost. I chose to have the algorithm behave this way rather than throw an error. It can still go on to find new tag structures, so it's not fatal for all tags in the image, just for large ones built from a long list of edge points.

On the H7, 240x240 can be achieved. However, I recommend lowering the resolution to the lowest you can get by with. The algorithm's memory appetite grows much faster than linearly, and our RAM did not grow by a matching amount for the H7.

Um, try 200x200. That works well.

Note… we might be making an H7 version with external RAM, where this limit would be removed. The AprilTag algorithm doesn't actually need a lot of memory bandwidth, so running from external RAM at a higher resolution would not necessarily reduce performance.

I think it would be great to have more RAM available. How about if we could use the micro SD card as memory, with the caveat that we'd have to reduce the frame rate? I'd be fine with 5-10 fps if I could use the entire 640x480 sensor resolution. I'd use a 1 or 2 GB card.

SD cards can't be used as RAM. Running at 200x200 is about the max the H7 can do with AprilTags. I'm sorry this isn't enough resolution.

If we have an H7 with RAM working soon I will send you a model. Then this limit will be removed.

I set the resolution to 200x200 and it runs for a while, but then I sometimes get a memory error that reads “Memory Error: Out of temporary Frame Buffer Heap Memory! Please reduce the resolution of the image you are running this algorithm on to bypass this issue”. Instead of reducing the window size, is there a command I can run every so often to flush the memory buffer to prevent this from happening? Does what the camera sees affect the memory use?

Um, just wrap the code with a try: except: block.
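A minimal sketch of that pattern, with a hypothetical find_tags() standing in for img.find_apriltags() so the failure mode can be shown off-camera (on the camera you would wrap the real call the same way):

```python
def find_tags(frame):
    # Hypothetical stand-in for img.find_apriltags(); a cluttered
    # frame is what makes the real algorithm exhaust its temporary heap.
    if frame == "cluttered":
        raise MemoryError("Out of temporary Frame Buffer Heap Memory!")
    return ["tag36h11:0"]

detections = []
for frame in ["plain", "cluttered", "plain"]:
    try:
        detections.append(find_tags(frame))
    except MemoryError:
        detections.append([])  # drop this frame and keep the loop running
print(detections)  # -> [['tag36h11:0'], [], ['tag36h11:0']]
```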

AprilTags uses more RAM when it has to deal with edges; the less stuff there is in the image, the less RAM it uses. The big RAM hog is that it allocates a 12-byte point structure per black/white edge transition. So, with 200x200 pixels it could need ~500KB or so. That's on top of the static memory it needs, about 8x the resolution, for other data structures. It's honestly crazy the algorithm runs on an MCU in SRAM.
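Those figures can be sanity-checked with quick arithmetic (the worst case assumes nearly every pixel sits on an edge transition):

```python
pixels = 200 * 200                     # 40,000 pixels at 200x200
bytes_per_edge = 12                    # point structure per black/white transition
worst_case = pixels * bytes_per_edge   # 480,000 bytes, i.e. ~500 KB
static_overhead = 8 * pixels           # "about 8x resolution" of fixed structures
print(worst_case, static_overhead)     # -> 480000 320000
```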

I see. So that's why it fails sometimes; I guess in those instances the camera sees more edges. I can give you some information about what I'm trying to do. I want to buy a mecanum-type robot and mount the camera under it, pointing down. On the floor, I want to place AprilTags spaced 4-6" apart in a grid. I want the camera to be able to see several in the FOV so that I can find the next tag and instruct the robot to move to it. Since the height off the floor would only be a few inches, I need to maximize the window size to see more than one at a time. I was thinking of trying the wide-angle lens, but I'm not sure if the distortion would be too much for the decoder.

In your opinion, is there a more memory-efficient tag I could use, or is this a job for a more powerful system like a Raspberry Pi?

Yeah, so, you don't need the resolution for this particular problem, just the FoV. So I'd use a wide-angle lens. The algorithm used by AprilTags can decode tags that are warped; it doesn't need the tag to be square-ish.

Since you don't need the resolution, just go to the highest greyscale resolution at which lens_corr() still works, then run the AprilTag algorithm on the lens-corrected image.

The AprilTag algorithm can find tags with very few pixels. It’s quite robust. So, work on making the FoV big versus the resolution.

Thanks Nyamekye for your responses. You are very helpful indeed. I should have bought the wide-angle lens when I bought a couple of the cameras. I will order one and experiment with different-size AprilTags and the greyscale and lens-correction settings you suggest. I will upload pictures and more information when I get to that stage.

Any update on new cameras with more memory?

We just paid for the production run of 1k units. SingTown is building another 2.5k. The main chip's lead time is long, so it will likely be available in January or February.

I will start preorders for it soon. I got TensorFlow support done and committed, so I just need to finish the OV5640 driver and then we will be ready to demo it with a video and so on.

(I’ve been trying to have a social life between my day job and OpenMV so I get work done when I can).

I look forward to trying it, but for this robot it looks like I have to go with what I have. I have another camera question, but I will ask it in a new post for the benefit of all.