Messing around with matching templates

Hello. I hope you are all doing well. Just a concept question: what does the “threshold” parameter actually do, algorithmically?

I ask because OpenMV seems to just choose a “good enough” area in the image rather than the area of best correlation. Is this a hardware-based limitation?

I tried getting around this by having nested search routines descend from 99%, 98%, … down to 40%, but it became incredibly slow. I guess this is because it repeats the full search, with all its large matrix comparisons, at every threshold, instead of comparing the results of a single pass to find the best one.
I’ve tried feeding a value like max_val from the OpenCV examples into this argument, but that was a random, unsuccessful guess.

I ask because a neat technique I found to get around object rotation, translation, and changes in lighting is to actively update the template with the image of the highest-matching area. It’s super neat in OpenCV, and I’m wondering how to port it to the clever camera functions in OpenMV.
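For context, the OpenCV side of the technique looks roughly like this. This is a sketch rather than my exact code; cv2.TM_CCOEFF_NORMED and the 0.8 update threshold are just the values I would reach for:

```python
import cv2

def track_and_update(frame, template, update_thresh=0.8):
    # Score every placement of the template against the frame (NCC variant).
    res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    # minMaxLoc gives the single best correlation value and its location.
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    x, y = max_loc
    h, w = template.shape[:2]
    if max_val > update_thresh:
        # Re-crop the template from the best-matching area so it slowly
        # absorbs rotation, translation, and lighting changes.
        template = frame[y:y + h, x:x + w].copy()
    return (x, y, w, h), max_val, template
```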

Thank you.

What function are you asking about?

Oh, template matching. Yeah, so, the matcher is just a Normalized Cross Correlation (NCC). It outputs a score, and when the score is above the threshold you supply, we track the location of that match area. The threshold value is just compared against the output of the NCC algorithm.
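For reference, this is roughly how the threshold gets used in practice (a minimal sketch along the lines of the stock template-matching example; the 0.70 value is just illustrative):

```python
import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # NCC matching runs on grayscale
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

template = image.Image("/template.pgm")  # template saved on the flash filesystem

while True:
    img = sensor.snapshot()
    # Only a location whose NCC score exceeds 0.70 is reported as a match.
    r = img.find_template(template, 0.70, step=4, search=image.SEARCH_EX)
    if r:
        img.draw_rectangle(r)
```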

Template matching is currently very poor and we are all waiting for it to improve, though.
Template matching is a very important tool in many inspection applications.

Don’t expect rotation or scale factors to appear in this tool…

@oramafanis - I’m finally putting more time into OpenMV again this year. Things are less crazy at work.

What improvements would you like to see for template matching? I have a bunch of stuff on my todo list but making template matching awesome is something I could likely work on within a few months.

Ahh, hmm, that makes sense. So there can be multiple outputs then: all NCC areas exceeding that threshold are reported.

Well, would it be possible, given how OpenMV’s functions are structured, to output only the single strongest NCC correlation? For example, instead of reporting everything surpassing a 0.50 threshold, report just the point with the highest score: 0.93 in one case, or 0.84 in another. I tried feeding in some arbitrary functions, but I could not figure it out.
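One idea I had for a workaround, since the raw score isn’t exposed: bisect the threshold instead of stepping it down 1% at a time, which would be six or seven searches instead of sixty. A sketch (untested, and it assumes find_template returns None when nothing clears the threshold):

```python
import image

def best_match(img, template, lo=0.40, hi=0.99, tol=0.01):
    # Binary-search for the highest threshold at which a match still exists;
    # that match is (approximately) the strongest NCC correlation.
    best = None
    while hi - lo > tol:
        mid = (lo + hi) / 2
        r = img.find_template(template, mid, step=4, search=image.SEARCH_EX)
        if r:
            best = (r, mid)  # a match clears this score; raise the bar
            lo = mid
        else:
            hi = mid         # nothing clears it; lower the bar
    return best              # ((x, y, w, h), approx_score) or None
```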

If it’s not possible, is it technically feasible to write a custom algorithm for OpenMV? I am not a great coder, but I’d love to try writing something custom.

Perhaps the existing tool can be used as a basis. For example, if the image has its gain normalized over an ROI, and binary contours are used as both the input image and the template, the algorithm is less confused by shading and more concerned with the congruity of shapes. Scanning the template or scene over slight inclinations of something could be an approach too.
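As a rough sketch of the binarization idea (the (128, 255) band is an arbitrary guess, and I haven’t verified that binarizing actually helps the matcher):

```python
import sensor, image

def find_by_shape(template, threshold=0.70):
    # Binarize both the scene and the template so the match keys on shape
    # congruity rather than shading.
    img = sensor.snapshot().binary([(128, 255)])       # pixels -> 0 or 255
    bin_t = template.copy().binary([(128, 255)])
    return img.find_template(bin_t, threshold, step=4, search=image.SEARCH_EX)
```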

My own template matching when reading upside down isn’t that great (not in code; in my head, with my eyes). But while rotating a large image in code is expensive, rotating a small template could be much less painful.
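And a sketch of the small-template rotation idea, assuming rotation_corr() with z_rotation in degrees can be applied to a template-sized image (the angle range is arbitrary):

```python
import image

def find_rotated(img, template, threshold=0.70, angles=range(-30, 31, 10)):
    # Rotating the small template is cheap; rotating the whole frame is not.
    # Return the first (roi, angle) pair that clears the threshold.
    for a in sorted(angles, key=abs):  # try the unrotated template first
        t = template.copy().rotation_corr(z_rotation=a)
        r = img.find_template(t, threshold, step=4, search=image.SEARCH_EX)
        if r:
            return r, a
    return None
```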

I am just tossing around ideas for approaches. I’m sure they’ve all been done in one way or another with varying degrees of success.

The current code is really basic and I just haven’t updated it. It can be a lot better.

Since @oramafanis has interest in this, maybe I can re-write it to be faster and do a lot more. I don’t have time this weekend as I’ll be focused on store stuff, but maybe this can be done sooner. Out of all the things I have to do, re-writing these algorithms with SIMD is one I’m enthusiastic about, and I want to see how fast I can get it to run.

(Plowing more time into OpenMV now; the chip shortage is ending, so it’s time to get back to work.)

Dear @kwagyeman,

I will describe the same tools as found on other cameras:

- Scale factor for both the input and output of the tool. When the scale factor is 1, the tool runs as fast as possible.
- Rotation factor for both the input and output of the tool. When the rotation is 0, the tool runs as fast as possible.
- Sensitivity for the input of the tool. You can try to find something that merely looks like the stored image instead of something that matches it exactly, so the tool runs faster at low sensitivity.
- As far as I know, other tools have options for an area model vs. an edge-perimeter model, to try to find the “object” itself and not the object together with its background.
- Timeout control.

Note that I don’t know exactly what the DS vs. EX finding methods do.

The problems I had when testing this tool were more about image quality.
I don’t know why, but it seems the tool messes with the frame buffers and exposure time.

The best working setup is to use search=SEARCH_EX and set the frame buffers to 1 to make it work.
I have to say that with these settings the tool works well and fast. There is some rotation and scale tolerance too, but no input or output value to control them.

I get 35 fps with the global shutter module at sensor.QQVGA resolution and a 70x60 stored image.
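Roughly, my setup looks like this (a sketch; sensor.set_framebuffers() is how I understand the buffer count gets pinned, and the 0.70 threshold is just illustrative):

```python
import sensor, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.set_framebuffers(1)               # single frame buffer, as noted above
sensor.skip_frames(time=2000)

template = image.Image("/template.pgm")  # the 70x60 stored image

while True:
    img = sensor.snapshot()
    r = img.find_template(template, 0.70, search=image.SEARCH_EX)
    if r:
        img.draw_rectangle(r)
```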

Interesting. I will try that, could be neat.