Proper use of the Threshold Editor to define LAB values

First, thanks to Kwabena for pointing me to the Threshold Editor under Tools → Machine Vision. My goal is to fine-tune the threshold values for the example script under Examples → 10-Color-Tracking. I am using a color swatch consisting of red and green adjacent to each other. In the example code, two sets of LAB threshold values are used with the “merge=True” parameter. The example script has generic red and green thresholds.

My question is about the proper use of the Threshold Editor to derive new values for the specific colors I am using. Should I expose the frame buffer to each of my colors (RED, then GREEN) and copy down the threshold values for each color separately, OR should I expose the swatch with both colors together and use that as a single set of threshold values, without invoking “merge=True” in the code?

Here are the values from my trials of both techniques. From a cursory examination, the combined set looks suspiciously like a “merged” set of the individual values. What do the authors of the Threshold Editor say?

RED alone yielded these min/max LAB values (L min, L max, A min, A max, B min, B max): (44, 82, -4, 79, -4, 53)
GREEN alone yielded: (58, 81, -38, 5, -1, 38)
With a frame buffer of both colors together, the Editor came up with: (53, 91, -56, 75, -5, 44)

Note that the third set looks a lot like a crude merge of the A and B ranges of each color alone. At least, I think so.
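If it helps to see the arithmetic, here is a quick sketch of what I mean by a “crude merge” — taking the per-channel min/max union of the two single-color tuples. This is just my guess at what the Editor might be doing, not its actual code:

```python
# Sketch: per-channel union of two (Lmin, Lmax, Amin, Amax, Bmin, Bmax) tuples.
# My own assumption about the "merge", not the Editor's real implementation.
red = (44, 82, -4, 79, -4, 53)
green = (58, 81, -38, 5, -1, 38)

# Even indices are channel minimums, odd indices are channel maximums.
union = tuple(min(red[i], green[i]) if i % 2 == 0 else max(red[i], green[i])
              for i in range(6))
print(union)  # (44, 82, -38, 79, -4, 53)
```

That union, (44, 82, -38, 79, -4, 53), is in the same ballpark as the Editor’s combined set of (53, 91, -56, 75, -5, 44), which is why I suspect a merge.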


Thanks for posting,

Okay, so, I believe your goal is to track a red/green object. In this case you’re actually trying to track two colors separately, but you want the camera to only tell you about red and green blobs whose bounding boxes overlap.

So, you want to use the Threshold Editor to find color bounds for the red and green objects separately — not too relaxed, but also not too tight. Once you have these bounds, you just need to pass them to the “find_blobs” function.

NOTE: Don’t make the thresholds very tight. If possible, try to leave the L value untouched; you should be able to threshold with just the A and B values. Additionally, try to make the thresholds as wide as possible. By using the color-code feature you can have wide thresholds and still get great noise rejection, since you’re looking for specific color codes. You can also use the pixels_threshold and area_threshold arguments of find_blobs to filter out noise.

So, for find blobs you want to do this:

blobs = img.find_blobs([(44, 82, -4, 79, -4, 53), (58, 81, -38, 5, -1, 38)], merge=True, margin=5)

If you called “find_blobs” without merge=True, it would just tell you where the red and green blobs are. With merge=True, it will tell you about red blobs, green blobs, and merged red/green blobs. A red/green blob is a separate red blob and green blob whose bounding boxes overlap each other within “margin”. In the example above, that means any red and green blobs within 5 pixels of each other get merged.
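As a rough illustration of the overlap test (a plain-Python sketch of the idea, not the actual find_blobs implementation), two boxes merge when they still overlap after being grown by “margin”:

```python
# Sketch: bounding boxes as (x, y, w, h); the merge test grows each box by `margin`.
# This is an illustration of the concept, not the camera firmware's real code.
def boxes_overlap(a, b, margin):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax - margin < bx + bw and bx - margin < ax + aw and
            ay - margin < by + bh and by - margin < ay + ah)

red   = (10, 10, 20, 20)  # spans x = 10..30
green = (33, 12, 20, 20)  # starts 3 px past red's right edge
print(boxes_overlap(red, green, 5))  # True: the 3 px gap is within a 5 px margin
print(boxes_overlap(red, green, 2))  # False: the gap is larger than 2 px
```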

Again, merge=True just checks for bounding-box overlap between blobs and then merges the bounding boxes, centroids, etc. of any blobs it finds overlapping. “margin” controls how far apart blobs can be and still be merged.

Anyway, once you have the list of blobs, you just check the “code()” of each blob to determine whether it’s a red blob, a green blob, or a merged red/green blob. Basically, “find_blobs” gives each color in the list of thresholds you passed it a binary bit mask: the first color gets the binary value 1, the second color 2, the third color 4, and so on, for up to 16 colors, where the 16th color threshold has the value 1<<15, or 32768.
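In plain Python, the numbering scheme looks like this (just a sketch of the bit masks described above):

```python
# Sketch: each threshold at index i gets the bit mask 1 << i.
codes = [1 << i for i in range(16)]
print(codes[0])   # 1     -- first threshold (red in our example)
print(codes[1])   # 2     -- second threshold (green in our example)
print(codes[15])  # 32768 -- sixteenth threshold, i.e. 1 << 15
```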

When blobs are merged, their codes are OR’ed together. So, for the red and green blobs above, the merged blob gets a code of “3”, because the red blob had a code of 1 and the green blob had a code of 2.

So, just ignore all blobs with a code value other than 3.
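Your filtering loop might look like this (a sketch — I’m mocking the blob objects here since there’s no camera attached, but on the OpenMV the list would come from img.find_blobs(...) and the blobs would be real Blob objects):

```python
# Sketch: keep only merged red/green blobs (code == 3).
# MockBlob stands in for the real Blob object returned by find_blobs().
class MockBlob:
    def __init__(self, code, cx, cy):
        self._code, self._cx, self._cy = code, cx, cy
    def code(self): return self._code
    def cx(self): return self._cx
    def cy(self): return self._cy

blobs = [MockBlob(1, 10, 10),   # red only
         MockBlob(2, 50, 50),   # green only
         MockBlob(3, 30, 30)]   # merged red/green

targets = [b for b in blobs if b.code() == 3]
for b in targets:
    print(b.cx(), b.cy())  # 30 30 -- only the merged blob survives the filter
```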

Thanks. I think I have it; I’d better read up on the Blob object (and its methods). On the luminance values, I thought maybe I was being too restrictive. Should I leave L at 0 and 100, or tighten it down just a bit? I assume the lighting, the distance between the camera and the objects, and the ambient light all affect the L value. Is that correct?


To the above, yes.

However, dark things don’t really have colors; they all have low L values, and A/B go to zero. So, you may wish to raise the L floor threshold. The L ceiling threshold doesn’t need to change, however, because when L is near its maximum, A and B are still quite valid.

Look at some LAB color space images to see what I mean. Basically, as the lighting goes down, A and B both go to zero, so in general you don’t want to let in small L values.
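For example, if the Editor gave you a tuple with a wide-open L range, you could raise just the floor like this (the floor of 20 is an arbitrary starting point, not a recommendation — tune it for your lighting):

```python
# Sketch: raise only the L floor of an LAB threshold tuple.
threshold = (0, 100, -4, 79, -4, 53)  # wide-open L range from the editor
L_FLOOR = 20                          # assumed value; tune for your scene

adjusted = (max(threshold[0], L_FLOOR),) + threshold[1:]
print(adjusted)  # (20, 100, -4, 79, -4, 53)
```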

That said, exactly where to put that threshold is unknown. If you want the best color tracking performance, you really have to control the lighting. Whenever you see color tracking used in robotics, like for robot soccer, they play in a well-lit environment with very saturated color swatches.