Function to Normalize Color Image/Adaptive Thresholding?

I am attempting to detect various shapes with blob detection, but I've noticed that, depending on the light levels when the camera is initialized, the resulting image can look very different (i.e. sometimes brighter, sometimes darker), which in turn causes any hardcoded thresholds to stop working as well as they did before.

I assume that if I normalize the color image prior to running blob detection, it will be a bit more robust at detecting the blobs I'm looking for regardless of light variation (within reason, of course). Is there a function to perform this? I couldn't find anything in the documentation. Furthermore, if that is not possible, does anyone have any good resources that deal with adaptive thresholding?

Many thanks in advance for any advice and assistance offered! ~(-_- )~

Hi, it’s not really possible to have consistent color tracking thresholds. Color is dependent on lighting. If the lighting changes then the thresholds have to change. There’s no way around this.

The only method that will truly work is to create different thresholds based on the total global scene lighting and use different thresholds as the lighting changes.
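A minimal sketch of that idea: measure the average brightness of a grayscale snapshot, bucket it into a lighting level, and use a threshold set calibrated for that level. The bucket boundaries and the LAB threshold tuples below are made-up placeholders you would tune for your own scene, and `pick_thresholds` is a hypothetical helper, not a library function.

```python
def mean_brightness(gray_pixels):
    """Average intensity of a grayscale image given as a flat list of 0-255 values."""
    return sum(gray_pixels) / len(gray_pixels)

# Hypothetical LAB threshold tuples (L_min, L_max, A_min, A_max, B_min, B_max),
# each set calibrated ahead of time under dark, normal, and bright lighting.
THRESHOLDS_BY_LIGHTING = {
    "dark":   [(10, 60, 20, 80, -10, 40)],
    "normal": [(30, 80, 15, 70, -15, 35)],
    "bright": [(50, 100, 10, 60, -20, 30)],
}

def pick_thresholds(gray_pixels):
    """Select the threshold set matching the current global scene lighting."""
    avg = mean_brightness(gray_pixels)
    if avg < 80:           # placeholder cutoffs; tune for your camera/exposure
        return THRESHOLDS_BY_LIGHTING["dark"]
    elif avg < 170:
        return THRESHOLDS_BY_LIGHTING["normal"]
    return THRESHOLDS_BY_LIGHTING["bright"]
```

In the camera loop you'd recompute the brightness every frame (or every few frames) and pass the selected tuples to your blob-detection call, so the thresholds track the lighting instead of fighting it.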

Now, yes, we use LAB thresholds, where L is lightness and A/B should be independent of lighting. However, while there is some separation, it's not 100%.
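To make that separation concrete, here is a pure-Python version of the standard sRGB-to-CIELAB conversion (D65 white point). It isn't necessarily the exact math the camera firmware uses, but it shows the property being described: for a gray pixel, dimming or brightening moves L while A and B stay at zero, whereas for saturated colors the A/B values drift somewhat with brightness too.

```python
def srgb_to_lab(r, g, b):
    """Convert an sRGB pixel (0-255 per channel) to CIELAB (D65 reference white)."""
    def lin(c):
        # Undo the sRGB gamma curve to get linear light.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB -> CIE XYZ (sRGB matrix, D65).
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # Normalize by the D65 white point.
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883

    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

    fx, fy, fz = f(x), f(y), f(z)
    L = 116.0 * fy - 16.0          # lightness, 0..100
    a = 500.0 * (fx - fy)          # green-red axis
    b_out = 200.0 * (fy - fz)      # blue-yellow axis
    return L, a, b_out
```

For example, `srgb_to_lab(255, 255, 255)` gives roughly (100, 0, 0) and `srgb_to_lab(128, 128, 128)` gives a much lower L with A/B still near zero, which is why thresholding on A/B is *more* lighting-tolerant than RGB, just not perfectly so.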