I am attempting to detect various shapes with blob detection, but I noticed that the resulting image can vary quite a bit depending on the light level when the camera is initialized (i.e. sometimes brighter, sometimes darker), which in turn causes any hardcoded thresholds to stop working as well as they did before.
I assume that if I normalize the color image before running blob detection, it will be more robust at detecting the blobs I’m looking for regardless of light variation (within reason, of course). Is there a function to perform this? I couldn’t find anything in the documentation. If that is not possible, does anyone have any good resources that deal with adaptive thresholding?
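For context, here is roughly the kind of preprocessing I have in mind, sketched with plain NumPy. The helper names `normalize_minmax` and `adaptive_threshold_mean` are my own (not from any library), and the mean-based local threshold is just a simple stand-in for whatever proper adaptive thresholding would look like:

```python
import numpy as np

def normalize_minmax(img: np.ndarray) -> np.ndarray:
    """Stretch pixel intensities to the full [0, 255] range so that a
    uniformly brighter or darker capture of the same scene normalizes
    to the same image. (Hypothetical helper, not a library function.)"""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: nothing to stretch
        return np.zeros(img.shape, dtype=np.uint8)
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def adaptive_threshold_mean(img: np.ndarray, block: int = 15, c: float = 5.0) -> np.ndarray:
    """Threshold each pixel against the mean of its local block x block
    neighborhood, minus a small constant c. Slow reference version,
    just to show the idea behind adaptive thresholding."""
    pad = block // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.uint8)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + block, x:x + block].mean()
            out[y, x] = 255 if img[y, x] > local_mean - c else 0
    return out
```

The appeal of normalization here is that two captures of the same scene at different exposure levels should end up identical after `normalize_minmax`, so a single hardcoded threshold could work on both; adaptive thresholding instead sidesteps the global threshold entirely by comparing each pixel to its neighborhood.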
Many thanks in advance for any advice and assistance offered! ~(-_- )~