Differentiating between colors where one is a subset of the other

Howdy!

We’re having a great time with your H7 cameras - they are fast and provide a lot of power for our small project.

Can you provide any guidance on how to differentiate between objects where the thresholds for one are a subset of another?

We’re using color tracking for an Orange ball on a 3 meter long green field. By itself, this works great and is almost 100% accurate at VGA resolution.

However, we have not been able to find a way to design non-overlapping thresholds for the Orange ball when we need to also track a Yellow target that the ball needs to be moved into - see histograms below.



We broke this into two separate find_blobs() calls so we could set different search parameters for the Orange ball versus the Yellow rectangular target. We also tried putting both thresholds in a single find_blobs() call and separating the blobs by decoding the blob.code() bit mask - but got the same result: the Yellow target tracks well, but the ball is only trackable in isolation.
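For reference, the single-call version looked roughly like this (the LAB tuples here are placeholders, not our real thresholds):

    # Both thresholds in one find_blobs() call. blob.code() is a bit mask where
    # bit 0 means the blob matched thresholds[0] (Orange) and bit 1 means it
    # matched thresholds[1] (Yellow). The LAB values below are placeholders.
    thresholds_Ball = (40, 80, 20, 80, 30, 80)
    thresholds_Yellow = (50, 90, -10, 40, 40, 90)

    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds_Ball, thresholds_Yellow],
                               pixels_threshold=3, merge=True, margin=5):
        if blob.code() & 0x01:  # matched the Orange threshold
            print("ball candidate at", blob.cx(), blob.cy())
        if blob.code() & 0x02:  # matched the Yellow threshold
            print("target candidate at", blob.cx(), blob.cy())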

Sadly, we have no control over the lighting - so we need to be tolerant of slightly varying light levels and shadows.

Our code for finding the Yellow target is as follows, and is working well:

  for blob in img.find_blobs([thresholds_Yellow], pixels_threshold=50, area_threshold=50, merge=True, margin=50):

Our code for finding the Orange ball is as follows, and also works well when the Yellow target is not present. Since we need to locate the ball at a 3 meter distance - where it appears as only a few pixels - we’ve set the pixels_threshold low.

This works well except when the Yellow target is in the frame - then this routine detects the target as well.

for blob in img.find_blobs([thresholds_Ball], pixels_threshold=3, merge=True, margin=5):

We’ve tried many alternative blob properties to try to differentiate between the two objects, including Roundness, Elongation, Compactness and Density - but have not been able to get consistent, useful readings. For example, the Roundness numbers for the Yellow target often exceed those of the ball - perhaps because the Orange ball blobs are seldom ball-shaped (the light gradients on the ball make identifying the entire ball difficult) and/or because of distortion in our optical field. Elongation values for the Yellow target are also all over the map.
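For completeness, this is the kind of per-blob shape check we experimented with - the 0.5 cutoffs are arbitrary examples, not values that ever worked for us:

    img = sensor.snapshot()
    for blob in img.find_blobs([thresholds_Ball], pixels_threshold=3, merge=True, margin=5):
        print("roundness", blob.roundness(), "elongation", blob.elongation(),
              "compactness", blob.compactness(), "density", blob.density())
        # Example filter only - these cutoffs never separated the ball from the
        # target reliably in our tests.
        if blob.roundness() > 0.5 and blob.elongation() < 0.5:
            img.draw_rectangle(blob.rect())  # mark as a possible ball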

We had some success comparing the a-channel spread of the candidate blob’s histogram against the a-channel spread of our ball thresholds - but this is not reliable. The code below is likely a bad idea - we’re posting it just to show the desperate attempts we’ve made to solve this challenge.

    r = blob.rect()  # area of the image that the potential Orange ball blob occupies, so we can use the histogram function
    histBall = img.get_histogram(roi=r)  # get a histogram of that area
    hi = histBall.get_percentile(0.99)
    lo = histBall.get_percentile(0.01)
    print(lo.a_value(), hi.a_value())

    # Compare the a-channel range of the ideal ball thresholds with the observed blob
    # and mark this as a 'best' candidate if it is close enough
    if abs(abs(thresholds_Ball[3] - thresholds_Ball[2]) - abs(hi.a_value() - lo.a_value())) < 15:
        BestZ[IndexBall] = blob.pixels()
        BestX[IndexBall] = blob.cx()
        BestY[IndexBall] = blob.cy()

Any suggestions you can provide to help track the Orange ball when the Yellow target is around are greatly appreciated.

Thanks,

Brainstorming other ways of finding the Orange ball in proximity to the Yellow target, we tried the find_circles example.

After trying an array of different options, we managed to find everything EXCEPT our ball. Also, moving to grayscale causes other problems for our project - but it would be an option if it found the ball reliably.

Any ideas are appreciated

Thanks,

Hi, these two colors look pretty much the same to me - maybe blob.count() will help, or find_circles() for the orange ball.

EDIT: based on your reply, it looks like you need to play with the find_circles thresholds - it’s returning too many false positives.

Hi, thanks for the detailed questions. I’m on a retreat right now and only have cell service for quick periods. I’ll be able to provide a longer answer in 2 days.

In the meantime. First, go to the sensor routines and change the exposure to be much longer. This will greatly improve the image quality (at the cost of some FPS). See the sensor control examples.
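Something along these lines - the 2x factor is only an example and will need tuning for your lighting:

    import sensor

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.VGA)
    sensor.skip_frames(time=2000)
    sensor.set_auto_gain(False)      # must be off for color tracking
    sensor.set_auto_whitebal(False)  # must be off for color tracking
    # Lock the exposure to something longer than the auto value - the 2x factor
    # is only an example, tune it for your lighting.
    sensor.set_auto_exposure(False, exposure_us=sensor.get_exposure_us() * 2)
    sensor.skip_frames(time=500)     # let the new exposure settle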

Next, use histogram equalization (histeq()) in either non-adaptive or adaptive mode to pump up the image contrast. This will somewhat remove the effect of lighting from the image, allowing you to set color thresholds better.

Then, use find_blobs() with new thresholds picked from the histeq’d image. Histeq will make the image look weird, so be ready for that.
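Roughly like this - the threshold tuple below is a placeholder that you’ll need to re-pick on the equalized image:

    img = sensor.snapshot()
    img.histeq()                                # non-adaptive equalization
    # img.histeq(adaptive=True, clip_limit=3)   # adaptive version - stronger, slower
    # Placeholder LAB threshold - re-tune it on the histeq'd image since the
    # colors will shift quite a bit.
    thresholds_Ball_eq = (40, 80, 20, 80, 30, 80)
    for blob in img.find_blobs([thresholds_Ball_eq], pixels_threshold=3, merge=True, margin=5):
        img.draw_rectangle(blob.rect())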

Finally, use find_circles() to distinguish the ball from everything that isn’t a ball.

To speed up find_circles(), only call it on the ROI of the objects you found above.

For example:

blobs = img.find_blobs([thresholds_Ball, thresholds_Yellow])
for b in blobs:
    cirs = img.find_circles(roi=b.rect())  # plus whatever circle parameters you need

Now you’ll need to make the ROI larger than just the blob’s bounding box. So, make an (x, y, w, h) tuple and then make it slightly larger in w/h in both directions than the rect tuple of the blob.

Since you’re only calling find_circles() on objects that were already found by color, you can lower the circle threshold to pull out what you need. Make sure you have the merge parameters running. If you find the orange ball, it should produce roughly 1 circle object, whereas the other object will produce more random circles.
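Putting that together, something like this - the padding and circle threshold are example numbers you’ll need to tune:

    PAD = 10  # example padding (pixels) added around each blob's rect

    img = sensor.snapshot()
    for b in img.find_blobs([thresholds_Ball, thresholds_Yellow],
                            pixels_threshold=3, merge=True, margin=5):
        x, y, w, h = b.rect()
        rx = max(0, x - PAD)
        ry = max(0, y - PAD)
        # Grow the ROI in every direction, clamped to the image bounds.
        roi = (rx, ry,
               min(img.width() - rx, w + 2 * PAD),
               min(img.height() - ry, h + 2 * PAD))
        # Lower threshold than the default 2000, since we already know a colored
        # object is here; the margins merge near-duplicate circles.
        circles = img.find_circles(roi=roi, threshold=1500,
                                   x_margin=10, y_margin=10, r_margin=10)
        if len(circles) == 1:
            print("likely the ball at", b.cx(), b.cy())
        else:
            print("likely the target:", len(circles), "circles")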

Understand that you will be working with lists of objects, and then lists of objects nested inside those.

Thank you again for the amazing information; very appreciated!!!

Thanks for getting us to look at exposure!

Perhaps we were having issues with too much light.

Reducing exposure seems to have helped improve colors and create more accurate thresholds. Still not perfect, but 100% better.

We’ll dig into the histogram equalization to improve further.

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA) # Sizes for the OpenMV OV7725 image sensor QVGA 320x240 VGA 640x480
sensor.set_windowing((WindowStartX,WindowStartY,WindowSizeX,WindowSizeY))  # front (right side of image sensor)
sensor.skip_frames(time = 2000) # skip frames to give camera time to adjust and warm up
sensor.set_auto_whitebal(False) # must be turned off for color tracking
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_exposure(False, sensor.get_exposure_us()-8000)
sensor.set_brightness(-3) #   -3 to +3

I’m working on a camera image quality update that will be committed this week. It reduces the frame rate a bit but 100% improves the default camera quality.