Detect 1 of 3 colors from 3 input pins, raise 1 of 3 output pins if observed

Can you start a forum thread so I can post the sample code online for others to see? It’s quite simple and I can do this tomorrow for you.



Sent: Thursday, May 24, 2018 10:30 PM
To: openmv@openmv.io
Subject: [OpenMV] Request Sample Code

Hi,

I saw your openMV M7 board at the Maker Faire, bought it right away, and it came today.
At the Maker Faire, I got a demo of the camera looking for blue, finding blue, showing where blue was found, and raising an I/O pin.
I checked the examples in the openMV IDE and I don't see that same example.
Is there a community with samples and examples?

I want to signal the camera: pin X HIGH means look for blue. When, and only if, blue is found, pin Y goes HIGH.
Next, pin X2 HIGH means look for green; green found, pin Y2 goes HIGH.
Next, pin X3 HIGH means look for red; red found, pin Y3 goes HIGH.
Just 3 colors should do.

I am working on a latency (time) measurement project. I want my Arduino to start a clock and immediately ask the M7 (by raising a pin HIGH) to look for a particular color. When/if the color is observed, the M7 signals back to the Arduino (by raising a pin HIGH). The Arduino stops the clock and calculates the elapsed time. 3 input pins to the M7, 1 pin each for 3 colors (RED, GREEN, BLUE), plus 3 output pins back to the Arduino: a total of 6 M7 pins. The video source will be the display of an Android phone. I pretty much saw this demoed at the Maker Faire, so I'm trying to jump-start this project.

Tangential question: I observe that the M7 is unable (with example code) to see colors from a LiFX Color LED bulb. The M7 doesn't see the most wonderful Red, Green, Blue. Why? Is there a way to correct for this? I mean, the human eye sees Red, Green, Blue, so shouldn't the M7?



Summary

Raise Pin 1 (input to M7): look for BLUE; when BLUE is seen/found, raise Pin 4 (output from M7)
Lower Pin 1: look for nothing
Raise Pin 2 (input to M7): look for GREEN; when GREEN is seen/found, raise Pin 5 (output from M7)
Lower Pin 2: look for nothing
Raise Pin 3 (input to M7): look for RED; when RED is seen/found, raise Pin 6 (output from M7)
Lower Pin 3: look for nothing
(See the pin-setup sketch below.)
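
A minimal pin-setup sketch of this handshake, assuming the pin assignments above map to P1-P3 (inputs) and P4-P6 (outputs); these names are placeholders, not a tested wiring:

from pyb import Pin

# Inputs from the Arduino: HIGH = start looking for that color.
look_blue  = Pin("P1", Pin.IN, Pin.PULL_DOWN)
look_green = Pin("P2", Pin.IN, Pin.PULL_DOWN)
look_red   = Pin("P3", Pin.IN, Pin.PULL_DOWN)

# Outputs back to the Arduino: raised when the color is seen.
found_blue  = Pin("P4", Pin.OUT_PP)
found_green = Pin("P5", Pin.OUT_PP)
found_red   = Pin("P6", Pin.OUT_PP)

for p in (found_blue, found_green, found_red):
    p.value(0)  # start with all FOUND outputs LOW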

A couple of pointers would be appreciated. I have a narrow immediate need that if I solve now, it will give me more time to learn later.

Thanks

Robert

Please use the threshold editor under Tools->Machine Vision to get the right color bounds for tracking. Alternatively, if you highlight the 6-value tuple in the script, the IDE will show you an option to edit it via the right-click menu.

# Multi Color Blob Tracking Example
#
# This example shows off multi color blob tracking using the OpenMV Cam.

import sensor, image, time
from pyb import LED

red_led   = LED(1)
green_led = LED(2)
blue_led  = LED(3)
ir_led    = LED(4)

def led_control(x):
    if   (x&1)==0: red_led.off()
    elif (x&1)==1: red_led.on()
    if   (x&2)==0: green_led.off()
    elif (x&2)==2: green_led.on()
    if   (x&4)==0: blue_led.off()
    elif (x&4)==4: blue_led.on()
    if   (x&8)==0: ir_led.off()
    elif (x&8)==8: ir_led.on()

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green things. You may wish to tune them...
thresholds = [(30, 100, 15, 127, 15, 127), # generic_red_thresholds (code 1)
              (30, 100, -64, -8, -32, 32), # generic_green_thresholds (code 2)
              (0, 15, 0, 40, -80, -20)] # generic_blue_thresholds (code 4)
# You may pass up to 16 thresholds above. However, it's not really possible to segment any
# scene with 16 thresholds before color thresholds start to overlap heavily.

# See the blob documentation about codes. They are basically bit masks for the above
# color positions in the list of colors.

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change the
# camera resolution. Don't set "merge=True" because that will merge blobs, which we don't want here.

while(True):
    clock.tick()
    img = sensor.snapshot()
    blob_list = img.find_blobs(thresholds, pixels_threshold=200, area_threshold=200)
    
    red_bool = 0
    green_bool = 0
    blue_bool = 0

    for b in blob_list:
        if b.code() & 1: # red blob
            red_bool = 1
            img.draw_rectangle(b.rect(), color=(255,0,0), thickness=2)
            img.draw_cross(b.cx(), b.cy(), color=(255,0,0), thickness=2)
        if b.code() & 2: # green blob
            green_bool = 2
            img.draw_rectangle(b.rect(), color=(0,255,0), thickness=2)
            img.draw_cross(b.cx(), b.cy(), color=(0,255,0), thickness=2)
        if b.code() & 4: # blue blob
            blue_bool = 4
            img.draw_rectangle(b.rect(), color=(0,0,255), thickness=2)
            img.draw_cross(b.cx(), b.cy(), color=(0,0,255), thickness=2)
        
    led_control(red_bool | green_bool | blue_bool)

    print(clock.fps())

Thanks for the start. The right-click tip was very very helpful.

At first I made revisions to 'find' only a single color at a time. It drew a nice rectangle and calculated the elapsed time from request-to-look-for-color to time-color-was-seen.

But as I added more stubs for interfacing to the input and output pins, I stopped getting the annotated graphics in the IDE frame buffer display. I suspect I'm tired, or I am about to learn about some command to update a stale frame buffer.

My intent is to look for each color for 10 seconds, and display in the frame buffer either color-not-found after 10 seconds or the elapsed time to find the color. The print output looks OK; the frame buffer looks bad.

Here is my current code. I'm keen to learn why the draw commands no longer work, or blink by super quickly when they should persist during the delay(500).

# Multi Color Blob Tracking Example
#
# This example shows off multi color blob tracking using the OpenMV Cam.

import sensor, image, time
from pyb import LED
import pyb

red_led   = LED(1)
green_led = LED(2)
blue_led  = LED(3)
ir_led    = LED(4)

def led_control(x):
    if   (x&1)==0: red_led.off()
    elif (x&1)==1: red_led.on()
    if   (x&2)==0: green_led.off()
    elif (x&2)==2: green_led.on()
    if   (x&4)==0: blue_led.off()
    elif (x&4)==4: blue_led.on()
    if   (x&8)==0: ir_led.off()
    elif (x&8)==8: ir_led.on()

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green things. You may wish to tune them...
thresholds = [( 22,  64,   18,  127,  -32,   43), # generic_red_thresholds (code 1)
              ( 25, 100,  -69,  -25,  -42,   26), # generic_green_thresholds (code 2)
              (  5,  45, -101,   21,  -97,  -12)] # generic_blue_thresholds (code 4)

thresholdRed = [( 22,  64,   18,  127,  -32,   43)] # generic_red_thresholds (code 1)

thresholdGreen = [( 25, 100,  -69,  -25,  -42,   26)] # generic_green_thresholds (code 1)

thresholdBlue = [(  5,  45, -101,   21,  -97,  -12)] # generic_blue_thresholds (code 1)

# You may pass up to 16 thresholds above. However, it's not really possible to segment any
# scene with 16 thresholds before color thresholds start to overlap heavily.

# See the blob documentation about codes. They are basically bit masks for the above
# color positions in the list of colors.

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change the
# camera resolution. Don't set "merge=True" because that will merge blobs, which we don't want here.


# define input pin look red
# define input pin look green
# define input pin look blue
# define output pin found red
# define output pin found green
# define output pin found blue

lookRed = False
lookGreen = False
lookBlue = False
foundRed = False
foundGreen = False
foundBlue = False
oldLookRed = False
oldLookGreen = False
oldLookBlue =  False
start = pyb.millis()
red_bool = 0
green_bool = 0
blue_bool = 0

fauxStart = pyb.millis()
oneOfThree = 0
elapsed = 0  # from request to look to time 1 of 3 colors found

print ("")
print ("Look for only 1 color based on HIGH input pin")

while(True):

    clock.tick()  # learn what a clock.tick does
    img = sensor.snapshot()

    #lookRed = pyb.Pin(pyb.Pin.board.P1, pyb.Pin.IN)      # which color to look for
    #lookGreen = pyb.Pin(pyb.Pin.board.P2, pyb.Pin.IN)
    #lookBlue = pyb.Pin(pyb.Pin.board.P3, pyb.Pin.IN)




    if (pyb.elapsed_millis(start) > 10000): # fake the look signal from main arduino
        if ((red_bool == 0) and (lookRed == True)):
            print ("NO RED")
            img.draw_string(20, 50, "NO RED", color=(255,0,0), scale =4, mono_space=True )
            pyb.delay(500)
        if ((green_bool == 0) and (lookGreen == True)):
            print ("NO GREEN")
            img.draw_string(20, 50, "NO GREEN", color=(0,255,0), scale =4, mono_space=True )
            pyb.delay(500)
        if ((blue_bool == 0) and (lookBlue == True)):
            print ("NO BLUE")
            img.draw_string(20, 50, "NO BLUE", color=(0,0,255), scale =4, mono_space=True )
            pyb.delay(500)

        oneOfThree = oneOfThree + 1  # fake the look signal from main arduino
        if (oneOfThree  == 1) :
            lookRed = True
            lookGreen = False
            lookBlue = False
            start = pyb.millis()
            print (start)
            print ("RED")
        if (oneOfThree == 2):
            lookGreen = True
            lookRed = False
            lookBlue = False
            start = pyb.millis()
            print (start)
            print ("GREEN")
        if (oneOfThree == 3):
            oneOfThree = 0
            lookBlue = True
            lookRed = False
            lookGreen = False
            start = pyb.millis()
            print (start)
            print ("BLUE")


    oldLookRed = lookRed   # save I may need to know if this is a new-look or old-look
    oldLookGreen = lookGreen
    oldLookBlue =  lookBlue



    if lookRed: # if look for RED PIN HIGH
        clock.tick()
        img = sensor.snapshot()
        blob_list = img.find_blobs(thresholdRed, x_stride=10, y_stride=10, pixels_threshold=20, area_threshold=1500)

        red_bool = 0
        green_bool = 0
        blue_bool = 0

        for b in blob_list:
            if b.code() & 1: # red blob
                red_bool = 1
                img.draw_rectangle(b.rect(), color=(255,0,0), thickness=5)
                img.draw_cross(b.cx(), b.cy(), color=(255,0,0), thickness=5)

                foundRed = True
                print ("RED FOUND")
                elapsed = pyb.elapsed_millis(start)
                print (elapsed)
                img.draw_string(20, 50, str(elapsed), color=(255,0,0), scale =4, mono_space=True )
                lookRed = False
                #pyb.delay(5000)
        #print ("")
        #pyb.delay(500)
        #latch the RED FOUND PIN HIGH



    if lookGreen:  # look for GREEN PIN HIGH
        clock.tick()
        img = sensor.snapshot()
        blob_list = img.find_blobs(thresholdGreen, x_stride=10, y_stride=10, pixels_threshold=20, area_threshold=1500)

        red_bool = 0
        green_bool = 0
        blue_bool = 0

        for b in blob_list:
            if b.code() & 1: # green blob
                green_bool = 1
                img.draw_rectangle(b.rect(), color=(0,255,0), thickness=5)
                img.draw_cross(b.cx(), b.cy(), color=(0,255,0), thickness=5)
                foundGreen = True
                print ("GREEN FOUND")
                elapsed = pyb.elapsed_millis(start)
                print (elapsed)
                img.draw_string(20, 50, str(elapsed), color=(0,255,0), scale =4, mono_space=True )
                lookGreen = False
                #pyb.delay(5000)
        #print ("")
        #pyb.delay(500)
        #latch the GREEN FOUND PIN HIGH




    if lookBlue: #look for BLUE PIN HIGH
        clock.tick()
        img = sensor.snapshot()
        blob_list = img.find_blobs(thresholdBlue, x_stride=10, y_stride=10, pixels_threshold=20, area_threshold=1500)

        red_bool = 0
        green_bool = 0
        blue_bool = 0

        for b in blob_list:
            if b.code() & 1: # blue blob
                blue_bool = 1
                img.draw_rectangle(b.rect(), color=(0,0,255), thickness=5)
                img.draw_cross(b.cx(), b.cy(), color=(0,0,255), thickness=5)
                foundBlue = True
                print ("BLUE FOUND")
                elapsed = pyb.elapsed_millis(start)
                print (elapsed)
                img.draw_string(20, 50, str(elapsed), color=(0,0,255), scale =4, mono_space=True )
                lookBlue = False
                #pyb.delay(5000)
        #print ("")
        #pyb.delay(500)
        # latch the BLUE found PIN HIGH

Hi, sensor.snapshot() is what updates the display. It flushes the previous image and grabs a new one. If it's not called, the display doesn't update.

You can use sensor.flush() to flush the frame buffer at any time. Note that the IDE still has to grab said flushed frame buffer over USB via polling. So, the camera's script needs to keep executing for a second or two for the IDE to grab the frame if that's the last command in your script. If your script is a loop, then you don't need to worry about waiting after sensor.flush().
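
For example, a minimal sketch of that pattern (the drawn text and timings are arbitrary): draw on the snapshot, flush it, then keep the script alive so the IDE's polling can grab the frame:

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)

img = sensor.snapshot()
img.draw_string(20, 50, "HELLO", color=(255,0,0), scale=4)
sensor.flush()       # push the drawn-on frame to the IDE
time.sleep_ms(2000)  # give the IDE a second or two to grab it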

Ok I follow that.

Why then does

               
               img.draw_rectangle(b.rect(), color=(255,0,0), thickness=5)
               img.draw_cross(b.cx(), b.cy(), color=(255,0,0), thickness=5)

and

               
               img.draw_string(20, 50, str(elapsed), color=(255,0,0), scale =4, mono_space=True )

not draw in the MV IDE after

               img = sensor.snapshot()

I'm an MV noob, but my thinking is sensor.snapshot() sets a background in the IDE frame buffer and draw_rectangle (_cross, _string, etc.) paints on top of that background. In practice I don't see that happening, so I need to learn something more.

Hi, they should. That said, it's hard to follow your code since you have timeouts and whatnot added in. Could you pare it down to just the error case?

I’m out right now and don’t have the ability to test it.

I'll look at that later. Quirky MV IDE frame buffer behaviour isn't essential to the app; I just hoped it would be easy out of the box.

A more critical question: I observe BLUE from the source, but the MV IDE frame buffer isn't close to blue. The source is a perfect BLUE, but the MV IDE frame buffer consistently looks more green. The RED source doesn't resemble the MV IDE frame buffer image either.


Did I miss a color calibration procedure for MV Camera?

Are my expectations too high for $65 part?

Hi, the color is based on what auto white balance does to the image. When you point the camera at a colored object, it changes the color of the image to make things look gray. If you don't want this to happen, then you need to turn auto white balance off at the very start of the script. In most scripts I let this run for about 2 seconds. However, you don't want that if you are staring at a colored object right off the bat.

Additionally, once you lock in your color settings you may wish to save/restore white balance settings:

http://docs.openmv.io/library/omv.sensor.html?highlight=white%20balance#sensor.sensor.set_auto_whitebal

http://docs.openmv.io/library/omv.sensor.html?highlight=white%20balance#sensor.sensor.get_rgb_gain_db

By calling the get-gain method you can see the camera gains, and then you can apply them again the next time the script runs. I.e., take the tuple returned by get_rgb_gain_db() and pass it as the rgb_gain_db argument of set_auto_whitebal(False, rgb_gain_db=(tuple)).
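
A sketch of that record/reapply flow; run it once to read the settled gains, then hard-code the printed tuple on later runs:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)      # let auto white balance settle once
gains = sensor.get_rgb_gain_db()     # (r_gain_db, g_gain_db, b_gain_db)
print(gains)                         # copy these numbers down...

# ...then on subsequent runs, lock them in immediately:
sensor.set_auto_whitebal(False, rgb_gain_db=gains)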

As for the frame buffer, the behavior is pretty good by default. Let me fix your script up. I'm at home now.

Thanks, I’m working tonight too.
So so so close to getting what I need.

Hi, write the look code like this:

if lookRed: # if look for RED PIN HIGH
        clock.tick()
        img = sensor.snapshot()
        blob_list = img.find_blobs(thresholdRed, x_stride=10, y_stride=10, pixels_threshold=20, area_threshold=10)

        red_bool = 0
        green_bool = 0
        blue_bool = 0

        for b in blob_list:
            if b.code() & 1: # red blob
                red_bool = 1
                img.draw_rectangle(b.rect(), color=(255,0,0), thickness=5)
                img.draw_cross(b.cx(), b.cy(), color=(255,0,0), thickness=5)

                foundRed = True
                print ("RED FOUND")
                elapsed = pyb.elapsed_millis(start)
                print (elapsed)
                img.draw_string(20, 50, str(elapsed), color=(255,0,0), scale =4, mono_space=True )
                lookRed = False
        sensor.flush()
        pyb.delay(5000)

I.e. call flush() and then delay outside of the loop.

As for the gain stuff:

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_auto_whitebal(False) # must be turned off for color tracking
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking

This turns auto white balance right off at the start.

As for the JPG quality: man, Ibrahim really turned it down on the F7. The H7 looks so much better with the HW JPG compressor. We definitely need to get the new board done soon. Anyway, if you want to force a higher JPG quality, do this:

        img.compress(80)
        sensor.flush()
        pyb.delay(5000)

Note, if you make the quality too high, the image doesn't fit in the internal JPG buffer and the camera won't transfer it to the IDE. This failure is silent. So, stay at 80% quality or something lower like 70%. The default quality is less than 50% to reduce the CPU overhead of JPG compressing images in software.

Here’s how all this stuff works:

There's a frame buffer of about 350+ KB. That frame buffer stores the image captured from the camera's stream of images when you call snapshot(). Additionally, whatever space is left over is used for a fast frame_buffer stack we use to hold data structures for algorithms. Making the image size larger reduces this space. Anyway, this type of architecture is what lets us run things like AprilTags on a microcontroller, which would normally require 50 MB+ of RAM.

There's also a heap of about 128 KB for MicroPython and the objects, lists, etc. that you have in your script.

Next, when snapshot or flush is called, we JPG compress the image and then transfer it to a JPG buffer (~24 KB) that the IDE will read out of at its leisure. If the image is already compressed, it just gets transferred. Anyway, if the image doesn't fit in the JPG buffer, then it's ignored and skipped. We have an auto JPG quality algorithm that will reduce the JPG quality so that the next compressed image is smaller and fits. If the image fits, then it ups the quality (up to a max level) for the next frame.

Note that while the IDE is reading out the frame, the main loop doesn't have to JPG compress data for the IDE, since the JPG buffer is locked.

All buffer sizes on the H7 are larger… thus allowing for better everything. (256 KB heap, 512 KB FB, and 128 KB jpg buffer). We also have a hardware JPG compressor on the H7 which lets us increase the JPG quality by default with no FPS hit.

Thanks, I went zero to 60 in 1 weekend.

I had already applied the white-balance suggestion when I raised the color mis-calibration observation. I see others have made color calibration comments. Can't have it all.
I added flush()
I wasn't sure where to do compression, so I added it in a few places.
I shrunk the image size smaller and bumped up the quality.
I removed all the draw this and that into the frame buffer.
I haven't done anything with gain yet, so the colors shown in the MV IDE don't match the real-world source. Not essential for my app. I have to use green or blue but not both. MV sees my temporary green source as equal to the blue source. I've played and played with the LAB threshold settings. I may just use detection of red and detection of blue for my project; MV seems to see red as different from blue. I will still have to make threshold adjustments when the genuine color source comes online.

You have a good concept here. The H7 will need to be checked out.
BTW: is there something special with pin P3? I haven't gotten it to accept a rising signal like I can with P1 and P2.


# Multi Color Blob Tracking Example
#
# This example shows off multi color blob tracking using the OpenMV Cam.

import sensor, image, time
from pyb import LED
import pyb
from pyb import Pin

red_led   = LED(1)
green_led = LED(2)
blue_led  = LED(3)
ir_led    = LED(4)

def led_control(x):
    if   (x&1)==0: red_led.off()
    elif (x&1)==1: red_led.on()
    if   (x&2)==0: green_led.off()
    elif (x&2)==2: green_led.on()
    if   (x&4)==0: blue_led.off()
    elif (x&4)==4: blue_led.on()
    if   (x&8)==0: ir_led.off()
    elif (x&8)==8: ir_led.on()

# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green things. You may wish to tune them...
thresholds = [( 22,  64,   18,  127,  -32,   43), # generic_red_thresholds (code 1)
              ( 25, 100,  -69,  -25,  -42,   26), # generic_green_thresholds (code 2)
              (  5,  45, -101,   21,  -97,  -12)] # generic_blue_thresholds (code 4)

thresholdRed = [( 81,  94,  -40,   -8,   78,   95)] # generic_red_thresholds (code 1)

thresholdGreen = [( 67,  96,  -23,    0,  -10,    5)] # generic_green_thresholds (code 1)

thresholdBlue = [( 75,  89,  -34,   -2,  -41,  -18)] # generic_blue_thresholds (code 1)

# You may pass up to 16 thresholds above. However, it's not really possible to segment any
# scene with 16 thresholds before color thresholds start to overlap heavily.

# See the blob documentation about codes. They are basically bit masks for the above
# color positions in the list of colors.

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Only blobs with more pixels than "pixels_threshold" and more area than "area_threshold" are
# returned by "find_blobs" below. Change "pixels_threshold" and "area_threshold" if you change the
# camera resolution. Don't set "merge=True" because that will merge blobs, which we don't want here.


lookRed = False
lookGreen = False
lookBlue = False
foundRed = False
foundGreen = False
foundBlue = False
oldLookRed = False
oldLookGreen = False
oldLookBlue =  False
start = pyb.millis()
red_bool = 0
green_bool = 0
blue_bool = 0

fauxStart = pyb.millis()
oneOfThree = 0
elapsed = 0  # from request to look to time 1 of 3 colors found

print ("")
print ("Look for only 1 color based on HIGH input pin")


# define input pin look red
# define input pin look green
# define input pin look blue
# define output pin found red
# define output pin found green
# define output pin found blue
red_in = Pin ("P1", Pin.IN, Pin.PULL_DOWN)
green_in = Pin ("P2", Pin.IN, Pin.PULL_DOWN)
blue_in = Pin ("P4", Pin.IN, Pin.PULL_DOWN) # why does P3 not work

while(True):

    clock.tick()  # learn what a clock.tick does
    latchLookRed   = red_in.value()
    latchLookGreen = green_in.value()
    latchLookBlue  = blue_in.value()

    if ((pyb.elapsed_millis(start) % 250) >= 100):
         img = sensor.snapshot()
         img.compress(90)


    if (oldLookRed == False and oldLookGreen == False and oldLookBlue == False):
        if latchLookRed == 1:
            lookRed = True
            oldLookRed = True
            start = pyb.millis()
            print ("Looking RED")
        if latchLookGreen == 1:
            lookGreen = True
            oldLookGreen = True
            start = pyb.millis()
            print ("Looking GREEN")
        if latchLookBlue == 1:
            lookBlue = True
            oldLookBlue = True
            start = pyb.millis()
            print ("Looking BLUE")



    if lookRed: # if look for RED PIN HIGH
        clock.tick()
        img = sensor.snapshot()
        blob_list = img.find_blobs(thresholdRed, x_stride=10, y_stride=10, pixels_threshold=20, area_threshold=1500)

        red_bool = 0
        green_bool = 0
        blue_bool = 0

        for b in blob_list:
            if b.code() & 1: # red blob
                red_bool = 1
                foundRed = True
                print ("RED FOUND")
                elapsed = pyb.elapsed_millis(start)
                print (elapsed)
                lookRed = False
                oldLookRed = False
        img.compress(90)
        sensor.flush()
        #latch the RED FOUND PIN HIGH



    if lookGreen:  # look for GREEN PIN HIGH
        clock.tick()
        img = sensor.snapshot()
        blob_list = img.find_blobs(thresholdGreen, x_stride=10, y_stride=10, pixels_threshold=20, area_threshold=1500)

        red_bool = 0
        green_bool = 0
        blue_bool = 0

        for b in blob_list:
            if b.code() & 1: # green blob
                green_bool = 1
                foundGreen = True
                print ("GREEN FOUND")
                elapsed = pyb.elapsed_millis(start)
                print (elapsed)
                lookGreen = False
                oldLookGreen = False
        img.compress(90)
        sensor.flush()
        #latch the GREEN FOUND PIN HIGH




    if lookBlue: #look for BLUE PIN HIGH
        clock.tick()
        img = sensor.snapshot()
        blob_list = img.find_blobs(thresholdBlue, x_stride=10, y_stride=10, pixels_threshold=20, area_threshold=1500)

        red_bool = 0
        green_bool = 0
        blue_bool = 0

        for b in blob_list:
            if b.code() & 1: # blue blob
                blue_bool = 1
                foundBlue = True
                print ("BLUE FOUND")
                elapsed = pyb.elapsed_millis(start)
                print (elapsed)
                lookBlue = False
                oldLookBlue = False
        img.compress(90)
        sensor.flush()
        # latch the BLUE found PIN HIGH
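
The "#latch the ... FOUND PIN HIGH" stubs in the script above are still unimplemented; here is a minimal sketch of that output side, with P5 as a placeholder output pin (P4 is already used as an input in this script):

from pyb import Pin

red_out = Pin("P5", Pin.OUT_PP)  # output back to the Arduino
red_out.value(0)                 # idle LOW

# Inside the red blob loop, once RED is confirmed:
#     red_out.value(1)           # latch RED FOUND HIGH
# And when the look pin drops, release it:
#     red_out.value(0)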

Pin P3 should accept an interrupt. That's standard MicroPython code… um, please file a bug here for tracking and it will get fixed.
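
For reference, a sketch of the standard MicroPython interrupt approach referred to here (untested on P3, the pin in question):

from pyb import Pin, ExtInt

def on_rising(line):
    print("pin went HIGH on EXTI line", line)

# Fire the callback on a rising edge, with the pin pulled down when idle.
ext = ExtInt("P3", ExtInt.IRQ_RISING, Pin.PULL_DOWN, on_rising)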

As for the colors, you’ll get better auto-gain results out of the box if you make sure there’s a white background in the image. Then the camera gets the color values right.

Note that blue and green kind of require you to mess with the lighting to separate them.

Hmm, a white background isn't practical. I am, and will be, aiming the M7 optics at an Android phone.

Look for Red vs Look for Blue/Green will work for this phase.

Say, from the start of looking for red to seeing red (already displayed), it measures out at ~47 milliseconds. Does that seem credible? My project will add 3 to 30 seconds of latency between looking and seeing, so 47 ms is fine.

I have an $8 color sensor part to benchmark against the M7 tomorrow.

Okay, well, once you have the lighting set, point the camera at a white background… turn auto white balance off, and then show it red, green, and blue objects. Record the color settings for those objects and check that you can threshold them well. If everything is fine, then record the R/G/B gain settings for the camera and reapply them at turn-on.

If you change the lighting in the environment then this is moot.

I struggle to reliably and repeatably use the M7 camera to detect fixed colors displayed on an Android phone.

After the white-balance setup step (a step that is repeated frequently), the source-RED produces inconsistent M7 LAB color values. To my eye, I confirm the source-RED is reproduced consistently. The color source is a LiFX color bulb, so it is very easy to dial in a specific color and intensity.

How can I adjust my code or the M7 so that a given source-RED always produces essentially the same LAB values? … or Blue, or Green.

Thanks

Hi, to track colors correctly you want to avoid setting the bounds too tight. Additionally, you want to make sure white balance in general never runs. So, first, are you recording the color gains from sensor.get_rgb_gain_db() and then reapplying them with sensor.set_auto_whitebal(enable[, rgb_gain_db])?

By default the camera starts up with white balance running. The camera sensor chip will automatically adjust its color gains to make the world gray. You should turn this off right after reset, if you feel the light is stable, to prevent it from running. Note that the camera always initializes its settings after sensor.reset() to the same values and then auto white balances and auto gains to change the colors to approximate a gray world. To prevent this, do this immediately:

sensor.reset()
sensor.set_auto_gain(False)
sensor.set_auto_whitebal(False)

Don't add any waits. This will start the camera with the same settings on every startup.

Please double-check me; I believe I've had those code suggestions in from day 1. I started with your example code. Please look back at the code posted.

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

If true, then that hasn't helped with the M7 LAB values not reproducing.

The threshold LAB bounds are not too tight, but then the M7 detects green as blue or blue as green.


In testing last night, a $10 Adafruit RGB color sensor shows promise as a workaround. Is this particular color-detection problem not well suited to the $65 openMV M7?

Hi, please remove the skip frames call. This lets auto white balance and auto gain run for 2 seconds.

The Adafruit color sensor does not try to adjust color values to please your eye. All cameras do this. You have to turn the auto methods off immediately if you want an unadjusted image.

As an example… For my DIY Robocar racer I’ve found that turning off white balance and auto gain after 200 ms versus 2000 ms works great. So, just try removing a 0 from the time value passed to skip frames.
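
Applied to the init block posted above, that is a one-character change:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 200)      # was time = 2000
sensor.set_auto_gain(False)         # must be turned off for color tracking
sensor.set_auto_whitebal(False)     # must be turned off for color tracking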