Why doesn't my program work on my H7 device?

I created a little program that detects the color of tokens running past the camera on a conveyor belt. Depending on the color of the detected token, I then set the duty cycle of a PWM output, which is converted into a real analog 0-24 V signal.

Everything is working fine when running the program from the OpenMV IDE, but as soon as I save the program to the device and run it standalone, it seems to detect every token as red.

Is there anything I need to be aware of when running it without the IDE?

Here’s the code of the program:

import sensor, image, time

from pyb import Pin, Timer

thresholds = [
              # Red
              (70,  80,   10,  30,  50,   60),
              # Blue
              (55,  70, -20,   10,  -65,  -40),
              # Green
              (25, 100,  -69,  -25,  -42,   26),
             ]
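As a side note for anyone tuning these values: each tuple follows the (L_min, L_max, A_min, A_max, B_min, B_max) layout that find_blobs() expects for RGB565 images. A minimal plain-Python sketch (not the OpenMV API, just the same range check) of how a LAB pixel is matched against such a tuple:

```python
# Sketch of the LAB range test behind a find_blobs() threshold tuple.
# Layout: (L_min, L_max, A_min, A_max, B_min, B_max).

def in_threshold(lab, threshold):
    """Return True if the (L, A, B) value falls inside the threshold ranges."""
    l, a, b = lab
    l_min, l_max, a_min, a_max, b_min, b_max = threshold
    return (l_min <= l <= l_max and
            a_min <= a <= a_max and
            b_min <= b <= b_max)

red = (70, 80, 10, 30, 50, 60)
print(in_threshold((75, 20, 55), red))  # inside all three ranges -> True
print(in_threshold((30, -50, 0), red))  # L and A out of range -> False
```

Printing the LAB value of a sample pixel in the IDE and checking it against a tuple like this is a quick way to see why a token does or doesn't match.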
global_pixels_threshold = 300
global_area_threshold = 300

NO_TOKEN=0
RED_TOKEN=25
BLUE_TOKEN=50
GREEN_TOKEN=75
DAMAGED_TOKEN=100

# Initialize the output pin
p = Pin('P4') # P4 has TIM2, CH3
tim = Timer(2, freq=1000)
ch = tim.channel(3, Timer.PWM, pin=p)
ch.pulse_width_percent(NO_TOKEN)

# Initialize the image sensor
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_auto_gain(False)     # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
sensor.set_auto_exposure(False) # must be turned off for color tracking
sensor.skip_frames(time = 2000)

while(True):
    color = NO_TOKEN

    # Take a picture, de-fisheye it, and zoom in so only the black belt area is visible
    img = sensor.snapshot().lens_corr(strength=1.5, zoom=1.1, x_corr=-0.03, y_corr=0.0)

    # Cut out the part we're interested in (Reduces the false-positives on the sides)
    img = img.crop(roi=[87,0,166,400])

    # Detect Red Tokens
    for blob in img.find_blobs([thresholds[0]], pixels_threshold=global_pixels_threshold, area_threshold=global_area_threshold, merge=True):
        color = RED_TOKEN
        img.draw_rectangle(blob.rect(), color=(255,0,0))

    # Detect Blue Tokens
    if color == NO_TOKEN:
        for blob in img.find_blobs([thresholds[1]], pixels_threshold=global_pixels_threshold, area_threshold=global_area_threshold, merge=True):
            color = BLUE_TOKEN
            img.draw_rectangle(blob.rect(), color=(0,0,255))

    # Detect Green Tokens
    if color == NO_TOKEN:
        for blob in img.find_blobs([thresholds[2]], pixels_threshold=global_pixels_threshold, area_threshold=global_area_threshold, merge=True):
            color = GREEN_TOKEN
            img.draw_rectangle(blob.rect(), color=(0,255,0))

    ch.pulse_width_percent(color)
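The loop above applies a first-match priority (red, then blue, then green) and maps the result to a PWM duty-cycle percentage. That decision can be sketched in plain Python so it's testable off-device; `counts` here is a hypothetical dict of blob counts per color, not part of the OpenMV API:

```python
# Duty-cycle values matching the constants in the script above.
NO_TOKEN, RED_TOKEN, BLUE_TOKEN, GREEN_TOKEN = 0, 25, 50, 75

def token_to_duty(counts):
    """Map blob counts per color to a PWM duty-cycle percentage,
    using the same first-match priority as the main loop:
    red beats blue beats green; no blobs means no token."""
    if counts.get("red", 0) > 0:
        return RED_TOKEN
    if counts.get("blue", 0) > 0:
        return BLUE_TOKEN
    if counts.get("green", 0) > 0:
        return GREEN_TOKEN
    return NO_TOKEN

print(token_to_duty({"red": 1, "blue": 2}))  # red has priority -> 25
print(token_to_duty({"green": 3}))           # -> 75
print(token_to_duty({}))                     # -> 0
```

Factoring the decision out like this makes it easy to unit-test the color-to-duty mapping without a camera attached.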

I'd be happy to finally finish this little project. (Initially I had red, blue and white tokens, but detecting white reliably turned out to be more or less impossible, so I painted the white tokens green and now all is good … at least theoretically.)

Your help is greatly appreciated.

Chris

No, not really; there shouldn't be any difference, not with that script anyway. Maybe the lighting is different where you deploy the cam? That would change the colors and require different thresholds. Or maybe the script wasn't fully written to the device and it's running an older version? You could also try recording a raw video (see the imageio examples), playing it back, and running the detection on it to see what's going on.

I think I've solved my problem. The industrial power supply probably retained enough charge that the device never fully reset. After powering off the device for a longer period of time, the cam started doing its job correctly. Sorry for the noise.

I bought a really long USB cable because I find it's too easy to affect the lighting if you don't have dedicated lighting. This lets me stay out of the way during programming and keeps things consistent during production. But nothing beats dedicated lighting.