Help with location, orientation and position code for a box

General discussion about topics related to OpenMV.
NEWBIE_IN_OPENMV
Posts: 1
Joined: Fri Feb 23, 2018 1:29 pm


Postby NEWBIE_IN_OPENMV » Sat Feb 24, 2018 4:47 pm

Hi, my name is David.
I'm an Electrical Engineering student, and for my final project my team members and I are thinking about using the OpenMV M7 cam to build a Vision Sorting Smart Conveyor System. The thing is, I've never worked with the Python language before. I started writing code to track a box and get its center, but I'm running into errors. This is the code I've been using; could you please check it and help me?

Code:

import sensor, image, time
print("Letting auto algorithms run. Don't put anything in front of the camera!")

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Capture the color thresholds for whatever was in the center of the image.
r = [(320//2)-(50//2), (240//2)-(50//2), 50, 50] # 50x50 center of QVGA.

print("Auto algorithms done. Hold the object you want to track in front of the camera in the box.")
print("MAKE SURE THE COLOR OF THE OBJECT YOU WANT TO TRACK IS FULLY ENCLOSED BY THE BOX!")
for i in range(60):
    img = sensor.snapshot()
    img.draw_rectangle(r)

print("Learning thresholds...")
threshold = [50, 50, 0, 0, 0, 0] # Middle L, A, B values.
for i in range(60):
    img = sensor.snapshot()
    hist = img.get_histogram(roi=r)
    lo = hist.get_percentile(0.01) # Get the CDF of the histogram at the 1% range (ADJUST AS NECESSARY)!
    hi = hist.get_percentile(0.99) # Get the CDF of the histogram at the 99% range (ADJUST AS NECESSARY)!
    # Average in percentile values.
    threshold[0] = (threshold[0] + lo.l_value()) // 2
    threshold[1] = (threshold[1] + hi.l_value()) // 2
    threshold[2] = (threshold[2] + lo.a_value()) // 2
    threshold[3] = (threshold[3] + hi.a_value()) // 2
    threshold[4] = (threshold[4] + lo.b_value()) // 2
    threshold[5] = (threshold[5] + hi.b_value()) // 2
    for blob in img.find_blobs([threshold], pixels_threshold=100, area_threshold=100, merge=True, margin=10):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        img.draw_rectangle(r)

print("Thresholds learned...")
print("Tracking colors...")

while(True):
    clock.tick()
    start = pyb.millis() #I'm getting my error here because it says the pyb is not declared
    img = sensor.snapshot()
    for blob in img.find_blobs([threshold], pixels_threshold=255, area_threshold=100, merge=True, margin=10):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        #timer = int(start)

    img.midpoint(0, bias=0.5, threshold=True, offset=5, invert=True)

    slopes = []

    for i in range(len(blobs)):
        for j in range(i, len(blobs)):
            my = blobs[i].cy() - blobs[j].cy()
            mx = blobs[i].cx() - blobs[j].cx()
            slopes.append(math.atan2(my, mx))

    slopes = sorted(slopes)
    median = slopes[len(slopes) / 2]
    rotation = math.degrees(median)

print("FPS %f" % clock.fps())
print("TRACK %f %f " % (blob.cx(), blob.cy()), blob.w(), blob.h())

uart.write("%d ; %d ; %d ; %d \r\n " % (blob.cx(), blob.cy(), blob.w(), blob.h()))
###############################################################################################################
I'm also open to any suggestions on how I can determine the location, orientation and position of the box. I would also like to know how I can send those values to an Arduino, since the motor controller code is written for that platform.
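For reference, each line my code writes to the UART has the form "cx ; cy ; w ; h". Here is a minimal Python sketch of how the receiving side could split such a line back into integers (on the Arduino the same idea would be C code with strtok() and atoi(); parse_track_line is just an illustrative name, not an existing function):

```python
def parse_track_line(line):
    # Split a "cx ; cy ; w ; h" line into four integers.
    # int() tolerates the spaces left around each field by the " ; " separators.
    return [int(field) for field in line.strip().split(";")]

print(parse_track_line("160 ; 120 ; 50 ; 40 \r\n"))  # [160, 120, 50, 40]
```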
Thanks in advance
kwagyeman
Posts: 1771
Joined: Sun May 24, 2015 2:10 pm

Re: Help with location, orientation and position code for a box

Postby kwagyeman » Sat Feb 24, 2018 6:32 pm

I think this is kinda what you want?

Code:

import sensor, image, time, pyb, math
print("Letting auto algorithms run. Don't put anything in front of the camera!")

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

uart = pyb.UART(3, 19200, timeout_char = 1000)

# Capture the color thresholds for whatever was in the center of the image.
r = [(320//2)-(50//2), (240//2)-(50//2), 50, 50] # 50x50 center of QVGA.

print("Auto algorithms done. Hold the object you want to track in front of the camera in the box.")
print("MAKE SURE THE COLOR OF THE OBJECT YOU WANT TO TRACK IS FULLY ENCLOSED BY THE BOX!")
for i in range(60):
    img = sensor.snapshot()
    img.draw_rectangle(r)

print("Learning thresholds...")
threshold = [50, 50, 0, 0, 0, 0] # Middle L, A, B values.
for i in range(60):
    img = sensor.snapshot()
    hist = img.get_histogram(roi=r)
    lo = hist.get_percentile(0.01) # Get the CDF of the histogram at the 1% range (ADJUST AS NECESSARY)!
    hi = hist.get_percentile(0.99) # Get the CDF of the histogram at the 99% range (ADJUST AS NECESSARY)!
    # Average in percentile values.
    threshold[0] = (threshold[0] + lo.l_value()) // 2
    threshold[1] = (threshold[1] + hi.l_value()) // 2
    threshold[2] = (threshold[2] + lo.a_value()) // 2
    threshold[3] = (threshold[3] + hi.a_value()) // 2
    threshold[4] = (threshold[4] + lo.b_value()) // 2
    threshold[5] = (threshold[5] + hi.b_value()) // 2
    for blob in img.find_blobs([threshold], pixels_threshold=100, area_threshold=100, merge=True, margin=10):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        img.draw_rectangle(r)

print("Thresholds learned...")
print("Tracking colors...")

while(True):
    clock.tick()
    img = sensor.snapshot()
    blobs = img.find_blobs([threshold], pixels_threshold=255, area_threshold=100, merge=True, margin=10)
    for blob in blobs:
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
 
    slopes = []

    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            my = blobs[i].cy() - blobs[j].cy()
            mx = blobs[i].cx() - blobs[j].cx()
            slopes.append(math.atan2(my, mx))
    
    if len(slopes) > 1:
        slopes = sorted(slopes)
        median = slopes[int(len(slopes) / 2)]
        rotation = math.degrees(median)
    
        print("TRACK %d " % rotation)
        uart.write("%d\r\n " % rotation)
    
    print("FPS %f" % clock.fps())
    
Results come out of the UART as a number with carriage return and line feed at 19200 baud.
Nyamekye,
