I'm an Electrical Engineering student, and for my final project my team members and I are planning to use the OpenMV M7 camera to build a vision-based smart sorting conveyor system. The thing is, I've never worked with the Python language before. I started from the example code for tracking a box and getting its center, but I'm running into some errors. This is the code I've been using; could you please check it and help me?
Code:
import sensor, image, time, math, pyb
from pyb import UART

uart = UART(3, 115200) # adjust the UART number and baud rate to match your wiring and the Arduino side

print("Letting auto algorithms run. Don't put anything in front of the camera!")
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time = 2000)
sensor.set_auto_gain(False) # must be turned off for color tracking
sensor.set_auto_whitebal(False) # must be turned off for color tracking
clock = time.clock()

# Capture the color thresholds for whatever was in the center of the image.
r = [(320//2)-(50//2), (240//2)-(50//2), 50, 50] # 50x50 center of QVGA.

print("Auto algorithms done. Hold the object you want to track in front of the camera in the box.")
print("MAKE SURE THE COLOR OF THE OBJECT YOU WANT TO TRACK IS FULLY ENCLOSED BY THE BOX!")
for i in range(60):
    img = sensor.snapshot()
    img.draw_rectangle(r)

print("Learning thresholds...")
threshold = [50, 50, 0, 0, 0, 0] # Middle L, A, B values.
for i in range(60):
    img = sensor.snapshot()
    hist = img.get_histogram(roi=r)
    lo = hist.get_percentile(0.01) # Get the CDF of the histogram at the 1% range (ADJUST AS NECESSARY)!
    hi = hist.get_percentile(0.99) # Get the CDF of the histogram at the 99% range (ADJUST AS NECESSARY)!
    # Average in percentile values. Each slot of the list is updated individually;
    # assigning to "threshold" directly would replace the whole list with a number.
    threshold[0] = (threshold[0] + lo.l_value()) // 2
    threshold[1] = (threshold[1] + hi.l_value()) // 2
    threshold[2] = (threshold[2] + lo.a_value()) // 2
    threshold[3] = (threshold[3] + hi.a_value()) // 2
    threshold[4] = (threshold[4] + lo.b_value()) // 2
    threshold[5] = (threshold[5] + hi.b_value()) // 2
    for blob in img.find_blobs([threshold], pixels_threshold=100, area_threshold=100, merge=True, margin=10):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
    img.draw_rectangle(r)

print("Thresholds learned...")
print("Tracking colors...")
while(True):
    clock.tick()
    start = pyb.millis() # this was where I got "pyb is not declared" -- fixed by importing pyb at the top
    img = sensor.snapshot()
    blobs = img.find_blobs([threshold], pixels_threshold=255, area_threshold=100, merge=True, margin=10)
    for blob in blobs:
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        print("TRACK %d %d %d %d" % (blob.cx(), blob.cy(), blob.w(), blob.h()))
        uart.write("%d ; %d ; %d ; %d \r\n" % (blob.cx(), blob.cy(), blob.w(), blob.h()))
    # Estimate orientation from the median slope between pairs of blob centers.
    slopes = [] # must start as an empty list (the brackets were missing before)
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)): # i + 1 so a blob isn't paired with itself
            my = blobs[i].cy() - blobs[j].cy()
            mx = blobs[i].cx() - blobs[j].cx()
            slopes.append(math.atan2(my, mx))
    if slopes: # need at least two blobs to define a slope
        slopes = sorted(slopes)
        median = slopes[len(slopes) // 2] # // because list indices must be integers
        rotation = math.degrees(median)
        print("ROTATION %f" % rotation)
    print("FPS %f" % clock.fps())
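To make sure the orientation math itself was right, I also sanity-checked it in plain Python with made-up blob centroids (no OpenMV modules, and `median_rotation` is just a name I picked for the test):

```python
import math

def median_rotation(centers):
    """Median angle, in degrees, of the lines joining every pair of centers.
    centers: list of (cx, cy) tuples standing in for blob centroids."""
    slopes = []
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):  # skip pairing a point with itself
            my = centers[i][1] - centers[j][1]
            mx = centers[i][0] - centers[j][0]
            slopes.append(math.atan2(my, mx))
    if not slopes:
        return None  # fewer than two centers -> no orientation defined
    slopes.sort()
    return math.degrees(slopes[len(slopes) // 2])

# Made-up centroids lying on a diagonal line:
print(median_rotation([(10, 10), (20, 20), (30, 30)]))  # about -135 (y grows downward in image coordinates)
print(median_rotation([(10, 10)]))  # None, a single blob has no pairwise slope
```

This matches what I expect the loop in the camera script to compute once `slopes` is a real list.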
I'm also open to any suggestions on how to determine the location, orientation, and position of the box. I would also like to know how to export those values to an Arduino, because the motor-controller code is written in the Arduino environment.
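For the Arduino link, my current idea is just one text line per blob over UART, matching the `uart.write()` format in the script above. Here's a quick pure-Python check of that format (the function names are made up; on the Arduino side the equivalent would be something like `Serial.readStringUntil('\n')` and then splitting on ';'):

```python
def pack_blob(cx, cy, w, h):
    # Same format string the OpenMV script passes to uart.write()
    return "%d ; %d ; %d ; %d \r\n" % (cx, cy, w, h)

def unpack_blob(line):
    # What the receiving side would do with one complete line
    cx, cy, w, h = (int(v) for v in line.strip().split(";"))  # int() tolerates the spaces
    return cx, cy, w, h

msg = pack_blob(160, 120, 40, 30)
print(repr(msg))
print(unpack_blob(msg))  # round-trips back to (160, 120, 40, 30)
```

Does that sound like a reasonable way to pass the data, or is there a better-supported approach?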
Thanks in advance