I have been using this camera because it seems fast enough for what I need to do: tracking a fast-moving object and measuring its rotation, size, and position.
I am trying to use the Hough transform find_lines code, along with frame differencing and find_blobs, to do this, but the lines from find_lines keep jumping around. Is there a way to mitigate this? I thought running find_lines only after I had identified a non-moving blob would make the lines jump less frequently.
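To be concrete about what I mean by "jumping less frequently": I was thinking of smoothing the strongest line's (rho, theta) between frames. Here is a rough plain-Python sketch of that idea (nothing OpenMV-specific; `smooth_line` and `alpha` are just names I made up, and the 180° wraparound handling is my own guess at how to treat the Hough normal form):

```python
def smooth_line(prev, new, alpha=0.3):
    """Exponentially smooth (rho, theta) line parameters across frames.

    theta is in degrees in [0, 180). A line at theta near 180 is almost
    the same line as one at theta near 0 with rho negated, so `new` is
    first re-expressed in the representation closest to `prev`.
    """
    if prev is None:
        return new
    rho_p, th_p = prev
    rho_n, th_n = new
    # Handle wraparound: (rho, theta) and (-rho, theta +/- 180) are the same line.
    if abs(th_n - th_p) > 90:
        th_n = th_n - 180 if th_n > th_p else th_n + 180
        rho_n = -rho_n
    rho = (1 - alpha) * rho_p + alpha * rho_n
    th = (1 - alpha) * th_p + alpha * th_n
    return (rho, th % 180)
```

The idea would be to feed this the `l.rho()` / `l.theta()` of the best line each frame and draw the smoothed result instead, so single-frame noise gets averaged out.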
I may be using find_lines wrong, but any advice on which technique would be fastest would be a huge help. I also have a PixyCam, which can track objects at the speed I need, but I was hoping this camera would be a bit more professional and more tweakable than that one. find_lines works brilliantly, but it has a lot of noise.
My code is below if you would like to take a look and run it yourselves; I would really appreciate any help you can give me on this:
```python
import sensor, image, pyb, os, time

sensor.reset()                           # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)          # Let new settings take effect.
sensor.set_auto_whitebal(False)          # Turn off white balance.
clock = time.clock()                     # Tracks FPS.

min_degree = 0
max_degree = 179

if not "temp" in os.listdir():
    os.mkdir("temp")                     # Make a temp directory.

kernel_size = 1                          # kernel width = (size*2)+1, kernel height = (size*2)+1
kernel = [-1, -1, -1,
          -1, +3, -1,
          -1, -1, -1]

print("About to save background image...")
sensor.skip_frames(time = 2000)          # Give the user time to get ready.
sensor.snapshot().save("temp/bg.bmp")
print("Saved background image - Now frame differencing!")

while(True):
    clock.tick()                         # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot()              # Take a picture and return the image.
    img.difference("temp/bg.bmp")        # Replace the image with the abs(NEW-OLD) frame difference.

    thresholds = (245, 255)
    img.morph(kernel_size, kernel)       # Run the kernel on every pixel of the image.
    blobs = img.find_blobs([thresholds], pixels_threshold=100,
                           area_threshold=100, merge=True)

    # Note: Your OpenMV Cam runs about half as fast while connected
    # to your computer. The FPS should increase once disconnected.
    #print(clock.fps())

    for l in img.find_lines(threshold = 1000, theta_margin = 25, rho_margin = 25):
        if min_degree <= l.theta() <= max_degree:
            img.draw_line(l.line(), color = (255, 0, 0))
            print(l)
```
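One idea I was going to try next is restricting find_lines to an ROI around the stationary blob instead of searching the whole frame, since find_lines takes a roi argument in the OpenMV docs. Here is a small helper I sketched for padding the blob's rect and clipping it to the image bounds (plain Python; `padded_roi` and the pad value are just my own invention):

```python
def padded_roi(rect, pad, img_w, img_h):
    """Expand a blob's (x, y, w, h) rect by `pad` pixels on each side,
    clipped to the image bounds, for use as a find_lines roi tuple."""
    x, y, w, h = rect
    x0 = max(0, x - pad)
    y0 = max(0, y - pad)
    x1 = min(img_w, x + w + pad)
    y1 = min(img_h, y + h + pad)
    return (x0, y0, x1 - x0, y1 - y0)
```

On the camera I would then call something like `img.find_lines(roi=padded_roi(blob.rect(), 8, img.width(), img.height()), ...)` for each blob, so the Hough search only sees pixels near the object. Would that be a sensible way to cut down the jitter, or is there a better approach?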