[Basic] Outputting Edge Detection Image to Be Fed into Line Detection

Preface: I am a MechE major with very little Python experience, but I've coded a decent amount in MATLAB and C++.

I'm working on a project to determine when a wishbone coupon breaks from fatigue, and I wanted to use the OpenMV for its integrated edge detection libraries. The problem I'm facing is that I don't know how to get the image that's shown on the video feed and manipulate it.

Thanks,
Travis

See the edge detection example: examples/09-Feature-Detection/edges.py

Yeah, I'm using that example. I just don't know what the outputs of the functions are, or how to grab the image so I can write code that looks at each individual pixel.

This function finds edges in the image. You can also use find_lines(), find_rects(), find_circles(), etc. What do you mean by "grab an image"? With img = sensor.snapshot(), img is the image object.
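For reference, here is a minimal sketch of the whole flow (not a verbatim copy of edges.py; the resolution, thresholds, and skip time are just typical values to adjust):

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # Canny edge detection works on grayscale frames
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)
clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()                                # img is the image object
    img.find_edges(image.EDGE_CANNY, threshold=(50, 80))   # edges are drawn back into img
    print(clock.fps())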

So I used the find_edges() function and it gives me a black-and-white image. What I want to do with that image is analyze each pixel and the surrounding white pixels to determine if there's breakage. What I don't know is how to interact with the processed image so I can look at the individual pixels. Would calling sensor.snapshot() give me the post-processed image?

Here is what I thought should work, using the edge detection image in conjunction with the find_lines() function, but the image didn't have any line segments drawn on it.

# min_degree / max_degree must be defined before the loop (placeholder bounds that keep every line)
min_degree = 0
max_degree = 179

while(True):
    clock.tick()
    img = sensor.snapshot()
    # find_edges() modifies img in place and returns the same image object,
    # so assigning the result to pros_img just gives another reference to img.
    img.find_edges(image.EDGE_CANNY, threshold=(50, 80))
    for l in img.find_lines(threshold=1000, theta_margin=25, rho_margin=25):
        if (min_degree <= l.theta()) and (l.theta() <= max_degree):
            img.draw_line(l.line(), color=(255, 0, 0))
            print(l)

Appreciate the Help

You should call find_lines() directly on the image (please see the find_lines.py example). And if you want to access pixels, you can access the image like an array (img[0], etc.).
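For example, a minimal sketch of reading pixel values off the edge image (the coordinates and the 200 cutoff are just placeholders; grayscale pixels are 0–255):

img = sensor.snapshot()
img.find_edges(image.EDGE_CANNY, threshold=(50, 80))

p = img.get_pixel(10, 20)          # read the pixel at (x=10, y=20)
q = img[20 * img.width() + 10]     # same pixel via flat array-style indexing (row-major)

# Counting the white edge pixels works, but looping like this in Python is slow.
white = 0
for y in range(img.height()):
    for x in range(img.width()):
        if img.get_pixel(x, y) > 200:
            white += 1
print(white)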

Hi, use find_blobs() on the edge image. find_blobs() connects all pixels of a particular color with each other, so if there isn't a breakage you will get one big blob. If there is a break, you will get multiple blobs.

Set the color threshold to white so find_blobs() finds all of the connected white pixels.

Your code to solve this problem will be about 5 lines.

Let me know if you still need help.

Do something like:

img = sensor.snapshot()
img.find_edges(image.EDGE_CANNY, threshold=(50, 80))  # same Canny call as before
# (200, 255) is a grayscale threshold that catches the white edge pixels
blobs = img.find_blobs([(200, 255)], area_threshold=0, pixels_threshold=0)
print(blobs)
if len(blobs) > 1:
    print("Error")          # multiple blobs -> the edge trace is broken
elif len(blobs) == 1:
    print("All connected")  # one blob -> no breakage
else:
    print("No blobs")

Again, find_blobs() creates blobs by connecting all pixels of one color together. Note, on the OpenMV Cam never think about doing anything pixel-wise. Look through our library to find methods that do what you want. Trying to do pixel-wise stuff in Python is terribly slow. Our algorithms are written in C to be fast, however.
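For instance, the blob objects returned by find_blobs() already carry the per-region numbers you would otherwise compute by looping over pixels (the breakage message below is just an illustration of the idea):

img = sensor.snapshot()
img.find_edges(image.EDGE_CANNY, threshold=(50, 80))
blobs = img.find_blobs([(200, 255)], area_threshold=0, pixels_threshold=0)

for b in blobs:
    # Each blob knows its own centroid, pixel count, and bounding box.
    print("blob at (%d, %d): %d white pixels, bbox %s" % (b.cx(), b.cy(), b.pixels(), b.rect()))

if len(blobs) > 1:
    print("Coupon looks broken: %d separate edge regions" % len(blobs))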