Hough Transform Lines shaking


I have been using this camera as it seems quite fast for what I need to do, which is tracking something really fast and measuring rotation, size and position.

I am trying to use the Hough transform find_lines code, along with frame differencing and find_blobs, to do this, but the lines from find_lines keep jumping around. Is there a way to mitigate this? I thought that running find_lines after I had identified a non-moving blob would make the lines move less frequently.

I may be doing this wrong with find_lines, but any help with which technique would be fastest would be a huge help. I also have a PixyCam, which can track objects at the speed I am looking for, but I was hoping this camera might be a bit more professional and more tweakable than that one. The find_lines output is brilliant, but it has a lot of noise.

My code is below if you would like to take a look and run it yourselves. I would be really grateful for any help you can give me on this:

import sensor, image, pyb, os, time

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.GRAYSCALE) # Grayscale is faster than RGB565.
sensor.set_framesize(sensor.QQVGA) # Small frames keep the FPS high.
sensor.skip_frames(time = 2000) # Let new settings take effect.
sensor.set_auto_whitebal(True) # Leave auto white balance enabled.
clock = time.clock() # Tracks FPS.

min_degree = 0
max_degree = 179

if "temp" not in os.listdir(): os.mkdir("temp") # Make a temp directory

kernel_size = 1 # kernel width = (size*2)+1, kernel height = (size*2)+1
kernel = [-1, -1, -1,\
          -1, +3, -1,\
          -1, -1, -1]

print("About to save background image...")
sensor.skip_frames(time = 2000) # Give the user time to get ready.
sensor.snapshot().save("temp/bg.bmp") # Save the reference frame ("temp/bg.bmp" path assumed).

print("Saved background image - Now frame differencing!")

while(True):
    clock.tick() # Track elapsed milliseconds between snapshots().
    img = sensor.snapshot() # Take a picture and return the image.

    # Replace the image with the "abs(NEW-OLD)" frame difference.
    img.difference("temp/bg.bmp")

    thresholds = (245, 255)

    # Run the kernel on every pixel of the image.
    img.morph(kernel_size, kernel)

    img.find_blobs([thresholds], pixels_threshold=100, area_threshold=100, merge=True)

    #print(clock.fps()) # Note: Your OpenMV Cam runs about half as fast while
    # connected to your computer. The FPS should increase once disconnected.
    for l in img.find_lines(threshold = 1000, theta_margin = 25, rho_margin = 25):
        if (min_degree <= l.theta()) and (l.theta() <= max_degree):
            img.draw_line(l.line(), color = (255, 0, 0))

Hi, I don’t quite understand your code. You’re calling find_blobs but tossing its output, so it does nothing. As for find_lines… you’re calling it on a high-passed image. find_lines high passes the image internally, so you shouldn’t high pass (morph) the image beforehand.

Um, what are you trying to do exactly?

Hi kwagyeman,

Apologies for the confusion, I was trying to use the high passed image to try to make the lines less noisy. Basically, I am trying to find a way to use the find_lines to make a quick region of interest for really fast tracking. The lines are great, because hopefully I can determine the size of the object and also the distances to items around it.

I am thinking along the lines of what this user has been doing (http://forums.openmv.io/viewtopic.php?f=6&t=314), but instead of just 2 lines there would be 4 lines making a box. I am not sure whether his code worked or not.

Also, is there a way to make the lines shake less? One more question: what material is the lens made of?
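
As an aside on getting box corners out of four lines: two find_lines results given in (rho, theta) form can be intersected with a little linear algebra. A plain-Python sketch (the function name is made up; the camera returns theta in degrees):

```python
import math

def intersect(rho1, theta1, rho2, theta2):
    """Intersect two Hough lines x*cos(t) + y*sin(t) = rho,
    theta in degrees. Returns (x, y), or None if the lines are parallel."""
    t1, t2 = math.radians(theta1), math.radians(theta2)
    a1, b1 = math.cos(t1), math.sin(t1)
    a2, b2 = math.cos(t2), math.sin(t2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None  # parallel lines never meet
    x = (rho1 * b2 - rho2 * b1) / det
    y = (a1 * rho2 - a2 * rho1) / det
    return (x, y)

# A vertical line x = 5 (theta=0) and a horizontal line y = 3 (theta=90)
# cross at approximately (5, 3).
print(intersect(5, 0, 3, 90))
```

Doing this for each adjacent pair of the four lines would give the four corners of the box.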

Hi, we have a new find_rects method coming out. It repurposes the AprilTag code’s rectangle finder, so it works amazingly well. Will this do what you need? It can basically find 4 connected lines forming a rect that is sheared, rotated, scaled, etc.

As for find_lines being shaky: for the next firmware release I fixed a bug in line averaging that was causing noise issues. That said, if you binarize the image the lines will be much more stable. The method I’m using runs faster than the standard Hough transform but has more phase noise, which causes the jumpiness. I’m not really sure how to improve that without slowing the algorithm down. Improving the camera code would probably help, since images seem to flash a little because the FPS is so high.
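
Another way to damp the jitter, on top of binarizing, is to low-pass filter a tracked line's (rho, theta) between frames. A plain-Python sketch (not camera code; the class and the alpha value are illustrative):

```python
import math

class LineSmoother:
    """Exponential moving average over a line's (rho, theta) to damp
    frame-to-frame jitter. theta (degrees, 0-179) wraps around, so it
    is averaged as a doubled-angle unit vector."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # 0 < alpha <= 1; lower = smoother
        self.rho = None
        self.cs = self.sn = 0.0     # running doubled-angle vector

    def update(self, rho, theta):
        a = self.alpha
        t = math.radians(2 * theta)
        if self.rho is None:        # first sample: no history yet
            self.rho, self.cs, self.sn = rho, math.cos(t), math.sin(t)
        else:
            self.rho = (1 - a) * self.rho + a * rho
            self.cs = (1 - a) * self.cs + a * math.cos(t)
            self.sn = (1 - a) * self.sn + a * math.sin(t)
        theta_s = math.degrees(math.atan2(self.sn, self.cs)) / 2 % 180
        return self.rho, theta_s

s = LineSmoother()
for rho, theta in [(50, 90), (53, 92), (48, 88)]:
    rho_s, theta_s = s.update(rho, theta)
print(rho_s, theta_s)  # hovers near (50, 90) instead of jumping
```

Feeding each frame's best line through update() trades a little lag for much steadier drawing.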

Last, the lens is made out of plastic and metal with glass. I don’t know more than that.

Hi Kwabena,

That sounds great! This will very likely do what I need, depending on whether you can specify a tolerance for the angle of the lines; as a heads up, everything may be slightly trapezoidal in reality. You have probably already accounted for that, but it would be good to let the user define it in the Python IDE if at all possible (anything you do here would definitely be a great improvement).

I’ll try binarizing the image and find out. Thank you for telling me about the materials.

How long would it take you to do all of this?

Also, I’ve had some help and have been playing around with the code. I know that what I am going to have in my region of interest is basically a white box on a dark background. So the method for measuring the outside of the box would be scanning each row column by column, marking the first bright pixels as corner points, and then measuring between the points to get the measurements.

I’ll put the code below, as I’m finding it hard to find the first white pixel from the top-left corner and then track it.

# Find Lines Example
# This example shows off how to find lines in the image. For each line object
# found in the image a line object is returned which includes the line's rotation.

# Note: Line detection is done by using the Hough Transform:
# http://en.wikipedia.org/wiki/Hough_transform
# Please read about it above for more information on what `theta` and `rho` are.

# find_lines() finds infinite length lines. Use find_line_segments() to find non-infinite lines.

enable_lens_corr = False # turn on for straighter lines...

import sensor, image, time

sensor.reset() # Initialize the camera sensor.
sensor.set_pixformat(sensor.RGB565) # RGB565 here; grayscale would be faster.
sensor.set_framesize(sensor.QQVGA) # QQVGA so the ROI below fits the frame.
sensor.skip_frames(time = 2000)
clock = time.clock()

# All line objects have a `theta()` method to get their rotation angle in degrees.
# You can filter lines based on their rotation angle.

min_degree = 0
max_degree = 179

# All lines also have `x1()`, `y1()`, `x2()`, and `y2()` methods to get their end-points
# and a `line()` method to get all the above as one 4 value tuple for `draw_line()`.

while(True):
    clock.tick() # Track FPS.
    img = sensor.snapshot()
    if enable_lens_corr: img.lens_corr(1.8) # for 2.8mm lens...

    # `threshold` controls how many lines in the image are found. Only lines with
    # edge difference magnitude sums greater than `threshold` are detected...

    # More about `threshold` - each pixel in the image contributes a magnitude value
    # to a line. The sum of all contributions is the magintude for that line. Then
    # when lines are merged their magnitudes are added togheter. Note that `threshold`
    # filters out lines with low magnitudes before merging. To see the magnitude of
    # un-merged lines set `theta_margin` and `rho_margin` to 0...

    # `theta_margin` and `rho_margin` control merging similar lines. If two lines
    # theta and rho value differences are less than the margins then they are merged.

    roi = (54, 15, 69, 92) # (x, y, w, h)
    width = roi[2]
    height = roi[3]

    leftcorner = [0,0]

    print(width)

    searchvalx = 1
    searchvaly = 1
    found = 0

    #while(found == 0):

       #for rows
            #check each column
               # pixel found  = true then exit loop corner found

        #else next row

    # Check all pixels in this row on the current column
    for x in range(searchvalx):
        color = img.get_pixel(x + roi[0], searchvaly + roi[1])
        c = (color[0] + color[1] + color[2]) / 3
        if c > 200:
            leftcorner = [x + roi[0], searchvaly + roi[1]]
            found = 1

    for y in range(searchvaly):
        color = img.get_pixel(searchvalx + roi[0], y + roi[1])
        c = (color[0] + color[1] + color[2]) / 3
        if c > 200:
            leftcorner = [searchvalx + roi[0], y + roi[1]]
            found = 1

        searchvalx = searchvalx + 1
        if searchvalx > roi[2]:
            searchvalx = roi[2]
            searchvaly = searchvaly + 1

        if searchvalx > roi[2] and searchvaly > roi[3]:
            found = 1

    img.draw_cross(leftcorner[0], leftcorner[1], color= (255,0,0))
    #for x in range(width):
    #    for y in range (height):
    #        color = img.get_pixel(x + roi[0], y + roi[1]);
    #        c = color [0] + color[1] + color[2]
    #        c = c/3
    #        if (c < 128):
    #            img.set_pixel(x + roi[0],y + roi[1],(0, 0, 0))
    #        else:
    #            img.set_pixel(x + roi[0],y + roi[1],(255, 255, 255))

    #for l in img.find_lines(roi = roi, threshold = 1000, theta_margin = 25, rho_margin = 25):
        #if (min_degree <= l.theta()) and (l.theta() <= max_degree): #Code: !<= 45
           # img.draw_line(l.line(), color = (255, 0, 0))
            #print(max(img.find_lines(), key=lambda x: x.length()))

    #print("FPS %f" % clock.fps())

# About negative rho values:
# A [theta+0:-rho] tuple is the same as [theta+180:+rho]
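
The first-white-pixel raster scan I'm attempting above can be sketched on a plain 2D array first, away from the camera (the helper name and the toy frame are illustrative):

```python
def first_bright_pixel(img, roi, threshold=200):
    """Scan the ROI row by row, left to right, and return the image
    coordinates of the first pixel whose gray value exceeds threshold,
    or None if the ROI is all dark. `img` is a 2D list of gray values
    indexed as img[y][x]; roi is (x, y, w, h)."""
    rx, ry, rw, rh = roi
    for y in range(ry, ry + rh):
        for x in range(rx, rx + rw):
            if img[y][x] > threshold:
                return (x, y)
    return None

# A 6x6 dark frame with a bright 2x2 box whose top-left corner is (3, 2).
frame = [[0] * 6 for _ in range(6)]
for y in (2, 3):
    for x in (3, 4):
        frame[y][x] = 255
print(first_bright_pixel(frame, (0, 0, 6, 6)))  # → (3, 2)
```

Scanning in the other three directions (bottom-up, right-to-left, etc.) would give the remaining corners.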

I’ve already finished the find rects method. Can post tonight.

That sounds great! I’ll try it over the weekend and see what happens, thank you!

I’ve attached the firmware you asked for along with the new find_rectangles method.
firmware.zip (1010 KB)

Ah, thank you! The linear regression did sound really cool for line tracking as well; I hope you had fun at the event :slight_smile: Thank you for this, I will try it tomorrow and see what happens.


Hi Kwabena,

I tried the code, and it seems to work well. Would it be possible to do it on a binary image? I have tried to, but it refuses to make lines on a (hopefully simple) 2D-ish image.

#(240, 255)
RHO_GAIN = -1.0
P_GAIN = 0.7
I_GAIN = 0.0
I_MIN = -0.0
I_MAX = 0.0
D_GAIN = 0.1

import sensor, image, time, math, pyb

sensor.skip_frames(time = 2000)
clock = time.clock()

while True:
    img = sensor.snapshot().histeq()

for r in img.find_rects(threshold = 100, roi = (51, 6, 74, 90)):
    img.draw_rectangle(r.rect(), color = (255, 0, 0))
    for p in r.corners(): img.draw_circle(p[0], p[1], 5, color = (0, 255, 0))

Hi, the for loop isn’t under the while loop. Also, it looks like you’ve pasted unrelated code together. Can you clean it up a bit and then post again?

It works now (I’m used to languages that use brackets, like C#, so that was totally my fault)! I need to do some motion testing, but this is amazing. Thank you very much for all your help!
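
For the size measurements mentioned earlier: once find_rects returns its corners, the side lengths fall straight out of the corner list. A plain-Python sketch, assuming the corners come ordered around the quad:

```python
import math

def side_lengths(corners):
    """Side lengths of a quad given its four (x, y) corners in order."""
    out = []
    for i in range(4):
        (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % 4]
        out.append(math.hypot(x2 - x1, y2 - y1))  # Euclidean edge length
    return out

# Axis-aligned 4x3 rectangle.
print(side_lengths([(0, 0), (4, 0), (4, 3), (0, 3)]))  # → [4.0, 3.0, 4.0, 3.0]
```

With a known lens and working distance, these pixel lengths can then be converted to physical sizes.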