Better lock?

Check out the video:

Is there any way to improve how well this locks to the sheet?

The ROI on the left side looks for the “T” intersection of the table lines. My code looks for line segments and throws out all but the horizontal and vertical lines that are longer than a certain length. But as you can see, it sometimes detects a line at the bottom of the ROI for no apparent reason. The horizontal detection works quite well.

Here’s the code for that ROI. The resultant X/Y drives where the 7 checkbox ROIs are.

# keep track of the longest vertical and horizontal segments found in the ROI
x_line = 0
y_line = 0
longest_x = 0
longest_y = 0
x = 0

for r in img.find_line_segments(SEARCHBOX, merge_distance=5, max_theta_difference=20):
    #print(x, ": ", abs(r.x2() - r.x1()), abs(r.y2() - r.y1()), "Diff")

    # see if we have a vertical line
    if abs(r.y2() - r.y1()) > abs(r.x2() - r.x1()):  # only vertical
        y_diff = abs(r.y2() - r.y1())
        if (y_diff > 15) and y_diff > longest_y:  # make sure it's not just a dot
            longest_y = y_diff                    # remember the longest vertical so far
            offset_x = round((r.x1() + r.x2()) / 2)  # get the average x position
            x_line = r

    # see if we have a horizontal line
    if abs(r.y2() - r.y1()) < abs(r.x2() - r.x1()):  # only horizontal
        x_diff = abs(r.x2() - r.x1())
        #print(x_diff)
        if (x_diff > 20) and x_diff > longest_x:  # make sure it's not just a dot
            longest_x = x_diff                    # remember the longest horizontal so far
            offset_y = round((r.y1() + r.y2()) / 2) - 460  # get the average y position
            y_line = r
            #print(offset_y)

    x = x + 1

if x_line != 0:
    img.draw_line(x_line.x1(), x_line.y1(), x_line.x2(), x_line.y2(), color=(255), thickness=1)
if y_line != 0:
    img.draw_line(y_line.x1(), y_line.y1(), y_line.x2(), y_line.y2(), color=(255), thickness=1)
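
For reference, here is roughly how that X/Y then drives the 7 checkbox ROIs. The box size, spacing, and gaps below are placeholders, not my real sheet layout; the idea is just that each checkbox ROI sits at a fixed offset from the locked intersection.

# Sketch (placeholder numbers): place each checkbox ROI at a fixed offset
# from the locked T intersection.
BOX_W, BOX_H, BOX_PITCH = 30, 30, 40           # placeholder size / vertical spacing
lock_x = offset_x                              # average x of the vertical line
lock_y = offset_y + 460                        # average y of the horizontal line (undo the -460)

if x_line != 0 and y_line != 0:
    for i in range(7):
        bx = lock_x + 50                       # 50 px right of the vertical line (placeholder)
        by = lock_y + 20 + i * BOX_PITCH       # 20 px below the horizontal line (placeholder)
        img.draw_rectangle(bx, by, BOX_W, BOX_H, color=(255))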

I tried using find_rects() directly on the checkboxes and was not successful, and it sucks up a lot of buffer memory with an ROI that big. Any help would be great.

I see in the video the lines are jumpy. Which line is causing the issue? The one from the barcode in the image? The barcode is being picked up as a line since it’s an alternating pattern where the derivative (which is what the algorithm looks for) is strong. Can you just not try to detect lines in the bottom area with the barcode?
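
For example (just a sketch; LINE_ROI and the band height are placeholders for whatever ROI you run the detector on and however tall the barcode region actually is), you could clip the barcode rows out of the ROI so the detector never sees them:

# Placeholder sketch: stop the line-segment search above the barcode band.
BARCODE_BAND_H = 60                        # guess -- measure it on your sheet
lx, ly, lw, lh = LINE_ROI                  # the ROI you currently search for lines
CLIPPED_ROI = (lx, ly, lw, lh - BARCODE_BAND_H)

for r in img.find_line_segments(CLIPPED_ROI, merge_distance=5, max_theta_difference=20):
    # ... same vertical/horizontal filtering as before ...
    pass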

I initially tried to use the barcode for a positioning reference, but because it can be detected anywhere along the Y axis, that wasn’t going to work. So in the video, the barcode is not used for any reference; a line is just drawn at its detection coordinates.

The barcode’s X position is actually fairly stable, so I thought about using it, but I need a Y reference. I tried to use the large horizontal line, but I could not find settings that would keep it from detecting two lines, one on the top and one on the bottom of the line. So it would detect either the top or the bottom, shifting my reference back and forth.
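
For what it’s worth, averaging the mid-Y of every long horizontal segment would land on the center of the thick line instead of flipping between its two edges. Just a sketch; HLINE_ROI and the 20 px length cutoff are placeholders:

# Sketch: if the thick line comes back as two parallel segments (its top and
# bottom edge), the mean of their mid-Y values is the line's centerline.
hline_ys = []
for r in img.find_line_segments(HLINE_ROI, merge_distance=5, max_theta_difference=20):
    if abs(r.x2() - r.x1()) > abs(r.y2() - r.y1()) and abs(r.x2() - r.x1()) > 20:
        hline_ys.append((r.y1() + r.y2()) / 2)

if hline_ys:
    y_ref = round(sum(hline_ys) / len(hline_ys))   # stable Y reference on the centerline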

The small box on the left is what is driving the position lock. I’ve used this technique before, same paper, same code, but with the older OpenMV camera, though I used the bottom corner. For whatever reason it was not stable there with the H7, so I moved it to a T intersection. For the most part it works, but you can see the random jump of the 7 vertical boxes? This is caused by a false-positive detection of a horizontal line at the bottom of the ROI on the left.


The false positive is never anywhere else. It’s always either on the line or at the bottom. I guess I should re-review my code to see if some edge case is causing it.
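
One simple guard I could add (a sketch; the 10 px margin is a guess to tune) is to throw away any segment whose midpoint sits in the bottom few rows of the ROI before the normal filtering runs:

# Sketch: drop any segment hugging the bottom edge of the search ROI.
sx, sy, sw, sh = SEARCHBOX
BOTTOM_MARGIN = 10                         # placeholder -- tune to where the phantom line shows up

for r in img.find_line_segments(SEARCHBOX, merge_distance=5, max_theta_difference=20):
    if (r.y1() + r.y2()) / 2 > sy + sh - BOTTOM_MARGIN:
        continue                           # ignore the phantom line at the bottom of the ROI
    # ... existing vertical/horizontal filtering goes here ...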

I think next year I need to put fiducials on the sheet.

Okay, I see the issue you are having…

Um, so, I’d just use a temporal filter to reject the false detection. I can’t say why it’s happening. We use the LSD line segment detector algorithm for find_line_segments(). It’s about 6k of rather complex code that we ported. I don’t know if the algorithm will always produce stable results if there’s a minor change in lighting/pixels. The LSD algorithm isn’t exactly something you can eyeball and say is stable, especially since it allocates memory and we try to gracefully handle alloc failures by tossing lines.

So, a temporal filter means you look across the last few images at where the detected box position was and ignore detections that fall outside of that range, e.g. a Kalman filter. Note that I’m not talking about averaging: just keep a list of the last N detections and check whether the next detection lies near them. If so, accept it as the new position; otherwise reject it, but still add that detection to the list. This way, if a large amount of movement happens, you update your model.
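
Something like this, a minimal sketch of that gate (not a full Kalman filter); N and MAX_JUMP are numbers you’d tune:

# Keep the last N detections and only accept a new one if it lies near the
# recent history. Rejected detections are still recorded, so a genuine large
# move gets accepted once it repeats for a few frames.
N = 5            # how many past detections to remember (placeholder)
MAX_JUMP = 8     # max pixel distance from recent detections to accept (placeholder)
history = []     # list of (x, y) detections

def filter_detection(x, y):
    ok = (not history) or any(abs(x - hx) <= MAX_JUMP and abs(y - hy) <= MAX_JUMP
                              for hx, hy in history)
    history.append((x, y))            # record it either way
    if len(history) > N:
        history.pop(0)
    return (x, y) if ok else None     # None -> keep the previous locked position

Feed it the X/Y you compute each frame; when it returns None, keep positioning the checkbox ROIs from the last accepted values.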