Lines Detected Change on successive images

I am trying to understand the line-segment example that was added in the last release. It works rather well except for one thing: as I let it run, the lines detected change in successive frames, going back and forth between detections. I made a little video of what I mean at this link: https://drive.google.com/open?id=0BwzZjH9KYYMDQmVETGlhNTVUMkk. I was wondering if there is any way to get consistency between frames? I have this same problem with other edge detection methods as well.

Thanks
Mike

PS: When is the M8 coming out? :slight_smile:

Yeah, that’s due to how the fast method we’re using to detect lines works, along with the averaging of lines… if you turn down all the averaging options and whatnot you’ll be able to see the real output of the algorithm, and how many detections there are to deal with.

For example, if you lower the threshold and lower the merging margins towards zero, you’ll start to get a lot more lines.

I think part of the issue is how I do merging. When I first wrote that code for blob detection I thought it was fine and reused it in a bunch of other places, but I’m now starting to think the merging operation is not that stable. Right now it acts more like an FIR filter than a weighted average, i.e. the order of merges affects the resulting output line, and the order in which lines are merged can easily change from frame to frame.
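A toy illustration of the point above (not the actual OpenMV source): folding lines together two at a time makes the result depend on merge order, while a single magnitude-weighted average does not.

```python
# Toy demo: pairwise merging is order-dependent; a weighted average is not.
# Lines are represented as (theta, magnitude) pairs for simplicity.

def merge_pairwise(lines):
    # Fold left, averaging two angles at a time; each merge halves
    # the weight of everything merged before it (FIR-filter-like).
    theta, mag = lines[0]
    for t, m in lines[1:]:
        theta = (theta + t) / 2
        mag += m
    return theta, mag

def merge_weighted(lines):
    # One magnitude-weighted average over all lines: order-independent.
    total = sum(m for _, m in lines)
    theta = sum(t * m for t, m in lines) / total
    return theta, total

a = [(10.0, 100), (20.0, 100), (90.0, 100)]
b = [(90.0, 100), (20.0, 100), (10.0, 100)]  # same lines, reversed order

print(merge_pairwise(a)[0], merge_pairwise(b)[0])  # differ: 52.5 vs 32.5
print(merge_weighted(a)[0], merge_weighted(b)[0])  # identical: 40.0
```

Same three lines, opposite merge order, two different answers from the pairwise fold, which is why detections can flip between frames when the merge order changes.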

If you have an idea for how to improve this I’m all ears, but I don’t have time to make it better right now:

https://github.com/openmv/openmv/blob/master/src/omv/img/hough.c#L250

https://github.com/openmv/openmv/blob/master/src/omv/img/hough.c#L635

ST hasn’t released the STM32H7 yet… so, I can’t say. For now, we’re just building an audience for it with more M7 boards and trying to get the price on it down.

I am going to take a look at it. But for now, is there a way to do it on just a single image?

Also, have you seen this? It looks interesting: http://www.ipol.im/pub/art/2012/gjmr-lsd/ and https://github.com/primetang/LSD-OpenCV-MATLAB/tree/master/pylsd. Unfortunately it looks like it wants numpy :frowning:.

Heh, heh, so, those are all stills… so, you don’t know if their code is any better. That said, it probably is.

Anyway, I think the main source of error is the fact that the camera output is rather jumpy. If you look at just what the camera sees you’ll notice that the color and gain level are rapidly moving around a lot. The algorithm I wrote isn’t that different from what was presented in the first paper you linked to.

As for making the camera more stable… OmniVision will not provide us with the “Golden Register Settings” for the OV7725 sensor. If you’d like to contribute the thing to do would be to look into modifying some of the camera register settings to make it more stable.

… I also have a private datasheet from my CMUcam4 days about how to program register settings that may be helpful but I can’t share it online. Let me know if you actually would like to look into making the register settings better.

Mmm, it looks like they’ve got a C code library for it. I could port it to the M7 pretty easily.

Yeah, the code is quite simple. Should be easy to port…

Okay, I’ll schedule to do this.

Hi kwagyeman,

Hmm. I thought there was a video embedded in the first link that I sent you. I did notice that the histogram was jumping quite a bit, which I thought might be a contributor, but I didn’t know how to stop that. I have tried changing the gain and brightness in the past but…

Now, looking at register settings and modifying them is probably something I can understand. I wouldn’t mind giving it a shot. Can’t make any promises :slight_smile: but I can try. Do you want me to send you my email, or should I post it here?

Mike

Just email openmv for the docs.

Okay, so, it’s possible to port that C code. I took a look at it and it’s doable… but somewhat a lot of work. In a lot of the OpenCV-style code there’s no effort to avoid wasting memory on pointless amounts of resolution, so I’ll have to modify a lot of things in their code to get it operational. Given this, I’m going to look into just making my own code less jumpy; it shouldn’t really be so jumpy in the first place anyway…

That said, I wish I’d known about the LSD code before I wrote my own line segment finder. It would have been less work.

Mmm, so, I just tried my method out again and the results aren’t really that bad… Um, it’s not going to work really well pointed at random stuff; that was not my goal. Have you tried having just one thing in the image? The camera is still a sensor and has to be used in a focused way.

I do agree though that results are quite jumpy. There may be some things I can do to make that better. If you use the regular find_lines method for infinite lines and lower the merging margins you’ll be able to see why the algorithm is doing certain things.
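As a user-side workaround for the jumpiness (this is not part of the firmware, just a sketch), one could smooth the endpoints of a tracked line across frames with an exponential moving average before drawing it:

```python
# Hypothetical endpoint smoother: an exponential moving average damps
# frame-to-frame jitter at the cost of some lag. `alpha` near 1.0 tracks
# new detections quickly; near 0.0 it smooths heavily.

class LineSmoother:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None  # last smoothed (x1, y1, x2, y2)

    def update(self, line):
        if self.state is None:
            self.state = tuple(float(v) for v in line)
        else:
            a = self.alpha
            self.state = tuple(a * new + (1 - a) * old
                               for new, old in zip(line, self.state))
        return self.state

# Simulated per-frame detections of the same (jittery) line:
smoother = LineSmoother(alpha=0.5)
for frame_line in [(0, 0, 100, 0), (0, 4, 100, 4), (0, 0, 100, 0)]:
    print(smoother.update(frame_line))
```

On the camera you would feed `l.line()` from the strongest detection into `update()` each frame and draw the smoothed tuple instead; matching detections to tracks when there are multiple lines is the harder part and is not handled here.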

I only got to it just now because I got my rover semi-working with the M7 and was looking for something more stable, so I finally tried the line segments example and decided to post. Sorry about that. Anyway, in looking for more info on the 7725 registers I came across the ArduCAM, which is interesting since it can be attached to the Arduino, and that I understand better than Python. Took a look at it and it had a register map with the settings and the changes they made. Haven’t gone much further yet. Here is the link if you are interested: Arduino/ov7725_regs.h at master · ArduCAM/Arduino · GitHub

Could you try disabling fast AGC/AEC (see COM8)?

Hi iabdalkader,

EDIT: Okay, I turned them both off and the screen went black. It seems to be driven by auto exposure. I did experiment a little bit and just used:
sensor.set_auto_gain(False,value=0)
sensor.set_gainceiling(2)

It seemed to get better if the edges were very distinct. If there were any shadows it would be jumpy in that area.

If I use the edges example in that same area I can see the outlines of objects very clearly.


thanks
mike

I tried it and it doesn’t seem to make any difference, but it shouldn’t blank the output, so I think you disabled something else. For future reference, here’s how to disable a bit in a register:

reg = sensor.__read_reg(addr)
sensor.__write_reg(addr, reg&mask)
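To make the read-modify-write pattern above concrete, here is a pure-Python illustration of the bit arithmetic (the register layout is an assumption based on OV76xx-family datasheets: COM8 is register 0x13 and bit 1 is taken here as the AWB enable; on the camera these helpers would wrap `sensor.__read_reg` / `sensor.__write_reg`):

```python
# Clearing a bit: AND with the complement of that bit's mask.
# Setting a bit: OR the mask back in. The & 0xFF keeps the value
# in 8-bit range since these are 8-bit camera registers.

def clear_bit(reg, bit):
    return reg & ~(1 << bit) & 0xFF

def set_bit(reg, bit):
    return (reg | (1 << bit)) & 0xFF

com8 = 0xFF                 # example value with all auto functions enabled
com8 = clear_bit(com8, 1)   # assumed: bit 1 = AWB enable on the OV7725
print(hex(com8))            # 0xfd
```

Writing `0xFD` instead of `0xFF` to COM8 is exactly the difference between the “AWB off” and “AWB on” values in the light-mode scripts later in this thread.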

I used sensor.set_auto_exposure(False, value=0) to turn the exposure off, unless it’s supposed to be +1? I didn’t notice that you were able to set the registers using a sensor command; I just looked it up now that you mentioned it.

Hey, I made a lot of progress porting the LSD algorithm to the cam. It will work on it… however, I had to modify a lot of the code. Since we need to do a release soon, this feature will not be in the next release. I’ll cut you a firmware with the fixes afterwards, though.

Thanks kwagyeman. I am surprised that you decided to work on it. I have started to look at the register mapping and the settings. Quite a bit in there.

kwagyeman, you might find this document of use in your efforts. I found it doing a web search; it has some stuff I was looking into: http://www.electrodragon.com/w/images/4/4a/OV7725_software_application_note.pdf

Playing with lighting options:

# Hello World Example
#
# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time = 2000)     # Wait for settings take effect.


clock = time.clock()                # Create a clock object to track the FPS.

def light_mode_auto():
    #sensor.__write_reg(addr, reg&mask)
    sensor.__write_reg(0x13, 0xff)  #AWB on
    sensor.__write_reg(0x0e, 0x65)
    sensor.__write_reg(0x2d, 0x00)
    sensor.__write_reg(0x2e, 0x00)

def light_mode_home():
    sensor.__write_reg(0x13, 0xfd)  #AWB off
    sensor.__write_reg(0x01, 0x96)
    sensor.__write_reg(0x02, 0x40)
    sensor.__write_reg(0x0e, 0x65)
    sensor.__write_reg(0x2d, 0x00)
    sensor.__write_reg(0x2e, 0x00)
    
def light_mode_night():
    #sensor.__write_reg(addr, reg&mask)
    sensor.__write_reg(0x13, 0xff)  #AWB on
    sensor.__write_reg(0x0e, 0xe5)
    
def light_mode_cloudy():
    sensor.__write_reg(0x13, 0xfd)  #AWB off
    sensor.__write_reg(0x01, 0x58)
    sensor.__write_reg(0x02, 0x60)
    sensor.__write_reg(0x0e, 0x65)
    sensor.__write_reg(0x2d, 0x00)
    sensor.__write_reg(0x2e, 0x00)

light_mode_auto()

while(True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    img.lens_corr(2.0)              # Undo lens distortion (must come after snapshot).

    print(clock.fps())              # Note: OpenMV Cam runs about half as fast when connected
                                    # to the IDE. The FPS should increase once disconnected.

OMG. That’s solid gold there. Yeah, we can do a lot with that.

I started playing around with the modes some more along with adjusting for banding. Here is a test sketch. May help.

# Find Line Segments Example
#
# This example shows off how to find line segments in the image. For each line object
# found in the image a line object is returned which includes the line's rotation.

# Note: Line detection is done by using the Hough Transform:
# http://en.wikipedia.org/wiki/Hough_transform
# Please read about it above for more information on what `theta` and `rho` are.

enable_lens_corr = True # turn on for straighter lines...

import sensor, image, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565) # grayscale is faster
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time = 2000)
clock = time.clock()

def light_mode_auto():
    #sensor.__write_reg(addr, reg&mask)
    sensor.__write_reg(0x13, 0xff)  #AWB on
    sensor.__write_reg(0x0e, 0x65)
    sensor.__write_reg(0x2d, 0x00)
    sensor.__write_reg(0x2e, 0x00)

    sensor.__write_reg(0x22, 0x89); #60Hz banding filter
    sensor.__write_reg(0x23, 0x03); #4 step for 60hz

def light_mode_home():
    sensor.__write_reg(0x13, 0xfd)  #AWB off
    sensor.__write_reg(0x01, 0x96)
    sensor.__write_reg(0x02, 0x40)
    sensor.__write_reg(0x0e, 0x65)
    sensor.__write_reg(0x2d, 0x00)
    sensor.__write_reg(0x2e, 0x00)
    
    sensor.__write_reg(0x22, 0x7f); #60Hz banding filter
    sensor.__write_reg(0x23, 0x03); #4 step for 60hz
    
def light_mode_night():
    #sensor.__write_reg(addr, reg&mask)
    sensor.__write_reg(0x13, 0xff)  #AWB on
    sensor.__write_reg(0x0e, 0xe5)
    sensor.__write_reg(0x11, 0x03);
    sensor.__write_reg(0x22, 0x7f); #60Hz banding filter
    sensor.__write_reg(0x23, 0x03); #4 step for 60hz
    
def light_mode_cloudy():
    sensor.__write_reg(0x13, 0xfd)  #AWB off
    sensor.__write_reg(0x01, 0x58)
    sensor.__write_reg(0x02, 0x60)
    sensor.__write_reg(0x0e, 0x65)
    sensor.__write_reg(0x2d, 0x00)
    sensor.__write_reg(0x2e, 0x00)
    
def light_mode_office():
    sensor.__write_reg(0x13, 0xfd);
    sensor.__write_reg(0x01, 0x84);
    sensor.__write_reg(0x02, 0x4c);
    sensor.__write_reg(0x0e, 0x65);
    sensor.__write_reg(0x2d, 0x00);
    sensor.__write_reg(0x22, 0x7f); #60Hz banding filter
    sensor.__write_reg(0x23, 0x03); #4 step for 60hz

#sensor.set_auto_gain(False)
#sensor.set_auto_exposure(False)


light_mode_auto()

# All lines also have `x1()`, `y1()`, `x2()`, and `y2()` methods to get their end-points
# and a `line()` method to get all the above as one 4 value tuple for `draw_line()`.

while(True):
    clock.tick()
    img = sensor.snapshot()
    if enable_lens_corr: img.lens_corr(1.8, 1.0) # for 2.8mm lens...

    # `threshold` controls how many lines in the image are found. Only lines with
    # edge difference magnitude sums greater than `threshold` are detected...

    # More about `threshold` - each pixel in the image contributes a magnitude value
    # to a line. The sum of all contributions is the magnitude for that line. Then
    # when lines are merged their magnitudes are added together. Note that `threshold`
    # filters out lines with low magnitudes before merging. To see the magnitude of
    # un-merged lines set `theta_margin` and `rho_margin` to 0...

    # `theta_margin` and `rho_margin` control merging similar lines. If two lines
    # theta and rho value differences are less than the margins then they are merged.

    # Setting both the above to zero will greatly increase segment detection at the
    # cost of a lot of FPS. This is because when less lines are merged more pixels
    # are tested... which takes longer but covers more possibilities...

    # `segment_threshold` controls line segment extraction. It's a threshold on the
    # magnitude response per pixel under an infinite line. Pixels with a magnitude
    # above threshold are added to the line segment.

    # `find_line_segments` merges detected lines that are no more than 5 pixels apart
    # and no more than 15 degrees different to create nice continuous line segments.

    for l in img.find_line_segments(threshold = 800, theta_margin = 15, rho_margin = 15, segment_threshold = 100):
        img.draw_line(l.line(), color = (255, 0, 0))
        # print(l)

    print("FPS %f" % clock.fps())
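The 5-pixel / 15-degree merge rule described in the script’s comments can be sketched as a standalone predicate (hypothetical helper names, not the firmware’s actual code):

```python
import math

# Hypothetical check mirroring the merge rule above: two segments merge
# if their nearest endpoints are within 5 pixels and their orientations
# differ by no more than 15 degrees.

def segment_angle(x1, y1, x2, y2):
    # Orientation in [0, 180) degrees; direction along the segment is ignored.
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180

def should_merge(seg_a, seg_b, max_gap=5, max_angle=15):
    ax1, ay1, ax2, ay2 = seg_a
    bx1, by1, bx2, by2 = seg_b
    # Smallest distance between any endpoint of A and any endpoint of B.
    gap = min(math.hypot(px - qx, py - qy)
              for (px, py) in ((ax1, ay1), (ax2, ay2))
              for (qx, qy) in ((bx1, by1), (bx2, by2)))
    diff = abs(segment_angle(*seg_a) - segment_angle(*seg_b))
    diff = min(diff, 180 - diff)  # orientations wrap around at 180 degrees
    return gap <= max_gap and diff <= max_angle

print(should_merge((0, 0, 50, 0), (53, 1, 100, 2)))   # True: ~3 px gap, ~1 deg
print(should_merge((0, 0, 50, 0), (60, 0, 100, 40)))  # False: 10 px gap
```

This only checks endpoint proximity; the real merging presumably also handles overlapping collinear segments, which this sketch does not.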

Cheers
Mike

Nyamekye, I posted a couple of documents on the OmniVision cams that you might find interesting in case you change the sensor: https://drive.google.com/open?id=0BwzZjH9KYYMDQmVETGlhNTVUMkk