lines.py not working as expected

Hi,

I’ve been trying to get lines.py running, but I always get lines that span the image and seem unrelated to the found edges. I’m currently using the M7, but pretty much the same thing happened with the M4.

Please help.

Thanks


Sean

Hi, I’ve been working on this function and I almost have a new version of the firmware done that’s much improved for finding lines.

Would you like the early release firmware? I don’t have the docs yet, but, I can provide an example script.

As for lines spanning the image: that’s what find_lines returns. The Hough detector used to find lines gives you infinite-length lines. Please read up about the Hough detector here: Bruno Keymolen: Hough Transformation - C++ Implementation
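For reference, here’s a minimal pure-Python sketch (not the firmware’s actual code) of how an infinite Hough line given as (rho, theta) can be turned into two drawable endpoints. It assumes the usual Hough convention: rho is the perpendicular distance from the image origin and theta is the angle of that perpendicular.

```python
import math

def hough_line_endpoints(rho, theta_deg, length=1000):
    # Point on the line closest to the origin.
    theta = math.radians(theta_deg)
    x0 = rho * math.cos(theta)
    y0 = rho * math.sin(theta)
    # The line's direction vector is perpendicular to (cos, sin);
    # extend 'length' pixels in both directions to span the image.
    dx, dy = -math.sin(theta), math.cos(theta)
    return (round(x0 + length * dx), round(y0 + length * dy),
            round(x0 - length * dx), round(y0 - length * dy))
```

For example, rho=5 with theta=90 degrees describes the horizontal line y = 5, and rho=3 with theta=0 the vertical line x = 3.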

As for the new code: I’ve made it work on RGB565 images, and it runs faster, so you can see what lines are being found. Additionally, I output the angle and rho for each line so you can classify things.

The threshold is the number of pixels voting for the line. When you use a low threshold, small edges will produce lines like in your example. Remember, those are lines, not line segments, so they are infinite.
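To make the vote counting concrete, here’s a toy pure-Python Hough accumulator (an illustration only, not the firmware code): every edge pixel votes for each (rho, theta) bin it could lie on, and the threshold is the minimum vote count a bin needs before it is reported as a line.

```python
import math

def hough_votes(points, angle_step=2, rho_step=1):
    # Each point votes for every (rho, theta) line that passes through it.
    acc = {}
    for x, y in points:
        for theta_deg in range(0, 180, angle_step):
            t = math.radians(theta_deg)
            rho = round((x * math.cos(t) + y * math.sin(t)) / rho_step)
            key = (rho, theta_deg)
            acc[key] = acc.get(key, 0) + 1
    return acc

# Ten collinear points on y = 7 put 10 votes into the (rho=7, theta=90) bin;
# a threshold of 10 keeps that line, while isolated pixels stay below it.
points = [(x, 7) for x in range(10)]
acc = hough_votes(points)
```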

Thanks for the quick and useful responses, as usual. Yes, an early version of the firmware would be awesome!

Thanks again!


Sean

This code runs like a bat out of hell now: 30 FPS on the M7 in grayscale, and 30 FPS in color when the FB is disabled.
firmware.zip (812 KB)
find_lines.py (1.7 KB)

Wow, that is fast. My objective is to turn the results of canny edge finding into straight line segments. If you tell me that’s on the way, or there’s an easy way to get it from find_lines, I’ll wait for it; otherwise, I think I’m going to need to dive into the source code.

thanks


Sean

You don’t need canny anymore…

You can call find_lines directly on the raw image, though the output will be a little jumpier, or you can call it on a canny image if you like.

I’m not sure I will be implementing line segment finding… that requires walking each infinite line and then creating a new line under it for each continuous run of white pixels. Given the low amount of pixel access this needs, you could write it all in Python and it wouldn’t be that slow. You just need a function to walk the line. Any line drawing function should work for this.
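A pure-Python line walker of the kind described above, using Bresenham’s algorithm, might look like this (a sketch; you could pair it with `img.get_pixel(x, y)` to test each pixel under the line):

```python
def walk_line(x0, y0, x1, y1):
    # Bresenham's algorithm: yield every pixel coordinate on the line,
    # from (x0, y0) to (x1, y1) inclusive, in any octant.
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield (x0, y0)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```

For example, `list(walk_line(0, 0, 3, 3))` yields the four pixels along the diagonal.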

I found pixel access with get_pixel and set_pixel to be a bit slow in Python — is there a faster way? I assume a compiled function would be significantly faster? Walking the canny lines was my plan. A lot less overhead if I don’t have to fork my own firmware, though.

I’m getting the development environment set up anyway, but the build stops because it can’t find /micropython/stmhal. It looks like the micropython files have been moved around or something? Does the makefile in the repo need to be updated, or am I doing something wrong?

Thanks


Sean

micropython is a submodule (a separate repo). If you just want to clone the openmv repo, use the “git clone --recursive” switch to clone the submodule as well. If you want to fork the openmv repo, you’ll have to init and clone the submodule yourself. If you want to fork both the openmv repo and the MP repo, make sure to fork our MP fork and switch to the openmv branch. I know it’s kinda confusing :slight_smile: I’ve never done that before, so I don’t have detailed instructions, but Kwabena might be able to help more.

OK, really close — now getting “FLASH_TEXT overflowed by 1480 bytes” in OPENMV2.

OPENMV2 is the M4 camera. Do you have an M4 or an M7?

If you have an M7 cam use

make -j5 TARGET=OPENMV3

EDIT: Note that it shouldn’t overflow for the M4 or M7; not sure what’s going on there. Maybe you added some new code or tables.

Hi, I’m just going to write this for you. I went to dinner with some friends and worked out all the steps in my head. It’s not very hard to do after building all the code for finding infinite lines.

So, the function will be called find_line_segments. Basically, it will call find_lines, then, using the list of infinite lines returned, walk each line in the image using a line drawing function to go from the start point to the end point. While it walks each line, it will compute the Sobel magnitude of the pixels under that line. The magnitudes will then feed Otsu’s adaptive threshold algorithm, which will spit out line segments for me per infinite line. I’ll put all these line segments on a list.
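The thresholding step can be sketched in plain Python. Here the Sobel magnitudes under one infinite line are stood in for by a ready-made list of values (computing them is assumed done elsewhere), and continuous runs above the Otsu threshold become the per-line segments:

```python
def otsu_threshold(values, bins=256):
    # Otsu's method: pick the threshold that maximizes between-class variance.
    hist = [0] * bins
    for v in values:
        hist[min(int(v), bins - 1)] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                     # mean of the "background" class
        m1 = (total_sum - sum0) / w1       # mean of the "edge" class
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def runs_above(values, thresh):
    # Continuous stretches above the threshold become (start, end) index pairs.
    segments, start = [], None
    for i, v in enumerate(values):
        if v > thresh and start is None:
            start = i
        elif v <= thresh and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(values) - 1))
    return segments

# A toy magnitude profile: two strong runs separated by flat background.
profile = [0] * 5 + [200] * 10 + [0] * 5 + [210] * 3 + [0] * 2
segs = runs_above(profile, otsu_threshold(profile))
```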

That said… one problem is that, well, basically anything under the infinite line will become a line segment. Can you explain to me how you plan to use all this info? Because you’re going to get a LOT of small line segments. Filtering them, etc. is going to be hard. From my perspective it’s going to be overwhelming. Note that line segments will just be (x1, y1, x2, y2, theta) tuples.

Note you will be able to call this on a canny image or just a plain image without any canny applied. The canny image will produce more accurate lines… but, your FPS will be lower.

It’s a clean copy of the current git repo.

Switching over to OPENMV3 in the makefile…

#TARGETS
TARGET ?=OPENMV3 #instead of OPENMV2

my QTCreator set up includes a make argument so make build looks like:

make -j5 TARGET=OPENMV3

Now I get past the overflow error,

But now I get the message:
Error: unknown CPU ‘cortex-m7’ in startup_stm32f765xx.s 47

Wow, kwagyeman, thank you. I would have had to figure out how to join or discard small line segments anyway. So, much appreciated.

I plan to represent edges as lines as a form of vector conversion for scaling. The lines should approximate the canny edge image fairly accurately. I understand that will be a lot of lines, but I would estimate a typical image to be in the hundreds of line segments (and not thousands, unless you have something else in mind).

I will go order some more M7’s as a token of my gratitude :slight_smile:

This is a toolchain issue. You need to download and install the ARM GCC toolchain and add it to your path (we’re using the 2016-q4 release).


Note, re: Canny — I’m going to make some performance improvements very soon. You can use it as-is for now, but know it will get faster.

Hundreds of line segments sounds right. Please put some effort into thinking about what you’re going to do with all the above tuples. :slight_smile:

Hey, so, I got this kinda working. You can find line segments. However, there’s an issue. Due to a lack of memory on board, the Hough accumulator space doesn’t have enough resolution to “lock” lines onto all edges of objects perfectly. For infinite Hough lines, the system will draw the line within 1-3 pixels, on average, of where the actual line edge is.

This behavior is fine for infinite lines, since you don’t necessarily need the lines right on top of the object. But for line segments I have to actually look under the infinite lines, which means this amount of pixel error causes failures. Additionally, due to this lack of resolution, the Hough lines tend to be rather jumpy, because peaks are spread out over multiple bins instead of just one.

Anyway, the result of this issue is that I find under 50% of all the line segments of an object. I think I can combine nearby line segments to improve this… but at the end of the day the main problem is that there’s not enough Hough resolution to draw the line on the edge of an object perfectly.

So, I’m not sure what I can do about this. I can’t increase the Hough resolution — not enough RAM. I think I can make the line finder also look for candidate lines to the right and left of the actual Hough line. I’ll see if that helps. I have to think more about how to do this.

Hi, I sure appreciate you doing this so quickly and I’d like to try it out. Also, I think I will try a somewhat different approach that I’ve been thinking about: canny edge crawling and iterative line fitting. It might be a bit brute-force, but I think it will go light on RAM, since the longest line will be under 700 pixels (roughly) and I think I can get by with about 2.5k of RAM to process it.
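The iterative fitting idea could be prototyped in plain Python roughly like this (a hypothetical sketch, not Sean’s exact plan — it’s a split-and-fit pass in the Ramer-Douglas-Peucker style): recursively split an ordered chain of crawled edge pixels wherever it strays more than `tol` pixels from a straight fit between the chain’s endpoints.

```python
def point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def fit_segments(points, tol=1.5):
    # Split the ordered chain at its point of maximum deviation until every
    # piece is within tol pixels of a straight segment.
    if len(points) < 3:
        return [(points[0], points[-1])]
    a, b = points[0], points[-1]
    i, d = max(((i, point_line_dist(p, a, b)) for i, p in enumerate(points)),
               key=lambda t: t[1])
    if d <= tol:
        return [(a, b)]
    return fit_segments(points[:i + 1], tol) + fit_segments(points[i:], tol)
```

An L-shaped edge chain, for instance, splits cleanly into its two straight sides. Memory use is dominated by the chain itself, which fits the small-RAM budget described above.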

I got the development environment up and running so I might try it out there. Probably just in python first.

Thanks for all your support!


Sean

If you edit the firmware for the Hough lines, you could… increase the resolution to solve this problem. But this would prevent operation beyond 160x120. I allow 320x240 resolution, so I don’t do this. But if you edit the C code, you could get to half-angle resolution at 160x120.

You can change one number in the code to get higher Hough resolution. Right now the code covers 0-180 degrees in 2-degree steps. Switching to 1-degree steps requires just one line of code changed, but it will increase memory usage by 4X (the rho resolution doubles too). Note that this will disable 320x240 resolution support, though.
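As a rough back-of-the-envelope (with assumed values — 2-byte accumulator cells and rho spanning plus/minus the image diagonal — which may not match the firmware’s actual layout), you can see where the 4X comes from:

```python
import math

def hough_accumulator_bytes(w, h, theta_step_deg, rho_step, cell_bytes=2):
    # rho ranges over [-d, +d], where d is the image diagonal.
    d = math.ceil(math.sqrt(w * w + h * h))
    n_rho = int(2 * d / rho_step) + 1
    n_theta = int(180 / theta_step_deg)
    return n_rho * n_theta * cell_bytes

# Halving the angle step (2 -> 1 degree) while also halving the rho step
# roughly quadruples the accumulator size at 160x120.
coarse = hough_accumulator_bytes(160, 120, 2, 1.0)
fine = hough_accumulator_bytes(160, 120, 1, 0.5)
```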

I think, though, that you need to go beyond 1-degree resolution — like half-angle resolution and finer. Doing that requires replacing a bunch of code.

Got it working. I made it walk 5 lines right next to each other for each Hough line detected. The number of segments is high, however (in the picture below, most segments are only 10 px long). Once I get the detection right, I’ll figure out some merging algorithm to connect all these line segments that lie right next to each other. I think merging needs to be done after each walk of a Hough line, and then once more after all lines are walked.
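One possible merging pass could be sketched like this (a hypothetical helper, using plain (x1, y1, x2, y2) tuples without the theta field from earlier): greedily join segments that are nearly collinear and whose facing endpoints are close together, repeating until no more merges happen.

```python
import math

def merge_segments(segs, max_gap=5, max_angle_diff=10):
    # Greedily merge segments that are nearly parallel and whose facing
    # endpoints lie within max_gap pixels of each other.
    def angle(s):
        return math.degrees(math.atan2(s[3] - s[1], s[2] - s[0])) % 180
    def dist(ax, ay, bx, by):
        return math.hypot(bx - ax, by - ay)
    segs = list(segs)
    merged = True
    while merged:
        merged = False
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                a, b = segs[i], segs[j]
                da = abs(angle(a) - angle(b))
                if min(da, 180 - da) > max_angle_diff:
                    continue  # orientations differ too much
                if dist(a[2], a[3], b[0], b[1]) <= max_gap:
                    segs[i] = (a[0], a[1], b[2], b[3])  # join end-to-start
                    del segs[j]
                    merged = True
                    break
            if merged:
                break
    return segs
```

Two collinear horizontal pieces with a 2-pixel gap collapse into one segment, while a perpendicular segment is left alone. A real pass would also need to handle segments listed in reverse order and near-parallel overlaps.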
Untitled.png