kwagyeman - I started taking a closer look at the settings they are using for the different light modes. For all modes other than direct sunlight and night, three of the register settings (0x65, 0x2e, and 0x2d) are the same. That leaves only the Red and Blue gains being adjusted depending on the lighting. If you plot the gain values against lux or color temperature, you get an almost linear relationship. I am attaching a couple of plots - yes, I converted the hex values to decimal.
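If the fit holds, you could probably just interpolate the gains directly instead of picking a mode. A rough sketch of the idea - the register addresses and endpoint values below are placeholders, not the real ones from the data sheet:

import sensor

# Placeholder register addresses - substitute the actual Red/Blue gain
# registers from the data sheet; these are NOT the real addresses.
RED_GAIN_REG = 0x00
BLUE_GAIN_REG = 0x01

# (lux, red_gain, blue_gain) endpoints read off the linear fit - made-up numbers.
LOW = (50, 0x80, 0x40)
HIGH = (1000, 0x40, 0x80)

def gains_for_lux(lux):
    # Clamp to the fitted range, then linearly interpolate both gains.
    t = min(max((lux - LOW[0]) / (HIGH[0] - LOW[0]), 0.0), 1.0)
    r = int(LOW[1] + t * (HIGH[1] - LOW[1]))
    b = int(LOW[2] + t * (HIGH[2] - LOW[2]))
    return r, b

r, b = gains_for_lux(300)
sensor.__write_reg(RED_GAIN_REG, r)   # sensor must be initialized first
sensor.__write_reg(BLUE_GAIN_REG, b)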
Hey, I made a lot of progress on porting the LSD algorithm to the cam.
Was wondering if you ever had a chance to finish porting the algorithm. I know you are all busy, but I was just wondering. I'm still tweaking and testing the rover with the camera. It had a couple of good runs, but not consistently.
No, I haven't been working on it. Are you building a line follower? There's some good code I've been working on for that under OpenMV/open projects. I've been uploading my DIY robocar race code.
No, not a line follower. I'm doing obstacle detection and avoidance (see Obstacle Detection and Avoidance - Project Discussion - OpenMV Forums) using an OpenMV camera and a single VL53 distance sensor. Depending on the lighting and contrast, edges may or may not be detected. I got around the issue of changing detection values by analyzing at least 5 frames on a Teensy 3.5 before making a decision on which way to turn, roughly as in the sketch below. I'm looking to improve detection.
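The real logic runs on the Teensy, but the voting idea boils down to something like this (detect_direction() is a stand-in for the per-frame analysis):

# detect_direction() should return "left", "right", or "straight" for one frame.
def decide_turn(detect_direction, frames=5):
    votes = {"left": 0, "right": 0, "straight": 0}
    for _ in range(frames):
        votes[detect_direction()] += 1
    # Only act on the suggestion that won the majority of the window.
    return max(votes, key=votes.get)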
Okay, I got it kind of working. It generates very stable output but runs at 2 FPS at 160x120 and 8 FPS at 80x60. The code is crashing right now, though, if the input shifts around too much. I have to trace where there's a memory-violation bug (this is always the case when porting desktop libs, which use the stack like it's free). While tracing the code I'll look to see where all the speed is lost, too. The low FPS doesn't make much sense given how the algorithm works…
Thanks for keeping me posted. Everything turns out to be more work than planned. I created the pull request for the Autonomous Rover. I got a little carried away with the write-up. Feel free to edit to your heart's content.
Here's a demo with the better line segment detector. The same line-segment-detector script released with OpenMV IDE will run the code. It has much better segment detection performance… but no parameters, so all the arguments in the find_line_segments call aren't needed.
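Roughly, the script reduces to something like this - the standard boilerplate with a bare find_line_segments call:

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)  # 160x120
sensor.skip_frames(time=2000)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    # No tuning arguments anymore - the new detector has no parameters.
    for l in img.find_line_segments():
        img.draw_line(l.line(), color=127)
    print("FPS:", clock.fps())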
I'll see what I can do about making it faster now. It doesn't crash anymore, however. There was an infinite loop in the code due to floating-point precision loss. firmware.zip (1.99 MB)
Okay, the code is so slow because it uses sin/cos/atan2 in a loop that’s called a lot. So, I’ll just use lookup tables for these values and it should be really fast…
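The actual change is in the C firmware, but the lookup-table idea is just this (illustrated in Python):

import math

STEPS = 1024  # table resolution over one full turn
SIN_LUT = [math.sin(2 * math.pi * i / STEPS) for i in range(STEPS)]

def fast_sin(theta):
    # Snap the angle to the nearest table entry instead of calling math.sin.
    return SIN_LUT[int(theta * STEPS / (2 * math.pi)) % STEPS]

def fast_cos(theta):
    # cos(x) == sin(x + pi/2), so reuse the sine table with a quarter-turn offset.
    return SIN_LUT[(int(theta * STEPS / (2 * math.pi)) + STEPS // 4) % STEPS]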
Mmm, changing that only improved the FPS by 1. I see now that this isn't the problem… it's how it sets up its data structures (lots of poorly structured inner loops). I won't be able to get it to go faster without redoing a lot at the algorithmic level.
Question: for find_line_segments, is precision more valuable than speed? I'm thinking I'll just change the implementation to this and get rid of the old one. Also, what's your opinion on merging lines? This algorithm doesn't do that, so you're going to get a lot of tiny lines. I need to put a front-end line merger on it.
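Something along these lines is what I have in mind for the merger - the thresholds and the segment layout here are illustrative, not the firmware's actual representation:

# Segments here are (x1, y1, x2, y2, theta_degrees) tuples.
def endpoints_close(p, q, dist=5):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= dist ** 2

def try_merge(a, b, max_dtheta=10):
    # Only merge segments with similar angles (ignores wraparound for brevity).
    if abs(a[4] - b[4]) > max_dtheta:
        return None
    # If b starts where a ends, return one segment spanning both.
    if endpoints_close((a[2], a[3]), (b[0], b[1])):
        return (a[0], a[1], b[2], b[3], (a[4] + b[4]) // 2)
    return None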
Try it out and let me know if you think it outputs too many tiny lines.
Hi Nyamekye, I can't tell you how much I appreciate the time you are spending on this. As to your question on speed vs. accuracy: isn't that always the issue?
I know this may sound like a cop-out, but I think there is probably a break-even point between the two; you want to balance them. In other words, you want to get it as accurate as possible while keeping the speed at least reasonable. For instance, with my current algorithm I get double-digit FPS, but the minute I try to send an image stream over WiFi it drops to about 2 FPS, which is unacceptable, while without streaming it is acceptable.
What I think may be more important is consistency from frame to frame. Right now the data changes so quickly from frame to frame that it's hard to do any sort of further analysis of the image. In this case, yes, I would sacrifice some speed for accuracy or frame-to-frame consistency.
As for finding a bunch of tiny lines vs. merging lines: I have to give it a try to see how bad it really is; there must be some threshold values that can be set. I know from playing around with the lighting modes that you do have some control - I am running night mode more often than not, which is rather surprising. I'll give it a try as soon as I can over the next couple of days - I've found that doing things late at night I make more mistakes.
Okay, here's my final result along with a test script. It runs at 3 FPS at 160x120 and 8 FPS at 80x60. It's still usable at 80x60. Let me know if this is useful to you. Some other folks wanted this algorithm to be more stable… so I'm thinking of just switching to this code. I still have the normal find_lines for fast line tracking. openmv.zip (1.99 MB)
Hi K. I started playing with it, and yes, it can be useful. I'm finding it interesting that it works a bit better in lower brightness (I used the auto setting from my previous posts). The hard lines are much more consistent between frames. Going to play with it more tonight and tomorrow - getting ready for the party.
This is getting more interesting. I decided to print the line info, and it looked like there were more lines on the screen than there was line info, or it could just be the timing of the print. Anyway, I wanted to know if there is a way to access the individual line info that is printed.
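Something like this is what I'm after - per-field access inside the loop (accessor names taken from the OpenMV docs, so hopefully these exist on the line objects):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

img = sensor.snapshot()
for l in img.find_line_segments():
    # Pull the printed info apart field by field.
    print("endpoints:", l.x1(), l.y1(), l.x2(), l.y2())
    print("length:", l.length(), "theta:", l.theta(),
          "rho:", l.rho(), "magnitude:", l.magnitude())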
Hi K. Sorry for not posting sooner. I have been busy with a couple of other projects. Anyway, I've been playing around with the function on and off for a few days now, including using magnitude as a filter as well as looking for lines only in the lower half of the screen. It works nicely if there is sufficient contrast in the image and enough lighting to see distinct edges. In my case I am looking at scenes where the gradient in colors or grayscale is not always distinct. I got it functioning better by using the auto lighting mode from the app note as well as boosting the contrast to 2; a sketch of my filtering is below. One of the things I noticed is that in some cases I am still getting variation in lines from frame to frame (like it's cycling). I was wondering: is there a way to take, say, x number of frames, read the lines, create a composite of those frames, and then maybe merge the lines?
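For reference, my filtering looks roughly like this (MIN_MAG is just whatever I'm tuning at the moment, not a recommendation):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.set_contrast(2)       # the contrast boost mentioned above
sensor.skip_frames(time=2000)

MIN_MAG = 4  # illustrative threshold

img = sensor.snapshot()
# Search only the lower half of the frame.
lower_half = (0, img.height() // 2, img.width(), img.height() // 2)
for l in img.find_line_segments(roi=lower_half):
    if l.magnitude() >= MIN_MAG:
        img.draw_line(l.line(), color=255)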
Yeah, just write the frames to the SD card, then read them back and use the blend method to blend them together. It would be optimal if snapshot could blend a frame while capturing it… this is possible but not implemented currently. You can then call find_line_segments on the blended data.
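Something like this sketch - I'm assuming here that replace() and blend() accept file paths, and the alpha weights are just a guess at how much each frame should contribute:

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

# Write three frames out to the SD card.
for i in range(3):
    sensor.snapshot().save("/temp%d.bmp" % i)

# Read them back and blend into one composite.
img = sensor.snapshot()
img.replace("/temp0.bmp")
img.blend("/temp1.bmp", alpha=128)  # roughly a 50/50 mix
img.blend("/temp2.bmp", alpha=85)   # fold in the third frame

for l in img.find_line_segments():
    print(l)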
Please let me know if I should replace the previous algorithm…
Yeah, that's what I thought would have to be done, but I was just curious if there was another way. As for replacing the old method with the new method: I'm not 100% sure. On one hand, the new method is probably better at finding line segments and, for me, has fewer variables to adjust, but if a lot of lines are found there is a big performance hit versus the old method. The old method is quicker at finding lines, and with some more tweaking of the variables it would probably find the lines of interest. I did make a video comparing the two methods, https://drive.google.com/open?id=0BwzZjH9KYYMDRHJicWFSQjh6VE0 so you can see for yourself. Please note that the test was in a very dark room with night mode set for both tests.
For me the frame-rate hit is not that big of a deal, and I could adjust for it, but for most users and how they are probably going to use it, the old method would be better. Like I said, there are pros and cons to both: one is fast and the other is slow. I'm not sure which would be better. If the frame rate of the new method were on par, I would definitely say the new method. Any way to have both?
I will leave it up to you, as you know the user environment better.