MV primer?

Ok, I don't have much background in computing. Can anyone direct me to some sources to learn more about machine vision topics like symbol recognition, optical flow, and all the other functionality of this wonderful little device? Thanks!

Hi, I’m working on the tutorial right now. But, it won’t be done for a while.

How about this, what would you like to do with your OpenMV Cam and I’ll walk you through that.

I have the idea that I would like to make a line-following robot for a grid. At each grid intersection could be a symbol, so the robot would know which node it was approaching and its pose relative to that node. The robot could take instructions to get to the next node or waypoint. In this way it could go from point to point and know where it was and where it was going. I know I don't have the skill to pull this off, but this is what I would "like" to do with the OpenMV.

My other idea is to sense optical flow from a TV screen and use it as an input to pose an articulated home theater chair. Not sure if optical flow is the right terminology... If things go by fast, tilt the chair forward; if things pass from left to right, roll the chair left and lean into the motion.

The line-following example provided with OpenMV was exciting to see. It is amazing how well the steering angle worked. Thanks so much for the Kickstarter. You guys are amazing.

Thanks for the kind words. Okay, well, let’s get into it:

For the line following application, the easiest thing to do is to start from the code in the line following demo. That gets your robot to follow the line. You of course have to use the steering angle to drive a PID loop. It's best to test the robot on a winding line to get that tuned.
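To make the PID part concrete, here's a rough sketch in plain Python of what that loop could look like. The gains, the 5-degree error, and the 20 ms timestep are made-up tuning values, not anything from the demo:

```python
# Minimal PID controller driven by the steering-angle error.
# Gains and timings below are illustrative placeholders.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        # Accumulate the integral and compute the derivative term.
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.0, kd=0.1)
# Suppose the camera says we're 5 degrees off the line, 20 ms per frame:
correction = pid.update(error=5.0, dt=0.02)
```

You'd feed the steering angle from the camera in as the error each frame and apply the correction to your motors. Start with just the P term, then add D to damp oscillation on the winding line.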

Next, for dealing with the symbol at each grid intersection: since the camera will likely be at an angle, the easiest thing to do is to use color tracking to identify a color code. You'd use the find_blobs function to look for two very contrasting colors. Once you have those blobs, you can use the find_markers function to merge overlapping color blobs, and that gives you the color code.

Now… as for different symbols, the best thing to do is to design the color code so that you can encode a value in its rotation with respect to the approaching robot. All color blobs have an angle, so if you set up the markers each rotated by 90 degrees or so, you get 4 unique symbols per color code pair.
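The rotation-to-symbol step is just quantizing the blob's angle into one of four 90-degree buckets. A quick sketch (the angle convention and the 45-degree centering are assumptions; adjust them for however your blob angles actually come out):

```python
def symbol_from_rotation(angle_deg):
    """Map a blob rotation angle (in degrees, 0-360) to one of four
    symbol IDs (0-3), assuming the markers are laid out 90 degrees
    apart.  The +45 centers each bucket on 0, 90, 180, 270."""
    return int(((angle_deg % 360) + 45) // 90) % 4
```

So an angle of roughly 0 gives symbol 0, roughly 90 gives symbol 1, and so on, with a 45-degree tolerance on either side.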

Next, if you need more symbols, you can use different color code pairs. Note that by contrasting colors I mean pairs like Red/Green or Red/Blue. Also, color codes should be circular, with one color taking up a Pac-Man-like wedge.

As for the TV application: the next firmware update will feature optical flow along with image windowing. This will let you define an active area of the image to capture, and then run optical flow on just that region. Optical flow is very good, so it will basically do what you want in one line of code. Note that the image can't completely change between frames; the optical flow looks for translations to the left, right, up, and down.
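Once you have a per-frame translation, mapping it to chair motion is simple. Here's a rough plain-Python sketch of that mapping; the gains, the deadband, and the sign conventions are all made-up assumptions you'd tune for your chair:

```python
def chair_command(dx, dy, gain_roll=1.0, gain_tilt=1.0, deadband=0.5):
    """Map a per-frame optical-flow translation (in pixels) to chair
    roll/tilt commands.  Convention assumed here: positive dx means
    the scene is moving right, so roll left into the motion; a large
    vertical translation |dy| means fast motion, so tilt forward.
    The deadband ignores tiny jitter between frames."""
    roll = -gain_roll * dx if abs(dx) > deadband else 0.0
    tilt = gain_tilt * abs(dy) if abs(dy) > deadband else 0.0
    return roll, tilt
```

You'd low-pass filter the commands before driving the actuators so the chair doesn't twitch on noisy frames.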

So, what you should do next is get a robot together and get it following a straight line. Once you have that, let me know, and then you can move on to the color code part.

Do you need help with robot PID stuff?

Wow, thanks. I will likely need more help. I will post again when I am ready for some more assistance!

Nyamekye, can you recommend a robot and base that would be compatible with my goal? There's a lot I still don't understand, like how much memory is needed and how the turn angle will be sent to the robot. I'm looking for a simple, direct way. Also, I understand the basics of PID (what it does, etc.), and Arduino has a PID library. I'm confused because most robot bases are skid steer, but OpenMV outputs a steering angle. Do I need a servo-steered robot?

You just need a differential drive robot. The turning angle just lowers the power on one motor and raises the power on the other. The power supplied to both motors should be enough that the robot keeps going straight when the angle is zero.
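That differential mixing is a couple of lines. A sketch in plain Python (the gain, base power, and power limits are hypothetical tuning values for whatever motor driver you end up with):

```python
def mix(base_power, steering_angle, gain=1.0, limit=100):
    """Convert a steering angle into differential motor powers.
    Zero angle -> both motors at base_power, so the robot drives
    straight.  Positive angle speeds up the left motor and slows
    the right one; powers are clamped to +/- limit."""
    delta = gain * steering_angle
    left = max(-limit, min(limit, base_power + delta))
    right = max(-limit, min(limit, base_power - delta))
    return left, right
```

Feed the PID output in as the steering angle and send the two powers to your motor driver each frame.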

The OpenMV Cam should be all that you need for the Robot. It can do PWM and drive Servos. I highly recommend that you find a robot chassis that uses servos to drive around. E.g. Something like this: http://www.ebay.com/itm/like/252423307573?lpid=82&chn=ps&ul_noapp=true.

You’ll also need to find some kind of mount online that you can attach the camera to, so that it's looking forward and down in front of the robot.

Thanks for your generous tutelage - order has been placed.

So you are saying a color code. Say you pull up to an intersection and there is a red and blue square: you assign those two a value? So they would equal 2, and 2 would be the code to go right? Am I grasping what you are saying? I am pretty fluent in C++ and Java, so I'm just trying to figure this out. I need better clarification on the way you think. Thank you so much.

So, you pass find_blobs a list of color tuples, e.g.:

[ (l_min, l_max, a_min, a_max, b_min, b_max),
  (l_min, l_max, a_min, a_max, b_min, b_max),
  (l_min, l_max, a_min, a_max, b_min, b_max),
  …,
  (l_min, l_max, a_min, a_max, b_min, b_max) ]

The blobs returned from the find_blobs function have a color index number set to 2^N, where N is the 0-based index position of the color that produced the blob. So, if a blob was made from colors matching the first tuple in the list above, it would have a color number of 1. Blobs from the second tuple would have a color number of 2, from the third 4, etc.

The find_markers function then merges overlapping blobs. The merge process logically ORs all the color numbers. Since each number is a power of two, they all OR together without colliding.
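The bit arithmetic behind that is easy to demo in plain Python (this just illustrates the power-of-two OR trick, not the actual OpenMV internals):

```python
def color_number(index):
    """A blob made from the i-th color threshold in the list gets
    color number 2**i, i.e. one distinct bit per color."""
    return 1 << index

def merge_codes(blob_codes):
    """Overlapping blobs merge by OR-ing their color numbers.
    Because each color owns its own bit, the merged value tells
    you exactly which colors were present."""
    merged = 0
    for code in blob_codes:
        merged |= code
    return merged
```

So a merged blob made from the first and second colors has code 1 | 2 = 3, while first and third gives 1 | 4 = 5, and each combination is unambiguous.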

Now, you just have to check the color number of the merged blobs to see what is there.