Robotics competition entry - seeking hints

Hi, I’m an Australian mechatronics engineering student working in a team of six students from Australia and other countries to create an autonomous agent capable of navigating a black and white floor and locating red and blue cubes within this world. I have selected the OpenMV Cam to develop a proof of concept and demonstrate to the team that we can use it for the challenge. I have no previous experience with OpenMV, though I have done a free lynda.com course on OpenCV.
Some of the things I want to prove out are:
- using colour blob detection to find the coloured cubes and, hopefully, getting a range based on knowing their size
- using the field/arena markings as a giant fiducial and extracting the robot’s and cubes’ coordinates in relation to it; see below:


- translating the position of the cubes in relation to the robot and/or the robot’s manipulator

- guiding the robot to stay on the black areas

Some hints on how to start, what functions to use and whether this is all possible would be greatly appreciated.

Also, is it possible to use the PWM pins to control servos? I’m not sure why there are only three dedicated servo pins but more pins with PWM.
We will probably integrate with another controller, but I would still like to explore servo control with the OpenMV.

Hi, the OpenMV Cam is VERY good at color tracking. We can do this at extremely high frame rates. What you learned in OpenCV should be directly applicable to the OpenMV Cam.

A few notes:

Getting the range of a blob from its size is an inverse-square type of thing. I.e., the closer you are, the better you know the range; the farther away, the worse your estimate becomes. So, know this…
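As a rough sketch of the idea under a pinhole-camera assumption (the cube size and focal-length constant below are hypothetical; calibrate against a cube at a known distance):

```python
# Rough sketch: range from blob width under a pinhole-camera model.
# Z = F_PX * W / w_px, where F_PX is the lens focal length in pixels,
# W the real cube width, and w_px the blob width in pixels.
# Calibrate once: F_PX = w_px * Z_known_mm / CUBE_WIDTH_MM.

CUBE_WIDTH_MM = 50.0   # assumed cube size; measure your actual cubes
F_PX = 640.0           # assumed focal length in pixels; calibrate yours

def estimate_range_mm(blob):
    # blob.w() is the blob width in pixels. Because range goes as 1/width,
    # one pixel of noise on a small, distant blob shifts the estimate far
    # more than the same noise does up close.
    return F_PX * CUBE_WIDTH_MM / blob.w()
```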

As for the course? I’m not sure what the problem is. Staying on the black is easy. But you want to know your location? Is the robot on the course navigating around in the black areas? If so, finding the red/blue cubes will be easy. But, in general, figuring out where the robot is will be hard. I’m afraid the camera can only keep you on the course and find colored objects; it cannot tell you where you are on the course. You’ll have to provide another sensor for that.

As for PWM: the OpenMV Cam can do 3 PWM channels. I plan to build a shield in the future with 3 PWM servo headers… but I haven’t gotten around to it. Anyway, PWM is timer-driven, so it doesn’t require any CPU time.
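For what it’s worth, a minimal servo sketch using MicroPython’s pyb.Servo class looks like this; which pins carry the servo channels (e.g. P7/P8/P9) depends on the board revision, so check your pinout:

```python
# Minimal sketch: driving a hobby servo from the OpenMV Cam with pyb.Servo.
# The servo-number-to-pin mapping is board-dependent; check your pinout.
import time
from pyb import Servo

s1 = Servo(1)           # first servo channel (P7 on some boards)

while True:
    s1.angle(-45)       # degrees from centre
    time.sleep_ms(1000)
    s1.angle(45)
    time.sleep_ms(1000)
```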

Thanks, Kwabena.
Yes, we want to stay inside the black areas, so I am wondering if we could get a version of “line following” happening where we pick up the white edges of the black areas and return this info to the locomotion controller.

As for deriving location, I want to play around with fiducial sensing from above, as the green square with the white square next to it (on the arena floor; see the image above) presents a uniquely oriented graphic from which we could determine position and orientation when viewed from above (albeit at a fairly skewed angle, as we will most likely mount the MV on a manipulator arm).

Using the “Single Color RGB565 Blob Tracking Example”, I am already printing the centroid of the target object and will return this data to our locomotion controller, so this is great, thanks.
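For anyone following along, here is a condensed version of that example; the LAB threshold is a placeholder to tune with the IDE’s Threshold Editor:

```python
# Condensed from the "Single Color RGB565 Blob Tracking Example" in the IDE.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

red_threshold = (30, 100, 15, 127, 15, 127)   # placeholder LAB threshold

while True:
    img = sensor.snapshot()
    for blob in img.find_blobs([red_threshold], pixels_threshold=200,
                               area_threshold=200, merge=True):
        img.draw_rectangle(blob.rect())
        img.draw_cross(blob.cx(), blob.cy())
        print(blob.cx(), blob.cy())   # centroid for the locomotion controller
```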

Hi, where’s the camera mounted? On the ground or in the air?

If it’s in the air, all these problems are easy. Note that you can use the rotation correction method to undo 3D rotation so changes in any direction are scaled correctly.
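For example, something along these lines (the tilt angle is a placeholder for however the camera ends up mounted, and rotation_corr() needs a reasonably recent firmware):

```python
# Sketch: undo the camera's downward tilt so the floor looks top-down.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

CAMERA_TILT_DEG = 45    # assumed mounting angle; tune on the robot

while True:
    img = sensor.snapshot()
    # Rotate about the x axis to flatten the perspective before measuring.
    img.rotation_corr(x_rotation=CAMERA_TILT_DEG)
```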

Um, anyway, assuming you know the robot’s position, just get stats on ROIs to the left front, right front, and middle front of the robot’s location. Then check what color values are returned and avoid the area or drive there as needed. Basically, the camera samples the colors immediately in front of the robot.
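Roughly like this (the ROI rectangles and lightness threshold are placeholders for a QVGA frame; tune them on the arena):

```python
# Sketch: sample the floor colour in three ROIs ahead of the robot.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

ROIS = {"left": (40, 180, 60, 40),      # (x, y, w, h) placeholders
        "middle": (130, 180, 60, 40),
        "right": (220, 180, 60, 40)}

while True:
    img = sensor.snapshot()
    for name, roi in ROIS.items():
        stats = img.get_statistics(roi=roi)
        # A low L (lightness) mean means mostly black floor; a high mean
        # means the white boundary is in view.
        on_black = stats.l_mean() < 30   # assumed threshold
        print(name, "black" if on_black else "white")
```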

So we’re planning on having the camera looking down from an arbitrary height at roughly a 45° angle (also arbitrary at this stage) so it can see the floor markings and target objects in front of it. We will probably also have pan/tilt or the like.

I like the ROI idea; this will be good for reactive “avoid” behaviour. For deliberative path finding (we need to get to the targets ASAP and return them to the green area), perhaps we can use the rotation/perspective correction, binarise the image to get a matrix of 0/1 representing the black and white floor markings, and then use a path-finding algorithm such as A* to find the fastest route.
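Something like this sketch is what I have in mind (the grid size, cell size, and threshold are all assumptions; it samples a grayscale QVGA frame into a coarse occupancy grid):

```python
# Sketch: turn a (perspective-corrected) view of the floor into a 0/1 grid.
import sensor

GRID_W, GRID_H = 40, 30       # assumed map size
CELL = 8                      # 320/40 = 240/30 = 8 pixels per cell

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

def build_map(img):
    # 1 = drivable black floor, 0 = white (off-limits).
    grid = [[0] * GRID_W for _ in range(GRID_H)]
    for gy in range(GRID_H):
        for gx in range(GRID_W):
            stats = img.get_statistics(roi=(gx * CELL, gy * CELL, CELL, CELL))
            grid[gy][gx] = 1 if stats.mean() < 30 else 0   # assumed threshold
    return grid

while True:
    world = build_map(sensor.snapshot())
```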

Yep, the OpenMV Cam can totally do that. I recommend preallocating a list for the A* and using it as a FIFO/LIFO for breadth-/depth-first search. Avoid recursion. Also, don’t make your map of the world equal to the image resolution: if you have a 320x240 image, make the map of the world 40x30.
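A sketch of that advice with breadth-first search (popping from the tail of the same list instead would make it depth-first):

```python
# Sketch: BFS over a 40x30 occupancy grid with one preallocated frontier
# list used as a FIFO, and no recursion.
GRID_W, GRID_H = 40, 30

def bfs_path(grid, start, goal):
    frontier = [None] * (GRID_W * GRID_H)   # preallocated once: 1200 cells max
    frontier[0] = start
    head, tail = 0, 1                       # FIFO indices into the fixed list
    parent = {start: None}
    while head < tail:
        cell = frontier[head]
        head += 1
        if cell == goal:
            break
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < GRID_W and 0 <= ny < GRID_H
                    and grid[ny][nx] == 1 and nxt not in parent):
                parent[nxt] = cell
                frontier[tail] = nxt
                tail += 1
    if goal not in parent:
        return None                         # no route through the black areas
    path, cell = [], goal
    while cell is not None:                 # walk parents back to the start
        path.append(cell)
        cell = parent[cell]
    path.reverse()
    return path
```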