Using OpenMV for robot localization

Hi,

I am new to the OpenMV Cam and have some questions.

Could the cam be reliably used for robot localization indoors, so that the robot knows where it is based on camera images? For example, could the cam take a picture, compare it to a stored picture, and work out which direction it is facing by identifying, say, a bookshelf, a television, a speaker, or a chair? Would variable lighting be a problem? (I use a Pixy cam at the moment to identify colored beacons, but it is very susceptible to changing lighting conditions.)

Another topic mentions that a new board with more GPIO will come to market. This could make an additional microcontroller obsolete, which would be great. Is there any news on when this board will become available? What specs can be expected, for example the number of GPIO pins, or maybe integrated Bluetooth or WiFi for wireless control?

Hi, we can’t do general-purpose SLAM on arbitrary images, but you can cheat using AprilTags. Please see our YouTube channel (the link is at the bottom of our website) and look for the AprilTag video. AprilTags are visual fiducials, which are like numbers the camera can read very reliably (magically good). By putting enough tags in the room that the camera can always see one, you can then know where you are by comparing the tags it sees against a record of which tags should be in which rooms.
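For illustration, that lookup can run entirely on the cam. Here is a minimal MicroPython sketch; the ROOM_OF_TAG table is a made-up example mapping you would replace with your own survey of the rooms:

```python
# Minimal sketch (MicroPython on the OpenMV Cam): detect AprilTags and look up
# which room the camera is in. The ROOM_OF_TAG table is a made-up example.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # AprilTag detection wants grayscale
sensor.set_framesize(sensor.QQVGA)       # small frames keep detection fast
sensor.skip_frames(time=2000)

ROOM_OF_TAG = {0: "living room", 1: "kitchen", 2: "hallway"}  # example only

while True:
    img = sensor.snapshot()
    for tag in img.find_apriltags():     # TAG36H11 family by default
        room = ROOM_OF_TAG.get(tag.id(), "unknown")
        print("tag %d -> %s" % (tag.id(), room))
```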

As for the next-gen OpenMV Cam: once ST releases the STM32H7, we’ll be able to double the RAM, match the MIPS of the Raspberry Pi Zero, and halve the current consumption. WiFi will likely be onboard too. However, it’s going to be more expensive than the current cam. That said, we’ll be able to sell the M7 cam for cheaper in the future, as we expect to increase our volumes and get more cost savings by the time the H7 hits the market.

Hi gents,

I’m going to describe an idea for simplified localization and navigation, and I’m interested in your opinion on its feasibility with OpenMV.

There is a rover equipped with an OpenMV camera which needs to navigate inside a rectangular area and cover its whole surface, as if it were dragging a brush and had to color-fill the rectangle.

There are AprilTags at known positions around the rectangular area. They are placed so that the rover can see a minimum of 3 AprilTags from every position, so that a triangulated fix can be calculated.

The coordinates of the 4 rectangle corners and of the AprilTags are stored on the OpenMV SD card.

In order for the rover to cover the whole rectangular area, it will move parallel to X until it reaches a border, then turn 180 degrees (by stopping one of the two wheels) and move parallel to X again until it reaches the border on the opposite side. A sketch of this sweep is below.
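To make the sweep concrete, here is a rough sketch of the control loop. get_position(), drive_forward() and turn_180() are hypothetical placeholders for the AprilTag localization and the motor control, and the border values are examples:

```python
# Sketch of the back-and-forth sweep. get_position(), drive_forward() and
# turn_180() are hypothetical placeholders (replace with your own code).

X_MIN, X_MAX = 0.0, 5.0       # rectangle borders along X, in meters (example)
MARGIN = 0.1                  # start the turn this far before the border

def get_position():           # placeholder: return trilaterated (x, y)
    return (0.0, 0.0)

def drive_forward():          # placeholder: drive both wheels forward
    pass

def turn_180():               # placeholder: stop one wheel to pivot in place
    pass

heading = 1                   # 1: moving toward X_MAX, -1: toward X_MIN
while True:
    x, y = get_position()
    at_border = (heading > 0 and x >= X_MAX - MARGIN) or \
                (heading < 0 and x <= X_MIN + MARGIN)
    if at_border:
        turn_180()            # reverse direction at the border
        heading = -heading
    else:
        drive_forward()
```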

The AprilTags are used to localize the rover inside the rectangular area so that the planned path can be followed.

Am I asking too much computation of the OpenMV?

Not at all. We can detect an unlimited number of AprilTags per frame, and our methods give you a value for your distance from each tag. However, the camera can’t really determine the Z distance with that much accuracy. X and Y are much easier to be precise about, since they map directly to pixels.
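For reference, the per-tag pose values can be read like this. The fx/fy/cx/cy lens parameters below are assumed example values for a 160x120 frame (compute your own from your lens focal length and sensor size), and the translations come back in uncalibrated units that scale with the printed tag size:

```python
# Sketch: read the 6DoF pose returned per tag. FX/FY/CX/CY are example lens
# parameters in pixels for QQVGA; compute your own from your lens and sensor.
import sensor, math

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QQVGA)
sensor.skip_frames(time=2000)

FX, FY = 112.5, 113.8    # example focal lengths in pixels (assumed values)
CX, CY = 80.0, 60.0      # principal point: frame center for QQVGA

while True:
    img = sensor.snapshot()
    for tag in img.find_apriltags(fx=FX, fy=FY, cx=CX, cy=CY):
        d = math.sqrt(tag.x_translation() ** 2 +
                      tag.y_translation() ** 2 +
                      tag.z_translation() ** 2)
        print("tag %d: distance %.2f (uncalibrated units)" % (tag.id(), d))
```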

Anyway, as for the math to turn the tag detections into a position: we don’t supply that. However, it shouldn’t be that hard given the 6DoF values we supply for each tag. Computing sqrt(x^2 + y^2 + z^2) will give you your distance to each tag, and then you can use a signal-strength-style localization algorithm (trilateration) to find your location.
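A minimal sketch of that trilateration step, assuming you already have calibrated distances d_i to at least three tags at known positions (x_i, y_i): subtracting one range equation from the others linearizes the problem, which a small least-squares solve handles in pure Python on the cam:

```python
# Sketch of "signal-strength style" 2D trilateration: given distances d_i to
# tags at known positions (x_i, y_i), subtracting the first range equation
# (x - x_i)^2 + (y - y_i)^2 = d_i^2 from the others gives a linear system
# in (x, y). Pure Python so it runs on the cam; needs at least 3 tags.
def trilaterate(tags):
    # tags: list of (x_i, y_i, d_i) tuples, len(tags) >= 3
    x0, y0, d0 = tags[0]
    A, b = [], []
    for (xi, yi, di) in tags[1:]:
        A.append((2.0 * (xi - x0), 2.0 * (yi - y0)))
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Least-squares solve of A [x, y]^T = b via the 2x2 normal equations.
    s11 = sum(a[0] * a[0] for a in A)
    s12 = sum(a[0] * a[1] for a in A)
    s22 = sum(a[1] * a[1] for a in A)
    t1 = sum(a[0] * bi for a, bi in zip(A, b))
    t2 = sum(a[1] * bi for a, bi in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det,
            (s11 * t2 - s12 * t1) / det)

# Example: tags at three corners of a 4x3 m area, rover actually at (1, 1).
print(trilaterate([(0.0, 0.0, 1.4142), (4.0, 0.0, 3.1623), (0.0, 3.0, 2.2361)]))
```

With exactly three tags this reduces to a plain 2x2 solve; extra tags just add rows and help average out range noise.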