OpenMV M7 with Raspberry Pi and OpenCV Feasibility

Hi, I’m fairly new to using the OpenMV M7 camera. The objective for my project is to identify a moving image, which may appear in multiple orientations, and send the image/location data back to a Raspberry Pi.

Now, I’ve played around a little with the OpenMV camera and managed to get it to detect a pre-saved image and track it using the template matching function. However, the resolution is poor, which I realize is due to the processing constraints on the M7. Image recognition via this method is fairly unreliable for the scope of this project, and I’d also like a color image feed.

So now I’m thinking about using OpenCV on the Raspberry Pi to do the image processing, using images received from the OpenMV camera. My question is whether this would be feasible with the M7.

If this is possible, I’d also like to do some servo control: if the target image is identified away from the center of the camera’s view, the camera could be re-orientated by a servo apparatus so that the target sits directly in the center of the frame.
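The centering logic described above amounts to a simple proportional controller: measure how far the target centroid is from the frame center, then nudge the servo angle to reduce that offset. A minimal sketch in plain Python (the function names, the gain, and the 160x120 QQVGA frame size are illustrative assumptions, not OpenMV or Pi API calls):

```python
def center_error(cx, cy, frame_w=160, frame_h=120):
    """Normalized offset of the target centroid from the frame center.

    Returns (ex, ey), each in roughly [-1, 1]; (0, 0) means centered.
    """
    return ((cx - frame_w / 2) / (frame_w / 2),
            (cy - frame_h / 2) / (frame_h / 2))

def servo_update(angle, error, gain=10.0, lo=0.0, hi=180.0):
    """One proportional step: move the servo against the error, clamped
    to the servo's mechanical range in degrees."""
    return max(lo, min(hi, angle - gain * error))

# Example: target detected right of center at (120, 60) in a 160x120 frame.
pan = 90.0
ex, ey = center_error(120, 60)      # ex = 0.5, ey = 0.0
pan = servo_update(pan, ex)         # pan moves from 90.0 to 85.0
```

In practice you would run this once per detected frame and keep the gain small to avoid overshoot; the actual pulse output would go through the OpenMV servo API or the Pi's PWM, whichever side drives the servos.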

Some guidance on where to start with getting the camera to pass a video feed to the Raspberry Pi would be much appreciated, as would any other pointers you might have.


There’s no point in using the OpenMV Cam as a webcam for the Pi. It’s not really good at that. Better to buy a camera module for the Pi.

That said, the Pi is not orders of magnitude faster than the OpenMV Cam at image processing. If you’re finding template matching too slow or unreliable on the camera, you’re not going to get the speed-up you want on the Pi.

What’s your application exactly? Maybe there’s a better algorithm to use?

Thanks for the reply.

The application is image recognition for a drone. The drone needs to identify a predefined image as its target from any direction, then fly towards it. I used the template matching example in the OpenMV IDE with a 32x32 pixel template (the target image). This worked to some extent, but my issues are the low-quality, greyscale-only video, and that getting a reliable lock on the target (printed on a piece of A4) varied with the orientation, distance, and movement of the paper. If the camera were in a fixed position I could see this feature being much more useful. Speed isn’t my largest concern; I’d just like a reliable lock on the target with a high-quality video feed. I’d also like to use a larger template image, and if possible do color recognition on it (i.e., look for the color blue in the target image).

Perhaps it would be better to use the Raspberry Pi camera with the Pi. I’ve considered this, but since the M7 is still a neat bit of kit I thought I’d attempt to use it first.

Have you considered using AprilTags for tracking? They’re QR-code-like markers that we can track at about 10 FPS out to 8 ft away. AprilTags are rotation, scale, and skew invariant, so they sound like more of what you need. Each AprilTag encodes a number from 0 to 511.

Template matching is just for matching images in very particular scenes. Additionally, template matching on the Pi will take multiple seconds per image. The algorithm in general is only fast on low-resolution images, so if you want a high-quality image it won’t really do…
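A rough back-of-the-envelope calculation shows why resolution dominates the cost here. A naive correlation-style template search does on the order of one multiply-accumulate per template pixel at every candidate position, so the work scales with both the search-image and template areas. A sketch (the op counts assume this naive single-scale search with a 32x32 template and ignore pyramids and other optimizations real implementations use):

```python
def template_search_ops(W, H, w, h):
    """Approximate multiply-accumulates for one naive template search:
    every valid (x, y) placement of a w x h template in a W x H image
    costs roughly w * h operations."""
    return (W - w + 1) * (H - h + 1) * w * h

qqvga = template_search_ops(160, 120, 32, 32)  # QQVGA, OpenMV-friendly
vga = template_search_ops(640, 480, 32, 32)    # VGA, "high quality"

print(qqvga)            # about 11.8 million ops per frame
print(vga)              # about 280 million ops per frame
print(vga / qqvga)      # roughly a 24x jump just from resolution
```

That factor of ~24 per frame, before any color channels or larger templates, is why the algorithm stays practical only at low resolutions on either the M7 or the Pi.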