Motion tracking

I’m totally new to OpenMV.
I want to track the back-and-forth motion of the tip of a small rod. The motion would be similar to the tip of a pool cue stick: extending and retracting into the field of view, with very little side-to-side motion. From the examples I have seen, it is certainly doable at some level.

My questions relate to what options are available to maximize accuracy and update rate at the expense of color, complexity, and the area examined. I can optimize the lighting, the background, and most anything else that will help.

What options are there for simplifying the image to grayscale, or even black and white?
Is it possible to look at only a small strip of the overall field, ignoring areas where the rod should never be?
I saw examples of several types of feature-identifying algorithms. Which is likely to provide the best performance? Is there a function to just look at a rectangle along the centerline of the rod’s path and count dark pixels (meaning rod) versus light pixels (meaning backlit background)? The number of dark pixels would then be proportional to how far the tip of the rod extends into the field of view.
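The counting approach described above is simple enough to sketch in plain Python. This is just an illustration of the idea on a list of pixel values, not OpenMV API code; the function name and threshold value are made up for the example.

```python
def rod_extent(strip, threshold=60):
    """Count dark pixels in a grayscale strip along the rod's path.

    strip: list of 0-255 grayscale values.
    Returns how many pixels are darker than `threshold`, which is
    roughly proportional to how far the rod's tip extends into view,
    assuming a dark rod against a bright backlit background.
    """
    return sum(1 for p in strip if p < threshold)

# Bright backlit background (~200) with the rod covering the first 12 pixels (~20):
strip = [20] * 12 + [200] * 28
print(rod_extent(strip))  # 12 -> the tip is about 12 pixels into the frame
```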

Any hints of where to look?
Am I even looking at this the right way?
Thanks In Advance

Hi, with the OpenMV Cam H7 and the global shutter sensor you’ll be able to get a very high FPS in grayscale. The lower the resolution you can accept, the higher the FPS you can reach.

Anyway, if you are looking at the object head-on, that will work poorly, since you can’t determine depth from a 2D image. Apparent size falls off as 1/r, so when something is close you get a roughly linear distance response, but the response becomes increasingly non-linear the farther the object moves away.
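The 1/r point can be made concrete with the pinhole camera model: apparent size in pixels scales inversely with distance, so equal steps in depth produce smaller and smaller pixel changes as the object recedes. A minimal sketch (the focal length in pixels is an illustrative number, not a measured one):

```python
def apparent_size_px(real_size_m, focal_px, distance_m):
    # Pinhole projection: image size = real size * focal length / distance.
    return real_size_m * focal_px / distance_m

# Moving a 2 cm rod tip from 1 m to 2 m halves its apparent size,
# but at 10 m the same tip spans only a single pixel:
print(apparent_size_px(0.02, 500, 1.0))   # 10.0 px
print(apparent_size_px(0.02, 500, 2.0))   # 5.0 px
print(apparent_size_px(0.02, 500, 10.0))  # 1.0 px
```

This is why a head-on view gives poor depth resolution at range, while a side view keeps the motion linear in pixels.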

So, instead, use find_blobs() and look at the object from the side. The algorithm will tell you the centroid of the blob along with its top and bottom pixels. It’s very fast: you should be able to get over 50 FPS trivially, and much higher at a lower resolution.
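To make the output of find_blobs() concrete, here is the same bookkeeping sketched in plain Python on a binary mask: a bounding box’s top and bottom rows plus the centroid. This illustrates what the algorithm reports per blob, not how OpenMV implements it.

```python
def blob_stats(mask):
    """mask: 2D list of 0/1 values (1 = rod pixel).
    Returns (cx, cy, top_row, bottom_row) for the single blob,
    the same quantities find_blobs() reports per blob."""
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                xs.append(x)
                ys.append(y)
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    return cx, cy, min(ys), max(ys)

# A 3-pixel-wide rod occupying rows 2..5 of a tiny 8x8 mask:
mask = [[0] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(3, 6):
        mask[y][x] = 1
print(blob_stats(mask))  # (4.0, 3.5, 2, 5)
```

For a side view of the rod, tracking the centroid’s x (or the leading edge of the bounding box) gives you the extension distance directly.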

With the H7 at a low resolution you can hit 200 FPS. By low res I mean something like 80x60 or 40x30. If you need more vertical height, you can select an ROI within a given resolution to skip processing the parts of the image you don’t care about.

That said, lowering the resolution on the global shutter camera makes it read out faster; setting an ROI on the image coming in doesn’t. So, you have to choose the lowest resolution you are happy with, and then set an ROI on the area of interest at that resolution to get the highest speed.
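On the camera itself, that advice translates to a short setup: pick the lowest framesize you can live with, then restrict processing with an roi passed to find_blobs(). A configuration sketch for the device (this runs on the OpenMV board, not a desktop interpreter; the ROI and threshold values are guesses you would tune for your lighting):

```python
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # grayscale is all you need for a backlit rod
sensor.set_framesize(sensor.QQVGA)       # 160x120; go lower for more FPS
sensor.skip_frames(time=2000)            # let the sensor settle

clock = time.clock()
ROD_ROI = (0, 50, 160, 20)               # horizontal strip along the rod's path (tune this)
DARK = [(0, 60)]                         # grayscale 0-60 counts as "rod" (tune this)

while True:
    clock.tick()
    img = sensor.snapshot()
    blobs = img.find_blobs(DARK, roi=ROD_ROI, pixels_threshold=4)
    if blobs:
        b = max(blobs, key=lambda b: b.pixels())
        print(b.cx(), b.cy(), clock.fps())  # rod centroid plus current FPS
```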


I would use a side view.
I would like to better understand why a smaller ROI does not increase the FPS.
Is it a matter of the light gathering of the pixels?
Is it related to the electronics of how the camera pixels are read out, or to how the pixels are digitized and stored as an image in software?

From what little I have read about digital cameras, it seems that the pixels are arranged in a two-dimensional array, like [r, c]. To read an image, you digitize and store pixel [0,0], then [0,1], and so on until you finish that row, then start on the second row with pixel [1,0], then [1,1], and on. Is it not electrically possible to only digitize and store, say, rows 300 to 340? If there is no provision to start digitizing at a selected row, would it be possible to clock through the rows outside the ROI more quickly by not taking the time to digitize and store their pixels?


Reducing the ROI increases the speed of the algorithm, but not the rate at which the camera generates images. So, if you’ve made the amount of work very small, the processor will just be waiting around for the next frame from the camera with nothing to do.
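Put numerically: the loop rate is capped by whichever is slower, the camera’s frame rate or your per-frame processing. A tiny sketch (the timing numbers are illustrative):

```python
def effective_fps(camera_fps, processing_time_s):
    # You can never loop faster than frames arrive from the sensor,
    # nor faster than you can process one frame.
    return min(camera_fps, 1.0 / processing_time_s)

# Shrinking the ROI cuts processing from 20 ms to 2 ms per frame,
# but the camera still delivers 90 FPS either way:
print(effective_fps(90, 0.020))  # 50.0 -> processing-bound; ROI helps here
print(effective_fps(90, 0.002))  # 90   -> camera-bound; a smaller ROI can't help further
```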

For the M7 camera with the OV7725, the max frame rate of that camera is 90 FPS or so. But the global shutter camera on the H7 gains 2x to 4x in readout speed as the resolution is reduced.