Object orientation estimation


I want to track the orientation of an object equipped with 3 or 4 IR LEDs.
I imagine that I need to extract blobs and then process/compare the coordinates against a model.

I found an algorithm that could be a solution: POSIT (Pose from Orthography and Scaling, with ITerations).


Has anybody implemented this algorithm on an OpenMV Cam?


Hi, the find_blobs() method outputs the orientation of each blob if it is elongated. If you are trying to find the rotation of 4 blobs around a center point, then you need to make sure you can identify which point each blob corresponds to if you want more than 90 degrees of knowledge about how the object is rotated.

Anyway, I can give you the code to turn 4 points into an orientation. That said, can you determine which of the 4 points is which? Otherwise, I can only tell you whether the object is rotated between 0 and 90 degrees relative to the OpenMV Cam.


The object can't turn more than ±70° horizontally, ±45° in roll, and ±45°.

View of the object from the top :

View of the object and the camera from the side :

The rotation can be calculated around the rear center point.

What do you think ?

This is doable. We added a new method to the OpenMV Cam firmware that fixes perspective issues like that so the object looks flat. Afterwards, spinning the object happens in one dimension, which makes it easy to work out the orientation of the object from the points.

I’ll work on some code tonight.

Okay, so, you’ll need a build of the latest firmware to undo the 3D rotation. Ibrahim just updated the firmware image here: https://github.com/openmv/openmv/tree/master/firmware/OPENMV3.

Use OpenMV IDE to load the firmware.bin file on the OpenMV Cam M7. Then you can undo the 3D rotation using this script: https://github.com/openmv/openmv/blob/master/usr/examples/04-Image-Filters/rotation_correction.py

The point of the perspective correction is to make the points on the image appear flat so the angle at which the camera is mounted doesn’t matter. You need to manually test to find the right values to pass to the rotation_corr() method to get the image to be flat.
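To see why a fixed derotation angle is enough, here is a pure-Python illustration (not OpenMV code; the focal length and camera height are made-up numbers) of the keystone effect: a pitched pinhole camera makes parallel lines on a flat table converge, and a camera with zero pitch — which is what rotation_corr() simulates — keeps them parallel.

```python
import math

def project(xy, pitch_deg, f=100.0, cam_height=50.0):
    """Perspective-project a world point (x, y) lying on a flat table through a
    pinhole camera pitched down by pitch_deg. Purely illustrative numbers."""
    x, y = xy
    t = math.radians(pitch_deg)
    # Rotate the table point into the camera frame, then offset by the camera height.
    yc = y * math.cos(t)
    zc = y * math.sin(t) + cam_height
    return (f * x / zc, f * yc / zc)

# Two parallel lines on the table, x = -10 and x = +10, sampled near and far.
near_left,  near_right = project((-10, 10), 30), project((10, 10), 30)
far_left,   far_right  = project((-10, 40), 30), project((10, 40), 30)

spacing_near = near_right[0] - near_left[0]
spacing_far  = far_right[0]  - far_left[0]
print(spacing_near, spacing_far)  # the far spacing is smaller: keystone distortion
```

With pitch_deg=0 the two spacings come out identical, which is the "lines look straight" condition you are tuning rotation_corr() towards.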

Finally, to compute the orientation of the object, you need to call find_blobs() to get the position of each LED. Once you have those positions, the centroids… .cx(), .cy(), then you can use a robust linear regression to find the orientation. Here’s the code:

import math

blobs = img.find_blobs(...)
slopes = []

for i in range(len(blobs)):
    for j in range(i + 1, len(blobs)):  # i + 1 skips the i == j pair, which would give atan2(0, 0)
        my = blobs[i].cy() - blobs[j].cy()
        mx = blobs[i].cx() - blobs[j].cx()
        slopes.append(math.atan2(my, mx))

slopes.sort()
median = slopes[len(slopes) // 2]  # integer division; "/" would produce a float index
rotation = math.degrees(median)
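Here is a quick pure-Python check of the median-of-pairwise-angles idea. The LED coordinates are made up for illustration; on the camera you would feed in the blob .cx()/.cy() values instead:

```python
import math

def median_pairwise_angle(points):
    """Median of the pairwise angles (in degrees) between every pair of points,
    taking plain (x, y) tuples instead of OpenMV blob objects."""
    slopes = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):  # skip i == j, which would give atan2(0, 0)
            my = points[i][1] - points[j][1]
            mx = points[i][0] - points[j][0]
            slopes.append(math.atan2(my, mx))
    slopes.sort()
    return math.degrees(slopes[len(slopes) // 2])

# Four LEDs roughly along a line, then the same LEDs rotated 30 degrees.
leds = [(30, 0), (20, -1), (10, 1), (0, 0)]
theta = math.radians(30)
rotated = [(x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta)) for (x, y) in leds]
print(median_pairwise_angle(rotated))  # approximately 30.0
```

One caveat: atan2 returns values in (-180°, 180°], so the median can jump when pairwise angles straddle the wrap point; the trick behaves best when the measured angles stay away from ±180°.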

Ok, thanks very much :slight_smile:
I’m going to try this after buying the camera :wink:


I’m testing the solution you gave me.
Regarding the use of rotation_corr(): if I understood you correctly, I have to test all rotations to find the right angles.
The problem is that this solution is far too slow for my application. I need a refresh rate of at least 5 Hz.

Is there any other algorithm to try?


Um, no, the rotation correction is fast. It just takes an angle to derotate by, which you must deduce first. You can get above 30 FPS when doing rotation correction.

So, I’m saying you just have to try out different derotate values by hand until you find one that works for you. Then you just derotate by that.


One image of the object to track with the blobs :

It’s easy to find out the front, the rear, the left, and the right blob.

Since the object can move all the time, I don’t understand the part about manually derotating.

How much rotation information do you need? The code above gives the Z rotation, i.e. the rotation of the device on a flat surface. Getting the X and Y rotation is a lot harder… Do you need those?

As for the rotation_corr() method, that undoes the OpenMV Cam’s fixed mounting angle. So, if the camera is mounted on a stick or whatnot looking downward, that method allows you to re-project the image so that all lines look straight. You just have to try different X rotation values until lines on a flat table look straight.
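To make the hand-tuning less eyeball-driven, one option (my suggestion, not an official OpenMV utility) is to score each candidate X rotation value by how collinear a known-straight row of points comes out after rotation_corr(). A simple score is the maximum deviation of the points from their least-squares line:

```python
def straightness_error(points):
    """Max vertical deviation of (x, y) points from their least-squares line.
    Zero means perfectly collinear; smaller is straighter, so you can sweep
    candidate derotation angles and keep the one with the lowest score."""
    n = len(points)
    mean_x = sum(p[0] for p in points) / n
    mean_y = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mean_x) ** 2 for p in points)
    sxy = sum((p[0] - mean_x) * (p[1] - mean_y) for p in points)
    slope = sxy / sxx
    return max(abs(p[1] - (mean_y + slope * (p[0] - mean_x))) for p in points)

print(straightness_error([(0, 0), (1, 1), (2, 2)]))  # collinear: 0.0
print(straightness_error([(0, 0), (1, 2), (2, 0)]))  # bent: clearly above 0
```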

I need orientation on 3 axis.

Okay, so that means the camera may move around with respect to the device. I thought the camera was mounted on a fixed plane looking down at an angle and the device was on a fixed plane too.

I only know how to solve for the rotation given the situation above. If the camera is allowed to move around then that means all the dots won’t be in the same plane anymore.

Mmm, well, you might be able to solve for the X and Y rotation by looking at how far the points are off center in the x and y directions, using the same trick I did above.

I.e. compute the centroid of all the points: the sum of all the x positions divided by 4, and the same for the y positions. Then, for the Y rotation, take the max delta x from the centroid, and do the opposite for the X rotation. Those delta values will shrink as the object rotates more.
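A rough sketch of that idea in plain Python. The helper names are mine, the face-on width flat_dx has to be measured in advance, and the model assumes foreshortening scales the x profile by the cosine of the rotation, which only recovers the magnitude of the angle, not its sign:

```python
import math

def profile_deltas(points):
    """Centroid of the (x, y) points plus the max |dx| and |dy| from that centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    dx = max(abs(p[0] - cx) for p in points)
    dy = max(abs(p[1] - cy) for p in points)
    return cx, cy, dx, dy

def estimate_y_rotation(points, flat_dx):
    """Rough Y-axis rotation estimate: the x profile shrinks by roughly
    cos(angle) relative to the face-on width flat_dx (sign is ambiguous)."""
    _, _, dx, _ = profile_deltas(points)
    return math.degrees(math.acos(min(1.0, dx / flat_dx)))

# Face-on, the 4 LEDs span +/-10 in x; rotated 45 deg about Y, x shrinks by cos(45).
flat = [(-10, -10), (10, -10), (10, 10), (-10, 10)]
c = math.cos(math.radians(45))
turned = [(x * c, y) for (x, y) in flat]
print(estimate_y_rotation(turned, flat_dx=10.0))  # approximately 45.0
```

Swapping the roles of dx and dy gives the corresponding X-rotation estimate.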

Anyway, the above doesn’t give you an exact rotation, but it will tell you whether the object’s profile appears smaller than it should be.