Find Corners/Lines of an Edge Impulse Trained Object

If I've trained an object to be recognized by the camera using the Edge Impulse OpenMV library, and am able to achieve object recognition with a very high confidence rating,

I'm curious if you know of a way to get more data from the image, like the corners or edges of the detected object.


[context]:
building a robot that picks up an item on a conveyor belt. The item is see-through, and as a result standalone techniques like find_rect and find_corners don't work very well. So I bought the H7 Plus (because the H7 wasn't powerful enough for running the Edge Impulse pipeline) and am able to detect the existence of the object. However, I would like to pick up more information, such as the object's orientation (rotation) as well as its x,y position. I'm thinking there may be a way to process the image such that I can do some geometry on the corners or edges to find the angle (theta) and x,y position (p) of the object.
[end context]

Thanks a bunch, y'all are very responsive, and of course, happy new year!

Yes, you can use multiple networks to do what you want. Once you've locked on to the location of the object, and you have a training dataset where the object is always in the same place, you can then train a classifier to recognize the rotation of the object, and so on.
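As a rough sketch of that rotation stage, here's what such a classifier could look like on the camera, using the `tf` module that the Edge Impulse OpenMV export is built on. The model file, label file, and angle bins below are placeholders for whatever you actually train:

```python
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# Hypothetical classifier trained in Edge Impulse with the object always
# centered in the frame, labeled by angle bin: "0", "45", "90", "135".
ROT_NET = "rotation.tflite"
ROT_LABELS = [line.rstrip('\n') for line in open("rotation_labels.txt")]

def estimate_theta(img):
    # One whole-frame classification pass; return the best angle bin.
    for obj in tf.classify(ROT_NET, img):
        scores = list(zip(ROT_LABELS, obj.output()))
        label, conf = max(scores, key=lambda s: s[1])
        return int(label), conf

theta, conf = estimate_theta(sensor.snapshot())
print("theta = %d deg (confidence %.2f)" % (theta, conf))
```

Finer angle resolution just means more bins and more training images per bin.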

E.g., it sounds like you can find the object in an image. Okay, then make a second network that tells you the x/y position of the object. Or, better yet, whether the object is left of center, right of center, above it, or below it. That is easy for a network to encode (see the loop sketched below).

The robot will then use the first network to find the object, the second network to center on the object, and then, once that is done, a third network to get the rotation.
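Chained together, the three stages could run as a simple state machine on the camera. Everything here is a placeholder sketch: the model/label names are made up, the confidence threshold is arbitrary, and the motor-control steps are left as comments for your own code:

```python
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

def top_label(net, labels, img):
    # Run one classifier pass and return its most confident (label, score).
    for obj in tf.classify(net, img):
        scores = list(zip(labels, obj.output()))
        return max(scores, key=lambda s: s[1])

state = "FIND"
while True:
    img = sensor.snapshot()
    if state == "FIND":
        label, conf = top_label("detect.tflite", ["object", "background"], img)
        if label == "object" and conf > 0.8:
            state = "CENTER"
    elif state == "CENTER":
        move, conf = top_label("center.tflite",
                               ["left", "right", "above", "below", "centered"], img)
        if move == "centered":
            state = "ROTATE"
        # else: step the arm/belt one increment in the `move` direction
    elif state == "ROTATE":
        angle, conf = top_label("rotation.tflite", ["0", "45", "90", "135"], img)
        # rotate the gripper to `angle` degrees and pick the part up
        state = "FIND"
```

Because each stage is a plain classifier, each model stays small enough to run on the H7 Plus, and you only pay for one inference per frame.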

Yes, it's possible to make a network that does all of this in one shot, but Edge Impulse doesn't support that yet. However, they may support new network types in the future, which will make this whole process easier. As of right now, you have to break the process up.