Using the Raspberry Pi Build HAT and producing an .xml file

Hi all,
I have a Raspberry Pi 4 and a Raspberry Pi Build HAT. I want to detect a specific object and grip it using LEGO motors, which I drive with the buildhat Python library (‘from buildhat import Motor’). If I use an OpenMV Cam instead of a Raspberry Pi camera, how can I do this? Additionally, can I produce an .xml file for a specific object using the OpenMV IDE?
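
For reference, this is the kind of motor control I mean; a minimal sketch, assuming a gripper motor on Build HAT port A (the port letter, speed, and timing are just placeholders):

```python
from buildhat import Motor

# Assumes a LEGO motor wired to port A of the Build HAT.
gripper = Motor('A')

# Close the gripper for one second, then open it again.
gripper.run_for_seconds(1, speed=50)
gripper.run_for_seconds(1, speed=-50)
```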

Hi,

I’m not sure how to answer your question. We don’t offer specific help for Raspberry Pi-related problems.

As for the OpenMV Cam: it’s not a webcam. Using it as a drop-in replacement for the Pi Camera defeats its whole purpose, which is to run the vision processing onboard rather than on the Pi.

What are you trying to do?

Firstly, thank you for your help. I am trying to grasp a specific object using Raspberry Pi-controlled LEGO motors and LEGO pieces; the robot’s job is to approach the object and grab it. However, I am getting low accuracy and inconsistent object detection with OpenCV and the Raspberry Pi camera. As a solution, I plan to use the OpenMV Cam alongside my Raspberry Pi-controlled LEGO motors; a rough sketch of the setup I have in mind is below. My goal is to obtain a well-trained .xml file that contains data for my target object, and to control my robot with that data. Note that I do not want to give up on the Raspberry Pi-controlled LEGO motors.
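
The way I imagine combining the two (an assumption on my part, not something fixed yet) is to have the OpenMV Cam print detections over its USB serial port and have the Pi read them and drive the motors. The port name /dev/ttyACM0 and the “label,x,y” message format below are placeholders:

```python
# Pi-side sketch: read detection messages printed by the OpenMV Cam
# over USB serial and close the gripper when the target is seen.
import serial
from buildhat import Motor

cam = serial.Serial('/dev/ttyACM0', 115200, timeout=1)
gripper = Motor('A')

while True:
    line = cam.readline().decode('utf-8', 'ignore').strip()
    if not line:
        continue
    label, x, y = line.split(',')  # assumed "label,x,y" format
    if label == 'target':
        gripper.run_for_seconds(1, speed=50)  # grab the object
        break
```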

Okay, what’s the XML file for? Are you training a Haar cascade?

With the OpenMV Cam we recommend training a FOMO CNN using Edge Impulse, which can do multi-object detection.

Thank you so much for your prompt response and clarification.
(Correct: the .xml file is the trained Haar cascade.)
First: as far as I understand, I can create a .tflite file and upload it to the OpenMV Cam.

Second: can the OpenMV Cam generate the .tflite file for me automatically, i.e. train on the object in real time while I show it to the camera?

Third: is there a tutorial you can recommend for training a FOMO CNN with Edge Impulse?

Finally: as I understand it, what I really need is a well-trained .tflite file. Is it possible to include the OpenMV Cam in the process of producing that .tflite file?

Thank you very much for your time.

Hi, this is explained by Edge Impulse: OpenMV - Edge Impulse Documentation

The OpenMV Cam can help you take pictures. Once you have those, you upload them to Edge Impulse, which trains the CNN. Then you download the trained model, load it onto the camera, and it runs in real time.
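
Roughly, the deployed model then runs on the camera like this; a trimmed sketch along the lines of the script Edge Impulse generates for OpenMV (file names, thresholds, and the tf module details vary with firmware version; newer firmware uses an ml module instead):

```python
# OpenMV-side sketch: run a FOMO model exported from Edge Impulse.
import sensor, time, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))  # FOMO models use square inputs
sensor.skip_frames(time=2000)

net = tf.load("trained.tflite", load_to_fb=True)
labels = [line.rstrip('\n') for line in open("labels.txt")]

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # detect() returns one list of detections per class label.
    for i, detection_list in enumerate(net.detect(img, thresholds=[(128, 255)])):
        if i == 0:  # index 0 is the background class in FOMO
            continue
        for d in detection_list:
            x, y, w, h = d.rect()
            print(labels[i], x + w // 2, y + h // 2)  # class and object centre
    print(clock.fps())
```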

I got it :slight_smile: