Using Edge Impulse Classifier to Drive a Create2 Robot

Hi everyone,
I’ve done my best to search the forums, but I want to make sure I’m taking the smartest approach to solving my problem.

What I need to do is capture images with the OpenMV camera, classify them, and then use that classification in a Python 3 program so that I can send a command to the robot.

Currently I need help with these two things:

1. My unquantized model performs great, but my quantized model is not going to do well enough to succeed. If I can’t improve the quantized model’s performance, is there a way to take the data from the OpenMV Cam and run the float32 Edge Impulse model in Python on, say, a Raspberry Pi or a Windows machine?
2. Say I run the model on the OpenMV and it’s classifying. How can I take those results and use them in another .py program? Or, with an SD card, am I able to install other Python libraries on the OpenMV?

I think you can run a float32 model on the OpenMV Cam. We used to support float tensors, but that may no longer be the case, since we now use the same TensorFlow commit that Edge Impulse uses and they may have removed floating-point support.

As for sending data to the PC: just print the results on the camera, then open a serial port to the camera from the PC and receive them in a program of your own.
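To make that concrete, here is a minimal sketch of the PC side. It assumes the OpenMV script prints one classification per line in a simple `label:score` format; that line format, the `pyserial` dependency, the port name, and the label-to-command mapping are all illustrative choices of mine, not anything prescribed by OpenMV or Edge Impulse.

```python
# PC-side sketch: read classification lines from the OpenMV Cam over serial
# and turn them into robot commands. Assumes the OpenMV script does e.g.
#   print("%s:%f" % (label, score))
# once per frame.

CONFIDENCE_THRESHOLD = 0.8

# Hypothetical mapping from class labels to Create 2 actions.
COMMANDS = {
    "left": "turn_left",
    "right": "turn_right",
    "stop": "stop",
}

def parse_result(line):
    """Parse a 'label:score' line into (label, score), or None if malformed."""
    label, sep, score = line.strip().partition(":")
    if not sep:
        return None
    try:
        return label, float(score)
    except ValueError:
        return None

def command_for(line):
    """Return the robot command for a classification line, or None."""
    parsed = parse_result(line)
    if parsed is None:
        return None
    label, score = parsed
    if score < CONFIDENCE_THRESHOLD:
        return None
    return COMMANDS.get(label)

if __name__ == "__main__":
    import serial  # pyserial: pip install pyserial

    # The port name varies: e.g. "COM3" on Windows, "/dev/ttyACM0" on a Pi.
    with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
        while True:
            cmd = command_for(port.readline().decode("utf-8", "ignore"))
            if cmd is not None:
                print("robot command:", cmd)  # send it to the Create 2 here
```

Keeping the parsing and the confidence threshold in small functions makes them easy to test without the camera attached; the serial loop itself only runs when the script is executed directly.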

Hi Kwagyeman,
Thank you so much.
If I could trouble you with another question: I can’t seem to get the camera to focus more than a couple of inches away. If I could get it to focus 1-2 feet away, that would be ideal.

You need to change the focus of the lens if you can.