Custom net with TensorFlow

Hi!
I’m currently working with ML models for street surveillance inference. I tried using FOMO with Edge Impulse, but it has some constraints I’d like to avoid. I’m interested in using state-of-the-art networks like YOLO and noticed there’s progress in adding support for these models. Is there an experimental firmware version available that I could use?

Additionally, I’m developing a custom network from scratch and would like to know whether only the tensor operations/layers listed on the GitHub page are available, or if more will be enabled soon. I’m working with the H7 Plus and the RT1062.

Thanks!

YOLO support: modules/py_tf: Add support for yolov3, yolov5 and yolov7. by kwagyeman · Pull Request #2134 · openmv/openmv · GitHub

Layers/Tensor operations: tensorflow-lib/libtf.cc at master · openmv/tensorflow-lib · GitHub

Hi, the RT1062 enables all operators. The H7 Plus has a reduced set to save Flash.
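If you’re not sure whether your custom network fits within the H7 Plus’s reduced operator set, the quickest check is to just try loading it on the camera and catch the failure. A minimal sketch using the `tf` module (the model filename is a placeholder; copy the model to the camera’s flash or SD card first):

```python
# Probe whether this firmware build supports every operator in a model.
# "my_model.tflite" is a placeholder path on the camera's filesystem.
import tf

try:
    net = tf.load("my_model.tflite", load_to_fb=True)
    print("Model loaded, all operators supported:", net)
except Exception as e:
    # On the H7 Plus, a model using an operator outside the reduced set
    # fails here; the same model may still load fine on the RT1062.
    print("Model failed to load:", e)
```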

As for YOLO support: that’s blocked on Edge Impulse rolling the feature out. We were testing some models with them back in March of this year and achieved 2.5 FPS with YOLO on the RT1062 at 240x240 resolution. However, they were having issues training the models, and on top of that they only want to release the feature to enterprise customers. So… it’s not clear how you can actually use it.

Internally at OpenMV we are focused on next-gen stuff right now, which will remove the blockers on running models entirely and let you put anything onboard.

Thanks for the quick response :slight_smile:

What needs to be done to use an Ultralytics int8-quantized tflite YOLOv8n model?

Best, Casper

The models we tested with were NVIDIA TAO ones: YOLOv3.

I can’t comment on YOLOv8.

As for running it… if you pull the PR I was working on and modify the code to parse the output head of YOLOv8, it will most likely work. You just need the correct C code to parse the final results of the model. However, YOLO models all have slightly different output heads, so static C code isn’t the way to go.

We will be modifying that PR to allow post-processing via custom Python callbacks, which will let any model run (a rough sketch of what such a callback would have to do is below). However, I’m not working on this functionality right now. If you want to modify the PR, please go ahead.
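For reference, here is roughly what such a callback would have to do for a YOLOv8-style head. To be clear, this is a sketch under assumptions, not the PR’s API: no callback hook exists in the firmware yet, and the layout assumed below (a dequantized (84, num_predictions) float output where rows 0–3 are cx/cy/w/h and rows 4+ are class scores, typical of an 80-class 640x640 Ultralytics export) and the 0.25 threshold are just common defaults. Depending on export settings the coordinates may be in pixels or normalized 0..1.

```python
# HYPOTHETICAL sketch only: the Python post-processing callback API does not
# exist yet. Assumes a dequantized float output in the usual Ultralytics
# YOLOv8 layout: rows 0-3 are cx, cy, w, h and rows 4+ are per-class scores
# (no objectness row).

CONF_THRESHOLD = 0.25  # assumed default, tune per model

def decode_yolov8(output):  # output: nested lists shaped (84, num_preds)
    detections = []
    num_preds = len(output[0])
    num_classes = len(output) - 4
    for i in range(num_preds):
        # Pick the best-scoring class for this prediction.
        best_score, best_class = 0.0, -1
        for c in range(num_classes):
            score = output[4 + c][i]
            if score > best_score:
                best_score, best_class = score, c
        if best_score < CONF_THRESHOLD:
            continue
        # Convert center/size to a corner rect (same units as the output).
        cx, cy, w, h = output[0][i], output[1][i], output[2][i], output[3][i]
        detections.append((cx - w / 2, cy - h / 2, w, h, best_score, best_class))
    # A real callback would also run non-max suppression on these boxes.
    return detections
```

The point is just that this per-model layout logic is exactly what differs between YOLO variants, which is why it belongs in a per-model Python callback rather than static C code.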