Deploying Object Detection Models on OpenMV Cam H7 R2

Hello,

I was wondering what the capabilities of the OpenMV Cam H7 R2 are in terms of running object detection models such as YOLO. I have seen the documentation on TensorFlow Lite support, but I am not sure whether the operations in YOLO are supported by TFLite, and furthermore whether the hardware capabilities of the cam are adequate for a real-time object detection task using such models.

I would appreciate some light being shed on this matter.
Thank you!

Please see Edge Impulse's FOMO network. It's like YOLO but can run on the MCU.

Note that there's new silicon coming out next year on which we will be able to do this. But not yet.
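For intuition about why FOMO fits on an MCU: instead of predicting anchor boxes like YOLO, it outputs a coarse per-class heatmap, and cells above a threshold become object centroids. A minimal sketch of that post-processing idea (the cell size and scores below are made-up illustration values, not the actual OpenMV/Edge Impulse implementation):

```python
# Hypothetical illustration of how a FOMO-style output grid becomes detections.
# FOMO emits a coarse grid of per-class scores (one cell covers a patch of the
# image); cells above a threshold are reported as object centroids, not boxes.

def fomo_detections(grid, threshold=0.5, cell_size=8):
    """grid: 2D list of scores for one class, grid[row][col] in [0, 1].
    Returns (x, y, score) centroids in pixel coordinates."""
    detections = []
    for row, line in enumerate(grid):
        for col, score in enumerate(line):
            if score >= threshold:
                # Centre of the activated cell, scaled back to image pixels.
                x = col * cell_size + cell_size // 2
                y = row * cell_size + cell_size // 2
                detections.append((x, y, score))
    return detections

# A 3x3 heatmap with one strong activation in the middle cell.
heatmap = [
    [0.1, 0.2, 0.1],
    [0.1, 0.9, 0.2],
    [0.1, 0.2, 0.1],
]
print(fomo_detections(heatmap, threshold=0.5))
```

Because the network head is just this small grid, the model is far cheaper in RAM and compute than a full YOLO head, which is why it runs on the H7-class parts.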

I see that the OpenMV H7 R2 has 1 MB of RAM. So will it be able to support any FOMO model that takes less than 1 MB for processing?

Yeah, it can run FOMO models. However, you want to build them into the firmware. Edge Impulse does this.
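One reason building the model into the firmware matters: the quantized weights can then be executed from flash, so RAM is mostly needed for the input buffer and the tensor arena (scratch space for activations). A back-of-envelope check with assumed, illustrative numbers (not measured figures for any specific FOMO model):

```python
# Rough RAM budget check for an int8 FOMO-style model on a 1 MB part.
# All figures below are assumptions for illustration, not measurements.

input_w, input_h, channels = 96, 96, 1      # typical small grayscale input
input_bytes = input_w * input_h * channels  # int8 = 1 byte per value
model_weights = 60 * 1024                   # ~60 KB quantized model (assumed)
tensor_arena = 150 * 1024                   # activation scratch space (assumed)

total = input_bytes + model_weights + tensor_arena
budget = 1 * 1024 * 1024  # 1 MB of SRAM, shared with the rest of the firmware

print(total, total < budget)
```

If the weights live in flash instead of RAM, the `model_weights` term drops out of the RAM total entirely, which is the main win of baking the model into the firmware.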

How do I build it into the firmware?

openmv/src/lib/tflm at master · openmv/openmv (github.com)

Hi @kwagyeman, wondering if there’s any update on this


Hi, you can run YOLOv5 models on the H7 Plus and RT1062 right now. It just runs at 0.3 FPS or so. Edge Impulse allows you to train them right now.

We don’t have any demos yet per se, but I can share scripts that work with you if you want to try this out right now. v4.6.0, which was just released, can run YOLOv5. As for the new silicon, that’s around the corner. :)


Hi, can you send me the YOLOv5 scripts for the OpenMV H7 Plus?

# This work is licensed under the MIT license.
# Copyright (c) 2013-2024 OpenMV LLC. All rights reserved.
# https://github.com/openmv/openmv/blob/master/LICENSE
#
# TensorFlow Lite Tiny YoloV5 Object Detection Example

import time
import sensor
import ml
from ml.postprocessing import yolo_v5_postprocess

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)

model = ml.Model("<MODEL FILE PATH>", load_to_fb=True)
model_class_labels = ["person"]
model_class_colors = [(0, 0, 255)]
print(model)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()

    # boxes is a list of lists (one per class) of ((x, y, w, h), score) tuples
    boxes = model.predict([img], callback=yolo_v5_postprocess(threshold=0.4))

    img.to_ironbow()

    # Draw bounding boxes around the detected objects
    for i, class_detections in enumerate(boxes):
        rects = [r for r, score in class_detections]
        labels = [model_class_labels[i] for j in range(len(rects))]
        colors = [model_class_colors[i] for j in range(len(rects))]
        ml.utils.draw_predictions(img, rects, labels, colors, format=None)

    print(clock.fps(), "fps")

# This work is licensed under the MIT license.
# Copyright (c) 2013-2024 OpenMV LLC. All rights reserved.
# https://github.com/openmv/openmv/blob/master/LICENSE
#
# TensorFlow Lite Tiny YoloV2 Object Detection Example

import time
import sensor
import ml
from ml.postprocessing import yolo_v2_postprocess

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)

model = ml.Model("<MODEL FILE PATH>", load_to_fb=True)
model_class_labels = ["person"]
model_class_colors = [(0, 0, 255)]
print(model)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()

    # boxes is a list of lists (one per class) of ((x, y, w, h), score) tuples
    boxes = model.predict([img], callback=yolo_v2_postprocess(threshold=0.4))

    img.to_ironbow()

    # Draw bounding boxes around the detected objects
    for i, class_detections in enumerate(boxes):
        rects = [r for r, score in class_detections]
        labels = [model_class_labels[i] for j in range(len(rects))]
        colors = [model_class_colors[i] for j in range(len(rects))]
        ml.utils.draw_predictions(img, rects, labels, colors, format=None)

    print(clock.fps(), "fps")

Expect frame rates below 1 FPS.


Thanks. How did you generate the models for YOLOv2 and YOLOv5 and convert them to TFLite?


The YOLOv2 model is from ST. The YOLOv5 model was created by Edge Impulse using their tools.
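For reference, if you train a model yourself rather than using the ST or Edge Impulse pipelines, the usual route to an int8 `.tflite` file is TensorFlow's post-training quantization. A generic sketch with the standard TFLite converter (the saved-model path, input shape, and calibration data below are hypothetical placeholders, and this is not the exact tooling used for the models above):

```python
# Generic int8 post-training quantization sketch using the standard
# TensorFlow Lite converter. Paths and shapes are illustrative assumptions.
import numpy as np
import tensorflow as tf

def representative_data():
    # Yield a few calibration samples shaped like the model input
    # (assumed 96x96 RGB here); real calibration should use real images.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("yolo_saved_model")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("yolo_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Full int8 quantization matters on the MCU targets: float models are both larger and much slower, and the OpenMV firmware's accelerated kernels expect quantized ops.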
