Google Teachable Machine on OpenMV

Hi,

Has anyone tried to use a model trained with Google Teachable Machine on OpenMV and can share their experience?

Can the generated .tflite model be used on OpenMV?

I would like to test Teachable Machine, which seems quite easy, but I want to avoid wasting time if it isn't supported by OpenMV.

Thanks.

Hi, we recommend using Edge Impulse to train an image classifier CNN. These run without issues on the H7 Plus and RT1062.

For our upcoming AE3 and N6 we will be leveraging a new service called Roboflow, which can train object detection models based on YOLOv8.

As for Google Teachable Machine, I have no experience with it; if it outputs TensorFlow INT8 quantized models, then it will probably work with us.
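One quick sanity check you can run on your desktop before copying an exported model to the camera (a minimal sketch, not an OpenMV API): every valid .tflite file is a FlatBuffer carrying the `TFL3` file identifier at byte offset 4, so you can at least confirm the export produced a TFLite file.

```python
def looks_like_tflite(data):
    """Check for the TFLite FlatBuffer file identifier ("TFL3" at byte offset 4)."""
    return len(data) >= 8 and data[4:8] == b"TFL3"

# Usage (file name is just an example):
# with open("model.tflite", "rb") as f:
#     print(looks_like_tflite(f.read()))
```

This doesn't verify quantization or tensor shapes, only that the container is what OpenMV expects to load.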

In the end I was able to build a model with Google Teachable Machine and run it on the OpenMV H7 Plus. I asked ChatGPT for help to fix some problems, because the original example for using tflite with OpenMV was not working. Inference runs at about 1.5 FPS, not much, but good enough for this usage. Here is the Python script; attached are the Teachable Machine project files (project_componenti.tm) that can be imported/modified if someone wants to try it.

# prova_1_teachm.py:

# Model generated by Google Teachable Machine
# Output: TFLite, quantized
# Trained to recognise LEDs, ICs (integrati), capacitors and transistors, plus the background
# Written with the help of ChatGPT

import sensor, time, ml

# Load labels
with open("labels.txt") as f:
    labels = [l.strip() for l in f.readlines()]

sensor.reset()
sensor.set_pixformat(sensor.RGB565)  # or GRAYSCALE if model expects it
sensor.set_framesize(sensor.QVGA)    # auto-resized by predict()
sensor.skip_frames(time=2000)
clock = time.clock()

net = ml.Model("model.tflite", load_to_fb=True)

print("Model loaded.")
print("Output shape:", net.output_shape)
print("Output dtype:", net.output_dtype)
while True:
    clock.tick()
    img = sensor.snapshot()

    outputs = net.predict([img])
    
    scores = outputs[0].tolist()[0]

    max_idx = scores.index(max(scores))
    label = labels[max_idx]
    score = float(scores[max_idx])

    print(label, score)
    
    img.draw_string(2, 2, "%s: %.3f" % (label, score))
    
    print(clock.fps(), "fps")

Below is how I exported the trained model.

The next step is to understand whether it is possible to obtain the x, y, w coordinates of the detected object.

project_componenti.zip (4.9 MB)

Cool, the N6 will be out soon. With it, that 1.5 FPS will turn into 60 FPS. Once we get all the optimizations done, then maybe 120 FPS.

Hi, will the N6 also run scripts using MediaPipe for pose detection?
I've seen that there are some examples in this repository:
openmv/scripts/examples/03-Machine-Learning/00-TensorFlow at master · openmv/openmv · GitHub

Yes. I haven't ported the model yet, but honestly, it's going to be pretty trivial at this point to get it working. The AI framework we have set up is coming along quite well.

Most immediately, though, we are focused on upgrading the USB DBG protocol.

Regarding the pose detection…

All the Google MediaPipe object detector models have roughly the same output style, so you only need to make something like this:

https://github.com/openmv/openmv/blob/master/scripts/libraries/ml/ml-mediapipe/ml/postprocessing/mediapipe.py#L130

Basically, just spec the anchors, the scores tensor, and the coordinate output tensor index, and then you can make an example script like:

https://github.com/openmv/openmv/blob/master/scripts/examples/03-Machine-Learning/00-TensorFlow/blazepalm_detection.py

If you look at the model's output tensor list and how it needs anchors, knowing what to change is pretty straightforward.

Probably no more than 10 lines of code need to be changed to support pose detection.
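To illustrate the general idea behind the anchors mentioned above (a hedged sketch of the usual MediaPipe/SSD-style decoding math, not the actual ml-mediapipe API; the scale value and layout vary per model): each raw box output is decoded against a fixed anchor from a precomputed grid.

```python
def decode_box(anchor, raw, scale=128.0):
    """Decode one MediaPipe/SSD-style raw box prediction against its anchor.

    anchor: (cx, cy, w, h) of the anchor, in normalized [0, 1] coordinates.
    raw:    (dx, dy, dw, dh) network outputs, divided by the model's scale.
    Returns (x, y, w, h) of the decoded box in normalized coordinates.
    """
    acx, acy, aw, ah = anchor
    dx, dy, dw, dh = (v / scale for v in raw)
    cx = acx + dx * aw  # shift the anchor center by the predicted offset
    cy = acy + dy * ah
    w = aw * dw         # scale the anchor size (some variants use exp() here)
    h = ah * dh
    return (cx - w / 2, cy - h / 2, w, h)
```

A zero offset with a unit size factor returns the anchor's own box, which is a handy sanity check when wiring up a new model.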

Even the landmark models are pretty simple: openmv/scripts/libraries/ml/ml-mediapipe/ml/postprocessing/mediapipe.py at master · openmv/openmv · GitHub

It looks like MoveNet is a keypoint-output network, so you'd want to create a landmark post-processor. stm32ai-modelzoo/pose_estimation/movenet at main · STMicroelectronics/stm32ai-modelzoo · GitHub
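As a rough illustration of what such a landmark post-processor does (a sketch only, not the OpenMV API): MoveNet single-pose models output a [1, 1, 17, 3] tensor of normalized (y, x, score) triples, one per keypoint, which just need to be filtered by score and mapped to pixel coordinates.

```python
def decode_keypoints(raw, img_w, img_h, threshold=0.3):
    """Convert MoveNet-style normalized (y, x, score) rows to pixel coordinates.

    raw: flat list of 17 * 3 floats, laid out as [y0, x0, s0, y1, x1, s1, ...].
    Returns a list of (x_px, y_px, score) for keypoints above the threshold.
    """
    keypoints = []
    for i in range(0, len(raw), 3):
        y, x, score = raw[i], raw[i + 1], raw[i + 2]
        if score >= threshold:
            keypoints.append((int(x * img_w), int(y * img_h), score))
    return keypoints
```

On the camera you would feed this from the model's output tensor and then draw the surviving keypoints on the snapshot.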

Anyway, it's still pretty easy with our AI framework. You can get this running on the H7 Plus, but it will be at something like 0.1 FPS.