ML Model Failed to allocate tensors

Hello, I am trying to use an instance segmentation model exported from YOLOv8 and it throws this error.
Model output: ((1, 37, 525), (1, 32, 40, 40))

Traceback (most recent call last):
  File "<stdin>", line 20, in <module>
  File "ml/model.py", line 14, in __init__
ValueError: Failed to allocate tensors
OpenMV v4.5.8; MicroPython v1.23.0-r6; OPENMVPT-STM32H743
Type "help()" for more information.

import sensor, image, time, math, ml

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.VGA)
sensor.skip_frames(time = 2000)

sensor.set_windowing((160, 160))

clock = time.clock()

print('debug')

model = 'best_int8.tflite'

seg_model = ml.Model(model)

I remember this used to work with the previous TensorFlow Lite module. Can you explain how to use the current module to segment images?

Hi, is there any more debugging information available?

You should have a massive 8 MB heap, and you can also do: ml.Model(model, load_to_fb=True).

There are two errors possible here:

  1. You don't have enough RAM to run the model. This is unlikely, but you can judge how to load it by looking at the model's file size. I recommend loading it with load_to_fb=True.
  2. An operator is not supported. In that case you should see some error text about it. Note that we are now using the latest TensorFlow release and have enabled every operator, so this shouldn't be the issue.
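To make the RAM decision in point 1 concrete, here is a minimal sketch of the heuristic (the `should_load_to_fb` helper name is mine, not part of the OpenMV API): compare the model's file size against free heap minus a safety margin, and load to the frame buffer when the heap would get too tight.

```python
def should_load_to_fb(model_size, free_heap, margin=64 * 1024):
    # Load to the frame buffer when keeping the model on the heap
    # would leave less than `margin` bytes of heap free.
    return model_size > (free_heap - margin)

# On the camera you would use it like this (uos/gc are MicroPython modules):
# size = uos.stat('best_int8.tflite')[6]  # index 6 is the file size in bytes
# model = ml.Model('best_int8.tflite',
#                  load_to_fb=should_load_to_fb(size, gc.mem_free()))
```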

The error message is thrown here: openmv/src/lib/tflm/tflm_backend.cc at master · openmv/openmv (github.com)

This means the model can be read, but TensorFlow itself doesn't like it. Please look for any messages from the tflm_backend for debugging info.

See this post for how to post-process the image:

OpenMV Firmware v4.5.6 and up TensorFlow Porting Guide - OpenMV Products - OpenMV Forums

Segmentation models are basically just FOMO models, so you just need this code:

import sensor, image, time, os, ml, math, uos, gc
from ulab import numpy as np

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None

try:
    # Load to the frame buffer if keeping the model on the heap would leave less than 64K free.
    net = ml.Model("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
except Exception as e:
    raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

def post_process(model, inputs, outputs):
    # Example of creating images from a (1, 16, 16, N) FOMO output tensor.
    ob, oh, ow, oc = model.output_shape[0]
    return [image.Image(outputs[0][0, :, :, i] * 255) for i in range(oc)]

clock = time.clock()
while True:
    clock.tick()

    img = sensor.snapshot()

    image_list = net.predict([img], callback=post_process)

    print(clock.fps(), "fps", end="\n\n")

From your model output above, ((1, 37, 525), (1, 32, 40, 40)) looks like a dual-tensor-output model, so we should be able to run it. You'll find each tensor under outputs[0] and outputs[1].

For slicing (1, 32, 40, 40) do:

# Making an assumption here... change this if wrong.
ob, oc, oh, ow = model.output_shape[1]
return [image.Image(outputs[1][0, i, :, :] * 255) for i in range(oc)]
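For the (1, 37, 525) detection tensor, a YOLOv8-seg export usually packs, per anchor, 4 box values, then the class scores, then 32 mask coefficients (37 = 4 + 1 + 32 would mean one class, and 525 anchors matches a 160x160 input across the 20x20, 10x10, and 5x5 grids). This layout is an assumption about your particular export, and `split_seg_head` is a name I made up; here is a minimal sketch of splitting the tensor and keeping confident anchors:

```python
try:
    from ulab import numpy as np  # on the camera
except ImportError:
    import numpy as np  # desktop stand-in for testing

def split_seg_head(det, num_classes=1, num_coeffs=32, threshold=0.5):
    # det has shape (1, 4 + num_classes + num_coeffs, num_anchors).
    # Assumed row layout: 0-3 = box (cx, cy, w, h), then class scores,
    # then mask coefficients. Change the slices if your export differs.
    boxes = det[0, 0:4, :]                 # (4, num_anchors)
    scores = det[0, 4:4 + num_classes, :]  # (num_classes, num_anchors)
    coeffs = det[0, 4 + num_classes:, :]   # (num_coeffs, num_anchors)
    keep = []
    for a in range(det.shape[2]):
        if np.max(scores[:, a]) > threshold:
            keep.append(a)
    return boxes, scores, coeffs, keep
```

After picking anchors you would combine each kept anchor's 32 coefficients with the (1, 32, 40, 40) prototype masks (a weighted sum over the 32 channels) to get per-instance masks, which is the standard YOLOv8-seg post-processing step.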