ML Model Failed to allocate tensors

See this post for how to post-process the image:

OpenMV Firmware v4.5.6 and up TensorFlow Porting Guide - OpenMV Products - OpenMV Forums

Segmentation model outputs are image-like, just like FOMO model outputs. So, basically you just need this code:

import sensor, image, time, os, ml, math, uos, gc
from ulab import numpy as np

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None

try:
    # load the model, alloc the model file on the heap if we have at least 64K free after loading
    net = ml.Model("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
except Exception as e:
    raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

def post_process(model, inputs, outputs):
    # example of creating an image from a (1, 16, 16, N) fomo output image.
    ob, oh, ow, oc = model.output_shape[0]
    return [image.Image(outputs[0][0, :, :, i] * 255) for i in range(oc)]
        
clock = time.clock()
while(True):
    clock.tick()

    img = sensor.snapshot()

    image_list = net.predict([img], callback=post_process)

    print(clock.fps(), "fps", end="\n\n")
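If you want to sanity-check what post_process is doing without a camera, the per-channel slicing can be reproduced with plain NumPy (ulab follows the same indexing rules). The `fake_output` array below is a made-up stand-in for the real (1, 16, 16, N) channels-last tensor:

```python
import numpy as np

# Fake (1, 16, 16, 4) channels-LAST FOMO output: batch, height, width, classes.
ob, oh, ow, oc = 1, 16, 16, 4
fake_output = np.random.rand(ob, oh, ow, oc).astype(np.float32)

# Same slicing as in post_process: one 2D map per class channel,
# scaled from [0, 1] to [0, 255] so it can be turned into a grayscale image.
channel_maps = [fake_output[0, :, :, i] * 255 for i in range(oc)]

print(len(channel_maps))      # one map per class: 4
print(channel_maps[0].shape)  # (16, 16)
```

On the camera, each of these 2D maps is what gets wrapped in image.Image().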

From your model output above, ((1, 37, 525), (1, 32, 40, 40)) looks like a dual-tensor-output model. So, we should be able to run it. You'll find each tensor in the callback under outputs[0] and outputs[1].
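As a quick sanity check before slicing anything, it helps to confirm which output is which by shape. Here's a sketch using zero-filled NumPy arrays as stand-ins for the two tensors your model reports; the real ones arrive in the `outputs` list passed to the callback:

```python
import numpy as np

# Stand-ins matching the two reported output shapes.
outputs = [np.zeros((1, 37, 525)), np.zeros((1, 32, 40, 40))]

for idx, out in enumerate(outputs):
    print(idx, out.shape, out.ndim)

# The 4D (1, 32, 40, 40) tensor is the image-like one worth turning into
# per-channel masks; the 3D (1, 37, 525) tensor is in a different layout.
image_like = [o for o in outputs if o.ndim == 4]
```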

For slicing (1,32,40,40) do:

# making an assumption here... change this if wrong.
ob, oc, oh, ow = model.output_shape[1]
return [image.Image(outputs[1][0, i, :, :] * 255) for i in range(oc)]
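The difference from the FOMO case is just which axis you slice along. Here's the same check in plain NumPy for the channels-first layout, again with a made-up `fake_output` standing in for the real tensor (and still under the assumption that (1, 32, 40, 40) is NCHW):

```python
import numpy as np

# Fake (1, 32, 40, 40) channels-FIRST output: batch, channels, height, width.
# NCHW is the assumption above; swap axes if it turns out to be NHWC.
ob, oc, oh, ow = 1, 32, 40, 40
fake_output = np.random.rand(ob, oc, oh, ow).astype(np.float32)

# Slice along axis 1 instead of the last axis: one 40x40 map per channel.
channel_maps = [fake_output[0, i, :, :] * 255 for i in range(oc)]

print(len(channel_maps))      # 32
print(channel_maps[0].shape)  # (40, 40)
```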