ML module on the H7

Hi, I previously built a custom neural network for object classification from scratch and ran it on an H7 Plus camera, where it worked fine. Now I want to use the same architecture on an H7 camera. I made some modifications to shrink the network (~88 kB) so it can run locally on this board, which has less RAM and flash, but I keep getting the following error:

  File "<stdin>", line 24, in <module>
  File "ml/model.py", line 14, in __init__
ValueError: Failed to allocate tensors

I would like to know what the memory limit is, or why this doesn’t work. In general terms, my network has 3 output classes and takes a 96x96 grayscale input. I think the H7 is powerful enough for this, but I don’t understand why it fails.

I appreciate any help you can provide.

A small snippet of the code I used:

model = ml.Model("model.tflite", load_to_fb=True)
labels = ["Person", "Bicycle", "Other"]
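For context, the full script is essentially the standard snapshot/predict loop, roughly like this (a simplified sketch; the error above is raised at the ml.Model() call, so the loop never runs, and it assumes the model outputs one score per class, whereas a FOMO-style detection head would need heatmap post-processing instead):

import sensor
import ml

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # The model takes 96x96 grayscale input.
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

model = ml.Model("model.tflite", load_to_fb=True)
labels = ["Person", "Bicycle", "Other"]

while True:
    img = sensor.snapshot()
    # predict() takes a list of inputs and returns a list of output tensors;
    # for a classifier the first output holds one score per class.
    scores = model.predict([img])[0].flatten().tolist()
    print(list(zip(labels, scores)))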

And this is the tflite model.

model_FOMO_MLVersion.zip (56.8 KB)

So, does it work on the H7 Plus with the latest firmware but not the H7 with the latest firmware?

If so, then that would only be a memory issue.

Try:

model = ml.Model("model.tflite")
labels = ["Person", "Bicycle", "Other"]

load_to_fb takes away fb_alloc RAM, which is used to interrogate the model to see how much RAM it needs before putting the tensor arena in the heap: openmv/src/lib/tflm/tflm_backend.cc at master · openmv/openmv (github.com)
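If you want to see how much heap is actually available before and after loading, a quick check with gc helps (standard MicroPython; the exact numbers vary by firmware build):

import gc
import ml

gc.collect()
print("Heap free before load:", gc.mem_free(), "bytes")

# Without load_to_fb, the model data and its tensor arena both go on the
# MicroPython heap, leaving fb_alloc free for the size interrogation step.
model = ml.Model("model.tflite")
print(model)  # Prints basic model information.

gc.collect()
print("Heap free after load:", gc.mem_free(), "bytes")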

That said, if this change doesn’t work then the model doesn’t fit. The only other option is to build it into the firmware: openmv/src/lib/tflm at master · openmv/openmv (github.com). This removes the storage of the model itself from RAM.
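As a middle ground before rebuilding the firmware, one pattern is to only fall back to load_to_fb when the model file clearly won't fit on the heap (a sketch; the 64 KiB headroom is an arbitrary margin):

import gc
import os
import ml

MODEL_PATH = "model.tflite"

# Keep the model on the heap when it fits with some headroom; otherwise
# push it into the frame buffer with load_to_fb.
fits_on_heap = os.stat(MODEL_PATH)[6] < (gc.mem_free() - (64 * 1024))
model = ml.Model(MODEL_PATH, load_to_fb=not fits_on_heap)
print(model)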

Thank you for your response. I’ve made some modifications to my model, and it works fine using the “classical” method of loading the model file from the internal flash filesystem. However, I would now like to use a higher resolution (upgrading from QVGA to VGA). I attempted modifying the firmware to free up more RAM but couldn’t find a working solution, so I followed the build guide for Linux systems (the Docker-based guide didn’t work, possibly due to the ARM GCC compiler). I added my model to the /models folder and updated the index.csv file with my model’s name. The firmware builds and flashes successfully, but when I load the model with the script below, I get the following error:

import sensor
import math
import ml

sensor.reset()  # Reset and initialize the sensor.
sensor.set_pixformat(sensor.GRAYSCALE)  # Set pixel format to GRAYSCALE.
sensor.set_framesize(sensor.QVGA)  # Set frame size to QVGA (320x240).
sensor.skip_frames(time=2000)  # Let the camera adjust.

min_confidence = 0.4
threshold_list = [(math.ceil(min_confidence * 255), 255)]

# Load the custom FOMO detection model built into the firmware.
model = ml.Model("detection_custom")
print(model)

Traceback (most recent call last):
  File "<stdin>", line 25, in <module>
  File "ml/model.py", line 14, in __init__
OSError: Could not find the file

OpenMV b56df8d; MicroPython f158a460; OPENMV4-STM32H743

EDIT: I believe the issue is related to the firmware version. I tried the firmware built from the GitHub repository (version 4.5.9 according to OpenMV IDE), but it fails in the same way. I would like to know how to clone and build version 4.5.8; I tried downloading the source from the releases page, but the firmware can’t be built from it because not all the necessary files are included. I would appreciate any help you can provide. Thanks!

If you add it to the index, you need to enable it via the omv_boardconfig.h file; please see the other boards for an example. Otherwise, just don’t add it to the index.

# Models listed here are embedded into the firmware image only if they are enabled
# by the board config. Other models in this directory, not listed in this file, are
# enabled by default.
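Once the model is embedded and enabled, it is referenced by name, with no path or ".tflite" suffix; a minimal sketch, assuming the model was built in under the "detection_custom" name used above:

import sensor
import ml

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# A built-in model is stored in flash, so only its tensor arena uses RAM.
model = ml.Model("detection_custom")
print(model)

while True:
    img = sensor.snapshot()
    # predict() returns a list of output tensors; a FOMO head still needs
    # the usual heatmap post-processing on these.
    print(model.predict([img]))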

It worked fine. Thanks a lot 🙂