Hi team, long time no talk!
I’m trying to deploy a custom DeepLabV3+ MobileNetV2 semantic segmentation model (512x512, 5 classes, INT8 quantized, 4.8 MiB TFLite) on the OpenMV N6.
The model uses the same ops as ST’s own deeplab_v3_mobilenetv2_05_16_512_asppv1_int8.tflite in the IDE model zoo.
Offline compilation works with the stedgeai scripts' mpool:
Using the bundled stedgeai v3.0.0 with the n6-allmems-O3 profile (which exposes 32 MB of hyperRAM), the model compiles successfully: 134 NPU epochs, 5.1 MiB weights, ~24 MiB activations. I also generated a valid network_rel.bin via npu_driver.py.
Compilation fails with the firmware mpool:
When I compile with the firmware’s own mpool (firmware/OPENMV_N6/stm32n6.mpool), which is what the ROMFS editor uses via neuralart.json, atonn fails:
```
Warning: Oauto did not find valid compile options: aborting
total bytes left unallocated=9674752
```
The firmware mpool exposes only 16 MB of hyperRAM to the NPU, but the model needs ~24 MiB of activations — hence the 9,674,752 bytes (~9.2 MiB) left unallocated.
The hardware has the memory:
I probed the PSRAM directly from the N6’s MicroPython REPL using uctypes.bytearray_at(), writing unique values at 0 MB, 16 MB, 32 MB, and 48 MB offsets, then reading them all back:
```
+0MB  0x90000000: wrote 0xAAAA0000 read 0xAAAA0000 OK
+16MB 0x91000000: wrote 0xBBBB1111 read 0xBBBB1111 OK
+32MB 0x92000000: wrote 0xCCCC2222 read 0xCCCC2222 OK
+48MB 0x93000000: wrote 0xDDDD3333 read 0xDDDD3333 OK
+64MB 0x94000000: FAULT (boundary)
```
The N6 has 64 MB of real, writable PSRAM. The fault at +64 MB confirms the boundary.
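For anyone who wants to reproduce the probe, a minimal sketch of what I ran is below. The base address (0x90000000) and 16 MB stride are from my test above; the `try/except` host fallback is only there so the sketch can be exercised off-device — on the N6 it maps the physical addresses directly via `uctypes`:

```python
MB = 1024 * 1024
PSRAM_BASE = 0x90000000
# Unique 32-bit pattern per 16 MB step, as in the results above.
PATTERNS = {0: 0xAAAA0000, 16: 0xBBBB1111, 32: 0xCCCC2222, 48: 0xDDDD3333}

try:
    import uctypes  # MicroPython: view physical memory directly

    def psram_window(addr, n):
        return uctypes.bytearray_at(addr, n)
except ImportError:
    _sim = {}  # host fallback: fake 4-byte cells keyed by address

    def psram_window(addr, n):
        return _sim.setdefault(addr, bytearray(n))

def probe():
    # Write all patterns first, then read them all back, so that
    # address aliasing (e.g. a 16 MB window wrapping) would show up
    # as a mismatch rather than a stale-but-correct readback.
    for off, val in PATTERNS.items():
        psram_window(PSRAM_BASE + off * MB, 4)[:] = val.to_bytes(4, "little")
    results = {}
    for off, val in PATTERNS.items():
        got = int.from_bytes(bytes(psram_window(PSRAM_BASE + off * MB, 4)), "little")
        results[off] = (val, got, got == val)
        print("+%dMB 0x%08X: wrote 0x%08X read 0x%08X %s"
              % (off, PSRAM_BASE + off * MB, val, got,
                 "OK" if got == val else "MISMATCH"))
    return results

probe()
```

Writing everything before reading anything back is the important part: a simple write-then-read-immediately loop at each offset would still "pass" if the upper address bits were ignored and every window aliased the same physical 16 MB.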
The question:
The firmware mpool at firmware/OPENMV_N6/stm32n6.mpool defines hyperRAM as 16 MB at 0x92000000 with USEMODE_RELATIVE. Could this be expanded to 32 MB (or more) so that larger models like the 512x512 segmentation models in ST’s own model zoo can be compiled and deployed via the ROMFS editor?
The model compiles fine with 32 MB. It does not fit in 16 MB. The hardware has 64 MB.
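To be concrete about the request, the change I have in mind is just the size of the hyperRAM entry in stm32n6.mpool. The fragment below is purely illustrative — I'm reconstructing the mpool schema from memory of the bundled profiles, so the exact field names and layout may differ; the point is only "hyperRAM: 16 → 32 MBytes, everything else unchanged":

```json
{
  "mempools": [
    {
      "name": "hyperRAM",
      "rights": "ACC_WRITE",
      "mode": "USEMODE_RELATIVE",
      "offset": { "value": "0x0" },
      "size": { "value": "32", "magnitude": "MBytes" }
    }
  ]
}
```

If reserving the upper half of the 64 MB part for the frame buffer / GC heap is the concern, even 32 MB for the NPU would leave 32 MB for everything else and unlock the 512x512 models.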
Setup: OpenMV N6, firmware v4.8.1, MicroPython v1.26.0-77, stedgeai v3.0.0
Thanks!

