Running a 200 KB model on the OpenMV H7

I managed to compile the firmware with a 200 KB TFLite face detection model built in, but I can't get it to load because the buffer is too small:

tflm_backend: Failed to resize buffer. Requested: 344064, available 239116, missing: 104948

Since the model is built in, I read that it can be run directly from flash without loading it into RAM, but I'm not sure whether that's still the case with the new ml module that supports persistent models. Is there any way to allocate memory from somewhere else, or am I just limited to that amount of memory?

Hi, you are just limited by that amount of memory. We used to allocate the arena on the frame buffer but dropped that because it prevented models from persisting in memory. The H7 Plus or RT1062 both have ample RAM, so this isn't an issue there. If you want to hack the firmware, you can move the memory allocation for the tensor arena into the frame buffer via fb_alloc().

Could you point me to where I need to look to move the tensor arena into fb_alloc()?
Is it just a matter of changing m_alloc() to fb_alloc() in the TFLM backend?

openmv/src/lib/tflm/tflm_backend.cc at master · openmv/openmv

OK, I managed to get that working, but it seems not even the frame buffer has enough memory, since the model tries to use ~330 KB when running. Is there a way to increase the frame buffer memory, like shrinking the filesystem or the bootloader partition? Also, do you think a model like this could run at a usable FPS for face detection, or is it too much for the H7 to handle?

You can play around with RAM sizes here: openmv/src/omv/boards/OPENMV4/omv_boardconfig.h at master · openmv/openmv
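
One note on the earlier question: the filesystem and bootloader live in flash, so shrinking them won't free any RAM; the trade-off is between the RAM regions defined in that header. As a rough illustration of the kind of knobs in there (macro names and values are from my reading of the file and may differ in your firmware version, so treat this as a sketch, not the actual defaults):

```c
/* Illustrative fragment in the style of omv_boardconfig.h.
 * The sizes must still fit within the H7's SRAM regions, so
 * growing one pool means shrinking another. */
#define OMV_HEAP_SIZE       (230 * 1024)  // MicroPython GC heap
#define OMV_FB_SIZE         (400 * 1024)  // frame buffer (hypothetical bump)
#define OMV_FB_ALLOC_SIZE   (100 * 1024)  // fb_alloc() scratch pool
```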