OSError: Arena size is too small for activation buffers when using Edge Impulse with the H7 (non-Plus)

Hey all,

I'm working on an ML project with the H7 and unfortunately don't have the Plus.

I followed this tutorial for working within the constraints of the H7 (non-Plus) here

It is unfortunate because my model went from 100% accuracy in the Edge Impulse tutorial to 14% with the reduction, but oh well.

When I try to run the deployed version on the camera, I get the following error:
OSError: Arena size is too small for activation buffers. Needed 57648 but only 54640 was available.


Is there a way I can reduce the framerate or slow down the camera or something in order to get it to run? Is there a different recommended low-powered tool?

Try a smaller frame size; it will make more memory available to TF.

So do I adjust this parameter here:
sensor.set_framesize(sensor.QVGA) # Modify as you like.

Do you recommend adjusting that parameter in both data_capture_script.py (when I collect my data) and ei_image_classification.py?

Yes to QQVGA or lower.
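For reference, the change is just the framesize line in the sensor setup. A minimal sketch of the config (the grayscale line reflects the grayscale option discussed below; the rest is the usual boilerplate from the example scripts):

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # 1 byte/pixel vs 2 for RGB565
sensor.set_framesize(sensor.QQVGA)      # 160x120 instead of QVGA's 320x240
sensor.skip_frames(time=2000)           # let the sensor settle after config
```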

Awesome, thanks! I'll adjust the sensor and retake the photos.

I'm not sure whether this is in scope for an OpenMV question (this may be Edge Impulse territory), but I'm curious: after adjusting the frame size, do I still need to make the impulse where I downgrade the image size to 48x48 (instead of 90x90 as per the Edge Impulse recommendation), as well as making the images grayscale and using MobileNetV2 0.1 (instead of MobileNetV2 0.35 as per the Edge Impulse recommendation)?

I guess I'm uncertain about the difference between RAM constraints and frame buffer constraints.
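To make the frame buffer side of this concrete, some back-of-the-envelope math (plain Python; the resolutions are the standard QVGA/QQVGA geometries, and RGB565 vs grayscale byte widths are how the OpenMV cameras store pixels):

```python
def frame_bytes(width, height, bytes_per_pixel):
    """Raw frame buffer size for a given resolution and pixel format."""
    return width * height * bytes_per_pixel

# RGB565 is 2 bytes/pixel, grayscale is 1 byte/pixel.
qvga_rgb = frame_bytes(320, 240, 2)   # QVGA, RGB565
qqvga_rgb = frame_bytes(160, 120, 2)  # QQVGA, RGB565
qqvga_gs = frame_bytes(160, 120, 1)   # QQVGA, grayscale

print(qvga_rgb, qqvga_rgb, qqvga_gs)  # 153600 38400 19200
```

So dropping from QVGA RGB565 to QQVGA frees roughly 115 KB of frame buffer RAM before the model ever runs, which is a separate budget from the model's own arena.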

Thanks so much!


You don’t need to retrain the network; just set the framesize to QQVGA (which is 160x120), so you should be able to scale down to 90x90 while saving some frame buffer RAM. Retraining a smaller model or switching architectures should work too.

Whew, glad to know I don't need to retake a bunch of photos. Thanks for your help!

When you say retraining a smaller model, does that entail:

  • fewer photos? I currently have about 80, split 40/40 between classes, with 80% used for training and 20% for testing

  • not using transfer learning?

I'm curious whether those shrink the model that needs to be loaded and run.

No, it means using smaller images, e.g. 48x48. You don't have to retake photos; I think Edge Impulse downscales the images.

Just change the image size from 96x96 to 48x48.
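As a rough sketch of why shrinking the image size helps: the model's input tensor (and, roughly, the early activation buffers) scale with the pixel count, so halving both dimensions cuts them by about 4x. Plain Python; the int8-quantized, 3-channel assumptions are illustrative, not taken from the thread:

```python
def input_tensor_bytes(width, height, channels, bytes_per_value=1):
    """Size of a quantized (int8) model input tensor."""
    return width * height * channels * bytes_per_value

big = input_tensor_bytes(96, 96, 3)    # 96x96 RGB input
small = input_tensor_bytes(48, 48, 3)  # 48x48 RGB input

print(big, small, big // small)  # 27648 6912 4
```

Going grayscale (channels=1) on top of that cuts the input by another 3x, which is why the tutorial combines both reductions.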

Using 48x48 and RGB I get:
MemoryError: Out of fast Frame Buffer Stack Memory! Please reduce the resolution of the image you are running this algorithm on to bypass this issue

Is there a way to do something like a really slow FPS? I don't anticipate the camera needing to capture very fast for my application.

How big is the network itself?

What gets stored on the stack is the network, the activation buffers needed for the network, and the image from the camera.

If the network itself is large and the activation buffers are large, then you run into issues.
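That's why slowing the FPS won't help: the failure is a one-time allocation check, not a throughput problem. A minimal sketch of the budget check implied by the OSError (the 57648/54640 figures are taken from the error message earlier in this thread; the helper name is hypothetical):

```python
def fits_in_arena(activation_bytes, available_bytes):
    """Mirror the check behind the OSError: activation buffers must fit the arena."""
    return activation_bytes <= available_bytes

# Numbers from the error reported above.
needed, available = 57648, 54640
print(fits_in_arena(needed, available), available - needed)  # False -3008
```

The only levers that move `needed` are the model itself (input size, channels, architecture width); camera speed never enters the equation.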