AllocateTensor problems

Hi,
I tried to transplant my trained small network (64 KB) to the OpenMV H743,
but got the following error: OSError: Arena size is too small for activation buffers. Needed 1179648 but only 480096 was available. AllocateTensors() failed!
In fact, my network is much smaller than the available RAM, and I don't know where this big number, 1179648, comes from. Is there any way to solve this problem? And is there any way to customize the size that AllocateTensors() uses?
Thanks a lot!

It’s because that’s the amount of RAM the network uses to run.

If you’re doing a huge number of convolutions in one layer then this will happen.

Like you probably have a convolution layer with a very large number of filters. When you do that, the peak memory usage for that one stage is very high. It's less RAM if you make the network deeper.

Thanks for your reply!

My network is only a 5-layer depthwise network, and its largest filter count is only 64. I don't think that's a huge number :cry:
The image shows the buffer usage of each layer of my network.
Is there any way to customize the size used by AllocateTensors()?

Looking forward to your reply!

AllocateTensors() does its best to use a low amount of RAM. The Google folks worked hard on making this efficient.

Can you post the network layers?

Thanks again. :smiley:

You have an output from one layer that's 96x96x64 going into another layer that's 96x96x64. So, that's 1,179,648 bytes.

If you want to keep the large filter count, reduce the network input resolution to something like 48x48.
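
Rough math, assuming 8-bit quantized activations (1 byte per value):

96 x 96 x 64 = 589,824 bytes per activation buffer
2 buffers (input + output of that stage) = 1,179,648 bytes

With a 48x48 input the same pair of buffers is 2 x 48 x 48 x 64 = 294,912 bytes, which fits in the 480,096 bytes the arena has available.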

Oh… I got it. :smiley:
Thank you so much!

Hi :smiley:, another question please.
I use sensor.snapshot() to get a real-time image. Is there any way to print the array-like pixel data of the images I get? I haven't found an appropriate function in the OpenMV documentation yet.
Looking forward to your reply, thanks a lot.

You can access the image data using [] indexing on the image. Write a for loop to dump the values.

E.g.

img[(y * img.width()) + x]
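
For example, here's a minimal sketch of a full dump for a grayscale frame (the sensor setup lines are just assumptions to keep the output small; img.get_pixel(x, y) also works if you prefer coordinates):

import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # 1 byte per pixel
sensor.set_framesize(sensor.QQVGA)      # 160x120 keeps the dump manageable
sensor.skip_frames(time=2000)

img = sensor.snapshot()
for y in range(img.height()):
    # Grayscale images index as a flat array: offset = y * width + x
    row = [img[(y * img.width()) + x] for x in range(img.width())]
    print(row)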

Thanks!!! :smiley: It works!

Hi, I have a tough problem now :frowning:
I want to use OpenMV for a grayscale image classification task. I deployed my model on the OpenMV with the TFLite API and used tf.load() and tf.classify() to run inference, but I always get the same output no matter what the picture is. Then I found that sensor.snapshot() returns 0-255 grayscale pixels while my model expects 0-1 input, so I rectified the input by adding a mapping from 0-255 to 0-1 after the input layer, but the output is still unchanged. Inference with the same model on my PC works fine, so there must be something I haven't considered.
Looking forward to your reply, thanks a lot.


The classify method does all that for you. You just pass the image without doing anything extra. If your model takes 0-1 input, that means it takes a float. Our code automatically figures that out and converts the 0-255 pixel values to 0-1.
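
E.g., a rough sketch of the load-and-classify flow using the tf.load()/classify() calls mentioned above (the filename and label list are placeholders; adjust them to your model):

import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)   # match the channels your model was trained on
sensor.set_framesize(sensor.QQVGA)       # placeholder size; match your model's input
sensor.skip_frames(time=2000)

net = tf.load("model.tflite")            # placeholder filename on the SD card
labels = ["class0", "class1"]            # placeholder label list

while True:
    img = sensor.snapshot()
    for obj in net.classify(img):
        scores = obj.output()            # per-class scores; the 0-255 to 0-1 scaling is handled for you
        best = scores.index(max(scores))
        print(labels[best], scores)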

Can you use Edge Impulse to generate a model? Creating a CNN is really rather tricky.