Maximum model size on the OpenMV Cam H7 Plus

Hi, I want to load my tflite file onto the OpenMV Cam H7 Plus, but I don’t know if the size is OK. My tflite file is about 1.5MB.
I know that the OpenMV Cam H7 Plus has about 31MB of frame buffer RAM, but on the other hand the heap memory is only 256KB.
So, what’s the difference between frame buffer RAM and heap memory?
What’s the maximum size my tflite file can be if I want to load it with tf.load()?
If I put the tflite file in the frame buffer RAM, will the system be very slow?
Thanks in advance, and sorry for my English; Spanish is my mother tongue.
Have a good day!

Hi, just add load_to_fb=True to the tf.load() call and this will load the model into SDRAM.
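
For example, here’s a minimal sketch assuming the older tf module API and a model file named model.tflite on the SD card (newer firmware replaces tf with the ml module):

```python
import sensor
import tf  # older OpenMV firmware; newer releases use the ml module

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# load_to_fb=True places the ~1.5MB model in the 32MB SDRAM frame
# buffer instead of the 256KB MicroPython heap, where it won't fit.
net = tf.load("model.tflite", load_to_fb=True)

while True:
    img = sensor.snapshot()
    # The model stays resident in SDRAM, so only inference runs per frame.
    for obj in net.classify(img):
        print(obj.output())
```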

Yes, SDRAM is slower than the onboard SRAM. However, there’s a cache, so it’s not that much slower.

Also, we try to store the actual activation buffers, etc., in SRAM if they fit. So, if your per-layer convolutions aren’t too big, they will fit in SRAM and only the weights stay in SDRAM.


Can I try using tf.classify? What’s the difference between adding load_to_fb=True to the load call and using tf.classify?

Loading the model once is faster, as you don’t incur the cost of loading it each time you run it. The model still has to get off the disk into SDRAM, so with tf.classify you are loading it from the SD card on every frame.
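
To make that concrete, here’s a sketch assuming the form of tf.classify() that takes a file path; passing the path directly means the file is read from the SD card on every frame:

```python
import sensor
import tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

while True:
    img = sensor.snapshot()
    # Passing a path instead of a pre-loaded model object makes
    # tf.classify() re-read the 1.5MB file from disk every iteration.
    for obj in tf.classify("model.tflite", img):
        print(obj.output())
```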

I know the size of my model. My model.tflite is about 1.5MB and my model.h5 is about 19MB.
If I use tf.classify, where is model.tflite stored?
And if I use tf.load with load_to_fb=True?
And if I use tf.load with load_to_fb=False? Actually, I suppose I’ll get an error in this last case!
Thanks

The model always gets loaded into SDRAM. However, with load_to_fb=True it stays resident between calls, so it doesn’t get reloaded per call.

Sorry, but I didn’t understand.
With load_to_fb I’ll have to reload the model on every call? Is that why it’s the slower mode?
On the other hand, I can use load with load_to_fb=False to keep my model in the heap, and in that case I don’t have to reload it on every call.
Is that correct?
Thanks

Classify reloads the model every time. You can’t use load_to_fb=False because your model won’t fit in the heap. So, load the model with load_to_fb=True.
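
Here’s a sketch of what that looks like in practice (the exact exception raised when the heap allocation fails depends on the firmware version, so treat the except clause as an assumption):

```python
import tf

try:
    # A 1.5MB model cannot be allocated from the ~256KB MicroPython heap.
    net = tf.load("model.tflite", load_to_fb=False)
except (MemoryError, OSError) as e:
    print("Heap allocation failed:", e)
    # Fall back to the 32MB SDRAM frame buffer, where it does fit.
    net = tf.load("model.tflite", load_to_fb=True)
```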

Please read the documentation; it’s really straightforward.

I read the documentation: classify reloads the model every time, so it’s slower.
load with load_to_fb=False is not suitable in my case because of the size of my model.
I have to load with load_to_fb=True because it fits the size of my model, but what are the disadvantages compared to loading with load_to_fb=False (which I know I can’t use in my case)?

It just reserves 1.5MB of the frame buffer. If you had less than 32MB of SDRAM this might be an issue, but for you it should not be a problem.

If you were on the regular H7, with only about 400KB of RAM, it would be more of a trade-off even with a smaller model.
