tflite image classification out of memory

hello,
I am using the OpenMV H7 cam to solve an image classification problem, where each captured image is classified as containing a “weevil” insect or not.
So it is a simple binary image classification task.
I retrained mobilenet_v1_025_128_quant (the smallest of all the pretrained models) and then converted the graph to TFLite with the TFLite converter.
The output model.tflite is 250 KB while the heap memory of the H7 is 240 KB, so I get an “out of heap memory” error at run time.

I tried tuning every parameter in the TFLite converter, but the model size never decreased.
I need help: how can I decrease the model size so it fits in RAM?
Have you tried another TFLite image classification model other than person_detection, and if so, what procedure did you follow?


Thanks


Side notes to be considered:
1) When I use mobilenetv1_025_128 without quantization and then quantize it with TF, the model is still the same size (see the sketch after these notes).
2) When I use a custom CNN model with smaller layers and fewer weights, accuracy drops a lot (so this is not an option).
3) When I follow the official TensorFlow Lite for Microcontrollers documentation, the script never completes because that documentation is badly outdated, starting from the very first command “download_mscoco.sh” (this file has been removed from the TF repo).
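
For what it's worth on note 1: with the TF 2.x converter, the weights only shrink (roughly 4x, float32 to int8) when post-training quantization is actually enabled; a plain conversion emits a float model of about the same size. Below is a minimal sketch, assuming a hypothetical “weevil_model” SavedModel directory and using random data as a stand-in for a real representative dataset:

```python
import numpy as np
import tensorflow as tf

# Placeholder representative dataset: in practice, yield real preprocessed
# 128x128 RGB training images so the quantization ranges are meaningful.
def rep_images():
    for _ in range(100):
        yield [np.random.rand(1, 128, 128, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("weevil_model")  # hypothetical export dir
converter.optimizations = [tf.lite.Optimize.DEFAULT]          # enable post-training quantization
converter.representative_dataset = rep_images                 # required for full-integer quantization
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8                     # integer I/O for microcontroller runtimes
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("weevil_model_quant.tflite", "wb") as f:
    f.write(tflite_model)
print("model size:", len(tflite_model), "bytes")              # int8 weights are ~4x smaller than float32
```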

Hi, don’t load the model into the heap; please use the version that loads the model into the frame buffer, i.e. don’t use the “load” method. I document this pretty clearly in the API docs: the load method, while nice, is basically useless unless the model is really small.

Just do tf.classify("/path_to_model.tflite", …)

This loads the model from disk into the frame buffer on each call, and the frame buffer has enough RAM. It’s slower since the model is loaded from disk per frame… but it’s what will work.
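
A minimal sketch of that pattern, modeled on the person_detection example (the “/weevil.tflite” path, the label order, and the RGB pixel format are assumptions, not something from your post):

```python
import sensor, time, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)   # assumed to match the model's RGB input
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((128, 128))      # crop to the model's 128x128 input
sensor.skip_frames(time=2000)

labels = ["no_weevil", "weevil"]      # assumed label order for the binary classifier

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # tf.classify() reloads "/weevil.tflite" from disk into the frame buffer on every call,
    # so the model never has to fit in the MicroPython heap.
    for obj in tf.classify("/weevil.tflite", img):
        scores = obj.output()         # one score per label
        best = scores.index(max(scores))
        print(labels[best], scores[best], clock.fps())
```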

I’ll add a flag to the load method that will let you take frame buffer space permanently for the model.

Please send Google the note about their procedure being out of date.