Deallocate all of the memory allocated on the heap

Hi @kwagyeman, I am hitting roadblocks when I try to use a model after the device has spent a while in deep sleep or light sleep.

At first, I loaded my model once at boot and reused it across iterations, but after a few iterations with sleep in between, I started getting "invoke failed" in my log file (since the IDE disconnects during sleep, I wrote code that appends errors with a datetime to a .csv file). I assumed this was because the garbage collector considered the model object unused and deleted it. To work around this, I started reloading the model in every iteration, like this:
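For context, the logging helper is nothing fancy; a sketch of the kind of thing I'm using (the path and column layout here are illustrative, not my exact code):

```python
import time

LOG_PATH = "/log/errors.csv"   # assumed location on the SD card; adjust as needed

def send_to_log(msg, path=LOG_PATH):
    # Append a timestamped row so errors survive the IDE disconnecting.
    t = time.localtime()
    stamp = "%04d-%02d-%02d %02d:%02d:%02d" % t[:6]
    with open(path, "a") as f:
        f.write("%s,%s\n" % (stamp, msg))
```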

import gc
import uos
import ml

# Model (this is outside the main loop)
def invoke_model():
    # Load into the frame buffer only when the model won't fit on the
    # heap with ~64 KiB of headroom.
    return ml.Model("/model/trained.tflite",
                    load_to_fb=uos.stat('/model/trained.tflite')[6] > (gc.mem_free() - (64*1024)))

# Inference (part of the main loop)
def inference(images):
    send_to_log("Starting inference")                           # Send string to .csv log file
    net = invoke_model()                                        # Load the model
    prob = [None]*len(images)                                   # Empty array to store predictions
    for i in range(len(images)):                                # Predict using loaded model
        predict = list(zip(labels, net.predict([images[i]])[0].flatten().tolist()))
        prob[i] = predict[1][1]
    del net                                                     # Delete model
    return [prob]

This works fine for a few iterations, until I start getting "memory allocation failed, allocating 2561328 bytes". At the end of every iteration I call gc.collect() and print the available memory with gc.mem_free(), which stays at roughly 2740656 bytes across iterations. I assumed gc would reclaim the frame buffer too, but I could be wrong.
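To convince myself that `del net` plus `gc.collect()` really does free heap objects (and that the leak must therefore be somewhere gc doesn't manage, like the frame buffer), I tried a minimal stand-in test. The `Model` class below is a dummy simulating a large heap allocation, not the OpenMV `ml.Model`:

```python
import gc
import weakref

class Model:
    """Stand-in for a large heap-allocated model object."""
    def __init__(self):
        self.weights = bytearray(1024 * 1024)  # simulate a big allocation

net = Model()
ref = weakref.ref(net)   # track the object without keeping it alive

del net                  # drop the only strong reference
gc.collect()             # force a collection pass

print(ref() is None)     # True here: the heap object was reclaimed
```

So the `del` + `gc.collect()` pattern itself behaves as expected on the heap, which makes me suspect the memory loaded via `load_to_fb` is what isn't being returned.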

I also read the docs for clarity: is there a special method to deallocate the frame buffer (fb)?

PC