Deallocate all of the memory allocated on the heap

Is there a function in MicroPython that deallocates all the memory on the heap?
I actually want to implement something like the following:

import sensor

while True:
    img = sensor.snapshot()
    # do some manipulation on the image, copy the image, and store some data from it
    # deallocate all the memory assigned to the heap

Is there a way to deallocate the memory that stores the copied image and the data variables?

Snapshot stores the image on a giant stack called the frame buffer. This is managed for you.

You can copy images to the heap if you want. You can’t manually “delete” a Python object, but once nothing references it, the garbage collector will free it. Note, however, that copied images are stored in the frame buffer stack again.
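To make the “you can’t manually delete it, but it will be freed” point concrete, here is a minimal, platform-independent sketch: `del` only removes the name, and the object is reclaimed once it is unreachable and the collector runs (MicroPython exposes the same `gc.collect()` call). The `Blob` class is a hypothetical stand-in for a large heap object such as a copied image, and the `weakref` probe is a CPython-only way to observe the collection.

```python
import gc
import weakref  # used here only to observe collection; not available in MicroPython

class Blob:
    """Hypothetical stand-in for a large heap object such as a copied image."""
    def __init__(self):
        self.data = bytearray(100_000)

blob = Blob()
probe = weakref.ref(blob)   # lets us check whether the object is still alive

del blob       # removes the name; the object becomes unreachable
gc.collect()   # ask the collector to reclaim unreachable objects

print(probe() is None)  # True: the Blob has been freed
```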

Hi @kwagyeman, I am hitting roadblocks when I try to use a model after the device has been in deepsleep/lightsleep for a while.

At first, I loaded my model once at boot and used it multiple times, but after a few iterations with sleep in between, I started getting “invoke failed” in my log file (since the IDE disconnects during sleep, I wrote code that logs errors with a datetime to a .csv file). I assumed this was because the garbage collector considered the object unused and deleted it. To work around this, I started reloading my model on every iteration, like this:

import gc
import uos
import ml

# Model (this is outside the main loop)
def invoke_model():
    # Load to the frame buffer only if the model won't fit in the heap with 64 KiB to spare
    return ml.Model("/model/trained.tflite",
                    load_to_fb=uos.stat('/model/trained.tflite')[6] > (gc.mem_free() - (64 * 1024)))

# Inference (part of the main loop)
def inference(images):
    send_to_log("Starting inference")                           # Send string to .csv log file
    net = invoke_model()                                        # Load the model
    prob = [None] * len(images)                                 # Empty list to store predictions
    for i in range(len(images)):                                # Predict using the loaded model
        predict = list(zip(labels, net.predict([images[i]])[0].flatten().tolist()))
        prob[i] = predict[1][1]                                 # Probability of the second label
    del net                                                     # Delete the model
    return [prob]

This works fine for a few iterations until I start getting “memory allocation failed, allocating 2561328 bytes”. At the end of every iteration I call gc.collect() and print the available memory using gc.mem_free(), which stays at around 2740656 bytes across iterations. I assumed that gc would work for the frame buffer too, but I could be wrong.

I also read the docs for some clarity; is there a special method to deallocate the frame buffer?

PC

Hi, the way you have it coded is fine. Deleting the net object frees the frame buffer stack along with the heap memory it uses. If the model was loaded from flash, then no frame buffer stack would actually have been used.

The memory allocation failure is the TensorFlow tensor arena allocation failing, and/or an allocation failing during invoke.

Large buffers are allocated when the model runs… and even though you can use gc.collect() to clean up the heap, the heap still fragments, which can cause allocations to fail.

You generally don’t want to load the model over and over again. If the heap is large enough, this wouldn’t cause problems… are you on the H7 Plus or the RT1062?
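The load-once advice above can be sketched in plain, hardware-independent Python. `FakeModel` is a hypothetical stand-in for `ml.Model`, with its one big buffer playing the role of the tensor arena: allocating it once outside the loop avoids repeatedly carving a large block out of a fragmenting heap.

```python
import gc

class FakeModel:
    """Hypothetical stand-in for ml.Model: owns one large buffer."""
    def __init__(self):
        self.arena = bytearray(1024 * 1024)  # large allocation, done exactly once

    def predict(self, frame):
        return [v * 2 for v in frame]  # dummy inference

# Load once, outside the main loop, so the big buffer is allocated once and reused.
net = FakeModel()

results = []
for frame in ([1, 2], [3, 4]):          # stands in for the per-snapshot loop
    results.append(net.predict(frame))
    gc.collect()                        # collect small per-iteration garbage;
                                        # the model itself stays resident

print(results)  # [[2, 4], [6, 8]]
```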

…

BTW, please do not comment on a 3 year old thread. This is bad forum etiquette.

Please open a new thread about this topic.
