Hi OpenMV team,
I’m currently working with your OpenMV RT1062 and can successfully load YOLO models (such as YOLOv8n, YOLOv5nu, and YOLOv8n-cls) with your new firmware. Inference on the image-classification model YOLOv8n-cls, with an output vector of shape [1, 1000], runs fine. However, the object-detection models run out of heap memory. Because of this, I retrained YOLOv8n to output fewer classes and so minimize the output array shape: with two classes the output shape is [1, 6, 8400]; with all 80 classes it is [1, 84, 8400] - both of type int8.
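For concreteness, the raw output-tensor sizes mentioned above work out as follows (plain arithmetic, nothing device-specific; the helper name is just for illustration):

```python
def tensor_bytes(shape, dtype_bytes=1):
    """Bytes needed for a dense tensor; int8 is 1 byte per element."""
    n = 1
    for dim in shape:
        n *= dim
    return n * dtype_bytes

two_class = tensor_bytes((1, 6, 8400))   # 50,400 bytes
full_coco = tensor_bytes((1, 84, 8400))  # 705,600 bytes
```

So even the two-class head is small on paper; the full 80-class head is roughly fourteen times larger.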
I’m running into MemoryErrors when calling net.detect on the object detection models and have the following questions:
- Why do I run out of heap memory when I call net.detect on the smaller model, whose output array ([1, 6, 8400], int8) is only 50,400 bytes, while gc.mem_free() reports 265,184 bytes free at run time?
- Is it possible to use the framebuffer instead of the heap to store inference results, similar to this?:
fb_mem = sensor.alloc_extra_fb(84, 8400, sensor.GRAYSCALE)
fb_mem_ba = fb_mem.bytearray()
fb_mem_ba = net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])
- I suspect the problem with my approach in point 2 comes from how the net.detect source code allocates the output array internally. Where can I find the source code of the TensorFlow library used in the OpenMV firmware, so that I can modify net.classify and net.detect to use the framebuffer instead of the heap and thereby allow larger NN output shapes? I’ve been looking here, but I can’t seem to find the source code. I would greatly appreciate it if you could point me to where I can find it.
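Regarding the first question: one possibility I considered is heap fragmentation, since gc.mem_free() reports the total free bytes, not the largest contiguous block, and inference presumably needs scratch memory beyond the output array itself. A minimal probe I can run on the device to check this (plain Python, no OpenMV-specific calls; the function name, limit, and step are just my own choices):

```python
def largest_contiguous_block(limit, step=4096):
    """Return the largest bytearray size (<= limit) the heap can currently
    hand out in one contiguous allocation, probing downward in `step` bytes.

    On a fragmented heap, this can be far smaller than gc.mem_free(),
    which would explain a MemoryError despite plenty of total free memory.
    """
    size = limit
    while size >= step:
        try:
            buf = bytearray(size)  # one contiguous allocation
            del buf                # release it again immediately
            return size
        except MemoryError:
            size -= step
    return 0
```

On the camera I would call gc.collect() first and then something like largest_contiguous_block(gc.mem_free()) right before net.detect, to see whether a 50,400-byte contiguous block is actually available.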
Please find a simple script attached for reference, if needed.
main_dev_object_detection.py (4.3 KB)
Thanks in advance!
Cheers,
Koray