Hello,
I’m working on a project with the OpenMV4 Cam H7 Plus, and I would like to access its 32MB of SDRAM so I can upload an object detection model larger than 400KB. I’ve trained models using FOMO and have already had success uploading smaller models onto this board. Now I would like to use a larger model, but I’ve had no luck compiling custom firmware that lets me actually use the SDRAM. I’ve downloaded the openmv development files from GitHub, changed/added code, attempted to flash the board with my firmware, etc., and nothing has worked. At this point I’m just confused about whether these are even the correct steps and whether this is what I should be doing at all. I’d be grateful for any advice, thanks.
Hi, the default firmware for the H7 Plus has a heap of 4MB: openmv/src/omv/boards/OPENMV4P/omv_boardconfig.h at master · openmv/openmv · GitHub
You can also load the model itself into the SDRAM frame buffer stack, which is 11MB, using Model(load_to_fb=True).
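For example, something along these lines (the model path and file name are just placeholders for your own model):

import ml

# load_to_fb=True places the model data in the frame buffer stack (SDRAM)
# instead of the 4MB MicroPython heap
net = ml.Model("/sdcard/your_model_int8.tflite", load_to_fb=True)
print(net)  # prints basic model info if loading succeeded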
Please use the default H7 Plus firmware. You can install the latest from the IDE.
Thank you for your quick response. I reinstalled the latest firmware. I’m using this line of code: net = ml.Model("/sdcard/best_int8.tflite", load_to_fb=True), which I’ve been using with the smaller models I’ve uploaded. However, I’m still not able to allocate the memory for even a 3MB model (best_int8.tflite). Does the use of an SD card have anything to do with it? Or is the line of code I’m using incorrect? Thanks.
How many megabytes does your model need for the tensor arena? The H7 Plus has 4MB for the heap. If the call is failing, it’s because your model uses more than that.
The model itself will not use up that 4MB when load_to_fb=True is set.
I’m not sure, do you know of a way to test this? I know for a fact my model is small enough; I’ve tested with an even smaller model (640KB) and am still getting a memory error. I’ve also run print("Free mem before model:", gc.mem_free()) with the new firmware and am now seeing the 4MB.
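Here is roughly the snippet I’m running to check (file names are specific to my setup):

import gc
import ml

gc.collect()
print("Free mem before model:", gc.mem_free())
net = ml.Model("/sdcard/best_int8.tflite", load_to_fb=True)  # this is the call that fails
print("Free mem after model:", gc.mem_free())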
Post the model for me to try?
fomo_model_quantized.zip (506.4 KB)
Let me know if that works, thank you!
Yeah, it’s an error with your model:
tflm_backend: tensorflow/lite/micro/kernels/cmsis_nn/conv.cc Hybrid models are not supported on TFLite Micro.
tflm_backend: Node CONV_2D (number 13f) failed to prepare with status 1
I had to compile with the version of our tflm that has logs enabled to see this.
ARM Vela doesn’t like it either:
vela.zip (2.9 KB)
I see. So does this mean my model is mixing quantized and float tensors? Is that what’s meant by a hybrid model?
Yes, you need to get all of the layers to be int8 quantized.
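If you’re re-exporting the model yourself, a sketch along these lines with the TensorFlow Lite converter will force full integer quantization; the saved model directory, input shape, and representative dataset below are placeholders for whatever your own FOMO training pipeline produces:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a few hundred samples shaped like your real model input
    # (1, 96, 96, 1 here is only an example shape).
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 ops so no float/hybrid layers remain.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("best_int8.tflite", "wb") as f:
    f.write(converter.convert())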