I am currently trying to get an OpenMV firmware from Edge Impulse to work.
I followed all instructions from the following tutorial under the section “Deploying your impulse as an OpenMV firmware”.
After flashing the new firmware (the .bin file from Edge Impulse) onto the Nicla Vision, the blue LED blinks.
But when I try to run the MicroPython script included with Edge Impulse, the following error occurs:
**Exception: Could not find the file** and line 19 of the code is highlighted.
I hope you can help me with this error, because I really don't know what I did wrong. Thank you.
This is the .py script included with Edge Impulse:
# Edge Impulse - OpenMV Image Classification Example

import sensor, image, time, os, tf, uos, gc

sensor.reset()                       # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)  # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)    # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))     # Set 240x240 window.
sensor.skip_frames(time=2000)        # Let the camera adjust.

#net = None
#labels = None

try:
    # Load built-in model
    labels, net = tf.load_builtin_model('trained')
    found = False
except Exception as e:
    raise Exception(e)  # <-- Highlighted line

clock = time.clock()

while(True):
    clock.tick()

    img = sensor.snapshot()

    # default settings just do one detection... change them to search the image...
    for obj in net.classify(img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
        print("**********\nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        img.draw_rectangle(obj.rect())

        # This combines the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        for i in range(len(predictions_list)):
            print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))

    print(clock.fps(), "fps")
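The label/score pairing in the classify loop can be reproduced in plain Python without a camera; the `labels` list and the `outputs` scores below are stand-ins for what `tf.load_builtin_model()` and `obj.output()` would return on the device:

```python
# Stand-ins for the labels returned by tf.load_builtin_model()
# and the per-class confidence scores from obj.output().
labels = ["background", "cat", "dog"]
outputs = [0.05, 0.80, 0.15]

# Same pairing as in the script: one (label, confidence) tuple per class.
predictions_list = list(zip(labels, outputs))

for label, confidence in predictions_list:
    print("%s = %f" % (label, confidence))
```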
Hi @kwagyeman
Thank you for your answer.
I can't attach a file because I'm a new user, but I uploaded the zip file to Dropbox; I hope that's OK.
I used the Nicla Vision .bin file because I'm working with the Nicla Vision.
Hi, I got the same error. Then I tried generating a model myself, and I got this error:
Scheduling job in cluster...
Container image pulled!
Job started
Copy image classification example
Building firmware...
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/lib/libtf/libtf_builtin_models.c
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/lib/libtf/libtf_builtin_models.h
Use make V=1 or set BUILD_VERBOSE in your environment to increase build verbosity.
Including User C Module from /app/openmv/src/omv/modules
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/micropython/genhdr/mpversion.h
CC /app/openmv/src/omv/modules/py_tf.c
CC ../../py/modsys.c
CC ../../extmod/moduos.c
CC ../../extmod/moduplatform.c
CC ../../shared/runtime/pyexec.c
/opt/gcc/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: /app/openmv/src/ARDUINO_NICLA_VISION_build/bin/firmware.elf section `.text' will not fit in region `FLASH_TEXT'
/opt/gcc/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: region `FLASH_TEXT' overflowed by 648032 bytes
collect2: error: ld returned 1 exit status
make: *** [omv/ports/stm32/omv_portconfig.mk:676: firmware] Error 1
Building firmware OK
Copying artefacts...
Copying artefacts OK
Note that the build actually failed, but Edge Impulse still gave me a firmware to download. I think they are patching our release binaries.
I’ll bring this up with them that they need to fail the build if the model doesn’t fit.
Anyway, as for resolving this issue… you need to reduce the model complexity; it has too many weights. E.g. mine overflowed the available flash (which is around 300000 bytes) by 648032 bytes.
Normally you can deploy the model as a .tflite file that is loaded into SDRAM at run time. However, since the Nicla doesn't have SDRAM, you're limited to a model size that can be baked into its free flash.
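As a sanity check before building, you can compare the model size against the flash left over after the base firmware. The figures below are illustrative assumptions based on the numbers in this thread, not exact Nicla Vision values:

```python
def flash_overflow(model_bytes, free_flash_bytes):
    """Return how many bytes the model overflows the free flash by
    (0 means it fits)."""
    return max(0, model_bytes - free_flash_bytes)

# Illustrative: roughly 300 KB of flash free for the model.
FREE_FLASH = 300_000

print(flash_overflow(948_032, FREE_FLASH))  # 648032 -> far too large
print(flash_overflow(98_400, FREE_FLASH))   # 0 -> this model would fit
```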
Hi, thank you for the answer.
I tried to make the model smaller. According to Edge Impulse, the model only takes 98.4K of flash. But I still got the error. I don't know how I should shrink the model any further without losing too much accuracy.
With this change the accuracy already dropped from 90% to 70%…
Is the model still too big for the available flash, or is there maybe another mistake on my side?
Hi, this is the output from the build process. It overflowed by 6904 bytes.
Creating job... OK (ID: 15055450)
Scheduling job in cluster...
Container image pulled!
Job started
Calculating arena size for "Transfer learning"...
Scheduling job in cluster...
Container image pulled!
Job started
Calculating arena size for "Transfer learning" OK
Scheduling job in cluster...
Container image pulled!
Job started
Exporting TensorFlow Lite model...
Found operators ['Conv2D', 'DepthwiseConv2D', 'FullyConnected', 'Softmax', 'Pad']
Exporting TensorFlow Lite model OK
Removing clutter...
Removing clutter OK
Copying output...
Copying output OK
Scheduling job in cluster...
Container image pulled!
Job started
Copying output...
Copying output OK
Copy image classification example
Building firmware...
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/lib/libtf/libtf_builtin_models.c
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/lib/libtf/libtf_builtin_models.h
Use make V=1 or set BUILD_VERBOSE in your environment to increase build verbosity.
Including User C Module from /app/openmv/src/omv/modules
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/micropython/genhdr/mpversion.h
CC /app/openmv/src/omv/modules/py_tf.c
CC ../../py/modsys.c
CC ../../extmod/moduos.c
CC ../../extmod/moduplatform.c
CC ../../shared/runtime/pyexec.c
/opt/gcc/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: /app/openmv/src/ARDUINO_NICLA_VISION_build/bin/firmware.elf section `.text' will not fit in region `FLASH_TEXT'
/opt/gcc/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: region `FLASH_TEXT' overflowed by 6904 bytes
collect2: error: ld returned 1 exit status
make: *** [omv/ports/stm32/omv_portconfig.mk:676: firmware] Error 1
Building firmware OK
Copying artefacts...
Copying artefacts OK
Job completed
Once you do these edits the firmware should automatically build in the cloud on your fork. If everything is good you’ll have a development release with your new firmware. Then just download that and run it. The model will have the name of whatever you call it as a file name when you drop it in that folder.
e.g. load_builtin('fomo_face_detection')
Deleting the fomo model from the firmware saves 55KB. If you need to save more space go to this file and disable stuff:
Regarding why this is so involved: on our SDRAM boards you can just load the model from disk, and we have 30MB+ to store it. However, on boards without SDRAM you have to store it in the program flash… which means your model has to fit.
This is better than trying to load it into the limited 512KB of RAM, where the model would have to share space with the frame buffer, its own output when run, etc.
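To see why loading into RAM is so tight, here is a rough budget sketch. All numbers except the 240x240 RGB565 frame size (2 bytes per pixel, matching the script above) are illustrative assumptions, not exact firmware figures:

```python
# Rough RAM budget for a 512 KB board (illustrative numbers only).
TOTAL_RAM = 512 * 1024

# One 240x240 RGB565 frame buffer: 2 bytes per pixel.
frame_buffer = 240 * 240 * 2   # 115200 bytes

# Hypothetical working-memory (tensor arena) estimate for a small model.
tensor_arena = 150 * 1024

remaining = TOTAL_RAM - frame_buffer - tensor_arena
print(remaining)  # what's left for the heap, the model itself, its output, etc.
```

Even before the model's own weights are counted, well over half the RAM is already spoken for, which is why baking the model into flash is preferred on SDRAM-less boards.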
Hi @kwagyeman Thank you for the response. I found some good settings for the model in Edge Impulse, with the model still fitting in the flash. I also tried your solution and it works now. Thank you very much for the good support from your side!
Hi,
Has anyone found an easy solution to this problem? I tried to shrink the firmware as much as possible (64k), but the error still appears.
I also tried the GitHub solution, but somehow not all jobs complete successfully there (process completed with exit code 2).
Thanks in advance
Hi, we just finished a big refactor of our TensorFlow library, and I'm working on the documentation now. We will release everything this week.
Once that is done, you should be able to run any network, we’ve enabled all operators on every camera. We’ve also increased the heap size on all boards to fit larger models. Finally, you can control which networks get built into the firmware so you can build in the network you like and remove ones you don’t need from the firmware to make space.
Please stay tuned, I’ll be doing a blog post on the main website and adding porting guides on the forums.