Nicla Vision OpenMV Firmware not working

Hello,

I am currently trying to get an OpenMV firmware from Edge Impulse to work.

I followed all the instructions from the following tutorial, under the section “Deploying your impulse as an OpenMV firmware”.

After flashing the new firmware (the .bin file from Edge Impulse) onto the Nicla Vision, the blue LED blinks.

But when I try to run the MicroPython script included by Edge Impulse, the following error occurs:

**Exception: Could not find the file** and line 19 of the code is highlighted.

I hope you can help me with this error, because I really don't know what I did wrong. Thank you.

This is the .py script included by Edge Impulse:

# Edge Impulse - OpenMV Image Classification Example

import sensor, image, time, os, tf, uos, gc

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

#net = None
#labels = None

try:
    # Load the model built into the firmware
    labels, net = tf.load_builtin_model('trained')
except Exception as e:
    raise Exception(e)                         # <-- highlighted line


clock = time.clock()
while(True):
    clock.tick()

    img = sensor.snapshot()

    # default settings just do one detection... change them to search the image...
    for obj in net.classify(img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
        print("**********\nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        for i in range(len(predictions_list)):
            print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))

    print(clock.fps(), "fps")

Hi, it should work fine. Can you post the zip file from Edge Impulse? I can test it.

Hi @kwagyeman
Thank you for your answer.
I can't attach a file because I'm a new user, but I uploaded the zip file to Dropbox; I hope that's OK.
I used the Nicla Vision .bin file because I'm working with the Nicla Vision.

Here is the link to the file:

Hi, I got the same error. Then I tried generating a model myself, and I got this error:

Scheduling job in cluster...
Container image pulled!
Job started
Copy image classification example
Building firmware...
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/lib/libtf/libtf_builtin_models.c
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/lib/libtf/libtf_builtin_models.h
Use make V=1 or set BUILD_VERBOSE in your environment to increase build verbosity.
Including User C Module from /app/openmv/src/omv/modules
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/micropython/genhdr/mpversion.h
CC /app/openmv/src/omv/modules/py_tf.c
CC ../../py/modsys.c
CC ../../extmod/moduos.c
CC ../../extmod/moduplatform.c
CC ../../shared/runtime/pyexec.c
/opt/gcc/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: /app/openmv/src/ARDUINO_NICLA_VISION_build/bin/firmware.elf section `.text' will not fit in region `FLASH_TEXT'
/opt/gcc/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: region `FLASH_TEXT' overflowed by 648032 bytes
collect2: error: ld returned 1 exit status
make: *** [omv/ports/stm32/omv_portconfig.mk:676: firmware] Error 1
Building firmware OK

Copying artefacts...
Copying artefacts OK

Note that the build actually failed, but Edge Impulse still gave me a firmware to download. I think they are patching our release binaries.

I’ll bring this up with them: they need to fail the build if the model doesn’t fit.

Anyway, as for resolving this issue… you need to reduce the model complexity. The number of weights is too large. E.g., my model needed 648032 bytes more than what was available (which is around 300000 bytes).

Normally you can deploy the model as a .tflite file that is loaded at run time into SDRAM. However, since the Nicla doesn't have SDRAM, you're limited to whatever model size you can bake into its free flash.
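As a rough sketch of the two loading paths (the tf.load call and its load_to_fb parameter follow the stock OpenMV example; treat the exact arguments as an assumption for your firmware version):

import tf

# Boards with SDRAM: load the .tflite from the filesystem at run time.
# load_to_fb=True is assumed here to place the model outside the MicroPython heap.
# net = tf.load("trained.tflite", load_to_fb=True)

# Nicla Vision (no SDRAM): the model must be compiled into the firmware
# and loaded by name, as in the script above.
labels, net = tf.load_builtin_model('trained')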

Hi, thank you for the answer.
I tried to make the model smaller. According to Edge Impulse, the model only takes 98.4K of flash. But I still got the error. I don't know how I could shrink the model any further without losing too much accuracy.
With this change the accuracy already dropped from 90% to 70%…

Is the model still too big for the available flash, or is there maybe another mistake on my side?

Can you post the text from the build process? How much did the firmware overflow by?

Hi, this is the output from the build process. It overflowed by 6904 bytes.

Creating job... OK (ID: 15055450)

Scheduling job in cluster...
Container image pulled!
Job started
Calculating arena size for "Transfer learning"...
Scheduling job in cluster...
Container image pulled!
Job started
Calculating arena size for "Transfer learning" OK
Scheduling job in cluster...
Container image pulled!
Job started
Exporting TensorFlow Lite model...
Found operators ['Conv2D', 'DepthwiseConv2D', 'FullyConnected', 'Softmax', 'Pad']
Exporting TensorFlow Lite model OK

Removing clutter...
Removing clutter OK

Copying output...
Copying output OK

Scheduling job in cluster...
Container image pulled!
Job started
Copying output...
Copying output OK

Copy image classification example
Building firmware...
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/lib/libtf/libtf_builtin_models.c
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/lib/libtf/libtf_builtin_models.h
Use make V=1 or set BUILD_VERBOSE in your environment to increase build verbosity.
Including User C Module from /app/openmv/src/omv/modules
GEN /app/openmv/src/ARDUINO_NICLA_VISION_build/micropython/genhdr/mpversion.h
CC /app/openmv/src/omv/modules/py_tf.c
CC ../../py/modsys.c
CC ../../extmod/moduos.c
CC ../../extmod/moduplatform.c
CC ../../shared/runtime/pyexec.c
/opt/gcc/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: /app/openmv/src/ARDUINO_NICLA_VISION_build/bin/firmware.elf section `.text' will not fit in region `FLASH_TEXT'
/opt/gcc/gcc-arm-none-eabi-10-2020-q4-major/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/bin/ld: region `FLASH_TEXT' overflowed by 6904 bytes
collect2: error: ld returned 1 exit status
make: *** [omv/ports/stm32/omv_portconfig.mk:676: firmware] Error 1
Building firmware OK

Copying artefacts...
Copying artefacts OK

Job completed

Hi, sorry for the late response. Forgot to answer you as I was traveling.

Do this:

  1. Fork our repo on GitHub: openmv/openmv: OpenMV Camera Module (github.com)
  2. Enable GitHub Actions on your fork. This will build the firmware in the cloud.
  3. Drop your labels file and .tflite file here: openmv/src/lib/libtf/models at master · openmv/openmv (github.com)
  4. Delete the fomo_face_detection model and labels.

Once you make these edits, the firmware will automatically build in the cloud on your fork. If everything goes well, you'll have a development release containing your new firmware; just download and run it. The model will be named after whatever file name you give it when you drop it into that folder.

e.g. tf.load_builtin_model('fomo_face_detection')
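For example, if you dropped a file named my_model.tflite (a hypothetical name) plus its labels file into that folder, usage would look roughly like this:

import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)

# 'my_model' is whatever base name you gave your .tflite file in the models folder
labels, net = tf.load_builtin_model('my_model')

img = sensor.snapshot()
for obj in net.classify(img):
    print(list(zip(labels, obj.output())))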

Deleting the fomo_face_detection model from the firmware saves 55KB, which is already more than the 6904 bytes your build overflowed by. If you need to save more space, go to this file and disable features:

openmv/src/omv/boards/ARDUINO_NICLA_VISION/imlib_config.h at master · openmv/openmv (github.com)

E.g. comment out AprilTags, QR codes, etc.

Regarding why this is so complicated: on our SDRAM boards you can just load the model from disk, and we have 30MB+ to store it. However, on boards without SDRAM you have to store it in the program flash… which means your model has to fit.

This is better than trying to load it at run time, given the limited 512KB of RAM where the model would have to share space with the frame buffer, its own output when run, etc.
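If you want a feel for how tight that is, you can check the free MicroPython heap at the REPL (note that gc.mem_free() only reports the heap, not total RAM, so this is just an illustration):

import gc

gc.collect()                # reclaim garbage first for an accurate reading
print(gc.mem_free(), "bytes free on the MicroPython heap")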

Hi @kwagyeman Thank you for the response. I found some good settings for the model in Edge Impulse with the model still fitting in the flash. I also tried your solution and it works now. Thank you very much for the good support from your side! :slight_smile: