Firmware build error: region `CCM' overflowed by 72 bytes

Good day,

I recently received my OpenMV M7 cam, which is an amazing device. I am going to use the cam mainly for edge detection. This part works very well on the M7.

For now I use the V2.5 firmware, and edge detection is doing a great job. The next thing I would like to do is receive all edge (white) pixel positions in array/list form, without having to iterate through the full image to find these white pixels, since the find_edge method has probably already done this.

To get this done, I am trying to edit the firmware, but when I start building, I run into the following error:

    /usr/bin/../lib/gcc/arm-none-eabi/6.3.1/../../../../arm-none-eabi/bin/ld: /home/jangerrit/Documents/Firmware/openmv/src/../firmware/OPENMV3/firmware.elf section `._stack' will not fit in region `CCM'
    /usr/bin/../lib/gcc/arm-none-eabi/6.3.1/../../../../arm-none-eabi/bin/ld: region `CCM' overflowed by 72 bytes
    Makefile:387: recipe for target 'firmware' failed
    collect2: error: ld returned 1 exit status
    make: *** [firmware] Error 1
    11:00:32: The process "/usr/bin/make" exited with code 2.
    Error while building/deploying project openmv (kit: Desktop Qt 5.9.1 GCC 64bit)
    When executing step "Make"
    11:00:33: Elapsed time: 01:18.

Do you know how I can solve this problem?


That means you are out of .data or .bss space. You need to remove whatever variables you added that are static.

Thank you!

I have managed to rebuild the full repository and cleaned up the code. It is working now.

Another thing:

I am trying to detect a white board inside an image. First I start with edge detection, which works very well for getting the contours of the board.
When I try to find blobs with the right thresholds, it works great as well. But sometimes, when moving the cam faster, pixels fall away from the edges or corners and the blob detection disappears for a short time (milliseconds).

Is there any way to make sure that, for example, a pre-defined (rectangular) contour will always be detectable, even when the blob region breaks up sometimes?

Hope to hear from you.

You should add filtering to the algorithm output for these cases. The camera is just a sensor and can be affected by noise. You should filter the output of the algorithm in Python to clean up the results. In Python it’s easy to do a sliding-window average, since you can manipulate lists without much effort; a FIR filter is easy too. Finally, if you have a model for how the movement works, you can use a Kalman filter.
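For instance, a short FIR filter over the last few blob positions might look like the sketch below. The tap weights here are made up purely for illustration (newer readings get more weight); they are not tuned values:

```python
# Sketch of a short FIR filter over recent blob positions.
# Taps are illustrative, not tuned; they sum to 1 so the output is unbiased.
taps = [0.5, 0.3, 0.2]

history = []  # most recent reading first

def fir_filter(new_reading):
    """Push a new reading and return the FIR-filtered value."""
    history.insert(0, new_reading)
    if len(history) > len(taps):
        history.pop()  # drop the oldest reading
    # Weighted sum of the readings we have so far, renormalized
    # while the history is still shorter than the filter.
    used = taps[:len(history)]
    return sum(t * h for t, h in zip(used, history)) / sum(used)
```

Each new reading shifts the window along, so a single bad frame only contributes one (down-weighted) tap to the output.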

Um, can you predict the next position of the white rectangle given previous positions? If so a Kalman filter will really help fill in the gaps. Basically you’d predict the next position and then mix in the new reading of the rectangle weighted by how good you think your prediction is and how good you think the sensor reading is.

Thanks for your fast reply.

Right, thanks for pointing me in the right direction. I am not fully into machine vision yet, but I have seen and read about CamShift, mean shift, and Kalman filters.

For the prediction, do you mean something like frame differencing, or for example the use of an external sensor such as a fast gyroscope? (The cam will be attached to a person's head or upper body.)
I am very interested in the theory. Could you maybe help me out with a (pseudo)code-based example, or with some references to code-based examples of mean-shift or Kalman filters, in relation to the M7 cam?

To access the image source, would you, with processing speed in mind, rather use MicroPython or C to generate a list and implement such an algorithm?

You can do everything in Python.

I don’t know how you’d implement the Kalman filter for your application. But a simple moving average makes it easy to filter out gaps:

    average.append(new_reading)           # new_reading is the latest measurement
    if len(average) > 10: average.pop(0)  # drop the oldest value once the window is full
    t = sum(average) / len(average)       # windowed average

So, “average” is a list in Python. You append a new value to it every time you have a new reading. Then, if the list is larger than the window size (10 in the code above), you remove the oldest value. Finally, the output is the sum of all values divided by the length of the list.
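Put together, a minimal self-contained version of that sliding-window average looks like this (window size 10 as above; the `readings` list is just made-up sample data with one dropout-like spike, not real blob output):

```python
WINDOW = 10
average = []  # holds the most recent readings

def filtered(new_reading):
    """Append the new reading and return the windowed average."""
    average.append(new_reading)
    if len(average) > WINDOW:
        average.pop(0)  # drop the oldest reading
    return sum(average) / len(average)

# Made-up blob x-positions; the 140 simulates one noisy frame.
readings = [100, 101, 99, 100, 140, 100, 101]
smoothed = [filtered(r) for r in readings]
```

The spike still pulls the average up a little, but it is spread over the whole window instead of showing up as a sudden jump, which is usually enough to bridge short dropouts.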

Thank you.