findContours

Hi,

I’m the proud owner of an OpenMV Cam and it works great right out of the box!

I noticed that the MicroPython firmware it comes with has functions like dilate, erode, and threshold, but doesn’t have some of the more advanced OpenCV functions like findContours and background subtraction. What are my options for getting those functions? Wait for a port? If so, what is the timeline? Can I port them myself and recompile everything (be honest if this will be next to impossible)?

Thanks


Sean

Actually, I’m working on adding those. Um, if you clone the main repo you can get a lot of fixes. I’ll be merging in my updates to add a bunch of features soon; I just have to test them.

As for doing your own development, it’s… actually very easy. I’ll post an update tonight with all the instructions for this. For now, follow this here: Home · openmv/openmv Wiki · GitHub.

Getting the code building and flashing the camera is a snap. You’re going to be waiting a while for findContours, but we’ll be working on background subtraction next week.

Once I merge my updates into the main branch, feel free to dive into the code and add the findContours function if you want. That’s what’s great about this project: you can merge your update back into the main branch.

If you need help understanding the code, just post here. It’s actually not that complex; it took me about a day to get an idea of what was what.

Hi kwagyeman,

Thanks very much for your quick response. I’m very impressed with this project.


Sean

I was wondering if there is any update on findContours?

Thank you.

Hi, we aren’t working on any new computer vision functionality right now; we’ve been focused on the tool set. Note that we have background subtraction now.

If you could point me in the right direction to begin incorporating this function into OpenMV, I would like to help.
I am just beginning to familiarize myself with your code.

Hi, I would start by implementing the algorithm in omv/img/contours.c.

  • Add function prototypes to imlib.h (see the sketch just below), the source file to src/omv/Makefile (after eye.c), and the object file to src/Makefile (after eye.o).
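
For instance, the prototype in imlib.h might look something like this. The name and signature are just placeholders, not the final API:

```c
// omv/img/imlib.h -- hypothetical prototype for the new function.
// Adjust the signature to whatever your contour finder actually needs.
void imlib_find_contours(image_t *img);
```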

Add MP bindings to omv/py/py_image.c (see how other functions are implemented; for example, search for find_blobs). Basically you need a C function, an MP_DEFINE_CONST_FUN_OBJ_*, and finally an entry in locals_dict_table.

You’ll need to add qstrings to use in your function and the dict_table (say, MP_QSTR_find_contours); those live in omv/py/qstrdefsomv.h.

Make sure to run make clean after adding qstrs.
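
To make the pattern concrete, here’s a rough sketch of what the binding in omv/py/py_image.c could look like. py_image_find_contours and imlib_find_contours are hypothetical names, and the exact helper/macro spellings vary a bit between firmware and MicroPython versions, so treat this as an outline rather than copy-paste code:

```c
// omv/py/py_image.c -- sketch of an MP binding, modeled on the existing methods.
static mp_obj_t py_image_find_contours(mp_obj_t img_obj)
{
    image_t *img = py_image_cobj(img_obj);  // unwrap the MicroPython image object
    imlib_find_contours(img);               // hypothetical C implementation in imlib
    return mp_const_none;                   // or build and return a list of contours
}
static MP_DEFINE_CONST_FUN_OBJ_1(py_image_find_contours_obj, py_image_find_contours);

// ...plus an entry in the image locals dict table:
//   {MP_OBJ_NEW_QSTR(MP_QSTR_find_contours), (mp_obj_t)&py_image_find_contours_obj},

// ...and the new qstring registered in omv/py/qstrdefsomv.h:
//   Q(find_contours)
```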

Other notes:

  • If you need to allocate fast memory, use fb_alloc/fb_free. This allocates memory in the unused framebuffer area; make sure to free it (it’s a stack-like allocation). See the sketch after this list.
  • See imlib.h for util functions.
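
Here’s a minimal sketch of the fb_alloc pattern; the exact signatures may differ slightly between firmware versions, and the buffer use is made up purely for illustration:

```c
#include <stdint.h>
#include <string.h>
#include "imlib.h"     // image_t
#include "fb_alloc.h"  // fb_alloc/fb_free

void example_with_scratch(image_t *img)
{
    // Grab a scratch row buffer from the unused frame buffer area.
    uint8_t *row = fb_alloc(img->w * sizeof(uint8_t));

    memset(row, 0, img->w);  // ...do real work with the buffer here...

    // Allocation is stack-like: free in reverse order of allocation.
    fb_free();
}
```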

Hey

Following up on my notes in the last post, here’s a commit that shows how to add a function to the image library:

Note that you could add your code there for testing, send a PR, and I’ll merge it. Later I can help with moving the code to a separate C file.

Hello,

Thank you for the direction on getting started. I will take a look at that commit.
Hopefully I’ll have enough progress on the code, and time to test, in the next few days.

Hi, I’ve looked around and can’t see background subtraction as a method. Is there something else I should be looking under?

Thanks


Sean

The method is here:

http://docs.openmv.io/library/omv.image.html#image.image.difference

Please see the frame differencing script under the filters examples.

My apologies. I should have seen that. Thanks!

Was wondering what the status was on findContours?

Um, wow, this is an old thread.

Find contours will not be implemented. We have find_blobs() instead; it basically does everything you’d normally use contour finding for: image — machine vision — MicroPython 1.15 documentation → and here’s the blob object: image — machine vision — MicroPython 1.15 documentation

Note that I could make find contours pretty easily, as find_blobs has the pixel positions of the edges of a blob while it’s walking through the blob. However, the list of pixels is really big and you’d run out of RAM on an MCU. OpenCV can offer that function because it expects you to have several megabytes of RAM to save all the pixel locations along the edge of a blob.

:slight_smile: Well, I just got my cam and am getting into things slowly. I will check out the blobs. Thanks for the info. By the way, couldn’t you write the data to the SD card and then do the analysis in chunks? I know it would be slow, but better than nothing. Just thinking out loud. We used to have to do this in the old days, since RAM (yes, RAM) was almost non-existent.

Thanks
Mike

The main point of the OpenMV Cam is use in simple robotics apps. The “goal” is to make it easy for folks to do cool stuff. The OpenCV way of doing things gives you more flexibility, but at the cost of a lot more complexity.

Note that I do use the file system actively for things like frame differencing. In that case I can difference a frame on disk against one in RAM. This involves loading chunks of the image at a time and working on them.
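
Here’s a rough illustration of that chunked approach; this is not the firmware’s actual code, read_stored_rows() is a hypothetical stand-in for the real file I/O, and the image_t layout is simplified:

```c
#include <stdint.h>
#include <stdlib.h>

#define CHUNK_ROWS 16  // process the stored frame a few rows at a time

typedef struct { int w, h; uint8_t *pixels; } image_t;  // simplified layout

// Hypothetical helper: read `rows` grayscale rows starting at row `y`
// of the frame stored on disk into `dst`.
extern void read_stored_rows(uint8_t *dst, int y, int rows, int w);

void difference_with_stored(image_t *live)
{
    uint8_t chunk[CHUNK_ROWS * 320];  // scratch for one QVGA-wide chunk

    for (int y = 0; y < live->h; y += CHUNK_ROWS) {
        int rows = (live->h - y < CHUNK_ROWS) ? (live->h - y) : CHUNK_ROWS;
        read_stored_rows(chunk, y, rows, live->w);

        // Replace the live rows with the absolute difference.
        for (int i = 0; i < rows * live->w; i++) {
            uint8_t *p = &live->pixels[(y * live->w) + i];
            *p = (uint8_t)abs(*p - chunk[i]);
        }
    }
}
```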

Is there already a function corresponding to OpenCV’s findContours?


Cheers
Pei

I have no plans to add that feature; you are free to port the C code for it to the platform if you like. We offer find_blobs(), which finds color blobs. As for finding contours around an object, using a CNN outperforms traditional contour algorithms, so we have no plans to support find_contours() for gesture recognition. Ibrahim found a data set for this and was working on training a CNN to do it.

Great!! How can I get involved?? We really need this function…
Where can I download the dataset???

Pei

A basic getting-started guide is here: https://github.com/openmv/openmv/tree/master/ml/cmsisnn

We’ll be rolling all the docs out soon. I’m just moving from Oakland to SF, so I’m quite busy with other things. In the meantime, focus on training some nets on the PC. Three-layer nets like (Conv → Pool → ReLU) * 3 → IP are good.

http://www.idiap.ch/resource/gestures/

Google for data sets.

I know this is all poorly documented now, but moving forward we’ll be putting our effort into training nets on data sets. Ibrahim and I are both buying serious training rigs to train up lots of nets.