Hello, I have been playing around with the OpenMV camera and using it to drive my robot for some time. There are some algorithms I need that are not provided by OpenMV’s image library, so I have implemented a few of the missing pieces and published them, hopefully useful to someone.
It’s called OpenRV (for Robot Vision). It has three main pieces:
Hu moments, to describe and match shapes
Planar homography, to calculate floor positions from image coordinates
Quickshift++, for clustering
Planar homography, I believe, would be useful to a lot of people. Finding real-world distance and position from an image is a common requirement in a lot of projects.
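To make the idea concrete, here is a minimal sketch (not OpenRV's actual API) of what a planar homography does: a 3x3 matrix H maps an image pixel (u, v) to a floor position (x, y) through homogeneous coordinates. The matrices used below are made up for illustration; a real H comes from calibration.

```python
def apply_homography(H, u, v):
    """Map image coordinates (u, v) to floor coordinates (x, y)."""
    # Homogeneous transform: [x', y', w]^T = H @ [u, v, 1]^T
    xp = H[0][0] * u + H[0][1] * v + H[0][2]
    yp = H[1][0] * u + H[1][1] * v + H[1][2]
    w  = H[2][0] * u + H[2][1] * v + H[2][2]
    # Divide by w to come back from homogeneous coordinates
    return xp / w, yp / w
```

With the identity matrix this returns the pixel unchanged; with a calibrated H it returns real-world floor coordinates, which is exactly the "position from an image" use case above.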
I don’t foresee implementing more algorithms in the near future because I am putting down OpenMV for a while. But the source code and documents are there for anyone to use. I hope fewer man-hours will be wasted reinventing the wheel.
Question: find_blobs() implements all the outputs derived from Hu moments. Was this not directly usable by you? I chose to skip implementing Hu moments directly and instead implemented all the features you can get from them, since that’s more straightforward to understand.
Do you mean that the Blob objects returned by find_blobs() capture all the information in the Hu moments? It’s not clear to me how that is the case. A Blob’s roundness(), elongation(), density(), compactness(), etc., can help identify shapes, but for complex shapes they are not nearly as robust as Hu moments.
But I could be wrong. You know more about OpenMV’s image library than I do. Please enlighten me.
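For readers unfamiliar with the distinction: Hu invariants are built from normalized central moments of a region and are invariant to translation, scale, and rotation. Here is an illustrative sketch (not OpenRV's or OpenMV's implementation) computing the first two Hu invariants for a set of (x, y) pixel coordinates of a binary region.

```python
def hu_first_two(pixels):
    """First two Hu moment invariants of a binary region given as (x, y) pairs."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n  # centroid makes the result
    cy = sum(y for _, y in pixels) / n  # translation-invariant

    def mu(p, q):  # central moment mu_pq
        return sum((x - cx) ** p * (y - cy) ** q for x, y in pixels)

    m00 = float(n)  # mu_00 is just the pixel count for a binary region

    def eta(p, q):  # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Rotating or rescaling the region leaves h1 and h2 unchanged, which is what makes the full set of seven invariants a robust shape signature for matching.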
By the way, I certainly don’t mind my library being included in the IDE. Please be aware that it depends on two other libraries I wrote, a vector library called vec and a matrix multiplication and linear solver library called mtx. This dependency is detailed on OpenRV’s github page.
I definitely think planar homography would be useful to a lot of people. The downside is that calibrating for the homography matrix is not for the faint of heart (and it requires a Linux machine with numpy to do SVD). This process is also detailed on OpenRV’s github page.
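For the record, the usual way to do that calibration step is the Direct Linear Transform (DLT): collect four or more image-pixel-to-floor-position correspondences and take the last right-singular vector of the stacked constraint matrix. This is a generic sketch of that technique, not OpenRV's documented procedure; the function name and point values are made up for illustration.

```python
import numpy as np

def estimate_homography(img_pts, floor_pts):
    """DLT: stack two constraint rows per correspondence, solve A h = 0
    via SVD (h is the right-singular vector for the smallest singular value)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, floor_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity so H[2][2] == 1
```

This is the part that needs numpy's SVD, hence the Linux-machine requirement; the resulting 3x3 matrix is then small enough to paste into a script running on the camera.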
Ibrahim wants to include your library, etc. in the examples folder. Regarding Hu moments: find_blobs() calculates all of those internally but then discards the values in favor of the higher-level ones. I can, however, just export the Hu moments from find_blobs(). Please file a bug request for this and I’ll do it. It will be way faster than using get_pixel(): https://github.com/openmv/openmv/blob/master/src/omv/img/blob.c#L609
This just came to mind: I think OpenMV should, by default, include some kind of matrix and vector library, however basic it may be. It doesn’t have to be as powerful as numpy, but it should offer at least basic functionality such as matrix multiplication and a linear solver. That would enable so much more to be done.
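To illustrate how little is actually needed, here is a minimal sketch of those two basics in plain Python lists, so it would run on a microcontroller without numpy. This is not the actual mtx/vec API, just an illustration of the functionality being asked for.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of row lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        # swap in the row with the largest pivot for numerical stability
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):  # eliminate below the pivot
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

Roughly thirty lines covers matrix multiplication and a dense linear solver, which is all many robotics scripts (homographies included) need.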
I am of course thinking about my mtx and vec libraries. You may have some bigger-picture issues to consider. That’s just my suggestion.
But really, how could a machine vision platform be without some matrix and vector operations?
Well, standard users have no clue how to use that kind of code. The point of the system is that you shouldn’t have to write matrix and vector math yourself. Exposing it would be a regression from what we want the product to be: easy.
I’d much prefer all this to be hidden away but I understand power users need it.
Anyway, we just added ulab to the system so full numpy support is now available. Check out the ulab module documentation and grab the latest firmware from GitHub (you have to build it). Then you can leverage that.
Ulab is written by another MicroPython user. It’s got all the matrix stuff implemented in C along with FFT code.
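As a quick sketch of what that enables: recent ulab builds expose a numpy-compatible namespace, so the same script can be prototyped on a desktop against real numpy. The fallback import below is an assumption about your ulab version (older builds used a flat `import ulab` API), and I stick to `dot` and `linalg.inv`, which are among ulab's core linalg functions.

```python
try:
    from ulab import numpy as np   # on the camera (recent ulab builds)
except ImportError:
    import numpy as np             # on a desktop, for prototyping

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.dot(np.linalg.inv(A), b)    # solve A x = b via the inverse
y = np.dot(A, x)                   # matrix-vector multiply; recovers b
```

Inverting A is fine for small systems like this; for larger or ill-conditioned ones a dedicated solver would be preferable if your ulab build provides one.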
The modules we have in scripts/libraries are now frozen (built into the firmware images), so they will be available by default. I noticed we also have umatrix.py and ulinalg.py, and I’ve just added vec, mtx, and openrv. We could really use some examples for openrv, though; if you have the time, please send a PR with an example. A note on calibration: just a thought, but I think we could pre-calibrate all the lenses we have and include the matrices in the examples.
Note/UPDATE: We don’t plan on adding many examples for these modules or documenting them; they’re just there for users who know how to use them.
ulab is an awesome addition. With ulab, I think we can ditch umatrix, ulinalg, vec, mtx, etc., and make ulab the standard on OpenMV. I think OpenMV will benefit from having a single standard way of doing linear algebra.
In fact, I am inclined to make my Planar Homography code based on ulab now. I just don’t know when I will have time to do it.
As for the future of OpenRV:
Hu moments: my implementation will be redundant once they are exposed natively by OpenMV
Planar homography: I want to rebase it on ulab. The homographic transformation itself is simple enough; the hard part is calibrating for the homography matrix
Quickshift++: the odd one out. I don’t see casual users needing it
So, I guess the three pieces of OpenRV will go their separate ways. I will check out ulab on OpenMV first, then port Planar Homography to ulab, probably publishing it as a separate project/module. I will get back to you when I have some tangible progress. Thanks for the attention.