Hi OpenMV friends, I’m new here.
Is there any way to keep a blob's ID in memory from frame to frame?
Here is the idea: we're trying to use the OpenMV H7 + a Teensy as an HID multitouch system for Windows 10 or native Linux (our favorite choice).
We're able to track an IR pointer as a diffused blob through a flat video-projection screen.
For the moment we only manage to track the position of the IR pointer, so we can move just a single mouse pointer.
We would like to press and hold while the blob is active (i.e. while the light is emitted), like the Wiimote multitouch-screen hack, if you've already seen that.
So there are two ways we can do it:
- either compute a lot on the Teensy, doing the math to track blobs from frame to frame: the hard way
- or track the blobs appearing and disappearing on the OpenMV Cam itself (and implement a cleaner Reactivision/TUIO-like protocol): the easy way, if it exists
Any help on this ?
Thank you so much in advance
Yes, but you have to write Python code to do it… This isn't a vision processing thing. It's called object tracking.
Basically, find_blobs() is something called perception: it's just perceiving the world. You then need some type of controller code that holds the state of the world. What you want to do is update that state of the world with the new state you see from find_blobs().
So, given a new list from find_blobs(), you'd try to match that new list of objects against the list of blobs you already have from the last frame. If you see a new blob for multiple frames in a row, then you might decide the detection is not noise and consider it a new blob. Similarly, if you can't match a blob you already know about to any of the blobs you currently see, you'd consider that blob to have been removed.
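The confirm/track/remove logic above can be sketched in plain Python. Everything here is a hypothetical illustration, not OpenMV API: `Track`, `update_tracks`, and the tunables (`CONFIRM_FRAMES`, `MISS_FRAMES`, `MATCH_DIST`) are names I made up, and detections are assumed to be `(cx, cy)` centroids pulled from the blobs find_blobs() returns.

```python
import math

CONFIRM_FRAMES = 3   # consecutive detections before a track counts as "real"
MISS_FRAMES = 3      # consecutive misses before a track is dropped
MATCH_DIST = 30.0    # max centroid distance (pixels) to accept a match

class Track:
    _next_id = 0

    def __init__(self, cx, cy):
        self.id = Track._next_id      # stable ID that survives frame to frame
        Track._next_id += 1
        self.cx, self.cy = cx, cy
        self.hits = 1                 # frames this blob has been seen
        self.misses = 0               # consecutive frames it was not seen

    @property
    def confirmed(self):
        return self.hits >= CONFIRM_FRAMES

def update_tracks(tracks, detections):
    """Match new (cx, cy) detections to existing tracks, greedily by
    nearest centroid; age out tracks that stop being detected."""
    unmatched = list(detections)
    for t in tracks:
        best, best_d = None, MATCH_DIST
        for d in unmatched:
            dist = math.hypot(d[0] - t.cx, d[1] - t.cy)
            if dist < best_d:
                best, best_d = d, dist
        if best is not None:
            unmatched.remove(best)
            t.cx, t.cy = best
            t.hits += 1
            t.misses = 0
        else:
            t.misses += 1
    # Drop tracks missed too long; start tentative tracks for the rest.
    tracks = [t for t in tracks if t.misses < MISS_FRAMES]
    tracks += [Track(cx, cy) for (cx, cy) in unmatched]
    return tracks
```

A confirmed track's `id` is exactly the stable blob ID the question asks for: it only changes when the light goes out long enough for the track to be dropped, which is your press-and-release event.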
For matching blobs you'd use their features, remembering that blobs can't change drastically from frame to frame if sampled fast enough. Given this, you'd expect each blob you detect to be very similar to one of the blobs in your tracked-blobs list. So, you'd write a function to compute something called a feature vector for each blob; blobs with similar feature vectors are the same blob. You can use the L2 norm (i.e. the Euclidean distance between the vectors) to compute a similarity score. The feature vector would encode CX, CY, rotation, perimeter, etc.
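A minimal sketch of that feature-vector comparison, assuming each blob is represented as a dict with OpenMV-like keys (`cx`, `cy`, `rotation`, `perimeter`); the weights are hypothetical knobs you'd tune so no single feature dominates the distance:

```python
import math

def feature_vector(blob):
    # Weight the features so position, rotation and size are on
    # comparable scales; these weights are illustrative, not tuned.
    return (blob["cx"],
            blob["cy"],
            10.0 * blob["rotation"],
            0.1 * blob["perimeter"])

def similarity(a, b):
    # L2 norm of the difference vector: 0 for identical blobs,
    # growing as the blobs become less alike.
    return math.sqrt(sum((x - y) ** 2
                         for x, y in zip(feature_vector(a), feature_vector(b))))
```

You'd then pair each detection with the tracked blob that minimizes this score, rejecting pairs whose score exceeds some threshold.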
I'm working on an Arduino interface library for the OpenMV Cam right now that connects to our RPC code. It's a very robust interface library that lets you make remote calls to the OpenMV Cam. A remote call could grab a frame, compute the tracked objects, and then return the result.
The reason I wanted to do it on the Teensy instead of the OpenMV (if it isn't possible on the latter alone) was performance.
Maybe I’m wrong but that’s another topic.
Btw, thank you so much for all the valuable insights. I’m going to dig all the concepts you told me about !
The OpenMV Cam is as fast as the Teensy for all relevant purposes. Yes, it’s a 600 MHz M7 versus a 480 MHz one. But, those extra MHz won’t matter much for the application.
What will matter is that writing that code in C will be much harder than writing it in Python.