Enhanced Optical Flow

I’m new to OpenMV and computer vision; we have two Cam M7 modules.

I’m trying to use the Cam M7 to track the position of a robot relative to its starting position. The plan is to point the module at the carpet and use it just like an optical mouse. However, the robot can and will turn.

“image.find_displacement(template)” returns dX, dY, and confidence. Is there any way to add rotation to this? I’m worried that as the robot turns, the position will be lost. I do have a reliable heading sensor (9DoF motion processor). However, as the heading and displacement are read at different rates, errors will occur.

Or is there an example of how to do this? I’ve been looking for a month.

Any advice/help is appreciated. Thanks

Hi, this is on the list of things I want to do. Basically, you have to convert the image to the log-polar domain; then you can see rotation and scale changes, but you lose the ability to see x and y changes.

So, this feature will require you to either take a frame-rate drop if you want to do both per frame, or alternate between modes every other frame.

Unfortunately, the way the math works, it’s either x/y translation or z-rotation/scale.

I might be able to get to this before the next release. It’s far back on my priority list right now.

So I can do both?

I’ll be doing this on a Cam M7. What kind of cycle rate can I assume if I do both algorithms per image? What is the minimum distance this camera can work at?

My current thought on the algorithm is:

  1. grab image
  2. find x/y displacement
  3. find rotation
  4. do math to find x, y, and heading relative to the starting position (rough sketch below)
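
For step 4, my rough plan for the math (just a sketch; heading is in radians from the 9DoF, and the names are placeholders):

```python
import math

x = y = 0.0  # position relative to the starting point

def update(dx, dy, heading):
    # Rotate the camera-frame displacement into the field frame,
    # then integrate it into the running position estimate.
    global x, y
    x += dx * math.cos(heading) - dy * math.sin(heading)
    y += dx * math.sin(heading) + dy * math.cos(heading)
```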


Thanks,
Jim

I don’t know the cycle time right now; I still have to implement the log-polar stuff. Um, I can fast-track this for you, though, since it’s something I’ve been wanting to take a crack at for a while. Maybe I’ll do that this weekend. The effort isn’t that high, really.

Um, anyway, it’s effectively the cost of doing find_displacement() twice, so you can benchmark with that on the M7.

I’ll just add a keyword argument to find_displacement(), and it will then output scale and rotation changes instead of translation changes. Very simple.
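
Something like this (just a sketch; the keyword name and the rotation()/scale() accessors are placeholders, not final):

```python
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)
sensor.skip_frames(time=2000)

template = sensor.snapshot().copy()  # reference frame

img = sensor.snapshot()
d = img.find_displacement(template)                 # x/y translation, as today
r = img.find_displacement(template, logpolar=True)  # hypothetical: rotation/scale
```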

You sir, are awesome!

Hi, I’ve been working on the optical flow stuff all weekend and majorly overhauled everything. I’ve got translation optical flow working at 43 FPS on a 64x64 image. Let me know if you want the code and scripts right now for that.

As for rotation/scale… I put the code in for that, but after reading some papers I realized that I did it in a rather weak way. So, I need to go back and redo the code somewhat.

Basically, I put this in:

1. Take logpolar() of image1/image2.
2. Take the FFT of both images above.
3. Compute the cross power spectrum.
4. Take the inverse FFT.
5. Find the peak.

This works for finding rotation/scale changes between two images as long as they have no translation…
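
For reference, the phase-correlation core (steps 2 through 5 above) looks like this as a desktop NumPy sketch; it illustrates the math only and isn’t the firmware code:

```python
import numpy as np

def phase_correlate(a, b):
    # FFT both images and form the normalized cross power spectrum;
    # the peak of its inverse FFT gives the shift between them.
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    corr = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-10)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets past half the image size wrap around to negative shifts.
    dy, dx = (p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
    return dx, dy
```

Applied directly to two images, the peak gives the x/y shift; applied to their log-polar transforms, it gives rotation/scale instead.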

However, you can do better:

  1. Take the FFT of image1/image2.
  2. Get the magnitude of both FFTs (this tosses the translation info).
  3. Log-polar convert the two FFT magnitudes.
  4. Take the FFT of both log-polar magnitude images.
  5. Compute the cross power spectrum.
  6. Take the inverse FFT.
  7. Find the peak.

This gives you the rotation/scale without translation affecting the output. However, it also requires two more FFTs… anyway, it will be for the best.
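
Here’s the improved pipeline as a desktop sketch, reusing phase_correlate() from the snippet above; OpenCV’s warpPolar stands in for the firmware’s logpolar() here:

```python
import cv2
import numpy as np

def rotation_scale(a, b):
    # Steps 1-2: FFT magnitudes. Translation lives in the phase, so taking
    # the magnitude discards it; fftshift centers DC for the polar warp.
    m1 = np.abs(np.fft.fftshift(np.fft.fft2(a)))
    m2 = np.abs(np.fft.fftshift(np.fft.fft2(b)))
    # Step 3: log-polar warp. Rotation becomes a vertical shift and
    # scale becomes a horizontal (log-radius) shift.
    h, w = a.shape
    flags = cv2.INTER_LINEAR | cv2.WARP_POLAR_LOG
    lp1 = cv2.warpPolar(m1.astype(np.float32), (w, h), (w / 2, h / 2), w / 2, flags)
    lp2 = cv2.warpPolar(m2.astype(np.float32), (w, h), (w / 2, h / 2), w / 2, flags)
    # Steps 4-7: phase-correlate the two log-polar magnitude images.
    dx, dy = phase_correlate(lp1, lp2)
    # Map the peak offsets back to an angle (degrees) and a scale factor.
    return dy * 360.0 / h, np.exp(dx * np.log(w / 2) / w)
```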

I’ll also add some scripts showing how to then take the above info… de-rotate and de-scale the secondary image… and run the regular translation detection method to extract translation without having to worry about scale and rotation.
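
Roughly, those scripts will do something like this (again building on the sketches above; the sign conventions depend on which image you treat as the reference):

```python
import cv2

def register(a, b):
    # Measure rotation/scale first, undo them on the second image,
    # then plain phase correlation recovers the translation.
    rot, scale = rotation_scale(a, b)
    h, w = a.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), -rot, 1.0 / scale)
    b_fixed = cv2.warpAffine(b, M, (w, h))
    dx, dy = phase_correlate(a, b_fixed)
    return dx, dy, rot, scale
```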

That said, expect this all to run at like 10 FPS on the M7. However, at a 32x32 image we may be able to hit 20 FPS. With the H7 coming later this year we’ll be able to get much higher speeds (and global shutter imaging).

This is actually kind of exciting. Having a script that does the full pipeline is really powerful; no one really has anything like this available. While 32x32 or even 16x16 may sound like a low resolution, you don’t need many pixels for this algorithm to work well if you’re doing differential estimation between the current and previous image. You just need a lot of speed.

You’re awesome and correct. No one has an optical flow algorithm that handles rotation without a gyro.

I won’t even pretend to claim I understand your approach. But I am massively impressed.

I would love the code.

If it can get accurate readings with 16x16, I’ll use it. The faster it runs, the faster the robot can move.

The FRC game was released Saturday and there is a ton of emphasis on the 15-second autonomous period this year. If your company could offer a solution to position tracking that high school kids can use, you are looking at demand going way up.

I will make sure our marketing team talks about our use of your product on our robot and list you as a mentor for the team. There are over 4000 FRC teams. I’m pretty sure you could get a large chunk of that market with this capability alone.

I have two objectives for our two Cam M7s for these 6 weeks: (1) track position and (2) guide the robot to collect the bright yellow cubes (well, nearly cubes).

I really appreciate all you are doing to help us out. I will do my best to return the favor.

Thanks so much for the informative thread! It’s essential for new users!

Hi, here’s what I’ve got so far.

The code right now just does either translation or rotation optical flow, not the combined version yet. Combining will, however, be made available through the “fix_rotation_scale=True” argument to find_displacement(). (Note that the not-really-working code is in there, so don’t pass that argument just yet.)

You can get 30 FPS with a 64x64 image and 42 FPS with a 64x32 image. This is highly usable on the OpenMV Cam M7.
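
A translation-only test loop looks roughly like this (sketch; the accessor names on the result object may change before release):

```python
import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.set_framesize(sensor.B64X64)  # 64x64 for speed
sensor.skip_frames(time=2000)

# Keep the previous frame around in an extra frame buffer.
extra_fb = sensor.alloc_extra_fb(sensor.width(), sensor.height(), sensor.GRAYSCALE)
extra_fb.replace(sensor.snapshot())

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    d = img.find_displacement(extra_fb)  # differential flow vs. last frame
    extra_fb.replace(img)                # current frame becomes "previous"
    if d.response() > 0.1:               # low response = unreliable match
        print("dx: %0.2f dy: %0.2f fps: %0.2f"
              % (d.x_translation(), d.y_translation(), clock.fps()))
```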

The fully corrected image will hit about 15 FPS at 64x64. That said, I’m still struggling to get this working. I don’t have an ETA, but I will keep working on it. I need to pivot to other things in the meantime, though.
22-Optical-Flow.zip (2.11 MB)