# Equivalent to OpenCV's cv2.warpPerspective?

I’d like to convert an oblique image to one that looks like it was taken from above, which in OpenCV would be done with the cv2.warpPerspective or cv2.perspectiveTransform functions. I can’t find anything quite like that in OpenMV, but is there some combination of transforms that would accomplish the same thing?
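For context, a pure-Python sketch of what those OpenCV calls do under the hood: `cv2.getPerspectiveTransform` solves for the 3×3 homography that maps four source points to four destination points, and `cv2.perspectiveTransform` applies it per point. The helper names below (`find_homography`, `gauss_solve`, `warp_point`) and the example coordinates are made up for illustration, not any existing API.

```python
# Sketch: solve for the 3x3 homography H mapping four source points to four
# destination points -- the same matrix cv2.getPerspectiveTransform() returns.
# Pure Python, no numpy; all helper names here are hypothetical.

def find_homography(src, dst):
    # Build the standard 8x8 linear system A*h = b for the 8 unknown
    # entries of H (H[2][2] is fixed to 1).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = gauss_solve(A, b)
    return [[h[0], h[1], h[2]], [h[3], h[4], h[5]], [h[6], h[7], 1.0]]

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def warp_point(H, x, y):
    # Apply the homography to one point (what cv2.perspectiveTransform does).
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Map a road trapezoid seen by a forward camera onto a rectangle (bird's eye).
src = [(30, 60), (130, 60), (0, 120), (160, 120)]   # trapezoid in the image
dst = [(0, 0), (160, 0), (0, 120), (160, 120)]      # top-down rectangle
H = find_homography(src, dst)
```

Warping the whole image is then just evaluating the inverse mapping at every output pixel, which is exactly what `cv2.warpPerspective` does.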

I don’t think we have anything like that. Um, what’s the use case?

Driving autonomous cars like this. https://wroscoe.github.io/compound-eye-autopilot.html#compound-eye-autopilot

That function turns distorted ground-level camera views into easy-to-parse top-down views.

So this [oblique camera image]:

turns into this [top-down view]:

Okay, so, I could implement a function to do this, but it’s going to kill the frame rate. Not sure if that makes sense for your application. It seems smarter to just deal with the math of the lines you find rather than trying to make them parallel first. My rough guess is that the M7 will only be able to achieve about 10 FPS once you apply a perspective fix like that.

Actually 10 FPS is fine for cars.

The reason to do this is not to make it easier to find the lines, but rather to do this:

“To calculate the steering angle we’ll use a perspective transform to simulate a birds eye view from the tip. This way we can calculate the actual angle of the line relative to the car.”

There are probably other good ways to do that without a perspective transform, but it does neatly handle the problem of lines being both translated (the car is closer to or farther from them) and rotated (the car isn’t running parallel to them), which can make the math tricky otherwise.

I can probably add a general-purpose perspective transform function, then. I’ve seen this code twice now, for QR Codes and AprilTags, so I can just copy it from there. You’d then just supply a sequence of X/Y/Z rotations.
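A sketch of what an X/Y/Z-rotation API like that could compute: for a pure camera rotation, the warp is the homography H = K·R·K⁻¹, where R is built from the three rotation angles and K is a simple pinhole intrinsic matrix. All names and numbers below are assumptions, not OpenMV’s actual API.

```python
# Sketch: build a view-rotation homography H = K * R * K^-1 from X/Y/Z
# angles. rotation_homography() is a hypothetical helper, not a real API.
import math

def matmul(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_homography(x_deg, y_deg, z_deg, f, cx, cy):
    ax, ay, az = (math.radians(a) for a in (x_deg, y_deg, z_deg))
    # Elementary rotations about the camera's X, Y, and Z axes.
    Rx = [[1, 0, 0],
          [0, math.cos(ax), -math.sin(ax)],
          [0, math.sin(ax), math.cos(ax)]]
    Ry = [[math.cos(ay), 0, math.sin(ay)],
          [0, 1, 0],
          [-math.sin(ay), 0, math.cos(ay)]]
    Rz = [[math.cos(az), -math.sin(az), 0],
          [math.sin(az), math.cos(az), 0],
          [0, 0, 1]]
    R = matmul(Rz, matmul(Ry, Rx))
    # Pinhole intrinsics: focal length f, principal point (cx, cy).
    K = [[f, 0, cx], [0, f, cy], [0, 0, 1]]
    Kinv = [[1 / f, 0, -cx / f], [0, 1 / f, -cy / f], [0, 0, 1]]
    return matmul(K, matmul(R, Kinv))

# Zero rotation should give the identity warp.
H = rotation_homography(0, 0, 0, 100.0, 80.0, 60.0)
```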

I think it would be much faster to first find the line in the image, and then apply the perspective transform to just the two endpoints of that line, instead of applying it to each and every pixel in the image.

Much, much, faster.
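The endpoint-only idea can be sketched like this: transform just the two ends of a detected line with the 3×3 homography, then read the steering angle straight off the warped endpoints. The matrix `H` here is a toy placeholder and `warp_point` is a hypothetical helper; you'd use whatever bird's-eye homography you've actually computed.

```python
# Sketch: apply a bird's-eye homography to only the two endpoints of a
# detected line, instead of warping every pixel of the frame.
import math

def warp_point(H, x, y):
    # Projective point mapping with perspective divide.
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Toy translation-only homography for illustration.
H = [[1.0, 0.0, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]]

# Two pixel operations instead of width*height of them.
(x0, y0), (x1, y1) = warp_point(H, 10, 20), warp_point(H, 40, 80)

# Line angle relative to the image vertical in the top-down view,
# i.e. the angle of the line relative to the car's heading.
angle = math.degrees(math.atan2(x1 - x0, y1 - y0))
```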

@kwagyeman Do you have some example code to test? Thanks!

I’m late to the party on this topic but looking for something similar. Framerate really doesn’t matter for my problem.

Did this code make it into the codebase? What are the APIs I should look at?

Thank you

We just have this right now: image — machine vision — MicroPython 1.20 documentation (openmv.io)

This method will be refactored this year, though. I’ll be overhauling the onboard lens-correction code such that we can do general-purpose remap()s like OpenCV, driven by a lookup table, with bilinear/bicubic sampling applied to the lookup table and then the pixel remap.

This will result in code that runs much faster, since the lookup table only needs to be generated once, and pixels can then be remapped quickly with just bilinear blending.
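A small sketch of that remap idea, assuming a grayscale image stored as a list of rows: build the coordinate lookup table once, then warp each frame by bilinearly blending the four neighbouring source pixels. `build_lut` and `remap` are hypothetical names, not the planned firmware API.

```python
# Sketch: precompute a per-pixel lookup table of fractional source
# coordinates once, then warp each frame with bilinear sampling.

def build_lut(w, h, inverse_map):
    # inverse_map(x, y) -> fractional source (sx, sy) for each output pixel.
    # This is the expensive step, but it only runs once.
    return [[inverse_map(x, y) for x in range(w)] for y in range(h)]

def remap(img, lut):
    # Per-frame step: just table lookups plus bilinear blending.
    h, w = len(lut), len(lut[0])
    ih, iw = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = lut[y][x]
            x0, y0 = int(sx), int(sy)
            x1, y1 = min(x0 + 1, iw - 1), min(y0 + 1, ih - 1)
            fx, fy = sx - x0, sy - y0
            # Blend the four neighbouring source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

# Tiny 2x2 grayscale image; an identity map should reproduce it exactly.
img = [[0, 10], [20, 30]]
lut = build_lut(2, 2, lambda x, y: (x, y))
```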