Please see the rotation correction method; it can apply perspective transformations around the x/y/z axes.
As for a more general approach, I plan to add the ability for it to take a list of four points and do a homography transform.
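To make the four-point idea concrete, here's a rough sketch of what such a homography transform computes, in plain Python with no firmware APIs assumed (the function names `find_homography` and `warp_point` are my own, not part of any released API): given four source corners and four destination corners, solve the standard 8x8 DLT linear system for the 3x3 matrix H (with h33 fixed to 1), then map points through it.

```python
# Sketch only: solve for a homography from four point correspondences.

def solve(a, b):
    # Gaussian elimination with partial pivoting for an n x n system a*x = b.
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def find_homography(src, dst):
    # src, dst: lists of four (x, y) tuples. Each correspondence
    # (x, y) -> (u, v) contributes two rows of the 8x8 DLT system.
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(a, b) + [1.0]  # h33 = 1
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(h, x, y):
    # Apply H to a point and divide through by the projective scale.
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)
```

For example, mapping the unit square onto a square shifted by (2, 1) yields a pure-translation H, and the center (0.5, 0.5) warps to (2.5, 1.5). On the camera the actual feature would warp whole images rather than single points, but the math is the same.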
But, anyway, no: all the vision code runs on a microcontroller which has a very limited amount of heap space. When Ibrahim and I started working on this project we were developing on the M4 processor, which was very limited. Only on the M7 has it been possible to run desktop libraries like AprilTags/Libdmtx/ZBar, and even then we had to port the code by redoing their memory allocations. We've been feature-creeping the platform to make it more useful, but the original design goal was just to make an easy-to-get-started color tracking system like a Pixy Cam, only more flexible and able to do everything on-board.
Anyway, it's quite possible to run desktop code on the camera if you use the C programming interface: https://github.com/openmv/openmv/wiki. You can really extend the system to do what you want. Note that the AprilTag code brings in a full matrix library with SVD support: https://github.com/openmv/openmv/blob/m ... ag.c#L1027