MTF calculation?

I just received my new V.3 camera and am exploring its capabilities.

For my application, I need to do a fast calculation of the modulation transfer function of a test image. I’m starting from scratch and would appreciate suggestions on how best to do this.

Many thanks in advance!

Hi, I don’t quite know what you’re asking about. Can you go into some more detail about the application?

MTF is a common mathematical measure of image sharpness. It involves differentiation and a Fast Fourier Transform. I hoped to find it in one of the standard or extended libraries but so far have not.

Ah, so, we actually have code for the 2D FFT onboard for phase correlation. If you modify the C firmware the camera can do what you want. The phase correlation file shows an example of this: https://github.com/openmv/openmv/blob/master/src/omv/img/phasecorrelation.c

Let me know if you’d like to write it yourself. Otherwise, I can add it to the firmware in the future.

That’s hugely helpful. I’m a newbie with this so a firmware approach in the future would be wonderful. Meanwhile I’ll see what I can accomplish! Thank you.

Okay, I’m assuming that a firmware update that exposes the fft functionality (or better yet, provides a proper MTF method or some other measure of image sharpness) is some time in the future. I’m interested in fiddling with the firmware. Is the process for doing so documented? I’m not finding it.

Many thanks in advance!

Hi, it’s right here.

I’m going to be doing some work on this FFT stuff this weekend. Can you post a link to an easy to understand paper or website for MTF? I can add it then.

Whoa, that is awesome.

From what I’ve gleaned, there are many approaches to determining image quality (sharpness/contrast/resolving ability). The slanted-edge approach seems to be the most accepted, and is in fact the basis of ISO 12233. Below are some links to browse to get the general drift of it.

My newbie’s summary:

o A slant-edge image target (black/white, at a small angle vs. the pixel array axes) serves as a step function for the imaging system. How steeply stepped does your imaging system perceive it to be? The sharper the step, the more high-spatial-frequency content in its Fourier transform; blurrier images have less.

o MTF, then, boils down to calculating how much high-spatial-frequency content is in the image. Lens quality and setup (focus, etc) is an obviously dominant contributor, but the whole imaging system contributes. In the olden days of analog video connections, even cable quality could have a profound impact.

o MTF seems to be most often calculated as the normalized FFT of the derivative of the image, but I suppose there might be other measures as well; maybe even the histogram of pixel values could have utility for this (since a perfect step would have pixel values in only two bins, white and black; any intermediate bins with pixels in them would indicate blur). I would imagine that image-based autofocus approaches do something similar. Those have been around for a long time. My nearly 30-year-old Sony Handycam had image-based autofocus.
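For concreteness, the FFT-of-the-derivative idea can be sketched in a few lines of numpy. This is just my own illustration with a synthetic 1-D edge profile standing in for a real slant-edge crop, not code from any of the libraries below:

```python
import numpy as np

# Synthetic edge-spread function (ESF): a blurred step sampled at 64 points.
x = np.linspace(-4.0, 4.0, 64)
esf = 0.5 * (1.0 + np.tanh(x / 1.5))   # gentler slope = blurrier edge

lsf = np.diff(esf)                      # line-spread function = derivative of the ESF
mtf = np.abs(np.fft.rfft(lsf))          # magnitude spectrum of the LSF
mtf /= mtf[0]                           # normalize so MTF(0) = 1
# mtf[k] now says how much contrast survives at spatial frequency k;
# a sharper edge keeps mtf high out to larger k.
```

Shrinking the 1.5 in the tanh makes the step sharper and the tail of `mtf` comes up accordingly.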

My own application has frame-rate as a priority, so efficiency trumps exactitude for me. I just want to maximize “sharpness” and don’t care much about rigorous compliance with ISO this-and-that. I’d imagine autofocus would have similar priorities.


Reading material:

http://www.dougkerr.net/Pumpkin/articles/MTF_Slant_Edge.pdf --a few pages of less-interesting background lead up to a nice description of the slant-edge approach

http://harvestimaging.com/blog/?p=1328 --good, concise description of the slant-edge approach

https://www.edmundoptics.com/resources/application-notes/optics/introduction-to-modulation-transfer-function/ --less on the slant-edge approach, more relating to resolving line patterns and other classical stuff

http://www.imatest.com/docs/sharpness/ --measuring sharpness and all sorts of other interesting stuff

This is really cool: There Are Giant Camera Resolution Test Charts Scattered Across the US

and

py_mtf/mtf.py at master · weiliu4/py_mtf · GitHub --some Python example codes I’ve been picking through


Thanks! Meanwhile I’ll play around with that firmware link you provided. Many thanks for that!

Still not quite clear on what to do.

Um, anyway, the FFT code I wrote can do 1-D FFTs up to 1024 points, both real->complex and complex->complex, and inverse FFTs as well.

I will be focusing on adding logpolar mapping to the phase correlation code for a customer.

So I’m googling on [“image based” “auto focus” OR autofocus] and find an intriguing reference to “histogram entropy” as a metric of image sharpness here: http://www.emo.org.tr/ekler/dbdbf7ea134592e_ek.pdf

Might be a useful concept.
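If I’m reading it right, one plausible version of the metric is the Shannon entropy of the pixel-value histogram. Here’s a quick numpy sketch of my own guess at the details, using the step-edge intuition from earlier in the thread: a clean two-level edge target occupies only two bins (low entropy), while blur fills intermediate bins and raises it. (For natural scenes the paper may use the metric in the opposite sense, so treat the direction as application-dependent.)

```python
import numpy as np

def histogram_entropy(gray, bins=64):
    """Shannon entropy (bits) of the pixel-value histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()          # probabilities of occupied bins only
    return float(-(p * np.log2(p)).sum())

# A clean two-level edge occupies two bins -> entropy = exactly 1 bit;
# blurring spreads pixels into intermediate bins -> entropy rises.
sharp = np.array([0.0] * 32 + [255.0] * 32)
blurry = np.convolve(sharp, np.ones(9) / 9, mode="same")
```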

Here’s code for an autofocus routine used in microscopy, of interest mostly for how they calculate contrast: https://github.com/micro-manager/micro-manager/blob/master/scripts/autofocus_test.bsh

My application aside: What’s needed is a way to answer the question: How sharp is this image (or a portion of it)?

If I hadn’t started this thread, how would a machine vision engineer have answered that question? Is there a tried-and-true approach?

If so, that might do for me as well as being broadly useful for others.

Sorry, I’m just asking if you can outline the steps you want. From the code… I kinda see this behavior:

  1. Grab a row of pixels.
  2. Compute the delta between all pixels in the row.
  3. Take the FFT of those deltas.
  4. Get the magnitude of the FFT.
  5. Return the median of the FFT?
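Those steps, in numpy terms (just a sketch with a synthetic row for illustration, not tested firmware code):

```python
import numpy as np

def row_sharpness(row):
    deltas = np.diff(np.asarray(row, dtype=float))  # step 2: pixel-to-pixel deltas
    spectrum = np.abs(np.fft.rfft(deltas))          # steps 3-4: FFT magnitude
    return float(np.median(spectrum))               # step 5: median of the bins

# A hard edge concentrates its delta in one pixel -> flat (high-median) spectrum;
# blurring spreads the delta out -> the spectrum rolls off at high frequency.
sharp = [0.0] * 16 + [255.0] * 16
blurry = np.convolve(sharp, np.ones(5) / 5, mode="same")
```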

That would seem to be one approach! (I’m trying to figure this out too…!)

I don’t think an entire row would be needed, just the region around the edge of the slanted-step.

More broadly, how does one assess the contrast/sharpness of an arbitrary image? Is there a less-fancy approach that might be more applicable to a tiny processor?

Take a look at c++ - calculate blurness and sharpness of an image - Stack Overflow

Update: also opencv - Calculating sharpness of an image - Stack Overflow

Hm…

Mashing together two posts from opencv - Calculating sharpness of an image - Stack Overflow :

Mat src_gray, dst;
int kernel_size = 3;
int scale = 1;
int delta = 0;
int ddepth = CV_64F;

/// Smooth slightly to suppress noise
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );

/// Convert the image to grayscale
cvtColor( src, src_gray, CV_RGB2GRAY );

/// Apply Laplace function
Laplacian( src_gray, dst, ddepth, kernel_size, scale, delta, BORDER_DEFAULT );

//compute sharpness: variance of the Laplacian response
cv::Scalar mu, sigma;
cv::meanStdDev(dst, mu, sigma);

double focusMeasure = sigma.val[0] * sigma.val[0];

…where “cv” appears to reference OpenCV (https://opencv.org)

Wondering if this might be less compute-intensive than the FFT approach. I’m poking through OpenCV now in search of nuggets.
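For experimenting off-camera, the same variance-of-Laplacian figure can be reproduced with plain numpy (a hand-rolled 3x3 Laplacian, no OpenCV required; purely an illustrative sketch on my part):

```python
import numpy as np

# 3x3 Laplacian kernel, same shape OpenCV uses for kernel_size = 1/3.
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def focus_measure(gray):
    """Variance of the Laplacian response over the image interior: higher = sharper."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):                  # small explicit 2-D convolution
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

# Hard vertical edge vs. a smooth linear ramp over the same brightness range.
edge = np.zeros((32, 32)); edge[:, 16:] = 255.0
ramp = np.repeat(np.linspace(0.0, 255.0, 32)[None, :], 32, axis=0)
```

The Laplacian of a linear ramp is (numerically) zero, so only genuinely abrupt transitions contribute to the score.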


UPDATE: Per the discussion at http://answers.opencv.org/question/5395/how-to-calculate-blurriness-and-sharpness-of-a-given-image/ OpenCV has a function, calcBlurriness, which would do the job. Unfortunately it’s undocumented (https://docs.opencv.org/trunk/d5/d50/group__videostab.html#ga527fd10de0ee99ed7585d4a7dc23c470). Trying to ferret out the source now.

Pay dirt: GitHub - bvnayak/PDS_Compute_MTF: Implementation of Slant Edge Method for MTF in Python from PDS Image.

Here is something interesting. If you have fast .jpeg compression then the job may already be done. Per Detection of Blur in Images/Video sequences - Stack Overflow (poster Misha), the DCT coefficients provide a measure of the high-frequency components in the image.

Are the DCT coefficients accessible after a .jpeg compression in OpenMV?

Also see computer vision - Assessing the quality of an image with respect to compression? - Stack Overflow --post by the same author.

“Misha” makes several references to a paper by Marziliano that describes an efficient method of calculating sharpness: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.7.9921&rep=rep1&type=pdf …reading through that now.
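As best I understand the Marziliano approach, it measures the width of each edge (pixels between the luminance extrema on either side of the transition) and averages those widths; wider = blurrier. A rough numpy sketch of my reading of it (the gradient threshold and walk details are my own guesses, not taken from the paper):

```python
import numpy as np

def edge_width_blur(gray, grad_thresh=30.0):
    """Average width (in pixels) of horizontal luminance transitions:
    for each strong gradient, walk outward while the profile keeps
    rising/falling in the same direction. Wider edges = more blur."""
    widths = []
    for row in np.asarray(gray, dtype=float):
        grad = np.diff(row)
        for x in np.where(np.abs(grad) > grad_thresh)[0]:
            lo = x
            while lo > 0 and (row[lo] - row[lo - 1]) * grad[x] > 0:
                lo -= 1
            hi = x + 1
            while hi < len(row) - 1 and (row[hi + 1] - row[hi]) * grad[x] > 0:
                hi += 1
            widths.append(hi - lo)
    return float(np.mean(widths)) if widths else 0.0

# A hard step is one pixel wide; a 5-tap box blur widens it to ~5 pixels.
sharp = np.tile([0.0] * 8 + [255.0] * 8, (4, 1))
blurry = np.array([np.convolve(r, np.ones(5) / 5, mode="same") for r in sharp])
```

Note this needs no FFT at all, which may matter for frame-rate.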

I can do the math easily. I just don’t know what particular steps you’d like me to do. We can’t output graphs on the OpenMV Cam. So, everything needs to boil down to one value.

I might have time to write the code for this tomorrow. If you can work out a high level step by step guide for what you want me to do then I can do that. Note that “compute the PSF” is not a sufficient guide… I’ve seen a lot of details on that but I don’t know what they mean.

Thank you! I am working on the step by step list. First I’m sifting through all the references and pointers and opinions to come up with an optimum approach. Expect my input shortly.

Thank you again for your interest and helpfulness!

Okay, I’ve studied this quite a lot today. What’s needed is a scalar measure of sharpness/contrast/acutance; focusing and other lens adjustments would then serve to maximize that quantity.

Now, I started this thread asking about MTF. But MTF gives a graph vs. spatial frequency, not the figure of merit desired (though I suppose one could pick a spatial frequency and use the value of that bin for optimization).

After my reading, it seems the DCT rather than the FFT will give us the info we need more efficiently. See https://users.cs.cf.ac.uk/Dave.Marshall/Multimedia/PDF/10_DCT.pdf …As I’m sure you know (it was new to me as of today!), the DCT is the basis of .jpeg compression, so an efficient implementation probably already exists in OpenMV.

We’d be interested in the information in the lower-right corner of the DCT matrix (=high frequency).

Nice : “One of the properties of the 2-D DCT is that it is separable meaning that it can be separated into a pair of 1-D DCTs. To obtain the 2-D DCT of a block a 1-D DCT is first performed on the rows of the block then a 1-D DCT is performed on the columns of the resulting block.”

So:

  1. Grab region of interest. (Default: whole image)
  2. Divide it into 8x8 or 16x16 blocks.
  3. Compute the 2D DCT of each block ==> results in 8x8 or 16x16 coefficient bins.
  4. Average bins in lower right (high frequency) corner (say 3x3). This scalar value is the figure-of-merit. Higher = sharper image.
  5. Note this is a figure of merit and not intended to be computationally rigorous. So, for computation purposes we can eliminate the sqrt(2/N), sqrt(2/M) coefficients and save a couple CPU cycles.
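A rough numpy sketch of steps 1–4 (naive matrix DCT for clarity; a real firmware version would presumably reuse the JPEG codec’s fast DCT, and per step 5 the normalization constants are dropped):

```python
import numpy as np

N = 8
# Unnormalized DCT-II basis (the sqrt(2/N) factors are dropped per step 5).
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
D = np.cos(np.pi * (2 * n + 1) * k / (2 * N))

def block_merit(block):
    # Separable 2-D DCT via two 1-D passes (rows, then columns).
    dct2 = D @ block @ D.T
    # Step 4: average |coefficients| in the high-frequency 3x3 corner.
    return np.abs(dct2[-3:, -3:]).mean()

def sharpness(img):
    h, w = img.shape
    merits = [block_merit(img[y:y + N, x:x + N])
              for y in range(0, h - N + 1, N)
              for x in range(0, w - N + 1, N)]
    return float(np.mean(merits))

# Checkerboard (maximal high-frequency content) vs. a flat gray field.
yy, xx = np.indices((16, 16))
checker = 255.0 * ((xx + yy) % 2)
flat = np.full((16, 16), 128.0)
```

A flat field scores (numerically) zero since all its energy sits in the DC coefficient; the checkerboard scores high because its energy lands in the high-frequency corner.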

To my eye this is compatible with the slant-edge approach and also can be used for autofocus of arbitrary images.

What do you think? An FFT approach could of course be substituted if preferred.