MTF calculation?
I just received my new V.3 camera and am exploring its capabilities.
For my application, I need to do a fast calculation of the modulation transfer function of a test image. I’m starting from scratch and would appreciate suggestions on how best to do this.
Many thanks in advance!
Re: MTF calculation?
Hi, I don't quite know what you're asking about. Can you go into some more detail about the application?
Nyamekye,
Re: MTF calculation?
MTF is a common mathematical measure of image sharpness. It involves differentiation and a Fast Fourier Transform. I hoped to find it in one of the standard or extended libraries but so far have not.
Re: MTF calculation?
Ah, so, we actually have code for the 2D FFT onboard for phase correlation. If you modify the C firmware the camera can do what you want. The phase correlation file shows an example of this: https://github.com/openmv/openmv/blob/m ... relation.c
Let me know if you'd like to write it yourself. Otherwise, I can add it to the firmware in the future.
Nyamekye,
Re: MTF calculation?
That’s hugely helpful. I’m a newbie with this so a firmware approach in the future would be wonderful. Meanwhile I’ll see what I can accomplish! Thank you.
Re: MTF calculation?
Okay, I'm assuming that a firmware update that exposes the FFT functionality (or better yet, provides a proper MTF method or some other measure of image sharpness) is some time in the future. I'm interested in fiddling with the firmware. Is the process for doing so documented? I'm not finding it.
Many thanks in advance!
Re: MTF calculation?
Hi, it's right here.
https://github.com/openmv/openmv/wiki
I'm going to be doing some work on this FFT stuff this weekend. Can you post a link to an easy to understand paper or website for MTF? I can add it then.
Nyamekye,
Re: MTF calculation?
Whoa, that is awesome.
From what I've gleaned, there are many approaches to determining image quality (sharpness/contrast/resolving ability). The slanted-edge approach seems to be the most accepted, and is in fact the basis of ISO 12233. Below are some links to browse to get the general drift of it.
My newbie's summary:
o A slanted-edge image target (black/white, at a small angle vs. the pixel array axes) serves as a step function for the imaging system. How steeply stepped does your imaging system perceive it to be? The sharper the step, the more high-spatial-frequency content in its Fourier transform; blurrier images have less.
o MTF, then, boils down to calculating how much high-spatial-frequency content is in the image. Lens quality and setup (focus, etc.) is an obviously dominant contributor, but the whole imaging system contributes. In the olden days of analog video connections, even cable quality could have a profound impact.
o MTF seems to be most often calculated as the normalized FFT of the derivative of the image, but I suppose there might be other measures as well; maybe even the histogram of pixel values could have utility for this (since a perfect step would have pixel values in only two bins, white and black; any intermediate bins with pixels in them would indicate blur). I would imagine that image-based autofocus approaches do something similar. Those have been around for a long time. My nearly 30-year-old Sony Handycam had image-based autofocus.
My own application has frame rate as a priority, so efficiency trumps exactitude for me. I just want to maximize "sharpness" and don't care much about rigorous compliance with ISO this-and-that. I'd imagine autofocus would have similar priorities.
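To make that summary concrete, here's a minimal NumPy sketch of the pipeline (my own illustration, not from any of the linked codes; the synthetic edge profiles are made up):

```python
import numpy as np

def mtf_from_edge_profile(esf):
    """Normalized MTF curve from a 1D edge profile (edge spread function):
    derivative gives the line spread function, |FFT| of that gives the MTF."""
    esf = np.asarray(esf, dtype=float)
    lsf = np.diff(esf)                    # line spread function (derivative of the edge)
    spectrum = np.abs(np.fft.rfft(lsf))   # magnitude spectrum
    return spectrum / spectrum[0]         # normalize so MTF(0) = 1

# An ideal step edge vs. a softened (blurred) edge
sharp = np.concatenate([np.zeros(16), np.ones(16)])
blurred = np.concatenate([np.zeros(12), np.linspace(0.0, 1.0, 8), np.ones(12)])
# The blurred edge rolls off faster at high spatial frequencies
assert mtf_from_edge_profile(blurred)[-1] < mtf_from_edge_profile(sharp)[-1]
```

For a real slanted-edge measurement you'd first project the 2D ROI onto an oversampled 1D profile along the edge normal, but the core math is as above.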
Reading material:
http://www.dougkerr.net/Pumpkin/article ... t_Edge.pdf a few pages of less-interesting background lead up to a nice description of the slanted-edge approach
http://harvestimaging.com/blog/?p=1328 a good, concise description of the slanted-edge approach
https://www.edmundoptics.com/resources/ ... function/ less on the slanted-edge approach, more relating to resolving line patterns and other classical stuff
http://www.imatest.com/docs/sharpness/ measuring sharpness and all sorts of other interesting stuff
This is really cool: https://petapixel.com/2013/02/15/there ... sstheus/
https://github.com/habi/GlobalDiagnosti ... ter/MTF.py
and
https://github.com/TheChymera/pyMTF/blob/master/README
and
https://github.com/weiliu4/py_mtf/blob/master/mtf.py some Python example codes I've been picking through
Thanks! Meanwhile I'll play around with that firmware link you provided. Many thanks for that!
Re: MTF calculation?
Still not quite clear on what to do.
Um, anyway, the FFT code I wrote can do 1D FFTs up to 1024 points. It can do both real-to-complex and complex-to-complex FFTs, as well as inverse FFTs.
I will be focusing on adding log-polar mapping to the phase correlation code for a customer.
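For anyone following along at home, those capabilities map onto NumPy's FFT helpers (illustrative only; the C firmware API will of course differ):

```python
import numpy as np

x = np.random.rand(8)            # real-valued input signal
full = np.fft.fft(x)             # complex-to-complex FFT: 8 complex bins
half = np.fft.rfft(x)            # real-to-complex FFT: only N//2 + 1 bins stored
# A real input has a conjugate-symmetric spectrum, so the half spectrum is enough
assert np.allclose(full[:5], half)
back = np.fft.irfft(half, n=8)   # inverse ("reverse") FFT recovers the input
assert np.allclose(back, x)
```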
Nyamekye,
Re: MTF calculation?
So I'm googling on ["image based" "auto focus" OR autofocus] and find an intriguing reference to "histogram entropy" as a metric of image sharpness here: http://www.emo.org.tr/ekler/dbdbf7ea134592e_ek.pdf
Might be a useful concept.
Re: MTF calculation?
Here's code for an autofocus routine used in microscopy, of interest mostly for how they calculate contrast: https://github.com/micromanager/micro ... s_test.bsh
My application aside: What's needed is a way to answer the question: How sharp is this image (or a portion of it)?
If I hadn't started this thread, how would a machine vision engineer have answered that question? Is there a tried-and-true approach?
If so, that might do for me as well as being broadly useful for others.
Re: MTF calculation?
Sorry, I'm just asking if you can outline the steps you want. From the code... I kinda see this behavior:
1. Grab a row of pixels.
2. Compute the delta between all pixels in the row.
3. Take the FFT of those deltas.
4. Get the magnitude of the FFT.
5. Return the median of the FFT?
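Those five steps can be sketched in NumPy like this (illustrative only, not firmware code; the median in step 5 is the part I'm guessing at):

```python
import numpy as np

def row_sharpness(row):
    """Median FFT magnitude of a row's pixel-to-pixel deltas (steps 1-5 above)."""
    deltas = np.diff(np.asarray(row, dtype=float))   # step 2: deltas between pixels
    mags = np.abs(np.fft.rfft(deltas))               # steps 3-4: FFT magnitude
    return float(np.median(mags))                    # step 5: median of the spectrum
```

A hard step edge spreads energy across all frequencies (high median); a soft edge concentrates it at low frequencies (low median).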
Nyamekye,
Re: MTF calculation?
That would seem to be one approach! (I'm trying to figure this out too...!)
I don't think an entire row would be needed, just the region around the edge of the slanted step.
More broadly, how does one assess the contrast/sharpness of an arbitrary image? Is there a less-fancy approach that might be more applicable to a tiny processor?
Re: MTF calculation?
Take a look at https://stackoverflow.com/questions/139 ... e#13966888
Update: also https://stackoverflow.com/questions/287 ... fanimage
Hm...
Re: MTF calculation?
Mashing together two posts from https://stackoverflow.com/questions/287 ... fanimage :
...where "cv" appears to reference OpenCV (https://opencv.org)
Wondering if this might be less compute-intensive than the FFT approach. I'm poking through OpenCV now in search of nuggets.
UPDATE: Per the discussion at http://answers.opencv.org/question/5395 ... venimage/ OpenCV has a function, calcBlurriness, which would do the job. Unfortunately it's undocumented (https://docs.opencv.org/trunk/d5/d50/gr ... a7dc23c470). Trying to ferret out the source now.
Code: Select all
Mat src_gray, dst;
// Light blur to suppress noise before measuring sharpness
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
/// Convert the image to grayscale
cvtColor( src, src_gray, CV_RGB2GRAY );
/// Apply the Laplacian and compute sharpness as the variance of its response
cv::Laplacian( src_gray, dst, CV_64F );
cv::Scalar mu, sigma;
cv::meanStdDev( dst, mu, sigma );
double focusMeasure = sigma.val[0] * sigma.val[0];
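For trying this measure off-camera, here's a rough NumPy translation of the same variance-of-the-Laplacian idea (my own sketch, not OpenCV's calcBlurriness source; the wrap-around 5-point Laplacian is an assumption):

```python
import numpy as np

def variance_of_laplacian(gray):
    """Focus measure: variance of a 5-point discrete Laplacian response.
    Higher variance = more edge energy = sharper image."""
    g = np.asarray(gray, dtype=float)
    # 5-point Laplacian via shifted copies (edges wrap around, which is fine
    # for a rough figure of merit)
    lap = (np.roll(g, 1, axis=0) + np.roll(g, -1, axis=0) +
           np.roll(g, 1, axis=1) + np.roll(g, -1, axis=1) - 4.0 * g)
    return float(lap.var())
```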
Last edited by Scottj on Sat Jan 06, 2018 12:49 pm, edited 1 time in total.
Re: MTF calculation?
Here is something interesting. If you have fast JPEG compression then the job may already be done. Per https://stackoverflow.com/questions/518 ... sequences (poster Misha), the DCT coefficients provide a measure of the high-frequency components in the image.
Are the DCT coefficients accessible after a JPEG compression in OpenMV?
Also see https://stackoverflow.com/questions/419 ... ompression, a post by the same author.
"Misha" makes several references to a paper by Marziliano that describes an efficient method of calculating sharpness: http://citeseerx.ist.psu.edu/viewdoc/do ... 1&type=pdf ...reading through that now.
Re: MTF calculation?
I can do the math easily. I just don't know what particular steps you'd like me to do. We can't output graphs on the OpenMV Cam. So, everything needs to boil down to one value.
I might have time to write the code for this tomorrow. If you can work out a high-level step-by-step guide for what you want me to do then I can do that. Note that "compute the PSF" is not a sufficient guide... I've seen a lot of details on that but I don't know what they mean.
Nyamekye,
Re: MTF calculation?
Thank you! I am working on the step by step list. First I'm sifting through all the references and pointers and opinions to come up with an optimum approach. Expect my input shortly.
Re: MTF calculation?
Thank you again for your interest and helpfulness!
Okay, I've studied this quite a lot today. What's needed is a scalar measure of sharpness/contrast/acutance. Focusing and other lens adjustments would serve to optimize this quantity.
Now, I started this thread asking about MTF. But MTF gives a graph vs. spatial frequency, not the figure of merit desired (though I suppose one could pick a spatial frequency and use the value of that bin for optimization).
After my reading, it seems DCT rather than FFT will give us the info we need with more efficiency. See https://users.cs.cf.ac.uk/Dave.Marshall ... 10_DCT.pdf ...As I'm sure you know (it was new to me as of today!), the DCT is the basis of JPEG, so an efficient implementation probably already exists in OpenMV.
We’d be interested in the information in the lowerright corner of the DCT matrix (=high frequency).
Nice : “One of the properties of the 2D DCT is that it is separable meaning that it can be separated into a pair of 1D DCTs. To obtain the 2D DCT of a block a 1D DCT is first performed on the rows of the block then a 1D DCT is performed on the columns of the resulting block.”
So:
1. Grab region of interest. (Default: whole image)
2. Divide into 8x8 or 16x16 blocks
3. Compute 2D DCT ==> Results in 8x8 or 16x16 bins
4. Average bins in the lower-right (high-frequency) corner (say 3x3). This scalar value is the figure of merit. Higher = sharper image.
5. Note this is a figure of merit and not intended to be computationally rigorous. So, for computation purposes we can eliminate the sqrt(2/N), sqrt(2/M) coefficients and save a couple CPU cycles.
To my eye this is compatible with the slantedge approach and also can be used for autofocus of arbitrary images.
What do you think? An FFT approach could of course be substituted if preferred.
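To make steps 1-5 concrete, here's a rough NumPy sketch of the figure of merit for one block, using the separability property quoted above and dropping the sqrt normalization per step 5 (the 3x3 corner size and helper names are my own choices):

```python
import numpy as np

def dct1d(x):
    # Unnormalized DCT-II: the sqrt(2/N) scale factors are dropped, per step 5,
    # since only relative bin magnitudes matter for a figure of merit.
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * k / N))
                     for k in range(N)])

def dct_sharpness(block):
    """Average magnitude of the 3x3 high-frequency (lower-right) DCT bins."""
    b = np.asarray(block, dtype=float)
    coeffs = np.apply_along_axis(dct1d, 1, b)       # 1D DCT on the rows...
    coeffs = np.apply_along_axis(dct1d, 0, coeffs)  # ...then on the columns
    return float(np.abs(coeffs[-3:, -3:]).mean())
```

Applied per 8x8 block over the region of interest and averaged, this gives a single scalar to maximize during focusing.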
Re: MTF calculation?
The DCT code can't be pulled out of the JPEG code easily. However, the FFT code can easily spit out a 2D FFT of the image in 1 line of C code.
Um, basically, I can make a method that would take an ROI. It would then compute the FFT of that ROI. Note that the function won't be able to operate on a large image so you'll need to use the pool() methods to reduce the resolution (or just pick a small res).
Anyway, after the FFT runs... yes, I can grab the value of the highest bin... or, I can grab the value of the highest 3x3 bins. It's unclear what value you want exactly out of the FFT.
Um, think about the FFT as a 2D mountain map. I can find the largest value... that would tell you the most dominant frequency. But that's not what you want. You want the power of the high frequencies... so... it kinda sounds like I should just return a histogram of the FFT. There's already a lot of built-in code for histogram stuff, like finding the max, mode, median, etc. Then you can do anything you please.
So, anyway, I'll take the FFT of the image, compute the magnitude, and then create a histogram of the FFT. You will be able to select the number of bins in the histogram. As for inspecting the histogram, you'll be able to reuse all of the already built-in histogram stuff. For example, if your goal is to get the max frequency, we have a method called get_percentile() which you pass a percentile to... like 0.99, and it returns the value at the 99% mark of the histogram distribution. Basically the max value that's not an outlier. A larger value for this means the image is sharper... a lower value means the image is not sharp.
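To make the get_percentile() idea concrete, here's a hypothetical NumPy stand-in (the function name, binning, and return convention are my guesses, not the OpenMV API):

```python
import numpy as np

def histogram_percentile(values, p, bins=64):
    """Value at cumulative fraction p of a histogram of `values`.
    With p = 0.99 this acts as a robust 'max that isn't an outlier'."""
    counts, edges = np.histogram(np.asarray(values, dtype=float), bins=bins)
    cdf = np.cumsum(counts) / counts.sum()
    idx = int(np.searchsorted(cdf, p))     # first bin whose CDF reaches p
    return edges[min(idx + 1, bins)]       # right edge of that bin
```

Feeding this the FFT-magnitude histogram and tracking the 0.99 value while adjusting focus is the usage pattern described above.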
Nyamekye,
Re: MTF calculation?
Thanks! Pity about the DCT methods not being exposed. So, FFT to the rescue!
FFT of a regionofinterest sounds perfect. Histogram sounds hugely useful and very flexible. I am actually rather excited about your comments. Great suggestions, thank you. For sure, I owe you a coffee, and I look forward to shining a spotlight on this camera and the really impressive libraries and IDE.
One thing to keep an eye out for: there is a possibility that a differentiation or Laplacian might be needed ahead of the FFT, but let's try just the straight FFT first and see if its high-order bin(s) detect focus (etc.) by themselves. I expect it will do the job.
Re: MTF calculation?
Hey, getting your feature in is on the queue of things to do. I should be able to completely tackle it by the weekend. Maybe sooner. It's not hard but I'm working on phase correlation stuff right now.
Nyamekye,
Re: MTF calculation?
Thank you!
Re: MTF calculation?
Hi, please see this:
https://dsp.stackexchange.com/questions ... inmatlab
One of the answers talks about taking circular rings from the FFT space and putting that in a histogram. This is likely what I will do for the FFT. The question is will this work for you?
Given the diagonal-cut image you were talking about, you'll see a sinc()-like function in the frequency domain histogram. I.e., some bin will have a high peak falling off to other bins around it. You'll then be able to measure using the mode() to determine the image quality. The mode will fall as the image gets blurry.
For a natural image, the FFT will look like white noise, though, with the DC value being the biggest.
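The circular-rings idea from that answer can be sketched like so (my own NumPy approximation; the ring count and spacing are arbitrary choices):

```python
import numpy as np

def radial_fft_histogram(gray, nbins=16):
    """Sum 2D FFT magnitude in concentric rings around DC.
    Bin 0 = lowest frequencies; later bins = higher frequencies."""
    g = np.asarray(gray, dtype=float)
    mag = np.abs(np.fft.fftshift(np.fft.fft2(g)))    # magnitude, DC at center
    h, w = mag.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2.0, x - w / 2.0)           # radial distance from DC
    ring = np.minimum((r / r.max() * nbins).astype(int), nbins - 1)
    return np.bincount(ring.ravel(), weights=mag.ravel(), minlength=nbins)
```

More energy in the outer rings means more high-frequency content, i.e. a sharper image.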
Nyamekye,
Re: MTF calculation?
That looks like it's worth trying.
I'm a little concerned about frame rate, but this looks like a broadly useful approach.
My actual application will be crunching the results into a scalar figure of merit, higher = sharper/better image. I want to do this quickly, to facilitate optimization of the image (focus, mostly). The slanted edge is a classical target for such things since it serves as a step input to the imaging system, meaning it contains all frequencies, and we're basically evaluating the roll-off due to the optics. More high-frequency content = less roll-off. Optimize the HF content and you've optimized the image, at least within the region of interest. The approach you found in that link strikes me as useful for any image... which would seem to be a good thing!
Thank you again. I am hoping you are finding this interesting too!
Re: MTF calculation?
Note, you will be able to high-pass the image with morph() using a high-pass filter kernel if you want to do the FFT on the delta image versus the normal image.
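As an illustration of what such a kernel does (the kernel values and the convolve3x3 helper are my own sketch, not the morph() implementation):

```python
import numpy as np

# A typical 3x3 high-pass kernel: coefficients sum to zero, so flat regions
# map to zero and only edges/detail survive.
HIGH_PASS = np.array([[-1.0, -1.0, -1.0],
                      [-1.0,  8.0, -1.0],
                      [-1.0, -1.0, -1.0]])

def convolve3x3(gray, kernel):
    """Apply a 3x3 kernel via shifted copies (edges wrap around)."""
    g = np.asarray(gray, dtype=float)
    out = np.zeros_like(g)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += kernel[dy + 1, dx + 1] * np.roll(np.roll(g, -dy, 0), -dx, 1)
    return out
```

Running the FFT on this delta image removes the huge DC peak before the histogram is built.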
Nyamekye,
Re: MTF calculation?
That's brilliant. Thanks! Can hardly wait to play with this.
Re: MTF calculation?
Hi, I've been working on your request... and I'm not sure I'm getting what I need from the FFT for doing anything useful. Attached is a binary that does the FFT. Please let me know if you get anything useful from it.
What's happening here:
1. 2D FFT
2. Magnitude
3. FFT shift (puts the zero-frequency components in the center of the 2D image versus the edges)
4. Linpolar (note that I weight the stuff at the top of the image (low frequencies) less than the bottom parts; if I don't do this you just get the DC components since they are so much bigger).
5. Draw the FFT on screen.
The get_fft_histogram() method returns a histogram of the FFT of the image. Each bin is equal to the sum across the image from left to right. There's a bin by default per row in the image.
Pass "phase=True" to get the phase from the FFT instead of the magnitude.
...
Anyway, if you can get back to me quickly I'll be able to merge this feature into the next release. Otherwise it will have to wait. It would be useful to me if you could do "img.save("file.ppm")" on a few test images and send me the ppm files. Please don't save images directly from OpenMV IDE because those are of lower quality due to jpeg compression. Also, I'm drawing the FFT on screen right now for debug. The final version won't do that.
From what I've done so far, radial sums of the frequency bands of the FFT don't seem to yield useful results. But, I don't have test patterns to compare with.
Not sure what to do here. I thought the magnitude of radial bands would fall off as you get away from the top of the image but that doesn't appear to be the case. I could switch things around and sum bins by changing the x/y axes. This would make each bin represent the frequencies from 0 to inf in each rotational direction. This seems like the only information the FFT encodes.
...
Note, I've tried looking at the log() of the magnitude. That just gives you an FFT image where everything is relevant and you can't really see any details. The power away from the center does not fall off smoothly. It's basically like one giant mountain from the DC part and then little peaks on all the other frequencies. Taking the log just makes the big DC peak as small as the smaller frequency peaks. It does "whiten" the FFT space... but not much else.
...
Finally, note that solving your problem will help me solve my phase correlation issues too with being able to create a system capable of undoing rotation/scale issues.
Code: Select all
# Hello World Example
#
# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!
import sensor, image, time
sensor.reset()                          # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)     # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.B128X128)   # Set frame size to 128x128
sensor.skip_frames(time = 2000)         # Wait for settings to take effect.
clock = time.clock()                    # Create a clock object to track the FPS.
while(True):
    clock.tick()                        # Update the FPS clock.
    img = sensor.snapshot()             # Take a picture and return the image.
    print(img.get_fft_histogram().get_statistics(), clock.fps())
Attachments: openmv.zip (2.1 MiB)
Nyamekye,
Re: MTF calculation?
Do "img.linpolar(reverse=True)" to get the FFT back un-warped.
Mmm, so, one thing I noticed... the strength of two arms radiating from the center increases the sharper an edge is. And the rotation of those two arms changes based on the camera direction.
Anyway, it seems like the 2D FFT kinda forces you to pick a direction to sum up magnitude on. You can't just look at all directions, it seems. It's quite clear that a sharp edge in the image moves all power into two radial directions, from frequencies 0 to inf. Note that one half of the FFT mirrors the other half.
Nyamekye,
Re: MTF calculation?
Just saw this. Thank you! I'll get to work.
Here's a test image: http://www.graphics.cornell.edu/~westin ... schart.pdf
Re: MTF calculation?
FFT images of the test patterns in focus and out of focus, please. I have a pretty good idea of what to give you, but I want to see if I'm right.
Nyamekye,
Re: MTF calculation?
Here are some first fruits. I'm finding the mean statistic to be a decent measure of focus, at least on a small portion of the image dominated by a black/white transition element. Perhaps there's a better statistic to use, but this is a start! I'm not doing any differentiation, highpass filtering, or anything of the sort yet.
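The focus metric being described — the mean of the 2D-FFT magnitude over a patch — can be sketched on the desktop like this (NumPy stand-in for the on-camera version; `focus_score` is an illustrative name, and DC is dropped so overall brightness doesn't dominate):

```python
import numpy as np

def focus_score(img):
    """Mean FFT magnitude with the DC term excluded."""
    mag = np.abs(np.fft.fft2(img))
    mag[0, 0] = 0.0
    return mag.mean()

def box_blur_x(img):
    """Crude 5-tap box blur along x (circular, via shifted copies)."""
    return (np.roll(img, -2, 1) + np.roll(img, -1, 1) + img +
            np.roll(img, 1, 1) + np.roll(img, 2, 1)) / 5

# A black/white edge patch, sharp vs. defocused:
img = np.zeros((64, 64))
img[:, 32:] = 1.0
sharp_score = focus_score(img)
blur_score = focus_score(box_blur_x(img))
```

Because blurring attenuates the high-frequency magnitudes, the sharp patch scores strictly higher, which is why maximizing this statistic pulls the lens toward focus.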
Here are some screen snaps:
Focused by eye:
2DFFT of Focused by eye:
Defocused:
2DFFT of Defocused:
Refocused by maximizing mean(2DFFT):
Results of refocusing by maximizing mean(2DFFT):
Attachment: Focused by eye.png (398.17 KiB)
Re: MTF calculation?
I just noticed your paragraph about saving the image as .ppm, not from the IDE, etc. Whoops.
I'll set it up again; sorry to have missed that. I was having too much fun with it and my reading comprehension took a dip.
It'll be morning before I'm with the camera again, but I'll make it a priority. Sorry for my oversight.
Re: MTF calculation?
I tried rotating the histogram bins like I mentioned, and that didn't work at all. I guess what I posted previously is the best we'll get for this. To be honest... it's kinda useless except for gauging how sharp the image is.
Nyamekye,
Re: MTF calculation?
By no means useless. It is the beginning of all sorts of analyses of image quality, and yes, sharpness really is a key quantity. The circles may not have panned out (yet), but there's all sorts of stuff we haven't tried, such as differentiation. MTF is, from what I've seen, documented as commencing with a differentiation, and there's probably a good reason for that.
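For anyone following along, the differentiation the post refers to is the standard edge-based MTF pipeline: sample an edge spread function (ESF) across a black/white transition, differentiate it to get the line spread function (LSF), and take the FFT magnitude of the LSF, normalized at DC, as the MTF. A 1-D toy sketch (array sizes and the logistic "blurred edge" are illustrative):

```python
import numpy as np

def mtf_from_edge(esf):
    lsf = np.diff(esf)                 # ESF -> LSF (the differentiation step)
    mtf = np.abs(np.fft.rfft(lsf))     # LSF -> frequency response
    return mtf / mtf[0]                # normalize so MTF(0) = 1

x = np.linspace(-8, 8, 256)
sharp = (x > 0).astype(float)          # ideal step edge
soft = 1.0 / (1.0 + np.exp(-x))        # logistic ~ defocused edge
m_sharp = mtf_from_edge(sharp)
m_soft = mtf_from_edge(soft)
```

The ideal edge differentiates to an impulse, so its MTF stays at 1 across all frequencies, while the blurred edge's MTF falls off quickly; the gap between the two curves is exactly the sharpness loss.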
Now the foundation is there! Thank you!
I'll upload the images as (and how) you requested in a bit.
Re: MTF calculation?
Files (both of image and fft) attached.
I'm finding I can't save to .ppm if the image is GRAYSCALE. Python complains that the images aren't .ppm if they're grayscale. So, unlike what I used for the FFTs in my earlier post, I'm using RGB565 in all the attached files. These save as .ppm without issue.
==> From my playing so far, the mean statistic seems much less useful as a measure of sharpness with RGB565: it does not vary as much with defocus as it does for grayscale, where it peaks very satisfyingly when focus is achieved. In the short term, grayscale seems the way to go for my purposes, at least using this statistic. Perhaps there is another statistic (or prior filtering) that will work better for RGB? In any case, this may illuminate why you've been unimpressed with the utility of this in your own testing.
(Images are in the .zip. The forum won't let me upload .ppms.)
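One hedged guess about the RGB565 result above: computing the statistic on a single luminance channel derived from the color image, rather than on the packed RGB data, might restore the grayscale behavior. A sketch of the standard Rec. 601 luma conversion (this is an untested suggestion, not something verified on the camera; `rgb` is assumed to be an HxWx3 float array):

```python
import numpy as np

def luma(rgb):
    """Rec. 601 weighted sum of R, G, B into one grayscale channel."""
    return rgb @ np.array([0.299, 0.587, 0.114])

# Example: a pure-red patch maps to a flat luma of 0.299.
rgb = np.zeros((4, 4, 3))
rgb[..., 0] = 1.0
y = luma(rgb)
```

The FFT-mean focus statistic could then be run on `y` exactly as it was on the grayscale captures.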
Attachment: focusing images.zip (117.68 KiB)
Re: MTF calculation?
Super. Note you have to save as a .pnm for grayscale. See this for more info: https://en.wikipedia.org/wiki/Netpbm_format (Yes, I have plans to allow you to save any format to any format. I wrote the code for it, but it's not been committed yet.)
Nyamekye,