Flat Field Correction

I was wondering what the fastest way is to apply a flat field correction to a snapshot? The lighting that I have is a bit uneven and I want to correct that with a flat field prior to thresholding and finding blobs. I would take a flat field of the background and then use this for all subsequent frames. Thanks in advance for any advice.

Regards
redcrow

Use the img.difference() method. See the frame differencing scripts for an example of how to do this. You can allocate a second frame buffer and then difference the current image using that.
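The differencing idea can be sketched outside the camera API. This is a minimal NumPy stand-in (not the OpenMV `img.difference()` call itself, and `frame_difference` is a hypothetical helper name): store one background frame, then take the per-pixel absolute difference against each new frame.

```python
import numpy as np

def frame_difference(current, background):
    """Per-pixel absolute difference between the current frame and a
    stored background frame (a stand-in for differencing against a
    saved frame buffer)."""
    cur = current.astype(np.int16)   # widen so the subtraction can't wrap
    bg = background.astype(np.int16)
    return np.abs(cur - bg).astype(np.uint8)

# Example: a flat background and a frame with one bright blob.
background = np.full((4, 4), 50, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200  # simulated blob

diff = frame_difference(frame, background)
print(diff[1, 1])  # 150: the blob stands out against a zeroed background
```

Note that, as the later replies point out, differencing corrects additive artifacts (dark current), not multiplicative ones like uneven lighting.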

I know this is an old discussion, but I was just going through forums looking at past threads and found this one.

Flat field correction is not performed with image differencing, although dark current correction is. To correct a flat field, you need division. But, I took a glance at the docs and see a divide() function, so voila. The proper sequence of events for a basic image correction routine would be:

  1. Capture a flat field image: F
  2. Capture a dark frame to apply to the flat field, D0
  3. Capture the subject image: I1, I2, I3…
  4. Capture a dark frame image: D1, D2, D3…
  5. Repeat steps 3 and 4 multiple times if you are performing iterative processing such as image stacking
  6. Generate a corrected flat field image: F_d = F - D0
  7. For each image perform: I_df = (I - D) / F_d
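The steps above can be sketched as a small NumPy routine. This is a hedged illustration of the math, not any particular camera API; `correct_frame` is a hypothetical name, and the rescale-by-mean step is a common convention I'm assuming here to keep the output in the original intensity range.

```python
import numpy as np

def correct_frame(image, dark, flat, flat_dark):
    """I_df = (I - D) / F_d, with F_d = F - D0, per the steps above.
    The result is rescaled by the mean of F_d so the output stays in
    the original intensity range (an assumed, but common, convention)."""
    img = image.astype(np.float64)
    f_d = flat.astype(np.float64) - flat_dark.astype(np.float64)
    f_d = np.clip(f_d, 1e-6, None)          # guard against division by zero
    corrected = (img - dark.astype(np.float64)) / f_d
    return corrected * f_d.mean()           # rescale to sensible units

# Toy data: a uniform subject seen through a vignetted optical path.
dark = np.full((2, 2), 10.0)
flat = np.array([[110.0, 110.0], [110.0, 60.0]])   # one corner gets half the light
scene = np.array([[110.0, 110.0], [110.0, 60.0]])  # same falloff hits the subject
out = correct_frame(scene, dark, flat, dark)
# After correction all four pixels are equal: the vignetting is gone.
```

The key point is the division: the flat encodes the *multiplicative* response of each pixel, so dividing by it flattens uneven illumination, which subtraction alone cannot do.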

In many scenarios, the dark frame doesn't vary much from one image to the next, so you can get away with capturing just one dark frame, such as D0 indicated above, and using it for all images. But often the dark frame varies over an imaging session, the two most common causes being changing environmental temperature (in astrophotography, the night gets colder right up until sunrise) and changing sensor temperature (the camera gets hotter over an extended session due to the electronics running at length). Back when I did A LOT of astrophotography (I was a member of a group that performed circuit-soldering modifications to off-the-shelf webcams to enable arbitrary exposure duration, and additionally to grab electronic control of the gain circuit so it could be disabled during an extended exposure; I wrote the most widely used Mac astrophotography stacking program ever produced, in fact), I would capture a few dark frames across a multi-hour imaging session: not one for every image, but one every couple of minutes. The idea is then to use the dark frame (or a stack of such frames to increase the SNR) from a time period near each given frame you want to correct.
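The stacking-and-nearest-in-time approach described above can be sketched like this (both helper names are hypothetical; a median stack is one common choice for combining darks, assumed here):

```python
import numpy as np

def stack_darks(darks):
    """Combine several dark frames into one master dark. A median stack
    suppresses random read noise while rejecting outliers (e.g. cosmic
    ray hits) that a plain mean would smear into the result."""
    return np.median(np.stack(darks, axis=0), axis=0)

def nearest_dark(frame_time, dark_times, master_darks):
    """Pick the master dark captured closest in time to the given frame,
    matching the 'use a dark from a nearby time period' idea above."""
    i = int(np.argmin([abs(frame_time - t) for t in dark_times]))
    return master_darks[i]

# Three darks, one with a simulated cosmic-ray hit; the median rejects it.
d = np.zeros((2, 2))
hit = d.copy()
hit[0, 0] = 90.0
master = stack_darks([d, d, hit])

# Darks captured at t = 0, 60, 120 s; a frame at t = 70 s uses the middle one.
d0, d1, d2 = np.full((2, 2), 5.0), np.full((2, 2), 8.0), np.full((2, 2), 12.0)
picked = nearest_dark(70, [0, 60, 120], [d0, d1, d2])
```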

Bear in mind that, to really do this right, you want access to the raw sensor data before it goes through a deBayering process, as that commingles the pixel values. Most commercial sensors don't even give you the option of obtaining raw data though, so as hobbyists we have to make do with what is available to us. Of course, some artifacts don't occur at the pixel level, such as lens vignetting, and flat field correction works very well on such artifacts even at late stages of image processing. But other artifacts are intrinsic to the CCD/CMOS sensor cells (bias, hot pixels, etc.) and really ought to be corrected on a per-pixel basis before deBayering (or deblooming or anything else, for that matter), though we can get decent results working with minimally processed images too.

So, probably TMI, but whatever.

Thanks for posting that info kwiley.

Note: divide() works like the Photoshop divide. It's not a pure numerical divide: Blend modes - Wikipedia. It's done differently because naively dividing one 8-bit image by another would generally yield 0 for nearly all pixels.

https://github.com/openmv/openmv/blob/master/src/omv/img/mathop.c#L521
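For reference, one common formulation of the Photoshop-style divide blend treats 8-bit pixels as values in [0, 1], divides, and clips. This is a NumPy sketch of that formulation (the exact OpenMV implementation in mathop.c may differ; `divide_blend` is a hypothetical name):

```python
import numpy as np

def divide_blend(base, blend):
    """Photoshop-style divide on 8-bit images: normalize to [0, 1],
    divide, and clip. Equivalent to min(255, base * 255 / blend)."""
    b = np.maximum(blend.astype(np.float64), 1.0)   # guard against /0
    out = base.astype(np.float64) * 255.0 / b
    return np.clip(out, 0, 255).astype(np.uint8)

# Dividing an image by itself gives white, not 0 or 1 everywhere,
# which is why this differs from a pure numerical divide:
img = np.array([[10, 128], [200, 255]], dtype=np.uint8)
print(divide_blend(img, img))  # every pixel becomes 255
```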

If you want to divide by a constant use the gamma_correction() method. This allows you to apply:

p = ((p ^ c0) * c1) + c2
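To divide every pixel by a constant k with this formula, leave the gamma term at 1 and use c1 as the gain: c0 = 1, c1 = 1/k, c2 = 0. A NumPy sketch of the idea (`gamma_transform` is a hypothetical name, and I'm assuming the curve is applied on pixel values normalized to [0, 1], as gamma curves usually are):

```python
import numpy as np

def gamma_transform(img, c0=1.0, c1=1.0, c2=0.0):
    """p = ((p ^ c0) * c1) + c2, applied per pixel on normalized values."""
    p = img.astype(np.float64) / 255.0
    out = (p ** c0) * c1 + c2
    return np.clip(np.rint(out * 255.0), 0, 255).astype(np.uint8)

# Divide every pixel by k = 2: gamma stays 1, the gain does the division.
k = 2.0
img = np.array([[100, 200]], dtype=np.uint8)
print(gamma_transform(img, c0=1.0, c1=1.0 / k))  # [[ 50 100]]
```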

Finally, the global shutter sensor is basically raw pixel output.

I have a global shutter module. I have got to put that thing on my H7 and fire it up. I haven’t even used it once yet.