I know this is an old discussion, but I was just going through forums looking at past threads and found this one.
Flat field correction is not performed with image differencing, although dark current correction is. To correct a flat field, you need division. But I took a glance at the docs and see a divide() function, so voilà. The proper sequence of events for a basic image correction routine would be:
- Capture a flat field image: F
- Capture a dark frame to apply to the flat field, D0
- Capture the subject image: I1, I2, I3…
- Capture a dark frame image: D1, D2, D3…
- Repeat the previous two steps (subject image plus dark frame) multiple times if you are performing an iterative process like image stacking
- Generate a corrected flat field image: F_d = F - D0
- For each image perform: I_df = (I - D) / F_d
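The steps above might look like this in numpy. The function name and array handling are mine, not from any particular library, and the mean-normalization of the flat is a common refinement I've added (it keeps the corrected image in roughly the same intensity range as the input) rather than something spelled out in the list:

```python
import numpy as np

def correct_frame(image, dark, flat, flat_dark):
    """Basic dark subtraction and flat field correction.

    image, dark, flat, flat_dark: 2-D arrays of equal shape.
    Hypothetical helper, sketching the sequence described above.
    """
    # F_d = F - D0: remove the dark signal from the flat field
    f_d = flat.astype(float) - flat_dark
    # Normalize by the mean so the division doesn't rescale the image
    # (an assumption on my part; the raw formula divides by F_d directly)
    f_norm = f_d / f_d.mean()
    # I_df = (I - D) / F_d
    return (image.astype(float) - dark) / f_norm
```

With a synthetic vignetted image (uniform subject times a falloff pattern, plus a constant dark level), the correction returns a uniform result, which is exactly what a flat field correction should do.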
In many scenarios the dark frame doesn’t vary much from one image to the next, so you can get away with capturing just one dark frame, such as the D0 indicated above, and using it for all images. But often the dark frame drifts over an imaging session, the two most common causes being changing environmental temperature (in astrophotography, the night gets colder right up until sunrise) and changing sensor temperature (the camera gets hotter over an extended session because the electronics run at length). Back when I did A LOT of astrophotography (I was a member of a group that performed circuit-soldering modifications to off-the-shelf webcams to enable arbitrary exposure durations, and additionally to grab electronic control of the gain circuit so it could be disabled during an extended exposure; in fact, I wrote the most widely used Mac astrophotography stacking program ever produced), I would capture a few dark frames across a multi-hour imaging session, not one for every image, but one every couple of minutes. The idea is then to use the dark frame (or a stack of such frames, to increase the SNR) from a time period near each given frame you want to correct.
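A minimal sketch of that bookkeeping in numpy, assuming you recorded a capture timestamp for each master dark. Both function names are hypothetical, and median-combining is one common way to stack darks for SNR, not necessarily what I used back then:

```python
import numpy as np

def master_dark(darks):
    """Median-combine several dark frames into one master dark.

    The median suppresses read noise and rejects outliers better
    than a simple mean when only a few frames are available.
    """
    return np.median(np.stack([d.astype(float) for d in darks]), axis=0)

def nearest_dark(frame_time, dark_times, dark_stacks):
    """Pick the master dark captured closest in time to a given frame.

    dark_times: capture times (e.g. seconds since session start),
    dark_stacks: the corresponding master darks, in the same order.
    """
    i = int(np.argmin([abs(t - frame_time) for t in dark_times]))
    return dark_stacks[i]
```

Each subject frame then gets dark-subtracted with whichever master dark was captured nearest it in time, rather than a single session-wide dark.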
Bear in mind that to really do this right, you want access to the raw sensor data before it goes through the deBayering process, since deBayering commingles the pixel values. Most commercial sensors don’t even give you the option of obtaining raw data, though, so as hobbyists we have to make do with what we have available to us. Of course, some artifacts don’t occur at the pixel level, such as lens vignetting, and flat field correction works very well on those even at late stages of image processing. Other artifacts are intrinsic to the CCD/CMOS sensor cells (bias, hot pixels, etc.) and really ought to be corrected on a per-pixel basis before deBayering (or deblooming or anything else for that matter), but we can get decent results working with minimally processed images too.
So, probably TMI, but whatever.