From Dave Thompson, May 4, 2011:
> > For each pixel, I have fitted a linear function to the counts as a function of exposure time, for the first four exposure times only, where the counts are below 20000, and the detections should still be linear (but this can be debated).
It is not debated (or debatable): HgCdTe detectors are not linear from DN = 0 + epsilon. As long as you do not go much past ~80% of full well, a single second-order polynomial usually fits the data quite well, although some other functional form may work better closer to saturation. Your formulation is a bit different from how I would approach the linearity correction, but there is obviously more than one way to do this.
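A second-order fit of this kind can be sketched as follows. This is a minimal illustration, not Dave's or the original pipeline's code; the function names and the through-the-origin form (assuming reset-subtracted counts) are my assumptions:

```python
import numpy as np

def fit_linearity(exptimes, counts):
    """Fit counts = a*t + b*t**2 through the origin; return (a, b)."""
    t = np.asarray(exptimes, dtype=float)
    dn = np.asarray(counts, dtype=float)
    # Least squares with basis [t, t**2]; no constant term for reset-subtracted data
    A = np.column_stack([t, t**2])
    (a, b), *_ = np.linalg.lstsq(A, dn, rcond=None)
    return a, b

def linearize(dn, a, b):
    """Map measured DN back onto the ideal linear response a*t:
    invert dn = a*t + b*t**2 for t, then return a*t."""
    t = (-a + np.sqrt(a**2 + 4.0 * b * dn)) / (2.0 * b)
    return a * t

# Synthetic single pixel: mildly nonlinear (b < 0, rolling over toward full well)
t = np.array([2.0, 4.0, 8.0, 16.0])
a_true, b_true = 1000.0, -2.0
dn = a_true * t + b_true * t**2
a, b = fit_linearity(t, dn)
lin = linearize(dn[-1], a, b)   # recovers the ideal linear counts a*t at t = 16
```

The correction factor discussed later (fac) would then be `linearize(dn, a, b) / dn` for each pixel.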
I ran a quick test by feeding an image with a constant 50K counts through your correction algorithm (in IRAF only; I am not an IDL user), and I have the following concerns:
1) You can see the quadrant structure (see attached image lin50k.jpg). It is also visible in the coefficient images. Normally you should not see this. I suspect you did not correct for the extra exposure time between the reset and the first read. This can be significant for short exposures at high flux rates (e.g. standard stars, flatfield data), and it would affect your fits in the way it appears in your coefficient images. The extra exposure time is not a constant; there is a ~2 s gradient across each readout channel.
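The reset-to-read delay can be folded into a per-pixel effective exposure-time map before fitting. A minimal sketch, assuming a linear ~2 s ramp along the fast axis of each readout channel (the geometry, axis, and detector dimensions here are illustrative assumptions, not measured values):

```python
import numpy as np

def effective_exptime(requested_t, ny=2048, nx=2048, nchannels=4, ramp=2.0):
    """Per-pixel exposure-time map: the requested time plus the readout
    delay, which ramps from 0 to `ramp` seconds across each channel."""
    ch_width = nx // nchannels
    # The delay grows linearly across each channel in the read direction,
    # then restarts at the next channel boundary.
    delay_1d = np.tile(np.linspace(0.0, ramp, ch_width), nchannels)
    return requested_t + np.broadcast_to(delay_1d, (ny, nx))

# Small example: a nominal 2 s exposure on a toy 4x8 detector with 2 channels
tmap = effective_exptime(2.0, ny=4, nx=8, nchannels=2)
```

Fitting counts against `tmap` instead of the nominal exposure time would remove the channel-dependent gradient from the coefficient maps.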
2) The distribution of counts in the linearized 50K data is odd (see attached plot imhist.pdf). The minimum across the image is 30861 and the maximum is 94857. I would have expected ~52000 ± a (fairly small) bit, and certainly nothing that gets corrected below 50K (I would expect something more like a steep rise starting at ~51K with a small tail above 53K). The core of the distribution is also broader than expected, and larger than the shot noise from the photons. This may simply be the result of #1. Running the same test through your MER correction gives even more extreme results (see #4).
3) You say in your report that you set the coefficients to zero for the erratic pixels (so fac = 1.0). It would be better to set them to the mean correction for the rest of the array. In fact, if the distribution of coefficients is narrow enough, you can reasonably substitute a mean (constant) correction in place of the pixel-by-pixel maps (but you need the array fits to decide whether this can be done). You can have erratic behavior in the data/fits for several reasons. The pixels could be truly bad (saturated at the minimum exposure time or just dead), in which case they will be masked out of the data when processed. But they could simply have been noisy when the data were taken, or cosmic rays in the data could be affecting the fits (it is best to control for this with a double-pass fit, with outlier rejection on the second pass). A large fraction of the pixels around the edge are simply not illuminated in the data we took, because the focal plane mechanism is slightly undersized with respect to the detector; these are perfectly good pixels that may get used depending on the flexure state of the instrument. If your fit was bad but the pixel is usable, applying no correction would leave that pixel unlinearized (this is the narrow peak at 50K in the plot).
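The double-pass fit with outlier rejection could look something like the sketch below. The function name and the MAD-based sigma estimate are my choices for illustration (a robust sigma keeps a single cosmic-ray hit from inflating the rejection threshold); the source only specifies "fit, reject, refit":

```python
import numpy as np

def double_pass_fit(t, dn, deg=2, nsigma=3.0):
    """Fit a polynomial, reject outliers against a robust (MAD-based)
    sigma, then refit on the surviving points."""
    t = np.asarray(t, dtype=float)
    dn = np.asarray(dn, dtype=float)
    coeffs = np.polyfit(t, dn, deg)                 # pass 1: all points
    resid = dn - np.polyval(coeffs, t)
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    keep = np.abs(resid) <= nsigma * sigma if sigma > 0 else np.ones(dn.size, bool)
    if keep.sum() > deg:                            # pass 2: outliers rejected
        coeffs = np.polyfit(t[keep], dn[keep], deg)
    return coeffs, keep

# A pixel whose ramp is clean except for one cosmic-ray hit
t = np.arange(1.0, 17.0)
dn = 1000.0 * t - 2.0 * t**2
dn[7] += 20000.0                                    # the cosmic-ray hit
coeffs, keep = double_pass_fit(t, dn)               # keep[7] is now False
```

The same per-pixel loop could also flag pixels whose residuals stay large after the second pass, so they get the mean correction instead of fac = 1.0.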
4) Because the same pixels are used independent of the readout mode, and the readouts go through the same electronics, I would expect the linearization coefficients to be the same in the MER10 and O2DCR readout modes. They differ in your data, perhaps because of the same issue discussed in #1 above.
5) It would be best to run a set of real data through the correction prescription to show that the data are properly linearized. Of course, these data do not yet exist (a standard star taken on a photometric night, with exposure times spanning the range up to saturation, would work). I will put this on the roster of tests for the prep nights and set up a script to take the needed data.
If you have time to rework your fits taking the above into consideration, please contact me. I tried to do this in IRAF, but the fitting was going to take an unreasonably long time. And finding enough time to concentrate on this is difficult.
Best Regards, Dave.
- imhist.pdf: From Dave Thompson's May 4, 2011 email