
Instrumental Image Calibration Steps

I. Single Orbit Pipeline Processing
The following is based on
Roc's WSDS Pipelines Subsystem Function Overview.
Here's a broad overview of the main steps:
- Monitoring of the ingested raw image buffer and metadata; availability
of calibration data; transfer to the working area.
- Instrumental image calibration (see Section I.1).
- Source detection, characterization and aperture photometry.
- Frame bandmerge.
- Pointing reconstruction (refinement) and astrometric solutions
for scale, distortion.
- Optical artifact identification and single-frame radhit
flagging based on optimized matched filters (or flagging "bad"
χ2 fits to sources?) and correlating positions in source lists
from frame overlaps. Later, use these to (i) update (clean up or
flag) the source detection lists; (ii) update frame processing
status masks for use by the coadder. Artifact types include:
- dichroic or filter glints;
- diffraction spikes;
- bright star halos;
- optical/electronic ghosts;
- non-uniform stray light;
- scattered light patches from bright objects;
- especially latents (see Section I.7.a);
- frame radhit detection? (see Section I.1.g).
- Solar System object identification.
- Accumulate frame detections, bandmerge and LogN-LogS statistics;
mean photometric offsets from frame overlaps; image shape/asymmetry
characterization for scan-mirror synchronization assessment.
Output measures to QA.
- Photometric calibration.
- Derive and/or collect orbit parameters and statistics:
band-to-band and overlap frame offsets; coverage (area vs. time);
sensitivity.
- Update working group DBs with image data (file system pointers);
new metadata; bandmerged (position tagged) source records.
- Assemble single-orbit QA information from all steps, assign quality
scores, and generate a QA report.
Below we outline the (TBD) generic instrumental calibration steps for processing
of raw frames across all WISE bands. For a more specific plan of the overall
pipeline infrastructure, and thoughts on how/when
certain calibration products are created and used, see:
Instrumental Calibration Pipeline Plan.
Some of the calibration steps may only be specific to the Si:As arrays
(bands 3&4), as suggested from prior experience with similar detectors
on Spitzer (MIPS-ch1 and IRAC-ch3&4). These, as well as any new
corrections across all bands, will be looked for in upcoming ground
characterization.
The information below was gathered with assistance from a number of
sources.
a. Inputs
The main inputs are calibrations matched to the science
frames to be processed. Some of these can be created on-orbit, and if not,
our best guess will have to come from ground characterization.
i. Bad-pixel Mask
For example: known hot, dead, or low-responsivity pixels from ground
testing. These can then be validated/updated using on-orbit dark-sky
median statistics from the frame stacks used for flat/sky-offset
generation. Check for persistence at the same x,y locations between
consecutive frames. Bad pixels can also be tracked using frame pixel
histograms.
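As a rough illustration only, hot or dead pixels could be flagged from the temporal medians of a dark-sky frame stack; the threshold k and the MAD-based sigma estimate are placeholder choices, not a decided algorithm:

```python
import numpy as np

def flag_bad_pixels(frame_stack, k=5.0):
    """Flag hot/dead pixel candidates from a stack of dark-sky frames.

    frame_stack : 3-D array (n_frames, ny, nx) of dark-sky images.
    A pixel whose temporal median deviates from the global median by
    more than k robust-sigma is flagged. k = 5 is a placeholder.
    """
    per_pixel_med = np.median(frame_stack, axis=0)  # per-pixel temporal median
    global_med = np.median(per_pixel_med)           # typical sky level
    # robust sigma from the median absolute deviation (MAD)
    mad = np.median(np.abs(per_pixel_med - global_med))
    sigma = 1.4826 * mad if mad > 0 else 1e-12
    hot = per_pixel_med > global_med + k * sigma
    dead = per_pixel_med < global_med - k * sigma
    return hot | dead
```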
ii. Flat-field Images
Flats can be acquired from:
- pre-launch ground tests;
- IOC with the cover on, for bands 1 and/or 2 (or would this be
too dark?). Note: the warm cover will saturate bands 3 and 4!
- regions of high zodiacal background in ecliptic plane using N-frame
average or median?
- Do we need to care about color biases, i.e., flats will only be
good for calibrating sources that have a similar color to the
background from which they're generated?
- dark sky flats (but need to ensure that wells are appropriately
filled - may be a problem for bands 1 and 2)?
Some points to note:
- The nominal flats acquired under "uniform" illumination (either
on-orbit or on the ground) will not be accurate enough to characterize
the large-scale (low-frequency) sensitivity variations introduced by
imperfections as light passes through the entire WISE optical system.
The nominal flats are also termed "high frequency flats" since
they mostly characterize
pixel-to-pixel responsivity variations.
To avoid seeing large scale flat-fielding residuals in the data (including
possible vignetting),
we need to derive low frequency correction maps and apply them to
the high frequency flats.
The low frequency correction maps can be derived by placing the same
star over different portions of an array,
measuring relative changes in its brightness after unit normalizing,
and then fitting
a surface function. Since we will have many stars, we need to combine all the
measurements in some optimal way (algorithms already developed for ACS on HST).
Once created, this should remain static. This is a planned task for IOC.
- We also need to beware of absorbing grit on the scan mirror or other
optical components (fingers crossed).
- We also want to track changes after annealing.
- QA information needs to be generated and stored for each product
for trending purposes.
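The surface-fitting step described above could look something like the following sketch, which fits a 2nd-order 2-D polynomial to relative stellar fluxes by least squares; the polynomial order and the plain least-squares combination are assumptions (the ACS/HST algorithms referenced above are more sophisticated):

```python
import numpy as np

def fit_low_freq_flat(x, y, rel_flux, shape):
    """Fit a 2nd-order 2-D polynomial surface to relative stellar
    fluxes measured at array positions (x, y), and return the
    low-frequency correction map evaluated over the full array."""
    # design matrix for the basis: 1, x, y, x^2, x*y, y^2
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(A, rel_flux, rcond=None)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    surf = (coeffs[0] + coeffs[1] * xx + coeffs[2] * yy
            + coeffs[3] * xx * xx + coeffs[4] * xx * yy
            + coeffs[5] * yy * yy)
    return surf / np.median(surf)   # unit-normalize the map
```

The resulting map would be multiplied into the high-frequency flat before use.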
iii. Sky Offset Images
These can be generated from a trimmed mean (or median) of N stacked
images of the sky in a moving window of length TBD.
The inputs are dark-subtracted, linearized, flat-fielded and then the
mean/median image is zero-normalized using the mean/median over all frames.
These will be subtracted from the science frames in that window. These
sky-offset corrections
compensate for any short term
effects not characterized by the "long term" dark or flat corrections.
The optimum size of the moving window can be derived
by monitoring the repeatability in relative
photometry with time. The assumption here is that the temporal variations are
due to varying dark and/or flat calibrations.
To assist in tracking the variations, QA information
needs to be generated and stored for each product.
NOTE: this step may remove real (natural) background gradients from
the science frames. However, this can be
circumvented by averaging (lots of) frames taken over at least half an orbit,
i.e., such that any zodiacal gradients in the
sky frames cancel out. However, this window may be too long to
filter out the short-term variations being sought.
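A minimal sketch of the moving-window sky-offset generation described above (a plain median combine; a trimmed mean with outlier rejection would be used in practice):

```python
import numpy as np

def sky_offset(window_frames):
    """Median-combine the dark-subtracted, linearized, flat-fielded
    frames in the moving window, then zero-normalize so that only
    the spatial structure (illumination residual) remains. The
    result is subtracted from each science frame in the window."""
    med = np.median(window_frames, axis=0)   # per-pixel median over window
    return med - np.median(med)              # zero-normalized offset image
```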
iv. Dark Images
These need to be generated on the ground.
We need to track changes
with temperature, especially after annealing.
Can we acquire darks for bands 1 and 2 with the cover on in IOC?
Bands 3 and 4 will be difficult to acquire in orbit.
Can we sacrifice the background and create darks from patches of
"dark" sky (if such exist)? Is there some configuration
of the scan-mirror (i.e.,
it's maximum extent) that can give us a reasonably dark field?
A measure of the "long term" residual dark (offset) above background
can also be derived by normalizing against DIRBE observations.
Prior to second-pass processing, we can find an optimal set of
darks (and flats) by estimating corrections to pre-existing
calibrations that give us minimal variation in time and space (over
each array) in the flux from the same piece of sky, or, a collection of
sources. In the end, the variations will need to be consistent with
random measurement error.
v. Non-linearity Correction
- This calibration can be acquired from ground tests since we can easily
zero-out undesired reads in SUR processing, hence controlling the
effective exposure time.
- Can also calibrate in flight by observing same stars over multiple
orbits with different exposure times and brightnesses that span full
dynamic range.
- Adopt same estimation algorithm as used for linearizing MIPS-24
slope data?
- Readout channel (or array position) dependent? Need to quantify
level of variation over array, otherwise, use global correction
function?
- Need to verify that low end of the ramp is indeed linear
(no "hook" like effects).
- If any pixels very non-linear, then can flag as bad pixels.
- QA information needs to be generated and stored for each product.
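As an illustration only, a quadratic response model is sketched below; the coefficients are hypothetical placeholders for a per-pixel (or per-readout-channel) calibration table derived from the ramp tests above:

```python
def linearize(slope_dn, coeffs=(0.0, 1.0, 2.0e-6)):
    """Apply an inverse-response polynomial to a measured slope value:
    true = c0 + c1*m + c2*m**2. The coefficients here are hypothetical;
    a per-pixel or per-readout-channel table from ground calibration
    would replace this global tuple."""
    c0, c1, c2 = coeffs
    return c0 + c1 * slope_dn + c2 * slope_dn ** 2
```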
vi. Other Calibration Files?
e.g. gain and read noise maps.
vii. Uncertainty Images
Need to accompany each of the above calibration products for end-to-end
error propagation.
viii. Processing Parameter Files
I.e., control files. As individual namelists, or a single ASCII file
of all parameters with defaults?
b. Retrieval of raw images, calibration data, and
processing control files
- PL executive to query DB for list of N frames at a time, where
N = TBD, e.g., single orbit. N may need to be greater than
some minimum if in calibration (ensemble) processing mode.
- Also query to ensure all calibration products applicable to
"science frames" in the queue are available (e.g., created previously
from on-orbit data). If not,
default to ground-based cal data on first pass.
- Transfer all required inputs to local disk.
- Processing parameter files: keep in fixed repository (i.e., don't copy
over each time).
c. Sanity check on selected metadata in FITS header / missing pixel
data / new broken pixels (?)
- Ensure keywords necessary for all processing are present in header and
values within range.
- Flag pixels with missing data (coded value 32,766; includes loss from
overflow) in a "processing status mask"? This is a good place to
initialize the processing status mask so it can be updated incrementally
downstream.
- Also read in bad-pixel mask (e.g. known hot, dead, low responsive pixels)
- treated as a calibration file. Store this information in processing
status mask to propagate downstream?
- Locate any new broken (intrinsically negative) pixels from the coded
value 32,767.
- Detect new hot and low-response pixels and flag in processing mask.
Depending on timescale of transients/pixel stability, should we perform this
in an automated manner, e.g., using histograms from processing of ensembles
of images for sky-flat or sky-offset generation; from source detection
lists (in step 3 of overall single orbit processing)
using x,y positional correlation between consecutive frames, or, perform
offline and update bad-pixel calibration masks manually?
- May need to mask out n x n region (where n = TBD) around suspect dead
(or low responsive) pixels if there is leakage of energy into neighboring
pixels?
- May also need to mask out n x n region (where n = TBD) around suspect hot
(high dark current pixels) due to the Inter-Pixel Capacitance (IPC)
effect?
- Update processing status mask for all new hot/dead/broken pixels and
masked regions.
- Store above information in QA table. Update original bad-pixel mask
calibration with new hot/dead/broken pixels and masked regions?
- Do we need to explicitly subtract the 128 (positive constraint offset)
from the slope values, or, let dark subtraction take care of it later?
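The mask initialization described in this step might be sketched as follows; the bit assignments are hypothetical:

```python
import numpy as np

# hypothetical bit assignments for the processing status mask
MISSING = 1 << 0    # coded value 32,766 (missing data / overflow loss)
BROKEN  = 1 << 1    # coded value 32,767 (intrinsically negative pixels)

def init_status_mask(raw):
    """Initialize the per-pixel processing status mask from a raw
    frame of 15-bit coded integer values; downstream steps would
    OR in further bits (saturation, radhits, etc.)."""
    mask = np.zeros(raw.shape, dtype=np.int32)
    mask[raw == 32766] |= MISSING
    mask[raw == 32767] |= BROKEN
    return mask
```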
d. Bias/offset corrections
- Manifests itself as a spatially varying offset (i.e. gradient) that's
due to suspected drifts in the readout electronics. Expected to
disappear in HgCdTe arrays when connected to flight electronics but
expected to persist in Si:As arrays.
- Algorithms devised and tested by James Garnet at Teledyne. Also looked
into by Amy. This involves using the reference pixels to remove offsets
from each of the 16 readout blocks in some optimal self-consistent
manner (TBD).
- Do we need to monitor these offsets (transients) in flight?
- Following these corrections, should we remove the 4-pixel wide
reference region?
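A sketch of one possible reference-pixel correction, assuming 16 vertical readout blocks and 4-pixel-wide reference regions at the top and bottom of the array; the actual geometry and the optimal self-consistent combination (TBD above) may differ:

```python
import numpy as np

def remove_channel_offsets(frame, n_channels=16, n_ref=4):
    """Subtract a per-readout-block offset estimated from the
    (light-insensitive) reference rows. Geometry assumptions:
    n_channels vertical blocks; n_ref reference rows at the top
    and bottom edges of the array."""
    out = frame.astype(float)
    width = frame.shape[1] // n_channels
    for c in range(n_channels):
        sl = slice(c * width, (c + 1) * width)
        ref = np.concatenate([out[:n_ref, sl].ravel(),
                              out[-n_ref:, sl].ravel()])
        out[:, sl] -= np.median(ref)   # per-block offset estimate
    return out
```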
e. Saturation Detection/Flagging
- Flag saturated pixels as coded by values 32,753 - 32,765 in processing
status mask.
- Do we need additional thresholding on returned slope values to
safeguard against the on-board ramp threshold being too high (i.e.,
residing in a regime that is too non-linear) and not changeable quickly
enough? Note: the A/D converters will saturate long before the wells
fill up, but pixels may become strongly non-linear over time (or
suddenly)?
- Dependence of saturation level on location of bright source in pixel
(center vs. edge)?
- Bright source limits (in real Jy flux units) that cause hard/soft
saturation? Thus, can predict in advance from 2MASS extrapolations
in bands 1&2?
- May need to mask out n x n region (where n = TBD) around suspect saturated
pixels if neighbors severely affected: e.g., by the IPC effect.
- Report saturated pixels (stats) in QA table.
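The flagging and mask-growing steps above might be sketched as follows (the grow radius stands in for the TBD n x n region):

```python
import numpy as np

SAT_LO, SAT_HI = 32753, 32765   # on-board saturation codes

def flag_saturation(raw, grow=1):
    """Flag pixels carrying on-board saturation codes and grow the
    mask by `grow` pixels to cover IPC-affected neighbors (n = TBD)."""
    sat = (raw >= SAT_LO) & (raw <= SAT_HI)
    grown = sat.copy()
    for y, x in zip(*np.nonzero(sat)):
        # simple box dilation around each saturated pixel
        grown[max(0, y - grow):y + grow + 1,
              max(0, x - grow):x + grow + 1] = True
    return grown
```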
f. BITPIX conversion and LSB truncation correction
- Conversion from unsigned 15-bit integer format to 32-bit real.
- Also add 0.5 DN (0.5 LSB) to pixel values to allow for bias from
truncation of onboard computed slope values.
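A minimal sketch of this conversion:

```python
import numpy as np

def to_float_with_lsb_fix(raw):
    """Convert unsigned 15-bit coded slope values to 32-bit floats,
    adding 0.5 DN (0.5 LSB) to remove the mean bias introduced by
    on-board truncation of the computed slopes."""
    return raw.astype(np.float32) + np.float32(0.5)
```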
g. Radhit Detection and Flagging
- Envisage this being a rudimentary first pass, to be done more robustly
in the multi-orbit pipeline.
- Hits that saturate (or disturb) any read in ramp will be flagged by
saturation detection above. Non-saturating hits are more challenging.
Maybe can detect these using thresholded pixel segmentation on median
filtered/subtracted images (Spitzer module).
- Detection using these methods may be difficult due to small dynamic
range in effectively "smoother" slope image (compared to real SUR
data values which we won't have).
- Use conservative (high) threshold to avoid flagging real point
sources?
- Need also to beware of radhits that cause streaks / trails / latents
(see latent correction below).
- Note: an excellent single frame "sharp-edge" (hence radhit) detector
is described in:
http://www.astro.yale.edu/dokkum/lacosmic
- May need to mask out n x n region (where n = TBD) around suspect radhit
pixels if neighbors severely affected: e.g., by the IPC effect.
- Use source detection lists to flag suspect radhits at frame level:
non-repeating positions and/or anomalous curves of growth?
- Flag all affected pixels in processing status mask, also output stats
to QA table.
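A sketch of the median-filter/threshold approach described above (brute-force 3x3 filter for clarity; the threshold k is deliberately conservative, per the note on avoiding real point sources):

```python
import numpy as np

def radhit_candidates(img, k=10.0):
    """Single-frame radhit candidates: subtract a 3x3 median-filtered
    version of the image and threshold the residual at k robust-sigma."""
    ny, nx = img.shape
    med = np.empty_like(img, dtype=float)
    padded = np.pad(img.astype(float), 1, mode='edge')
    for y in range(ny):                 # brute-force 3x3 median filter
        for x in range(nx):
            med[y, x] = np.median(padded[y:y + 3, x:x + 3])
    resid = img - med
    mad = float(np.median(np.abs(resid - np.median(resid))))
    sigma = max(1.4826 * mad, 1e-12)    # robust noise in the residual
    return resid > k * sigma
```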
h. Noise estimation in raw image
From Poisson/read noise model, i.e., initialize uncertainty image
for downstream error propagation.
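For illustration, assuming a simple Poisson-plus-read-noise model with placeholder gain and read-noise values (per-array maps would come from the calibration library):

```python
import numpy as np

def uncertainty_image(signal_dn, gain=4.0, read_noise_dn=10.0):
    """Initialize the per-pixel uncertainty image from a Poisson +
    read-noise model. gain (e-/DN) and read_noise_dn are hypothetical
    placeholders."""
    shot_var = np.clip(signal_dn, 0, None) / gain   # Poisson variance, DN^2
    return np.sqrt(shot_var + read_noise_dn ** 2)
```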
i. Correct for row-dependent biases in any reads (?)
This was seen in MIPS-24 and termed the "read-2 effect": the second
read of every ramp was seen to have a small offset relative to a linear
extrapolation from samples higher up the ramp. The magnitude of this
offset was row-dependent (cross read-out direction).
j. Droop correction
- As seen in MIPS-24: electronic effect where signal in a pixel is
proportional to the total charge on array.
- Note: counts above the ADC saturation limit (in the well) also
contribute to droop. Thus, must somehow de-saturate to
estimate properly?
- Second order effect also seen: "rowdroop", where in addition, signal
in a pixel depends on total counts from all pixels in its row
(cross readout direction).
- Characterize on the ground by masking out portions of the array and
varying the intensity in unmasked regions (we can "easily" desaturate
if we have all sample reads). Calibrate the constant of proportionality
between single-pixel counts and total counts over the whole array. In
flight, we can validate the ground calibration using (dark?) reference
pixels, or observe extended sources with "hard" boundaries and compare
on-source versus off-source (background) counts as the source moves off
the array.
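A first-order sketch of the correction, with a hypothetical proportionality constant standing in for the ground-calibrated value (the rowdroop term would be analogous, summed over the pixel's row):

```python
def droop_correct(pixel_dn, total_dn, c_droop=1.0e-5):
    """First-order droop removal: subtract a term proportional to the
    total charge on the array. c_droop is a hypothetical placeholder
    for the ground-calibrated constant of proportionality."""
    return pixel_dn - c_droop * total_dn
```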
k. Correct other electronic artifacts?
- Bandwidth effects (as seen in IRAC Si:As): decaying trails of pixels
at regular intervals from bright or saturated sources?
- Readout channel patterns (manifested as "jailbars" in MIPS arrays).
- Inter-band crosstalk if they are all read out simultaneously
(seen in IRAC Si:As)?
l. Dark Subtraction
Subtract dark image (from library of ground calibrations referenced
according to dependencies: e.g., temperature, anneals?)
m. Non-linearity Correction
See calibration options in I.1.a.v.
n. Flatfield correction: non-uniformities in pixel response
See calibration options in I.1.a.ii.
o. Sky offset subtraction (zero-normalized illumination profile)
See calibration method in I.1.a.iii.
p. Generate image frame statistics (skip masked pixels where
necessary)
- Histograms: full, column, row, readout block dependent stats.
- Point source filtered and masked-out pixel noise stats: standard
deviations, median absolute deviations from median, quantiles.
- Generate plots too: mean/median row and column cuts, histograms, etc.
- Enumerate bad (masked) pixels, including saturated pixels for later
performance monitoring.
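The robust noise statistics above might be computed along these lines (MAD-based sigma; the quantile choices are arbitrary examples):

```python
import numpy as np

def frame_stats(img, mask=None):
    """Robust per-frame statistics, skipping masked pixels where a
    boolean mask (True = bad) is supplied."""
    good = img[~mask] if mask is not None else img.ravel()
    med = np.median(good)
    mad = np.median(np.abs(good - med))   # median absolute deviation
    return {"median": float(med),
            "robust_sigma": float(1.4826 * mad),  # Gaussian-equivalent
            "quantiles": [float(q) for q in np.percentile(good, [5, 50, 95])],
            "n_masked": int(mask.sum()) if mask is not None else 0}
```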
q. Outputs (products) at this stage:
- Instrumentally calibrated frame(s), excluding distortion corrections
but containing raw WCS information from telemetry in headers.
These are also termed Level-1 frames.
- Corresponding uncertainty image frame(s).
- Processing status pixel masks to propagate downstream.
- QA statistics summary (ascii) files and plots.
7. Artifact Identification
This section expands on the "latent detection and flagging" substep in
step 6 of the overall single-orbit processing.
Other artifact identification strategies will be added soon.
a. Latent Detection and Flagging
- Can potentially have three types (e.g., from MIPS-24): (i) "bright" ones
that last from a few to tens of seconds; (ii) dark splotchy ones from
very bright sources (threshold?) that persist for hours; (iii) "bright"
ones from extremely bright/saturated regions that last for days.
- Some latents may also transition from bright to dark in presence of
high background.
- Detection methods: predict flux using calibrated decay time constants;
use regular in-scan frame offsets to predict locations/patterns - need
source extraction positions to locate? Latent will have same pixel
location in consecutive frames, thus can positionally/photometrically
correlate and use probabilistic rejection criteria. [Note, these
methods need: (i) source extractions/photometry; (ii) local noise
estimates for flagging. These are good
reasons to leave latent flagging until later,
i.e., after QA stats (containing noise estimates) and source
characterizations are available].
- Correction method (maybe only for strong dark latents): divide by a
coadded (outlier rejected) stack of N consecutive images following
offending bright source (i.e. a running average/median) - may not work
in high source density regions. Certainly don't want to introduce
additional noise!
- Long-term latencies in Si:As arrays can be removed by annealing?
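The decay-prediction method mentioned above might be sketched as follows; the initial latent fraction f0, time constant tau, and rejection tolerance are all hypothetical placeholders for values that would be calibrated on orbit:

```python
import math

def latent_flux(parent_flux, dt, tau=10.0, f0=0.01):
    """Predicted latent signal dt seconds after a bright source: an
    initial fraction f0 of the parent flux decaying exponentially with
    time constant tau (both numbers hypothetical; MIPS-24-like latents
    would be calibrated on orbit)."""
    return f0 * parent_flux * math.exp(-dt / tau)

def is_latent(meas_flux, parent_flux, dt, tol=3.0, noise=1.0):
    """Flag a detection at the same pixel location in a consecutive
    frame as a latent candidate if its flux agrees with the decay
    prediction to within tol * (local noise estimate)."""
    return abs(meas_flux - latent_flux(parent_flux, dt)) < tol * noise
```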
Last update - 29 July 2007
F. Masci, R. Cutri - IPAC