Coverage Simulations for Band 2 Candidate Array (fpa141)

I. Introduction

We present simulated depth-of-coverage maps using bad-pixel masks constructed from a WISE band-2 (4.7μm) candidate array, referred to as detector "FPA141" in the test suite. The objective is to support detector acceptance testing by assessing the impact of bad pixels on the WISE sky depth-of-coverage.

The simulations below continue those presented for bands 3 and 4 (Si:As arrays) in Coverage Simulations for WISE Detector Acceptance Testing. We adopt the same assumptions, methodology, and software presented therein, with a slight improvement in the analysis method.

1. Inputs

The main input is a bad-pixel mask image provided by the WISE Science Project Office. This was received as a 1024x1024 FITS image with bad pixels denoted by a value of "1" and good pixels by a value of "0". We further flagged the 4-pixel wide "reference pixel" border of this mask with 1's. This is an inactive region of the array whose readout will be used for DC-offset calibration.
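The reference-pixel border described above can be flagged with a short routine. This is a minimal sketch, assuming the mask has been read into a 1024x1024 NumPy array with 1 = bad, 0 = good; the function name is illustrative, not from the actual pipeline:

```python
import numpy as np

def flag_reference_border(mask, width=4):
    """Flag the inactive reference-pixel border as bad (value 1).

    `mask` is a 2-D array with 1 = bad, 0 = good; the 4-pixel
    border width follows the description in the text.
    """
    m = mask.copy()
    m[:width, :] = 1
    m[-width:, :] = 1
    m[:, :width] = 1
    m[:, -width:] = 1
    return m

# All-good 1024x1024 mask; only the border becomes flagged.
mask = flag_reference_border(np.zeros((1024, 1024), dtype=np.uint8))
```

The active region is then the inner 1016x1016 pixels.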

For the FPA141 candidate array, bad pixels were identified using the following criteria:

For more details, please contact the WISE Science Project Office at JPL.

With the above criteria, ~22.1% of pixels in FPA141 (active region only) are declared bad. The dominant effect (>16%) is excessive dark current. The FPA141 masks corresponding to each of the above five criteria are shown in Figure 1; the image at bottom right is the mask containing bad pixels flagged using all criteria. Figure 2 shows a histogram of the dark frame used in the analysis.
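Combining the per-criterion masks into the total mask (bottom right of Figure 1) amounts to a logical OR, with the bad fraction evaluated over the active region only. A sketch, with criterion names taken from the Figure 1 panel titles and placeholder all-good masks standing in for the real data:

```python
import numpy as np

# Placeholder per-criterion masks (1 = bad); names follow Figure 1.
names = ["bad_dark", "bad_flat", "bad_noise", "bad_ipc", "bad_sat"]
criteria = {n: np.zeros((1024, 1024), dtype=np.uint8) for n in names}

# Total mask: a pixel is bad if flagged by any criterion.
total_bad = np.zeros((1024, 1024), dtype=np.uint8)
for m in criteria.values():
    total_bad |= m

# Bad fraction over the active region (excluding the 4-pixel border).
active = total_bad[4:-4, 4:-4]
bad_fraction = active.mean()
```

With the real criterion masks, `bad_fraction` would evaluate to ~0.221 as quoted above.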

Figure 1 - Bad-pixel masks created using criteria in Section I.1. Top row (left to right): "bad dark"; "bad flat"; "bad noise". Bottom row (left to right): "bad IPC"; "bad sat"; "total bad pixels".

Figure 2 - Histogram of dark pixel values (e-/sec) for test array FPA141.

II. Methodology

1. Assumptions

The simulations below use realistic mission design parameters available at the time of writing. Note that these are only approximate. The main assumptions are:

2. Procedure

The method used by the core simulation software is outlined in Section II.2 of Coverage Simulations for WISE Detector Acceptance Testing. The coverage maps in that previous work correspond to one realization of a simulated 15-orbit scenario, essentially a single execution of the software. To quantify the dispersion in the fraction of pixels with a given coverage, we executed 100 independent realizations of the software.
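The 100-realization loop can be sketched as below. Here `simulate_coverage` is a stand-in for the core simulation software (a Poisson toy model, not the actual WISE simulator); the maximum depth of 16 matches the deepest coverage seen in the results:

```python
import numpy as np

rng = np.random.default_rng(0)
n_real = 100   # number of independent 15-orbit realizations
max_cov = 16   # deepest coverage tracked

def simulate_coverage(rng):
    # Placeholder for the core simulator: returns a depth-of-coverage
    # map for one 15-orbit realization over the 2048x1024 central region.
    return rng.poisson(7.0, size=(2048, 1024))

fractions = np.zeros((n_real, max_cov + 1))
for i in range(n_real):
    cov = simulate_coverage(rng)
    counts = np.bincount(np.clip(cov.ravel(), 0, max_cov),
                         minlength=max_cov + 1)
    fractions[i] = counts / cov.size   # fraction of pixels at each depth

mean_fraction = fractions.mean(axis=0)  # mean over all realizations
```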

a. Test Sequence

We performed three separate simulations, each consisting of 100 (15-orbit) coverage realizations, corresponding to three separate bad-pixel masks:

  1. Default FPA141 mask derived from all bad-pixel criteria outlined in Section I.1.
  2. Same FPA141 mask, but convolved with an interpolation kernel. The interpolation kernel is represented by the best available estimate of the PRF as deduced from optical characterization; an interpolation kernel will be used for optimal WISE Atlas image generation. The kernel has the effect of smearing (or smoothing) "bad" as well as "good" pixel information across the sky. Since source detection will primarily occur off the WISE Atlas (coadd) images, the impact of kernel smoothing on effective coverage depth in the presence of bad pixels must be deduced. This will enable a more reliable estimate of sensitivity in the long run. A description of interpolation-kernel smoothing, assumptions, and impacts can be found in: Use of Smoothing Kernels.
  3. For comparison, a hypothetical "perfect" detector mask with no bad pixels, but with the 4-pixel wide "reference pixel" border included.
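The mask smoothing in Test 2 can be illustrated as follows. This is a toy sketch: a 5x5 top-hat footprint stands in for the actual PRF-derived interpolation kernel, and any output pixel whose kernel footprint overlaps a bad pixel is itself flagged bad (a binary dilation of the mask):

```python
import numpy as np

half = 2  # half-width of the illustrative 5x5 kernel footprint

bad = np.zeros((64, 64), dtype=np.uint8)
bad[32, 32] = 1  # single bad pixel

# Dilate the mask by the kernel footprint: each output pixel is bad
# if any input pixel within the footprint is bad.
padded = np.pad(bad, half)
bad_smooth = np.zeros_like(bad)
for dy in range(-half, half + 1):
    for dx in range(-half, half + 1):
        bad_smooth |= padded[half + dy:half + dy + 64,
                             half + dx:half + dx + 64]
```

A single bad pixel thus contaminates a 5x5 block of output pixels, which is why the smoothed mask in Test 2 contains more bad pixels than the default mask.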

III. Test Results

A summary of the three simulation runs outlined in Section II.2.a with links to image masks and coverage maps in FITS format is given in Table 1.

Table 1 - Coverage Simulation Test Summary for HgCdTe detector FPA141
Test Run | FITS Mask (1)         | FITS Coverage Map (2)        | Rotation (deg)
---------|-----------------------|------------------------------|---------------
1        | bad_pix_tot141        | bad_pix_tot141_cov0          | 0
         | bad_pix_tot141        | bad_pix_tot141_cov90         | 90
2        | bad_pix_tot141_smooth | bad_pix_tot141_smooth_cov0   | 0
         | bad_pix_tot141_smooth | bad_pix_tot141_smooth_cov90  | 90
3        | perfect_mask          | perfect_cov                  | 0 and 90

Notes to Table 1
  1. Click on this column to download the mask image in gzip'd FITS format. The default mask used in Test 1 was provided by the project office; we added the 4-pixel wide reference border. The mask used in Test 2 includes additional bad pixels from kernel smoothing (see Section II.2.a).
  2. Click on this column to download the simulated coverage map in gzip'd FITS format. A map here represents the realization that falls closest to the mean coverage fractions computed over 100 realizations (executions); see Figures 4, 6 and 8 for histograms.

For each of the test runs outlined in Section II.2.a, we present below mean coverage fractions, maps and histograms computed from the 100 realizations. At a specific depth-of-coverage, these statistics represent the mean fraction of pixels within a simulated 2048x1024 central region, where the mean is computed over all realizations. The coverage maps shown are those whose coverage fractions fall closest to the mean fractions at the respective coverages.
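Selecting the representative map described above (the realization whose coverage fractions lie closest to the mean) can be sketched as below, assuming `fractions` is an (n_realizations, n_depths) array; the function name and the distance metric (Euclidean) are illustrative assumptions:

```python
import numpy as np

def closest_to_mean(fractions):
    """Index of the realization whose coverage fractions are nearest
    (in Euclidean distance) to the mean over all realizations."""
    mean = fractions.mean(axis=0)
    dist = np.linalg.norm(fractions - mean, axis=1)
    return int(np.argmin(dist))

# Tiny example: three realizations, two coverage depths.
fractions = np.array([[0.20, 0.80],
                      [0.40, 0.60],
                      [0.30, 0.70]])
idx = closest_to_mean(fractions)  # realization 2 matches the mean exactly
```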

1. Test Run #1 (Default Mask)

cov.  mean fraction (0 degrees)
1   0.00028095
2   0.00269352
3   0.01501135
4   0.05342559
5   0.12717760
6   0.20711752
7   0.23365784
8   0.18587726
9   0.10747607
10  0.04660579
11  0.01552126
12  0.00400829
13  0.00080187
14  0.00016247
15  0.00008254
16  0.00009998

cov.  mean fraction (90 degrees)
1   0.00018898
2   0.00191286
3   0.01227612
4   0.04846052
5   0.12448507
6   0.21257476
7   0.24403050
8   0.19138428
9   0.10563242
10  0.04248395
11  0.01287307
12  0.00300378
13  0.00053596
14  0.00011322
15  0.00004444

Figure 3 - Coverage maps for Test 1: top = 0 degrees; bottom = 90 degrees. Figure 4 - Coverage distributions for Test 1.
Left: false color JPEG images of coverage maps for Test 1. The color bar at the bottom corresponds to the approximate coverage in pixels. The approximate range shown is ~3.5 (blue) to 10 (white). Right: corresponding coverage distribution with fractions normalized to unity. Dots represent the 100 individual realizations. The lines go through the mean fractions from all realizations. Click on thumbnails to see full-size JPEG maps.

Summary

2. Test Run #2 (with Kernel Smoothing)

cov.  mean fraction (0 degrees)
3   0.00050197
4   0.01632704
5   0.11854209
6   0.26657947
7   0.28923915
8   0.18883699
9   0.08385611
10  0.02738140
11  0.00702859
12  0.00141391
13  0.00024879
14  0.00004444

cov.  mean fraction (90 degrees)
3   0.00025997
4   0.01159792
5   0.10608218 
6   0.27266377
7   0.30500977
8   0.19495896
9   0.08004259
10  0.02309086
11  0.00512552
12  0.00093691
13  0.00016984
14  0.00004166
15  0.00001999

Figure 5 - Coverage maps for Test 2: top = 0 degrees; bottom = 90 degrees. Figure 6 - Coverage distributions for Test 2.
Left: false color JPEG images of coverage maps for Test 2. The color bar at the bottom corresponds to the approximate coverage in pixels. The approximate range shown is 4 (dark blue) to 12.5 (red). Right: corresponding coverage distribution with fractions normalized to unity. Dots represent the 100 individual realizations. The lines go through the mean fractions from all realizations. Click on thumbnails to see full-size JPEG maps.

Summary

3. Test Run #3 (Perfect Mask)

cov.  mean fraction
6   0.00524177
7   0.12215591
8   0.28407260
9   0.29720000
10  0.18370580
11  0.07606465
12  0.02320443
13  0.00611416
14  0.00151106
15  0.00052980
16  0.00019976

Figure 7 - Coverage map for Test 3. Figure 8 - Coverage distributions for Test 3.
Left: false color JPEG images of coverage maps for Test 3. The color bar at the bottom corresponds to the approximate coverage in pixels. The approximate range shown is 6 (black) to 13 (white). Right: corresponding coverage distribution with fractions normalized to unity. Dots represent the 100 individual realizations. The lines go through the mean fractions over all realizations. Click on thumbnails to see full-size JPEG maps.

Summary

IV. Conclusions




Last update - 27 June 2007
F. Masci, R. Cutri, T. Conrow - IPAC