Last modified on 10/29/2009 04:59:46 AM

CCD Coordinate System

The CCDs that will populate the LSST focalplane have their pixel arrays partitioned into several segments, which are all read out in parallel. The goal of this proposal is to define a data structure that encapsulates all necessary information about the geometry of these segments within the CCD, and the order in which the pixels in each segment are read. This problem is long-standing in astronomy, since it is faced by all users of imaging mosaics, which have been on the sky for nearly two decades. As an example of the situation we are discussing, here is a schematic of the current design for the LSST CCD (from a May 2008 presentation by Paul O'Connor):

It is worth paying special attention to a couple of Paul's comments. The first is that, while the segments are contiguous along the horizontal axis, there is a subpixel gap between the bottom set and the top set. The other is the note that multiple arrangements of the serial shift directions are possible. Elsewhere in his presentation, he further notes that it is possible that the LSST focalplane will be populated with more than one type of CCD.

Another thing to keep in mind is that we need to build a pipeline that can readily be reconfigured to take in images from different mosaics. This is necessary now because we are driving our data challenges with precursor data, and it will be necessary in the future both because the LSST focalplane is likely to evolve in time as CCDs are replaced, and because we wish to enable others to use our pipelines on their own data.

Within our application code, it is clearly desirable to be able to treat an entire CCD as a uniform array of pixels within a MaskedImage. This is the case wherever one wants to work with the maximal piece of the focalplane that has a truly rigid geometry. Examples are in WCS determination and image stacking. There is obviously geometry to be dealt with in constructing this pixel array. Some of it, such as reflecting some segments about the vertical axis to compensate for differing serial readout directions, may be performed by the camera data acquisition system before we see the data. What may be less obvious is that we still need to know the original sequence with which pixels were read out from the focalplane. One reason is that there are two effects that the Instrument Signature Removal pipeline (ISR) may need to correct that depend on the original order. These are crosstalk correction and charge transfer efficiency correction. The second reason is that there are pixel values read out by the camera which do not correspond to physical pixels on the CCD. These prescan and overscan pixels help define the bias level for each row, and are used by the ISR.
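To make the interplay between electronic and physical geometry concrete, here is a minimal sketch of how a raw segment, as delivered by the data acquisition system, might be trimmed of its prescan/overscan pixels and placed into the assembled CCD array. All names and parameters here (assemble_segment, flip_x, etc.) are illustrative assumptions, not the actual camera or afw API:

```python
import numpy as np

# Hypothetical sketch: assemble one raw amplifier segment into the CCD
# pixel array. The field names are assumptions for illustration only.

def assemble_segment(raw, prescan, overscan, flip_x, dest, y0, x0):
    """Trim non-physical pixels from a raw segment and place it in the CCD array.

    raw      : 2-D array as read out by the DAQ (rows x columns)
    prescan  : number of leading non-physical columns in each row
    overscan : number of trailing non-physical columns in each row
    flip_x   : True if this amplifier's serial readout runs opposite to
               the CCD's physical x axis
    dest     : 2-D array holding the whole assembled CCD
    y0, x0   : logical location of the segment's lower-left corner in dest
    """
    # Keep only the physical pixels; prescan/overscan stay available to
    # the ISR for bias estimation before this step.
    trimmed = raw[:, prescan:raw.shape[1] - overscan]
    if flip_x:
        trimmed = trimmed[:, ::-1]  # undo the serial readout direction
    ny, nx = trimmed.shape
    dest[y0:y0 + ny, x0:x0 + nx] = trimmed
    return dest
```

Note that even after this assembly step, the ISR still needs the original readout order (for crosstalk and charge transfer efficiency corrections), which is why the electronic geometry must be retained rather than discarded once the MaskedImage is built.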

Our goal, then, is to define a class, which I'll provisionally call PhysicalCcd?, that flexibly captures all of these complexities in a way that is algorithmically useful and efficient for the pipelines. Before getting into the requirements for the class, it is fair to ask whether something already in astronomical use could meet our needs. I can only say that I don't think so. The most general existing capability I'm aware of is contained within the IRAF package mscred. It uses an instrument "database" (really a group of text files with name-value pairs) which defines much of the geometry in question. It does not accommodate geometrical discontinuities, however, and does not clearly separate readout order from geometry. The implementation is quite old, IRAF-specific, and does not appear attractive to extend.

Onward to PhysicalCcd?:

Requirements for the PhysicalCcd? class

  • Constructed from a Policy, which is itself an element of a larger Policy for the entire focalplane
  • Defines the geometry of each individual segment as it is physically present in the CCD, including:
    • its orientation and translation with respect to a physical coordinate system for the entire CCD
    • the location of the first pixel in the readout sequence
  • Separately defines the "electronic geometry" of each segment, which describes the way the data is presented by the camera data acquisition system
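As a rough illustration of these requirements, the sketch below represents a Policy as a plain nested dict and builds per-segment geometry records from it. The schema and field names (segments, flip_x, x0/y0 offsets, and so on) are assumptions made for this example, not the actual LSST Policy layout; note that the offsets are floats so a subpixel gap between segment groups can be expressed:

```python
from dataclasses import dataclass

# Illustrative only: a Policy stands in here as a nested dict, and all
# field names are hypothetical.

@dataclass
class SegmentGeometry:
    name: str
    width: int       # physical columns in the segment
    height: int      # physical rows in the segment
    x0: float        # lower-left corner in CCD physical coordinates
    y0: float        # (floats, so a subpixel gap can be represented)
    flip_x: bool     # serial readout direction vs. the CCD x axis
    prescan: int     # non-physical leading pixels per row
    overscan: int    # non-physical trailing pixels per row

class PhysicalCcd:
    """Geometry of one CCD, built from its Policy sub-tree."""
    def __init__(self, policy):
        self.segments = [SegmentGeometry(**seg) for seg in policy["segments"]]

# A toy two-segment Policy fragment:
policy = {
    "segments": [
        {"name": "A0", "width": 512, "height": 2002, "x0": 0.0, "y0": 0.0,
         "flip_x": False, "prescan": 10, "overscan": 20},
        {"name": "A1", "width": 512, "height": 2002, "x0": 512.0, "y0": 0.0,
         "flip_x": True, "prescan": 10, "overscan": 20},
    ]
}
ccd = PhysicalCcd(policy)
```

Keeping the physical fields (width, height, x0, y0) separate from the electronic ones (flip_x, prescan, overscan) mirrors the requirement above that physical and electronic geometry be defined independently.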

These are likely obvious, but they are not quite enough. Consider again the presence of the subpixel gap between the top and bottom segment groups in the proposed LSST CCD. It is easy enough to define this part of the geometry, but how shall we actually employ it when we wish to do something useful with the resulting MaskedImage? Considering the focalplane example above, how will we calculate the centroid of an object? Clearly its physical position may differ from its logical position by a subpixel amount, which we cannot afford to ignore. Additionally, a FootPrint? which straddles the boundary between the top and bottom segment groups will be missing some flux and will have a distorted shape. What shall we do about this?


The design of the data structures which represent the physical and electronic sensor geometry within PhysicalCcd? is straightforward enough, though careful attention needs to be paid to optimization issues. A detailed proposal for these aspects will be forthcoming. It seems better to start with the functional capabilities of the class. To do this, let's start with a MaskedImage for a whole CCD, and consider building and measuring FootPrints? within it. For building the FootPrints? there is no need to do anything special - just treat the pixel array as usual. But when it comes time to measure those FootPrints?, two things need to happen:

  • Instead of using the logical pixel coordinates in forming the centroid and other quantities that involve the spatial coordinates, such as spatial moments, we need to use the physical coordinates.
  • We need to assign a MaskPlane?, "Discontinuous", which flags the pixels on either side of geometrical discontinuities (the boundary between the upper and lower segment groups in the example above). When we measure a FootPrint?, if any of its pixels have Discontinuous set we can take action (flag it as a bad measurement; regrid those pixels, interpolating across the discontinuity; etc).
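The two points above can be sketched together on a toy CCD. This is not the afw measurement code; the gap size, the boundary row, the physical_y mapping, and the DISCONTINUOUS bit are all assumptions chosen to illustrate the idea of measuring in physical coordinates while checking the proposed mask plane:

```python
import numpy as np

GAP = 0.5       # assumed subpixel gap (pixels) between segment groups
NY_HALF = 4     # rows in the bottom segment group of this toy CCD

def physical_y(row):
    """Map a logical row index to a physical y coordinate,
    inserting the inter-group gap for rows in the upper group."""
    return row + (GAP if row >= NY_HALF else 0.0)

DISCONTINUOUS = 0x1  # hypothetical mask bit for pixels bordering the gap

def measure_centroid(image, mask, pixels):
    """Flux-weighted centroid over a footprint's pixels, in physical
    coordinates. Returns (yc, xc, suspect), where suspect is True if
    any footprint pixel has the DISCONTINUOUS bit set."""
    flux = sum(image[r, c] for r, c in pixels)
    yc = sum(image[r, c] * physical_y(r) for r, c in pixels) / flux
    xc = sum(image[r, c] * c for r, c in pixels) / flux
    suspect = any(mask[r, c] & DISCONTINUOUS for r, c in pixels)
    return yc, xc, suspect
```

A footprint straddling the boundary then gets a centroid shifted by the gap's contribution, and its suspect flag lets the caller choose among the remedies mentioned above (reject the measurement, regrid across the discontinuity, etc.).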

It does seem that the first of these would best be satisfied by adding that functionality to the existing functions indexToPosition() and positionToIndex() within afw. In conjunction with the CCB proposal to make the use of these functions mandatory, this would make the geometrical discontinuity completely transparent to application code. At the moment, the design of these functions does not allow them to be connected to the appropriate information, so if we want to use them for this, some redesign will be required.
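One possible shape for such an extension is sketched below. This is not the current afw signature; the gap value, the boundary row, and the residual-returning inverse are assumptions for illustration:

```python
# Hypothetical sketch of discontinuity-aware index/position conversion.
GAP = 0.5        # assumed gap (pixels) between lower and upper segment groups
BOUNDARY = 2002  # assumed first logical row of the upper group

def index_to_position(row):
    """Logical row index -> physical y coordinate, absorbing the gap."""
    return float(row) + (GAP if row >= BOUNDARY else 0.0)

def position_to_index(y):
    """Physical y coordinate -> (nearest logical row index, residual).

    Positions falling inside the gap itself are snapped to the nearest
    physical row; a fuller design would have to decide how to report them.
    """
    if y >= BOUNDARY + GAP:
        y -= GAP
    idx = int(round(y))
    return idx, y - idx
```

With the gap handled inside this pair, application code that converts through them consistently never needs to know the discontinuity exists, which is precisely the transparency the CCB proposal would enforce.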

More to come...


Comment by krughoff on Wed 28 Oct 2009 11:59:46 PM CDT

Attached is a pdf outlining a class for describing the physical geometry of the camera system. The electronic geometry still needs to be worked out in detail.
