
Design for W14 Camera Geometry Rework

Many of the requirements listed here come from Jim's Winter 2014 CameraGeom page and Winter 2013 CameraGeom page.

The full design page is here.

In discussing the design and requirements of the camera geometry it became clear that the representation of the raw data (input data to ISR) is so different from that of assembled exposures that a new container is needed to hold the amp images, the calibration information and assembly information. I have started a design page here.
This work is now part of the RawAmplifier class.

NEW SECTIONS 12/17/2013: CameraGeom Strawman Design and C++ tasks for CameraGeom. See Table of Contents.

Tasks

Tasks for Camera Geom:

  • Assemble requirements -- Done
  • Strawman design -- Done
  • Design review -- Done
  • Respond to design review -- In process
  • Implementation -- In process
    • Python hierarchy -- Prototyped (KSK)
    • Python utils -- 2 weeks (KSK)
    • C++ Detector and Amp classes -- Done (REO)
    • C++ TransformRegistry and CameraCoord classes -- In process (REO)
    • C++ Orientation and CameraPoint -- 2 weeks (KSK)
  • Implement minimum camera models (LSST, SDSS, HSC, etc.) -- 2 weeks
    • LSST, SDSS and DECam (KSK)
    • HSC, SuprimeCam and MegaCam (PAP)

Total is 9 weeks.

Scope

Aspects of the scope that haven't been specifically addressed other than in passing are in italics, and I'd like to get comments on how important they are for this cycle.

  • Rewrite the current camera geometry classes and supporting framework to be less confusing, easier to use, and more complete for DM's purposes.
  • The scope of the camera geom will be to represent a single contiguous field of view.
    • This implies that if a single camera has two detectors with overlapping fields of view (e.g. when using a beamsplitter) each detector will be described by a different camera object.
  • Define and implement a container (possibly pure python) to store the electronic properties of the Detectors.
    • This will contain all the information necessary for the needed calibration, as well as for any pixel access from the on-disk representation, including assembly of amp-level images into sensor-sized pixel grids.
  • The current FpPoint and FpExtent need to be fit into the new design either as is, or replaced by wrappers around afw::geom objects. (RHL) The type safety is important here (cf. Angle), and I'd be sorry to lose it. JFB may not agree.
  • Take into consideration in the design and implementation the usability by other project sub-systems (camera, telescope, simulations, etc.)
  • Namespace re-factoring should be done before implementation begins: afw::image::wcs -> afw::geom, afw::coord::Coord->afw::geom.
    • JFB: I think that this can be considered and done as a separate piece of work either before or after the cameraGeom work, and given priorities I think it's best to assume it will happen afterwards.
    • [KSK] I was assuming it would be easier to do beforehand since there will be more code to change afterward, but I think you're right that it's not going to be quick to do. We should wait.

Requirements


Requirement Summary

Here are the requirements distilled from the discussion below. Unless there is more discussion, these will be the main requirements on the design going forward.

  1. The geometric description of the camera (layout of the devices in the focalplane) will be split from the electronic properties of the devices.
  2. The implementation will be in Python where possible. There will be a (hopefully non-polymorphic) class in C++ to hold the coordinate transforms and other information associated with the detectors and needed in C++. This will eliminate any downcasting in Python.
  3. The design must, at a minimum, support LSST, HSC, Suprime-Cam, MegaCam, SDSS, and DECam.
  4. The design will define a set of coordinate systems and a way to transform between them.
  5. The design will be hierarchical with a top level camera object containing detector objects.
  6. The hierarchy will be extensible so that cameras with intermediate levels of sensor grouping (LSST rafts) can iterate over any level.
  7. The standard set of coordinate systems will be extensible so that other coordinate systems (and accompanying transforms) can be defined.
  8. The 6 axis position of every sensor will be supported.
  9. Detectors will be iterable from the camera level. Every component will be retrievable via index or slot name. Component identifiers will contain vendor and serial information.
  10. Performance will be a consideration in the design.
  11. Each Detector will have a container of possible Filters.
  12. Camera objects will be persistable.
  13. The design will provide simple tools for visualizing the camera geometry.

Please expand this list as necessary.


I have tried to leave out implementation details as much as possible, but some implementation suggestions may be given in the sub-bullets.

Please add to this list, and feel free to push back on any I've included.

  • Split the on-disk representation of the sensors (including electronic properties and amplifier layout) from the physical positions of the sensors in the focalplane
    • As RHL points out, it is confusing to have the on disk layout of the data mixed up with the layout of the focalplane in physical space.
    • One suggestion is that the Detector objects can be associated with a (possibly pure python) Sensor object that contains the electronic information for that slot.
    • (PAP) Don't think Sensor can be pure python if Detector is C++ (may be technically possible, but too much trouble).
    • [KSK] I was thinking that the electronic information didn't necessarily need to be part of the Exposure (as it is now), but could be carried around beside the Detector object. That being said, the electronic information is currently in a C++ Amp object.
  • Downcasting should be kept to a minimum.
    • A Camera, Raft, and Detector class should be sufficient to represent the layout.
    • (PAP) Also need to worry about amplifiers: in at least some cases we care about coordinates on the amplifier (even after CCD assembly), e.g., diagnosing amplifier effects from photometry.
    • [KSK] That's true, but I don't think we want to include them in the hierarchy. My view is that that information should be in the class that holds the rest of the electronic information.
  • Avoid making any LSST specific assumptions.
    • The design should support many different camera designs.
      • [RAS] Enabling (or at least, not precluding) other camera configurations would make this software immensely useful to other projects. One particularly interesting layout is that for the DECam focal plane, where the sensors are interleaved on the focal plane in a broken joint pattern. [RHL] This is not a problem. You can't pretend that CCDs lie on a grid -- it fails for HSC (with 4 rotated chips) as well as DECam
    • It should not be difficult to support very simple (in terms of number of sensors) cameras.
    • (PAP) In particular, it should be simple to support single- (or no-) raft cameras; e.g., skipping the "raft" level in most operations (e.g., iteration).
    • [KSK] That is an important point, Paul. Russell has a proposal about this.
  • The design will define a set of coordinate systems and a way to transform between them.
    • An example of such a set is: "DetectorPixels", "DetectorPhysical", "CameraPhysical", "CameraTangent", "SkyTangent" defined here.
    • Care should be taken in defining the origin for each system.
      • JFB: note that when I wrote the page that recommended those coordinate systems, I was intending them as a straw-man, and I am in no way an expert on this topic; input should definitely be gathered from others as to which coordinate systems should be predefined.
    • (PAP) Write all these coordinate systems in FITS image headers as multiple WCSes.
    • [RAS] Bear in mind that representing image arrays containing overscan pixels is not really supported in FITS.
    • (PAP) Support (if not use) XYTransform.
    • (PAP) Support fast transformations on single and multiple positions (either through providing both single and vector transformation APIs, or providing a transformation object that does).
    • (PAP) Will this support such transforms as focal plane (u,v) --> CCD (i,x,y) without knowing which particular CCD in advance?
  • Support adding additional coordinate systems
    • I'm not absolutely convinced that this is necessary, but it's hard to predict what other subsystems are going to want to do with the camera geometry.
    • If this is a requirement, I think the design should specify a reference coordinate system so that any added coordinate systems need only specify the forward and reverse translation to that system.
      • JFB: I think whether it's a requirement depends on how well we can determine the set of predefined coordinate systems in advance, though I agree that we will need to set a reference coordinate system (we could also allow any of the predefined coordinate systems to be used as a reference).
      • JFB: Considering that some camera element positions may be defined in 3-d, it's possible that we may want to be able to derive 2-d coordinate transformations based on those 3-d positions. That sounds hard, so I wouldn't assume it's a requirement, but it's something to ask others about.
    • (PAP) I believe this is necessary to support the other teams and for ourselves (e.g., telescope coordinate systems, tree-ring-corrected coordinates).
  • Any number of detectors in any orientation with 3D positions will be supported.
    • (PAP) Is there a limitation that the detectors not overlap?
    • [KSK] As I mentioned in the Scope, I don't think we want to support detectors that overlap. If they overlap in the CameraTangent coordinate system, they should be two different cameras.
    • [RAS] As a point of information, I am interested in developing a FITS convention for expressing the arrangement of sensors in a FPA. I think the design work for CameraGeom may provide a compelling use case for the convention.
  • The design will be hierarchical with back references from each child to its parent.
    • A proposal is to have a Camera, Raft, and Detector level.
    • The design should allow direct iteration of any level below it if anything more complicated than {Camera, Detector} is accepted. This simplifies iteration for cameras that have only one Raft.
  • The design will include a mechanism for identifying components by index and by slot name.
    • Of course each component will also have some sort of unique identifier (e.g. serial number, unique name). This should be held with the electronic information mentioned in the Scope section.
    • [RAS] I would definitely include the manufacturer serial number of the sensor. There is the possibility that, during the 10-yr survey one or more CCDs will fail, and be replaced. Thus the electronic properties of such sensors residing at a given index would change within a DRP cycle.
  • Performance should be a concern in the design.
    • Some calibration simulation applications will do many, many queries of the camera to determine which detector contains a particular position (if any at all). This should be as fast as possible; one possible lookup approach is sketched at the end of this list.
  • The camera object should be able to provide some summary information for use by other modules (Mapper) that may not need to know the full details of the camera state.
    • When I mentioned this to Russell, he said that he thought the Mapper or Butler should be doing this based on introspection of the Camera object.
    • (PAP) It isn't clear what you intend here. What summary information, and why?
    • [KSK] An example is that pipeQA cares which CCD is next to which, but does not care about the details of the layout or transforming coordinates. Perhaps each component should know how to get the summary information it wants.
  • The Detector object will hold a container of possible Filter objects, with a flag to indicate whether the Filter is applied to the camera as a whole or is statically associated with individual Detectors (to support SDSS-like cameras).
    • (PAP) I think this is confusing requirements with implementation. The flag is beside the point; the requirement is that a Detector needs to know what filter applies (as opposed to an entire camera having a single filter).
    • [KSK] Agreed.
  • The camera object should be persistable.
    • In cases where the camera is modified programmatically, it should be possible to cache the modified version and read it later.
    • (PAP) Also makes construction very simple, but we should not require a persisted camera in order to create a camera.
  • The design will include tools for making representative visualizations of the camera geometry.
    • This will be very important for building and debugging camera geometries. I think it belongs as part of this design rather than as part of another package.
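
To make the fast-lookup requirement above concrete, here is a minimal sketch (all names hypothetical; getCorners and "focalPlane" follow the strawman design later on this page but are not a committed API) of answering "which detector contains this focal plane position" by precomputing per-detector bounding boxes:

    class DetectorLookup(object):
        """Hypothetical helper for fast position -> detector queries.

        Assumes each detector can report its focal plane corners as
        (x, y) pairs.
        """
        def __init__(self, detectors):
            # Precompute axis-aligned bounding boxes once, so repeated
            # queries avoid full coordinate transforms.
            self._boxes = []
            for det in detectors:
                xs, ys = zip(*det.getCorners("focalPlane"))
                self._boxes.append((det, min(xs), min(ys), max(xs), max(ys)))

        def find(self, x, y):
            """Return all detectors whose bounding box contains (x, y)."""
            return [det for det, x0, y0, x1, y1 in self._boxes
                    if x0 <= x <= x1 and y0 <= y <= y1]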

Further Discussion

A couple of times it has come up that doing the implementation primarily in Python is attractive in some ways. Jim has pointed out, however, that the transforms need to be in C++ and so the majority of the hierarchy does as well. He has also made the point that subclassing the Camera, Raft, and Detector objects should be avoided since they do not hold anything that is unique to a camera. Most of the camera specific information (which is related mostly to the Detectors) could be held in a Sensor object which could be subclassed for different detectors.

Simon is somewhat concerned that other groups may want to use the Camera in conjunction with different kinds of maps (e.g. 2d filter characteristics, sensor height maps), but I don't think the current design precludes that as long as the maps are in one of the supported coordinate systems.

PAP:

  • "The scope of the camera geom will be to represent a single contiguous field of view." --- Is this a necessary restriction?
  • [KSK] It's probably necessary, but if chips overlap it makes things hard. That use case can easily be handled by constructing more than one camera.
  • Pure python is probably not possible, but it is important to be able to build the hierarchy simply in python. It would be nice to follow Jim's idea of being able to unpersist a simple hierarchy from a FITS table or similar.
  • [KSK] Given discussions with Jim and Russell (summarized below), I believe we can get away with putting everything but the Detector and possibly the electronic information classes in Python.
  • I think it's very important to attempt to support the Camera and other groups, and this should be a main goal of this work.
  • Don't care too much about refactoring namespaces: it seems to me that whatever we choose will be sub-optimal as there's so much interplay.
  • [KSK] I descoped this based on suggestions from PAP and JFB.

C++/Python boundary and Camera Hierarchy

Russell took these notes based on a phone conversation between Jim B., Simon K. and Russell O. on Oct. 31 2013:

C++ vs. Python

Simon and Russell would prefer to write most of CameraGeom in Python, whereas Jim's document <https://dev.lsstcorp.org/trac/wiki/Winter2014/Bosch/CameraGeom%2BWcs> suggested doing it all in C++. However, Jim was quite amenable to writing much of it in Python as long as the pieces needed by C++ code (e.g. measurement tasks) were written in C++. At this point we think the main thing that must be in C++ is coordinate transformations (which are a natural fit for C++ anyway). However, we solicit ideas for other pieces that must be in C++.

If much of CameraGeom is written in Python then we agreed that the following is true:

  • There is no reason to worry about downcasting and avoiding polymorphism for the Python code.
  • We do not need to go to heroic lengths to enforce immutability of the CameraGeom object. The usual Python conventions will suffice, including:
    • Do not provide mutating methods, thus making it hard to mutate CameraGeom internals
    • Provide a clone method to return a deep copy, for those situations where somebody does want to update internals (e.g. fitting improved coordinate transformations).
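
As a minimal sketch of these conventions (class and attribute names are illustrative only, not part of the design):

    import copy

    class ImmutableExample(object):
        """Illustrative only: an immutable-by-convention Python object."""
        def __init__(self, name, transforms):
            # No setters are provided, so accidental mutation is unlikely.
            self._name = name
            self._transforms = dict(transforms)

        def getName(self):
            return self._name

        def clone(self):
            """Deep copy, for callers who genuinely need to modify internals
            (e.g. after fitting improved coordinate transformations)."""
            return copy.deepcopy(self)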

What CameraGeom information should be attached to Exposures?

We agreed that the following are the minimum necessary:

  • Coordinate transform(s)
  • Electronic information (we do this now)
  • A detector identifier, so that one can efficiently get the relevant information from CameraGeom
  • [RHL] A common pattern is to:
    • Retrieve a raw Exposure from the butler
    • Use the associated Detector to iterate over the amplifiers (e.g. subtracting bias; trimming)

This implies that the amp geometry needs to be present in the C++ part of the object. I'm not convinced that we don't need a C++ object.

  • [KSK] I don't have a problem with having a C++ Detector class.
  • [KSK] Since all the looping over amps is in python (ISR), can't we store the electronic information in python as part of the camera and use the detector id associated with the C++ Detector to retrieve that information when it's needed? I imagine something like:
    ccdImage = butler.get("raw", **kwargs)    # raw amp-level exposure
    camera = butler.get("camera", **kwargs)   # camera description
    ccd = ccdImage.getDetector()

    ccdInfo = camera.getCcdInfo(ccd.getId())  # electronic info kept in python
    dim = ccdInfo.getDimensions(trimmed=True)
    trimmedImage = ccdImage.Factory(dim)
    for a in ccdInfo:                         # loop over amplifiers
        data = ccdImage[a.getDataSec(False)].getMaskedImage()
        tdata = trimmedImage[a.getDataSec(True)].getMaskedImage()
        tdata[:] = data

We also agreed that it was not desirable to store the complete camera geometry in each exposure. Tasks such as meas_mosaic that need information about neighbors should take CameraGeom as a separate input.

Rafts

Russell floated a proposal that he will add to Simon's Trac page <https://dev.lsstcorp.org/trac/wiki/Winter2014/Design/CameraGeom>:

In order to simplify use of CameraGeom by other projects, and to focus on the essential issues of where detectors are in the focal plane, the base CameraGeom class will be a collection of detectors. Given a CameraGeom one can access detectors by index or by named slot (where the naming convention is camera-specific) and iterate in the usual fashion. Detector geometry is relative to the focal plane.

Rafts or other extra levels of hierarchy will be handled by subclassing CameraGeom. Thus LsstCameraGeom will contain not only the collection of Detectors (or LsstDetectors), as above, but also a collection of Rafts. One can iterate by raft (or by detector) and each Raft knows which Detectors it contains. Since an LSST Detector should probably know which raft it is in, we should subclass Detector to add that information. In this view rafts need not necessarily contain geometrical information (though they can if it proves to be helpful).

This raises a few questions, including:

  • How to name detector slots when one has rafts. One obvious possibility is strings such as "R1,1 S0,2". Rafts will also have named slots.
  • How this interfaces with the butler. Clearly we can retain the current interface by having the butler use rafts and leave it at that. However, I think we should consider adding a new ID key for the full detector slot name (e.g. "R0,1 S2,0") allowing access without directly worrying about rafts. The default butler will always offer that, and it is up to projects to offer a different set of ID keys to support a deeper hierarchy.
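
For example (hypothetical dataIds; the exact key names are up to each obs package), the two access styles might look like:

    # Current style: raft and sensor as separate ID keys
    raw = butler.get("raw", visit=1234, raft="0,1", sensor="2,0")

    # Proposed additional style: a single key holding the full detector
    # slot name, so callers need not know about rafts at all
    raw = butler.get("raw", visit=1234, detector="R0,1 S2,0")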

CameraGeom Strawman Design

Summary of discussion with Simon and Andy B, updated after some on-list discussion about the coordinate class (with especially useful input from Jim Bosch) and a private conversation with Jim about Length and XYTransform. Updated 2013-12-19 based on a conversation with Robert Lupton and Jim Bosch which resulted in removing the suggestion to template XYTransform and add the Length class.

Introduction

Camera geom supports conversions between various focal-plane-related coordinate systems. The default class will provide conversion between focal plane, pupil and detector pixel coordinates (described below). A particular obs_x package can provide support for additional coordinate systems. These conversions will be carried out using a ConversionRegistry (see below for the design). Each registry will have a reference coordinate system through which all conversions are carried out.

Camera geom knows nothing of the pointing or orientation of the focal plane on the sky, but does know about distortion (at least mean distortion, not necessarily time-varying distortion). The Camera will have a ConversionRegistry. We expect a useful reference coordinate system for Camera ConversionRegistry objects will be focal plane coordinates.

Focal Plane Coordinates:

  • a Cartesian rectilinear coordinate system
  • x, y is in the focal plane
  • it will typically be centered at the middle of the camera, but that depends on the camera

Pupil Coordinates:

  • A 2-d coordinate system that represents distance on the sky: focal plane x,y coordinates with distortion and a scale change
  • x and y are aligned with focal plane x, y

Pixel Coordinates (or Detector Pixel Coordinates):

  • x,y pixel position on a detector

Notes:

  • We may also want Detector Physical Coordinates
  • RHL also suggests that we support Camera Pixel Coordinates. We at UW aren't convinced, because there are no real pixels at those locations, but Russell notes that a typical camera-wide WCS might be easier to understand if its pixel-like units were pixels instead of mm.
  • The Detector will also have a ConversionRegistry for converting coordinates relative to the detector pixel grid. A natural reference coordinate system for detectors is the pixel grid projected onto the back side of the silicon where photons first interact with the device. This allows field line effects like tree rings to be modeled as a perturbation on the reference coordinate system.

Requirements

  • We must be able to construct, persist and unpersist a Detector without constructing a Camera
  • We don't want one Detector to have to know about other Detectors
  • It should be efficient and not too clumsy to go from focal plane mm to detector pixels when one doesn't know the detector in advance. Since Detectors don't know about other Detectors, we think this must be handled by Camera.
  • We must support some kind of wavelength-dependent transformation for our coordinate conversions -- at least at the level of a set of transforms, one per filter.

Classes (all are immutable)

Camera geom will contain the following classes:

DetectorCollection: python

  • Constructed from a list of Detector objects
  • Contains a collection of the input Detectors
  • Provides an iterator over all contained Detectors
  • Allows access of individual Detectors by name, index (position in the input list), or serial.
  • This will be sub-classed to make the Camera object, but can also be sub-classed to function as an LSST-like Raft.
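
A minimal sketch of such a collection (method names illustrative, not final):

    class DetectorCollection(object):
        """Sketch: immutable collection of Detectors with several access paths."""
        def __init__(self, detectorList):
            self._detectors = list(detectorList)
            self._byName = dict((d.getName(), d) for d in self._detectors)
            self._bySerial = dict((d.getSerial(), d) for d in self._detectors)

        def __iter__(self):
            # Iterate over all contained Detectors
            return iter(self._detectors)

        def __getitem__(self, index):
            # By position in the input list
            return self._detectors[index]

        def getByName(self, name):
            return self._byName[name]

        def getBySerial(self, serial):
            return self._bySerial[serial]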

Camera (formerly called CameraGeometry): python

  • Extends DetectorCollection to include coordinate conversions
  • The Camera object will have class methods that will aid in construction of Detectors and coordinate conversion objects (XYTransforms).
    • One example is a method for constructing a first order pupil to focal plane conversion from focal length and pincushion.
    • Another example is a method for constructing focal plane to pixel coordinate conversion from detector position in the focal plane and Euler angles for the device.
  • Has a method that finds the Detectors (if any) that contain a given position. Russell thinks it is safe to support overlapping detectors.
  • Supports the ability to convert between pupil and focalPlane coordinates, at a minimum. More coordinate systems can be added for a particular camera. Internally this is supported using a ConversionRegistry (see below).
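
A sketch of how Camera might build on DetectorCollection (the contains method on Detector is hypothetical, and the ConversionRegistry is described below):

    class Camera(DetectorCollection):
        """Sketch only: a DetectorCollection plus camera-wide conversions."""
        def __init__(self, detectorList, conversionRegistry):
            DetectorCollection.__init__(self, detectorList)
            self._registry = conversionRegistry  # e.g. "pupil" <-> "focalPlane"

        def convert(self, cameraPoint, toSys):
            # Delegate to the ConversionRegistry (see below)
            return self._registry.convert(cameraPoint, toSys)

        def findDetectors(self, cameraPoint):
            """Return the (possibly empty) list of Detectors containing the
            given position; overlapping detectors are allowed."""
            return [det for det in self if det.contains(cameraPoint)]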

Id: python (removed from design)

  • In the current design, instances of the Detector class know, and can return, their own name and serial. This makes a separate python-side object for carrying this information around unnecessary.
  • ID of Detector and similar objects, such as rafts

Detector: C++

  • Information about a detector, including:
    • amplifier info as an afw table
    • coordinate conversion info as a ConversionRegistry; some converters are shared with Camera's registry and some are unique to the detector.
    • the name of the detector (if Id is in python then this will be stored as a string that is self-explanatory and can be used to produce the correct Id)
  • methods include:
    • toCameraPoint = Detector.convert(fromCameraPoint, toSys)
    • pixelBbox = Detector.getBbox()
    • listOfCameraCoords = Detector.getCorners(coordSys)
    • nameString = Detector.getName()
    • serialString = Detector.getSerial()
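
From Python, exercising this API might look like the following sketch, built only from the signatures above (fromCameraPoint is assumed to be a CameraPoint in a known coordinate system):

    toCameraPoint = detector.convert(fromCameraPoint, "focalPlane")
    pixelBbox = detector.getBbox()               # bounding box in pixels
    corners = detector.getCorners("focalPlane")  # list of CameraPoints
    name = detector.getName()
    serial = detector.getSerial()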

ConversionRegistry: C++

  • A registry of coordSysName:functor that converts coordSys<->focalPlane. Functors will be instances of afw::geom::XYTransform or some variant.
  • Each coordinate system should also have a help string (e.g. as part of the functor).
  • A minimal registry for Camera is "pupil" and "focalPlane". The reference coordinate system for Camera will likely be "focalPlane."
  • A minimal registry for Detector is: "pupil", "pixels" (or "detectorPixels"), and "focalPlane", plus possibly "detectorPhysical". The reference coordinate system for Detectors is "pixels". Likely LSST will want a set of these, one per filter.
  • Written in C++ because Detector contains a conversion registry and Detector is written in C++
  • Conversions will take and return CameraPoint.
  • Eventually vectorized versions must be provided. These will probably convert plain old vectors (or numpy arrays) and not provide the safety of keeping the coordinate system with the position data.

Note that coordinate conversion will require two functors: one to transform "from" coordinates to focal plane coordinates, and the other to transform focal plane to "to" coordinates. This keeps things fairly simple, though in some cases it may be more natural to write a functor that transforms between other coordinates (e.g. idealized detector X,Y pixels and distorted-by-field-lines X,Y pixels). We have been asked to support users providing such local transformations, converting them into transformations to/from focal plane coordinates.
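
A sketch of this two-functor composition (all names hypothetical; it assumes forwardTransform maps the named system to focal plane coordinates and reverseTransform maps back, a convention still to be decided):

    class ConversionRegistry(object):
        """Sketch: maps coordSysName -> functor converting that system
        to/from focal plane coordinates."""
        def __init__(self, transforms):
            self._transforms = dict(transforms)

        def convert(self, cameraPoint, toSys):
            # Two functors per conversion: "from" system -> focal plane,
            # then focal plane -> "to" system
            fromFunctor = self._transforms[cameraPoint.getCoordSys()]
            toFunctor = self._transforms[toSys]
            fpPoint = fromFunctor.forwardTransform(cameraPoint.getPoint())
            return CameraPoint(toFunctor.reverseTransform(fpPoint), toSys)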

CameraPoint (or perhaps CameraCoord; formerly called FpPoint): C++

  • A simple object that combines a coordinate system name and a position in that coordinate system. This provides some useful safety when converting to different coordinate systems.
  • It will need a method to get the point data and the coordinate system.
  • A CameraPoint will also carry an identifier that will allow disambiguation between identically named coordinate systems in a single camera. An example is the 'pixels' coordinate system of which there will be one per detector. This makes it possible to round trip a pixel position to pupil position and back without also carrying the detector object with the CameraPoint.
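
A minimal sketch (names illustrative; the real class will be C++):

    class CameraPoint(object):
        """Sketch: a position tagged with its coordinate system name.

        detectorName disambiguates per-detector systems such as "pixels";
        it would be None for camera-wide systems like "focalPlane".
        """
        def __init__(self, point, coordSys, detectorName=None):
            self._point = point
            self._coordSys = coordSys
            self._detectorName = detectorName

        def getPoint(self):
            return self._point

        def getCoordSys(self):
            return self._coordSys

        def getDetectorName(self):
            return self._detectorName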

RawData (not the final name): python

  • Support reading raw data, assembling amplifiers, and generally converting to our standard format
  • This is something we know we need, but we do not think it drives much of the design and it is not our first priority, so we're working on other aspects of CameraGeom first.

XYTransform

After extensive discussion with Jim Bosch and Robert Lupton, we agree that we'll keep XYTransform pretty much as it is now. Input and output will be pairs of doubles (or similar). We would like to remove the isDetector flag if possible, but may choose to ignore it. We will avoid templating XYTransform, and instead rely on CameraPoint to provide the safety that we want.