Last modified on 02/05/2010 05:23:20 PM

Meeting Notes

The following are notes from an SDQA Team meeting held in Tucson on 12-13 Jan 2010.

Attendees: Tim Axelrod, Russ Laher, Deborah Levine, Jeonghee Rho, Dick Shaw, and, for the first session, Robyn Allsman and Jeff Kantor. Joining us by phone were Lee Armus, Gregory Dubois-Felsmann, Vince Mannings, and Michael Strauss.

Decisions and Agreements

It has been agreed that IPAC will, within the limits of the available resources, do whatever tasks Tim and Jeff believe are desirable for evaluating the results of DC3b, without regard for their applicability to SDQA as it has been defined. (Note from Deborah: I'd prefer to rename the task, given that. Some part of the infrastructure will be relevant, but most of it probably will not be, especially since limited resources will likely mean solving the immediate problems in an expedient manner. My inclination would be to put this under the envelope of tools and/or pipeline validation. For this note, I will call the team Analysis Support rather than SDQA.)

Format of the Meeting

After reviewing notes from prior meetings, we went through Tim's DC3b data quality goals sequentially, noting down the resultant action items and tasks for the team.

Four categories of tasks/actions were identified:

  • ACTIONS to the SIM team to provide formats/intermediate products/information
  • ACTIONS to the stage developers to provide diagnostics
  • ACTIONS that are unassigned
  • TASKS which would be appropriate for the Analysis Support team

The list of TASKS still needs a refined prioritization, including the identification of specific items to be delivered for DC3b in the April time frame.

ACTIONS to the SIM team

Tim will kick off this set of actions with Andy. The due date for the kickoff is the end of next week (20 Jan 2010).

  • To test ISR we want a set of simulated images where the simulator produces the image at the top of the telescope (“sky before optics”).
  • Need the input cosmic rays and static bad pixels from the SIM team.

Comment from DS: I think we mean a list or image/mask of all pixels that were affected by cosmic rays. Knowledge of the number of CRs or their starting positions & directions is not especially helpful in this context.

  • Need what the simulator has applied for the photometric zeropoint for each image.
  • Need to capture the PSF, preferably in the form of an image, at several locations in the focal plane so the derived PSF can be compared to expectations.
  • Need the catalog/list of variable sources and their properties, and be able to relate them to a given subtracted image.

Comment from DS: We need to clarify what is to be compared, and when. The difference images are not persisted data products.

  • In fact, we need access to the entire catalog of objects and sources (including moving objects).
  • Need access to global atmospheric model (for comparison to derived model from global photometric calibration).

Comment from DS: It would probably also be handy to know the model illumination function across the focal plane.

  • Need to know astrometric parameters (e.g., proper motion vector) for objects in the simulated object catalog.

ACTIONS to stage developers

Deborah to put these out to the LSST-data list and ask Suzy/Schuler? to help track them. Proposed due date: early February.

  • ISR: populate the !imageMode metric/metadata (probably others; need to clean this up)
  • Middleware: Exceptions that are thrown by pipelines should also send failure events. (DAL to send to David Gehrig).
  • ICP: capture and make available the derived photometric zeropoint (!photZeropt)
  • Since the ICP is in the best position to compute many of the needed SDQA statistical quantities, we need to ensure the ICP can be run on any LSST image, including subtracted images, background-subtracted images, difference images, etc.
  • MOPS: tell us what metrics/validation approaches are desirable in detail -- i.e., fill out Tim's Data Quality Page.
  • AP: Serge to put DC3a metrics into Tim's Data Quality Requirements page

  • Astrometric Cal: Dave Monet to tell us what parameters we should analyze for DC3b validation. Presumably this includes the proper motion vector, but are there others?

ACTIONS to database group

Deborah to contact Jacek about these items. Proposed due date: early February.

  • Add table for simulated and generated object catalog matches. (Tim "pretty urgent")
  • We need database integrity and consistency checks performed prior to starting science analysis. Tim noted a couple in his Data Requirements Document (needed for PT1, though not as early as Feb):
    • An Exposure table entry for every exposure processed, with some minimal set of valid metadata
    • No orphaned Sources or DIA Sources
    • Action to the Database team: fill out this list and identify to whom each action should be assigned.
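As an illustration, the consistency checks above could be expressed as simple counting queries, each of which should return zero before science analysis starts. The table and column names here (Source, Object, Exposure, filterName) are simplified placeholders, not the actual LSST schema:

```python
import sqlite3

# Hypothetical, simplified schema; the real LSST table/column names differ.
checks = {
    # A Source row whose objectId has no matching Object row is an orphan.
    "orphaned_sources":
        "SELECT COUNT(*) FROM Source s "
        "LEFT JOIN Object o ON s.objectId = o.objectId "
        "WHERE o.objectId IS NULL",
    # Stand-in for "minimal set of valid metadata" on every exposure.
    "exposures_missing_metadata":
        "SELECT COUNT(*) FROM Exposure WHERE filterName IS NULL",
}

def run_integrity_checks(conn):
    """Run each consistency query; all counts should be zero."""
    return {name: conn.execute(sql).fetchone()[0]
            for name, sql in checks.items()}
```

The dictionary makes it easy for the Database team to extend the list of checks without touching the driver code.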


  • Crisp up the requirements associated with photometric calibration (beginning of Feb.)
  • Generate a calendar of validation tasks for the Science Collaborations to address. (Assigned jointly to Jeff/Tim? Mid-February.)

Comment from LA: The science team leaders need to know what types of input (feedback) are needed by the project, on what timescales these are needed, and how they can gain access to the necessary data (or tables). On a much smaller scale, this would be very helpful for the IPAC scientists as well, to allow them to organize their efforts.

Actions to Analysis Team Members

  • to Deborah -- flesh out the astrometric cal section of the Data Quality Requirements to say the task is to compare derived astrometric parameters (e.g., parallax, proper motion) in the simulated object catalog to those in the derived object catalog

Unassigned ACTIONS

  • Need to figure out the mechanics of how Analysts and/or analysis tools can get at the ISR bad-pixel inputs (low priority).
  • Need to construct the necessary components from what would be the CALIBRATION PRODUCTS pipeline, e.g. flats. (needed for PT1) (assign to Suzy, she will need to find someone for implementation). Dick is a possible assignee.
  • Need to do simple image statistics (mean, background, std. dev., etc., as captured in SDQA metrics) on any LSST image type. Could be part of the ICP or could be standalone. Probably also useful to have a canned histogram generator for selected pixel values, and to persist these histograms as data products. (Some, if not most, of this is already in the afw.) (assign to Suzy to assign) (PT1)

Comment from LA: Will also need to perform these at multiple places in the processing to track changes and assess functionality (background subtraction, flat fielding, etc.). Measuring the "background" on an image is not trivial given the depth of these data, so this will need to be done in concert with an object identification stage that establishes the footprints at each wavelength first for all stars and galaxies.
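A minimal numpy sketch of what such a statistics routine might look like. The function name, the dictionary of metrics, and the mask convention are illustrative assumptions, not the actual afw/SDQA interfaces:

```python
import numpy as np

def image_stats(pixels, mask=None, nbins=256):
    """Compute basic SDQA-style statistics over the good pixels of an image.

    `pixels` is a 2-D array; `mask` (optional) is a boolean array where
    True marks bad pixels to exclude. Also returns a canned histogram of
    the selected pixel values, suitable for persisting.
    """
    good = pixels if mask is None else pixels[~mask]
    hist, edges = np.histogram(good, bins=nbins)
    return {
        "mean": float(np.mean(good)),
        "median": float(np.median(good)),
        "stdev": float(np.std(good)),
        "min": float(np.min(good)),
        "max": float(np.max(good)),
        "histogram": (hist, edges),
    }
```

Because the routine takes a bare pixel array, it could be applied equally to raw, calibrated, background-subtracted, or difference images, as the bullet above requires.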

  • Determine how to generate expected ground truth for properties of coadds, e.g. PSF, background level, noise level based on SIM inputs. (PT 2 or 3) (Suzy to assign).
  • For subtracted images, determine how to select pixels to be used in evaluation of source subtraction quality. (Assign to someone in the Analysis Support Team. High priority for PT1)
    • pixels in footprints of bright stars
    • pixels selected by histogram
    • ?
  • Discuss with the MOPS team what metrics are being produced and how useful they are. (Assign to Suzy. Need to understand database needs asap; implementation could be PT2 or PT3.)

TASKS for Analysis Support

These tasks are listed in rough priority order. To first order, "high" priority is assigned to those tasks that need to be completed for PT1, "medium" for PT2, and "low" for PT3.

  • High Priority:
    • Compute the metric which captures the variation of the image mode across the CCD (or FPA for that matter) and compare to a threshold. (See SDQA Background Evaluation proposal.)

Comment from DS: Here is my take on what is needed: accumulate histograms of the input images, or amplifiers in the case of raw data (with a choice of binning... we do not necessarily want to sample the full dynamic range of the sensor at 1-DN resolution). Note that the histograms should be accumulated only over "good" pixels--i.e., excluding pixel values that have been flagged in the static bad pixel mask (bad columns & such), or that are saturated. Then determine the mode--i.e., the most common value in the histogram (and possibly the median, to defend against wonky distributions)--and persist the result. Accumulate Tim's statistic, which as I recall is something like the RMS deviation of the modes from the average of the modes over the relevant area, and subject it to a TBD threshold to determine whether the backgrounds agree sufficiently. To determine whether the bias level was adequately subtracted, the modes should be determined on a per-amp basis and evaluated over the CCD; to determine whether the astrophysical background is sensible, the mode should be determined per CCD and evaluated over the focal plane. A possible refinement would be to allow for a linear background gradient, which would be less sensitive to expected variations in the background, such as from scattered light or smooth variations near very extended targets (angularly large galaxies and emission nebulae). This is at most a few days' work to implement, test and document, assuming the relevant test data are at hand, and allowing a little time to explore some options/refinements as noted above.

Comment from LA: Computing this quantity from an image is straightforward, but we will have to think a bit more about what it is telling us about the variations in the data and how susceptible the mode is to CCD defects or edge effects. Should do this in combination with a filtered mean or median of a smoothed image. It would be good to know when in ISR this happens (at the end, after isrDefectStage?) and how the values are stored. Basic Q: what is the definition of an amplifier segment (as opposed to a CCD image) in the pipeline, since the ISR seems to run on an amplifier-segment-sized image as the basic unit?
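DS's procedure can be sketched in a few lines of numpy. The binning (4 DN), function names, and region representation are assumptions for illustration, not the SDQA implementation; the per-amp vs. per-CCD distinction is handled by what the caller passes in as "regions":

```python
import numpy as np

def mode_of(pixels, mask=None, binsize=4.0):
    """Histogram mode of the good pixels, at `binsize`-DN resolution.

    `mask` (optional, boolean) marks bad/saturated pixels to exclude.
    """
    good = pixels if mask is None else pixels[~mask]
    lo, hi = float(good.min()), float(good.max())
    nbins = max(1, int(np.ceil((hi - lo) / binsize)))
    hist, edges = np.histogram(good, bins=nbins)
    i = int(np.argmax(hist))
    return 0.5 * (edges[i] + edges[i + 1])  # center of the peak bin

def background_uniformity(regions, threshold):
    """Tim's statistic as recalled above: the RMS deviation of the
    per-region modes from their average, compared to a TBD threshold.

    `regions` is a list of (pixels, mask) pairs -- per-amp over a CCD to
    check bias subtraction, or per-CCD over the focal plane to check the
    astrophysical background.
    """
    modes = np.array([mode_of(p, m) for p, m in regions])
    rms = float(np.sqrt(np.mean((modes - modes.mean()) ** 2)))
    return rms, rms <= threshold
```

The linear-gradient refinement DS mentions would replace the flat "average of the modes" with a fitted plane before taking the RMS.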

  • Automate the comparison to ground truth for cosmic ray removal -- we want to look at the performance with different image characteristics, such as background level, source crowding, and proximity to bright sources.

Comment from DS: I think the most straightforward (and interpretable) approach is to compare the footprint of pixels that have been flagged as affected by CRs to the footprint of the CRs as recorded in the applicable bit-plane of the Simulation image DQ mask. One could construct a separate bit-mask that encoded whether CR pixels were identified correctly, CR pixels were detected where there were none, or were not detected where they should have been. Summary statistics (indicating frequency but not pixel location) could be recorded as well. LA is correct that the fidelity of identifying CR-affected pixels is likely to vary with the background in the image, proximity to bright sources, etc. Presumably these quantities can be derived from the companion "truth" image, although comparisons of this sort might be beyond this round of automated processing... and more amenable to off-line analysis. Depending upon the choices made above, this work is probably between a few days and a week for implementation, testing, and documentation.

Comment from LA: Might want to look at the number and location of detected (vs. input) CRs as a function of overall background level, location on the CCD, brightness and shape (angle of incidence, perhaps) of the CR, proximity to real objects (bright stars and galaxies), etc. Will the saved quantity be the entire pixel footprint of the detected CR, along with other quantities? A look at the input parameters for the CRs in the sim data would help us create a list of the most relevant output parameters.
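DS's separate comparison bit-mask might look like the following. The bit values and names are illustrative, not actual pipeline mask-plane codes; the inputs are assumed to be boolean per-pixel CR flags from the simulation truth mask and from the pipeline:

```python
import numpy as np

# Illustrative bit values for the comparison mask.
CR_CORRECT = 1   # CR pixel flagged in both truth and detection
CR_SPURIOUS = 2  # flagged by the pipeline where truth has no CR
CR_MISSED = 4    # truth CR pixel the pipeline did not flag

def compare_cr_masks(truth_cr, detected_cr):
    """Compare per-pixel CR flags (boolean arrays of the same shape).

    Returns the comparison bit-mask plus summary counts -- the
    frequency statistics, without pixel locations, that DS suggests
    recording.
    """
    comparison = np.zeros(truth_cr.shape, dtype=np.uint8)
    comparison[truth_cr & detected_cr] |= CR_CORRECT
    comparison[~truth_cr & detected_cr] |= CR_SPURIOUS
    comparison[truth_cr & ~detected_cr] |= CR_MISSED
    summary = {
        "correct": int(np.count_nonzero(comparison & CR_CORRECT)),
        "spurious": int(np.count_nonzero(comparison & CR_SPURIOUS)),
        "missed": int(np.count_nonzero(comparison & CR_MISSED)),
    }
    return comparison, summary
```

Binning these counts by background level or distance to bright sources, per LA's comment, would then be an off-line analysis step on top of the persisted mask.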

  • Collate and present failure-rate statistics for WCS, PSF, and other selected stages.

Question from DS: we recognized that outright failures (i.e., exceptions) likely mean downstream stages would not be executed, nor would intermediate products be preserved. Would this compromise the ability to diagnose the cause of the failure? Also, are we recording instances of failures, and/or instances where the stages report failure of the result to pass DQ checks?

  • Characterize the fidelity of the WCS solution (see Russ' WCS failure check proposal). Take the pixel coordinates of the sources used to compute the WCS and derive the sky coordinates from the WCS, then compare to the catalog, getting distance statistics as in Tim's document. Or, equivalently, work back from sky to pixel coordinates. Raise an event if the result is "too bad". Statistics are higher priority than raising an event.
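A sketch of the pixel-to-sky direction of this check, using a small-angle separation; the callable-WCS interface, function name, and the median-based "too bad" criterion are assumptions for illustration, not Russ' actual proposal:

```python
import numpy as np

def wcs_fidelity(wcs_pixel_to_sky, sources, catalog, max_median_arcsec):
    """Distance statistics between WCS-derived and catalog positions.

    `wcs_pixel_to_sky(x, y)` maps pixel coords to (ra, dec) in degrees;
    `sources` is an (N, 2) array of the (x, y) pixel coordinates of the
    sources used to fit the WCS; `catalog` is the matching (N, 2) array
    of catalog (ra, dec). Returns the statistics and a flag that would
    trigger the "too bad" event.
    """
    ra, dec = wcs_pixel_to_sky(sources[:, 0], sources[:, 1])
    # Small-angle separation in arcsec, with the cos(dec) factor on RA.
    dra = (ra - catalog[:, 0]) * np.cos(np.radians(catalog[:, 1]))
    ddec = dec - catalog[:, 1]
    sep = 3600.0 * np.hypot(dra, ddec)
    stats = {"median": float(np.median(sep)),
             "mean": float(np.mean(sep)),
             "max": float(np.max(sep))}
    return stats, stats["median"] > max_median_arcsec
```

Persisting the statistics unconditionally, and raising the event only on the threshold test, matches the stated priority order.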
  • For the pixels to be used in evaluating template-subtracted images, generate a histogram of those pixels and evaluate, using a TBD method, whether there is too much residual structure. OPEN ISSUE

Comment from DS: Most of the power in spurious differences is in the vicinity of brighter sources. One possibility is to consider only those pixels in the footprint of detected sources, and measure the power of the spatial structure. That is, simple (e.g., Gaussian or bipolar) variations presumably record a valid variation (brightness or position, respectively), but higher power may indicate a problem. However, detecting bipolar variations may indicate a spatially mismatched template. This one strikes me as ripe for additional analysis and definition.

  • Compare DIA sources for an image to the input simulated variable sources to determine Completeness, Reliability (false positives) and photometric property recovery.

Comment from DS: Presumably this boils down to comparing photometric properties between the source catalogs as generated by the pipeline vs. the input catalog for the Simulations. Or am I missing something?
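Per DS's reading, this is a catalog-to-catalog comparison. A nearest-neighbor positional match is the simplest sketch (the match radius, names, and the omission of the photometric-property comparison are simplifying assumptions):

```python
import numpy as np

def completeness_reliability(truth_radec, detected_radec,
                             match_radius_arcsec=1.0):
    """Match DIA sources to the simulated variable-source catalog.

    `truth_radec` and `detected_radec` are (N, 2) arrays of (ra, dec)
    in degrees. Completeness = matched truth / all truth;
    reliability = matched detections / all detections (so
    1 - reliability is the false-positive rate). A full comparison
    would also check recovery of the photometric properties.
    """
    r = match_radius_arcsec / 3600.0  # degrees
    matched_truth = 0
    matched_det = np.zeros(len(detected_radec), dtype=bool)
    for ra, dec in truth_radec:
        d = np.hypot((detected_radec[:, 0] - ra) * np.cos(np.radians(dec)),
                     detected_radec[:, 1] - dec)
        i = int(np.argmin(d))
        if d[i] <= r:
            matched_truth += 1
            matched_det[i] = True
    return (matched_truth / len(truth_radec),
            matched_det.sum() / len(detected_radec))
```

The same matching machinery would serve the medium-priority Deep Detection comparison below, which is one reason to generalize Serge's association code rather than write this from scratch.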

  • Ensure we can capture PS-MOPS metrics into our analysis framework.
  • Medium Priority:
    • Do simple image statistics on the background-subtracted image to see if it really looks background-free. OPEN ITEM: HOW TO DO THIS.

Comment from DS: we noted in the discussion that the various contributions to the background (large, diffuse sources; sky background; scattered light; fringing; faulty flat-fielding) often have signatures on different spatial scales. This could perhaps be leveraged in the evaluation.

  • Compare the Deep Detection and Measurement object catalog to the simulated object catalog, e.g., completeness and properties. Note that Serge's association code can probably be made more general and applied; Russ thinks he's already doing this for PTF. Tim notes this is a good task for the Science Collaborations: we could facilitate it by generating the catalog of matches and letting the community do the actual analysis.
  • Low Priority:
    • When the modeled PSF is understood, automate the comparison of the derived PSF to the simulated PSF. OPEN ITEM: HOW TO DO THIS.

    Comment from DS: it was recognized in the discussion that it is probably hopeless to compare the parameterizations of the simulated PSF vs. the characterization of the PSF by the pipeline. Ideally, some means could be found to compare the images (i.e., realized 2-D profiles) of representative PSFs.

  • In addition, compare derived properties of the PSF (flux) to simulated properties. OPEN ITEM: HOW TO DO THIS.
  • Evaluate PSF and other ICP-type properties for coadded images in the same way as for single-frame images -- compare to expectations. OPEN ISSUE: how to get the expectations. Tim thinks this is probably not a priority for DC3b.

Additional Notes

  • Strauss – scientists need to be able to do science analysis of the data. Motivation for science collabs is that it will set them up to work with data… need documentation. Need handholding. Need clear statement of goals.
  • Schedule a meeting to discuss what groundwork needs to be done to support the Science Collaborations and IPAC in analyzing data. Maybe IPAC can do the legwork to make a list of what's needed; this is summer timeframe.
  • Russ has methodology to use for determining completeness and reliability of cosmic ray removal.
  • There was considerable discussion about how much to analyze the performance of flat-fielding. Maybe the best thing would be to come up with a measure of the flatness of the image. This is an open item at this point.
  • Tim wants to emphasize that we need to be able to compare two images and determine if they are the same – in the case of PSF comparison, one approach might be to synthesize the PSF image postage stamp from the extracted properties and then compare it to the simulated image. In other cases, statistics on derived image products (difference images) might be sufficient. We tabled this until we understand what the simulated info comprises. OPEN ITEM.
  • Metrics split into two types: some which work in the long run, and some which connect to the simulator. We need to be able to run some of these post facto so we don't muddy the timing stats for the ones that don't really go into ops.
  • We will probably eventually want preview/quicklook images generated and stored (small downsampled postage stamps).
  • Russ suggests php:mySql as a SQL server for the Sci Collabs. TBD who would set this up. We'd also need canned SQL queries.
  • Tim notes that science collabs would be helpful to address the item "limitations on systematics" under Photometric Calibration. This is really a hands-on analysis task and not amenable to automation.
  • Lee would like to have it clearly identified where this stuff involves an analyst "touching pixels" as opposed to working with query-able data.

Comment from LA: Especially for DC3b, it's important to identify the areas where analysis requires working with sim or CFHTLS data, as opposed to where some analysis of pipeline-generated tables or metadata is sufficient. A linking of the prioritized action items to the outputs of the relevant pipeline stages would be most helpful here.

Going Forward

DEBORAH will schedule a telecon at 11am PST on Friday Jan 22 to touch base on this.