Changes between Version 7 and Version 8 of DC3/VerificationAndValidationPlan


Timestamp:
01/27/2010 09:33:51 PM
Author:
robyn
Comment:

Add general commentary on who's doing what testing in DC3

  • DC3/VerificationAndValidationPlan

v7 → v8

   * Code modules in the code repository and in the UML model.

- == Test Plan Overviews ==
+ == Validation Testing of the DC3b Data Products ==

+ === User Level Acceptance Validation ===
+ User Level Acceptance Validation is performed by Science Collaborators using DC3b data results in pursuit of their scientific interests.  Within 2 months of acquiring the data, they should provide a report to the DM Scientist discussing such topics as:
+  * how did you use the data to do your science?
+  * what problems did you encounter?
+  * what auxiliary data products were missing?
+  * what data quality issues did you find?
+  * what should we change before LSST goes live?

+ It would be especially useful if the Scientists also provided the tools they developed to analyze the DC3b data, for possible inclusion in DM's SQA validation toolkit.

+ In addition to the summary report, the Science Collaborator may want to provide immediate feedback to the DM Scientist on issues relating to format, use, quality, accuracy, reliability, etc.

+ === Science Data Processing Validation ===
+ Science Data Processing Validation is performed by Data Management scientists, with support from the IPAC LSST scientists, and is coordinated by the DM Scientist. Each major data deliverable will be validated, independently of the algorithm's developer, against quality metrics appropriate for those data.

+ The Science Data Quality Assessment System (SDQA) is an automated software system that collects, statistically analyzes, and presents information about the quality of the science data. It is designed to analyze the quality of the data collected and processed in operations while keeping pace with the data rate and survey progress.

+ The SDQA system will help assess the quality of processing during DC3b. The automated tests will commence once both the DC3b SDQA software and the DC3b software that generates the quality metrics are available.

+ Data Management scientists can use this system as SDQA analysts to identify issues and provide feedback to the DM Scientist as issues are exposed. The DM Scientist is responsible for either resolving the issues before DC3b completes or ensuring that unresolved issues are tracked until fixed in a later release. Status reports on SDQA progress for each deliverable should be provided at the weekly Applications Working Group meeting.

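As a purely illustrative sketch of the kind of check such a system automates (the function, metric names, and thresholds below are invented for illustration and are not taken from the actual SDQA design), computed quality metrics might be compared against configured acceptance ranges:

```python
# Hypothetical sketch of an automated data-quality check; the metric
# names and thresholds are invented and not taken from the SDQA design.

def check_quality(metrics, thresholds):
    """Return the names of metrics that are missing or out of range."""
    failures = []
    for name, (low, high) in thresholds.items():
        value = metrics.get(name)
        if value is None or not (low <= value <= high):
            failures.append(name)
    return failures

# Flag an exposure whose sky background falls outside its allowed range.
metrics = {"psf_fwhm_arcsec": 0.7, "sky_background_adu": 5200.0}
thresholds = {
    "psf_fwhm_arcsec": (0.4, 1.2),          # acceptable seeing range
    "sky_background_adu": (100.0, 4000.0),  # acceptable background level
}
failures = check_quality(metrics, thresholds)
print(failures)  # ['sky_background_adu']
```

An analyst reviewing such a failure list would then raise the issue with the DM Scientist, as described above.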
+ Tests and workflow processes developed while validating a data deliverable should be documented.  Analysis tools (e.g. python scripts, Matlab routines, database queries) created to assist the analysis should be archived within the LSST source archive.  A status report on the results of the analysis should be provided to the DM Scientist.

+ The major DC3b data deliverables are enumerated in Tim Axelrod's DC3b Data Quality Requirements document:
+  * http://dev.lsstcorp.org/trac/raw-attachment/wiki/DC3bManagement/DC3b_Data_Quality_Requirements.pdf
+ A secondary version of that file, including additional comments by DM developers, is available at:
+  * http://dev.lsstcorp.org/trac/wiki/DC3bManagement

+ === Unit Testing ===
+ Unit Testing is performed by each developer on each module which s/he has either implemented or modified.  Unit tests should validate that a module performs according to its implementation specification and is reliable, consistent, and accurate.  The unit tests should be archived within the LSST source archive.

+ Unit tests should complete without error before modifications are merged from the development area (generally a Ticket branch) into the LSST archive 'trunk' branch.

+ Unit tests are run automatically each day, and failures are immediately emailed to the responsible developer.

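A minimal unit test consistent with this policy might look like the following sketch; the module under test (`scale_flux`) is a stand-in invented for illustration, where a real test would import actual DC3b code:

```python
import unittest

# Stand-in for a real module under test; invented for illustration only.
def scale_flux(flux, zero_point):
    """Scale a raw flux value by an instrumental zero point."""
    if zero_point <= 0:
        raise ValueError("zero_point must be positive")
    return flux / zero_point

class ScaleFluxTestCase(unittest.TestCase):
    """Verify the module against its implementation specification."""

    def test_nominal_value(self):
        self.assertAlmostEqual(scale_flux(10.0, 2.0), 5.0)

    def test_rejects_nonpositive_zero_point(self):
        with self.assertRaises(ValueError):
            scale_flux(10.0, 0.0)

if __name__ == "__main__":
    # exit=False runs the suite without terminating the interpreter.
    unittest.main(exit=False, argv=["scale_flux_test"])
```

Archiving tests in this form lets the nightly automation discover and run them unattended.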
+ === Test Plan Overviews ===

   * Acceptance Test Plan Overview - not applicable to DC3.