wiki:SDP/VerificationAndValidation
Last modified on 10/30/2013 05:09:09 PM

OBSOLETE

Document superseded by LSE-17: Systems Engineering Management Plan

More specifically, by LSE-17 Appendix A: The DM Software Management Process (September 2013).



DRAFT

(Note that there is a known problem in the generation of PDF from this document in Trac: the figures are not correctly incorporated. We have notified the LSST Trac expert (ticket:816).)

LSST Software Verification and Validation Guidelines (LSE-15)

This document is maintained under configuration control in the LSST document repository as LSE-15. It is currently edited in Trac, but the authoritative, project-approved version is always the one to be found in Docushare. The "live" Trac version (SDP/VerificationAndValidation) must always be treated as potentially unstable - it may, for instance, represent a proposed modification to the controlled document.

This document is a subordinate of LSST Software Development Plan (LSE-16) (current unstable version: Trac:SDP/LsstSoftwareDevelopmentPlan).

1. Introduction

Software Verification and Validation checks products against their specifications. This is done by [1]:

  • checking that each software item meets specified requirements;
  • checking each software item before it is used as an input to another activity;
  • sufficiently exercising the software so that, based on a risk assessment, the software is suitable for use.

Different classes of tests map to the stages of software development. Unit tests validate the implementation of the detailed component design; integration tests validate the implementations of the interfaces between architectural design elements; system tests validate the implementations of the software requirements; and acceptance tests validate the implementations of the user requirements. This mapping is frequently referred to as the V model.

[Figure: Generic V model diagram (GenericVModel.jpg)]

1.1 Purpose of the document

The purpose of this document is to describe the process and procedures used by LSST personnel to ensure that all software has its design verified and its implementation validated against the LSST Requirements.

The policies and processes in this document will be used by anyone developing, modifying, testing and/or releasing LSST Project software products from the Telescope & Site, Camera, and Data Management subsystems. The document will also be used by each subsystem's SQA team to verify that verification and validation procedures are being performed according to Project Policies and Standards.

1.2 Scope

This document describes the overall policies and procedures required for LSST Software Verification and Validation. For each baseline encompassing a complete Requirements Review / Design / Implement / Test / Release cycle, each LSST software subsystem will prepare a derivative Verification and Validation Plan.

This document is written in terms of the software development process and, hence, uses standard industry terms for that process. LSST Project Management uses different terminology. Following is the correspondence between the LSST System Engineering Management Process stage names and the software subsystems' names.

  System Engineering Management Process | Subsystem Software Group
  Operations                            | Operations
  Commissioning                         | Acceptance Testing: validating against Science and Derived Requirements
  System Integration                    | System Testing: validating against System Functionality Requirements
  Construction                          | Implementation, Unit Testing, and Integration Testing

1.3 Definitions, Acronyms and Abbreviations

  • Static Analysis: process of evaluating a system or component based on its form, structure, content or documentation. Forms of static analysis include:
    • control flow analysis: find unreachable code, infinite loops, recursion violations, etc.
    • data-use analysis: find use of uninitialized variables, declared but not used variables, etc.
    • range-bound analysis: find array bound errors.
    • interface analysis: find interface mismatch between function call and function definition.
    • code volume analysis: count lines per module, modules per system.
    • complexity analysis: measure integration complexity or cyclomatic complexity.
  • Test Types: See Glossary of Test Classifications

See also Software Development Plan: Definitions, Acronyms, and Abbreviations.

1.4 Management

1.4.1 Applicable Policies, Standards and Procedures

1.4.2 Resource Summary

The hardware, software and human resources necessary to perform the Verification and Validation should be described within each subsystem's V&V specialization.

1.5 Tools, Techniques, and Methods

1.5.1 Tools

In addition to the common toolset defined in the Software Development Plan, LSST will use tools to assist in the static analysis of the implementation for verification of compliance to programming language standards.

The V&V process should additionally use any support the programming language compiler provides for interface analysis, range bound analysis, and data-use analysis.
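
As a purely illustrative sketch of one such static check (the actual toolset is whatever the Software Development Plan and each subsystem's specialization select), the fragment below performs a crude data-use analysis on a Python module: it reports names that are assigned but never read. The command-line usage and output format are assumptions made for this example only.

  # Crude data-use analysis: report names assigned but never read in a module.
  # Illustrative only; the project's actual static analysis tools are chosen
  # in the Software Development Plan and the subsystem V&V specializations.
  import ast
  import sys

  def unused_assignments(source: str) -> list:
      tree = ast.parse(source)
      assigned, loaded = set(), set()
      for node in ast.walk(tree):
          if isinstance(node, ast.Name):
              if isinstance(node.ctx, ast.Store):
                  assigned.add(node.id)
              elif isinstance(node.ctx, ast.Load):
                  loaded.add(node.id)
      return sorted(assigned - loaded)

  if __name__ == "__main__":
      path = sys.argv[1]                     # module to check, supplied by the caller
      with open(path) as stream:
          for name in unused_assignments(stream.read()):
              print(f"{path}: '{name}' is assigned but never read")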

1.5.2 Techniques and Methods

The validation process has broadly defined steps common to Unit, Integration, System, and Acceptance testing:

  • Define test requirements based on the level of testing in the V model. For example, unit testing performs black-box and white-box testing at a minimum (a sketch of the overall loop appears at the end of this subsection).
  • Implement test software to validate the codebase with respect to those test requirements.
  • Build test harness executable.
  • Run tests.
  • Analyze coverage to ensure the required codebase was exercised.
  • Analyze performance to ensure that resource consumption was within bounds.
  • Check output to determine if test passed or failed.
  • Archive test results.

Each Test Plan will refine the validation process appropriately.
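
The generic loop above can be sketched in a few lines. The sketch assumes pytest and coverage.py as the harness and coverage analyzer, an 80% coverage floor, and a local archive directory; all of these are placeholders that each Test Plan would replace with its own choices.

  # Sketch of the generic validation loop: run the tests, analyze coverage,
  # and archive the results.  pytest, coverage.py, the 80% floor, and the
  # archive path are assumed placeholders, not project-mandated choices.
  import shutil
  import subprocess
  import time
  from pathlib import Path

  COVERAGE_FLOOR = 80                        # assumed minimum coverage (percent)
  ARCHIVE_DIR = Path("test-archive")         # hypothetical archive location

  def run_validation(package: str) -> bool:
      # Run the test harness under coverage measurement.
      tests = subprocess.run(
          ["coverage", "run", "-m", "pytest", package, "--junitxml=report.xml"])
      passed = tests.returncode == 0

      # Analyze coverage to confirm the required codebase was exercised.
      coverage_ok = subprocess.run(
          ["coverage", "report", f"--fail-under={COVERAGE_FLOOR}"]).returncode == 0

      # Archive the run's artifacts for the permanent record.
      destination = ARCHIVE_DIR / time.strftime("%Y%m%dT%H%M%S")
      destination.mkdir(parents=True, exist_ok=True)
      for artifact in ("report.xml", ".coverage"):
          if Path(artifact).exists():
              shutil.copy(artifact, destination)

      return passed and coverage_ok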


2. Verification and Validation Control

2.1 Anomaly Reporting and Resolution

Management of anomalous behavior reports should incorporate:

  • tracking the resolution of the report;
  • tracking the software which was modified due to the report;
  • availability of current status of a report's progress towards resolution;
  • permanent archival of the report's lifecycle.
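
A minimal sketch of a record structure that covers those four points follows; the states and field names are assumptions made for illustration, not a prescribed schema, and the real tracker is whatever issue-tracking system the subsystem uses.

  # Minimal anomaly-report record covering the tracking points above.
  # States and field names are illustrative, not a prescribed schema.
  from dataclasses import dataclass, field
  from datetime import datetime, timezone
  from typing import List, Tuple

  STATES = ("OPEN", "IN_ANALYSIS", "FIX_IMPLEMENTED", "RETESTED", "CLOSED")

  @dataclass
  class AnomalyReport:
      identifier: str                                            # e.g. a ticket number
      description: str
      state: str = "OPEN"
      modified_items: List[str] = field(default_factory=list)    # software changed because of the report
      history: List[Tuple[str, str, str, str]] = field(default_factory=list)  # permanent lifecycle archive

      def transition(self, new_state: str, note: str = "") -> None:
          if new_state not in STATES:
              raise ValueError(f"unknown state: {new_state}")
          stamp = datetime.now(timezone.utc).isoformat()
          self.history.append((stamp, self.state, new_state, note))
          self.state = new_state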

2.2 Task Iteration Policy

Unless otherwise specified in individual test plans, resumption of testing after failure resolution should

  • rerun the unit tests of the item (or aggregate item); then
  • retest all items dependent on the previously failed item.

This may involve restarting the testing cycle from the beginning.
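
A sketch of the resumption rule: given a dependency map, the set of items to retest after a fix is the fixed item plus all of its transitive dependents. The dependency data would come from the baseline's design; the map and item names below are hypothetical.

  # Compute the retest set after a failure is resolved: the fixed item plus
  # everything that (transitively) depends on it.  The dependency map here is
  # hypothetical; real data comes from the baseline design.
  from collections import deque
  from typing import Dict, List, Set

  def retest_set(fixed_item: str, depends_on: Dict[str, List[str]]) -> Set[str]:
      # Invert "item -> prerequisites" into "item -> dependents".
      dependents: Dict[str, Set[str]] = {}
      for item, prerequisites in depends_on.items():
          for prerequisite in prerequisites:
              dependents.setdefault(prerequisite, set()).add(item)

      to_retest = {fixed_item}
      queue = deque([fixed_item])
      while queue:
          current = queue.popleft()
          for dependent in dependents.get(current, ()):
              if dependent not in to_retest:
                  to_retest.add(dependent)
                  queue.append(dependent)
      return to_retest

  # Hypothetical example: fixing 'io' forces retests of 'pipeline' and 'archive'.
  deps = {"pipeline": ["io"], "archive": ["pipeline"], "ui": []}
  assert retest_set("io", deps) == {"io", "pipeline", "archive"}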

2.3 Deviation Policy

The Subsystem Project Manager should approve any deviation from an approved Verification or Validation Test Plan. If the deviation becomes frequent, the procedure in the affected Plan should be reviewed for completeness and suitability.

2.4 Control Procedures

The Configuration Management Guidelines include the naming conventions used for identifying each baseline test case and each test run's report and output, as well as the archival requirements for those test artifacts.

2.5 Standards, Practices, and Conventions


3. Verification Activities

3.1 Reviews

3.1.1 Software Inspections

All Release Product software should be reviewed for compliance with the applicable Programming Language Coding Standards and Software Documentation Standards (see Applicable Policies, Standards and Procedures, section 1.4.1). Preferably, the reviews are carried out using a Standards Compliance Check tool.

3.1.2 Technical Review

Major technical reviews will be done in accordance with funding agencies' requirements.

3.2 Tracing

LSST SQA traces the direct link from user requirements to code with the assistance of the Sparx Systems Enterprise Architect tool, as follows:

  • At the System Engineering Level, as governed by the LSST System Engineering Management Plan, the SRD requirements are ingested into the SysML model.
  • The SRD SysML model is used to derive the LSST System Requirements SysML model.
  • From the LSST System Requirements SysML model, the LSST Functional Performance Requirements Document (FPRD) is generated. That model is also used to generate the SysML models for the Functional Requirements Specifications (FRS) of each subsystem: Camera, Telescope & Site, and Data Management.
  • The Subsystem Functional Requirement SysML model includes the definitions of the Subsystem Blocks (WBS elements). The Subsystem Blocks are realized by the Software Engineering Level as governed by the Software Development Plan.

Each Subsystem defines and validates the path from the Subsystem Blocks to the validated code within its own specialization section.
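
The chain above is maintained in the Enterprise Architect SysML models; the sketch below only illustrates the kind of consistency check SQA can run over an exported trace, flagging requirements that reach no code. The export format and all identifiers are assumptions made for this example.

  # Illustrative consistency check over an exported requirement -> block -> code
  # trace.  The data format and identifiers are assumptions; the authoritative
  # trace lives in the Enterprise Architect SysML models.
  from typing import Dict, List

  def untraced_requirements(
      requirement_to_blocks: Dict[str, List[str]],
      block_to_code: Dict[str, List[str]],
  ) -> List[str]:
      missing = []
      for requirement, blocks in requirement_to_blocks.items():
          code_units = [unit for block in blocks for unit in block_to_code.get(block, [])]
          if not code_units:
              missing.append(requirement)
      return sorted(missing)

  # Hypothetical example: one requirement traces through to code, one does not.
  requirements = {"FRS-0001": ["BlockA"], "FRS-0002": ["BlockB"]}
  blocks = {"BlockA": ["module_a"]}
  print(untraced_requirements(requirements, blocks))    # ['FRS-0002']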


4. Validation Activities

Validation tests and procedures should be used to ensure that individual components interact according to specification. Testing status reports should be generated to enable SQA verification.

A separate Test Plan should be prepared on conclusion of each stage of the baseline's design. On conclusion of the Science Requirements Review, the Acceptance Test Plan should be developed; on conclusion of the Conceptual Design Review, the System Test Plan should be developed; on conclusion of the Preliminary Design Review, the Integration Test Plan should be developed; and finally, on conclusion of the Detailed Design Review, the Unit Test Plan should be developed. In general, each Test Plan should include the list of components (or features) under test, the test cases, and a schedule for testing.

A Validation Test Plan is composed of:

  • Test Plan Overview which should include
    • objectives;
    • approach to achieving those objectives
      • the features or combination of features to be tested,
      • the sequence of testing those features
      • pointers to the set of Test Case Specifications which realize the test design;
    • testing environment;
    • schedule; and
    • final reports.
  • One or more Test Case Specifications which should include
    • the items under test in this Test Case Specification,
    • the input and output specifications,
    • the environmental needs.

Templates for Validation Test Plans are in the Appendix. The Templates provide additional commentary on content.
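
If a plan is to be manipulated by tooling, the outline above maps naturally onto simple data structures. The sketch below mirrors it as Python dataclasses; the class and field names are illustrative only, and the templates in the Appendix remain the normative structure.

  # The Validation Test Plan structure above, mirrored as dataclasses.
  # Names are illustrative; the normative structure is the Appendix template.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class TestCaseSpecification:
      identifier: str                       # per the CM naming convention
      items_under_test: List[str]
      input_specification: str
      output_specification: str
      environmental_needs: str = ""

  @dataclass
  class TestPlanOverview:
      objectives: str
      features_under_test: List[str]
      test_sequence: List[str]
      test_cases: List[TestCaseSpecification] = field(default_factory=list)
      environment: str = ""
      schedule: str = ""
      final_reports: List[str] = field(default_factory=list)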

4.1 Acceptance Test

The scope of acceptance testing is to validate that the software system is compliant with the LSST SRD requirements. The input to acceptance testing is the software that has been successfully tested at system level. The output from the acceptance testing is a validated Release Product. A test should be defined for every essential science requirement, and for every desirable requirement that has been implemented.

The Acceptance Test Plan's specification of test cases will be developed from the Science Requirements selected for the baseline. The type of tests performed may include:

  • Capability Tests
    • should be designed to exercise each capability. System test cases that verify functional, performance and operational requirements may be reused to validate capability requirements.
  • Constraint Tests
    • System test cases that verify compliance with requirements for interfaces, resources, security, portability, reliability, maintainability and safety may be reused to validate constraint requirements.

4.2 System Test

The scope of system testing is to verify compliance with the system objectives as stated in the FRS. A test should be defined for every essential software requirement, and for every desirable requirement that has been implemented. The input to system testing is the successfully integrated system.

The System Test Plan's test cases will be developed from the FRS requirements selected for the baseline. Black-box and other types of test should be used wherever possible. When a test of a requirement is not possible, an alternative method of verification should be used. The type of tests performed may include:

  • Function Tests
    • should be designed using techniques such as decision tables, state-transition tables and error guessing to verify the functional requirements.
  • Performance Tests
    • should be designed to verify:
      • that all worst case performance targets have been met;
      • that nominal performance targets are usually achieved;
      • whether any best-case performance targets have been met; and
    • should be designed to measure the absolute limits of performance.
  • Interface Tests
    • should be designed to verify conformance to external interface requirements. Interface Control Documents (ICDs) form the baseline for testing external interfaces. Simulators will be necessary if the software cannot be tested in the operational environment.
  • Usability Tests
    • should verify the user interface, man-machine interface, or human-computer interaction requirements, and logistical and organizational requirements.
  • Load Tests
    • should be designed to verify requirements for the usage of resources such as CPU time, storage space and memory. The best way to test for compliance is to allocate these resources and no more, so that a failure occurs if a resource is exhausted.
  • Security Tests
    • should check that the system is protected against threats to integrity and availability. Tests should be designed to verify that basic security mechanisms specified in the System Engineering Requirements have been provided.
  • Compatibility Tests
    • should attempt to verify portability by running a representative selection of system tests in all the required environments.
  • Stress Tests
    • evaluate a system at or beyond the limits of its specified requirements. Testers should look for inputs that have no constraint on capacity and design tests to check whether undocumented constraints do exist.
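
As one concrete illustration of the Performance Tests listed above, the sketch below times an operation against worst-case and nominal targets. The operation, data size, and target values are hypothetical placeholders; a real test would use the requirement's stated bounds.

  # Sketch of a performance test: a worst-case target must always be met, and
  # the nominal target is reported.  The operation and targets are hypothetical.
  import time

  WORST_CASE_SECONDS = 2.0     # hard requirement (assumed)
  NOMINAL_SECONDS = 0.5        # nominal goal (assumed)

  def process(payload):        # stand-in for the operation under test
      return sorted(payload)

  def test_processing_time():
      payload = list(range(100_000, 0, -1))
      start = time.perf_counter()
      process(payload)
      elapsed = time.perf_counter() - start

      # The worst-case target must always be met.
      assert elapsed < WORST_CASE_SECONDS, f"worst-case target missed: {elapsed:.3f}s"
      # The nominal target is reported, not enforced.
      if elapsed >= NOMINAL_SECONDS:
          print(f"nominal target missed: {elapsed:.3f}s")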

4.3 Integration Test

A software system is composed of one or more subsystems, which are composed of one or more units which in turn are composed of one or more modules. The scope of integration testing is to verify the design and implementation of all components from the lowest level defined in the architectural design up to the system level. In particular, the tests should check that all data exchanged across an interface agree with the data specifications in the architectural design; and should confirm that all the control flows in the architectural design have been implemented.

The integration test description should specify the details of the test process for each software component defined in the architectural design and identify its associated test cases and test procedures.

The Integration Test Plan's test cases should be developed from the Subsystem Blocks (i.e. the WBS elements) defined for the baseline. The type of tests performed may include:

  • White-box Tests
    • should be defined to verify the data and control flow across interfaces between the major components defined in the architectural design.
    • If the addition of new components to a system introduces new execution paths, the Integration test design should identify paths suitable for testing and define test cases to check them. All new control flows should be tested.
  • Black-box Tests
    • should be used to fully exercise the functions of each component specified in the architectural design;
    • may be used to verify that data exchanged across an interface agree with the data specifications in the architectural design.
  • Performance Tests
    • If the architectural design placed resource constraints on the performance of a unit, compliance with these constraints should be tested.

Depending on the characteristics of the subsystems being integrated, additional test types may be included.
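
A minimal sketch of such an interface check follows, with a hypothetical producer/consumer pair and an invented field specification standing in for the architectural design's interface specification.

  # Sketch of a black-box integration test: data exchanged across an interface
  # must agree with the design's specification.  The producer, consumer, and
  # field specification here are hypothetical.
  REQUIRED_FIELDS = {"exposure_id": int, "filter": str, "pixels": list}

  def produce_exposure():                  # component A (hypothetical)
      return {"exposure_id": 42, "filter": "r", "pixels": [0, 1, 2]}

  def consume_exposure(record):            # component B (hypothetical)
      return len(record["pixels"])

  def test_exposure_interface():
      record = produce_exposure()
      # Verify the exchanged data agrees with the interface specification.
      for name, expected_type in REQUIRED_FIELDS.items():
          assert name in record, f"missing field: {name}"
          assert isinstance(record[name], expected_type), f"bad type for {name}"
      # Verify the consumer can act on the producer's output (control flow).
      assert consume_exposure(record) == 3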

4.4 Unit Test

A 'unit' of software is composed of one or more modules. The scope of unit testing is to verify the design and implementation of all components from the lowest level defined in the detailed design up to and including the lowest level in the architectural design. The inputs to the unit testing process are the successfully compiled modules. These are iteratively assembled and tested until the unit tests validate the components of the architectural design. The successfully unit-tested architectural design components are the outputs of the unit testing process. LSST software generally uses a bottom-up assembly sequence to iteratively compose each architectural component.

The unit test description should provide the sequence for assembling the architectural design units and the types of tests necessary for individual modules.

The Unit Test Plan's test cases should be developed from the detailed design of the baseline. The type of tests performed during unit testing may include:

  • White-box Tests
    • designed by examining the internal logic of each module and defining the input data sets that force the execution of different paths through the logic. Each input data set is a test case.
  • Black-box Tests
    • designed by examining the specification of each module and defining input data sets that will result in different behaviour (e.g. outputs). Black-box tests should be designed to exercise the software for its whole range of inputs. Each input data set is a test case.
  • Performance Tests
    • if the detailed design placed resource constraints on the performance of a module, compliance with these constraints should be tested.

LSST unit test procedures should be run by a 'test' step invoked during the build procedure.
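
A minimal sketch of such a unit test module, combining the black-box and white-box styles above for a small hypothetical function, is shown below; how the build's 'test' step discovers and runs files like this is subsystem-specific.

  # Sketch of a unit test module for a hypothetical function clip(value, lo, hi)
  # that limits a value to a range.  Black-box cases come from the specification;
  # white-box cases force each path through the implementation.
  import unittest

  def clip(value, lo, hi):                 # module under test (hypothetical)
      if value < lo:
          return lo
      if value > hi:
          return hi
      return value

  class ClipTestCase(unittest.TestCase):
      def test_black_box_range(self):
          # Specification: the result always lies in [lo, hi].
          for v in (-10, 0, 3, 7, 100):
              self.assertTrue(0 <= clip(v, 0, 7) <= 7)

      def test_white_box_paths(self):
          # One case per execution path in the implementation.
          self.assertEqual(clip(-1, 0, 7), 0)    # value < lo branch
          self.assertEqual(clip(9, 0, 7), 7)     # value > hi branch
          self.assertEqual(clip(3, 0, 7), 3)     # pass-through branch

  if __name__ == "__main__":
      unittest.main()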


5. References


6. LSST Subsystem Verification and Validation Specifics

The following sections include the LSST Subsystem-specific policies and procedures for Verification and Validation. The headings reflect the layout of the overarching document.

6.1 Data Management V & V

All DM specializations are documented here.

6.2 Telescope & Site V & V

No Telescope & Site specific content has yet been developed.

6.3 Camera V & V

No Camera-specific content has yet been developed.


Glossary of Test Classifications

The following test classification was extracted from [1].

Black Box Testing: not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

Compatibility Testing: testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

End-to-end Testing: similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Failover Testing: typically used interchangeably with 'recovery testing'

Functional Testing: black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).

Incremental Unit Testing: continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Install/Uninstall Testing: testing of full, partial, or upgrade install/uninstall processes.

Integration Testing: testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Load Testing: testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

Performance Testing: term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

Recovery Testing: testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing: re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

Sanity Testing: typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

Security Testing: testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

Stress Testing: term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

System Testing: black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

Unit Testing: the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

Usability Testing: testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

User Acceptance Testing: determining if software is satisfactory to an end-user or customer.

White Box Testing: based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.


Appendix

Baseline V & V Plan Templates

The Verification and Validation Plan Template should be replicated and modified as appropriate for the Baseline. Include only information that deviates from or adds to the V & V Guidelines. Each stage's Test Plan Overview should either be embedded in the Baseline V & V Plan or referenced from it by a locator.

The Test Plan Overview Template should be replicated at the end of each design stage and modified to define the testing required to validate that stage's design.

The Test Case Specification Template should be replicated for each specific test case within a Test Plan. If the Test Case Specifications are included as an appendix to the Test Plan, then the 'Test Plan Identification' block should be removed.

Verification and Validation Plan Template

  • Introduction
    • Purpose of Document
    • Baseline Identifier (based on LSST Configuration Management Naming Standard)
    • Definitions, acronyms and abbreviations
    • References
    • Overview of document
  • Reviews
    • Describe inspection and technical review process
  • Tracing
    • Describe how to trace inputs to outputs
  • Test Plan Overviews
    • Acceptance Test Plan Overview (locator or embedded)
    • System Test Plan Overview (locator or embedded)
    • Integration Test Plan Overview (locator or embedded)
    • Unit Test Plan Overview (locator or embedded)

Test Plan Overview Template

  • Test Plan Identification
    • Baseline Identifier (based on LSST Configuration Management Naming Standard)
    • Testing Level (one of {Unit, Integration, System, Acceptance})
  • References
    • Baseline Design Documents
    • Baseline Plan
  • Items to be tested
    • list items to be tested.
  • Features to be tested
    • identify all features and combinations of features to be tested. For example: "the acceptance tests will cover all Science Requirements except those in WBS 2.3"
  • Approach
    • Major activities to test the designated features should be identified
      • Testing Strategy
        • top-down, bottom-up, functional groupings, etc
      • Overall test sequencing
        • module assembly sequence (for unit testing)
        • paths through the module logic (for unit testing)
        • component integration sequence (for integration testing);
        • paths through the control flow (for integration testing);
      • Test types (e.g. white-box, black-box, performance, stress, etc).
      • Coverage level
  • Item Pass/Fail Criteria
  • Entry & Exit Criteria
    • Discuss startup state of testbed.
    • Discuss criteria determining if testing completed successfully.
  • Suspension Criteria & Resumption Requirements
    • Discuss if it's permissible to resume testing after failure and how.
  • Test Deliverables
    • List artifacts that must be delivered before testing starts.
      • test plan
      • test cases
      • test procedures
      • test input data
      • test tools
    • List artifacts that must be delivered when testing ends.
      • test reports (build log, test log, etc.)
      • test output data
      • test problem reports
  • Testing Tasks
    • identify the set of tasks necessary to prepare for and perform testing;
    • identify all inter-task dependencies.
  • Environmental Needs
    • Describe requirements of Baseline testbed setup
      • test tools
      • mode of use (i.e. standalone, networked)
      • communications software
      • hardware
      • system software
  • Schedule
    • include test milestones identified in software project plan and all item delivery events
  • Planning Risks and Contingencies
    • Identify high risk assumptions of the test plans and contingency plans for each.

Test Case Specification Template

  • Test Plan Identification
    • Baseline identifier
    • Testing Level (one of {Unit, Integration, System, Acceptance})
  • For each Test Case
    • Test Case Specifications
      • Identifier as per CM Naming Convention
      • Test Items
        • List the items (i.e. modules, subsystems) to be tested.
      • Test Case
        • Describe, in general terms, a test case developed from the appropriate stage's design document.
      • Input specifications
        • Describe the test case input.
      • Output specifications
        • Describe the output required from the test case.
      • Environmental Needs
        • Describe hardware and software required.
