...

  • Test cases should favour validation against a published standard interface
    • Where no standard is available, provide API support references
    • If a standard exists and is not followed, an exemption is required
  • The following must be documented for each test case (see the descriptor sketch after this list):
    • Use case specification
    • Test preconditions
    • Basic test flow execution descriptor
    • Post-conditions and pass/fail criteria
  • The following may optionally be documented for the test case:
    • Parameter border test case descriptions
    • Fault/Error test case descriptions
  • Test cases must pass on OPNFV reference deployments
    • Tests must not require a specific NFVi platform composition or installation tool
    • Tests must not require unmerged patches to the relevant upstream projects
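
To make the documentation requirements above concrete, the following is a minimal sketch of a test case descriptor as a Python dataclass. The TestCaseDescriptor name and all field names are hypothetical illustrations, not an agreed Dovetail format.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hypothetical descriptor mirroring the mandatory and optional
    # documentation fields listed above; all names are illustrative only.
    @dataclass
    class TestCaseDescriptor:
        # Mandatory documentation
        use_case_specification: str
        preconditions: List[str]
        basic_flow: List[str]                  # ordered execution steps
        postconditions: List[str]
        pass_fail_criteria: str
        # Optional documentation
        parameter_border_cases: List[str] = field(default_factory=list)
        fault_error_cases: List[str] = field(default_factory=list)
        # Standards traceability
        standard_reference: Optional[str] = None     # published standard interface, if any
        api_support_reference: Optional[str] = None  # fallback when no standard is available
        exemption: Optional[str] = None              # required if a standard exists but is not followed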

Additional Requirements / Assumptions (not yet agreed by the Dovetail Group)

These are additional requirements and assumptions that should be considered by the group. Agreement is being tracked in JIRA under DOVETAIL-352. As items are agreed, they should be moved up into the full list above. Once the story is completed, this section can be deleted.

Assumptions

  1. Tests start from (use) an already installed and deployed OPNFV platform. OPNFV deployment/installation testing is not a target of the program (that is covered by CI).

Requirements

  1. All test cases must be fully documented in a common format, clearly identifying the test procedure and the expected results/metrics that determine a “pass” or “fail” result for the test case.
  2. Tests and the tool must run on both vanilla OPNFV and commercial OPNFV-based solutions (i.e. they cannot use interfaces or hooks that are internal to OPNFV, such as anything available only during deployment/installation).
  3. Tests and the tool must run independently of the installer (Apex, Joid, Compass) and of the hardware architecture (Intel/ARM).
  4. Tests and the tool must run independently of specific OPNFV components, allowing different components to be swapped in; an example would be using a storage backend other than Ceph (see the abstraction sketch after this list).
  5. Tests and the tool must be validated as fit for purpose, beyond merely running on the platform (this may require running each test with both an expected positive and an expected negative outcome, to validate the test/tool for that case; see the validation sketch after this list).
  6. Tests should focus on functionality and not performance.
    1. Performance test output could be built in as “for information only,” but must not carry pass/fail metrics.
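
As an illustration of requirements 3 and 4, a test can be written against a generic capability rather than a concrete component. The sketch below is a hypothetical Python abstraction: StorageBackend, create_volume, and delete_volume are invented names for illustration, not Dovetail or OpenStack APIs.

    from typing import Protocol

    class StorageBackend(Protocol):
        # Generic capability the test relies on; any component that
        # fulfils it (Ceph or otherwise) can be swapped in.
        def create_volume(self, size_gb: int) -> str: ...
        def delete_volume(self, volume_id: str) -> None: ...

    def volume_lifecycle_test(storage: StorageBackend) -> bool:
        # Exercises only the generic interface, so the test stays
        # independent of the backing component, installer, and architecture.
        volume_id = storage.create_volume(size_gb=1)
        storage.delete_volume(volume_id)
        return True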
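
Requirements 5 and 6 can likewise be sketched: a test is validated for purpose by running it against fixtures with a known positive and a known negative outcome, and performance numbers are attached for information only. All names below are hypothetical.

    from typing import Any, Callable, Dict

    def validate_test_case(test: Callable[[Any], bool],
                           passing_fixture: Any,
                           failing_fixture: Any) -> bool:
        # Fit for purpose only if the test passes where it should
        # AND fails where it should (requirement 5).
        return test(passing_fixture) and not test(failing_fixture)

    def report_result(functional_pass: bool,
                      perf_metrics: Dict[str, float]) -> Dict[str, Any]:
        # Performance data is reported "for information only" and never
        # influences the pass/fail verdict (requirement 6).
        return {
            "result": "pass" if functional_pass else "fail",
            "for_information_only": perf_metrics,
        }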

Dovetail Test Suite Structure

...