Software testing metrics
Software testing metrics are key for a Test Manager to understand, monitor, and control the test phases across test levels. Proper use of software testing metrics is critical during any project: they are among the Test Manager's best allies in understanding where testing stands and where it is heading, both within the project and across the organization.
Defining and using Test Metrics
It is important that the proper set of metrics be established for any endeavor, including testing. Software testing metrics can be classified as belonging to one or more of the following categories (a short computation sketch follows the list):
- Project metrics – measure progress toward exit criteria, such as the percentage of test cases executed, passed, and failed
- Product metrics – measure some attribute of the product, such as the extent to which it has been tested or the defect density
- Process metrics – measure the capability of the testing or development process, such as the percentage of defects detected by testing
- People metrics – measure the capability of individuals or groups, such as the implementation of test cases within a given schedule
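To make these categories concrete, here is a minimal Python sketch that computes one project metric (test execution progress) and one product metric (defect density). All counts are illustrative placeholders, not data from a real project.

```python
# Minimal sketch: computing a project metric and a product metric.
# All counts below are illustrative placeholders, not real project data.

total_test_cases = 200
executed = 150
passed = 120
failed = 30
defects_found = 45
size_kloc = 12.5  # product size in thousands of lines of code (assumed)

# Project metrics: progress toward exit criteria
pct_executed = 100 * executed / total_test_cases
pct_passed = 100 * passed / executed
pct_failed = 100 * failed / executed

# Product metric: defect density (defects per KLOC)
defect_density = defects_found / size_kloc

print(f"Executed: {pct_executed:.1f}%  Passed: {pct_passed:.1f}%  Failed: {pct_failed:.1f}%")
print(f"Defect density: {defect_density:.2f} defects/KLOC")
```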
Any given metric may belong to two, three, or even four categories. For example, a trend chart showing the daily arrival rate of defects can be associated with:
- an exit criterion (zero new defects found for one week), as checked in the sketch after this list
- the quality of the product (testing cannot locate further defects in it)
- the capability of the test process (finds a large number of defects early in test execution)
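As a minimal illustration of the exit-criterion reading of such a chart, the sketch below checks "zero new defects found for one week" against a series of daily defect arrivals; the series itself is invented for the example.

```python
# Minimal sketch: checking the exit criterion "zero new defects found
# for one week" against a daily defect-arrival series. The series is
# an illustrative placeholder, not real data.

daily_new_defects = [5, 3, 4, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

def exit_criterion_met(arrivals, quiet_days=7):
    """True if the most recent `quiet_days` measurements show zero new defects."""
    if len(arrivals) < quiet_days:
        return False
    return all(d == 0 for d in arrivals[-quiet_days:])

print(exit_criterion_met(daily_new_defects))  # True: the last 7 days had no new defects
```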
The Test Manager's role is to use metrics to measure the progress of testing. Some of the project metrics used for test progress also relate to the product and the process. Overall, these metrics enable testers to report results in a consistent way and allow coherent tracking of progress over time.
Test Managers are frequently required to present metrics at various meetings which may be attended by multiple levels of stakeholders, ranging from technical staff to executive management. Because metrics are sometimes used to determine the overall success of a project, great care should be taken when determining what to track, how often to report it, and the method to be used to present the information.
What test metrics to consider?
Definition of metrics
- A limited set of useful metrics should be defined
- Metrics should be defined based on specific objective(s) for the project, process, and/or product
- Metrics should be defined for balance (a single metric may give a misleading impression of status or trends)
- The interpretation of each metric should be agreed upon by all stakeholders to avoid confusion when the metrics are discussed
- Avoid defining too many metrics; focus on the most pertinent ones
Tracking of metrics
- Reporting and merging of metrics should be as automated as possible to reduce the time spent taking and processing measurements (see the sketch after this list)
- Variations of measurements over time for a specific metric may reflect information other than the interpretation agreed upon when the metric was defined
- Carefully analyze any divergence of measurements from expectations, and the reasons for that divergence
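As a minimal sketch of such automation, the code below merges hypothetical per-team CSV result files into a single set of measurements. The results directory, file layout, and column names are assumptions made for illustration only.

```python
# Minimal sketch of automated metric merging: combine per-team result
# files into one set of measurements. The directory name and CSV layout
# (columns: executed, passed, failed) are assumptions for illustration.

import csv
from pathlib import Path

totals = {"executed": 0, "passed": 0, "failed": 0}

for path in Path("results").glob("*.csv"):  # e.g. results/team_a.csv (hypothetical)
    with path.open(newline="") as f:
        for row in csv.DictReader(f):
            for key in totals:
                totals[key] += int(row[key])

print(totals)  # one merged measurement, ready for reporting
```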
Reporting of metrics
- The objective is to provide an immediate understanding of the information
- Presentations may show a snapshot of a metric at a certain time or show the evolution of a metric over time so that trends can be evaluated
Validity of software testing metrics
- Verify the information that is being reported
- The measurements taken for a metric may not reflect the true status of a project or may convey an overly positive or negative trend
- Before any data is presented, it must be reviewed both for accuracy and for the message it is likely to convey (a simple consistency check is sketched below)
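A simple part of that review can be automated as consistency checks on the raw measurements before they are presented. The rules below are illustrative assumptions about what an accuracy check might look like.

```python
# Minimal sketch: sanity-check measurements before reporting them.
# The specific consistency rules are illustrative assumptions.

def validate(m):
    problems = []
    if m["executed"] > m["total"]:
        problems.append("more tests executed than defined")
    if m["passed"] + m["failed"] > m["executed"]:
        problems.append("passed + failed exceeds executed")
    if any(v < 0 for v in m.values()):
        problems.append("negative count")
    return problems

measurements = {"total": 200, "executed": 150, "passed": 120, "failed": 30}
print(validate(measurements) or "measurements look consistent")
```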
Test entry and test exit criteria
Software test entry and exit criteria are key tools in the arsenal of a Test Manager and should be used at each software testing level. These are software testing basics that help set the rules of the game, properly delimit the test levels, and help achieve test closure.
Software Testing Entry Criteria
Test entry criteria are the set of generic and specific conditions for permitting a process to go forward with a defined task. Their purpose is to prevent a task from starting when doing so would entail more effort than the effort needed to remove the failed entry criteria.
Software Testing Exit Criteria
Test exit criteria are the set of generic and specific conditions, agreed upon with stakeholders, for permitting a process to be considered complete. They prevent a task from being regarded as finished while parts of it are still outstanding. Exit criteria are also used to report progress against the plan and to decide when to stop testing.
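One lightweight way to make entry and exit criteria checkable is to record each criterion as a named condition that must hold, and report any that fail. The sketch below is an illustration only; the specific criteria, status values, and thresholds are invented.

```python
# Minimal sketch: entry/exit criteria as named conditions that must all
# hold. The criteria, status values, and thresholds are invented examples.

status = {"smoke_tests_passed": True, "environment_ready": True,
          "open_blocker_defects": 0, "pct_tests_passed": 96.0}

entry_criteria = {
    "smoke tests passed": status["smoke_tests_passed"],
    "test environment ready": status["environment_ready"],
}
exit_criteria = {
    "no open blocker defects": status["open_blocker_defects"] == 0,
    "at least 95% of tests passed": status["pct_tests_passed"] >= 95.0,
}

def evaluate(criteria):
    """Return (all criteria met, list of failed criterion names)."""
    failed = [name for name, met in criteria.items() if not met]
    return (not failed, failed)

ok, failed = evaluate(exit_criteria)
print("Exit criteria met" if ok else f"Not met: {failed}")
```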
Testing Entry and Exit Criteria Reporting
A Test Manager's role is to:
- ensure that effective processes are in place to provide necessary information for evaluating entry & exit criteria
- make sure that the definition of the information requirements and methods for collection are part of test planning
- ensure that members of the test team are responsible for providing the information required in an accurate and timely manner
The evaluation of exit criteria and reporting of results is a test management activity. There are also other software test metrics that support software testing reporting.
Software Test Closure
Test closure consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report. All of this is done through proper evaluation of the relevant test metrics.
Once test execution is deemed complete, the key outputs should be captured. The following tasks are important (and often missed) and should be explicitly included in the test plan:
- Test completion check – ensuring that all test work is indeed concluded
- Test artifacts handover – delivering valuable work products to those who need them
- Lessons learned – performing or participating in retrospective meetings where important lessons are identified and used to improve future projects
- Archiving results, logs, reports, and other documents
This article is based on the ISTQB Advanced Syllabus version 2012 and it also references the ISTQB Foundation Syllabus version 2018. It uses terminology definitions from the ISTQB Glossary version 3.2.