Software testing documentation, such as the test strategy, is often produced as part of test management activities. The common types of test management documents are:
- Test policy – describes the organization’s objectives and goals for testing
- Test strategy – describes the organization’s general, project-independent methods for testing
- Master test plan (or project test plan) – describes the implementation of the test strategy for a particular project
- Level test plan (or phase test plan) – describes the particular activities to be carried out within each test level
Larger and more formal organizations and projects tend to have all of these types of documents as written work products, while smaller and less formal organizations and projects tend to have fewer such written work products.
In some organizations and on some projects, they may be combined into a single document and in others, they may be found in separate documents. There are even cases where their contents may be manifested as intuitive, unwritten, or traditional methodologies for testing.
The software test strategy is a high-level description of the test levels to be performed, and of the testing within those levels, for an organization or program (one or more projects). It:
- describes the organization’s general test methodology
- includes the way in which testing is used to manage product and project risks
- includes the division of testing into levels
- includes the high-level activities associated with testing
- should provide the generic test entry and exit criteria for the organization or for one or more programs
The software testing strategy should be consistent with the test policy. The same organization may have different strategies for different situations:
- different software development life cycles
- different levels of risk
- different regulatory requirements
Common types of software testing strategies
A test strategy provides a generalized description of the test process, usually at the product or organizational level. Common types are:
Analytical strategies (such as risk-based testing)
- Based on an analysis of some factor (e.g., requirement or risk).
- the test team analyzes the test basis to identify the test conditions to cover
- test analysis derives test conditions from the requirements, tests are then designed and implemented to cover those conditions
- The tests are subsequently executed, often using the priority of the requirement covered by each test to determine the order in which the tests will be run
- Test results are reported in terms of requirements status, e.g., requirement tested and passed, requirement tested and failed, requirement not yet fully tested, requirement testing blocked, etc.
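The prioritized execution and requirements-based reporting described above can be sketched in a few lines of Python. All test and requirement names, priorities, and results below are hypothetical, chosen only to illustrate the mechanics:

```python
# Sketch of an analytical (risk-based) strategy: tests are ordered by the
# priority of the requirement they cover, and results are reported per
# requirement. All identifiers and data are illustrative.
from collections import defaultdict

tests = [
    {"id": "T1", "requirement": "REQ-LOGIN",  "priority": 1, "result": "passed"},
    {"id": "T2", "requirement": "REQ-SEARCH", "priority": 3, "result": "failed"},
    {"id": "T3", "requirement": "REQ-LOGIN",  "priority": 1, "result": "failed"},
    {"id": "T4", "requirement": "REQ-EXPORT", "priority": 2, "result": None},  # not yet run
]

# Determine the execution order from requirement priority (1 = highest)
run_order = sorted(tests, key=lambda t: t["priority"])

def requirement_status(tests):
    """Summarize results per requirement, as in requirements-based reporting."""
    by_req = defaultdict(list)
    for t in tests:
        by_req[t["requirement"]].append(t["result"])
    status = {}
    for req, results in by_req.items():
        if any(r is None for r in results):
            status[req] = "not yet fully tested"
        elif all(r == "passed" for r in results):
            status[req] = "tested and passed"
        else:
            status[req] = "tested and failed"
    return status

print([t["id"] for t in run_order])
print(requirement_status(tests))
```

Note that a requirement counts as "tested and failed" as soon as any of its tests fails, even if others passed, which is the conservative interpretation usually wanted in risk-based reporting.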
Model-based strategies (such as operational profiling)
- tests are designed based on some model of some required aspect of the product, such as a function, a business process, an internal structure, or a non-functional characteristic
- the test team develops:
- a model (based on actual or anticipated situations) of the environment in which the system exists
- the inputs and conditions to which the system is subjected
- a description of how the system should behave
- in model-based performance testing, based on current usage and projected growth, one might develop models of:
- incoming and outgoing network traffic
- active and inactive users
- the resulting processing load
- models might also consider the current production environment’s data capacity
- models may also be developed for ideal, expected, and minimum throughput rates, response times, and resource allocation
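A minimal operational-profile model of the kind described above can be a simple growth calculation. The user counts, growth rate, request rate, and capacity threshold below are all made-up figures for illustration:

```python
# Sketch of a simple operational-profile model: projected processing load
# derived from current usage and anticipated growth, compared against a
# hypothetical capacity threshold. All figures are illustrative.

current_active_users = 2_000
monthly_growth_rate = 0.05           # assumed 5% user growth per month
requests_per_user_per_hour = 12

def projected_load(months: int) -> float:
    """Requests per hour the system must handle after `months` of growth."""
    users = current_active_users * (1 + monthly_growth_rate) ** months
    return users * requests_per_user_per_hour

capacity_rph = 40_000                # assumed current capacity, requests/hour

for months in (0, 6, 12):
    load = projected_load(months)
    verdict = "within" if load <= capacity_rph else "exceeds"
    print(f"{months:2d} months: {load:,.0f} req/h ({verdict} capacity)")
```

Even this crude model tells testers something actionable: it identifies roughly when the projected load will cross the capacity threshold, and therefore which load levels the performance tests should target.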
Methodical strategies (such as quality characteristic-based)
- relies on making systematic use of some predefined set of:
- tests or test conditions, such as a taxonomy of common or likely types of failures
- a list of important quality characteristics
- company-wide look-and-feel standards
- the test team can use a predetermined set of test conditions, such as:
- a quality standard [ISO 25000]
- a checklist or a collection of generalized logical test conditions
- uses that set of test conditions from one iteration to the next or from one release to the next
- In maintenance testing of a simple, stable e-commerce website, testers might use a checklist that identifies the key functions, attributes, and links for each page and cover the relevant elements of this checklist each time a modification is made to the site.
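The checklist-driven maintenance testing in the example above amounts to comparing what was executed against a fixed list and flagging the gaps. The page names and checklist items below are hypothetical:

```python
# Sketch of a methodical (checklist-based) strategy: the same predefined
# checklist is applied each time the site is modified, and any unchecked
# items are flagged. Pages and items are illustrative.

CHECKLIST = {
    "home": ["logo link", "search box", "login link"],
    "product": ["add-to-cart button", "price display", "image gallery"],
    "checkout": ["payment form", "order summary", "confirmation message"],
}

def coverage_gaps(executed: dict) -> dict:
    """Return the checklist items not yet covered, per page."""
    gaps = {}
    for page, items in CHECKLIST.items():
        missing = set(items) - executed.get(page, set())
        if missing:
            gaps[page] = missing
    return gaps

# After a modification, testers record which items they actually covered
executed = {
    "home": {"logo link", "search box", "login link"},
    "product": {"add-to-cart button", "price display"},
}
print(coverage_gaps(executed))   # flags "image gallery" and all of checkout
```

Because the checklist is the same from one release to the next, coverage reports stay comparable over time, which is the main appeal of a methodical strategy for stable systems.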
Reactive strategies (such as using defect-based attacks)
- testing is reactive to the component or system being tested, and the events occurring during test execution, rather than being pre-planned
- the test team waits to design and implement tests until the software is received, reacting to the actual system under test
- Tests are designed and implemented, and may immediately be executed in response to knowledge gained from prior test results.
- Exploratory testing is a common technique employed in reactive strategies.
- Testers periodically report results of the testing sessions to the Test Manager, who may revise the charters based on the findings.
Process- or standard-compliant strategies (such as medical systems subject to U.S. FDA standards)
- involves analyzing, designing, and implementing tests based on external rules and standards, such as:
- industry-specific standards
- process documentation
- the rigorous identification and use of the test basis
- any process or standard imposed on or by the organization
- the test team follows a set of processes defined by a standards committee or other panel of experts, where the processes address:
- the proper identification and use of the test basis and test oracle(s)
- the organization of the test team
- in projects following Scrum Agile management techniques, in each iteration testers:
- analyze user stories that describe particular features
- estimate the test effort for each feature as part of the planning process for the iteration
- identify test conditions (often called acceptance criteria) for each user story
- execute tests that cover those conditions
- report the status of each user story (untested, failing, or passing) during test execution
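The per-story reporting in the last bullet can be reduced to a simple rule: a story is untested until something runs, passing only if every acceptance criterion passes, and failing otherwise. The story names and criteria below are hypothetical:

```python
# Sketch of per-iteration status reporting for user stories, where each
# story's acceptance criteria act as test conditions. Data is illustrative.

stories = {
    "US-101 login":  {"valid credentials": "pass", "locked account": "pass"},
    "US-102 search": {"keyword match": "pass", "empty query": "fail"},
    "US-103 export": {},  # no tests executed yet
}

def story_status(criteria: dict) -> str:
    """Map acceptance-criterion results to a single story status."""
    if not criteria:
        return "untested"
    if all(result == "pass" for result in criteria.values()):
        return "passing"
    return "failing"

for story, criteria in stories.items():
    print(f"{story}: {story_status(criteria)}")
```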
Regression-averse testing strategies (such as extensive automation)
- motivated by a desire to avoid regression of existing capabilities
- includes reuse of existing testware (especially test cases and test data), extensive automation of regression tests, and standard test suites
- the test team uses various techniques to manage the risk of regression, especially functional and/or non-functional regression test automation
- when regression testing a web-based application, testers can use a GUI-based test automation tool to automate the typical and exception use cases for the application; those tests are then executed any time the application is modified
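The core of a regression-averse setup is a reusable, registered suite that is re-run in full after every change. The sketch below uses plain Python functions as stand-ins for GUI-driven tests; the `authenticate` function is a hypothetical placeholder for the application under test:

```python
# Sketch of a regression-averse setup: a reusable suite of automated checks
# (stand-ins for GUI tests) that is re-run on every modification.

REGRESSION_SUITE = []

def regression_test(fn):
    """Register a check so the whole suite is re-run after each change."""
    REGRESSION_SUITE.append(fn)
    return fn

def authenticate(user, password):
    # hypothetical application logic under test
    return password == "correct-password"

@regression_test
def test_typical_login():
    # stand-in for a GUI-driven typical use case
    return authenticate("alice", "correct-password") is True

@regression_test
def test_invalid_login():
    # stand-in for an exception use case
    return authenticate("alice", "wrong-password") is False

def run_regression_suite():
    """Execute every registered check; any False value signals a regression."""
    return {fn.__name__: fn() for fn in REGRESSION_SUITE}

print(run_regression_suite())
```

In practice the registry role is played by a test framework (pytest, JUnit, a GUI tool's suite file); the point is that existing testware is reused unchanged, so a failing entry after a modification is a regression signal.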
Consultative/Directed strategies (such as user-directed testing)
- driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts
- the test team relies on the input of one or more key stakeholders to determine the test conditions
- In outsourced compatibility testing for a web-based application, a company may give the outsourced testing service provider a prioritized list of browser versions, anti-malware software, operating systems, connection types, and other configuration options that they want evaluated against their application.
- The testing service provider can then use techniques such as pairwise testing (for high priority options) and equivalence partitioning (for lower priority options) to generate the tests.
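Pairwise testing works because most configuration faults are triggered by the interaction of just two option values, so it suffices to cover every pair of values at least once rather than every full combination. The greedy sketch below is one simple way to build such a suite; the parameter names and values are hypothetical, and real providers would use a dedicated tool rather than this toy algorithm:

```python
# Sketch of greedy pairwise (all-pairs) test configuration selection.
# Parameters and values are illustrative.
from itertools import combinations, product

params = {
    "browser": ["Chrome 120", "Firefox 121", "Safari 17"],
    "os": ["Windows 11", "macOS 14"],
    "connection": ["wifi", "wired"],
}
names = list(params)

def pairs_of(config):
    """All parameter-value pairs covered by one full configuration."""
    return {((a, config[a]), (b, config[b])) for a, b in combinations(names, 2)}

# Every parameter-value pair that must appear in at least one configuration
required = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

# Greedy: repeatedly pick the configuration covering the most uncovered pairs
all_configs = [dict(zip(names, vs)) for vs in product(*params.values())]
suite, uncovered = [], set(required)
while uncovered:
    best = max(all_configs, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(suite)} configs cover all {len(required)} pairs "
      f"(exhaustive testing would need {len(all_configs)})")
```

For these three parameters the savings are modest, but with, say, ten browsers, five operating systems, and four connection types, exhaustive testing needs 200 configurations while a pairwise suite needs only around 50.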
An appropriate software testing strategy is often created by combining several of these types. The strategies selected should fit the organization’s needs and means, and organizations may tailor strategies to fit particular operations and projects.
The software test manager should keep in mind that:
- different test strategies might be necessary for the short term and the long term
- different test strategies are suitable for different organizations and projects
- the test strategy also differs based on the development model
The Test Strategy may also describe the test levels to be carried out and it should give guidance on the entry and exit criteria for each level, but also on the relationships among levels.
This article is based on the ISTQB Advanced Syllabus version 2012 and it also references the ISTQB Foundation Syllabus version 2018. It uses terminology definitions from the ISTQB Glossary version 3.2.
In late 2019 we launched A Test Manager’s Guide eBook, which serves as the base for this article. You can check out more useful test management lessons by enrolling for free to view Chapter 1 – Back to the basics.