A Test Manager knows that four software test phases need to be under control for a successful testing process: Test Analysis, Test Design, Test Implementation and Test Execution.
Besides these test phases, a test manager does not forget about Test Planning and Test Closure.
Test Analysis is the process of analyzing the test basis (all documents from which the requirements of a component or system can be inferred) and defining test objectives. It covers WHAT is to be tested, in the form of test conditions, and can start as soon as the basis for testing is established for each test level.
It can be performed in parallel with, integrated with, or iterated with Test Design. It evaluates and reviews the test objectives and product risks, and defines detailed measures and targets for success.
Deciding on the level of detail should consider:
- The level of testing; level of detail and quality of the test basis
- System/software complexity and development life cycle used
- Project and product risk
- Relationship between the test basis, what is to be tested and how it is to be tested
- Test management tool used
- The level of maturity of the test process and the skills and knowledge of the test analysts
- The level at which Test Design and other test work products are specified
- Availability of stakeholders for consultation
A test condition is an item or event of a component or system that could be verified by one or more test cases (e.g. a function, transaction or feature). Test Design covers HOW something is to be tested, by identifying test cases through stepwise elaboration of the test conditions (from Test Analysis) or of the test basis, using techniques identified in the test strategy or plan.
This phase can start for a given Test Level once Test Conditions are identified and enough information is available to enable the production of Test Cases.
In other words, a test case is a set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
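The definition above can be sketched in code. This is a minimal, hypothetical example (the `BankAccount` class and test name are invented for illustration, in the style of a pytest test function) showing input values, an execution precondition, an expected result and a post-condition check:

```python
# Minimal, hypothetical system under test, invented for this example.
class BankAccount:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance


def test_withdraw_within_balance():
    # Execution precondition: an account exists with a known balance.
    account = BankAccount(balance=100)

    # Input values: withdraw 30.
    result = account.withdraw(30)

    # Expected result: the new balance is returned.
    assert result == 70

    # Execution post-condition: the account state reflects the withdrawal.
    assert account.balance == 70
```

Each assertion ties the test back to its objective: exercising one specific path (a withdrawal within the available balance) against a specific expected outcome.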
Although it can be merged with Test Analysis, for higher levels of testing it will remain a separate activity. It is likely that some tasks that normally occur during test implementation will be integrated into the test design process, especially when using an iterative approach.
Coverage of the test conditions, whether through low-level (concrete) or high-level (logical) test cases, can be optimized by starting test data creation during Test Design.
Test Implementation is the process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.
This is when tests are organized and prioritized and when test designs are implemented as test cases, test procedures and test data. It is important to pick the right tests and run them in the right order. This matters even more in risk-based strategies, where we prioritize based on the likelihood and impact of problems.
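Risk-based ordering can be sketched very simply. In this illustrative example (test names and scores are invented), each test carries a likelihood and an impact score, and tests are executed in descending order of risk, computed as likelihood × impact:

```python
# Hypothetical test inventory with risk scores (1 = low, 5 = high).
tests = [
    {"name": "login",         "likelihood": 3, "impact": 5},
    {"name": "report_export", "likelihood": 2, "impact": 2},
    {"name": "payment",       "likelihood": 4, "impact": 5},
]

# Order tests by risk = likelihood * impact, highest first.
execution_order = sorted(
    tests,
    key=lambda t: t["likelihood"] * t["impact"],
    reverse=True,
)
# -> payment (20), login (15), report_export (4)
```

In practice the scores would come from product risk analysis rather than be hard-coded, but the ordering principle is the same.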
At this stage, the Test Manager should ensure:
- the test environment is delivered
- test data are delivered
- constraints, risks and priorities are checked
- the test team is ready for execution
- entry criteria (explicit and implicit) are checked
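An entry-criteria check like the one above can be made explicit as a simple checklist. This is a sketch with invented criterion names, not a prescribed format:

```python
# Hypothetical entry criteria for test execution; True means satisfied.
entry_criteria = {
    "test_environment_delivered":   True,
    "test_data_delivered":          True,
    "risks_and_priorities_checked": True,
    "test_team_ready":              False,
}

# Collect any unmet criteria; execution should not start while this is non-empty.
unmet = [name for name, met in entry_criteria.items() if not met]
ready_for_execution = not unmet
```

Recording the check this way also leaves an auditable trace of why execution was (or was not) allowed to start.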
Some organizations may follow the IEEE 829 standard to define inputs and their associated expected results during testing. Others apply rigorous rules only when they need to provide evidence of compliance for regulatory projects or of adherence to standards.
In the most common cases, the test inputs are usually documented together with expected results, test steps and stored test data. Just like with test conditions and test cases, during test implementation we again face the decision between an extensive (detailed) approach and a light (generic) one. This decision should be based on your understanding of the development lifecycle and on the predictability of the software features under test.
That said, do not discount extensive implementation preparation, because:
- Concrete test cases provide working examples of how the software behaves
- When tests are archived for long term and re-use in regression these details may become valuable
- Domain experts are likely to verify versus a concrete test rather than an abstract business rule
- Weaknesses in the software specification are identified along the way
Some defects can be found only in production-like test environments. These are often expensive to procure and difficult to configure and manage. Similar challenges arise in the use of production data or production-like data, which can raise data privacy or other concerns.
Test implementation is not all about manual testing; this is also the stage where automation scripting takes place and where the prioritization and execution order of automated versus manual tests are established.
Beyond automation, tool acquisition happens here as well, especially for the test data generation required to prepare for load, volume or performance testing.
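Generating synthetic data, rather than copying production data, is one way to avoid the privacy concerns mentioned earlier. The sketch below (field names and volume are invented) produces a batch of random user records of the kind a load test might feed into a system:

```python
import random
import string

def random_user(i):
    """Build one synthetic user record; all fields are fabricated."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {"id": i, "username": name, "email": f"{name}@example.test"}

# Generate 1000 synthetic users for a load/volume test run.
users = [random_user(i) for i in range(1000)]
```

Real test data generation tools add constraints (valid formats, referential integrity, realistic distributions), but the principle of fabricating rather than copying sensitive data is the same.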
Test execution is the process of running a test on the component or system under test, producing actual results.
Before execution starts, tests should be designed or at least defined; tools should be in place for test management, defect management and test automation (if applicable); and standards for test logging and defect reporting should be published. Execution begins once the test object is delivered and the entry criteria for test execution are met.
During execution, a Test Manager's role is to:
- Monitor progress according to the plan
- Initiate and carry out control actions to guide testing
- Ensure that test logs provide an adequate record of relevant details for tests and events
During execution it is important to maintain traceability between the test conditions, the test basis and the test objectives, and to keep an appropriate level of test logging. Time should also be reserved for experience-based and defect-based test sessions driven by the testers' findings.
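Traceability is often kept as a simple matrix linking each test case back to its test condition and requirement. This is an illustrative sketch with invented identifiers, showing how such a mapping supports both directions of lookup:

```python
# Hypothetical traceability matrix: test case -> condition and requirement.
traceability = {
    "TC-001": {"condition": "COND-login-valid",  "requirement": "REQ-12"},
    "TC-002": {"condition": "COND-login-locked", "requirement": "REQ-12"},
    "TC-003": {"condition": "COND-export-csv",   "requirement": "REQ-27"},
}

def cases_for_requirement(req):
    """Answer 'which test cases cover this requirement?' from the matrix."""
    return sorted(
        tc for tc, link in traceability.items() if link["requirement"] == req
    )
```

With this in place, a failing test can be traced to the requirement it threatens, and a changed requirement can be traced to the tests that need re-running.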
This article is based on the ISTQB Advanced Syllabus version 2012 and it also references the ISTQB Foundation Syllabus version 2018. It uses terminology definitions from the ISTQB Glossary version 3.2.