A review is a type of static testing during which a work product or process is evaluated by one or more individuals to detect issues and to provide improvements. When done properly, reviews are the single biggest, and most cost-effective, contributor to overall delivered quality. This is why a test manager must understand the different review types and assess which ones add the most value to the testing process.
Informal testing review
An informal review is a review without a formal or documented procedure. Also known as: buddy check, pairing, pair review. Main purpose: detecting potential defects. Can also: generate ideas or solutions, and quickly solve minor problems.
During an informal review we may have: a review meeting, documented results, and the use of checklists. It may be performed by a colleague or by a group of people.
Formal software testing review
WALKTHROUGH in software testing
A walkthrough is a review in which the author leads members of the review team through a work product while the members ask questions and make comments about possible issues. Main purpose: find defects, improve the product, consider alternative implementations, evaluate conformance to a standard or specification. Can also: facilitate the exchange of ideas, train the participants, achieve consensus.
A walkthrough can vary from very formal to informal, but we must always have a review meeting and a scribe for that meeting. We may also have individual preparation before the meeting, use checklists, produce potential defect logs and review reports, and present the work product as scenarios, dry runs or simulations.
A technical review is a formal review type executed by a team of technically qualified personnel that examines the suitability of a work product for its intended use and identifies discrepancies from specifications and standards. Main purpose: gaining consensus, detecting potential defects. Can also: evaluate quality and build confidence in the product, generate new ideas, motivate and enable authors to improve, consider alternative options.
During a technical review we must have individual preparation before the meeting and a scribe who is not the author. We should have reviewers who are technical peers of the author or technical experts in the same or other disciplines, and we should produce potential defect logs and review reports. We may hold a review meeting and may use checklists.
INSPECTION in software testing
An inspection is a formal review type used to identify issues in a work product, which provides measurement to improve the review process and the software development process. Main purpose: detecting potential defects, evaluating quality and building confidence in the product, preventing future similar defects through author learning and root cause analysis. Can also: motivate and enable authors to improve future work products and the software development process, and achieve consensus.
There is a defined process with formal documented outputs, based on rules and checklists. We must:
- have clearly defined mandatory roles;
- have individual preparation;
- have reviewers who are peers of the author or experts in disciplines relevant to the work product
- have specified entry and exit criteria
- have a scribe
- have a trained facilitator lead the review meeting (not the author)
- never have the author as leader, reader or scribe
- produce potential defect logs and review reports
- collect metrics and use them to improve the entire software development process
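The last point above is what distinguishes an inspection: raw counts from each inspection are turned into rates that can be tracked over time. The sketch below is only an illustration of that idea; the field names and inputs are assumptions, not syllabus definitions, and a real organization would choose its own measures.

```python
# Illustrative sketch of inspection metrics collection (all names and
# inputs are assumed for this example, not prescribed by the syllabus).

def inspection_metrics(pages_inspected, prep_hours, meeting_hours,
                       defects_major, defects_minor):
    """Derive simple per-inspection rates from raw counts."""
    total_defects = defects_major + defects_minor
    return {
        # Preparation rate: pages covered per hour of individual prep.
        "prep_rate_pages_per_hour": pages_inspected / prep_hours,
        # Defect density: defects found per inspected page.
        "defect_density_per_page": total_defects / pages_inspected,
        # Cost per defect: total review effort divided by defects found.
        "hours_per_defect": (prep_hours + meeting_hours) / total_defects,
    }

# Example: a 20-page document, 4 hours of preparation, a 2-hour
# meeting, and 12 defects found in total.
metrics = inspection_metrics(pages_inspected=20, prep_hours=4,
                             meeting_hours=2, defects_major=3,
                             defects_minor=9)
print(metrics)
```

Tracking such rates across inspections is what enables the process improvement the inspection type is designed for, e.g. spotting that preparation rates are too high (reviewers skimming) when defect density drops.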
Testing MANAGEMENT REVIEW
The management review is a systematic evaluation of software acquisition, supply, development, operation, and maintenance processes, performed by or on behalf of management. It monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, and evaluates the effectiveness of management approaches.
To achieve fitness for purpose, management reviews are used to monitor progress, assess status, and make decisions. They also serve as decision support for adapting the level of resources, implementing corrective actions, and changing the scope of the project.
Management reviews have the following key traits:
- Conducted by or for managers having direct responsibility for the project or system
- Check consistency with and deviations from plans
- Check if management procedures are adequate
- Assess project risks
- Evaluate impact of actions and ways to measure them
The output of this review consists of lists of action items, issues to be resolved, and decisions made. A Test Manager should participate in, and may initiate, management reviews of testing processes and should consider such reviews an integral part of process improvement (e.g., management reviews of processes, project retrospectives, lessons learned, etc.).
Software testing AUDITS
An audit is an independent examination of a work product, process, or set of processes that is performed by a third party to assess compliance with specifications, standards, contractual agreements, or other criteria. Audits usually:
- are performed to demonstrate conformance to a defined set of criteria
- are conducted and moderated by a lead auditor
- provide evidence of compliance, collected through interviews, witnessing, and examining documents
- have documented results
How to manage testing reviews?
Software testing reviews are a key tool for a test manager, but mastering the art of managing them is what reduces the cost of quality. Knowing how to manage both informal and formal testing reviews helps the Test Manager across projects and products.
Before we dive into managing test reviews, it is worth recapping the informal and formal review types described above.
The software testing review strategy must be coordinated with the test policy and the overall test strategy. Reviews should be planned to take place at natural break points or milestones:
- typically after requirement and design specification
- start with business objectives and work down to low-level design
- often as part of a verification activity before, during, and after test execution
Before establishing an overall review plan at the project level, the review leader (who may be a Test Manager) should take into account:
- What should be reviewed (product and processes)
- Who should be involved in specific reviews
- Which relevant risk factors to cover
The review leader should also:
- identify the items to be reviewed
- select the appropriate review type and level of formality
- identify a budget (time & resource) for the review
- create a business case and include a risk evaluation and return on investment calculation
The optimal time to perform reviews depends on:
- availability of the items to review in a sufficiently final format
- availability of the right personnel for the review
- time when the final version of the item should be available
- time required for the review process of that specific item
The objective of the review must be defined during test planning and should also include:
- conducting effective and efficient reviews
- reaching consensus decisions regarding review feedback
A good tip for audits or inspections on big projects is to use brief inspections/audits conducted at the author's request as document fragments are completed. This enables initial, iterative checks rather than a big-bang approach where everything is reviewed at once. Another option is to hold an advance audit prior to the main certification audit.
We should not forget about project reviews, which are frequently held for the overall system and may also be necessary for subsystems and even individual software elements. Based on project complexity or product risks, we should adjust:
- the number of reviews
- the types of reviews
- the organization of reviews
- the people involved in reviews
Participants in reviews must:
- Have the appropriate level of knowledge, both technical and procedural
- Be thorough and pay attention to details
- Communicate clearly and prioritize correctly
- Understand their roles and responsibilities in the review process
Review planning should address:
- The risks associated with technical factors, organizational factors and people issues
- The availability of reviewers with sufficient technical knowledge
- Commitment from each team to the success of the review process
- Allocation of sufficient time by each organization for required reviewers to prepare for and participate in the reviews
- Time allocation for required technical or process training for the reviewers
- Identification of backup reviewers in case key reviewers become unavailable due to changes in personal or business plans
During execution, a review Leader must ensure:
- Adequate measurements are provided by the participants in the reviews to allow evaluation of review efficiency
- Checklists are created and maintained to improve future reviews
- Defect severity and priority evaluation are defined for use in defect management of issues found during reviews
After each review, the review Leader should:
- Collect the review metrics and ensure that the issues identified are sufficiently resolved to meet the specific test objectives for the review
- Use the review metrics as input when determining the return on investment (ROI) for reviews
- Provide feedback information to the relevant stakeholders
- Provide feedback to review participants
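The ROI determination mentioned above can be made concrete with a simple model: the benefit of a review is the downstream cost avoided by catching defects early, set against the effort the review consumed. The figures and formula below are illustrative assumptions, not a prescribed calculation; a real estimate would use the organization's own effort and defect-cost data.

```python
# Hypothetical sketch of a review ROI calculation. All cost figures
# are invented for illustration.

def review_roi(review_cost, defects_found, avg_cost_if_escaped):
    """ROI = (savings - cost) / cost, where savings is the estimated
    downstream cost avoided by fixing defects at review time."""
    savings = defects_found * avg_cost_if_escaped
    return (savings - review_cost) / review_cost

# Example: a review costing 1,000 (effort units) that finds 10 defects,
# each of which would have cost 500 to fix later in testing or production.
roi = review_roi(review_cost=1000, defects_found=10, avg_cost_if_escaped=500)
print(f"ROI: {roi:.0%}")
```

Even a rough model like this lets the review leader compare review types and justify the time reviewers spend on preparation and meetings.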
The effectiveness of a software testing review is evaluated by comparing the results of the review with the defects actually found in subsequent testing. If a work product was reviewed but is later found defective, the review leader should consider how the review process allowed those defects to escape. Some likely causes:
- problems with the review process (such as poor entry or exit criteria)
- improper composition of the review team
- inadequate review tools (checklists)
- insufficient reviewer training and experience
- too little preparation and review meeting time
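One common way to express this review-versus-subsequent-testing comparison is a defect detection percentage: the fraction of all known defects in the work product that the review caught. The sketch below is an assumed illustration; the 60% warning threshold is invented for the example, not a standard value.

```python
# Illustrative sketch: quantifying review effectiveness by comparing
# defects found in the review with defects that escaped to later testing.
# The 0.6 threshold below is an assumed trigger, not a standard.

def review_effectiveness(found_in_review, escaped_to_testing):
    """Fraction of known defects that the review caught."""
    total = found_in_review + escaped_to_testing
    return found_in_review / total if total else 0.0

# Example: the review found 18 defects; testing later found 12 more.
ddp = review_effectiveness(found_in_review=18, escaped_to_testing=12)
print(f"Review effectiveness: {ddp:.0%}")

if ddp < 0.6:  # assumed threshold for investigating the review process
    print("Effectiveness below target - investigate the likely causes")
```

A falling trend in this figure across projects is exactly the kind of pattern the review leader should investigate.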
Reviews can lose their effectiveness over time, and this is often noticed in project retrospectives. In such cases, the review leader should investigate the cause. A pattern of escaped defects that is consistent across projects is another indicator of significant problems with the review process, which must be assessed and addressed. Metrics must focus on the review process and must never be used to punish or reward individuals.
Please note that all these software testing reviews can work wonders for the test process if you manage them properly.
This article is based on the ISTQB Advanced Syllabus version 2012 and it also references the ISTQB Foundation Syllabus version 2018. It uses terminology definitions from the ISTQB Glossary version 3.2.