Date: 11 June 2013
All too often, defects raised during test design and execution highlight gaps in the specification. During retrospectives, we notice that the feedback provided during the static review process missed important scenarios altogether. Finding solution gaps this late in the process points to the inefficiency of the techniques used for static testing.
In the interest of bridging this gap, the static review phase of a traditional project is an area where lean techniques could improve the effectiveness of specification reviews conducted prior to test design and execution.
Here, I aim to share an approach that was successfully applied during the review phase of a traditional project. I do not intend to re-invent the wheel, but rather to describe the context and the approach that helped to identify solution gaps in specifications. Part 1 describes the context in which the approach was used and where it sits in the software development lifecycle; Part 2, which we’ll publish next week, covers the lessons discovered through trial and error, illustrated with a practical example.
A great way to find solution gaps in a specification is by identifying and communicating scenarios. Reviewing a document you did not author automatically removes the author’s bias from your perspective, so the specification’s ability to communicate the new process is immediately tested. The trick, as testers, is to find the most efficient and effective way of identifying and communicating scenarios within the unique context of each project.
The nature and structure of a specification can obscure the end-to-end process because of the need for detail in each requirement. This makes it easy to review the process in ‘pieces’, i.e. checking consistency within the document and whether each requirement is testable. However, it makes it much harder to identify the ‘solution gaps’ that can occur, and to communicate them effectively.
In an attempt to identify ‘solution gap’ type defects on a project, walkthroughs have traditionally been run and chaired by business analysts. While effective in many ways, a gap remains: the structure of the document tends to be the main focus of discussion, and scenarios are only discussed verbally by the experts in the meetings. That is not to say these walkthroughs shouldn’t occur, just that there is an opportunity to hold additional, more regular reviews between the team members involved, where the main focus is to identify and document scenarios. One way of achieving this is to borrow a small aspect of an approach called Specification by Example, by Gojko Adzic.
Specification by Example (SBE) is an approach in which the specification contains examples that can be validated automatically. ‘Stories’ are written as scenarios using a ‘given, when, then’ structure, which makes them easy to document and maintain. Through the use of automation tools and a Continuous Integration server, the stories are automated and their outcomes recorded. With this approach, specifications become living documents.
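To make the ‘given, when, then’ structure concrete, here is a minimal sketch of how one such scenario could be expressed as an executable check. The domain (a simple account withdrawal) and all names are hypothetical, invented for illustration; real SBE teams would typically use a dedicated tool rather than plain functions like this.

```python
# A hypothetical 'given, when, then' scenario expressed as executable code.
# The Account class and the withdrawal rule are invented for illustration.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def scenario_withdrawal_within_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the holder withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70

scenario_withdrawal_within_balance()
```

Run on a Continuous Integration server, checks like this record whether each documented scenario still holds as the system changes, which is what turns the specification into a living document.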
At the other end of the spectrum, traditional projects produce elaborate specifications in which the requirements and their details are already defined. In this context, a tester would generally perform static testing on the artefact during the review phase of the project. Test scenarios are then developed, which feed into the test cases that are later executed. These two activities (i.e. writing test scenarios and performing a static review) are generally done independently of one another. The use of SBE, as described in Part 2, acts as the glue that allows these activities to co-exist. Figure 1 below illustrates the concept.
Writing scenarios allows the tester to document test scenarios specifically for static testing. This enables effective communication and forms a base from which solution gap defects can be identified during the review phase of the artefact.
Using this approach on a project has shown that it aids the communication of scenarios, helps to identify gaps in the proposed solution, allows the team to confirm their understanding of the basic flows, is flexible in the level of detail one can go into, and promotes collaboration and communication within the team. It may also aid identification of the Minimum Viable Product (MVP), form part of test case review, aid test case design, and help distinguish ‘valuable’ from ‘unnecessary’ feedback during the review process. Because the approach involves all team members, it does not rely on the experts of the systems alone to identify solution gaps. The increased visibility of the scenarios developed as a team also increases confidence in the product to be delivered.
The benefit of this approach is that solution gap defects are exposed before test design is completed and before test execution commences. It enables the team to identify gaps, communicate them, and analyse the solution as a whole, effectively and earlier in the review phase.
Increasing the opportunity for earlier communication between testers, developers, and business analysts increases the opportunity for discoveries other than solution gaps to be made earlier; for example, large technical changes required to accommodate the new process can be communicated sooner. Additionally, follow-up sessions on the updated scenarios can expose misunderstandings of the required changes that would otherwise only surface during test execution.
This time spent earlier in the lifecycle of the product means that each team member is able to gain confidence in their interpretation of the proposed solution as it changes, thanks to the increased communication.
Providing a regular platform from which the testers, business analysts, and developers assigned to delivering a business process can communicate scenarios creates an environment in which the solution can be clarified, challenged, and analysed earlier, as a team.
Solution gap defects: defects where the tester doesn’t know what the expected outcome should be; or where an outcome is catered for but some or all of the process steps to reach it are missing; or where the existing business process affects the new process in a way that was not considered.
I use the term ‘team’ to refer to the cross-functional team assigned to the delivery of the product. For example, the team assigned to deliver a business process could involve one tester, two developers, and one business analyst.
For ease of reference, the area in question will be referred to as ‘scenarios’, as shown in Figure 1.