Test-Case Reviews in Scrum Teams
Introduction
Software reviews are advantageous when the developer's bias is challenged constructively. A software engineer developing anything from an architectural diagram to a unit of code to a test-case is biased. An effective review of any software artifact is therefore usually performed by someone other than the person who developed it. Since test-cases are designed to exercise the code, the code reviewer is a good candidate for reviewing the test-cases too. After reviewing the code, they have an insight into it that makes them an appropriate white-box tester.
In this article, I will share a recent experience in which a newly formed Scrum team evolved and learned through its successes and mistakes. The role of test-case reviews was twofold. First, they brought technical expertise to both the developers (who gained insight into how to test) and the testers (who learned programming skills). Second, and most important, test-case reviews acted as a laboratory for strengthening the bonds between team members. The Scrum team transformed into a cohesive team whose members shared and cared. All of them went well beyond their comfort zones for the benefit of the team.
A Path Towards Improvement
In a Scrum team consisting of five developers and one tester, we recently started test-case reviews. In the team's early stages, the tester was solely responsible for test-case design, development, and execution. Unit-testing started at a later stage, when the efficiency and effectiveness of the Scrum team became an issue. From the early stages, the question that begged for an answer was: who should test what, when, and how, to maximize our team's velocity and minimize the number of bugs in our software?
The Newly Formed Team
When the Scrum team was formed, everyone was eager to produce code and help the team become one of the best teams in the company. The goal was simple: produce working code as fast as possible. If working code was not produced soon, there was always the fear that the Scrum team would break up and its members would join other teams.
The Bottleneck of Testing Only by the QA Engineer
As all the developers started coding, most of the testing was left to the tester. Without second thoughts, the tester started gathering all the data needed during the appropriate Scrum ceremonies. All the information necessary for designing, developing, prioritizing, and executing the test-cases was there. When a user-story was developed and ready for QA-testing, its status was set to QA. By that time, the tester had finished designing and developing the test-cases and was waiting to execute them.
When QA-testing of a user-story was finished, its status was set to code-merging and then to done. For the first two (two-week) sprints, this worked fine. By the third sprint, however, a bottleneck was evident. There were too many user-stories in QA status for a single tester, and the velocity of the team depended on the tester's ability to test fast. Since Scrum velocity reflects the number of user-stories released per sprint, it was the QA-testing activity that slowed down the release.
Tackling the Testing Bottleneck by Sharing Testers
In an attempt to alleviate the problem, testers were borrowed from other Scrum teams during the test-execution of a user-story. This increased the velocity of the team at the expense of added complexity in synchronizing between different teams. When a tester was taken from his team to help another team, the negative impact on his own team's velocity was obvious. Although we managed to test more user-stories over several sprints, we found that we had created a serious problem for other teams. A single Scrum team's gain came at the expense of the entire company, and as a result, we stopped sharing testers.
A First Realization of the Inevitable
Soon enough, the team started to realize that the road forward would be to test as a team. The tester trained the rest of the team in how to smoke-test user-stories at the UI level. Discussions took place about risk-based software testing [1-2], testing for quick feedback [1-2], and testing the most important test-cases first, then iteratively testing the rest. For user-stories that had no UI to test, unit-testing [3-5] and API-level testing [1-2] were introduced. With unit-testing evolving and growing, and with each developer performing a minimum set of smoke-tests whenever applicable, our bottleneck was alleviated, although not eliminated. Our velocity improved, but there was still great room for improvement.
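As a rough illustration of the quick-feedback, API-level testing the team introduced for stories without a UI, here is a minimal sketch in Java with JUnit 5 and the JDK's built-in HttpClient. The service name, base URL, and health endpoint are assumptions invented for the example, not the actual system we tested.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

// Hypothetical API-level smoke test; all names and URLs are illustrative.
class OrderServiceSmokeTest {

    private static final String BASE_URL = "http://localhost:8080"; // assumed test environment

    @Test
    void healthEndpointRespondsWithOk() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/health")) // assumed health endpoint
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Quick feedback: confirm the service is up and answering
        // before deeper, slower test-cases are executed.
        assertEquals(200, response.statusCode());
    }
}
```

A check like this runs in seconds and gives the developer, not just the tester, an early signal that a story is ready for deeper QA-testing.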
Reverting to Old Habits
The mentality of the team was that all members should help with testing to improve velocity. Whole-team testing was seen as a measure to overcome our difficulties, not as a development best practice that should form the basis for the team's growth and improvement. When the velocity problem started to improve, the team reverted to its old habits. Testing was mainly done by the tester. In sprints with fewer story points planned, the team either picked up more user-stories or performed bug-fixing. The bottleneck of QA-based testing started to become evident again.
From Testing to Mitigate Bottlenecks to Testing as a Development Best Practice
When the team realized that testing is a best practice and not a temporary measure, things started to change drastically. After training and education, and with the help of the retrospective meetings, the team started taking on testing responsibilities for the long term. People acknowledged that testing was part of everyone's job. Not the same kind of testing, and not necessarily everyone testing at the same time or the same level; it was an activity to be tailored to each member so that they were as productive as possible and the team performed at its best.
Unit-testing transformed from a nice-to-do into a must-do activity. Following the principles described in [3-5], unit-tests became the norm for catching bugs early and improving the overall quality of our software. Factors like the inner quality of the code [3] became a usual topic of discussion. Code reviews also became the norm. They led to one of the main factors of our team's growth: test-case reviews. While code reviews were always between developers, test-case reviews could be between two developers or between a tester and a developer.
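To show the kind of must-do unit test a reviewer would look for, the sketch below uses JUnit 5 against a hypothetical DiscountCalculator. The class and its rules are invented for illustration; the point is the pattern of covering the happy path, a boundary value, and an invalid input so that bugs are caught long before QA-testing.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical production class, shown inline to keep the sketch self-contained.
class DiscountCalculator {
    // Returns the price after applying a percentage discount in the range [0, 100].
    double apply(double price, double discountPercent) {
        if (discountPercent < 0 || discountPercent > 100) {
            throw new IllegalArgumentException("discount must be between 0 and 100");
        }
        return price * (1 - discountPercent / 100.0);
    }
}

class DiscountCalculatorTest {

    private final DiscountCalculator calculator = new DiscountCalculator();

    @Test
    void appliesOrdinaryDiscount() {
        // Happy path: a 10% discount on 100.0 leaves 90.0.
        assertEquals(90.0, calculator.apply(100.0, 10.0), 0.0001);
    }

    @Test
    void fullDiscountBringsPriceToZero() {
        // Boundary value: 100% is the upper edge of the valid range.
        assertEquals(0.0, calculator.apply(100.0, 100.0), 0.0001);
    }

    @Test
    void rejectsNegativeDiscount() {
        // Invalid input is rejected at the unit level.
        assertThrows(IllegalArgumentException.class, () -> calculator.apply(100.0, -5.0));
    }
}
```

In a test-case review, the conversation is less about the assertions themselves and more about whether the chosen cases cover the risks that matter for the user-story.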
Improving Technically
Test-case reviews initially started as a training exercise given by the tester to the developers. The goal was to teach the developers how to test using a tester's best practices. Questions that were addressed included: In what different ways can a test-case be created, depending on what you want to test? How do you choose the level of detail in a test-case? How many test-steps should there be, and how many things should be tested in a single test-case?
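One way these review questions can play out in code is sketched below with a parameterized JUnit 5 test (assuming the junit-jupiter-params module is on the classpath). The shipping-fee rule and every name are invented for illustration: the level of detail lives in the data rows, each row covers one equivalence partition or boundary, and the test itself verifies a single behavior.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical example: one behavior per test-case, detail carried by the data rows.
class ShippingFeeTest {

    // Invented rule for illustration: orders of 50.0 or more ship for free,
    // everything below that pays a flat fee of 5.0.
    static double shippingFee(double orderTotal) {
        return orderTotal >= 50.0 ? 0.0 : 5.0;
    }

    @ParameterizedTest(name = "order of {0} pays a shipping fee of {1}")
    @CsvSource({
            "10.0, 5.0",   // well inside the 'pays fee' partition
            "49.99, 5.0",  // just below the free-shipping boundary
            "50.0, 0.0",   // exactly on the boundary
            "120.0, 0.0"   // well inside the 'free shipping' partition
    })
    void chargesTheExpectedShippingFee(double orderTotal, double expectedFee) {
        // A single observable behavior is checked; each row is one test-case.
        assertEquals(expectedFee, shippingFee(orderTotal), 0.0001);
    }
}
```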
Developers started showing their code to the tester, explaining the basics and how they planned to test. Whenever appropriate, they explained how interacting with the UI translated into interactions between different parts of the code. As the tester was responsible for the test-case reviews, she needed at least a high-level overview of the code. To achieve this, the developers trained her in the basics of the programming language used.
The developers evolved because they now tested efficiently and effectively, according to best practices. The tester evolved because she came to understand coding basics, which made her a good candidate for a software developer in test.
As soon as the developers had the confidence to write and execute their own test-cases, test-case reviews were done by the developers doing the code reviews. Each user-story included its reviewed test-cases. The team reached a point where test-cases at any level (unit, API, UI) could be reviewed by anyone in the team.
Improving Team Bonding
Test-case reviews shifted the interpersonal dynamics of the team positively. The very roles of testing and developing software were challenged constructively. Where does software testing begin and where does it end? Where does software development begin and where does it end? Is it always beneficial to separate the two? Test-case reviews were a key activity that raised interesting questions and fruitful discussions, an activity that made team members recognize their commonalities while appreciating the need for different points of view at work.
Conclusion
What started as a means to relieve the team's velocity bottleneck transformed into a development best practice. It resulted in whole-team improvement, with each member growing and learning. This became possible because, along this path, the team mentality and bonding also evolved and grew stronger. It was the team's goals, problems, and achievements that motivated each individual to improve, and to give and receive training.
Test-case reviews were a crucial factor in this transformation. They helped the two distinct roles, developer and tester, mix and interact in a constructive and educational way. Built around the test-case artifact and the need to test as a team, test-case reviews acted as the glue of the team, the magic ingredient for improved bonding and performance.
References
1. Agile Testing: A Practical Guide for Testers and Agile Teams, Lisa Crispin and Janet Gregory, 2008
2. More Agile Testing: Learning Journeys for the Whole Team, Janet Gregory and Lisa Crispin, 2014
3. Test-Driven Development: By Example, Kent Beck, 2002
4. The Art of Unit Testing, 2nd Edition, Roy Osherove, 2013
5. Effective Unit Testing, Lasse Koskela, 2013