Good Tests and Bad Metrics: Choosing Quality Over Quantity in Software Testing
Enough with the quantitative metrics already! There are better, and more accurate, ways of measuring your testing progress.
As part of the quality assurance team, you are expected to carry the quality stick and ensure quality is maintained to a "T." We are always asked to put ourselves in the shoes of the customer and ensure the products and projects meet their expectations at the highest achievable quality.
But the irony is that all our quality metrics boil down to quantitative numbers and terms: bugs logged, test cases written, test cases executed, time spent on testing, URLs tested, browsers checked for cross-browser testing, defect leakage, and more.
We have designed a working system in which we are asked to place quality over quantity, yet are eventually analyzed on a quantitative basis. I believe a purely quantitative approach to testing is unfair to your software testing team, and even if we do follow a quantitative approach, there has to be a systematic way to judge the individual effort behind our software testing metrics.
Is It Okay to Justify Our Software Testing Metrics With Numbers Alone?
We need to question how we can justify our testing approach when every path is visualized quantitatively. This is one of the reasons the quality of testing has been declining so drastically. A simple example is measuring your team's efficiency or efficacy by the number of bugs logged.
The very first approach every team member will take is to find as many bugs as they can, anywhere in the application. Many would argue: what does it matter, as long as we are finding bugs in the web application? But this is exactly where the quality of testing comes into play.
In the Agile software development approach most of us follow these days, where cycles keep shrinking and testing gets pushed to the end of each one, all we are left with is high pressure to test applications in the shortest span available. We perform risk-based testing and smoke testing for each and every release to ensure the application provides a seamless experience to its end users.
Driving the team with this numbering system won't help in such crucial situations. It's not the number of bugs that counts but the essence of each one, and the level of disruption it can cause if it slips through to the customer. So even though we may pile a certain number of bugs into our bucket, we may still have gone tremendously wrong on the quality of the delivered product, because of the mindset we adopted while hunting through our application. This is one of the biggest reasons why, when defining our software testing metrics, quality should outweigh quantity every time.
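To make this concrete, here is a minimal sketch in Python. The severity weights and the two bug lists are illustrative assumptions, not a prescribed formula; the point is only that two testers with identical bug counts can represent very different value to the customer once severity is taken into account.

```python
# A minimal sketch: same bug count, very different customer impact.
# The severity weights below are illustrative assumptions.
SEVERITY_WEIGHTS = {"critical": 10, "major": 5, "minor": 1}

def weighted_score(bugs):
    """Sum severity weights instead of simply counting bugs."""
    return sum(SEVERITY_WEIGHTS[severity] for severity in bugs)

tester_a = ["minor"] * 10                                     # ten cosmetic issues
tester_b = ["critical", "critical", "major"] + ["minor"] * 7  # also ten bugs

print(len(tester_a), weighted_score(tester_a))  # 10 bugs, score 10
print(len(tester_b), weighted_score(tester_b))  # 10 bugs, score 32
```

A raw bug-count metric rates both testers equally; a severity-aware view does not.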
Today, the approach to maintaining quality is completely different and speaks only in terms of stakeholders' satisfaction. Quality is completely customer-driven. Quality equals profit for stakeholders. The higher the quality, the higher the level of predictability for the software, which means one can take risks in how the product is priced in the market.
The stakeholders should know where they stand in terms of the stability of the product and how hard they can push it in the market. But this gets lost completely in the way we start working when we kick off a project: collecting requirements, defining the scope of testing, team coordination and allocation, testing activities, and more. We tend to forget our real mission as testers, which is the primary objective behind building the project in the first place: to solve problems for end users.
However, the important question should be: what are we testing to ensure those problems are resolved? And if they are not, do we provide frequent feedback to our stakeholders to give them better insight into the project? It's important to keep asking ourselves questions like: "Is this what the customer expects?" or "Is there a better way to solve the same problem?" Just taking requirements from the client and building them does not fulfill our job. As we start working on a project, we need to sit with the client to understand what their expectations are and how they visualize the quality aspect of it.
For example, if your client is focused on branding, then even a pixelated logo would be a high-severity issue for you, if not for the developer. If they are building a financial application, then UI and UX may matter less to them than the security of their users' data. Here, "objective" is the key. This is something we need to ingrain in ourselves as testers. It should be the driving force behind your team's OKRs (Objectives and Key Results) rather than those number-driven metrics.
OKR is a popular leadership process that helps individuals, teams, and organizations work together toward their goals in one unified direction, setting objectives across teams and the organization. OKRs help teams focus on productivity and drive company culture.
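As an illustration only (the objective and key results below are hypothetical, not drawn from any real team), a QA team's OKR might look like the following, with every key result framed around outcomes rather than raw counts:

```python
# A hypothetical QA-team OKR expressed as a simple data structure.
# Note that every key result is outcome-driven, not bug-count-driven.
qa_okr = {
    "objective": "Ship releases our customers can trust",
    "key_results": [
        "Zero critical defects leaked to production this quarter",
        "All high-risk user workflows covered by the smoke suite",
        "Root cause documented for every production incident",
    ],
}

print(qa_okr["objective"])
for key_result in qa_okr["key_results"]:
    print(" -", key_result)
```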
Quality Is Subjective and May Change From Customer to Customer
It's important that we lay down our testing efforts in alignment with these objectives. In fact, this helps drive our decisions about what to fix and what not to. It all comes down to one bottom line: the clearer a picture you have of your stakeholders' views and the mission of their project, the better you can build and prioritize your testing efforts. Analyzing your risk by asking yourself what your customer wants will help you drive quality. This may sound more like a business analyst's role and responsibility, but let's not forget that we as testers need this elementary skill, too. Our testing strategy is driven by these analyses.
So, let's focus on writing a sufficient number of test cases that serve your customer and project objectives rather than on writing a large number of test cases with no real substance. Highlight the high-severity issues rather than just filling your bucket with umpteen minor bugs. Give precedence to the risks that can bring down your customer, not to your evaluation matrix.
Quality was, and will remain, the indisputable winner. Quantifying your testing process won't suffice, but of course a question remains unanswered for all those organizations that, somewhere down the line, need a measurement: how do we then measure quality? The original intent of those metrics was to focus on moving the numbers up or down, or holding them steady, in order to achieve quality. But humans being humans, we take the numbers business ever more seriously, and it ends up driving us, because the numbers have been marked as our growth evaluation. Hence, it is important to remember what drives us as testers and how to build our evaluation matrix. If you are concerned about addressing browser compatibility testing, there is an article that can help you evaluate the cross-browser compatibility matrix for your testing workflow.
So we know, by now, that quality testing is better than quantity testing. The way to make sure you step in the right direction is to recruit the right software testers for your team and to instill the concept of quality testing, not quantity testing, in the software testing team you already have.
How Do You Tell a Quality Tester Apart From the Others?
Having been in the industry for seven years now, and having mentored many budding testing professionals, my whole idea of measuring individuals on a quality basis has always derived from their ability to analyze business requirements, break them down into smaller chunks, and ensure those are built and work as intended.
It has always been the tester's intent that mattered to me rather than the numbers they give me in terms of test cases or bugs. I have always preferred people who ask questions and understand the meaning of priority over people who "just test."
The most common behavior I have observed in so many testers is that they start writing test cases as soon as the story or requirement is allocated to them. They skip the basic foundational step, which is to sit and analyze the requirements mentioned in the story. They forget to question themselves by putting themselves in the shoes of an end user and thinking through the workflows end users are likely to follow, figuring out the impacted areas, and walking through all the validations a user may hit during the flow.
I always insist on making a checklist before I begin writing test cases; this helps ensure proper test coverage. Another important aspect is backtracking, be it of requirements or of a bug that occurred. This ensures requirements are not left out and helps find the root cause of a bug, which in turn reduces the recurrence of such bugs. Good bug reporting and a positive attitude also help in the making of a good tester.
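As a small sketch of what such backtracking can look like (the requirement and test-case IDs here are hypothetical), a simple traceability mapping makes it easy both to flag requirements that were left out and to trace a bug back to the requirement it violates:

```python
# A hypothetical traceability mapping: requirements -> test cases.
coverage = {
    "REQ-101 user login":      ["TC-01", "TC-02"],
    "REQ-102 password reset":  ["TC-03"],
    "REQ-103 session timeout": [],  # no tests yet: a coverage gap
}

bug_to_test = {"BUG-7": "TC-03"}  # the test case that surfaced each bug

# Flag requirements that were left out of testing.
gaps = [req for req, tests in coverage.items() if not tests]
print("Uncovered requirements:", gaps)

def requirement_for(bug_id):
    """Backtrack a bug to the requirement its test case belongs to."""
    test = bug_to_test[bug_id]
    return next(req for req, tests in coverage.items() if test in tests)

print("BUG-7 traces back to:", requirement_for("BUG-7"))
```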
These are some of the quality aspects I usually push through my team. As far as measurement is required, I put these into a skills section and rate individuals against them rather than against those metrics.
Some of the qualities of good testers include:
- Understanding priority and severity from the business perspective
- Ability to dig into the system and think it through
- Following quality processes and, where required, bringing in corrective measures for further improvement
- A quick and constant learner
- Passion for testing
- Good communication skills
- Analytical ability
- Cooperative, working in unison with other team members
There are more skills that make for a highly effective, successful software tester.
Due to this metrics marathon, we as testers tend to pick up some "not required" or "bad" qualities, and trust me, these are very common these days. For instance:
- Performing testing based on assumptions
- Reporting bugs without analysis
- Poor business analysis skills
- Lack of customer insight
- Poor communication skills
- Inability to follow processes
- Fear of rejection of work or thoughts
The key is to weed out these qualities and bring out the positive ones in your team. Here is how I encourage my team to bring out the best of their qualities:
- Conduct seminars on a regular basis to help them become proficient in writing a bug report, and show them how to test better without falling back on assumptions.
- To improve weak communication skills, I insist that they get on internal calls with developers. This boosts their confidence and their understanding of the inbound and outbound process flow of the web application, aiding quality in continuous testing and software testing metrics.
- Fear of rejection often victimizes freshmen and young software testers. This can easily be dealt with through cooperative management. I make sure never to criticize them for their mistakes; instead, I offer suggestions on how they could test the web application or product better. This way, I remove any obstacle that holds my software testers back from finding bugs.
- To keep spirits high, I conduct award or gift ceremonies on a monthly or quarterly basis to recognize phenomenal effort. Doing so helps testers compete with one another in a healthy manner.
- I believe shift-left testing helps boost product quality. Combining shift-left testing with continuous testing can do wonders in terms of time, resources, and money. I encourage young software testers to be part of shift-left testing, too. This helps them understand test scenarios right from the client requirement-gathering phase, and a good understanding of the SRS document helps them stick to quality in terms of software testing metrics. A small sketch of this idea follows the list.
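Here is a minimal sketch of shift-left in practice (the file name, the login stub, and the marker are assumptions for illustration): a tiny pytest smoke suite meant to run on every commit, for example from a CI job, so high-severity breakage surfaces long before the end-of-cycle crunch.

```python
# smoke_test.py: a minimal, hypothetical smoke suite intended to run on
# every commit rather than only at the end of the cycle.
import pytest

pytestmark = pytest.mark.smoke  # tag the whole module as smoke-level

def login(username, password):
    # Stand-in for the real application call; purely illustrative.
    return username == "demo" and password == "secret"

def test_login_happy_path():
    assert login("demo", "secret")

def test_login_rejects_bad_password():
    assert not login("demo", "wrong")

# Run only the smoke tier on each commit:
#   pytest -m smoke smoke_test.py
```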
Don’t Drift Away From Your Objectives!
I have often seen software testers focus too much on raising their bug count and, as a result, drift away from the objective of the functionality they were meant to test. I am sure you have experienced the same, too!
Well, quality test cases are critical, and if we don't stick to our objectives and instead report bugs simply to increase the daily bug-log count, we may end up overshadowing the critical, quality test cases.
Think of it as setting up the right OKRs for the test department. If you are a QA lead or manager responsible for aligning the testing team in release management, it becomes critical to set the right goals for your test department.
You can mark the positive skills discussed above as your primary objectives and measure your team on that basis. This brings improvement and further growth in your team members, which has a direct impact on your project or product. Our OKRs or evaluation matrix should be built to answer questions like:
- What do we want and value?
- What problems do we perceive, and how do we recognize them?
With those clearly defined OKRs, we can deliver a quality product as a team rather than comparing and analyzing valueless numbers (metrics) across team members. That said, the only purpose of collecting those numbers should be to improve your quality objective. Believe it or not, numbers drive people's psychology, so it is important how we frame and use them.
So, let’s not push quantity to drive quality. Quality should be the one and only major aspect of achieving customer satisfaction.
Published at DZone with permission of Sadhvi Singh.