Did We Build The Right Thing? That's What UAT Is About.
Non-technical stakeholders are not testers. The often ceremonial UAT stage cannot replace rigorous testing. Constant stakeholder involvement ensures we're building the right thing.
There are good reasons to give key stakeholders the opportunity to officially sign off on a new software release. We need some formal approval from the people who commissioned it, or at least from their delegates. This last stage prior to release is commonly called the user acceptance test (UAT) and is executed in a dedicated UAT environment. It’s an indispensable stage, but treating it as the final step in testing is problematic for several reasons.
Let me start with a car example. Dealerships are generous with free test drives for the same reason that clothing stores let you try on three different shirts: to let the endowment effect do its dirty work. Wearing (or driving) something makes it feel as if you already own it. The test drive gives you a taste of the look and feel and acts as a catalyst for closing the deal. It’s not about really testing the vehicle; they expect it back unscratched. Toyota doesn’t need their customers taking an active part in their QA process.
Skin in the Game
You can be heavily invested in the success of some venture, but be unqualified to judge the quality of the process and the results. My wife and I are not builders or architects, so when we wanted to have an annex in our back garden for my musical hobbies, we got the professionals in. Since we put up the full 50,000 euros, we had serious skin in the game. As the end users, we commissioned and paid for it. We could not meaningfully test the work of the builders, but the work was going on under our very noses, so it was impossible not to get daily progress updates.
No Single Person Fully Represents All Business Interests
Let's move to software now. In enterprise development, it's unreasonable to expect similar involvement from any single person. The many stakeholder roles are rarely rolled into one individual who perfectly represents ‘the business’. Expect instead that the people doing the acceptance are less involved, less dedicated, and feel less accountable for the success of the release than the team that worked so hard to finish it. Worse, when they are not test professionals (and they usually aren’t), the quality of their judgments must carry less weight, no matter their motivation.
This is because regular end users do not know what rigorous testing means. They will be satisfied when the happy path succeeds. That’s not laziness; it’s just not part of their job description or life’s mission to try and break things. The task may well be an unplanned and unwelcome interruption to their other duties.
At best, their well-intentioned efforts will be exploratory. They do not work from a script or document their findings. Testing should be planned and documented: if you can’t reproduce unexpected behavior, you can’t investigate what caused it. Only a solid process inspires confidence that the thing was built right.
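To make the contrast concrete, here is a minimal sketch of what a planned, reproducible check looks like, using JUnit 5. DiscountCalculator and its 5% bulk-discount rule are assumptions invented for illustration; the point is that the exact input and expected outcome are written down and can be re-run by anyone, on any machine.

```java
// Hypothetical example of a planned, reproducible check. DiscountCalculator
// and the 5% bulk-discount rule are made up for illustration; what matters
// is that the input and expected outcome are scripted and repeatable.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    @Test
    @DisplayName("Order of exactly 100.00 still gets the 5% bulk discount")
    void bulkDiscountAppliesAtThreshold() {
        DiscountCalculator calculator = new DiscountCalculator();

        // The exact input that triggered the surprising behaviour is captured
        // here, so the failure can be reproduced and investigated by anyone.
        BigDecimal total = calculator.applyDiscount(new BigDecimal("100.00"));

        assertEquals(new BigDecimal("95.00"), total);
    }
}
```

Unlike an end user clicking around, a check like this names its expectation, records its input, and fails in exactly the same way every time it is run.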
Automated testing also prevents regressions from creeping in. Every developer knows that new stuff tends to break old stuff if you’re sloppy with modularity and isolation. This fact of life is less obvious to non-technical people, for whom spaghetti is only a pasta dish. When stakeholders evaluate a small incremental release, they will naturally focus on what is new. They expect the old functionality to keep working as it did. They don't know that this is a dangerous assumption, given how everything is connected behind the visible interface.
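That protection is exactly what a standing suite of automated tests provides. As a hedged sketch (InvoiceService and its 21% VAT rule are invented for the example), a test like the following is written when a feature is first accepted and keeps running in every subsequent build, so a later release cannot silently change behaviour that stakeholders no longer look at.

```java
// Hypothetical regression test. It pins down behaviour that was accepted in
// an earlier release, so that later incremental changes cannot silently
// break it. InvoiceService and the 21% VAT rule are assumed for illustration.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class InvoiceServiceRegressionTest {

    @Test
    void vatOnOneHundredEurosIsStillTwentyOne() {
        InvoiceService service = new InvoiceService();

        // Old, already-signed-off behaviour: 21% VAT on a 100.00 net amount.
        // If a new feature accidentally changes the calculation, this fails
        // in the build long before any stakeholder sits down for UAT.
        assertEquals(21.00, service.vatFor(100.00), 0.001);
    }
}
```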
Building the Right Thing
The acceptance session gives key stakeholders an opportunity to sign off on the delivered product. Ideally, this is a formality: the software behaves exactly as expected, with no defects or unpleasant surprises. Well, that didn’t work in the 1960s and it sure doesn’t work now.
Confidence that the right thing is being built should be everyone’s concern, at every stage in the development lifecycle. This evaluation and adaptation loop is an ongoing iterative process starting well before coding. This requires regular stakeholder involvement, because intentions are never perfectly specified, much less flawlessly interpreted into working code. Things always get lost in translation the first time around.
When we’re knee-deep in code, we zero in on doing things right. The entire pyramid of automated tests, along with any exploratory manual and/or usability tests, helps ensure quality: a stable, reliable system. This can blind us to the not-unlikely prospect that nobody wants to use what we built. That’s why we need the business, early and continuously.
Non-technical stakeholders should regularly be invited to play around with the software as it’s being built. But their input is not about quality assurance, and these manual acceptance ‘tests’ have no place in the classical testing pyramid.