Test Automation and User Story Done Criteria
Be sure to keep proper test automation and the user experience at the forefront of a DevOps process.
This is the story of a team that was onboarded onto a project to build a platform. Discovery and Inception of the project had been completed successfully. Product Managers had provided the requirements, which the Product Owner had broken down into Epics, and Business Analysts converted the Epics into Stories in Jira. Multiple Scrum teams were formed. Every Scrum team would hold an Iteration Planning Meeting to finalize the scope for the Sprint, and Dev, QA, and DevOps engineers all started picking up Stories as the Sprint progressed.
Before starting any Story, the Dev, QA, UX Designer, and Product Owner would sit together. All four would go through the Story description in Jira to brainstorm and freeze the acceptance criteria. During this kick-off, Dev and QA would also agree on which test cases the Dev would cover in unit and integration tests and which ones QA would cover as part of the end-to-end tests. While the Dev worked on the Story, QA would prepare the test cases, test data, environment, and automated tests. Once the Dev was done with the Story, Dev, QA, and BA would get together again for a Desk Check to confirm that everything had been implemented as expected. The test cases written by QA were executed during this ceremony itself for a faster feedback loop. If all the test cases passed, Dev and QA would review the unit, integration, and end-to-end tests; any missed verification points were identified and added at the appropriate level of the Test Pyramid. The end-to-end tests, which comprised both API and UI tests, were integrated into the Continuous Integration pipelines and executed regularly against new builds for faster feedback.
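To make the CI part concrete, here is a minimal sketch of the kind of end-to-end API test that can be executed against every build in a pipeline. It uses Python with pytest and requests; the base URL, the /accounts endpoint, and the payload are hypothetical placeholders rather than the project's actual API.

```python
# Minimal end-to-end API check of the kind wired into a CI pipeline.
# BASE_URL, the /accounts endpoint, and the payload are assumed placeholders.
import os
import requests

BASE_URL = os.environ.get("BASE_URL", "https://platform.example.com/api")

def test_created_account_is_readable():
    # Write through the public API, the same way a client would
    payload = {"name": "Acme Corp", "tier": "standard"}
    created = requests.post(f"{BASE_URL}/accounts", json=payload, timeout=10)
    assert created.status_code == 201
    account_id = created.json()["id"]

    # Read it back the way the UI would and verify the round trip
    fetched = requests.get(f"{BASE_URL}/accounts/{account_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["name"] == payload["name"]
```

Because a test like this talks only to public endpoints, it can run against whatever environment the pipeline deploys to, which is what makes frequent execution against fresh builds practical.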
Things worked fine for the first few Sprints. However, the project had a fixed go-live date that could not be moved, and at the current velocity that date was not achievable. This is when the business stakeholders started pushing the engineering team to increase its velocity. The engineering team resisted, but the resistance went nowhere. The team did achieve a higher velocity, but only by compromising on practices and cutting corners.
Devs kept writing unit and integration tests showcasing 80%+ code coverage. But since the Devs were now churning out Stories at a faster pace, and given the Dev-to-QA ratio, QA fell short on time to write new end-to-end tests and keep the existing ones up to date. The engineering team raised this numerous times in Retros and Scrum of Scrums, but in vain. Without a safety net of good-quality unit, integration, and end-to-end suites to prevent regressions, the bug count started climbing. Because of the feature silos created across Scrum teams, changes by one team would break use cases developed by another. Amidst all this, there was constant pressure from the business stakeholders to increase velocity and hit the fixed go-live date. QAs had a hard time: there was no safety net of unit and integration tests, and no time to write and maintain end-to-end tests, so they started spending more and more time executing repeatable manual tests. Over time, repeating those tests manually wore down the QAs' efficiency. This continued until the product went live. The product was live in name only; because of the compromised practices, its quality was very low, and the result was predictable: a very high number of production defects.
The mission impossible of taking the product to production by the go-live date was accomplished, and the business stakeholders had achieved their primary objective. But now it was time to migrate users from the legacy platform to the shiny new one, and because of the high number of defects it was becoming almost impossible to onboard them. After all, a new UI with great performance only adds value as long as it displays correct data. This is when, for the first time, the focus shifted to quality. And by quality, I do not mean just UI and API automation; it involved unit, integration, and performance tests as well.
The product architecture had two halves. A write mechanism fetched data from various data sources and wrote it into databases, and a read mechanism read the data from those databases through APIs and displayed it on the UI. Engineering as a whole had underestimated the criticality of the write side, so little quality emphasis was put there. The majority of the test automation effort went into verifying that the data from the databases was correctly displayed on the UI. But as the number of defects grew, the defect trend told a different story: because incorrect data was landing in the databases in the first place, the APIs returned incorrect responses and invalid data was displayed on the UI. Since multiple aspects of quality had been compromised, namely the unit, integration, and end-to-end tests, a test strategy was needed to mitigate this challenge.
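One way to close that gap is to assert on the write side directly, at the database, before any API or UI is involved. Below is a hedged sketch using an in-memory SQLite database as a stand-in for the real data store; the table, columns, and ingestion helper are assumptions for illustration only.

```python
# Sketch: validate the write mechanism at the database layer, independent of
# the APIs and the UI. SQLite and the accounts table are illustrative stand-ins.
import sqlite3

def ingest(conn, record):
    """Stand-in for the real write mechanism that persists source data."""
    conn.execute(
        "INSERT INTO accounts (external_id, name, tier) VALUES (?, ?, ?)",
        (record["external_id"], record["name"], record["tier"]),
    )

def test_write_mechanism_persists_record_correctly():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (external_id TEXT, name TEXT, tier TEXT)")

    ingest(conn, {"external_id": "SRC-42", "name": "Acme Corp", "tier": "standard"})

    # A write-side defect is flagged here, long before it can surface on the UI
    row = conn.execute(
        "SELECT name, tier FROM accounts WHERE external_id = 'SRC-42'"
    ).fetchone()
    assert row == ("Acme Corp", "standard")
```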
There was already massive technical debt for the Devs to deal with. Low unit and integration test coverage was just one item among many, and getting the technical debt prioritized was another challenge in itself. Depending on the Dev team to add unit and integration tests was therefore a risky affair for QA. So the QA team decided to build a test automation suite that would validate everything from the point where data was injected into the system, through its writing into the databases, and then through the APIs and on the UI. However, there were two schools of thought when it came to the validations and verification points.
One school of thought suggested that the tests validate only the APIs and the UI. The other suggested injecting data into the system through mocked sources and validating that data first in the databases, then through the APIs, and finally on the UI. Under the first approach, the data was never validated in the databases, so a defect in the code that wrote data into the databases would only be flagged by the API and UI tests. That was very late in the game, and the flag was being raised at the wrong point.
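The second school of thought can be sketched as a single flow test that pushes a record in through a mocked source and then asserts layer by layer, so a failure points at the layer that actually broke. The classes below are deliberately simplified stand-ins for the platform's real components, not its actual code.

```python
# Layer-by-layer validation: mocked source -> database -> API (-> UI).
# Database, WriteMechanism, and ReadApi are simplified illustrative stand-ins.
class Database:
    def __init__(self):
        self.rows = {}

class WriteMechanism:
    """Ingests records from an upstream source and writes them to the database."""
    def __init__(self, db):
        self.db = db

    def ingest(self, record):
        self.db.rows[record["external_id"]] = {"name": record["name"]}

class ReadApi:
    """Serves the data the UI displays, straight from the database."""
    def __init__(self, db):
        self.db = db

    def get_account(self, external_id):
        return self.db.rows.get(external_id)

def test_record_flows_from_source_to_api():
    db = Database()
    WriteMechanism(db).ingest({"external_id": "SRC-42", "name": "Acme Corp"})

    # 1. Database layer: the write side persisted the record correctly
    assert db.rows["SRC-42"]["name"] == "Acme Corp"

    # 2. API layer: the read side serves exactly what the database holds
    assert ReadApi(db).get_account("SRC-42")["name"] == "Acme Corp"

    # 3. A UI-level check (e.g. via a browser driver) would assert the same
    #    value on screen, completing the chain described above.
```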
The moral of the story is that in this fast-paced world of Continuous Delivery, where deployments to production happen multiple times a day, it is essential to make test automation part of the Story's done criteria. Without a solid test automation suite, Continuous Delivery is impossible, and automation only gets built consistently when it is part of the done criteria. Also, as a QA, it is very important to write a test suite that flags a failed implementation, and thereby an incorrect use case, at the appropriate point in the system rather than far downstream.