Handling Manual Tests in Serenity BDD
Learn more about handling manual tests in Serenity BDD.
One of the principles of BDD is to have a single source of truth for both the requirements that describe a feature and the automated tests that verify them. And it is a natural step to want to include both automated and manual tests in this single source of truth.
Cucumber is primarily and traditionally used for automating executable specifications. But with Serenity BDD, you can add special tags to indicate that a scenario represents a manual test case.
In Serenity, you can flag any Cucumber scenario as manual simply by using the @manual tag:
@manual
Scenario: Invoice details should be downloadable as a PDF file
The layout and content of the PDF file should be verified
Given Clive has made a purchase
When he reviews his past orders
Then he should be able to download his invoice as a PDF file
This scenario will now appear as a manual test case in the Serenity reports.
Recording Test Results
But just flagging a test as manual is a little limited. You can take this reporting further by including the test result with the @manual-result tag. This tag takes values such as passed, failed, and compromised:
@manual
@manual-result:passed
Scenario: Invoice details should be downloadable as a PDF file
...
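For comparison, the other result values are recorded the same way. A sketch, reusing the same scenario, of how a test that could not be completed reliably might be flagged:

```gherkin
@manual
@manual-result:compromised
Scenario: Invoice details should be downloadable as a PDF file
  ...
```

A compromised result indicates the test could not be properly performed, for example because of an environment issue, rather than a genuine failure.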
These manual test results are indicated in a different shade in the Serenity reports and also appear in the summary table on the home page.
Associating Manual Test Results With Application Versions
Manual test results are generally only valid for a given version of an application: when a new version is built, the manual tests may need to be redone.
Different projects deal with manual testing in different ways. For example, some teams refer to a target release version and test against this version when a new feature or story is ready to test. They do not, for example, feel the need to redo every manual test for each commit. They assess on a case-by-case basis whether any given changes might impact the features that have already been tested.
For example, suppose a team is working towards a release at the end of the 15th sprint of their project. This is recorded in the Serenity properties file using the current.target.version property:
current.target.version = sprint-15
A tester can record which version they tested against directly in the feature file by using the @manual-last-tested tag:
@manual
@manual-result:passed
@manual-last-tested:sprint-15
Scenario: Invoice details should be downloadable as a PDF file
...
If the versions match, the manual test will be reported as passing. But if they do not match, then the test will be reported as pending, as the feature may need retesting for this new version.
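To illustrate the mismatch case, suppose the team later updates the properties file to a new target version (sprint-16 is an assumed example here):

```gherkin
# Assuming serenity.properties now contains:
#   current.target.version = sprint-16
# The sprint-15 result below no longer matches, so this scenario
# is reported as pending rather than passing:
@manual
@manual-result:passed
@manual-last-tested:sprint-15
Scenario: Invoice details should be downloadable as a PDF file
  ...
```

This gives the team a visible reminder of which manual tests need to be redone for the new version.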
Adding Test Evidence
Sometimes, we need to attach screenshots or other files to our manual test reports as additional evidence. Serenity lets you do this with the @manual-test-evidence tag. Simply include a link to an external resource (such as a page on Confluence or SharePoint), or place an image in the src/test/resources/assets folder of your project and use a relative link (starting with "assets").
@manual
@manual-result:passed
@manual-last-tested:sprint-15
@manual-test-evidence:https://some.external/link.png
Scenario: Invoice details should be downloadable as a PDF file
...
The link to your evidence will appear alongside the manual test result.
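Alternatively, for evidence kept in the project itself, the tag can point to a file under src/test/resources/assets using a relative link (the filename below is a hypothetical example):

```gherkin
@manual
@manual-result:passed
@manual-last-tested:sprint-15
@manual-test-evidence:assets/invoice-download.png
Scenario: Invoice details should be downloadable as a PDF file
  ...
```

Keeping evidence in the project repository has the advantage that it is versioned alongside the feature files themselves.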
Learn More
You can read more about recording and reporting manual test cases in Serenity in the Serenity Users Manual.
Happy testing!
Opinions expressed by DZone contributors are their own.