
Reporting Manual Test Results in Serenity BDD

John Ferguson Smart | Mentor | Author | Speaker - Author of 'BDD in Action'.
Helping teams deliver more valuable software sooner
31st December 2018

bdd | cucumber | serenity-bdd

Serenity is primarily designed to report the results of automated acceptance tests. However, some tests still need to be performed manually, and it is useful to include these in the overall test reports to get a broader picture of test coverage.

To make this easier, Serenity with Cucumber provides some support for recording and reporting manual test results.

You can mark a test as a manual test using the @Manual tag, e.g.

@Manual 
Scenario: Monitoring a new low risk customer 
    Given Joe is a new customer 
    And Joe is considered a low risk customer 
    When he is onboarded 
    Then his account should be scheduled for review in 3 months time

This will appear in the reports as a manual test, as shown below.

Figure 1. A manual test reported in Serenity

By default, manual tests are reported as "pending", like the one above. The individual steps will be marked as ignored, as they are just there for documentation purposes.

You can override this status, and mark a test explicitly as a passing or failing test, like this:

@Manual
@Manual:Passed
Scenario: Monitoring a new low risk customer 
    Given Joe is a new customer 
    And Joe is considered a low risk customer 
    When he is onboarded 
    Then his account should be scheduled for review in 3 months time

Or if you want to report that a manual test was unsuccessful:

@Manual
@Manual:Failed
Scenario: Monitoring a new low risk customer 
    Given Joe is a new customer 
    And Joe is considered a low risk customer 
    When he is onboarded 
    Then his account should be scheduled for review in 3 months time

The test will then appear as both a manual and a failing test:

Figure 2. A failing manual test reported in Serenity

If you need to provide more details about the test failure, you can add a note starting with the "Failure:" keyword underneath the scenario title, e.g.

@Manual
Scenario: Monitoring a new low risk customer
Failure: Joe is showing as a high-risk customer

    Given Joe is a new customer 
    And Joe is considered a low risk customer 
    When he is onboarded 
    Then his account should be scheduled for review in 3 months time

This error message will then appear in the report:

Figure 3. A failing manual test including an error message

Manual test results also appear in the overall test reports, where they are represented in a lighter shade of the normal test result colour:

Figure 4. Manual tests appearing in a summary report


Note that you should use this feature with caution, as marking a manual test as passing can be misleading: if your Serenity tests run on a CI server, you cannot safely claim that the manual tests were performed against the exact version built on that server. For this reason, manual test results should be considered indicative, not definitive.

Although manual tests can have steps (like the one above), they are not intended to have step definitions. If Cucumber finds a step definition matching one of the steps, it will execute it, which is probably not what you intend for a test you have marked as manual.
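To see why this is a pitfall, it helps to remember that Cucumber matches steps to step definitions purely by their text, not by the scenario's tags. The following sketch (illustrative Python, not Cucumber's actual implementation; the glue table and step texts are invented for the example) shows the essence of that matching:

```python
import re

# Hypothetical "glue": step expressions mapped to automation code.
# Only one of our manual scenario's steps happens to have a definition.
glue = {
    r"^he is onboarded$": lambda: print("running real onboarding automation!"),
}

def run_step(step_text, tags):
    """Sketch of Cucumber-style matching: a step whose text matches a
    step definition is executed, regardless of custom tags like @Manual."""
    for pattern, action in glue.items():
        if re.match(pattern, step_text):
            action()  # executed even though the scenario is tagged @Manual
            return "executed"
    return "undefined"  # no matching glue: the step is documentation only

print(run_step("he is onboarded", tags={"@Manual"}))       # matched: runs
print(run_step("Joe is a new customer", tags={"@Manual"}))  # no glue: skipped
```

In other words, the safest way to keep a manual scenario purely descriptive is to make sure none of its steps share wording with existing step definitions.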

Associating manual test results with application versions

Manual test results are generally only valid for a given version of an application: when a new version is built, the manual tests may need to be redone.

Different projects deal with manual testing in different ways. For example, some teams work towards a target release version, and test against this version when a new feature or story is ready to test. They do not necessarily redo every manual test for each commit; instead, they assess on a case-by-case basis whether a given change might impact the features that have already been tested.

For example, suppose that a team is working towards a release at the end of the 15th sprint of their project. This is recorded in the Serenity properties file using the current.target.version property:

current.target.version = sprint-15

A tester can record which version was tested directly in the feature file by using the @manual-last-tested tag:

@manual
@manual-result:passed
@manual-last-tested:sprint-15
Scenario: Invoice details should be downloadable as a PDF file
...

If the versions match, the manual test will be reported as passing. But if they do not match, then the test will be reported as pending, as the feature may need retesting for this new version.
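The rule described above can be summarised in a few lines. The following is a hedged sketch (illustrative Python, not Serenity's actual code) of how a reported status could be derived from a scenario's tags and the configured current.target.version:

```python
def manual_status(tags, current_target_version):
    """Derive a reported status for a manual test from its tags.

    Illustrative sketch only: tag names follow the article
    (@manual-result, @manual-last-tested); the parsing is invented.
    """
    # Split "key:value" tags such as "@manual-result:passed" into a dict.
    tag_values = dict(
        tag.lstrip("@").split(":", 1) for tag in tags if ":" in tag
    )
    result = tag_values.get("manual-result", "pending").lower()
    last_tested = tag_values.get("manual-last-tested")
    # A result recorded against an older version is stale: report the
    # test as pending until it is retested against the current version.
    if last_tested is not None and last_tested != current_target_version:
        return "pending"
    return result

tags = ["@manual", "@manual-result:passed", "@manual-last-tested:sprint-15"]
print(manual_status(tags, "sprint-15"))  # versions match: "passed"
print(manual_status(tags, "sprint-16"))  # stale result:   "pending"
```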


Adding test evidence

Sometimes we need to attach screenshots or other files to our manual test reports as additional evidence. Serenity lets you do this with the @manual-test-evidence tag. Simply include a link to an external resource (such as a page on Confluence or SharePoint), or place an image in the src/test/resources/assets folder of your project and use a relative link (starting with "assets").

@manual
@manual-result:passed
@manual-last-tested:sprint-15
@manual-test-evidence:https://some.external/link.png
Scenario: Invoice details should be downloadable as a PDF file
...

The link to your evidence will appear alongside the manual test result:


Learn more

You can read more about recording and reporting manual test cases in Serenity in the Serenity Users Manual. You can also learn the finer points of Serenity BDD and other related topics at the Serenity Dojo.

© 2019 John Ferguson Smart