What are your automated tests really worth?

2nd February 2016

An article by John Ferguson Smart with Antony Marcano and Andy Palmer

Only the blissfully ignorant or the insanely confident would forgo automated testing in a modern development project. But automated tests have a cost, both in development and in maintenance. In this article, we take a look at the economics of automated tests, and investigate what you can do to get more bang for your buck out of your automated test suite.

Automated tests have value

No one would question that an automated test has value. But not all tests are created equal, and it is sometimes worthwhile to step back and question where this value comes from.

They save testers time

The obvious value of an automated test comes from the time saved in manual testing, and the faster feedback on regressions. Automated tests free up manual testing efforts for deeper, more intelligent testing such as exploratory testing.

This is the easiest and most obvious way to measure the immediate value of an automated test suite: how much time (and money) it would cost for manual testers to run the same tests by hand. If, say, a full regression cycle would take a tester two days of manual work and the suite runs with every fortnightly release, the automation saves in the order of fifty tester-days a year.

However, there are other, less tangible ways that automated tests provide real value.

They reduce the fear of change

Automated tests provide much faster feedback when things go wrong. Faster feedback from automated tests (whether run locally or on a build server) makes it easier for developers to ensure that their changes don't break existing work, and reduces the time wasted during integration.

But the real benefit of faster feedback for developers is that they end up less hesitant to make changes (after all, they know that the tests will tell them if they break anything), which in turn leaves more space for innovation and creativity.

Some teams working on very large projects distinguish between tests that exercise core components and features still under active development, and tests that cover stable features which are less likely to change, running the regression tests for the latter less frequently. This gives much faster feedback on the high-risk, more volatile areas of the application, while still providing some level of protection against regressions in the more stable areas.
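One way to put such a split into practice (a sketch under assumed tooling, not a prescription from the authors) is to tag tests by the volatility of the feature they cover, for example with JUnit 5 @Tag annotations, and run the tagged subsets on different schedules. The class names and tag names below are purely illustrative.

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertTrue;

    // Hypothetical test for a feature still under active development:
    // tagged so that it runs on every commit.
    @Tag("volatile")
    class CheckoutDiscountTest {

        @Test
        void appliesLoyaltyDiscountToEligibleOrders() {
            // ...drive the feature under development here...
            assertTrue(true); // placeholder assertion for the sketch
        }
    }

    // Hypothetical regression test for a stable feature: tagged so that the
    // build server can run it nightly rather than on every commit.
    @Tag("stable")
    class LegacyInvoiceFormattingTest {

        @Test
        void formatsInvoiceTotalsAsBefore() {
            assertTrue(true); // placeholder assertion for the sketch
        }
    }

The build can then include or exclude tags per pipeline stage, for instance running only the "volatile" tests on every commit and the full suite nightly, which keeps the fastest feedback focused on the riskiest areas.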

They provide feedback on progress and living documentation

In addition, when automated tests are used in the context of BDD practices, they can also give real-time feedback on progress, and can be used to document what the application does.

The longer the project, the more valuable the tests

Automated tests are designed to be run many times. And since they provide value each time they are run (each run is, in effect, a manual test cycle avoided), the total value they provide keeps growing for as long as the project goes on.

A logical consequence of this is that the earlier you start writing automated tests, the more value they will provide.

Automated tests are not free

Automated tests also have a cost, primarily in the time they take to write and to maintain, but also in other areas. These costs need to be factored in when you decide what tests to automate, and in what order. More importantly, these costs need to be minimised if you want to get the most value out of your automated test suite.

Poor design has a cost

Maintenance costs usually take the form of changes you need to make to a test to cater for application updates. If the test suite is not very well designed, maintenance often also includes changes you need to make to existing tests or test components when you add a new one.

Additionally, if the test framework is not well designed initially, adding new tests can become progressively harder, as adding a new test inevitably involves modifying (and re-testing) components used for existing tests. In extreme cases, this can even outweigh the value provided by the test suite, making the testing efforts unsustainable.

Some applications are easier to test than others, and this too has an impact on the cost of the automated tests. Teams where developers and testers collaborate closely to make the application easily testable find that writing automated tests becomes a great deal easier.

Flaky tests have a cost

But there are other costs. If the tests (or the application) are unreliable or "flaky" (or even if they are merely perceived as such), every failure takes time to troubleshoot in order to determine whether it was caused by a genuine regression or by an issue with the test code. This introduces a costly manual step that wastes developers' and testers' time and eats into the time savings the automated tests are supposed to bring.

Flaky tests also erode confidence in the automated test suite, which leads to more manual re-checking and further reduces the savings the suite should deliver.

Slow feedback has a cost

If the tests are not designed to run quickly, slow feedback also becomes a cost. As the test suite grows, it takes longer to run and therefore longer to provide feedback. The slower the feedback, the less useful it is to developers, and the more time it takes to address the issues the tests raise.

Not all tests are created equal

The calculation of value based on savings in manual testing time is useful and intuitive. However, it measures value only in terms of cost savings, not in terms of added value. We should also consider the value of reduced risk and of increased confidence that the application is fit for purpose.

What's a team to do?

What can we do to ensure that our test suite doesn't end up costing more to maintain than it saves? How can we ensure that our test automation efforts are not wasted in areas that will provide little return on our investment?

Build SOLID foundations

One of the most important aspects of any testing framework, and one that we too often see neglected, is its foundations. The choice of tools is important, as is the choice of appropriate patterns and conventions. Well-written test frameworks follow all of the normal rules of good code design, for example:

  • They respect fundamental design principles such as DRY (“Don’t Repeat Yourself”), SRP (Single Responsibility Principle) and OCP (Open/Closed Principle); a simplified sketch after this list illustrates the first two of these.
  • They unit test non-trivial framework or infrastructure test code. It may sound odd to write tests to test your test code, but it saves a huge amount of time troubleshooting flaky tests in the long run.
  • They are regularly maintained: just like application code, automated test suites benefit from regular refactoring to reduce technical debt and ensure consistency and maintainability.
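As a simplified illustration of the first two bullets (all class names here are hypothetical and not tied to any particular framework), the sketch below extracts duplicated test setup into a single, singly-responsible builder, and unit-tests a small piece of framework code that many other tests depend on.

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    // DRY and SRP: one builder owns the knowledge of what a "valid customer"
    // looks like, so individual tests no longer repeat (and later all have to
    // update) the same setup code.
    class TestCustomers {

        static Customer aValidCustomer() {
            return new Customer("Jane", "Doe", LocalDate.of(1985, 3, 14));
        }

        record Customer(String firstName, String lastName, LocalDate dateOfBirth) {}
    }

    // Non-trivial framework code that many tests rely on deserves unit tests
    // of its own: a bug here would otherwise surface as dozens of "flaky" tests.
    class ExpectedDateParser {

        private static final DateTimeFormatter FORMAT = DateTimeFormatter.ofPattern("d/M/yyyy");

        static LocalDate parse(String expectedDate) {
            return LocalDate.parse(expectedDate, FORMAT);
        }
    }

    class ExpectedDateParserTest {

        @Test
        void parsesDayMonthYearDates() {
            assertEquals(LocalDate.of(2016, 2, 2), ExpectedDateParser.parse("2/2/2016"));
        }
    }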

Automated tests should be designed and implemented with the same level of quality as production code, if not higher.

Know your team

Testing teams are typically made up of a mixture of individuals, with different specialities and varying levels of experience. Some may come from a development background and be well versed in software engineering design practices and patterns; others may come more from a pure QA background, and have an eye for the most important things to check in a particular feature.

Approaches such as the Journey Pattern, for example, are designed to let testers with less development experience build automated tests out of highly reusable components that are written and maintained by more experienced test automation developers. Pair programming when writing automated tests is also a great way to coach less experienced testers and to encourage consistent development practices.
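To give a flavour of that layering, here is a deliberately simplified, hypothetical sketch (it is not the actual Serenity BDD Screenplay API): an experienced automation developer writes and maintains the reusable tasks, and a tester composes them into a test that reads like the user journey it describes.

    import java.util.ArrayList;
    import java.util.List;

    // A minimal, hypothetical Journey-pattern core: actors perform reusable tasks.
    interface Task {
        void performAs(Actor actor);
    }

    class Actor {
        private final String name;
        private final List<String> journal = new ArrayList<>();

        Actor(String name) {
            this.name = name;
        }

        void attemptsTo(Task... tasks) {
            for (Task task : tasks) {
                task.performAs(this);
            }
        }

        void remember(String event) {
            journal.add(name + " " + event);
        }

        List<String> journal() {
            return journal;
        }
    }

    // A reusable task, written and maintained by an experienced automation developer.
    class SearchFor implements Task {
        private final String term;

        private SearchFor(String term) {
            this.term = term;
        }

        static SearchFor theTerm(String term) {
            return new SearchFor(term);
        }

        @Override
        public void performAs(Actor actor) {
            // In a real suite this would drive the UI or an API;
            // here we just record the step to keep the sketch self-contained.
            actor.remember("searched for '" + term + "'");
        }
    }

    // A tester with less automation experience composes tasks into a readable test.
    class SearchJourney {
        public static void main(String[] args) {
            Actor tracy = new Actor("Tracy");
            tracy.attemptsTo(SearchFor.theTerm("BDD in Action"));
            System.out.println(tracy.journal()); // [Tracy searched for 'BDD in Action']
        }
    }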

Focus on the value

It is rarely practical to automate everything. If you need to choose, prioritise the tests that will reduce the most risk and save the most manual testing time.

Conclusion - consider automated tests an investment

Automated tests should be seen as an investment, with the aim of reducing risk and accelerating delivery. To get the most return out of your investment, make sure your test suite is well designed, well implemented and that it focuses on testing high-value features and high-risk areas of the application first.

JOHN FERGUSON SMART is an international speaker, consultant, author and trainer well known in the Agile community for his many books, articles and presentations, particularly in areas such as BDD, TDD, test automation, software craftsmanship and team collaboration. John helps organisations and teams around the world deliver better software sooner through more effective collaboration and communication techniques, and through better technical practices. John is the author of the best-selling "BDD in Action", as well as "Jenkins: The Definitive Guide" and "Java Power Tools", and also leads development on the innovative Serenity BDD test automation library.

© 2019 John Ferguson Smart