Exponential Learning – the key to effective Lean/Agile practices

John Ferguson Smart | Mentor | Author | Speaker - Author of 'BDD in Action'.
Helping teams deliver more valuable software sooner
23rd September 2017

In lean/agile software development, learning is not a byproduct; learning is the most valuable output. To up our game, we don't just need to learn how to build and deliver better software. We need to learn how to learn better. And one of the keys to learning better and faster is measuring how well we are doing.

Lean Manufacturing principles do apply to software. Mostly.

In Lean Manufacturing, companies increase production and reduce variability by studying how to streamline the production process. Although Toyota popularised Lean Manufacturing and ideas such as Kaizen and the Improvement Kata, many of the core ideas are much older: as early as the 16th century, Venice was mass-producing war galleys on a production line with standardised parts.

Lean thinking can also apply to software development. Indeed, many ideas from lean manufacturing carry over directly. Lean emphasises principles such as:

  • Producing what is valuable for the customer using a pull-based approach;
  • Reducing waste;
  • Paying constant attention to quality.

These principles all apply to software development.

But there is one very important difference. Software development is not like a production line. We don't look for ways to rewrite the same class faster and faster each time. Rather, we try to build software that meets the needs of our customers better and better. This applies both to external customers, whose satisfaction with our software often affects their loyalty to our brand, and to internal customers, whose productivity is directly affected by the quality and usefulness of the software we build.

It's the learning, stupid

In lean software development, the main output is not actually software. It is learning.

In other words, while there is certainly value in the working software we deliver to the customer, learning what the customer values is of much greater importance. Any software project is a journey of discovery, where we progressively learn the best way to meet our customer's needs. It is our learning that will enable us to build more valuable features sooner in the future.

Of course we cannot do without technical competence and agility to deliver software quickly and to react to change. Practices such as BDD, Executable Specifications, TDD, Continuous Integration and Delivery, and unrelenting attention to code quality are essential. They give us the speed and agility we need to deliver quickly, and to change course based on our learnings.

But it is even more essential to know what we should be delivering.

Discovering what we don't know

We have known for some time that ignorance (what we don't know) is the real constraint on throughput. Dan North coined the term "Deliberate Discovery" to describe how we need to actively seek out what we don't know.

And techniques such as Impact Mapping do help us map the capabilities we want to support, and the features we want to deliver, to business goals and outcomes. These practices also highlight the assumptions about how a particular feature will help an organisation meet its goals.
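
To give a feel for the shape of such a map, here is a sketch of one as a plain data structure, following the why/who/how/what levels that Impact Mapping uses; the goal, actor and deliverables are invented for the Frequent Flyer example that appears later in this article:

    # A minimal, illustrative impact map. The structure follows Impact Mapping's
    # why / who / how / what levels; the entries themselves are hypothetical.
    impact_map = {
        "goal": "Flight sales revenue from regular customers increases",  # why?
        "actors": [  # who?
            {
                "actor": "Frequent travellers",
                "impacts": [  # how?
                    {
                        "impact": "Choose to fly with us again rather than with a competitor",
                        "deliverables": [  # what?
                            "Frequent Flyer membership signup",
                            "Points earned on every booking",
                        ],
                        # The assumption the map makes visible:
                        "assumption": "Earning points makes repeat bookings more likely",
                    }
                ],
            }
        ],
    }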

But how do we know that the assumptions and hypotheses we make are correct, and that the features we build really do deliver tangible value? Understanding the impact a feature has on our business goals is an essential part of the lean/agile learning process.

If we want to adopt a scientific approach to learning, we need more than just guesses about what our users need. We need evidence.

Measures and metrics make our assumptions real

The answer is to use measures and metrics to help us track how well our features are really performing, and whether they are producing the outcome we thought they would. Measures play a vital role in validating or disproving our hypotheses, and in helping us accelerate our learning.

And in Impact Mapping, we do learn to associate metrics with the business goals we define. But many teams struggle with these metrics. High-level goals can seem intangible or immeasurable, and it can be hard to know how to measure their effect in production. Controlled experiments such as A/B testing work well for small, tactical features when we have access to a large pool of users, but they don't work for all types of features or goals.
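
To see what "a large pool of users" buys us, here is a minimal sketch of how the outcome of an A/B experiment might be judged; the function is my own illustration using a standard two-proportion z-test, not a reference to any particular tool:

    import math

    def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
        """Two-sided p-value for the difference between two conversion rates
        (pooled two-proportion z-test)."""
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (conversions_b / visitors_b - conversions_a / visitors_a) / std_err
        return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

    # The same 2.0% vs 2.5% conversion difference, at two sample sizes:
    print(ab_test_p_value(200, 10_000, 250, 10_000))  # large pool: p ~ 0.02, a real signal
    print(ab_test_p_value(4, 200, 5, 200))            # small pool: p ~ 0.73, just noise

With only 200 users per variant, the experiment simply cannot distinguish the improvement from chance, which is why A/B testing is a poor fit for features with small or slow-moving audiences.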

What’s worse, prioritising the wrong measures can stifle creativity and innovation, demoralise teams, and lead to the exact opposite of the outcomes we want to achieve.

A more systematic approach to finding measures

This is why it is so important to ensure that our goals are measurable, and that our measures are the most appropriate ones.

One approach that I find useful revolves around a simple 6-stage incremental process. This thought process aligns quite well with the initial phases of Impact Mapping, and helps give the team a clear idea of what they are trying to achieve, in very tangible terms.

1) Start with a business goal

We start with a business goal. What are we trying to achieve? This is a core part of the Impact Mapping exercise, but it is critically important to agree on our goals no matter what approach we are taking. It is surprising how many teams have trouble articulating their project goals in terms that everyone can agree upon.

Business goals often start off as proposed solutions. For example, a high-level executive might define a goal by saying "we need a Frequent Flyer programme."

2) Express the goal in terms of measurable impact

We need to make sure that our goal is expressed in measurable terms.

We judge a goal by the real business outcomes it produces, or the measurable impact it has on our users or customers. A good goal should be expressed in terms that are measurable and quantifiable. For example, "Increase customer loyalty" or "Improve our corporate image" would be difficult to quantify. "Introduce a Frequent Flyer Programme" would be better described as an action or initiative than as a business goal, and would be hard to nail down to a single metric.

Used well, techniques like Impact Mapping and Result Mapping (a technique invented by performance measurement guru Stacey Barr) can help us express our business goal in measurable terms.

For example, the measurable impact we expect our Frequent Flyer programme to have might be that customers who have flown with us before choose to fly with us again more frequently. So we could rephrase our goal in a more measurable form as "Flight sales revenue from regular customers increases".

3) Find evidence-based measures

Once we have a measurable goal, we need to decide what a good outcome would look like. How would we know if we have succeeded? What are the most meaningful and relevant measures for this goal?

In this phase, the team works together to determine what tangible outcomes or observable evidence we expect to see when we have succeeded, and to find the best measures to track this evidence.

For our Frequent Flyer programme, we might ask: how do we know that sales for repeat customers have gone up, and that these sales are related to our Frequent Flyer programme? What value can we measure and track to find out? Some potential evidence might be an increase in (or a large number of) Frequent Flyer membership signups or an increase in the number of flights booked by previously-registered customers.

But finding evidence is only the first step. We need to formalise this evidence into something we can measure and learn from. This is a great time to get participation, buy-in and engagement from the whole team - measures are only effective if they are owned by the development team, and not imposed on them.

For our Frequent Flyer programme, we might conclude that it isn't the number or volume of sales that we are interested in (as these can fluctuate seasonally), but the proportion of flights booked by previously-registered customers who have signed up on the Frequent Flyer programme. And to ensure that our programme is actually helping us acquire new customers, we could also measure the seasonally-adjusted flight sales.
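
As a sketch of what the first of these measures might look like in code (the Booking fields and function name are hypothetical; a real implementation would draw its data from the booking system):

    from dataclasses import dataclass

    @dataclass
    class Booking:
        customer_id: str
        is_frequent_flyer: bool  # had the customer joined the programme before booking?

    def frequent_flyer_booking_proportion(bookings):
        """Proportion of flights booked by Frequent Flyer members."""
        if not bookings:
            return 0.0
        member_bookings = sum(1 for b in bookings if b.is_frequent_flyer)
        return member_bookings / len(bookings)

    bookings = [
        Booking("alice", True),
        Booking("bob", False),
        Booking("carol", True),
        Booking("dave", False),
    ]
    print(f"{frequent_flyer_booking_proportion(bookings):.0%}")  # 50%

Because it is a proportion rather than a raw count, this measure is insensitive to seasonal swings in overall sales volume, which is exactly why we preferred it.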

4) Define the measures

Once we know what we want to measure, we can work out the nitty-gritty details: where we get the data, how often we collect it, how we calculate the values, and how we display the results.

We also need to decide what good looks like, in more concrete terms. We need to define some targets. Tom Gilb talks of defining targets in terms of three key values:

  • The Benchmark, which represents where we are now
  • The Break-even point, which is the minimum acceptable value that makes the project worth doing, and
  • The Target, where we want to get to

In the case of our Frequent Flyer programme, we could define targets such as "the proportion of flights booked by frequent-flyer members reaches 10% within 6 months", and "the total seasonally-adjusted ticket sales increase by 5% within 6 months."
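
One way to make these three values concrete is to record them alongside each measure. Here is a minimal sketch; the break-even figure is hypothetical, chosen for illustration:

    from dataclasses import dataclass

    @dataclass
    class MeasureDefinition:
        name: str
        benchmark: float   # where we are now
        break_even: float  # the minimum value that makes the project worth doing
        target: float      # where we want to get to

        def assess(self, current: float) -> str:
            """Summarise a current reading against the three key values."""
            if current >= self.target:
                return "target reached"
            if current >= self.break_even:
                return "past break-even, not yet at target"
            if current > self.benchmark:
                return "improving, but below break-even"
            return "no improvement over the benchmark"

    member_share = MeasureDefinition(
        name="Proportion of flights booked by Frequent Flyer members",
        benchmark=0.0,    # no members exist before the programme launches
        break_even=0.05,  # hypothetical: where the programme pays for itself
        target=0.10,      # the six-month target above
    )
    print(member_share.assess(0.07))  # past break-even, not yet at target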

5) Reporting

Many agile teams use information radiators, where, for example, the current build status is displayed in a prominent place.

Measures should be part of your information radiator. Put the measures and targets on a wall, as well as the current results and trends. Simple visual reminders can really help people remember where to focus their efforts.
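
A dashboard tool will usually do the rendering for you, but even a toy text gauge conveys the idea; a sketch, with an arbitrary layout:

    def radiator_line(name, current, target, width=20):
        """Render a measure's progress towards its target as a text gauge."""
        filled = min(width, round(width * current / target))
        bar = "#" * filled + "-" * (width - filled)
        return f"{name:<45} [{bar}] {current:.1%} of {target:.0%} target"

    print(radiator_line("Flights booked by Frequent Flyer members", 0.07, 0.10))
    # Flights booked by Frequent Flyer members     [##############------] 7.0% of 10% target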

6) Learning

Once you start to collect metrics, you need to discover what you can learn from them.

Interpreting results can be a bit of an art form, especially when they come from real-world production data. On the surface, it seems simple: did you meet your target or not? But this is not a constructive way to look at results. A key thing to remember is that results never indicate failure, only learning. If you do meet a target, celebrate it. If you don't, study the results, try to figure out why, and celebrate the learning.

According to one study done by Microsoft, quoted by Jez Humble, Joanne Molesky and Barry O'Reilly in their book Lean Enterprise, 60-90% of ideas do not actually improve the metric they were intended to improve. Failure is to be expected. Experimentation and learning are what will make you succeed, not guessing correctly the first time round.

This attitude is essential to making measures work for your team. If measures are used to judge team performance, or if the team thinks they are being used that way, they will, consciously or unconsciously, try to game the system or fudge the results, making the measures a waste of time. Performance measurement guru Stacey Barr says that measures should be "a tool in your hand, not a rod on your back".

Conclusion

If we are trying to adopt lean/agile practices, we shouldn't be planning our products in terms of what features we want to build. Rather, we should decide which goals we want to focus on, and what tangible outcomes we want to bring about. Once we have done this, a scientific, experimental approach to product development will give us the evidence to steer our features in the right direction and deliver software that really does make a difference.

Thanks to Jan Molak for reviewing and contributing to this article.


If you are interested in learning more, or in learning how to apply these concepts to your own projects, be sure to check out our new workshop on Accelerated Delivery through Measurable Outcomes.
