Continuous improvement is a term we hear often when referring to Agile teams and Agile product development. It is in the DNA of the Agile movement, and just as present in Lean, through relentless improvement, and in Kanban, with kaizen. A commitment to getting better and better.
Commonly we use metrics to know how well we are performing. We define a target and we check against it. It is that simple. Or it should be. I believe problems arise because many times we do not understand what we are measuring, or we have no strong grounds for assuming what the target should be. We can sometimes drown in a multitude of metrics, believing they will reveal a “complete picture” of how fast we are shipping our product or how productive people are. On top of being a complete waste of time, these metrics usually end up being cumbersome to collect, to render, to understand and to report. Big companies might even have a whole department responsible for trying to make sense of these numbers, and yet they still have lots of projects going over budget and running late, unhappy customers and growing employee churn.
So… what seems to be the problem in this scenario? Vanity metrics, I would say.
But what is a vanity metric?
This term was coined in the world of the Lean Startup mindset, and I like to think that the name vanity gives it away. A vanity metric makes your business look good (somehow) to the world, but it is not actually helping you. Vanity metrics cannot tell you how your business is performing in a way that informs future strategy. In other words, they are, for the most part, useless.
Think YouTube videos. If a content producer has 100 thousand subscribers, she definitely looks powerful. But look again: how many views do her videos actually have? Looking even further, how many of those views actually translate to money, via people watching the ads or clicking to buy her product?
Vanity metrics are often the easy ones to collect, such as the total number of unique users of a product, but they can also be more complicated to obtain, such as time spent on a page (if your product or service is online). Sometimes they are simply data that happens to be available, and because of that availability one collects it first and tries to give it a meaning afterwards.
Think about the users of a product. A lot of people subscribe to test new features and products, or try free or basic versions, never to come back or convert to premium. So the number of customers or users per se might not be the number you are after. How many of those are actually paying customers might be a better one. How many of them are leaving positive reviews, so others can build trust and become clients themselves?
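To make the contrast concrete, here is a toy calculation (all numbers invented for illustration): the raw signup count looks impressive on its own, but the conversion rate is the number you can actually act on.

```python
# A toy illustration of a vanity count vs. an actionable rate.
# All numbers are invented for the example.

total_signups = 100_000   # vanity: looks impressive on its own
paying_customers = 800    # the users who actually converted

conversion_rate = paying_customers / total_signups
print(f"{conversion_rate:.1%}")  # 0.8% -> the number worth acting on
```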
Agile teams are not immune
The examples above were all related to products and services, which are at the heart of any business, Agile or not in its practices. But we should also measure success in team practices, and even team performance. Anything we want to improve, we should be measuring; otherwise there is no baseline for comparison and we are left with guesswork.
Delving into Agile practices, we can surely find examples of measuring the wrong thing. My favorite is Velocity, a measure of work done in Agile software development. It is assumed that by estimating the work to be done in story points (complexity), one can follow the evolution of the metric to understand the distance a team travels to reach the sprint goal. If the team says they can complete 60 story points this sprint, and they have completed 40 story points by mid-sprint, we are left with 20 story points. What does that say? Are we gonna make it?
I don’t believe it says much. There are many articles questioning the validity of this metric as a gauge of planning effectiveness or team performance. I personally find it misleading because:
- We cannot guarantee that the remaining 20 points will be completed by the end of the sprint. Complexity does not equal duration.
- What we are measuring with velocity is how fast or how slowly we are burning the points that represent estimates. In other words, we are measuring the accuracy of our guess, not the actual progress of the work, even though the two can correlate.
- If a team is pressed to show more work, they can inflate those numbers by estimating a much higher complexity for their work items. So come next sprint they will be delivering 80 story points, even if the work delivered is smaller than in past sprints. But hey, we have the appearance of speed. We are faster!
And yet we are not delivering fast enough to hit some market milestones.
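A minimal sketch makes the point. Here velocity is just the sum of completed points, and doubling every estimate doubles the “speed” without changing the work delivered at all (the numbers are hypothetical, of course):

```python
# A minimal sketch of how velocity measures estimate burn, not progress.
# All numbers are hypothetical, for illustration only.

def velocity(completed_points: list[int]) -> int:
    """Sum of story points completed in a sprint."""
    return sum(completed_points)

# Sprint A: five stories, estimated honestly.
sprint_a = [8, 13, 8, 13, 18]    # velocity: 60

# Sprint B: the same five stories, every estimate doubled
# under pressure to "show more work".
sprint_b = [16, 26, 16, 26, 36]  # velocity: 120, identical work delivered

print(velocity(sprint_a))  # 60
print(velocity(sprint_b))  # 120 -> "faster", yet nothing changed
```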
Another metric that can be misleading in the software development world is the number of bugs. Either the development team or a QA team captures the number of non-conforming items and logs them as bugs, items to be fixed. That usually happens while developing the product. Teams can read that metric as proof they are improving the quality of their product when they log fewer bugs per sprint. Or they could just be filing fewer bugs through the official channel. Another team might decide they are improving when they fix more bugs per sprint, instead of letting them wait in a prioritized backlog. Or are they splitting bugs into smaller issues that are solved separately, so that we actually cannot know much about our quality?
We can measure anything, and measuring is an important step in assessing improvement and progress. But when metrics are used for micromanagement or for imposing any sort of pressure on people, be it positive or negative, they will game the system and either invent good-looking metrics or tweak the numbers of existing ones.
Metrics have to make sense first. We must start with a goal.
Look instead for Actionable metrics
Once again I find that the name helps understanding. Actionable metrics are those that help us take action. Is the product successful? How are we faring on quality? Are we improving as a team? Those are the aspirational questions we ultimately want answered, but without a concrete relationship to our product, our teams and our methods of development, we are basically blind.
If we are interested in the quality of a product, we could start from the end result and work backwards. As a very simple example, suppose we define quality as something perceived by the clients. Defects are then mismatches between what clients expect and what is being delivered. One can then consider that escaped defects tell a more compelling story than the bugs found during the development cycle. Those bugs in development should definitely be fixed, and should probably not exist in the first place. But ultimately, what tells the quality of our product is how the clients are reacting. We would want fewer, or even zero, defects. One could argue that from the eyes of the customer, zero defects means they are either not seeing them or do not care enough to file a defect ticket. The answer would be that this is not a problem. Zero defects tells us that, as per our definition, we achieved the minimum quality to satisfy the client. We can then decide to concentrate our efforts on delight (and hopefully we have metrics for that as well) or something else.
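As a rough illustration of what that could look like in practice (the data shape here is made up, not tied to any particular issue tracker), an escaped-defect rate is simply the share of logged defects that reached the client:

```python
# A rough sketch of an escaped-defect rate. The data shape is
# hypothetical, not tied to any particular issue tracker.

from dataclasses import dataclass

@dataclass
class Defect:
    found_in_production: bool  # True if a client hit it after release

def escaped_defect_rate(defects: list[Defect]) -> float:
    """Share of all logged defects that reached the client."""
    if not defects:
        return 0.0
    escaped = sum(d.found_in_production for d in defects)
    return escaped / len(defects)

bugs = [Defect(False), Defect(False), Defect(True), Defect(False)]
print(f"{escaped_defect_rate(bugs):.0%}")  # 25%
```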
This was just one possible example and outcome for quality.
If we go back to the example of our team and try to understand our productivity, we could look at some factual element of our delivery instead of the guessing world of our estimates. We could look at our throughput instead of our velocity. Now, just as with quality, we need to define what the metric is: what is the granularity of the element we consider “done” or “delivered”? Are we talking user stories done? Components delivered? Tickets closed in the system? Whatever the definition, we also need to understand the objective of that metric, and observe it and manage it. That means that the definition behind the granularity of the work done, for the throughput to be valid, needs to imply an impact on a business outcome, just like our quality metric did. It is not a solitary exercise of a team detached from the reality of their business. The team can break down work in any way they want. The measured items, however, should come from a breakdown that abstracts the team away and clearly demonstrates when business outcomes are achieved. That is where we set our eyes for throughput.
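A minimal sketch under those assumptions (the item names and data shape are invented): count, per sprint, only the completed items that map to a business outcome, regardless of how the team split the work internally.

```python
# A minimal throughput sketch: count only completed items whose
# delivery maps to a business outcome. Data is invented for illustration.

from collections import Counter

# (sprint, item, maps_to_business_outcome)
completed = [
    ("S1", "login flow", True),
    ("S1", "refactor auth module", False),  # internal breakdown, not counted
    ("S2", "checkout page", True),
    ("S2", "password reset", True),
]

throughput = Counter(sprint for sprint, _, outcome in completed if outcome)
print(throughput)  # Counter({'S2': 2, 'S1': 1})
```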
That is also just one possible metric to try to understand a team’s performance.
The role of the coach in all this
In an upcoming article I will be sharing some of my favorite metrics and how I use them. For now, I would like to leave you with a few words on how, as coaches, we can help teams and companies improve through the exercise of measuring. As a coach, I believe we should teach our teams to:
1 – Insist on the pursuit of improvement
The first step is probably coaching how improvement can be made explicit and measured, and why it should be pursued in the first place. Everyone reacts differently to this call: different values, different work experiences, including traumas and micromanagement. Therefore, as a coach, we need to build trust and collect information that can help build the case for improvement beyond the generic notion of the “inspect and adapt” cycle. We have to appeal to something in the team. We need to know what motivates them in order to tell the story they will listen to.
2 – Search for factual improvement
From experience, I would say most of us can probably help our team with a few pre-cooked metrics, such as the ones we read in articles on the internet (wink-wink), and the most important aspect will be selecting just a few, very straightforward ones. It is easy to get lost in the complexity of what we can measure, and an inexperienced team will benefit from some teaching on how to select or build metrics.
The coach will be constantly helping the team to anchor metrics in something they can derive from their daily work and tools, something they can draw clear and factual correlations from.
3 – Be creative, open-minded and learn
Even though we can use a few tried and tested favorite metrics, it is important as a coach not only to teach but, most importantly, to remember that new metrics, different metrics, can and should be invented and customized to the questions a client or a team is looking to answer. It is part of accepting the diversity of backgrounds in a team and understanding the uniqueness of a product.
It all starts with a question, a goal. Therefore, the actual metrics and methodology can vary widely.
4 – Challenge the interpretations
Numbers don’t lie, because they do not speak. They do not tell a story either. It is through interpretation that we derive conclusions and storytelling, not the other way around. Teaching by challenging what the numbers might be revealing is an important part of building metrics-awareness with our team. Regularly revisiting the metrics as a collective, putting together the knowledge of events, issues and conditions that may or may not be affecting those numbers, and getting used to thinking “if that is true, then what do we do next?” is a key part of the coaching work. So is understanding that there are no easy answers, and that a similar number for a given metric might mean different things on a different day. Imagine displaying the same throughput as the past sprint with only half of the team present. That should be an object of discussion. Metrics do not require analysis only when they seem to deviate. They require ongoing management.
5 – Retire metrics
A final insight I would share is being aware that metrics expire, and teaching that to our teams. Metrics will eventually exhaust their interpretations, and that can happen when we have major changes in our company or simply through natural performance progression. Some metrics that we use for telling basic stories, such as our productivity via throughput, will eventually evolve into lead and cycle time conversations, into sources of delay, and even compound into more complex metrics. That is normal. That is expected. Let us make sure we get that final lesson in with our teams.
Metrics are a fascinating subject, a powerful yet simple tool, and with an open mind and a willingness to try and learn, we can all master their language.