
Continuous Discussions – Metrics That Matter

On January 26th I participated in an online panel on the subject of Measuring DevOps, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps. Watch a recording of the panel:

Continuous Discussions is a community initiative by Electric Cloud, which powers Continuous Delivery at businesses like SpaceX, Cisco, GE and E*TRADE by automating their build, test and deployment processes.

I felt that the atmosphere in the webinar was nice and friendly. The other panelists were knowledgeable and nice to talk with, and that goes for the hosts too, who facilitated the discussion in a very professional way.

At the end of the episode we were asked what the 3 most important metrics were. Being the first of the panelists to be asked, I found it a little difficult to structure my thoughts and provide a good answer. It is a big question and something that I think about quite a lot. So I was only able to come up with 2, and I focused on measures that I feel are important from an organizational point of view. My 2 were:

1) Lead time. The time it takes from when an idea/feature is first thought of until an implementation is released to customers (see the small sketch right after this list)
2) Customer satisfaction. How happy customers are with the services/products that are offered to them
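
To make the lead time measure concrete, here is a minimal sketch (my own illustration, with made-up dates) of how it could be computed for a single feature:

```python
from datetime import date

# Hypothetical dates for one feature: when the idea was first raised,
# and when the implementation was released to customers.
idea_raised = date(2016, 1, 4)
released_to_customers = date(2016, 2, 15)

# Lead time covers the whole span from idea to release,
# including all the time the feature spends waiting in queues.
lead_time = released_to_customers - idea_raised
print(f"Lead time: {lead_time.days} days")  # -> Lead time: 42 days
```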

Measure What Really Matters

If I was given the question again I would definitely have mentioned flow efficiency as one of the three measures. As Donald Reinertsen says, organisations pay far too little attention to queues; they are actually blind to this problem:

“To understand the economic cost of queues, product developers must be able to answer two questions. First, how big are our queues? Today, only 2 percent of product developers measure queues. Second, what is the cost of these queues? To answer this second question, we must determine how queue size translates into delay cost, which requires knowing the cost of delay. Today, only 15 percent of product developers know the cost of delay. Since few companies can answer both questions, it should be no surprise that queues are managed poorly today.”

It is not unusual that a feature spends 95% of its time in the system waiting in queues. Flow efficiency (the share of the time a feature is actually being worked on – value-adding activity) is often no better than around 5-8%. So there is a tremendous potential for improvement in this area. And this is where Continuous Delivery and DevOps excel, especially when used together with kanban. When the focus is on removing silos, limiting work in progress and having truly cross-functional teams with full control of the value stream from idea to production, teams and organisations can make significant improvements in flow efficiency.
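
As a back-of-the-envelope illustration (my own numbers, not from the episode), flow efficiency is simply the value-adding time divided by the total lead time:

```python
# Hypothetical feature: 40 working days of lead time,
# of which only 2 days are actual, value-adding work.
lead_time_days = 40
value_adding_days = 2

# Flow efficiency = value-adding time / total lead time.
flow_efficiency = value_adding_days / lead_time_days
print(f"Flow efficiency: {flow_efficiency:.0%}")  # -> Flow efficiency: 5%
# The remaining 95% is time spent waiting in queues.
```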

Finding the measures that matter can be a difficult task, one that many organizations struggle with. Douglas Hubbard, a world-renowned expert in the field of measurement, offers some great advice on how to find what to measure in his book “How to Measure Anything”. Here is his clarification chain:

1) If it matters at all, it can be detected
2) If it is detectable, it can be detected as an amount (or a range of possible amounts)
3) If it can be detected as a range of possible amounts, it can be measured

Be careful what you measure – you may end up getting it

The episode started off by discussing the danger of measuring the wrong things, which is quite easy to do in any system. Oftentimes we go for the stuff that is easy to measure. Lines of code written (I hope no one does that anymore), velocity and test coverage are examples of measuring the wrong things, IMO.

We then went on to talk about which measures matter for different types of people in the organization. Chris started off by challenging the fact that the episode was divided into talking about measures for different types of roles, pointing out the problem with siloed thinking. I totally agree that tearing down silos is extremely important. I shared our story of how we conducted an experiment where we put all roles (devs, testers, ops, business, ux) in one team, to prove how much more effective software development can be if we tear down silos.

DevOps and Continuous Delivery are redefining how we work together, and thus they also redefine our jobs. Quite a few big organisations work with big common release cycles. Big common release cycles became a trend a few years into the new millennium, when Enterprise Service Buses became popular in far too many enterprises. With big releases came the need for co-ordination, and the release manager became a role in these organisations. With a focus on smaller releases, done by the teams themselves, the need for this type of role goes away. In the new knowledge economy work is changing. That doesn’t mean it goes away, but it will definitely be different 🙂 So that’s something we work quite a lot on 🙂

Below are a few insights from my contribution to the panel:

Dev: What Metrics Matter?

“I must say I’m pretty much aligned with what you already have been talking about. I think that there is a lot of measuring going on, and I think a lot of times we measure the wrong things, and that is not good.

I think if we are to be concerned about one metric, it is how fast we can get feedback from our customers. That should be the main focus – how fast can we get feedback from actual users? Delivering a set of features to them so that we can learn from that, instead of measuring stuff like lines of code.

Another one I’ve seen is test coverage: ‘100% test coverage’, that’s one I really hate, because what you then get is just people writing tests for the sake of writing tests, which again gives you a problem with efficiency later on, because you need to maintain all these things. So I really think that we need to look at this from a customer’s point of view.

“You definitely need to build quality into your product, there are just so many ways of doing that, instead of just focusing blindly on having 100% test coverage – and I have actually seen that demand in a big government project and it failed miserably. Code coverage alone doesn’t tell you anything, especially if you write the tests after you write the code.”

Ops: What Metrics Matter?

“I want to start by repeating what Chris said in the beginning, which is the whole silo thing. We started our movement towards cross-functional autonomous teams by running an experiment, and one thing we wanted to test there was – can we have business, developers, ops and testers function as one unit, together? So we built a DevOps culture into that team, where the team could, quoting Werner Vogels from Amazon, “You build it, you run it”. That was the kind of culture we wanted to have in that team – and we didn’t hire any testers, so we said to the developers, ‘you’re responsible for testing your own code’, which worked great. And really the reason was that we found not one good front-end developer, we found two really good front-end developers; we couldn’t decide which one to choose so we chose both, which meant we had no money for testers… which was kind of serendipitous because it really worked well for us.

“We have had some quality issues, of course, but for one year, while running this team, we ran the entire web shop for a telecom operator, single-handedly, without even involving ops people. We talked to them, and collaborated with them, and that’s actually the way we want it to be: we want to collaborate with the experts in ops, but we want to run our application ourselves. That being said, I know that they have some measures or things that they find important in ops, and really it’s about operational stability – how stable are our systems. Other things that matter to them are stuff like the response time for a service, and the top leader comes in and pokes them if the services don’t respond very well, and of course resolution time is also important.

“A problem that I see, and which Gene brings up in ‘The Phoenix Project’, is all the firefighting that we do in many organizations, and I think a key metric should be to reduce the number of incidents that we have, so that we can actually spend time on being proactive instead of firefighting and being reactive. I definitely think that this should be a key goal, because what I see is that ops people are really, really busy people, and they’re doing a great job keeping systems alive, but that’s actually making it more difficult to prevent such errors from happening in the first place.”

Release Manager: What Metrics Matter?

“I work for a client where there is a Release Manager, but here’s the thing with DevOps: it’s kind of redefining our jobs, and I’m actually working together with the current Release Manager, trying to redefine her role and get rid of the need for having a Release Manager, so releases will actually be done from each team, with the devs pushing new features and bug fixes to production themselves, so the need for a Release Manager really goes away. So then we have to find a new kind of work for her; that’s one thing we’re striving for. We’ve gone from a pretty ‘water-scrum-fall-ish’ kind of release process with releases every month, with toll gates, etc., and for the past 6-7 months we’ve been chopping up that release into small bits and pieces, and now we are at a point where there’s not much left of what we call the old release process. Because what we saw was that we actually had more errors by releasing software the old way, which for some people is kind of a contradiction. You would think that if you release less often you would have fewer incidents, but what happens is that when you release software seldom you get a lot of stuff into production at the same time, so if something goes wrong, which of course it will, you have a hard time tracking down where the bug is. Instead we go for the small independent releases, releasing just a few things at a time, having full control over the environment.

“That’s what the role is becoming: it’s more about giving advice, defining what needs to be in place to deliver software on a more frequent basis. You need to have monitoring in place, you need to have a certain degree of automated tests, you need to have the ability to do zero-downtime deployments. That’s part of that new role, where you move into more of an advisory position instead of being the one saying ‘No, you’re not allowed to go into production’. We want to move the responsibility into the team, so if you screw up then you have to face it yourself.

“This is what makes it so hard for the big companies, which have all these apps tightly coupled together because they have been released together. If components are released together they also grow together; they become tightly coupled automatically. One way to remove coupling is actually to have them released on their own, but this is really hard work, and this is a problem because management becomes impatient and looks for a solution immediately, while you know as an engineer that this is really, really hard work and you just need to do it piece by piece.”

What Other Metrics Matter?

“My number one here would be ‘Lead Time’, the time from when you start to develop a feature until you can actually get it out into production. I also think measuring customer satisfaction could be useful – not very DevOps-ish, but I think focusing on building the right thing instead of building the thing right is far more important.”
