Jim Highsmith recently posited that "velocity is killing agility!"--which is kind of a fun hypothesis. Jim observes that the company leaders he talks with around the world these days are a little too quick to measure the effectiveness of their agile software development teams by tracking the teams' velocity (the average amount of estimated effort a team delivers per delivery iteration).
This is quite ironic, of course, since one of the rudimentary things you learn when you first study agile software development is that "velocity" is measured in abstract terms like "points" or "ideal hours." The numbers are relative only to each other; mapping points to time is a fluid process, and the mapping is valid for only one team at a time. The idea of tracking velocity as an absolute measure is so absurd, in fact, that there is an entire website devoted to the concept of estimation using fruit (a cherry for a very small amount of effort, a watermelon for a large one). If each team chooses a different theme (fruit, mammals, buildings in Chicago, planets), one can see even more clearly that trying to compare one team to another is a recipe for disaster.
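To make the incomparability concrete, here is a toy sketch of two teams estimating the same backlog in their own relative units. All of the team names, tasks, and estimates are hypothetical; the point is simply that neither team's "velocity" says anything about the other's:

```python
# Two teams estimate the same backlog in their own relative units.
# All names and numbers here are made up for illustration.
backlog = ["login page", "search", "monthly report"]

# Team A thinks in story points; Team B estimates in fruit.
team_a = {"login page": 2, "search": 5, "monthly report": 8}
team_b = {"login page": "cherry", "search": "apple", "monthly report": "watermelon"}

# Each "velocity" is meaningful only against that team's own history.
# There is no exchange rate between points and fruit, so comparing
# "15 points" to "a cherry, an apple, and a watermelon" compares nothing.
print("Team A delivered:", sum(team_a.values()), "points")
print("Team B delivered:", ", ".join(team_b.values()))
```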
[Image: Nigel Tufnel explaining that "these go to eleven." Source: http://startofanadultlife.tumblr.com/post/6847194092/nigel-tufnel-the-numbers-all-go-to-eleven-look]
But of course these executives aren't always being quite so blunder-headed with their metrics as to compare one team to another. Instead, as Jim describes it, they:
- Try to get teams to deliver faster and faster--if you delivered 5 points in this iteration, try for 6 next time. Or if your average team velocity was 5 in this project, keep the team together and try to get up to an average of 8 in the next.
- Evaluate effort solely in terms of the software's value at first release to the market--if you measure your effectiveness by "how quickly you get to feature-complete," you quickly lose track of important things like "how quickly can you change" with the market, "how much does it cost" to maintain the thing, and even "how delighted are customers to use the application."
- Lose sight altogether of the actual business bottom line. In real life, software developers are being paid to deliver value to their home businesses, whether measured in increased revenue, decreased cost, increased customer satisfaction, decreased risk of government regulation non-compliance, increased greenness, or anything else in the host organization's balanced scorecard.
But what is a leader to do, if she wants to measure the productivity of her IT staff? You need to figure out who to promote and who to put in the basement without a stapler, after all. I would recommend the following, taking Jim's points in reverse order:
- Measure the value generated by software investments. This is not new--it's Jim's point in his velocity blog post, and in it he also cites Martin Fowler's 2003 bliki post, "CannotMeasureProductivity," on this exact point. At the end of the day, a business is in business to create value, not to ensure its developers are working at an abstract maximum capacity. If Team A's software generated $10 million in value over two years, and Team B's software generated $1 million, and both efforts cost the same, then it would be worthwhile asking some questions about why one effort was so much more valuable to the business than the other. You may get a range of interesting answers, some having to do with how well the team is working, and some having to do with the overall value of the concept they were delivering.
- Evaluate your return on investment over the life of the product, not just one quarter. In IT, just as in other investments, leaders often think too much in the immediate term. Certainly, it's a good idea to rush to market with software when you can get a first-mover advantage. Even in this case, however, your IT investment should be made with an eye to the long term. What will you do after you have jumped into the market with the first offering of this kind? Have you positioned yourself to maintain your lead, because you have what Jim calls a "supple delivery engine"? Or have you worn yourself out, and put an idea on the market that can easily be copied by others? Will a competitor with a robust software delivery methodology quickly leapfrog you and pull ahead? Unless you're on the brink of insolvency, you need to look at the expected return on investment for the life of the product in the marketplace, not just your profit and loss over this budget year. The first sketch after this list works through a toy version of this comparison.
- Balance each software-development-specific metric with a complementary one. You may have good reasons to measure the amount of software a team is developing. Perhaps you have concerns about specific individuals who you feel aren't carrying their weight. Never say never, I say. But if you are going to measure velocity, then make sure you measure other things as well, including quality (fitness of the software for use, and actual failures of the software to work as intended) and complexity (is the software impossible to change because it's so poorly written?). These three metrics balance each other out to some degree, and teams should hit minimum standards in all three dimensions, not just one. If you ask only for speed, that's all you're going to get, and you won't like it. The second sketch after this list shows one way such minimum bars might be expressed.
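To see why lifetime return beats a single budget year, here is a minimal sketch of a net-present-value comparison. The cash flows, discount rate, and product lifetimes are entirely hypothetical illustrations, not figures from Jim's post:

```python
# A toy comparison: return over the life of the product vs. year one.
# All cash flows (year 0 first, in dollars) and the discount rate
# are hypothetical.

def npv(cash_flows, discount_rate):
    """Net present value of a series of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# "Rushed" product: big first-year revenue, then competitors copy it.
rushed = [-1_000_000, 2_000_000, 500_000, 0, 0]

# "Supple" product: slower start, but the team keeps changing it with
# the market, so revenue holds up over the product's life.
supple = [-1_000_000, 1_000_000, 1_200_000, 1_400_000, 1_400_000]

rate = 0.10
print(f"Rushed, lifetime NPV: ${npv(rushed, rate):,.0f}")
print(f"Supple, lifetime NPV: ${npv(supple, rate):,.0f}")
```

Judged on first-year revenue alone, the rushed product looks better; judged over the products' lives, the supple delivery engine wins comfortably.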
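And for the last point, here is one way the three balancing metrics might be expressed together. The thresholds, field names, and the idea of a simple pass/fail bar are my own illustrative assumptions, not anything Jim prescribes:

```python
# A sketch of balancing velocity against quality and complexity.
# Thresholds and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class IterationMetrics:
    velocity: float         # points delivered (relative to this team only)
    defect_rate: float      # defects found per delivered point
    avg_complexity: float   # e.g., mean cyclomatic complexity of changed code

def meets_minimums(m: IterationMetrics) -> bool:
    """A team must clear minimum bars on all three dimensions, not just speed."""
    return (m.velocity >= 5.0
            and m.defect_rate <= 0.5
            and m.avg_complexity <= 10.0)

fast_but_sloppy = IterationMetrics(velocity=12, defect_rate=2.0, avg_complexity=25)
steady = IterationMetrics(velocity=6, defect_rate=0.2, avg_complexity=7)

print(meets_minimums(fast_but_sloppy))  # False: speed alone isn't enough
print(meets_minimums(steady))           # True
```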