
Wednesday, March 28, 2012

Tech-Only Agile Goes Commando--And Not in a Good Way

Pop culture aficionados will be familiar with the South Park "Underpants Gnomes," who roam through people's homes stealing underwear in the night. Their business plan is classic and simple:

From http://www.queuefull.net/~bensons/2009/01/12/reflection-on-the-underpants-gnomes-master-plan/
Everyone likes a plan with three steps.

I have been thinking a lot about the Underpants Gnomes lately, and here's why.  If you invert the first two phases, you get a pretty good model for tech-only agile:
  • ???
  • Do Agile Technical Practices (in place of "Collect underpants")
  • Profit
It has the reassuring three steps, but something important is missing.

Let's talk about the company that attempts to roll out agile with a primary focus on the technical practices.  That would be...almost the whole world.  Look at the Agile Manifesto--it values "working software," not "VALUABLE working software," although the v-word does come in at the top of the list of "principles" that accompany the manifesto itself.

The IT department generally acts first within a company to bring in agile trainers and concepts, and then everyone works together to bring the business people along as well.  But if the business can't play ball, then so be it.  Off we go to build a burn-down chart.

In tech-only agile, the business is Tonto to IT's Lone Ranger.  Business is Arthur to IT's "The Tick."  Business is the sidecar to the Agile IT motorcycle.

From http://www.pashnit.com/bikes/sidecar.htm
It makes sense from a job market perspective.  While "agile" is magical on an IT resume, it is not in the top ten skill sets which recruiters expect to see in a successful product manager's vita (see this general job site, chosen at random, as an example).

"It's all good," we like to say, but it really isn't.  All too many tech-side agile team members (and worse--all too many business people) envision the agile project as something that starts when development starts.  "Wait," someone says, "we need a backlog!  Yes!  A backlog!"  And someone quickly puts something together, and off we go!  We're agile!  Faster than a speeding bullet, we've
  • pair-programmed 
  • some unit tests accompanying (and preceding development of) 
  • modules of no more than 100 lines apiece, 
  • which will be refactored along the way and 
  • can be unit, functionally, and scenario tested automatically 
  • at check-in 
  • with a build process that takes less than ten minutes, 
  • pipelined through to some "investigative testing" pre-production environment 
  • for manual exploratory evaluation, 
  • plunged automatically into production, and 
  • later made available through feature toggles.  
Except nobody flips the toggles.  Nobody knows exactly what they do.
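For anyone who hasn't met them, a feature toggle is just a switch in configuration that decides whether code that has already shipped actually runs for users.  Here is a minimal sketch in Python--the toggle names and the dictionary-as-config are invented for illustration, not taken from any particular project:

```python
# Minimal feature-toggle sketch. The toggle names and the dict-as-config are
# hypothetical; real projects usually read these from a config file or service.
FEATURE_TOGGLES = {
    "new_checkout_flow": False,  # deployed to production, but dark
    "beta_reporting": False,     # nobody has ever turned this one on
}

def is_enabled(feature_name: str) -> bool:
    """Return True if the named feature has been switched on."""
    return FEATURE_TOGGLES.get(feature_name, False)

if is_enabled("new_checkout_flow"):
    print("Showing the new checkout flow")
else:
    print("Showing the old checkout flow")  # the code shipped; the value didn't
```

The mechanism is trivial.  Deciding who flips which toggle, and when, is exactly the business conversation that tech-only agile skips.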
Would this happen in a lean startup?  Never.  Would it happen in a giant corporate enterprise attempting an "agile transformation"?  You bet it would.
  • The business didn't want to spring for an in-room "product owner" and the "subject matter experts" were unavailable to the team, so the team had to make a bunch of guesses.  
  • The co-located team seated at state-of-the-art pairing stations arranged in a well-ventilated team room knew from the second hour of discussions that what the sponsors wanted from the software would cost ten times what was budgeted, but management told the team to keep going to avoid people yelling at them.  "We can always cancel it later," they said.
  • These are the same people who complained that the team designed a Cadillac on a Yugo budget.
No-tech agile, of course, is just as scary as tech-only agile.  Perhaps I will need to write a companion piece.  But to me the heart of agile is building the relationship between IT people and business partners to allow mutual exploration of how IT can strategically empower the business.  The most important part of the project isn't the part where the programmers show up--it's the part where the project is actually funded.  It's a more or less sacred goal of the team to make good on that investment.  The sequence is what is important here.  Pay attention to what you're doing and why, THEN embrace the right technical practices to get you there in the best way.

Tech-only agile is the emperor with no clothes.  Or at least no underpants.

Friday, March 16, 2012

Lead and Lag Measures for Agile Transformation

"Oh for goodness sake, you put it in upside down!"
"I'm sorry, Secret. I thought the pointy end went in first."
-Secret Squirrel and Morocco Mole, Secret Squirrel

I'm always excited to learn something new, and this week a colleague introduced me to the "Lead/Lag" concept of measuring the performance of a change program such as an agile transformation.  He also introduced me to the "Secret Squirrel (and Morocco Mole)" Hanna-Barbera cartoon series from the 1960s, which briefly seemed to be a more interesting thing to discuss, but I'm pretty sure you guys should all pursue that on your own without further commentary from me.  We agilists are a fun bunch.


From Wikipedia
Seriously, though, this Lead/Lag thing gets you exactly where you want to be as you design your agile transformation, and it keeps you from drowning in a pool of agile purism. "Is not Scrum/Is so Scrum" is not the discussion you want to be having for very long, especially when accompanied by ineffectual slapping. You want to be looking at "Is not valuable/Is so valuable."  And yet if you solely focus on "what is valuable for the business," what makes you different as an agilist from anyone else?

This is a serious question and I will conjecture that some of us in the agile/lean community focus so much on "pure" agile behaviors because we are worried we will lose our identity if we throw ourselves whole-heartedly into a process of self-improvement which deviates from some well-known agile or lean script.  Witness the large-scale prejudice of the agile community against the PMI and the BABOK.   We've got excellent pioneers working to put PMI/BABOK/CMM insights to work in agile shops, but they need to be very brave and very impervious to sarcasm.

Anyway, enter the Lead/Lag indicators, perhaps in a Secret Squirrel flying saucer. The lead/lag concept, as most of you probably already know, is one where you purposely track two sets of measures over time with the expectation that they will correlate. For an agile transformation, the "lead" measures would be things like:
  • Do you have teams that develop iteratively?
  • Is there a card wall?  and even
  • Would Jeff Sutherland certify this team as one following "pure Scrum"?
You can empirically measure how many teams you have that do these things over time, and how well those teams are doing your local form of agile "correctly," as specified for your company, and this is certainly something you want to do if you are heading up an agile transformation program in an enterprise environment.

The "lag" measures, however, are the ones which will motivate your stakeholders.  They will be things like :
  • Increased speed to market
  • Higher code quality
  • Better ROI on dollars invested in IT projects
As you design your program, what you want to do is motivate your stakeholders using the lag measures, and build a case as soon as possible for a correlation between what you measure around the penetration of your chosen agile practices into a firm, and what impact those measures seem to have on things of value to the business.
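As a toy illustration of that "build a case for a correlation" step--with invented numbers and plain Python rather than any particular metrics tool--it can be as simple as lining up a lead measure and a lag measure month by month and computing a correlation coefficient:

```python
# Toy lead/lag tracking: a lead measure (teams following the chosen practices)
# and a lag measure (average cycle time in days) over the same six months.
# All numbers are invented for illustration.
from statistics import mean

lead = [2, 4, 6, 9, 12, 15]      # teams practicing iterative delivery, by month
lag = [40, 38, 35, 30, 26, 22]   # average cycle time in days, same months

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly negative here: more agile teams, shorter cycle time. Correlation
# isn't causation, but it is the kind of evidence that motivates sponsors.
print(f"lead/lag correlation: {pearson(lead, lag):.2f}")
```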

Thought of this way, there are things you should think about measuring that you might not otherwise bother with, because they seem "obvious" to you as an agilist.  One example is "transparency."  As agilists, we take it for granted that since agile produces working software with every iteration, a program office running a set of agile practices will have the ability to measure EXACTLY what percentage of planned scope is complete at ANY time after the release plan is created.  But for someone new to agile, this concept is not obvious.

So one thing to think about is creating side-by-side snapshots taken at two-week intervals of the "project health dashboard" your PMO keeps on its waterfall projects, and corresponding snapshots taken at the same interval of the health of your first agile projects.  If you're honest, most likely the agile projects will track red at first, not least because the team is processing a lot of new stuff.  Later the projects will go yellow and green.  That's good.

But what is GREAT is that while your agile "health" dashboard is telling the truth every two weeks, the snapshots of the waterfall projects will show a pattern of "all green until red" or "all amber until red" or "all red, then suddenly green," or the like.  The point is that in a waterfall project, you JUST DO NOT KNOW if you are actually okay or not until the day you go into user acceptance testing, and sometimes not even then, if UAT looks bad enough.  By measuring "transparency of project health" through snapshots at the company where you are introducing agile, you make the change concrete and immediately apparent.  This is the type of metric that will "motivate" even the most senior and agile-unfamiliar CIO "elephant," in the terminology of Dan Heath and Chip Heath.
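To make the contrast concrete, here is a hypothetical pair of biweekly red/amber/green histories and one way to put a number on "transparency" as advance warning.  The statuses and the metric are mine, invented for illustration:

```python
# Hypothetical biweekly red/amber/green snapshots for one waterfall project and
# one agile project, plus an invented "advance warning" metric for transparency.
waterfall = ["green", "green", "green", "green", "green", "red"]  # all green until red
agile = ["red", "red", "amber", "amber", "green", "green"]        # honest from week two

def advance_warning(history):
    """How many snapshots before the final one the dashboard first admitted
    the project was not green."""
    for i, status in enumerate(history):
        if status != "green":
            return len(history) - 1 - i
    return 0

print("waterfall advance warning:", advance_warning(waterfall))  # 0 snapshots
print("agile advance warning:", advance_warning(agile))          # 5 snapshots
```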

From http://www.analysis-one.com/kpi-analysis.aspx

I am on the road, so I can't get my teenager to explain this graph to you in detail, but the general idea is that by measuring some simple and straightforward things, you will be in a position to do some dynamite and statistically interesting "lead/lag" metrics reports to your agile transformation sponsors weeks or months into your project.  The faster you can get a correlation between "we did this agile practice, and here's the benefit to the business," the faster you're going to get enthusiastic buy-in from your stakeholders, instead of eye-rolling and nervous references to the "TQM" fad from the 1980s.

Moreover, measuring this way keeps you honest.  What key business performance indicator can you impact most quickly in your context, and how?  Maybe you should lead with automated testing, not the business case.  Maybe it's the opposite.  But even if you don't share with your stakeholders (in many cases the truth is something to be very careful with), YOU should know what you're doing and how it matters to the business, with as much quantitative evidence as you can muster.

So don't stop with "we're agile!"  Drive directly to "we're agile, and you can tell, because the business is already better."  Don't settle for putting the pointy end in first.

Sunday, March 11, 2012

Product Owner Safari

I was privileged to conduct a workshop yesterday at the Agile and Beyond Conference in Dearborn, Michigan, on how to be an enterprise Product Owner.  The deck is available here.

Thanks to all who attended!


Saturday, March 3, 2012

Almost Painless: Surviving Feedback

"And everyone likes a party/But no-one wants to clean" -Keb Mo, "Victims of Comfort"
The concepts of "continuous feedback" and "continuous improvement" are central to agile and lean philosophy.  Esther Derby and Diana Larsen have a wonderful book entirely about team retrospectives.  "Inspect and adapt" itself,  the 12th principle underlying the Agile Manifesto, has been subject to inspection and adaptation and trumped by "Plan-Do-Check-Act."  Teams, processes, work-in-progress--all are ideally subject to frequent observation and tuning.

But what about the people?  As agilists (or non-agilists with common sense), we recognize that we succeed or fail based on the quality of the people and interactions on a team, regardless of the process followed.  If we are going to squeeze maximum value out of ourselves, shouldn't we be putting something in place to tune our people even before we tune our processes?  The grim specter of Annual Reviews rears its ugly head.  Or "360-degree Feedback." 
An unintentionally scary portrayal of the "360 Feedback" concept.
Sure, I know there are some overachievers out there who constantly ask for feedback on themselves, the more painful the better, but count me in with those of you who got one scathing anonymous comment in a 360-degree review ten years ago and never got over it.  (Bob, I know it was you).  The fact is that person-to-person reviews are tricky and somewhat risky, particularly when the person DOING the review has the power to impact the salary and continued employment prospects of the person RECEIVING it.  And yet if you don't do power-oriented reviews like these with actual ramifications for someone, it is very hard to jump-start a culture where peers provide this type of feedback to each other in a way that everyone benefits.

Moreover, just as company-mandated "fun" isn't fun, company-mandated "feedback" is more about how to game the annual review cycle than it is ever going to be about personal self-improvement.
  • A glowing review from a respected person isn't just a feel-good moment for both of you--it's also your ticket to recognition, title, salary, and/or internal fame within your company.  Your ability to adapt based on inspection is so trumped by these sensible factors that you are likely not to get the nuggets of helpful advice from your reviewer you could actually use.
  • Alternately, your company may have taken the defensive stance that "positive reviews must be discounted," since they are clearly just there to support a black-market economy of political IOUs.  So now you have to do "fake criticism" in your peer review which "inadvertently" reveals how utterly amazing your peer is, when it comes to salary adjustment time.  It's just like in your job interview where you said your own greatest fault was being "too dedicated to the company at your own expense."  (Sure, feel free to use this line next time you're writing a peer review in an anti-positive company culture.  "Sam works too hard, so it's sometimes hard to get him to lighten up."  That kind of thing).
I honestly have no idea how to fix the annual review cycle, or indeed how a company should determine how much to pay its employees, particularly in a flat organization.  I think it might always boil down to politics and whether the person is in high demand in the competitive job market outside the company, but that's just my guess.

BUT even though I am quite freaked out by the whole concept, I still think this is something each of us should do for ourselves.  It behooves each and every one of us to launch our own interpersonal feedback revolution, for our own sakes.  I consider this to be a routine matter of professional hygiene analogous to tooth or lint brushing.  And I can suggest a way to do it that will make it almost painless, almost fun to do, and almost not terrifying.  Here is my Interpersonal Feedback Revolution Manifesto:
  1. Ask for feedback from a peer at least once per week (in case you were wondering, I personally think tooth brushing should be more frequent than this and lint brushing less frequent, unless you constantly wear black and own a white fluffy pet of some type, like a rabbit or a cat).
  2. Provide your reviewer with the format you would like.  My favored format is:
    • Tell me one thing I'm good at first.
    • Tell me one thing I can improve on.
    • Suggest a way to achieve the improvement.
  3. Be prepared to reciprocate.  Your reviewer may well ask you for your feedback in return, either out of politeness, fake professionalism, or because they genuinely think this is a good idea, and almost painless.  USE THE SAME FORMAT, unless they specify they want something different.  One time I had a person ask for feedback, and I gave it in this format, and they accused me of holding out on them and demanded that I go through ALL of their faults IN DETAIL so they could get maximum benefit.  Maybe one day I will be brave enough to make that type of request (and have a year or so to spend).
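For the concretely minded, here is a hypothetical sketch of that three-part format as a tiny script--roughly what a little card or phone app might prompt for.  The prompts are the three bullets above; everything else is invented:

```python
# Hypothetical sketch of the three-part feedback format as a tiny script --
# the kind of thing a little card or phone app might walk a reviewer through.
FEEDBACK_PROMPTS = [
    "Tell me one thing I'm good at.",
    "Tell me one thing I can improve on.",
    "Suggest a way to achieve the improvement.",
]

def ask_for_feedback(reviewer: str) -> dict:
    """Collect one answer per prompt from the named reviewer at the console."""
    return {prompt: input(f"{reviewer}: {prompt} ") for prompt in FEEDBACK_PROMPTS}

if __name__ == "__main__":
    for prompt, answer in ask_for_feedback("Sam").items():
        print(f"- {prompt}\n  {answer}")
```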
I am toying with the idea of printing the format on little cards or maybe developing a smartphone app, to make this seem cooler, but those things aren't necessary.  Here are what I regard as the key benefits of my manifesto:
  • Like all good "lean" systems, this is "pull" driven, not "push."  The revolution is that we ask for the feedback from the people we choose, with an open heart and mind to take the advice, because we genuinely want to get better.  It would defeat the purpose of the revolution if anyone adopted it due to a company mandate.
  • This formula puts the positive feedback first:  We wait to make the potentially damaging revelation that we are imperfect until after we have acknowledged something good about the person.  The formula helps us overcome our bias towards only hearing the negative:  The human brain attends to negative stimuli much more than positive.  That's because in the wild, something "negative" could be a lightning bolt, a tiger, or a man-eating plant of some kind, so it was worthier of attention than "positive" stimuli like the beautiful pollution-free air or the butterflies.  Today's menaces are things more like heart attacks, alcoholism, or road-rage car accidents, so it turns out survival now requires us to tune down our natural "fight or flight" reflexes and tune up our rose-colored glasses.
  • The format leaves you with a trajectory towards a better future state.  Nobody ever said the system was good just because you "inspect" it frequently.  It gets better because you turn the inspection into an adaptation.  Analogously, nobody says "Plan-Do-Check-Go Home Depressed."  It's a cycle.  As winter comes before spring, an acknowledged area for improvement becomes an opportunity to read a new book or try out a new technique.
Okay that's enough for now.  But think about it.  Ask for feedback, and provide help to the person in how you want to hear it.  If only a few of us do this, well, I was going to say the world will be a better place, but you know what?  The point is, if ANYONE does this, they will benefit.  This revolution is measured qualitatively.