
Wednesday, November 21, 2012

Monetizing Transparency

As a budding agile coach at a new site, you are probably poised to champion "transparency."  You are eager to introduce your friends to "information radiators," open lines of communication, and fortnightly project showcases.  But not so fast!

From some blogger who borrowed it from Dilbert without permission.
As a wise mentor of mine once said, "the truth is something to be very careful with."  The control of information directly and indirectly translates into control over budgets, income, and even job security.  Sadly, if you are working in any organization which employs humans, you need to be careful.  You will run into a world of pain if you misunderstand what will happen to the flow of money in your company when you change the flow of information. 

Three patterns to look out for:

1)  "Good news up; bad news down."  Are there people in your organization whose entire job it is to make sure that the people at the "coal face" stay in a perpetual state of concerned stress that the project, the division, and even the company is on the brink of disaster, in order to "boost productivity?" Do these same individuals simultaneously feed their bosses a steady stream of nothing but good news, so they can bask in a shared sense of "good management at work?"
  • Double check:  I suggest that if you think this isn't happening with your teams, you should double check.  You will likely NOT know about these people, because they will look exactly like the people who are actually excited to work with you. Once you leave the room, they will order their teams to disregard everything you have said, and try to minimize further contact.  
  • New strategy:  Whenever you start working with a new team, ask yourself what each member of the management chain above the team has to gain or lose if information flows freely from the bottom to the top and back.  Plan accordingly.  Trust, but verify!
2)  "Let's keep the meeting small."  How many times do you hear this?  As an agilist you believe in having face-to-face conversations with small groups of people, preferably in person.  So you may be inclined to accept the "small meeting" suggestion at face value because you already think it's a good idea to set up a meeting that fits your mental model of how a meeting should be run.
  • Double check:  who wants to keep the meeting "small," and what is the reasoning for who gets included and who gets excluded from the meeting?  Who would you have at the meeting, all things being equal, and everyone being available?  
  • New Strategy:  "Small" is never value-free.  "Small" means that the person who wants to "keep the numbers down" wants to build a consensus strong enough to overwhelm the "extra people" who aren't being allowed to slow things down at the meeting.  That can be something you agree with or disagree with, but please be aware of it.  "Knowledge currency" is being actively controlled every time someone plays the "let's keep it to just us three" card.   
3)  "It Depends."  Are you an agile consultant, or do you work with consultant helpers who answer questions with "it depends" as a standard response?   If you are on the giving end of the "It Depends" response, please consider the ramifications.  You likely feel you are speaking from principle, from the heart, and from the Agile Manifesto.  "Don't lock anything down!" you say.  "Make them think for themselves."
  • Double check:  where are you or your consultant friends making their money?  I don't think there is any bad faith going on here, but I do think there is sometimes a "principled" withholding of practical information which is a pattern perpetuated by professional people who make their living doing a bunch of specific things very well.  
  • Double check:  if you are on the receiving end of the "It Depends" thing, and you aren't getting enough useful factoids from your consultants to let you do something in particular better than what you were doing before, why are you paying them?  And if you agree that what you are buying from them is that they keep you "hungry" for continuous improvement, why in the world is that happening?  Were you really not hungry before?  Why did you decide to hire them in the first place?  Where did you get that money?  Seriously!  
  • New strategy:  Parties on both sides of the agile transformation consulting contract need to remember that important concept from the Agile Manifesto:  people are more important than process.  If the people being trained on agile are already empowered, thoughtful, careful, intelligent and prone to continuous improvement, then they should set up an agreement to be trained by their consultants on new techniques with a minimum of philosophical lecturing or Socratic game playing.  If the trainees are in a broken culture which doesn't allow for individual initiative, then there still needs to be a combination of long-term strategy for fixing the culture, which can take a long time, and short term tactics for working with teams within their current limitations.  In neither case should the words "it depends" ever be uttered except as a preface for one or more options, and re-usable advice on choosing between them in the future.
So I guess that was a bit of a rant.  Sorry.  At any rate, I don't mean to be all "Mayan End of Time" here, but you know what?  There is no innocent use of information, and there is no innocent refusal to provide information--every communication is an act in the world.  Please be very thoughtful in your communication choices.

Sunday, November 18, 2012

Einstein's Sailboat: 3 Ways To Speed Learning for Agile Newbies

Albert Einstein turns out to have been both an awesomely brilliant physicist and an enthusiastic but comically bad sailor.  Apparently, he spent whole summers amusing his neighbors by capsizing frequently and needing to be fished out of the water.  You would think that he could be calculating wind directions and consequences in some small corner of his vast brain, while also solving World Peace, but you would be wrong.  Not sure about the World Peace thing, but he definitely couldn't sail.

From a post by "Angel" on http://www.boatdesign.net/forums/sailboats/einsteins-sailboat-tinef-40050.html.  Watermark suggests image wasn't obtained legally by Angel, but what a cool photo!
Einstein's difficulties have several points of relevance to you, if you are a person charged with helping people develop software in an agile way.  It's not just a parable--it's a valuable set of lessons learned!  Here's how to plunge forward in your agile training program with more speed and less splashing.

Speed Teaching Tip 1:  Brilliant High-Level Generalizations Are Not Your Friend When You're Trying To Learn To Do Something In Particular With Multiple Tricky Steps.  Some would-be agile trainers like to sweep their arms expansively and say "everything you need to know is in the Agile Manifesto!"  Or maybe they will agree to throw in the Agile Principles too.  In fact, I challenge you to find an "Agile for Beginners" deck that doesn't start with a little meditation on the Manifesto.  Yet how helpful is that? 

Did the Agile Founding Fathers write up the Manifesto first, and then derive their various methodologies from it?  No, they did not.  They tried a bunch of stuff, kept track of what worked and what didn't, wrote books, staked their intellectual property claims to some different specific methodologies like DSDM and Scrum, and then eventually agreed that the Manifesto captured some common things across those specific methodologies. 

Why, then, do we insist on teaching agile as though it can be derived from the principles instead of vice versa?  We get pointlessly tied into knots that way.  Let's not do that any more.  When working with beginners, you can skip those Manifesto slides.  I give you permission.  Which leads us to:

Speed Teaching Tip 2:  Maps, Checklists, and Procedure Descriptions Really Are Your Friend.  Airplane pilots maneuver their craft all over the world with far more reliability and far fewer accidents than the average American motorist, and let's just say we're glad Einstein didn't fly us to our most recent conference speaking gig.  How do the pilots do it? 

They use checklists to keep track of the details.  Yes, they have experience, knowledge of the principles of aerodynamics, and muscle memory, and no, they wouldn't be able to depend solely on checklists when conducting an emergency landing on the Hudson.  But they can focus on the work that requires human judgment because they have written down the detailed things they need to remember, and they make it a routine to consult those lists.

Would it be so bad to create a standard set of checklists for our software development teams?  How about if we stipulate that people can tailor the lists to their needs, and that the lists will change over time?  Complicated things lend themselves to being presented in a predictable order.  Which brings me to:
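As a purely hypothetical illustration, here is what a small, team-tailored checklist might look like if it were kept in code and versioned alongside everything else.  The "iteration kickoff" name and every item below are made-up examples, not a prescribed standard:

```python
# A minimal sketch of a team checklist kept in code so it can be versioned and
# tailored over time.  The list name and every item are hypothetical examples.
ITERATION_KICKOFF_CHECKLIST = [
    "Stories estimated and accepted by the team",
    "Acceptance criteria written for every story",
    "Test environment refreshed with production-like data",
    "Last iteration's open defects triaged",
]

def print_checklist(items):
    """Print the checklist in a form the team can walk through together."""
    for item in items:
        print(f"[ ] {item}")

print_checklist(ITERATION_KICKOFF_CHECKLIST)
```

The point is not the tooling; it is that a written, routinely consulted list frees the team's attention for the judgment calls, just as it does for the pilots.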

Speed Teaching Tip 3:  Just Use a Timeline.  When people learn, they need to attach the new information to old information in their brains.  Everyone's brain is going to do associations differently, but you have a fighting chance of attaching at least some information to everyone's brain if you organize your thoughts in chronological order.  Almost everyone you will ever present with a deck can relate to "beginning, middle, and end."  Organizing your agile presentation "topically" the way you think about it will drastically cut down on how much other people can actually absorb from you.  What unifies "Pair Programming, Sustainable Pace, and Evolutionary Design?"  Only you know.

There is something unbearably cute about the idea of Albert Einstein capsizing sailboats around the world in his free time.  But if large scale agile transformation is something you do for actual pay, I recommend stowing the brilliance and hauling out some dry procedural documentation, at least when working with beginners.

Friday, November 9, 2012

Technical Debt is Real Financial Trouble

Do you shun the people at work who hassle you with statistics about your company's software cyclomatic complexity?  Are you tired of this issue, frequently raised by developers whom you need, but whom you privately think of as highly strung prima donnas?  Do the words "JUST GO AWAY" not seem to work with these people?
From http://www.comicvine.com/prophet-of-doom/29-66484/
You and I may find this counter-intuitive, but it turns out these geeky prophets of doom have a point.  "Technical debt" has actually been proven in the real world to cost you money both directly and indirectly.  Lots of money.  Moreover, you incur additional security risks when you turn a blind eye to unnecessary code complexity.  Your current laissez-faire attitude towards code quality could actually turn into an unplanned financial catastrophe at an inconvenient moment in the future.

The "golden age" of research on costs of cyclomatic complexity seems to have been between 1990 and 1992.  A lot of the original work has perhaps been tainted for the modern agile audience because it was done to justify the use of Computer Aided Software Engineering tools (CASE), discussion about which has gone out of vogue.  But studies since the 1990s have consistently had the same results, and if anything, the situation has only gotten worse as the world's code base has grown.

Quick Technical Debt FAQ:

What is Technical Debt?  The phrase was coined by Ward Cunningham in 1992, who described it this way:
"Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise."
Right, so it's just coders being fussy.  It's a matter of opinion.  Uh, no.  It is measured in terms of things that actually should be fixed quickly rather than allowed to linger, for actual business reasons.  Examples of this are not aesthetic or subjective.  They are countable things like:
  • High number of known open defects logged in your test repository.
  • Lack of an automated test base, so you have to pay a lot for manual testing in order to know from one release to the next what you might have broken.
  • High cyclomatic complexity.  The term was developed by Thomas J. McCabe, Sr., and is similar in spirit to the Flesch-Kincaid Readability Test.  It counts the independent paths through a piece of code, taking into account every "if," "case," loop, and the like.  A high score indicates high complexity.  (A rough sketch of how the counting works follows this list.)
  • A high amount of duplication of individual lines of code.  This means that to fix a defect in duplicated logic, you have to go to more places and make the same change more than once.
  • Very long methods.  Just as you have to put more effort into reading a blog post than a tweet, a programmer has to work harder to make sense of an individual function or method of 1,000 lines than of one limited to 100 or so.
  • SEI Maintainability Index -- a mechanical combination of factors like these into a single score indicating the overall difficulty of making changes to the codebase as needed.
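For the curious, here is a rough sketch of how cyclomatic complexity counting works, using Python's standard ast module.  The "decision points plus one" rule and the handful of node types counted here are simplifications; real analyzers handle many more constructs:

```python
# A simplified McCabe-style cyclomatic complexity counter.  The node types
# counted here are a subset chosen for illustration.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in the source and add one for the entry path."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

example = """
def ship_order(order, inventory):
    if not order.items:            # decision 1
        return "empty"
    for item in order.items:       # decision 2
        if item not in inventory:  # decision 3
            return "backordered"
    return "shipped"
"""

print(cyclomatic_complexity(example))  # 4: three decisions plus the entry path
```

Every extra branch is one more path someone has to hold in their head, test, and not break the next time the code changes.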
Why would those things translate to an actual cost?  As your developers would aver, it's fairly intuitive that it costs more to maintain something complicated than something simple.  They seldom describe this impact in terms of money, however.  So let's look at the two main scenarios where the problem moves from "coder stress" to "CFO expenditure."  I did a casual survey of the available interweb research and found the following.  A rigorous review would surely find more:
  • Routine Software Maintenance:  The original refereed MIT research on the cost of complex software in an enterprise environment found that when you control for other variables, "high levels of software complexity account for approximately twenty-five percent of maintenance costs or more than seventeen percent of total life-cycle costs."  This study was published in 1990, when the state of the art for commercial software was COBOL.  One can conjecture that in modern multi-component software architectures, opportunities have arisen for even more complexity, and even higher maintenance costs.
  • Security Risk:  A 2012 white paper published by Coverity documents software "mistakes" which cost billions of dollars, and for Bank of America, a stock plunge of 15 percent.
Oh, I get it, isn't this just a classic long-term/short-term decision about capital spending?  Well, yes and no.  The "short term" doesn't last very long on a project.  As Ward Cunningham said in the quotation above, shortcuts are fine for any given release, if your project is small.  On a large project, where you don't release for several months at a time, you can easily accumulate technical debt even before you deploy to market.  Technical debt doesn't age well.  It is a "payday loan" type of situation, not the kind they advertise on the radio with "NO MONEY DOWN, NO PAYMENTS UNTIL JUNE!"  The longer you wait to fix the complexity, the more expensive it gets to fix.  Complexity gets added to complexity.  Like financial debt, technical debt compounds.
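To make the compounding concrete, here is a deliberately toy calculation.  The twenty-five percent overhead echoes the MIT study cited above; the dollar figure and the assumption that the overhead grows by a further ten percent of itself each release are purely illustrative, not measured numbers:

```python
# Toy illustration of how deferred cleanup compounds.  All inputs are
# hypothetical; only the 25% starting overhead echoes the study cited above.
base_maintenance_cost = 1_000_000   # hypothetical annual maintenance spend
overhead = 0.25                     # share of spend attributable to complexity
growth_per_release = 0.10           # assumed compounding of that overhead

for release in range(1, 6):
    extra = base_maintenance_cost * overhead
    print(f"Release {release}: roughly ${extra:,.0f} spent servicing complexity")
    overhead *= 1 + growth_per_release
```

The exact numbers do not matter; the shape of the curve does.  Left alone, the servicing cost only goes up.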

Your policy should be to maintain a constant state of low complexity except in exceptional cases, and then only with a plan to go back and fix the anomalous complexity immediately after the market advantage of the premature release has been enjoyed.


In the diagram above, "A," "B," "C," and "D" represent different spending levels to get the same software written on behalf of your company.

A:  represents the money you are budgeting for total cost of ownership of your software today, which calls for paying no attention whatsoever to the financial risks inherent in your increasingly complex code base.

B:  represents the money you could be budgeting and purposefully spending to avoid expensive maintenance and security risks.

C: represents the money you are spending today.  Technical debt isn't hypothetical.  You are spending 17-25% more than you should be on BAU software support, and that is a conservative estimate.  That's every single day.

D: represents the money you think your developers want you to spend on making the code "elegant."  And maybe they do.  But while you are busy trying to do "A" instead of "D," you have lost track of the opportunity to go from "C" to "B."

It's not just you.  In 2011, Andy Kyte at Gartner estimated that 60 years of accumulated technical debt has created a world-wide ticking financial time bomb worth $500 billion in 2010, with the potential to rise to $1 trillion by 2015.  What's worse is that every day, companies are spending billions just to keep maintaining this mess.  This isn't a "hypothetical" -- it is a cost to your business right now.  Can you really afford this?  Get a jump on your competition by systematically finding and eliminating technical debt the way you would any other serious organizational risk.

Monday, November 5, 2012

The Anti-AMM: Sculpture, Not Pyramid Building

Get a bunch of agile evangelists together, and we like nothing better than to argue over the details of our favorite Agile Maturity Models (AMMs) over a few beers, the more complex the better.  "But HOW can you make a REAL DIFFERENCE until you TACKLE HIRING PRACTICES FIRST!" we holler, having (unfortunately) switched to tequila shots a little later in the evening.  Our client practices are raw boulders, and we are going to help them build a pyramid!
http://science.howstuffworks.com/engineering/structural/pyramid.htm
If only our clients would listen to us, we think, we can help them "start small" and gradually evolve into high-performing teams like the ones we're used to.  First they might try Stand-Ups and Burn-Ups, we say.  Then if all goes well, we will try test automation.  Refactoring for extra credit, but only once the analysts have been trained.  Sequence is all, and it gives us a lot to argue about.  Let's build that pyramid!
AMM rendering of the first verse of "The Sloth" by Flanders and Swann
Fellow agile thinkers--have we really stopped to think about what we imply when we use the word "maturity" to describe what we are helping clients build?  Our slide decks feature aggressive use of primary colors, and we tacitly use the word "immature" in close proximity to the words "CIO," "CQO," and "CFO."  What have we been thinking?  This isn't just wrong, it's bad business.

What we need is the exact opposite of a maturity model.  We need to know Nirvana in its pure and complete beauty, and then we need to let our clients help us to sculpt away our assumptions, to make our model even more useful.  We had it backwards:  clients aren't in the infancy of building our Nirvana state.  We are in the infancy of being able to cut Nirvana into the right pieces to efficiently help our clients.

What Does This Big Block of Nirvana Look Like?  First, let's freely admit we need to always have Nirvana with us.  It could even be a Nirvana borrowed from the top of someone else's Agile Maturity Model.  In Nirvana:
  • Product Management knows how to design a software product so that it can be tuned on the fly--try out the "A" and "B" models simultaneously, and if "B" makes more money, flip a switch and turn "A" off.  (A sketch of this "flip a switch" idea follows this list.)
  • We can release new versions of software to production every week.  Hour.  Second.
  • We have room to make mistakes because we have a portfolio strategy which constantly checks to see that the return on investment for any given project is rising from one iteration to the next, not leveling off.
  • We're satisfying our customers in a fast-changing world, and our code quality is incredibly high, so we can change it on a dime and release something new tomorrow.
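Here is a minimal sketch of that "flip a switch" idea from the first bullet above.  The flag store, the flag name, and the routing logic are all hypothetical; a real product would more likely use a feature-flag service and proper experiment analytics:

```python
# A toy feature-flag router for splitting traffic between variants A and B.
# The in-memory dict, the flag name, and the 50/50 split are all hypothetical.
import random

FLAGS = {"checkout_variant_b": 0.5}   # fraction of traffic sent to variant B

def checkout_variant(user_id: int) -> str:
    """Route a user to variant A or B based on the current flag setting."""
    random.seed(user_id)               # stable assignment for a given user
    if random.random() < FLAGS["checkout_variant_b"]:
        return "B"
    return "A"

print(checkout_variant(42))            # which variant this user sees today

# If variant B turns out to make more money, "turning A off" is one line:
FLAGS["checkout_variant_b"] = 1.0
```

The mechanics are trivial; the hard part in Nirvana is that the product, the telemetry, and the release pipeline are all built so this kind of tuning is safe to do constantly.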
How do we do this in Nirvana?
  • all teams are small and collocated, and 
  • they work solely on brand new software, or maybe software that's a year or two in age, 
  • but all of it is swaddled in a Continuous Deployment pipeline 
  • hitched up to 
  • an automated testing framework that 
  • handles all possible levels of testing 
  • either at check-in, or overnight, if the thing to be tested takes a long time.  
  • Also, in Nirvana, all applications are web applications with virtually zero fixed cost for releasing to market.
All teams in Nirvana
  • are self-organized, and 
  • all members of those teams are team-oriented geniuses 
  • who understand that agile is not a fad, but a collection of productive goodness, 
  • who can do anything, and 
  • who would never let you down.  
There is no ill-will in Nirvana, no politics, no personal ambition except Getting the Job Done.  Everyone is competent and no-one is new, except for the occasional hyper-smart individual we hire and quickly train through effective use of pairing.

Sculpting Nirvana to Fit the Real World. How useful is our vision of Nirvana to the average executive?  Not very, especially when it is accompanied by an Agile Maturity Model that measures each company by how much it doesn't match, and calls for fundamental corporate change "before we can even get started."

What must we do to make our vision useful?  Stop thinking in terms of "maturity."  We're not looking to help clients build our ideal pyramid.  Or if we are, we have to stop.  We need to think instead of Michelangelo's concept of the "non finito": the thing of beauty constantly emerging from the rock.  Our clients are helping us evolve our vision by testing it against reality (again).
http://raeleenmautner.com/?s=non+finito&submit=Search
But concretely, if you will pardon the word, what do we do?  Here it is:
  1. Pack.  We pack up our big old block of Nirvana and take it with us to visit a client.  But we leave it out in the truck for the initial meeting.
  2. SWOT.  We have a SWOT analysis, whether formal or implied, with our clients.  Not just "what are your pain points," but "what are your opportunities and threats?"  Not "what are your software development processes," but "how can technology minimize threats to your business and capitalize on your opportunities?"  Think like a business person.  Think about increased revenue, decreased operational costs, increased customer satisfaction, creating a new market, lowering cost of business as usual, shrinking time to market, retaining the best staff and being a destination employer.
  3. Rethink.  In two stages:
    1. We take the facts of the actual client business situation, as we understand them, back to our block of Nirvana in the parking garage, and we start chipping away at our own assumptions.  Wow, our client is building middleware with a team on three continents, and they've been through three major "process engineering" fads in the last six months.  Gee, the product management group hates the technology people.  Hm.  The developers don't unit test, and they get mad when testers send code back to them to be fixed.  Yikes!  Eight different "stage gates" with hundreds of employees who will lose their jobs if we try to cut down the bureaucracy.
    2. Once we have thrown away the "all or nothing" view of Nirvana, what can we propose from our Agile bag of tricks that will actually give our client a positive return on their investment for hiring us?  What reasoning and proof can we give that our suggestions will work?  What can they do in 3 months?  A year?  3 years?
  4. Propose.  We go back to the client with a "business proposal," not an AMM.  We suggest things that can be done in 3 months, a year, or 3 years.  We give them our price points for helping them reach these achievable goals, and we give them metrics to suggest why they will see a business return on their investment.
  5. Drive home.  Note that nowhere in this process do we force the client to come out to the parking lot to view our big old block of Nirvana.  It is the model that powers our thinking and provides a bounty of patterns in every engagement, and it's something we keep refining.  But it is our luggage, not theirs.