
Thursday, April 25, 2013

Automated Agile Testing Is Needed, And Someone Needs To Decide To Do It

Did you wake up this morning and think to yourself, "gee, I think I'd like to try some agile today?"  Allow me to leap between you and your coffee to announce that your adoption of agile requires more than just management techniques (stand-ups, burn-downs, and card walls).  If you hope to deliver your company's software faster, better, and more transparently for more than a couple of weeks, you are going to need a framework for running automated tests on a scheduled and ad-hoc basis, and some well-engineered tests within that framework.


What, you say?  Yes!  You must act now to avoid the following:
  • Bad Quality.  For starters:  How were you planning to know if your software is okay after the first two weeks?  I'm already worried for you, if your plan is to follow a book of some kind and simply have "a product owner" do some "inspection" and say "tally ho."  
  • Gradual Slowdown.  But worse, suppose that works and your new web site or whatnot is a huge success.  Now you may be lured into a false sense of security and get fairly far down the road before you discover that, if your regression tests are manual, you cannot keep up a cycle of 2-week iterations deploying reliably to production for very long before your team starts to buckle under the pressure of regression testing the whole code base for each release.  First the product owner will ask others for help.  Then the whole team will be pulled in.  Then you'll stop developing on day 8, so the whole team can pound on the system.  Then on day 7, and so on.  Eventually you develop for one day and regression test for nine.
  • Sudden Unpredictable Screaming.  If, like one project manager I read about recently, you think that all you need to do is some "exploratory testing" with each release, you are living in a scary dream world.  In my world, someone eventually says "how did this break?" and that person, often an authority figure holding the checkbook, expects someone to have an answer to that question.
  • Inexorable Stealth Reversion to Waterfall.  Last but not least, because I have literally seen this a dozen times if not more: if you are the kind of team that doesn't deploy after every sprint, you may decide at some point to wait until the next iteration to do the testing, so you can show progress on your burn-up chart while the testers struggle to keep up.  Eventually, your testers will tell you that you need to do a big, long regression test at the end of all the sprints so you can make sure everything is okay before you release to production.  Sort of like the big long test period you used to do.  In the "waterfall" SDLC.  In fact, maybe we should just plan it that way from the get-go.  Six months later, agile is gone.
Okay, you say, that's not good.  I want to be agile, really!  What do I need to do?  Dear reader, I am not a certified agile testing authority (CATA), but I can do two things for you:
  1. Point you to some bona fide authorities to find out more (please do look into the work of Lisa Crispin and Elisabeth Hendrickson, always linked to the right of my own posts on this blog).
  2. Tell you what I, an amateur, have been able to find out on the topic.
Here are some things I have learned from my agile testing friends that I want you to know about too:
  • Test automation is software.  You should think of the agile manifesto's "Working Software" as two interrelated working software implementations:  the software product itself and the automated test bed that embodies its requirements and validates its behavior.  That test bed can be broadly defined to include the continuous deployment environment within which tests of various kinds are embedded, or narrowly as a set of tests you can run at will from some vendor system, but you do need some tests that can be run quickly by a machine to keep up.
  • Corollary:  "test automators" are "developers."  Do not scare yourself into thinking you need to find and hire specialists who know how to do "test automation."  These people are typically scarce and very expensive.  You have a whole bunch of people working for you already who can do test automation.  There are different schools of thought about whether you should code your automated tests in the same language as your main code base (for convenience) or in a different language (for additional rigor), but no matter how you slice it, your test automators can be recruited and trained at low cost from your existing pool of software developers.  Of course a developer should not design and execute the tests against their own code--you may earmark a certain set of developers just for testing, or rotate responsibilities--but my main point is that the "test developers" are probably already on your payroll.
  • Testers are still your subject matter experts on what to test and how.  People on agile teams formerly known as "testers" are gradually starting to call themselves "Quality Analysts," not just because it sounds better, but because it is a better description of the job.  There is an entire body of knowledge around what kinds of testing need to be done, when, and how.  There are sub-bodies of knowledge around how to generate, store, and source-code-control the transactional data and driver data for tests.  Testers can and should have the skill sets to help you lay out and implement a quality strategy for the whole program that looks at what investments can be made in quality all along the line, how to get the best ROI for investments in quality practices, and so on.
  • Automation frees testers up so they spend more time designing the right tests and less time mindlessly testing things that are easy to think about.  Exploratory testing doesn't go away in agile.  But the concept of a product owner "inspecting" the code to make sure it's okay should be built on the assumption that the code they are seeing is wrapped in a protective layer of automated tests that cover the basics.
  • There are different kinds of tests.  You probably knew that, but it's just as true in agile.  Generally speaking, agile schools of thought posit that you should have (see the sketch after this list):
    • significant "unit test coverage" for individual code components (which can be run as a set to find problems), 
    • a similarly significant set of tests run below the level of the presentation layer which rigorously test the software from a functional perspective, 
    • a smaller number of automated tests to do "smoke" or "integration" testing automatically (actually run the software through its paces to make sure its parts are integrated correctly),
    • and a slightly smaller group of end-to-end functional tests that actually put the software through some detailed usage scenarios driven through the presentation layer.
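To make that stack concrete, here is a minimal sketch in Python using pytest.  Everything in it is invented for illustration: the apply_discount function, the quote service, and its localhost URL are hypothetical stand-ins for your own code and endpoints.

```python
# A minimal sketch of two layers of the test stack, using pytest.
# All names here (apply_discount, the quote service, its URL) are
# hypothetical stand-ins for your own code and endpoints.
import json
from urllib.request import urlopen

import pytest


def apply_discount(subtotal, percent):
    """Toy production function standing in for a real component."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(subtotal * (1 - percent / 100.0), 2)


# Unit tests: hundreds of these, fast enough to run on every commit.
def test_discount_reduces_subtotal():
    assert apply_discount(100.00, 10) == 90.00


def test_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.00, 150)


# A functional test below the presentation layer: fewer of these,
# exercising a running service's API directly rather than its UI.
# The "integration" marker would be registered in pytest.ini so this
# test can be excluded from the fast suite and run on a schedule.
@pytest.mark.integration
def test_quote_service_returns_discounted_total():
    with urlopen("http://localhost:8000/quote?subtotal=100&discount=10") as resp:
        body = json.load(resp)
    assert body["total"] == 90.00
```

Run pytest -m "not integration" on every commit for the fast feedback loop, and the full suite on a scheduled build; the exact split matters less than having a machine, rather than a person, doing the regression work.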
Your mileage will vary, and different people will describe these different and sometimes orthogonal dimensions of testing using different names.  Martin Fowler, for example, has just posted some "bliki" updates about these topics.  There is a LOT to say about agile testing, and people more expert and eloquent than I have said those things.

I just wanted to make sure that as you proceed to your morning newspaper, you might be motivated to read and understand what those people are saying.

In lieu of my own coffee and newspaper, I was browsing through an entertaining blog post by Ron Jeffries this morning entitled "Manual Testing Does Exist and It Is Bad," and it reminded me that we agilists need to be a little more explicit about certain things than we sometimes are.  And the top of my list of such things, as you can see, is "automated testing." Like gravity, automated testing for agile isn't a "nice to have."  It's the law.

Sunday, April 14, 2013

Fixed Bid Agile Without Cognitive Dissonance

I've been following a LinkedIn forum discussion with interest entitled "Could we use Agile methodology with a fixed bid contract."  Some quick, snappy answers to the question came up immediately, which I will put into two categories:
  • Straightforward:  NEVER NEVER NEVER.  Run away!  Screaming!  These people don't understand agile.  It's a smell.  It's a TRAP!   I'm embarrassed for you.
  • Nuanced:  If you must do this, here's how to make the best of a bad situation (offers advice with implied background nose holding).
Of course, there were also some really useful answers, pointing to blogs like this really nice one from codesqueeze, recommending "target scope" or "target cost" models rather than either "fixed bid" or "time and materials."

Let's talk about "fixed" things though.
 
I'm always intrigued by people who think a perfect stranger should hire them to build software on a "time and materials" basis, or on a retainer, or on salary, and be willing to wait and see whether they will eventually get some monetary benefit from the resultant software that meets or exceeds overall expenditures.  I suppose I shouldn't be surprised.  Sales is more often than not about emotion, gut feeling, and trust, and not about projected returns on investment.  And the decision to pay someone in particular to build software boils down to a sales proposition, one way or another.

Let's posit a world where software delivery team hiring and retention could be subject to financial projections justifying the cash outlay, and in which the hiring body would routinely compare money spent to software return on investment.  Crazy talk, I know, but humor me.

In such a world, the first thing we would want to do is push a little harder on definitions.

What do you mean by "fixed bid?"

Mary Poppendieck summed it up neatly in 2011 with her Agile Under Contract presentation:  agile software development lends itself extremely well to "Fixed Price" contracts, but agile does not lend itself well to contracts with "Fixed Scope."  One of the great advantages of an agile project is that you can stand up a team of people with a known run rate, agree to pay them for a certain time period, and then focus on questions around "what can I get for my money?"
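To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python.  Every number in it is invented for illustration, not a benchmark.

```python
# Toy pricing arithmetic for a fixed-price, flexible-scope contract.
# All numbers are invented for illustration.

run_rate = 40_000      # hypothetical team cost per two-week iteration
iterations = 13        # roughly six months of two-week iterations
contingency = 0.15     # buffer for newly discovered functional needs

fixed_price = run_rate * iterations * (1 + contingency)
print(f"Fixed price: ${fixed_price:,.0f}")  # Fixed price: $598,000

# Price is now fixed; scope is the variable.  At an observed velocity
# of, say, 30 story points per iteration, the client can expect on
# the order of 13 * 30 = 390 points of work, continuously reprioritized
# so the most valuable features are always built first.
```

The negotiation then shifts from "exactly which features for exactly this money" to "which of the most valuable features fit inside this budget"--a conversation agile is built to support.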

So if we're talking about the simple question of "should you accept a fixed price contract" for agile, the answer is yes, certainly, so long as you make sure you set a protocol with the client for reviewing expected scope on a regular basis.

If you structure the engagement in two phases--first a workshop engagement to build consensus and structure around the functional details, then a larger fixed price agreement that includes an agreed-upon contingency for newly discovered functional needs--you are embarking on a project that will be much better than anything this client has seen before, and they will be a great reference for you.

But if we're talking about "fixed scope" projects, isn't that still really insane?  Agilists shouldn't do that!  It's evil!

Hold on a minute, there.  You're right.  That might be insane.  But let's say there's an insane person or company out there that wants to hire someone to do something for them, but only on the condition that the work must be done fixed price/fixed scope.  And they are willing to negotiate on the price up front.  Does Agile philosophy or methodology forbid you to bid on that project?

If they are an evil client planning to take advantage and act in bad faith, then of course you shouldn't agree to work for them, no matter what.  Some agile thinkers regard the fact that a potential client is insisting on a fixed bid as a strong indicator of underlying evil.  And perhaps it is.  Buyer beware.  But as you weigh your prospective clients, please make sure you understand when you have made a judgement call based on the implied trustworthiness of the client, and don't confuse that judgement with a theoretical rejection of their professionalism or intelligence.

For a moment, let's posit a good faith client who operates under old-fashioned methodology, not "a bad person."  Maybe they are working in a highly regulated industry.  Maybe their dad is making them do things this way.  Who knows.  Can you work with them?  This brings us to the final definitional question.

What does your client mean by "fixed scope?"

To me, this is the real crux of the matter.  No matter how you structure your contract, you're making a guess at the beginning about what the end-state scope is going to be.  The project is creating something that has never been created before.  All engagements, in fact, are the combination of an initial agreement and an ongoing professional interaction.

Most people can agree on the first three values of the agile manifesto:  focus on people, working software, and customer collaboration.  The sticking point for your fixed-bid-oriented clients is most likely the final point:  with this kind of contract, how can you embrace change and avoid arguments about sticking to an original plan?

I think there's a simple answer to this:  spend some time in your fixed bid contract discussing the "change management" protocol.  Instead of getting all fussy about "that's not agile" or "I'm out of here," placidly agree to do a fixed bid contract, and put your energy into discussing:
  • how scope will be defined and enforced throughout the life cycle (pushing for coarser granularity at the start and finer granularity just before development),
  • how progress will be reviewed (pushing for regular customer involvement and inspection), 
  • what the threshold should be for discussing functional changes to the software (push for delegation to functional subject matter experts, except where there would be an overall impact to the timeline)
By pushing a pragmatic change management protocol (along with a contingency built into the pricing), you can gain significant agile benefits for clients who wouldn't otherwise accept them.