
For Fellow Testing Novices: Some Basics For Provisioning A New Agile Testing Practice


Agilists are fond of catchy acronyms like INVEST (stories should be independent, negotiable, valuable, estimable, small, and testable), SMART (acceptance criteria should be specific, measurable, attainable, relevant, and time-bound), and especially KISS (keep it simple, stupid).

I've spent some time in the last months trying to get my arms around automated agile testing practices, and in the end, I have to admit that despite the "why do you even need testers" mantra you sometimes hear, testing cannot be "kept simple" for an agile project of any complexity.  You really need to think it through very carefully, and there are a lot of moving pieces that don't reduce into a nice 4x3 matrix. 

Not to destroy the suspense, but by the end of the post I am going to advocate that you hire an experienced, full-time person to set up your quality strategy and lead its execution.  But here is a quick SparkNotes presentation of the dimensions you should keep in mind as you bring in the right people to help. Test architecture is the Ginger Rogers of the agile team.  You may recall that Ginger "did everything Fred Astaire did, only backwards and in heels."  You should think of testers as doing everything developers do, only in reverse and requiring significant "golden data" setup.


General Things To Know About The Role of Quality In Your Agile Team

1.  You may want to evaluate ROI for "good quality strategy setup" in terms of what you spend versus cost of poor quality (COPQ).  If you search the internet, you will find that COPQ can only be explained through use of a diagram which includes an iceberg.  I think it's a "Six Sigma" thing.  Here's the nicest one I found.

[Iceberg diagram of the cost of poor quality, by Javier Rubio]
Even though I think the mandatory iceberg thing is funny, there is a real truth here.  There are a myriad of ways poor quality can cut into your profits and grow your losses.  You need to understand which of those are high risk for you, and you need to build a quality strategy to address them.
2.  Cost of poor quality matters to your bottom line.  As I ranted about in my earlier post about technical debt, clean code allows for fast time to market, fast enhancements over the life of the product, and the lowest total cost of ownership.  If "clean code" includes evaluating your intrinsic code quality along with minimizing functional defects, you can save 25% of your annual maintenance costs, and 18% of your total costs (capital plus operational).

3.  Testing is a vocation.  There's a lot of talk out there about "everyone should do everything on an agile team."  And that's good.  But even though most people I know COULD do everything, they don't WANT to.  It's not just about "ability," it's about passion.  You need people on your team who are passionate about getting the quality right.  Those people will develop a distinct set of skills, some of which are innate, some which are learned, and some which are learned over time through hard experience.  There are a lot more moving parts out there on your test team than you will ever understand.  Do not underestimate the difference you can make by hiring people who WANT to enforce good testing practices on your teams.

4. Most professional testers have experience with waterfall practices, not agile.  A good waterfall tester will be a good agile tester--hang onto them.  But you need to spend some time explaining how agile teams do things differently than waterfall teams, to avoid making them very nervous.  Good testers care passionately about quality, and they don't want your giddy embrace of agile to force them to miss something, which is what they will fear if you don't explain yourself.  You should bring in a specifically agile testing coach to clarify agile testing practices, and to explain the basic vocabulary around things like "unit test," "component test," "story test," and so on.
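To make that vocabulary concrete, here is a minimal sketch in Python (the ideas are language-agnostic). The `bulk_discount` function and its discount rule are invented purely for illustration; the point is the difference in scope between a unit test and a story test.

```python
# Illustrative only: a hypothetical pricing function and two levels of test,
# showing the vocabulary gap between "unit test" and "story test."

def bulk_discount(quantity: int, unit_price: float) -> float:
    """Return the order total, with 10% off for orders of 100 or more."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 100 else total

# A "unit test" exercises one function in isolation, at the code's level.
def test_unit_discount_applies_at_threshold():
    assert bulk_discount(100, 1.0) == 90.0

# A "story test" verifies an acceptance criterion in business language:
# "As a buyer, when I order 100+ widgets, I see the discounted total."
def test_story_large_order_shows_discounted_total():
    order_total = bulk_discount(quantity=150, unit_price=2.0)
    assert order_total == 270.0  # 150 * 2.0 = 300, minus 10% = 270
```

A "component test" would sit between the two, exercising this function wired together with its immediate collaborators.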

5.  You must automate more than one kind of test for agile, in most cases, using a different automation strategy than you used before.  An agilist is likely to think "automated testing" means "automated unit tests."  A waterfall person is likely to think it means "automated functional regression tests."  A pragmatic person will look at the whole picture and feel an urgent need to hire a test architect who will explain it all and enforce some standards.  This is because:

6.  Your automated testing code base should be architected as carefully as (or more carefully than) your functional code base.  When you read about agile testing on the internet, you will find a lot of rhetoric about "team spirit," and how talking with testers helps developers get done faster, because they understand what they're doing.  This is true, but it is not enough!  Your automated tests are likely to be around a lot longer than your hot-shot, chatty coders.  If your organization has a separate team for software maintenance than for projects, you should be thinking about the quality of your automated test bed, not solely how much the team talked together during development.

And seriously, even if your coders do a combination of development and maintenance, are you expecting them to stay with you for the next 25 years or so, and to remember vividly what they talked about?  Some corporate software is around that long, or longer.

So what should your agile test architecture look like?    Think of this as a mapping from the classic "V-Model" for Quality, which most of the world's testers know and understand, to the "Agile Test Quadrants," which covers the same ground, but from a different perspective.

The V-Model

The V-Model is the one most modern software testers are familiar with.  So if you are working with a team that is moving from waterfall to agile software development, you need to be aware of the V-model, and you need to be able to discuss agile testing in the context of what people already know.  The V-model says that software development occurs in discrete phases, and that testers should be verifying that nothing bad happened at any phase.

So what are the key roles and responsibilities for the V-model?
  • Business Analysts: Record requirements, build models of how the system should work in the future, get business users to agree the team is on the right track.
  • Systems Analysts:  Translate business requirements into system requirements.  The system requirements are the basis of future "systems testing."
  • Developers:  In the V-model, developers may be encouraged to implement reusable tests at the unit level and the basic integration level (sometimes called the "component" level).  In many environments, special technicians are also brought in to test the non-functional requirements, which includes environment testing, performance testing, security testing, and the like. Developers will typically create some test data to prove out their code, but the system will not be shipped to testing with that data available.
  • Technical/System/Quality Assurance Testers:  these testers, generally employed by the information technology organization, perform "system" tests.  This involves building out a rigorous set of test data (static data, transactional data, and combinations of input data that will exercise all systems options), defining system test scripts, and running them.  The system tests sit somewhere between the component tests the developers build and the user acceptance tests built by the user acceptance people.
  • User Acceptance Testers/Super Users:  UAT testers, sometimes consisting of actual operational people brought in at the end of a development cycle to do some manual testing, and sometimes including professional testers, do a thing called "user acceptance tests." This involves actually using the system in a controlled environment.  In a well-functioning organization, UAT should be a short-lived period of manual "exploratory testing," where business people ensure that their highest value needs are met. This usually happens after everything else, so if users don't want to accept what they see, they have to wait a while to get things fixed.
  • Operations:  in the V-model, operations people do "validation reporting," to show that the system is working on an ongoing basis.  To do this they typically get some kind of automatically generated report every day that lets them look for variance in standard metrics like "number of widgets shipped."
  • Automated testers:  the V-model doesn't say so explicitly, but "automated testers" in waterfall are people who build a set of end-to-end functional tests after the software is released to production, while people still remember how it was supposed to work.  These end-to-end functional tests are also called "regression tests."
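The "validation reporting" in the operations bullet above can be sketched in a few lines. This is a minimal illustration, not a real reporting tool; the metric name, history window, and 25% tolerance are all invented.

```python
# A sketch of "validation reporting": compare today's metric against a
# trailing average and flag unusual variance. Thresholds are illustrative.

def variance_alert(history: list[float], today: float, tolerance: float = 0.25) -> bool:
    """Return True if today's count deviates from the trailing average
    by more than the given tolerance (0.25 = 25%)."""
    baseline = sum(history) / len(history)
    return abs(today - baseline) > tolerance * baseline

# "Number of widgets shipped" over the last five days, then today:
shipped_history = [980, 1010, 995, 1005, 1000]
print(variance_alert(shipped_history, today=990))  # → False (within tolerance)
print(variance_alert(shipped_history, today=400))  # → True (flagged for review)
```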
Okay, so far so good.  (You're okay if you just skimmed.)  In comparison, agile uses something called the "testing quadrants" which cover similar ground, but in a different shape.

The Agile Testing Quadrants
[Agile testing quadrants diagram, after Brian Marick, Lisa Crispin, and Janet Gregory]
Most agile testers make reference to Brian Marick's "agile testing quadrants," when they describe the theory behind their practices.  Reassuringly, there is a lot of commonality between the V-model and the quadrants.  Most activities are the same (with some notable exceptions), but the timing is different.

Here are some standard(ish) agile testing roles and responsibilities:
  • Business and Systems Analysts: build written models of how the system should work in the future.  Build ongoing communications channels from business people to the rest of the team.  And, most importantly from a testing perspective, ensure "stories" are created which divide the system up into testable vertical slices of functionality that can be tested functionally without the need for intervening and separate "systems" tests.  Each story has "acceptance criteria" which fit into a larger narrative around potential uses within scenarios.  This is key:  agile stories marry systems requirements to functional requirements, and enable the team to do functional testing to ensure system behavior. 
  • Everyone together: before beginning development on any particular story, and as needed while the story is developed, discuss how the story acceptance criteria should be tested from a functional perspective, and from an architectural perspective, so everyone is clear on their role.
    • Determine how to test any GUI widgets--have dev build an automated unit test using a tool like Jasmine?  Have testers manually verify?  Build an automated test using a UI-friendly tool like Selenium or Twist?  
    • Determine how to test the system's functional behavior as laid out in the acceptance criteria, and what additional edge testing or negative testing might need to be done in addition to what is specified explicitly in the written story.  Which tests should the team drive through the user interface through a combination of manual and automated functional tests?  Which criteria should the team test automatically at the service level or below, using a tool like Fit, FitNesse, Concordion, or JBehave?
    • Consider system impact of tests on the overall test architecture.  How will new tests affect the existing UI-driven test scenario library?  The service-driven library?  The integration test bed used at check-in?  The automated functional/system test set run continuously or daily?
  • Developers:  Through TDD, developers implement reusable tests at the unit level and the basic integration level (sometimes called the "component" level).  The ones that are prone to changing or complex to test at the unit or component level have automated tests created which are run at every check-in.  These tests are the ones identified in quadrants 1 and 4.  Not coincidentally, the matrix calls these tests "Technology Facing."  Additionally, if any instrumentation is needed to support automated tests being scripted by the testers, developers build those fixtures.  In many environments, special technicians are still brought in to test the non-functional requirements, which includes environment testing, performance testing, security testing, and the like.  Developers generally handle issues around environments by automating the deployment process as much as possible using a continuous integration tool like Jenkins, Go, Cruise Control, or HP ALM. 
  • Technical/System/Quality Assurance Testers plus User Acceptance Testers/Super Users:  For a new agile team, we often create a small team of experienced testers from the combined IT and business organizations.  This group focuses on the business-facing user acceptance tests, shown in quadrants 2 and 3.  They build the functional test scripts needed for UI-driven scenarios and system-driven scenarios, supported by tooling done in the development group. Automation is used on scenarios with an eye to plugging local story tests into the end-to-end test bed run nightly or continuously.  GUI-based testing is done manually, with automations created for functions that are likely to change.  Optimally, the automations cover each "acceptance criterion" in each story, to ensure that the system does everything the business wants it to do.  Story-based tests in agile are called "functional tests," "story tests," "functional acceptance tests," or "examples," depending on who you are reading.
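The post names Fit, FitNesse, Concordion, and JBehave as tools for this style of testing; the same idea can be shown in plain Python. The "transfer" story below is invented for illustration, with the acceptance criterion written in Given/When/Then form as the tools above encourage.

```python
# Illustrative story-test sketch: an acceptance criterion expressed as an
# executable example. The Account/transfer domain is hypothetical.

class Account:
    def __init__(self, balance: float):
        self.balance = balance

def transfer(source: Account, target: Account, amount: float) -> bool:
    """Move funds if the source can cover the amount; refuse otherwise."""
    if amount <= 0 or amount > source.balance:
        return False
    source.balance -= amount
    target.balance += amount
    return True

# Acceptance criterion: "A transfer that exceeds the source balance is
# refused and neither balance changes."
def test_overdraft_transfer_is_refused():
    # Given a source account holding 50 and an empty target account
    source, target = Account(50.0), Account(0.0)
    # When the user tries to transfer 80
    accepted = transfer(source, target, 80.0)
    # Then the transfer is refused and balances are unchanged
    assert not accepted
    assert source.balance == 50.0 and target.balance == 0.0
```

Because each criterion maps to one executable example like this, the team can tell at a glance which business behaviors are covered by the regression set.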
So roles collapse, and everyone needs to work together, and a bunch of things need to be automated, and it pretty much has to happen in real time.  Also, there are a couple of big pieces missing when you do the mapping.

Things the V-model leaves out:
  • Usability and Alpha/Beta testing.  Especially as agile development has incorporated the concept of "continuous delivery," it has become possible to simultaneously deploy multiple versions of the same production code base to different audiences to test usability.  Metrics on the screens capture efficiency, where people navigate, etc.  The ability to test the software and fine-tune its behavior through variables even after deployment provides a strong contrast to what is available in waterfall, where even basic misunderstandings of functionality may not come to light until the user acceptance period, and by then, it may be too late to change the software at a reasonable cost.
I'm not grinding that particular axe right now.  Just saying.  (You're still okay if you were just skimming.  That's the wonder of bullets and bolding!)
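The "multiple versions to different audiences" idea in the bullet above usually rests on deterministic user bucketing. Here is a minimal sketch; the experiment name, split ratio, and function names are assumptions for illustration, not any specific product's API.

```python
# Hypothetical sketch: stable, hash-based assignment of users to an
# "A" or "B" variant, so usability metrics per variant are comparable.

import hashlib

def variant_for(user_id: str, experiment: str, b_share: float = 0.1) -> str:
    """Assign a user to variant A or B, stably, based on a hash."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "B" if bucket < b_share else "A"

# The same user always lands in the same bucket across sessions:
assert variant_for("user-42", "new-checkout") == variant_for("user-42", "new-checkout")
```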

Things the Quadrant leaves out:
  • Ongoing operational testing (validation reporting).  Quite a lot of the agile literature assumes that you are working in an industry constantly in motion, maybe delivering web software, and working on a code base which is constantly in motion, so operational reporting won't be an ongoing need.  However, some agile shops will need this type of reporting, so I'm thinking it would be good to add it to the quadrant, perhaps as a deliverable, if not as a type of test.
  • Pre-deployment "system testing" and post-deployment "regression testing."  As detailed in the roles and responsibilities section above, system testing is combined with regression testing in a planned way, from a business perspective, and developers retain responsibility for intrinsic code quality at the unit and component level, as well as providing tooling for tests, mocks, stubs, along with virtualizations needed for incomplete end-to-end scenarios.  Testers are building out pieces of end-to-end regression tests from the beginning.  The regression set should be planned ahead at a high level, and evolved in detail through every iteration, with as much care as that of the code base to be tested.
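One lightweight way to keep the check-in and nightly regression sets "planned ahead and evolved in detail" is to file every automated test under a named suite. Tools like pytest offer markers for this; the plain-Python registry below is an invented sketch of the same idea, with hypothetical suite and test names.

```python
# Hypothetical sketch: register each test in a named suite so the fast
# check-in gate and the slow nightly regression set evolve deliberately.

SUITES: dict[str, list] = {"checkin": [], "nightly": []}

def suite(name: str):
    """Decorator that files a test function under a suite."""
    def register(fn):
        SUITES[name].append(fn)
        return fn
    return register

@suite("checkin")
def test_login_happy_path():
    assert True  # fast check, runs at every check-in

@suite("nightly")
def test_full_order_lifecycle():
    assert True  # slow end-to-end scenario, runs in the nightly build

def run(name: str) -> int:
    """Run every test in a suite; return the number executed."""
    for fn in SUITES[name]:
        fn()
    return len(SUITES[name])

print(run("checkin"))  # → 1
```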
How much should you automate?  What tools should you use?  And how much will that cost?  Just another couple of diagrams and we'll be there.

How Much Should You Automate?

One common misconception people have is that agile teams are required to automate everything, to reach 100% code coverage as well as 100% acceptance criteria coverage of 100% of the possible test scenarios at all architecture levels.  There are four important ways in which this is not true.

1.  Are you kidding me?  It isn't possible to do this, even if you wanted to.  Good agile testers utilize the same rules about risk-based testing, test case matrices to control test permutations, and common sense that they used when they did waterfall system testing.  And after an agile team determines what needs to be tested and what doesn't, it chooses an even smaller subset to automate.  But how?

2.  The Test Pyramid.  Mike Cohn developed the concept of the "Test Automation Pyramid," and Martin Fowler has used it recently in his bliki posts about automated testing.

[Test automation pyramid diagram: unit tests at the base, service tests in the middle, UI tests at the top]
The pyramid represents the relative amount of automation teams should build at each level of the architecture.  The pyramid's main point is that teams should generate a lot more unit tests through test-driven development (TDD) than they do service-level tests or tests driven through the UI.  This is a cool and helpful concept, and powerfully illustrates the difference in the way teams tackle automation on an agile project compared with a waterfall one.

As Fowler says, the automated tests coming out of a waterfall project typically have few unit-level tests, and probably have too many functional tests driven through the UI, making them look more like an ice-cream cone.
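The pyramid (and its ice-cream-cone inversion) can be reduced to a simple sanity check on a suite's shape. The counts below are invented; the point is the ordering, not the numbers.

```python
# Illustrative only: the pyramid as a rule of thumb about suite shape.

suite_counts = {"unit": 420, "service": 80, "ui": 15}

def is_pyramid(counts: dict[str, int]) -> bool:
    """True when there are more unit tests than service tests,
    and more service tests than UI-driven tests."""
    return counts["unit"] > counts["service"] > counts["ui"]

print(is_pyramid(suite_counts))                            # → True
print(is_pyramid({"unit": 10, "service": 40, "ui": 200}))  # ice-cream cone → False
```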

3.  The difference between code coverage and acceptance test coverage (Test-Driven Development and Acceptance-Test-Driven Development): One thing the pyramid doesn't quite represent, as Fowler says, is that you may well use a tool like Jasmine to develop "unit tests" at the UI level, or a tool like ITKO LISA to develop "unit tests" at the service level.  Meanwhile, you may have behavior specified in a story's acceptance criteria that can only be tested by interrogating behavior that runs through all three layers.  Test architects and agile teams need to explicitly consider both dimensions:  what is the minimum set of unit and component tests we need to exercise every important line of code, but equally what tests are needed to regression test every acceptance criterion?  And then, of course, what is the smallest and least complex amount of code needed to cause both kinds of tests to pass?  This is a gnarly question and it has to be solved both at the macro level and case by case.  There is no "KISS" rule to send you hopping quickly through it.
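The two dimensions can be seen in miniature below. The three toy functions stand in for UI, service, and database layers (all invented for illustration): unit tests chase line and branch coverage within one layer, while the acceptance test covers a criterion that no single unit test exercises.

```python
# Illustrative sketch: code coverage vs. acceptance-criterion coverage.

def parse_quantity(raw: str) -> int:           # "UI" layer: input handling
    qty = int(raw)
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return qty

def price_order(qty: int) -> float:            # "service" layer: business rule
    return qty * 4.0 * (0.9 if qty >= 100 else 1.0)

def record_order(ledger: list, total: float):  # "database" layer: persistence
    ledger.append(total)

# Unit tests aim at branch coverage of one layer:
def test_unit_rejects_zero():
    try:
        parse_quantity("0")
        assert False, "expected ValueError"
    except ValueError:
        pass

# The acceptance test covers the criterion "a valid bulk order is priced
# with the discount and recorded," which spans all three layers:
def test_acceptance_bulk_order_recorded():
    ledger: list = []
    record_order(ledger, price_order(parse_quantity("100")))
    assert ledger == [360.0]  # 100 * 4.0 = 400, minus 10% = 360
```

Note that perfect branch coverage of each function would still not prove the criterion; that is why both dimensions have to be planned.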

4.  The Steel Thread.  A final useful concept for automation, which I have seen referenced by Lisa Crispin, is the concept of the "steel thread."  As a team, you want your automated tests to be delivered along with the code, not trailing after it.  This means you need to make choices.  Part of the discussion around every story is "what is the minimum automation we need to test this acceptance criterion."  You don't need to automate every edge case, every negative case, and every possible data permutation.  Exercise the code more thoroughly than you automate it, but leave enough automated so that if someone changes the code in a way that breaks it, you will know, and you can go back to check.
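In miniature, the steel thread looks like this. The shipping-cost rule is invented for illustration; the point is that one automated check proves the criterion, while the permutations stay manual.

```python
# Illustrative sketch of a "steel thread": automate the one check that
# proves the acceptance criterion; leave exhaustive permutations manual.

def shipping_cost(weight_kg: float, express: bool) -> float:
    base = 5.0 + 1.5 * weight_kg
    return base * 2 if express else base

# Steel thread: one automated test for the criterion "express shipping
# costs double the standard rate," so a breaking change is caught without
# automating every edge case and data permutation.
def test_express_doubles_standard_cost():
    assert shipping_cost(10.0, express=True) == 2 * shipping_cost(10.0, express=False)

# Edge cases (zero weight, negative weight, oversized parcels) were
# exercised manually during the story; only this thread runs in regression.
```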

So Where Does that Leave Us?
Proper testing turns out to include a number of different dimensions which don't all line up to each other:
  • Architectural layer you are testing:  UI, Service, Database
  • When and how frequently you should apply the test:  Test Before Check-in, Test Before Deployment, Test when Environments are Available
  • Type of requirement:  functional or non-functional, specialized or general
  • Type of testing:  Unit, Integration, Story Point to Point, System End to End
  • Automation types:  manual, manually-triggered, CI-triggered, continuously triggered, nightly
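Those dimensions don't collapse into one axis, but each planned test can be tagged along all of them at once. Here is one hypothetical way to record that; the field values and test names are examples, not a standard.

```python
# Illustrative sketch: a test plan entry tagged along all five dimensions.

from dataclasses import dataclass

@dataclass
class PlannedTest:
    name: str
    layer: str        # "ui" | "service" | "database"
    trigger: str      # "pre-checkin" | "ci" | "nightly" | "manual"
    requirement: str  # "functional" | "non-functional"
    kind: str         # "unit" | "integration" | "story" | "end-to-end"

plan = [
    PlannedTest("discount math", "service", "pre-checkin", "functional", "unit"),
    PlannedTest("checkout flow", "ui", "nightly", "functional", "end-to-end"),
    PlannedTest("page load < 2s", "ui", "manual", "non-functional", "story"),
]

# e.g. everything the fast pre-deployment gates must run:
fast_gate = [t.name for t in plan if t.trigger in ("pre-checkin", "ci")]
print(fast_gate)  # → ['discount math']
```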
Seriously, if you want to know about this with any depth at all, or you want to be an actual practitioner, you need to read some books by people who have actually experienced this: Lisa Crispin, Janet Gregory, Elizabeth Hendrickson, Mike Cohn, Ward Cunningham, Brian Marick and others have a lot of great stuff out there.

But in the meantime, if you're trying to ramp up an appropriate testing strategy for your newly-converting-to-agile program in a hurry, you really need to bring in people to help you who know agile, who know testing, and who know agile testing, and you need to keep your entire team focused on quality while they ramp up needed new skills.  This is probably a much bigger deal than you realized.

