
Agile Quality Tactics Explained in 7 Easy Steps

Are you new to the testing concept, or the "quality" concept, as I've learned to describe it?  I'm still learning, myself.  You may have seen some of my previous attempts at this, but I'm happier with this one.  Here's the framework I've devised most recently to help express how I think you need to design and implement agile quality tactics.  Your mileage will of course vary.  Experienced quality people, please jump in and help me where I've gone completely off the mark!

Step 1:  Know what to test.  This can be a metaphysical question, but our friends at ISO have come up with a good practical starting point, code-named SQuaRE:  Systems and Software Quality Requirements and Evaluation, aka ISO 25010.  It has 31 separate quality dimensions which roll up into seven "non-functional" categories and one "functional" category.

(Image source: http://a2build.com/architectedagile/Architected%20Agile.html?ISO25010.html)

There are many quibbles out there about whether the ISO categories are the right ones or not.  If you have a better set of categories, I say, use it, and tell us about it!  But for most of us testing novices, this is a real eye-opener.  Many of us may have thought "testing" was a synonym for "functional testing."  And here we find out functional testing is only the tip of the iceberg.  Holy value burn up!
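
For reference while you read the rest of this, here are those eight top-level ISO 25010 categories in one place.  The Python checklist form is purely my own convenience, not anything the standard prescribes:

```python
# The eight top-level product quality characteristics named in ISO/IEC 25010.
# Putting them in a list is just my convenience for building a checklist; the
# standard itself breaks each one into the sub-characteristics that make up
# the 31 dimensions discussed above.
ISO_25010_CATEGORIES = [
    "functional suitability",   # the one "functional" category
    "performance efficiency",   # the seven "non-functional" categories follow
    "compatibility",
    "usability",
    "reliability",
    "security",
    "maintainability",
    "portability",
]

# Example use: a starting point for the ROI plan described in Step 2.
quality_plan = {category: {"technique": None, "budget": None}
                for category in ISO_25010_CATEGORIES}
```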

Step 2:  Decide how much to spend on testing, and where to invest that money, based on expected ROI for each of the dimensions.  Each of the 31 ISO dimensions translates into a different level of ROI for your project, and those ratios are not fixed.  Security testing on a system protecting trillions of dollars has a bigger ROI than security testing on a site that lets you draw your favorite Pokemon character.  Performance matters more for real-time systems guiding brain surgery than for systems tracking glacial melting.  Here's a handy chart to help you think about it, showing only the high-level dimensions.  It is worth considering the 31 individual dimensions in detail, but this chart is huge enough as it is, and you get the point!

ROI-based Tactical Quality Plan

| Dimension | ROI Will Be Measured In Terms Of | Technique | Cost | Who Delivers This |
| --- | --- | --- | --- | --- |
| Portability | Cost of delayed deployment or failure due to browser incompatibility | Investment in setting up and running Continuous Integration tools, and setting up multiple platforms for testing (tools such as Hudson or Go) | Additional environment setup and maintenance | Project environment czar, usually a Dev person, and CI expert |
| Maintainability | Ability to leapfrog competitors quickly, lowest cost for basic BAU maintenance | Static code quality tests, dev practice dashboards, unit test coverage reports, etc. | Cost of initial supervision and enforcement, but may actually self-fund through decreased work in the long term | Architecture/Dev leadership |
| Security | Cost of a security breach to the application | Security testing | Varies with tools and techniques chosen | Risk/Dev leadership |
| Reliability | Cost of down time | Simulated down conditions | Varies with tools and techniques chosen | Risk/Arch/Dev/Ops leadership |
| Performance | Cost of slow response: operational inefficiencies or actual lives lost | Performance testing through tools like LoadRunner, Dynatrace, etc. | Varies with tools and techniques chosen | Performance leadership (usually architecture or testing) |
| Compatibility | Cost of virtualizing and/or changing the system or other related systems to allow for seamless interfaces | Architectural analysis and estimated costs to change or change partner code | Interface contracts, IT Operations SLAs, etc. | Business and Architectural Owners of cross-impacted systems |
| Usability | Cost of awkward patterns of use to operational users or online customers | Usability design and testing techniques | Varies with tools and techniques chosen | User Experience Owner for the project (may be called "design" or "analysis" or architecture) |
| Functional Suitability | Value of software continuing to deliver on business strategy over time | Functional tests enshrined in a reusable regression set, described in detail below | Varies with tools and techniques chosen | Functional testing group leadership |

What a big table!  I've indicated the type of cost and ROI you may incur; for home use you will want to put actual net present value figures into the boxes, and potentially even actual dollar values at different project milestones, to understand needed investment and expected timing of returns.
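
To make the "actual net present value figures" idea concrete, here is a minimal sketch of the arithmetic.  Every amount, the discount rate, and the choice of the security dimension are made up for illustration; substitute your own project's figures:

```python
# A minimal sketch of putting NPV figures behind one quality dimension.
# All amounts, the 10% discount rate, and the scenario are invented for
# illustration only.

def npv(cashflows, discount_rate):
    """Net present value of a list of (year, amount) cash flows."""
    return sum(amount / (1 + discount_rate) ** year for year, amount in cashflows)

# Hypothetical security-testing decision for a payments system: spend 80k now,
# avoid an expected (probability-weighted) 50k per year of breach costs for
# three years.
investment = [(0, -80_000)]
avoided_losses = [(1, 50_000), (2, 50_000), (3, 50_000)]

value = npv(investment + avoided_losses, discount_rate=0.10)
print(f"Expected NPV of security testing: {value:,.0f}")
# Roughly +44,000 here; a positive NPV argues for the investment.  Repeat the
# calculation per dimension and per milestone to see where the testing budget
# earns the most, and when the returns arrive.
```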

Step 3: Determine which of the people with P&L responsibility sponsoring the project owns quality decision making, and arrange for them to keep an eye on quality throughout the duration of the project.  I hope the scary table above has suggested to you that quality ownership of a project requires a point person who:
  • Knows how much money (or other desired benefits) the application is going to accrue for its host company, and on what basis.
  • Is available to evaluate the competence of the experts who will be implementing tests in each quality dimension, and the techniques they choose.
  • Is powerful and knowledgeable enough to demand "project health readouts" from some kind of dashboard on a regular basis (at least as regularly as the functional views into the software the "product owner" gets from attending standups and end-of-iteration showcases).  And these readouts need to cover each of these dimensions.
Quality ownership, in short, is a vital part of product ownership from a software development perspective, but it is difficult to imagine one person being expert in all of this at once.  You may need to designate a single "Quality Czar" for a large project, shared across workstreams, or a quality person for your whole company or division who can keep an eye on the dashboards of multiple projects at once.

Step 4:  Decide how to test functionality.  "Aha!" you say.  "Now we're in familiar territory."  Well, yes and no.  Functional testing itself has a number of dimensions, including:
  • How are you measuring value accrued through deployment, as opposed to "absence of defects"?  As Jim Highsmith frequently points out, software delivers value.  It doesn't merely deliver functionality that meets some minimally defined need.  There is a feedback process in every software development endeavor in which decisions can be made to change the way the software works, to add additional value at equal or lower cost than doing what the product people initially suggested.  How do you plan to track the upside of functionality delivered, as well as the things not delivered or the things broken?
  • What is your test data strategy?  (Do you just build out a masked copy of last quarter's production data and tinker with it, or do you script synthetic data to allow for rigorous control over the tests, even at the developer level?  Who curates the data, and how do they do it?)
  • When do you want to develop the functional test scripts, relative to building out acceptance criteria and building the actual software that delivers project value?  (The default in waterfall is that you start immediately after development, at best.  Agile allows you to have testers work with analysts to build automated scripts immediately prior to and during the software development cycle, working directly with recorded software acceptance criteria.)
  • When do you want to run the scripted functional regression tests, relative to software development?  (Your default is likely to be that you first exercise the scripts when development is finished, and the code has been deployed to a "quality" environment which is protected from the developers.  Behavior-Driven Development tools such as Fit, FitNesse, LISA, Concordion, and Cucumber allow you to run the tests, complete with temporary deployment of just the data needed for each test, as part of the build cycle, pushing almost all tester activity to the front of the SDLC instead of the end; there's a sketch of this "just the data each test needs" idea after this list.)
  • When do you want to exercise the functional regression tests, and how often?  If your testing is manual, that requires a period of code freeze at the end of each software development sprint to allow the testing to occur.  If you have automated BDD testing, at the other end of the automation spectrum, you can be testing and developing continuously, as frequently as with every check-in.
  • How much scripted testing do you want to do manually, on an ongoing basis?  You may not start your agile project with a set of veteran functional test automation experts.  You may need to build a plan for gradually phasing in more automation as you go, and as your team picks up the skills.  Some things may always need to be tested manually, because automating them would cost more than all of the manual test runs that area of the software will need over the lifetime of the product.
  • How much exploratory testing do you want to do without a script, on an ongoing basis?  In a world of reliable automation, where all high-value business functionality is covered by an automated functional test, you have the luxury of deploying your test analysts primarily to do creative "break the code" efforts, which may lead to additional automated tests.  Contrary to what you might think, exploratory testing should increase, not decrease, with an agile project that uses a lot of test automation.
  • What gets tested by professional testers, and what constitutes "user acceptance" by actual customers or operational users?  Some organizations define "user acceptance testing" as having a group of professional testers paid by the "business owner" of the software re-run the scripted tests built by the "technical owner" of the software.  For both waterfall and agile, this seems unnecessary, if those scripts are high quality.  True "user acceptance" testing should be done by...users.
  • Operational metrics for ongoing analysis.  How much metrics gathering do you want to build into the software itself, to be run operationally and to provide ongoing usability and frequency-of-use data to drive subsequent versions?  Waterfall and agile projects alike may want to take advantage of the "feature toggles" concept coming out of the Continuous Delivery movement, where different versions of the software can be toggled on or off through environment variables, to compare "A" and "B" usage scenarios.
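
The BDD tools named above each have their own fixture and script formats, so rather than single one out, here is a neutral sketch of the underlying idea from the test-data and regression-test bullets: an automated acceptance test that provisions only the synthetic data it needs, runs as part of the build, and cleans up after itself.  The toy account repository and the transfer rules are hypothetical stand-ins for your real system under test, and the pytest style is my choice of illustration, not a recommendation of any particular tool.

```python
# A neutral sketch (not tied to Fit, FitNesse, or Cucumber) of an automated
# acceptance test that provisions just the synthetic data it needs and cleans
# up after itself.  The in-memory "repository" and the transfer rules are
# hypothetical stand-ins for a real system under test.
import pytest


class AccountRepository:
    """Toy stand-in for whatever persistence the real application uses."""

    def __init__(self):
        self._balances = {}

    def create_account(self, account_id, balance):
        self._balances[account_id] = balance

    def balance(self, account_id):
        return self._balances[account_id]

    def transfer(self, source, target, amount):
        if self._balances[source] < amount:
            raise ValueError("insufficient funds")
        self._balances[source] -= amount
        self._balances[target] += amount


@pytest.fixture
def accounts():
    # Each test gets only the data it needs, created fresh; nothing depends
    # on a shared, masked copy of production data.
    repo = AccountRepository()
    repo.create_account("checking", 100)
    repo.create_account("savings", 0)
    yield repo
    # Teardown would remove the temporary data in a real environment.


def test_transfer_moves_funds(accounts):
    # Acceptance criterion: a transfer debits the source and credits the target.
    accounts.transfer("checking", "savings", 40)
    assert accounts.balance("checking") == 60
    assert accounts.balance("savings") == 40


def test_transfer_rejects_overdraft(accounts):
    # Acceptance criterion: transfers exceeding the balance are refused.
    with pytest.raises(ValueError):
        accounts.transfer("checking", "savings", 500)
```

Because tests like these carry their own data, they can run on every check-in as part of the build, which is what pushes tester activity to the front of the SDLC.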
Step 5:  Build your quality tactics into your team's agreed-upon (and, where appropriate, written and posted) software development life cycle artifacts.  Your decisions about testing impact some of the standard operational rules your team makes for itself at the beginning of the project:
  • Your staffing model.  Traditional teams use a rule of thumb about the ratio of developers to testers and analysts (say, 2 developers per tester).  A full quality strategy at the departmental, program, or project level makes that type of staffing rule of thumb too crude.  Who does test case design and scripting?  Who does automation?  Who does performance and security testing?  Who plans for compatibility and usability?  You need to know your quality requirements before you can plan whom to staff on your project.  If you haven't thought all these dimensions through, you're staffing in the dark, and unpleasant surprises await you.
  • Roles and responsibilities of team members.  On a related note, people's roles on the team may not match what they're used to.  A functional tester who merely exercises scripts may be a person you don't need any more.  An analyst who designs scripts is someone you need very much.  People who automate test fixtures in addition to developing business functionality need to be found somewhere: is that a development responsibility, or do the functional test people own that set of automation tasks?  What is the responsibility of your overall architect or security person, and what fraction of their time do you need?
  • Definition of done for stories.  In a multi-dimensional quality world, your software may not literally be "done done" until you've run a battery of performance and security tests against your code base.  In many environments, you may find that those performance or security tests can't be run for every iteration.  As a team, you will want to understand what the constraints are on what can be tested, and create a team "definition of done" which includes all testing actually under the team's own control, and which excludes steps that require help from outside organizations.
  • Story development life cycle.  The "life of a story" for your team should account for all of the tests that will be run, and indicate when they will occur, relative to that story.
  • Check-in, build, and automated deployment schedule.  The need to test more than just functionality may drive you to a more aggressive continuous integration/continuous deployment strategy than you might have had otherwise.
  • Extended story (or "narrative") template.  For small, collocated teams, stories may be nothing more than cards with jottings on them to serve as the occasion for conversations and test development.  For larger teams, teams within large programs, and teams which are geographically dispersed, the team may develop a template for what needs to be "extended" in the story, beyond the basic "as a...I need to...so that" (or alternate) story format.  In a regulated environment, the written form of the story may require a reference to a central "NFR testing" document which speaks to the non-functional testing the story requires from the development, business resilience, or security teams, in addition to the functional acceptance criteria.  (A sketch of one possible extended-story shape follows this list.)
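
Purely as an illustration of that last bullet, here is one shape an extended story record might take.  Every field name, the story itself, and the "NFR-TEST-DOC" reference are hypothetical; your team's template will look different:

```python
# Hypothetical extended-story record; none of these field names or references
# come from a standard.  They only illustrate what "extending" a story beyond
# the basic card might capture.
extended_story = {
    "story": ("As a claims adjuster, I need to search closed claims "
              "so that I can answer customer questions quickly."),
    "acceptance_criteria": [
        "Search returns claims closed within the last 7 years",
        "Results exclude claims the adjuster is not authorized to view",
    ],
    "nfr_references": {
        # Pointers into the central "NFR testing" document mentioned above.
        "performance": "NFR-TEST-DOC, response-time section",
        "security": "NFR-TEST-DOC, role-based access section",
    },
    "teams_involved": ["development", "business resilience", "security"],
}
```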
Step 6:  You need a project health dashboard.  There are a lot of metrics your automated tests can collect for you, in addition to the financial data you have available, and the data your card wall can give you about progress against goal.  You do not want to be sifting through hundreds of disparate documents.

For a single collocated project, it may be enough to set up a lava lamp to glow red when the build fails.  For a larger endeavor, however, like governance of a whole business, a line of business, a program, or a department, you will want to think through your dashboarding strategy, and plan to be able to plug each project into it, using all of the measures that feed the financial and progress data, along with all 31 dimensions of quality.
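
For the single-project end of that spectrum, the lava lamp really is only a few lines of glue.  In the sketch below, the status URL and the "result" field are placeholders rather than any particular CI product's API; point it at whatever machine-readable build status your own tool exposes:

```python
# A minimal sketch of the build-status lava lamp.  The URL and the JSON field
# checked here are placeholders, not a specific CI server's real API.
import json
import time
import urllib.request

STATUS_URL = "http://ci.example.com/last-build/status.json"  # placeholder


def build_is_green():
    """Fetch the latest build status and report whether it passed."""
    with urllib.request.urlopen(STATUS_URL) as response:
        status = json.load(response)
    return status.get("result") == "SUCCESS"


def set_lamp(color):
    """Stand-in for whatever drives the physical lamp or dashboard tile."""
    print(f"Lamp is now {color}")


if __name__ == "__main__":
    while True:
        set_lamp("green" if build_is_green() else "red")
        time.sleep(60)
```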

Step 7:  You need to pay attention.  All of this effort is meaningless if you're not running a healthy project in the first place.  In a world of lies, damn lies, and statistics, no dashboard is going to substitute for proper respect, two-way communication, and support for the teams on the ground.

Counter to what you might expect, if the data is presented in a helpful way, people love working in an environment where things about their performance are measured automatically, and they can use the feedback to improve their game.  Individuals and teams can treat their job as one big video game, and keep trying to be the one to get the high score.  That concept alone (plus the seven steps, plus an army of experienced testers) is the key to successful agile quality practices.
