***DANGER LONG POST, SORRY!***
Agilists are fond of catchy acronyms like INVEST (stories should be independent, negotiable, valuable, estimable, small, and testable), SMART (acceptance criteria should be specific, measurable, attainable, relevant, and time-bound), and especially KISS (keep it simple, stupid).
I've spent a good part of the last few months trying to get my arms around automated agile testing practices, and in the end I have to admit that, despite the "why do you even need testers?" mantra you sometimes hear, testing cannot be "kept simple" for an agile project of any complexity. You really need to think it through carefully, and there are a lot of moving pieces that don't reduce to a nice 4x3 matrix.
Not to destroy the suspense, but by the end of this post I am going to advocate that you hire an experienced, full-time person to set up your quality strategy and lead its execution. But here is a quick SparkNotes presentation of the dimensions you should keep in mind as you bring in the right people to help. Test architecture is the Ginger Rogers of the agile team. You may recall that Ginger "did everything Fred Astaire did, only backwards and in heels." You should think of testers as doing everything developers do, only in reverse and requiring significant "golden data" setup.
[Image: Ginger Rogers and Fred Astaire, from http://www.fanpop.com/clubs/ginger-rogers/images/14574687/title/ginger-rogers-fred-astaire-photo]
General Things To Know About The Role of Quality In Your Agile Team
1. You may want to evaluate ROI for "good quality strategy setup" in terms of what you spend versus cost of poor quality (COPQ). If you search the internet, you will find that COPQ can only be explained through use of a diagram which includes an iceberg. I think it's a "Six Sigma" thing. Here's the nicest one I found.
[Image: cost-of-poor-quality iceberg diagram by Javier Rubio, posted at http://elsmar.com/Forums/showthread.php?t=17616]
2. Even though I think the mandatory iceberg thing is funny, there is a real truth here. There are myriad ways poor quality can cut into your profits and grow your losses. You need to understand which of those are high risk for you, and you need to build a quality strategy to address them.
3. Testing is a vocation. There's a lot of talk out there about how "everyone should do everything on an agile team," and that's good. But even though most people I know COULD do everything, they don't WANT to. It's not just about ability, it's about passion. You need people on your team who are passionate about getting the quality right. Those people develop a distinct set of skills: some innate, some learned, and some earned over time through hard experience. There are far more moving parts on your test team than you will ever fully understand. Do not underestimate the difference you can make by hiring people who WANT to enforce good testing practices on your teams.
4. Most professional testers have experience with waterfall practices, not agile. A good waterfall tester will be a good agile tester--hang onto them. But you need to spend some time explaining how agile teams do things differently than waterfall teams, or you will make them very nervous. Good testers care passionately about quality, and they don't want your agile giddiness to force them to miss something, which is what they will fear if you don't explain yourself. You should bring in a specifically agile testing coach to clarify agile testing practices and to explain the basic vocabulary around things like "unit test," "component test," "story test," and so on.
5. You must automate more than one kind of test for agile, in most cases, and using a different automation strategy than you used before. An agilist is likely to think "automated testing" means "automated unit tests." A waterfall person is likely to think it means "automated functional regression tests." A pragmatic person will look at the whole picture and feel an urgent need to hire a test architect who will explain it all and enforce some standards. This is because:
6. Your automated testing code base should be architected as carefully as (or more carefully than) your functional code base. When you read about agile testing on the internet, you will find a lot of rhetoric about "team spirit," and about how talking with testers helps developers get done faster because they understand what they're doing. This is true, but it is not enough! Your automated tests are likely to be around a lot longer than your hot-shot, chatty coders. If your organization has separate teams for software maintenance and for projects, you should be thinking about the quality of your automated test bed, not solely about how much the team talked together during development.
And seriously, even if your coders do a combination of development and maintenance, are you expecting them to stay with you for the next 25 years or so and remember vividly what they talked about? Some corporate software is around that long, or longer. A small sketch of what "architecting" test code can mean follows.
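To make point 6 concrete, here is a minimal sketch (plain JUnit 4, with a hypothetical Order class and OrderBuilder invented purely for illustration) of treating test code as architecture: the builder owns the "golden data" setup in one place, so dozens of tests don't each hard-code their own fixture and rot independently.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderPricingTest {

    /** Hypothetical builder that owns the canonical "golden" test data in one place. */
    static class OrderBuilder {
        private String customerTier = "STANDARD";
        private int quantity = 1;

        OrderBuilder forPremiumCustomer() { this.customerTier = "PREMIUM"; return this; }
        OrderBuilder withQuantity(int quantity) { this.quantity = quantity; return this; }
        Order build() { return new Order(customerTier, quantity); }
    }

    /** Hypothetical stand-in for the production class under test. */
    static class Order {
        private final String customerTier;
        private final int quantity;

        Order(String customerTier, int quantity) {
            this.customerTier = customerTier;
            this.quantity = quantity;
        }

        /** Premium customers get 10% off a notional $100 unit price. */
        double total() {
            double base = 100.0 * quantity;
            return "PREMIUM".equals(customerTier) ? base * 0.9 : base;
        }
    }

    @Test
    public void premiumCustomerGetsDiscount() {
        // The test reads like a sentence because the fixture logic lives in the builder.
        Order order = new OrderBuilder().forPremiumCustomer().withQuantity(2).build();
        assertEquals(180.0, order.total(), 0.001);
    }
}
```

The design choice being illustrated is the same one you would make in production code: centralize setup so that when the data model changes in year five, you change the builder, not five hundred tests.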
So what should your agile test architecture look like? Think of this as a mapping from the classic "V-Model" for quality, which most of the world's testers know and understand, to the "Agile Testing Quadrants," which cover the same ground from a different perspective.
The V-Model
The V-Model is the one most modern software testers are familiar with. So if you are working with a team that is moving from waterfall to agile software development, you need to be aware of the V-model, and you need to be able to discuss agile testing in the context of what people already know. The V-model says that software development occurs in discrete phases, and that testers should be verifying that nothing bad happened at any phase.
So what are the key roles and responsibilities for the V-model?
- Business Analysts: Record requirements, build models of how the system should work in the future, get business users to agree the team is on the right track.
- Systems Analysts: Translate business requirements into system requirements. The system requirements are the basis of future "systems testing."
- Developers: In the V-model, developers may be encouraged to implement reusable tests at the unit level and the basic integration level (sometimes called the "component" level). In many environments, special technicians are also brought in to test the non-functional requirements, which include environment testing, performance testing, security testing, and the like. Developers will typically create some test data to prove out their code, but the system will not be shipped to testing with that data available.
- Technical/System/Quality Assurance Testers: these testers, generally employed by the information technology organization, perform "system" tests. This involves building out a rigorous set of test data (static data, transactional data, and combinations of input data that will exercise all systems options), defining system test scripts, and running them. The system tests sit somewhere between the component tests the developers build and the user acceptance tests built by the user acceptance people.
- User Acceptance Testers/Super Users: UAT testers, who are sometimes actual operational people brought in at the end of a development cycle to do some manual testing and sometimes professional testers, perform "user acceptance testing." This involves actually using the system in a controlled environment. In a well-functioning organization, UAT should be a short period of manual "exploratory testing," in which business people ensure that their highest-value needs are met. It usually happens after everything else, so if users don't want to accept what they see, they have to wait a while to get things fixed.
- Operations: in the V-model, operations people do "validation reporting" to show that the system is working on an ongoing basis. To do this they typically get some kind of automatically generated report every day that lets them look for variance in standard metrics like "number of widgets shipped."
- Automated testers: the V-model doesn't say so explicitly, but "automated testers" in waterfall are people who build a set of end-to-end functional tests after the software is released to production, while people still remember how it was supposed to work. These end-to-end functional tests are also called "regression tests."
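As a hedged illustration of what that post-release regression suite typically contains, here is a sketch of a single UI-driven check written with Selenium WebDriver and JUnit 4. The URL, element ids, and confirmation text are hypothetical placeholders, not a real application.

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CheckoutRegressionTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void customerCanCompleteCheckout() {
        driver.get("http://test-env.example.com/store");      // hypothetical test environment
        driver.findElement(By.id("add-to-cart")).click();      // hypothetical element ids
        driver.findElement(By.id("checkout")).click();
        String confirmation = driver.findElement(By.id("confirmation-message")).getText();
        assertTrue(confirmation.contains("Thank you for your order"));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}
```

Suites like this are valuable but slow and brittle, which is exactly why the agile quadrants and the test pyramid below push so much of the automation further down the stack.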
The Agile Testing Quadrants
Most agile testers refer to Brian Marick's "agile testing quadrants" when they describe the theory behind their practices. Reassuringly, there is a lot of commonality between the V-model and the quadrants. Most activities are the same (with some notable exceptions), but the timing is different.
[Image: the agile testing quadrants, from http://qualitybakedin.blogspot.com/2012/03/agile-testing-quadrants.html, citing Brian Marick, Lisa Crispin, and Janet Gregory]
Here are some standard(ish) agile testing roles and responsibilities:
- Business and Systems Analysts: build written models of how the system should work in the future. Build ongoing communications channels from business people to the rest of the team. And, most importantly from a testing perspective, ensure "stories" are created which divide the system up into testable vertical slices of functionality that can be tested functionally without the need for intervening and separate "systems" tests. Each story has "acceptance criteria" which fit into a larger narrative around potential uses within scenarios. This is key: agile stories marry systems requirements to functional requirements, and enable the team to do functional testing to ensure system behavior.
- Everyone together: before beginning development on any particular story, and as needed while the story is developed, discuss how the story acceptance criteria should be tested from a functional perspective, and from an architectural perspective, so everyone is clear on their role.
- Determine how to test any GUI widgets--have dev build an automated unit test using a tool like Jasmine? Have testers manually verify? Build an automated test using a UI-friendly tool like Selenium or Twist?
- Determine how to test the system's functional behavior as laid out in the acceptance criteria, and what additional edge testing or negative testing might need to be done beyond what is specified explicitly in the written story. Which tests should the team drive through the user interface, through a combination of manual and automated functional tests? Which criteria should the team test automatically at the service level or below, using a tool like Fit, FitNesse, Concordion, or JBehave? (A small sketch of one such below-the-UI story test follows this list.)
- Consider system impact of tests on the overall test architecture. How will new tests affect the existing UI-driven test scenario library? The service-driven library? The integration test bed used at check-in? The automated functional/system test set run continuously or daily?
- Developers: Through TDD, developers implement reusable tests at the unit level and the basic integration level (sometimes called the "component" level). Code that is prone to change, or is complex to test at the unit or component level, gets automated tests that run at every check-in. These tests are the ones identified in quadrants 1 and 4; not coincidentally, the matrix calls them "Technology Facing." Additionally, if any instrumentation is needed to support automated tests being scripted by the testers, developers build those fixtures. In many environments, special technicians are still brought in to test the non-functional requirements, which include environment testing, performance testing, security testing, and the like. Developers generally handle issues around environments by automating the deployment process as much as possible using a continuous integration tool like Jenkins, Go, CruiseControl, or HP ALM.
- Technical/System/Quality Assurance Testers plus User Acceptance Testers/Super Users: For a new agile team, we often create a small team of experienced testers drawn from the combined IT and business organizations. This group focuses on the business-facing user acceptance tests shown in quadrants 2 and 3. They build the functional test scripts needed for UI-driven scenarios and system-driven scenarios, supported by tooling done in the development group. Automation is applied to scenarios with an eye to plugging local story tests into the end-to-end test bed run nightly or continuously. GUI-based testing is done manually, with automation created for functions that are likely to change. Optimally, the automated tests cover each "acceptance criterion" in each story, to ensure that the system does everything the business wants it to do. Story-based tests in agile are called "functional tests," "story tests," "functional acceptance tests," or "examples," depending on who you are reading.
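For example, an acceptance criterion like "premium customers get free shipping on orders over $50" could be automated below the UI as a plain JUnit test against the service layer. This is only a sketch: ShippingService and its quote() method are hypothetical stand-ins invented here, and a real team might instead express the same criterion in FitNesse or JBehave with a developer-built fixture underneath.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FreeShippingStoryTest {

    /** Hypothetical service-layer class the story exercises. */
    static class ShippingService {
        /** Acceptance criterion: premium customers ship free on orders over $50. */
        double quote(boolean premiumCustomer, double orderTotal) {
            if (premiumCustomer && orderTotal > 50.0) {
                return 0.0;
            }
            return 7.95; // notional flat shipping rate
        }
    }

    private final ShippingService service = new ShippingService();

    @Test
    public void premiumCustomerOverFiftyDollarsShipsFree() {
        assertEquals(0.0, service.quote(true, 60.0), 0.001);
    }

    @Test
    public void standardCustomerStillPaysShipping() {
        assertEquals(7.95, service.quote(false, 60.0), 0.001);
    }
}
```

The point is that the test is phrased in the story's business language but runs in milliseconds at the service level, so it can join the check-in or nightly regression set without dragging a browser along.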
Things the V-model leaves out:
- Usability and Alpha/Beta testing. Especially as agile development has incorporated the concept of "continuous delivery," it has become possible to simultaneously deploy multiple versions of the same production code base to different audiences to test usability. Metrics on the screens capture efficiency, where people navigate, etc. The ability to test the software and fine-tune its behavior through variables even after deployment provides a strong contrast to what is available in waterfall, where even basic misunderstandings of functionality may not come to light until the user acceptance period, and by then, it may be too late to change the software at a reasonable cost.
Things the Quadrant leaves out:
- Ongoing operational testing (validation reporting). Quite a lot of the agile literature assumes that you are delivering web software in an industry that is constantly in motion, working on a code base that never sits still, so operational reporting won't be an ongoing need. However, some agile shops will need this type of reporting, so I think it would be worth adding it to the quadrants, perhaps as a deliverable, if not as a type of test.
- Pre-deployment "system testing" and post-deployment "regression testing." As detailed in the roles and responsibilities above, system testing is combined with regression testing in a planned way, from a business perspective. Developers retain responsibility for intrinsic code quality at the unit and component level, and they provide the tooling for tests, mocks, stubs, and the virtualizations needed for incomplete end-to-end scenarios. Testers build out pieces of the end-to-end regression suite from the beginning. The regression set should be planned ahead at a high level and evolved in detail through every iteration, with as much care as the code base it tests.
How Much Should You Automate?
One common misconception people have is that agile teams are required to automate everything, to reach 100% code coverage as well as 100% acceptance criteria coverage of 100% of the possible test scenarios at all architecture levels. There are four important ways in which this is not true.
1. Are you kidding me? It isn't possible to do this, even if you wanted to. Good agile testers utilize the same rules about risk-based testing, test case matrices to control test permutations, and common sense that they used when they did waterfall system testing. And after an agile team determines what needs to be tested and what doesn't, it chooses an even smaller subset to automate. But how?
2. The Test Pyramid. Mike Cohn developed the concept of the "Test Automation Pyramid," and Martin Fowler has used it recently in his bliki posts about automated testing:
[Image: the test automation pyramid, from http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid]
The pyramid represents the relative amount of automation teams should build at each level of the architecture. Its main point is that teams should generate far more unit tests, through test driven design (TDD), than service-level tests or tests driven through the UI. This is a helpful concept, and it powerfully illustrates the difference between the way teams tackle automation on an agile project and the way they typically do on a waterfall one.
As Fowler says, the automated tests coming out of a waterfall project typically have few unit-level tests and probably too many functional tests driven through the UI, making them look more like an ice-cream cone.
3. The difference between code coverage and acceptance test coverage (Test Driven Design versus Acceptance Test Driven Design): One thing the pyramid doesn't quite represent, as Fowler notes, is that you may well use a tool like Jasmine to develop "unit tests" at the UI level, or a tool like iTKO LISA to develop "unit tests" at the service level. Meanwhile, you may have behavior specified in a story's acceptance criteria that can only be tested by interrogating behavior that runs through all three layers. Test architects and agile teams need to consider both dimensions explicitly: what is the minimum set of unit and component tests we need to exercise every important line of code, and equally, what tests are needed to regression-test every acceptance criterion? And then, of course, what is the smallest and least complex amount of code needed to make both kinds of tests pass? This is a gnarly question, and it has to be solved both at the macro level and case by case. There is no "KISS" rule to send you hopping quickly through it.
4. The Steel Thread. A final useful automation concept, which I have seen referenced by Lisa Crispin, is the "steel thread." As a team, you want your automated tests to be delivered along with the code, not trailing after it. This means you need to make choices. Part of the discussion around every story is "what is the minimum automation we need to test this acceptance criterion?" You don't need to automate every edge case, every negative case, and every possible data permutation. Exercise the code more thoroughly than you automate it, but leave enough automated so that if someone changes the code in a way that breaks it, you will know, and you can go back to check. (A small sketch of this idea follows.)
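Here is a hedged illustration of the steel-thread idea in plain JUnit 4. DiscountCalculator and its coupon rule are hypothetical examples invented for this sketch; the point is the ratio of what gets automated to what is deliberately left to exploratory testing.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountSteelThreadTest {

    /** Hypothetical production class under test. */
    static class DiscountCalculator {
        double apply(double orderTotal, String couponCode) {
            return "SAVE10".equals(couponCode) ? orderTotal * 0.9 : orderTotal;
        }
    }

    // The automated steel thread: if someone later breaks coupon handling,
    // this single happy-path check fails and tells the team to go look.
    @Test
    public void validCouponReducesTheTotal() {
        assertEquals(90.0, new DiscountCalculator().apply(100.0, "SAVE10"), 0.001);
    }

    // Deliberately NOT automated here: expired coupons, mixed-case codes,
    // negative totals, every data permutation. Those were exercised manually
    // while the story was built; automating them all would cost more than
    // the risk they carry.
}
```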
So Where Does that Leave Us?
Proper testing turns out to include a number of different dimensions which don't all line up with each other:
- Architectural layer you are testing: UI, Service, Database
- When and how frequently you should apply the test: Test Before Check-in, Test Before Deployment, Test when Environments are Available
- Type of requirement: functional or non-functional, specialized or general
- Type of testing: Unit, Integration, Story Point to Point, System End to End
- Automation types: manual, manually-triggered, CI-triggered, continuously triggered, nightly
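One practical way to make the timing-and-frequency dimension visible in the code itself is to tag tests by the cadence at which they run, so the CI server can run the fast tests on every check-in and save the slow, UI-driven suites for the nightly build. A minimal sketch, assuming JUnit 4's categories feature; the marker interfaces and the tests themselves are hypothetical:

```java
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class TaggedTestsExample {

    /** Marker interfaces used only to tag tests by run cadence. */
    public interface CheckInTests {}
    public interface NightlyTests {}

    @Category(CheckInTests.class)
    @Test
    public void fastUnitLevelCheck() {
        // runs on every check-in; takes milliseconds
    }

    @Category(NightlyTests.class)
    @Test
    public void slowEndToEndScenario() {
        // driven through the UI; takes minutes; scheduled nightly
    }
}
```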
But in the meantime, if you're trying to ramp up an appropriate testing strategy for your newly-converting-to-agile program in a hurry, you really need to bring in people to help you who know agile, who know testing, and who know agile testing, and you need to keep your entire team focused on quality while they ramp up the new skills they need. This is probably a much bigger deal than you realized.