Did you wake up this morning and think to yourself, "gee, I think I'd like to try some agile today?" Allow me to leap between you and your coffee to announce that your adoption of agile requires more than just management techniques (stand-ups, burn-downs, and card walls). You are going to need a framework for running automated tests on a scheduled and ad-hoc basis, and some well-engineered tests within that framework, if you are hoping to deliver your company's software faster, better, and more transparently for more than a couple of weeks.
What, you say? Yes! You must act now to avoid the following:
- Bad Quality. For starters: how were you planning to know if your software is okay after the first two weeks? I'm already worried for you if your plan is to follow a book of some kind and simply have "a product owner" do some "inspection" and say "tally ho."
- Gradual Slow Down. But worse, let's say that works, and your new web site or whatnot is a huge success. Now you may be lulled into a false sense of security and get fairly far down the road before you discover the problem: if your regression tests are manual, you won't be able to keep up a cycle of two-week iterations deploying reliably to production for very long before your team starts to buckle under the pressure of regression testing the whole code base for each release. First the product owner will ask others for help. Then the whole team will be pulled in. Then you'll stop developing on day 8, so the whole team can pound on the system. Then on day 7, and so on. Eventually you develop for one day and regression test for nine.
- Sudden Unpredictable Screaming. If, like one project manager I read about recently, you think that all you need to do is some "exploratory testing" with each release, you are living in a scary dream world. In my world, someone eventually says "how did this break?" and that person, often an authority figure holding the checkbook, expects someone to have an answer to that question.
- Inexorable Stealth Reversion to Waterfall. Last but not least, because I have literally seen this a dozen times if not more: if you are the kind of team that doesn't deploy after every sprint, you may decide at some point to wait until the next iteration to do the testing, so you can show progress on your burn-up chart while the testers struggle to keep up. Eventually, your testers will tell you that you need to do a big, long regression test at the end of all the sprints so you can make sure everything is okay before you release to production. Sort of like the big long test period you used to do. In the "waterfall" SDLC. In fact, maybe we should just plan it that way from the get-go. Six months later, agile is gone.
To help you avoid all that, in this post I will:
- Point you to some bona fide authorities where you can find out more (please do look into the work of Lisa Crispin and Elisabeth Hendrickson, always linked to the right of my own posts on this blog).
- Tell you what I, an amateur, have been able to find out on the topic.
Here is what I have found out so far:
- Test automation is software. You should think of the agile manifesto's "Working Software" as two interrelated working software implementations: the software product itself, and the automated test bed that embodies its requirements and validates its behavior. That test bed can be defined broadly, to include the continuous deployment environment within which tests of various kinds are embedded, or narrowly, as a set of tests you can run at will from some vendor system--but either way, you need some tests that a machine can run quickly if you are going to keep up.
- Corollary: "test automators" are "developers." Do not scare yourself into thinking you need to find and hire specialists who know how to do "test automation." Such people are typically scarce and very expensive, and you already have a whole bunch of people working for you who can do test automation. There are different schools of thought about whether you should code your automated tests in the same language as your main code base (for convenience) or in a different language (for additional rigor), but no matter how you slice it, your test automators can be recruited and trained at low cost from your existing pool of software developers. Of course a developer should not design and execute the tests against their own code--you may earmark a certain set of developers just for testing, or rotate responsibilities--but my main point is that the "test developers" are probably already on your payroll.
- Testers are still your subject matter experts on what to test and how. People on agile teams formerly known as "testers" are gradually starting to call themselves "Quality Analysts," not just because it sounds better, but because it is a better description of the job. There is an entire body of knowledge around what kinds of testing need to be done, when, and how, and there are sub-bodies of knowledge around how to generate, store, and source-code-control the transactional data and driver data for tests (a short sketch of that practice appears after the list of test types below). Testers can and should have the skill sets to help you lay out and implement a quality strategy for the whole program: where investments in quality can be made all along the line, how to get the best ROI from those investments, and so on.
- Automation frees testers up so they spend more time designing the right tests and less time mindlessly testing things that are easy to think about. Exploratory testing doesn't go away in agile. But the concept of a product owner "inspecting" the code to make sure it's okay should be built on the assumption that the code they are seeing is wrapped in a protective layer of automated tests that cover the basics.
- There are different kinds of tests. You probably knew that, but it's just as true in agile. Generally speaking, in some schools of thought, agile posits that you should have the following (the first two layers are sketched in code just after this list):
- significant "unit test coverage" for individual code components (which can be run as a set to find problems),
- a similarly significant set of tests run below the level of the presentation layer which rigorously test the software from a functional perspective,
- a smaller number of automated tests to do "smoke" or "integration" testing automatically (actually running the software through its paces to make sure all the pieces are integrated correctly),
- and a slightly smaller group of end-to-end functional tests that actually put the software through some detailed usage scenarios driven through the presentation layer.
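To make those layers concrete, here is a minimal sketch of the two biggest ones, assuming a Python code base and the pytest framework; `price_order` and `OrderService` are invented stand-ins for your own production code, not anything prescribed by the authorities above.

```python
# pyramid_sketch.py -- a hypothetical sketch of the two biggest test
# layers. price_order and OrderService are stand-ins for your own code.
import pytest


def price_order(quantity, unit_price, discount=0.0):
    """Production logic under test (stand-in example)."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return quantity * unit_price * (1.0 - discount)


class OrderService:
    """Stand-in for the service layer just below the presentation layer."""

    def __init__(self):
        self.orders = {}

    def place_order(self, order_id, quantity, unit_price, discount=0.0):
        total = price_order(quantity, unit_price, discount)
        self.orders[order_id] = total
        return total


# Layer 1: unit tests -- one component, no I/O, fast enough to run on
# every commit, and runnable "as a set to find problems."
def test_price_order_applies_discount():
    assert price_order(10, 2.00, discount=0.25) == 15.00


def test_price_order_rejects_negative_quantity():
    with pytest.raises(ValueError):
        price_order(-1, 2.00)


# Layer 2: functional tests below the presentation layer -- exercise
# the service's behavior directly instead of clicking through a UI.
def test_placing_an_order_records_its_total():
    service = OrderService()
    assert service.place_order("A1", quantity=10, unit_price=2.00) == 20.00
    assert service.orders["A1"] == 20.00
```

The smoke/integration and end-to-end layers sit on top of these and are deliberately fewer in number: tests that drive a deployed system through its presentation layer (with a browser-automation tool such as Selenium, for example) are slower and more brittle than the tests sketched here, so you want just enough of them to prove the whole assembly hangs together.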
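And on the earlier point about generating, storing, and source-code-controlling test data: one common pattern is to keep driver data in version control right next to the tests and load it through fixtures, so a data change is reviewed and diffed like any other code change. Another minimal sketch, again assuming pytest; the file name and record schema are invented for illustration.

```python
# test_customers.py -- a hypothetical sketch of source-controlled
# driver data; testdata/customers.json is an invented file checked in
# alongside the tests.
import json
import pathlib

import pytest

DATA_FILE = pathlib.Path(__file__).parent / "testdata" / "customers.json"


@pytest.fixture
def customers():
    # The driver data lives in version control, so every change to it
    # shows up in code review just like a change to the tests themselves.
    return json.loads(DATA_FILE.read_text())


def test_every_customer_record_has_an_id(customers):
    assert all("id" in record for record in customers)
```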
I just wanted to make sure that, as you proceed to your morning newspaper, you might be motivated to read and understand what those people are saying.
In lieu of my own coffee and newspaper, I was browsing through an entertaining blog post by Ron Jeffries this morning entitled "Manual Testing Does Exist and It Is Bad," and it reminded me that we agilists need to be a little more explicit about certain things than we sometimes are. And the top of my list of such things, as you can see, is "automated testing." Like gravity, automated testing for agile isn't a "nice to have." It's the law.
Great post, Elena.
One thing I'd add is that, while this is a great place to get started, it can get to be an anti-pattern to have your "Quality Analysts" and your "Test Automators" be distinct sets of people long term. It can set up an uncomfortable "that's not my job!" dynamic, and lead to some "us vs. them" mentality (especially if it becomes codified and one group gets paid more).
Over the long haul, you want your Quality Analysts to learn to automate and appreciate how good software (including the test suite) is architected, and also get your automators to appreciate what kinds of things the testers are looking for.
Mike!
I totally agree. There are just a lot of perspectives to keep in the air simultaneously.
If you are a hiring entity, then the point I am hoping to make is that you can and should get developers who already work for you to start up your test automation effort from a widget configuration perspective, rather than advertising in vain for a person with “test automator” on her resume.
Meanwhile I was also hoping to warn hiring entities against the common idea of “let’s replace all our outdated testers who don’t have CS degrees with special, modern automated testers.” I have seen more than one site in the past that threw out years of specialized test experience and left nothing but devs guarding the henhouse, all in the name of quality.
From the perspective of the agile team, we want to create an environment which encourages the creation of “devsters,” as you are saying--multi-talented people who can both develop and test, perhaps with leanings more in one direction or the other.
From the perspective of the individual practitioner, I would be skeptical that the corporate world will change quickly to pay testers as much as developers, so if I were a test-originating devster, I would want to be reclassed as a dev or test automator ASAP to be able to bid on work at a higher compensation level.
And I suppose there is always the “morph the world” thing, but that goes slow. :-)
Great to see your voice!
Sorry for typos. Should not have attempted on phone! Or proofread.
Agree completely. Just a thing for a "long term minded" company to keep in mind.
Trying to hold up my end of the whole "blog reading club" thing. :)