Forced by circumstances (and an especially pragmatic client), I've recently been asking peers and "the blogosphere" the apparently naive question, "Is it important to do automated testing and clear up what Mike Cohn calls 'manual test technical debt'?"
Theoretically, if you have no Big Upfront Design and you also have no automated test suite, you're pretty much just building a big unplanned mess and you'll never be able to change anything for fear of breaking something else. I understand that, and so does my pragmatic client. But can this issue be quantified? Aside from feeling somewhat inadequate when reading agile theory, I mean.
The reason it's important to ask this question is that setting up and maintaining automated testing is expensive. Unlike many agile practices, which can be kicked off with some Sharpies and a clean wall, automated testing requires a large-scale capital expenditure to bring one or more testing packages into an organization, followed by significant strategy development, staging, and staff training to put those new packages into use and to keep them running.
You need a compelling financial case to show the likely return on an investment which may run to half a million dollars for software alone in a medium-sized corporation. So how much is it worth to clean up your manual test technical debt?
There are some related questions out there with quantified answers. Gartner and CAST have recently published estimates of how much technical debt is out there globally and on a per-application basis ($500B and $1M, respectively). "Technical debt" is defined by these studies as "the cost of dealing with delayed and deferred maintenance of the application portfolio." As most people who worry about technical debt note, this "cost to fix" is not the same as what it costs the business to have a software emergency of some kind, or to be slow in rolling out enhancements to the software. And of course this "cost to fix" is not exactly the same as the cost accrued by an organization due to the lack of automated testing. But it's a handy statistic to be able to throw around.
Within your own organization, you may want to look at the following, in terms of making your business case:
- Regression testing costs: this may be your easiest way to quantify a return on investment. What does it cost you to manually regression test your most expensive features? How much would it cost to set up automated testing on those features? It is likely that you'll have some parts of your application where the one-time cost of setting up an automated regression test will pay for itself within the year, by freeing your manual testers to do other important testing of new features as they are written. (A back-of-the-envelope sketch of this calculation appears after this list.)
- Software development costs: if you're in a position to do so, quantify the number of points your project teams deliver per iteration, to get a "cost per point." If your teams are routinely interrupted to fix production bugs and put in patches, you should be able to quantify the cost of these unplanned interruptions in terms of undelivered feature points; the sketch after this list includes this calculation as well. The costs will likely add up to tens of thousands of dollars quite easily.
- Insurance analogy: calculate the cost to your company of unexpected down time for your application. Even if down time is not catastrophic, because manual work-arounds are available, you should be able to quantify the value of the staff time diverted to those work-arounds while the software is down. If automated tests give you better quality software and less down time, a large-scale investment in automated testing may be justified.
- Business losses due to lack of speed in adding new features: this may be harder to readily quantify, but as your software gets harder to change, you will lose the agility you intended to gain when your company first took on agile software development techniques. You may want to put together one or two hypothetical or real cases of changes which were much more expensive to deliver due to the cost of manual regression testing than they would have been, had automated testing been in place. Depending on what your software does, you should be able to assign a business value to software flexibility, in terms of business lost due to delayed time to market.
- Total software replacement cost: at some point your software will become partially or completely impossible to repair, because the cost of regression testing the whole system is prohibitive. What is the cost of starting all over? On the other hand, if your company was already planning to replace the software entirely within a short time horizon, then maybe your manual testing is fine.
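To make the first two bullets concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (hours per manual regression pass, hourly rates, points per iteration, and so on) is a hypothetical placeholder, not data from any real organization; plug in your own figures before drawing conclusions.

```python
# Back-of-the-envelope ROI sketch for the first two bullets above.
# All figures are hypothetical placeholders -- substitute your own
# organization's numbers.

# --- Regression testing: break-even on automating one feature area ---
manual_hours_per_pass = 40        # hours to manually regression test the area
passes_per_year = 12              # e.g. one pass per monthly release
tester_hourly_rate = 60           # fully loaded cost per tester hour ($)
automation_setup_hours = 300      # one-time cost to script the regression suite
automation_upkeep_hours = 5       # maintenance effort per release after setup

manual_cost_per_year = manual_hours_per_pass * passes_per_year * tester_hourly_rate
automation_first_year = (automation_setup_hours
                         + automation_upkeep_hours * passes_per_year) * tester_hourly_rate

print(f"Manual regression cost per year:      ${manual_cost_per_year:,.0f}")
print(f"Automation cost, first year:          ${automation_first_year:,.0f}")
print(f"First-year saving (may be negative):  ${manual_cost_per_year - automation_first_year:,.0f}")

# --- Unplanned interruptions: cost in undelivered feature points ---
points_per_iteration = 30         # what the team delivers when uninterrupted
iteration_cost = 45_000           # loaded team cost per iteration ($)
cost_per_point = iteration_cost / points_per_iteration

points_lost_to_bug_fixing = 8     # points displaced by production firefighting
print(f"Cost per point:                       ${cost_per_point:,.0f}")
print(f"Cost of interruptions this iteration: ${points_lost_to_bug_fixing * cost_per_point:,.0f}")
```

Run across a year of releases and iterations, even placeholder figures like these add up quickly, which is the point of the exercise: the comparison is rough, but it is concrete enough to put in front of whoever approves the spending.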
There are fewer and fewer situations (user interface technologies) where automated testing needs to be expensive.