Testing, testing, testing. In a recent article (“Strong Signals,” CIO Insight magazine), John Parkinson places the value of testing on par with the activity of design and coding itself:
Testing is becoming as necessary a profession as design and coding. Skills and experience matter. Process matters. Tools matter. Let the tests begin.
Our systems are becoming more complex: exponentially so, in fact. Not long ago, entire projects were conceived, designed and coded in-house. Projects were measured in thousands of lines of code, perhaps hundreds of thousands for large-scale projects. Everything was created “here,” so to speak, and it was conceivable to understand and test the entire system. Not so in today’s world.
Today, most of the code in our projects is written by someone else. We integrate third-party libraries, both commercial and open source. We use code repositories that were written by co-workers no longer with the company. As Parkinson points out, testing projects in this environment becomes a far more complex task:
Modern code has millions (perhaps billions) of possible execution paths and program states, and we cannot test every unique combination. So we have evolved as an industry, consciously or unconsciously, toward a testing strategy based on a combination of materiality and risk. … But to do it right, you have to do it consciously. And I suspect that in some shops this isn’t always the case.
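Parkinson’s point about path explosion is easy to make concrete. As a toy illustration (the function and the numbers below are mine, not the article’s), a program consisting of n independent two-way branches in sequence already has 2^n distinct execution paths:

```python
def path_count(branch_points: int) -> int:
    """Number of distinct execution paths through `branch_points`
    sequential, independent two-way branches."""
    return 2 ** branch_points

# Path counts grow exponentially with the number of branch points.
for n in (10, 30, 60):
    print(f"{n} branches -> {path_count(n):,} paths")
```

Even at one test case per microsecond, exhausting the roughly 10^18 paths of the 60-branch case would take tens of thousands of years, which is why exhaustive testing of any real system is off the table.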
Despite this, testing continues to fall under scrutiny and remains among the first programs to suffer budget cuts or schedule pressure. This is evident in strategies that balance “materiality and risk.” Stated another way, testing programs routinely operate under tight constraints to reduce cost and “get out the door faster.” Consequently, testing program managers are pressured to focus their efforts on high-risk areas, choosing what to test in the knowledge that comprehensive testing is not possible.
Is an increased need to focus testing on high-risk areas a sign of the times, or a sign of system complexity, or a sign of poor understanding of the software process? Granted, vastly more complex systems are more difficult to test — but does this justify a reduction in the quality of our testing efforts? Isn’t it more logical to conclude that more complex systems require more rigorous, more thorough testing programs?
After decades of experience, we have learned that it is far more costly to correct defects after they enter an operational phase than before. Finding and fixing a software defect after delivery is often as much as 100 times more costly than finding and fixing it during requirements or design (Boehm, Software Engineering, IEEE Computer Society, 2007).
By trading off testing activity for lower up-front costs, we run a constant risk of introducing more defects and facing dramatically higher defect-correction costs down the road. Weighing materiality and risk becomes a slippery slope, one that a strong quality assurance organization will keep an eye on. Of course there are reasonable levels of risk, but the analysis of what constitutes acceptable risk cannot lie with the testing or quality assurance teams. Risk must be clearly and concisely presented to the business, and the decision to trade thorough testing for market-based and consumer-based risk should be made only by the business unit.
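The slippery slope can be made concrete with a toy expected-cost model. All of the figures below are illustrative assumptions of mine; only the roughly 100x field-repair multiplier comes from the Boehm figure cited above:

```python
def expected_fix_cost(defects: int, cost_per_fix: float,
                      escape_rate: float,
                      field_multiplier: float = 100.0) -> float:
    """Expected total correction cost when a fraction `escape_rate`
    of defects slips past testing into production, where each fix
    costs `field_multiplier` times as much as an early fix."""
    caught = defects * (1 - escape_rate) * cost_per_fix
    escaped = defects * escape_rate * cost_per_fix * field_multiplier
    return caught + escaped

# Hypothetical project: 500 defects, $200 per early fix.
print(expected_fix_cost(500, 200, 0.0))   # every defect caught before release
print(expected_fix_cost(500, 200, 0.10))  # 10% escape to the field
```

In this sketch, letting even 10 percent of defects escape to the field raises the total correction bill by roughly an order of magnitude, which is the long-term cost that a “materiality and risk” trade-off must be weighed against.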
Placing responsibility for this decision with the business unit ensures adequate communication of the issues at hand, and that the potential risk is correctly weighed against long-term cost. With luck, the business unit will make the right risk-versus-reward decisions, or at the very least learn from its mistakes in relatively short order.