Exposing the enterprise to risk: Who decides what not to test?

Testing, testing, testing. In a recent article (Strong Signals, CIO Insight magazine), John Parkinson raises the value of testing to a par with design and coding itself:

Testing is becoming as necessary a profession as design and coding. Skills and experience matter. Process matters. Tools matter. Let the tests begin.

Our systems are becoming more complex, in fact exponentially so. Not long ago, entire projects were conceived, designed and coded in-house. Projects were measured in thousands of lines of code, perhaps hundreds of thousands for large-scale projects. Everything was created “here,” so to speak, and it was feasible to understand and test the entire system. Not necessarily so in today’s world.

Today, most of the code in our projects is written by someone else. We integrate third-party libraries, both commercial and open source. We use code repositories that were written by co-workers no longer with the company. As Parkinson points out, testing projects in this environment becomes a far more complex task:

Modern code has millions (perhaps billions) of possible execution paths and program states, and we cannot test every unique combination. So we have evolved as an industry, consciously or unconsciously, toward a testing strategy based on a combination of materiality and risk. … But to do it right, you have to do it consciously. And I suspect that in some shops this isn’t always the case.
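Parkinson’s point about path explosion is easy to make concrete. As a purely illustrative sketch (not from his article): a routine with n independent two-way branches has up to 2^n execution paths, so a function with just 30 such branches already has over a billion paths, far beyond what any exhaustive test suite can cover.

```python
# Illustrative only: a function with n independent two-way branches
# (if/else) has up to 2**n distinct execution paths, since each
# branch doubles the number of paths through the code.

def path_count(branches: int) -> int:
    return 2 ** branches

for n in (10, 20, 30, 40):
    print(f"{n} branches -> {path_count(n):,} possible paths")

# Output:
# 10 branches -> 1,024 possible paths
# 20 branches -> 1,048,576 possible paths
# 30 branches -> 1,073,741,824 possible paths
# 40 branches -> 1,099,511,627,776 possible paths
```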

Despite this, testing continues to fall under scrutiny and remains one of the first programs to suffer budget cuts or compressed schedules; the strategies that balance “materiality and risk” are evidence of exactly that pressure. Stated another way, testing programs are routinely squeezed to reduce cost and “get out the door faster.” Consequently, test managers are pressured to focus their efforts on high-risk areas, choosing what to test in the full knowledge that comprehensive testing is impossible.
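To make “focusing on high-risk areas” concrete, here is a minimal sketch of one common form of risk-based test prioritization. The component names and scores are hypothetical, and the likelihood-times-impact scoring is one widely used convention, not something prescribed by Parkinson’s article:

```python
# Hypothetical risk-based test prioritization: score each area by
# (likelihood of failure) x (business impact) and spend the limited
# testing budget on the highest-scoring areas first.

components = [
    # (name, likelihood 1-5, impact 1-5) -- invented scores
    ("payment processing",   4, 5),
    ("third-party auth lib", 3, 5),
    ("admin UI",             2, 3),
    ("report formatting",    2, 2),
]

ranked = sorted(components, key=lambda c: c[1] * c[2], reverse=True)

for name, likelihood, impact in ranked:
    print(f"risk {likelihood * impact:2d}: {name}")

# risk 20: payment processing
# risk 15: third-party auth lib
# risk  6: admin UI
# risk  4: report formatting
```

Everything below the cut line in such a ranking is, implicitly, what does not get tested, which is precisely the decision this article argues belongs to the business.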

Is the increased need to focus testing on high-risk areas a sign of the times, a sign of system complexity, or a sign of a poor understanding of the software process? Granted, vastly more complex systems are more difficult to test, but does that justify a reduction in the quality of our testing efforts? Isn’t it more logical to conclude that more complex systems require more rigorous, more thorough testing programs?

After decades of experience we have learned that it is far more costly to correct defects once a system is in operation than before. Finding and fixing a software defect after delivery is often as much as 100 times more expensive than finding and fixing it during requirements or design (Boehm, Software Engineering, IEEE Computer Society, 2007).
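A back-of-the-envelope calculation shows how quickly that multiplier dominates. The defect count and per-defect design-time cost below are invented for illustration; only the roughly 100x post-delivery escalation comes from the figure cited above:

```python
# Back-of-the-envelope illustration of defect-cost escalation.
# base_cost and defects are assumed values; the 100x multiplier
# for post-delivery fixes is the figure cited from Boehm.

base_cost = 100   # assumed cost ($) to fix one defect during design
defects   = 50    # assumed defects a thorough test program would catch

caught_early = defects * base_cost          # found before delivery
escaped_late = defects * base_cost * 100    # found after delivery

print(f"fixed during requirements/design: ${caught_early:,}")  # $5,000
print(f"fixed after delivery:             ${escaped_late:,}")  # $500,000
```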

By trading off testing activity for lower up-front costs, we run a constant risk of shipping more defects and facing dramatically higher defect-correction costs down the road. Weighing materiality and risk becomes a slippery slope, one that a strong quality assurance organization must watch closely. There are, of course, reasonable levels of risk, but the analysis of what constitutes acceptable risk cannot rest with the testing or quality assurance teams alone. The risk must be presented clearly and concisely to the business, and the decision to trade thorough testing for market- and consumer-based risk should be made only by the business unit.

Placing responsibility for this decision with the business unit ensures that the issues at hand are adequately communicated, and that the potential risk is correctly weighed against long-term cost. With luck, the business unit will make the right risk-versus-reward decisions, or at the very least learn from its mistakes in relatively short order.

2 thoughts on “Exposing the enterprise to risk: Who decides what not to test?”

  1. Hi Gil, glad to see your interest in Rational Scrum.

    After hearing your videocast, it occurred to me that my original article may not have been clear on one point. I’d like to clear that up now.

    It is not my intention to imply that the business unit should decide which components and systems will be tested. Clearly, the SQA and testing organization needs to define the appropriate quality assurance and testing plans and see that they are carried out. The testing group is responsible for developing specific test plans and executing them; the quality assurance group, in turn, is responsible for validating that work and ensuring that the product is in fact thoroughly tested.

    However, there are many situations in today’s market where the business places constraints on these processes. We are often required to push software out the door more quickly than we would like. In these situations, it is the business unit that must own the decision. SQA needs to communicate the risks of these compromises clearly to the business unit and place accountability for taking that risk squarely with it. In my experience, a well-educated business unit will often back down and choose a more conservative route, perhaps holding some features for a future release or extending release dates. Businesses are, by nature, risk-averse. Put that knowledge to use: make sure the business understands the risks and accepts responsibility for taking them.

