You’re Testing It Wrong…

I really like test driven development. It lets me be lazy. I don’t have to worry about my software quality, or that something I did broke some other thing. And with good dependency injection to make sure every component is working right, “it just works.” Now I code using TDD (writing my tests first, then coding to fulfill them), and I focus our QA efforts on making sure we have great test plans, and great coverage.

Closed System
A closed system wants to be tested.

So, when one of my project teams kept telling me they couldn’t write tests because the database wasn’t ready I got worried. Our team had been immersed in TDD for months — and every single engineer had nodded vigorously when I set expectations. The team leader recited the definition of “dependency injection,” just to drive home how ready they were to embrace it!

But when I asked to see what was wrong, I knew we had a problem. The team’s tests were not injecting mock objects the right way. The idea behind dependency injection is to replace the smallest component possible in a closed system with another object, a “mock.” That mock can then monitor the system around it, inject different behaviors, and create desired results.

For example, let’s say we have a program that connects to a gizmo — your home thermostat. The thermostat itself is a separate component that lives outside your program. We can expect the thermostat to behave like a thermostat should… reporting the current temperature, and letting the homeowner enter a desired room temperature. Pretty straightforward.

So the first step is to write a program that talks to the thermostat. We can wire up a real thermostat, but we’ve got a problem right off the bat. We want to know how our program behaves as the ambient temperature changes — 65 degrees, 32 degrees, or 100 degrees. But a real thermostat is only going to report the actual room temperature, and making the room frigid or boiling just isn’t going to be very comfortable or practical.

Not Mocking
Faking is not mocking.

This is where dependency injection comes in — wouldn’t it be great if we could inject a new gizmo, one that behaves according to our test plan?

It turns out that my team had been taking the wrong approach — an easy mistake to make if you’re new to the idea of mocking and dependency injection. Unfortunately, it meant that they weren’t really testing the application. They were testing something else entirely.

Once we walked through the system, the mistake was clear. During application startup, the system created a connection to a database. My team’s approach had been to add a “mocking” variable to the application. In effect, it created a test condition; if the application was in “mocking mode” it would only simulate database calls. If not, it sent real requests to a real database. Sounds great, right?

But it’s all wrong. Here’s the problem: The application was faking real-world behavior. That is, throughout the program there were dozens of little checks, in effect asking, “if we are mocking, then don’t check the database for a user record; instead just return this fake user record.”
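
To make the anti-pattern concrete, here is a minimal sketch of what that kind of code looks like (all names here are hypothetical, not the team’s actual code):

case class User(id: Int, name: String)                   // stand-ins for the
trait Database { def lookupUser(id: Int): Option[User] } // app's real types

object AppConfig {
    var mocking = false // flipped to true by the test harness
}

class UserService(db: Database) {
    def findUser(id: Int): Option[User] =
        if (AppConfig.mocking)
            Some(User(id = -1, name = "Fake User")) // the branch the tests exercise
        else
            db.lookupUser(id) // the branch that actually ships, never tested
}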

This meant that the real program — the actual application logic that would be deployed to the real world — was never tested. Instead, an alternate branch of logic was tested — a fake program, if you will. So two things happened:

  1. We weren’t testing the real program, we were testing something else altogether.
  2. The program itself became terribly complicated because of all the checks to find out “are we mocking?” and the subsequent code to do something else entirely.

And all of that is why my team said they couldn’t really test the system, because the database wasn’t up and running.

So what does real dependency injection look like? It’s simple: You want to change the actual gizmo, but change it in the most subtle way possible — and then you want to put that actual gizmo right back into your program.

Mocking
Real mocking doesn’t affect the original program flow.

Getting back to the thermostat example, one solution would be to modify a real thermostat. You could crack it open, remove the temperature sensor, and add a little dial that lets you change the reported temperature. Then you plug the “mock thermostat” into your program, and you change the temperature manually! A better approach, though, would be to change the software that talks to your thermostat, and instrument it so that you can override the reported temperature. Your program would still think it’s talking to a real thermostat, and the connecting software could change the reported temperature before handing it off to your program.
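
In code, that “connecting software” might look something like the following sketch (the Thermostat trait and all names are hypothetical, purely to illustrate the shape of the solution):

trait Thermostat {
    def currentTemperature: Double // degrees Fahrenheit
    def setTarget(degrees: Double): Unit
}

// Wraps the real driver; only the reported temperature can be overridden.
class InstrumentedThermostat(real: Thermostat) extends Thermostat {
    var reportedOverride: Option[Double] = None // the test's "little dial"

    def currentTemperature: Double =
        reportedOverride.getOrElse(real.currentTemperature)

    def setTarget(degrees: Double): Unit = real.setTarget(degrees)
}

A test can then dial in any ambient temperature it likes (say, reportedOverride = Some(32.0)) while the program remains none the wiser.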

In our case, the right solution was injecting a simple mock component at just the right point in our program.

For example, let’s say our application uses an Authenticator object to log in users. The Authenticator checks the validity of a user in the database, and then returns a properly constructed User object. We can use dependency injection to substitute our own test data by overriding the single function we care about:

object fakeAuthenticator extends Authenticator {
    override def getUser(id: Int): Option[User] = {
        Some(User(id = -1, name = "Fake User")) // hard-wired test user
    }
}

On line 2, we replace the real Authenticator’s getUser function. The overridden method returns a hard-wired User object (in this case, one that clearly doesn’t represent a valid user account). Because the override lives in the test package only, the original program is not altered — all that’s left is to inject our altered Authenticator into the program.

The old-fashioned way of doing injection is still reliable: don’t tell, ask. Use a factory object to ask for the Authenticator. Given a factory in the application (let’s call it the AuthenticatorFactory) we can override what the factory actually returns in our test case only:

AuthenticatorFactory.setAuthenticatorInstance(fakeAuthenticator)
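
The factory itself can be as simple as a mutable holder. The sketch below is hypothetical (only the setter call above appears in our actual code), but it captures the idea: production code asks the factory for its Authenticator rather than constructing one.

object AuthenticatorFactory {
    // Assumes Authenticator is a concrete class, as the earlier override suggests.
    private var instance: Authenticator = new Authenticator

    def setAuthenticatorInstance(auth: Authenticator): Unit = { instance = auth }
    def getAuthenticator: Authenticator = instance
}

// Application code asks, rather than constructing its own:
// val user = AuthenticatorFactory.getAuthenticator.getUser(id)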

A slightly more modern approach is to use a dependency injection framework, but the underlying principle is exactly the same.

Likewise, we can take the concept of mock objects further by using frameworks such as Mockito (a framework that works wonderfully with specs2). Mockito makes it easy to instrument real objects with test-driven behavior. For example, Mockito will produce a mock object that acts just like a real object, but also verifies expectations (such as checking that a specific function is called a certain number of times).
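
For example, here is roughly what that looks like when calling Mockito’s API from Scala (reusing the Authenticator and User types from the example above):

import org.mockito.Mockito.{mock, when, verify, times}

val auth = mock(classOf[Authenticator])
when(auth.getUser(42)).thenReturn(Some(User(id = 42, name = "Test User")))

// ... inject `auth` and exercise the system under test ...
auth.getUser(42)

// Verify the expectation: getUser was called exactly once with id 42.
verify(auth, times(1)).getUser(42)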

Whatever tools and frameworks you use, test driven development has proven itself over the past decade. My own experience is the same: Every TDD project I’ve run has produced more predictable results, maintained better velocity, and been more reliable overall. It’s why I don’t do any coding without following TDD.

Tell me three times: The importance of quality assurance

The earliest military applications of quality control came about from needing to send messages with reasonable confidence. The protocol was simple: Send every message three times. This triple-redundancy provided a checksum against failure, and could be used to correct broken messages. Any difference between the three copies would usually only show up in one, so the remaining two could be treated as accurate. The incorrect third could be discarded. In time, of course, advances in technology and process made it possible to introduce far more advanced — and secure — methods of communication. But the principle still lives on today in formal quality assurance: By introducing a redundant check on a program or process, we improve our chances of success.
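
As a toy illustration of the principle (not any historical protocol), majority voting over three copies of a message takes only a few lines:

// Keep, at each position, the character that at least two copies agree on.
// Assumes all three copies arrived with the same length.
def reconcile(a: String, b: String, c: String): String =
    a.indices.map { i =>
        val (x, y, z) = (a(i), b(i), c(i))
        if (x == y || x == z) x
        else if (y == z) y
        else '?' // all three disagree: this position is unrecoverable
    }.mkString

// One garbled copy is outvoted by the other two:
// reconcile("attack at dawn", "attack at dusk", "attack at dawn") == "attack at dawn"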

As an industry, software folks have invested a huge amount of time in figuring out what goes wrong with most projects. The root cause is complexity, and our limited ability to manage that complexity accurately. Finding a method that enables reliable and repeatable results is a tough problem, especially given the variables involved in every project: Changing business environments, customer demands, technical capability and understanding, and team makeup are just a few of the factors that affect every project in a multitude of ways.

To combat the problems of complexity, different best practices have become popular — some good, and some abysmal. The more successful techniques have a common theme: The idea that we can manage all this complexity by introducing multiple checkpoints.

Quality assurance and checkpoints

This is what quality assurance and structured software testing are all about — and yet, at least in the commercial industry, there’s always pressure to cut quality assurance or testing budgets. Take, for example, a recent project that ran about 18 months, involved well over 25 people, and launched to huge success (and zero defects in the product). The immediate aftermath of this successful effort? One might imagine kudos were in order. How many times do 18-month, multi-million dollar projects get out the door without major problems? Instead, the project sponsors criticized the cost of development and, rather arbitrarily, said the team had “spent too much on quality assurance.” The reasoning was simply that, since there were no bugs in the finished product, all that money spent on quality must have been wasted.

Management then demanded the quality assurance budget be cut dramatically — in fact, they insisted they would not spend another dime on quality assurance. It was one of the most counterintuitive situations I’ve ever encountered.

The unfortunate consequence of this is an antagonistic relationship between project team and project sponsor. It brings into question how much visibility the sponsor should have when it comes to internal project budgets, and that’s a dangerous line to tread. At the same time, taking the position that QA will not be funded is crossing the line between budgetary management and meddling with technical process — and in this case, a process that had worked with stellar success.

The politics of management

Visibility should not be compromised. The project sponsor needs to know where costs are, and most project sponsors are not going to be happy with a single budgetary line item for “project development.” However, it is equally important that project teams and their organizations maintain a uniform front defending what works.

Stated another way, when you have something that works, treat it as a whole product that cannot be “sliced and diced.” Software development, at least those processes that work, cannot be subject to arbitrary and partial budget cuts. Cutting just the quality assurance department alters a working process. In this case, we had an unreasonable project sponsor that was not interested in understanding the complexity of building a product.

I held a firm line with our sponsor. We could not run a project with arbitrary cuts to parts of the program. Our compromise ended well enough. I was able to tell the sponsor that we would cut the budget across the board, not just in one department. At the same time we came to a mutual understanding: The sponsor was really just concerned with the big picture, the total number of dollars spent. As such, we agreed he would not see another line item labelled “quality assurance.” Future budget reports had a single line item for the total engineering cost, all inclusive of quality-related expenses.

The team, and the project manager, need to defend a system that does a good job. The sponsor needs to be informed that budget cuts cannot arbitrarily target specific program components: Instead, the right way to tackle this problem is for the sponsor to cut the overall budget and let the project organization decide how that cut will be implemented. Most likely, cuts will need to be applied equally across the project — thereby reducing overall output, but not messing with a process that works.

Unfortunately, everyone needs to recognize that this can lead to losing a client or canceling a project. The question is, would keeping the project alive be worth the long-term headaches, knowing that cost will be a constant challenge? You might gain near-term budget cuts, but the problem will come back ten-fold when poor quality and schedule slips lead to unhappy customers.

Once you have a working program and methodology, don’t compromise on what it takes to deliver a project right. It’s better to decide it’s too expensive and walk away, rather than put everyone through the mess of a poorly run project.

Managing with blinders on

Most managers today have blinders on when it comes to solving the problems of complex projects: They are lost among the trees, and can’t see the forest for what it really is. In a recent discussion in the popular Project Management forum on LinkedIn, one moderator posted the question, “What are the most common mistakes of project managers?”

During the ensuing discourse, respondents from around the world posted no fewer than 18 different answers to this question.

Among the responses were answers such as “having poor stakeholder involvement,” “not enough project planning,” “poorly documented requirements,” “the budget being too small or poorly estimated,” and “the [project] goal is not consistent.” To be sure, many of these 18 answers are highly relevant to the success of a project — and yet, every single answer was wrong. None of the 18 responses identified the single, most common mistake of project managers.

In fact, each answer emphasizes the root of the problem: Too many project managers are focused on the day-to-day problems of the project and have lost sight of their overall strategy. They are thinking tactically, putting out fires, rather than strategically — making sure the fires never happen in the first place.

Take, for example, a few of the more common issues raised in this discussion:

  1. Poor stakeholder involvement. Let’s assume for a minute that we have a solution to this problem — perhaps, for example, a project manager has correctly identified all the stakeholders, put together a great communication plan to keep the stakeholders informed, and succeeds in building a collaborative environment with the stakeholders “at the table” on a regular basis. If this solves the problem of stakeholder involvement, does it actually save most of the projects that go off the rails?
  2. The budget was too small. Again, let’s assume that the right process was used to estimate the project from the start, and that the project manager uses a solid method for measuring performance, cost, and schedule (say, Performance Based Earned Value). Certainly, budget overrun is a common problem, but would this actually solve most project problems?
  3. Poorly documented requirements. In my experience, every requirement is poorly documented to start with — so, let’s assume that the right approach is taken to turn poor requirements into great requirements. Quality assurance is involved early, a full-circle approach ties requirements to work product to acceptance, excellent change management is used, and stakeholders provide a final consensus. Will producing great requirements really save more projects than any other strategy?

The list, of course, goes on for quite a while — and that’s the point. The list is long, and every single item raised is a valid concern for project managers. But with 18 different root causes on the table, could any one of them really make that much difference in the overall landscape?

These are all tactical solutions to specific project problems. So what’s the big picture? What is the one thing that would actually make the biggest difference, that would actually address many, perhaps even most of these 18 different issues?

Let’s take another look at KPMG’s survey of 252 organizations, and their subsequent findings. According to the study, inadequate project management implementation constitutes 32% of project failures, lack of communication constitutes 20%, and unfamiliarity with scope and complexity constitutes 17%. Taken together, 69% of project failures ultimately trace back to poorly implemented project management practices. What this means is simple: Project managers need to step back from the tactical, day-to-day fire fighting, and take a more strategic view. Adopting the right project management strategy will address most of the problems at hand.

How so? Let’s reconsider those first three project issues above.

  1. Poor stakeholder involvement. A thorough project plan, adopted out of an appropriate project management methodology that is fit for the purpose, will place the right emphasis on stakeholder involvement. It will also provide the right tactical tools to make sure stakeholders are involved, and appropriate measures should stakeholder involvement begin to fail.
  2. Budget problems. A correctly selected project management methodology will put the right emphasis on budget analysis, and will provide the right tools to stay on top of the budget. The project manager may need to look outside his or her own skill set to manage to those requirements — but the methodology will establish the goals, the tools, and the metrics from which deviation triggers a red flag.
  3. Poor requirements. The right project management plan will include appropriate methods, probably mandated as part of a technical requirements standard, for developing strong requirements. The plans will include adequate validation and verification of requirements — possibly through strong quality assurance measures. Again, all of these tactical solutions will become part of the project and solve the overarching problem.

So, the root cause of project failure — in fact, of 69% of project failures, according to KPMG’s study — is failing at the strategic level to identify and implement appropriate project management practices.

This means choosing the right standards and methodologies for the project. For instance, if tight quality and budget controls are a concern, more rigor is needed in those areas. That probably means shunning lightweight, agile methods in favor of something that uses more ceremony and process (such as that defined in the PMBOK® and other classical project management approaches). It also means sticking to your guns and making sure the methodology is applied. Showing the methodology to the team and putting it on a bookshelf won’t cut it. Application is the key, and that means recognizing that the standards, practices, and procedures are there for a reason — don’t take shortcuts, because doing so means introducing risks to your project’s health.

Dealing with negativity in the team

You are leading a star team on a challenging project when you notice a particular team member spreading negativity and rumors among peers. You are afraid this negative behavior will bring the whole team’s morale down. What would you do in this situation?

Every individual is different, and every situation is going to require a different response. Temper tantrums, sexist remarks, chronic lateness, information hoarding, playing favourites … people don’t always behave themselves at work. An adept manager needs to understand the individual nuances of the situation and act accordingly. You need finesse, insight into your team, an understanding of psychology, and often, incredible patience. Here are a few strategies that I always like to try.

1. Engage the malcontent

Quite often, the negative attitude comes from feelings of being disengaged from the team or the project. Perhaps the individual thinks he could do the job better; perhaps he isn’t working on what he wants to work on; or he just feels the project is heading in the wrong direction. Most often, negativity stemming from these problems will surface in a team setting as passive-aggressive behavior, grumbling, or openly showing dislike for decisions. I like to engage this individual in finding a solution. Hand accountability to that individual and, in essence, give him full rein to fix the problem. With accountability often comes responsibility — and the need to realize that decisions are not always quite as simple as they appear on the surface. Of course, sometimes the individual makes a mistake — but in this case, the lesson is still learned. They “get their way,” but also find out that “their way” wasn’t, after all, the right way. Of course you’ve got to strive for a better outcome — assign responsibility, and then back them up. Make sure they’ve got resources and help in the decision-making process. Hopefully it becomes a learning experience for everyone.

2. Reach consensus

Sometimes it’s not practical to let an individual run with his own ideas. Yet you still have someone who feels “things” are heading in the wrong direction. I like to try to reach consensus or, failing that, at least agreement that we’ve made the right choices given what we know. One approach is to schedule a round table with the malcontent and his peers, perhaps 3-4 people. Discuss the problem, and try to reach agreement on direction. In the best case, his peers will sway his opinion. More often, the complexities, choices and decisions that have led to the current situation will be discussed — and the “black and white” situation fades in favor of many choices, and trying to make “the best one.” With a little luck, the malcontent employee walks out of the round table with two things: 1) a sense of having been engaged in the decision-making process and 2) a new appreciation for the complexity at hand, and the decisions that have been made.

3. Make it clear that it’s a team effort

A one-on-one discussion goes a long way. Spend some time with the individual and really try to listen, and understand what the problem is. Come up with some mutual objectives — some things for the individual to work on (these might be soft skills, such as being less negative) as well as some things for you to work on (these will be things to help ameliorate the bad attitude, such as making sure his opinion is part of the decision process). Make sure it’s mutual, and show some real effort here — there’s tremendous value in demonstrating how much you value each individual’s contribution. Work with the individual to address the problems and find solutions.

4. If all else fails…

If you still have a problem employee on your hands after making a sincere effort to fix the problem, you’ve got to make it clear that continued negative behavior will not be tolerated. You also need to be prepared, so document the problems. Keep a record. After some time, it becomes a matter of reprimanding and giving specific, required objectives. This is the worst-case scenario and, more often than not, the first step toward losing an employee. Sometimes it’s a “wake up call” to the individual, but often this kind of heavy-handed approach just feeds the negativity. Be prepared for either outcome.

Wayne McHale was a senior manufacturing executive when he heard reports that one of his branch offices was getting fed up with the arrogant, condescending attitude of a new manager. He decided to pay a personal visit to the office and put an end to the situation right away. “I made it absolutely clear that while we were delighted to have him on the team, certain behaviours could not be tolerated in a team environment,” says Mr. McHale. “He was taken aback, initially, because I think the behaviours were somewhat ingrained. He was a star and had been told for too long that he was wonderful.”

Whatever the case, make sure you have a good documented history. You can use it when talking about the problem with the employee, making sure you have concrete references to poor behavior. In the worst case situation, you can also use it to back up termination papers.

Above all else, don’t be an enabler

Some organizations actually nurture bad behaviour, according to Lew Bayer, president and CEO of Civility Experts Worldwide. For example, an all-star employee with a prima donna attitude may be tolerated because a manager decides it’s too costly or too much hassle to seek a replacement. Or perhaps certain rules may not apply to someone who has formed a friendship with a senior manager. In situations like this, it’s often the boss that’s the problem.

You can’t avoid dealing with workplace performance issues — putting them off will come back to haunt you in the long term. Perhaps other employees will get fed up and quit. The problem employee might have a temper tantrum in front of a client. It’s hard to predict, but one thing is almost certain: It’s going to happen at the worst time, when stress is high and a lot is on the line. Ignoring the issue won’t make it go away; it will just get worse.

Don’t wait until the problem becomes a problem for everyone. Be proactive, and recognize that the workplace is above all a place for professionalism. If your star performer is worth keeping, coaching can help. If your disaffected team member needs to feel involved, a few changes can make that happen. But, only if he’s open to the idea. If not, it may be time to take more direct action in order to preserve the integrity of your team.

Do hackers make the best testers?

Recently, I was asked “what makes a good software tester,” and as a subtext, whether hacking and testing share a similar mindset, and how wide a skill set testers need to have.

I think the most valuable asset a software tester can have is an attitude of gleeful problem discovery. Someone who loves to break systems, discover their imperfections, and explore their weaknesses makes a great tester. I haven’t met many people who really enjoy and excel at this, but it’s probably an attribute common among hackers as well.

It’s also wonderful to have a tester who really cares about the quality of the product, and it’s absolutely essential for anyone who wants to excel as a tester. That means having the patience and desire to work closely with the Quality Assurance group, to understand what a “good customer experience” means, and to really grasp things like quality of service, user experience, and customer needs.

Part of being a good tester means enjoying running down the rabbit hole. Where the hole leads is a mystery. Perhaps testing uncovers problems stemming from poor UI design, SQL injection vulnerabilities, or performance issues under heavy load — or it means playing the clueless user who always clicks the wrong thing and triggers a logic error.

The “how” of testing is another matter though. Yes, there are well understood principles and techniques, and often tools, for testing all of these things. I have found that in most cases, good testers tend to specialize. I don’t expect to find one person that can find the flaws in the user interface, perform load testing, and also look for SQL injection vulnerabilities. To get really good at all of these things, you need a team — some of those team members will focus on the back end, some on security, some on database systems, some on the front end. Finding someone that’s great at tackling a couple of those verticals is pretty rare. That said, every tester should have an adequate, at least shallow understanding of all of these areas. In order to properly localize a problem, you need to understand what could be causing it. But having other resources to bring in to help diagnose the specialty areas is critical.

Managing risk in global projects

I was recently asked about the most relevant, pressing risks that affect global project management. Many come to mind, but one stands out immediately: a globally disparate (geographically separated) team. Teams working in separate regions face tremendous challenges that a co-located team doesn’t have to think about. This is exacerbated when outsourcing, where conflicts in language, time, culture, and business environment all affect the organization.

Organizations facing these environmental issues need to put a considerable investment into mitigating the associated risks. This is essentially why the “promise of outsourcing” has been toned down over the past decade: Gone is the illusion that you can get solid work for 25 cents on the dollar. “Real” outsourcing costs tend to range anywhere from 70 cents on the dollar to $1.20 on the dollar (yes, outsourcing can often lead to higher costs — but sometimes it’s not just about cost, but geographic presence, distribution, foreign market penetration, etc.)

Language barriers pose some of the most difficult issues to work around. When team members cannot communicate easily, that barrier affects the entire team. It can lead to misunderstood requirements, misinterpretation of directions, even a complete disconnect on whether a team is in trouble or doing fine. Ideally, open communication, information radiators, and visibility are central to successful projects. Any barrier increases risk, and that means increasing efforts to compensate. Closely related to language barriers are cultural barriers. A fairly obvious example is the straightforward U.S. business culture compared with the respectful and tradition-rich Japanese culture. Even seemingly similar cultures pose barriers; for example, East Indian and U.S. cultures don’t easily connect until interpersonal barriers have had time to break down.

Business environment and common bias also contribute to the risk of disparate teams, especially those separated by business culture. For example, consider a client developing a legal work product solution for the U.S. market while using East Indian resources. The lack of a common business foundation can easily lead to a complete disconnect regarding assumed business objectives (in other words, the legal system is very different in the U.S. versus India, which means a lack of common understanding regarding some pretty basic business goals).

All of these issues can be mitigated with appropriate practices. The necessary measures will vary from one project or organization to another — there are a lot of variables at work, and that means every project has to be treated uniquely. The common thread is communication. Breaking down these barriers by using process, technology and culture is critical. The disparate team needs to become one team, working as a unit — and that usually means a significant investment in tools, strong processes and team-building exercises. I strongly advocate rotating team members across the organization or project as one example. This helps across the board: It breaks down communication and culture barriers, helps team members get to know one another, lets distant teams experience local culture, and helps to build a collaborative “whole team.”

Doing away with ineffective, broken risk management

We all want to be Apple. We want to have their reputation, at any rate. A zealous customer base, fantastic products that seemingly flow out of design and into production without a hitch, and virtually no recalls or product delays.

But it’s the part about the customer that really grabs our attention. So the question is, how do they do it? If we put the right people in a room together will they just “get it,” and execute a flawless vision?

That’s likely a key part of it, at least insofar as it takes the right people to make the right decisions. But how do we execute our vision with such precision? And if we look at other successful companies, will we find some theme in common with Apple? Absolutely. That common theme isn’t any single practice, but every successful company has one element in its strategy: A mechanism for avoiding undue risk.

Risk management has become mainstream. It’s no longer the domain of rocket scientists and actuaries. In fact, it’s become so mainstream that formal risk management practices are showing up everywhere we look. Most of the time, we’ll see the word Enterprise included in the definition — a way of letting us know “this is for the whole firm.” Enterprise Risk Management (ERM), Business Continuity Planning (BCP) and Governance and Risk Compliance (GRC) are just a few of the different names risk management flies its flag under.

Is More Attention A Good Thing?

But is all this sudden attention to risk management going in the right direction? To answer that, we need to look at the specifics of different risk management techniques.

For example, the Project Management Institute (PMI) and National Institute of Standards and Technology (NIST) have both put forward standards that devote significant space to the topic of risk management. The PMI standard of risk management (PMI-RMP®, or Risk Management Professional) includes some pretty extensive methods for identifying, quantifying and mitigating risk.

Much of the PMI-RMP standard can be considered a brief introduction to risk management. It doesn’t introduce quantitative analysis or provide any background of Judgement and Decision Making (JDM) theory. It does, however, provide a starting point, some kind of a baseline that we can use to at least make sure that our projects, programs and organizations are addressing risk management — at some level.

This is good, at least at first blush. But, unfortunately, when we dig deeper there could be a more subtle problem here: The practices advocated by PMI and NIST standards are, quite simply, apt to cause more harm than good.

Worse Than Nothing

There are decades of remarkable research in JDM and risk management theory. This research has produced an invaluable treasure trove of tools, processes and techniques that we can leverage to learn how to accurately and effectively assess risk across our organizations.

This same research has also largely debunked “crackpot” risk management theory and poor decision-making practices. For instance, Harvard Business Review led a study of over 200 popular management tools, like TQM, ERP and so on. Independent external reviews of the degree of implementation of each of these tools were compared to stakeholder return on investment over a five-year period. The resounding conclusion from this in-depth study, as reported by HBR, was that: “Our findings took us quite by surprise. Most of the management tools and techniques we studied had no direct causal relationship to superior business performance.”

But this shouldn’t be a surprise, at least not to anyone familiar with formal risk management and JDM theory. In research conducted over many decades, such as that of Brunswik, Kahneman, Hubbard and others, most of these recently introduced management practices have been exposed as ineffective and often even harmful.

Consider, for example, the principal method for quantifying risk in the PMI standard: a matrix-based weighted scoring system. This system advocates highly subjective risk assessment practices, such as relying on assessments made almost entirely by subject matter experts. Studies have shown that even well-trained experts — let alone the people that often serve as experts on review boards — tend to provide highly inconsistent and spotty assessment results. One study by Hubbard tested a group of experts on their ability to assess risk across a portfolio of projects. Unbeknownst to the participants, two of the assessed projects were identical — and, hence, we should expect identical risk assessments for the two. But that’s not what the study shows: Participants agreed with their own risk assessment only 22% of the time. The rest of the time, risk assessment varied widely, sometimes by as much as 35% for the same individual.
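
To make the criticism concrete, here is the kind of matrix-based weighted score such standards describe, in sketch form (the 1-5 scales are illustrative, not PMI’s exact tables):

// Each risk gets a subjective 1-5 likelihood and impact, multiplied into a score.
case class Risk(name: String, likelihood: Int, impact: Int) {
    require(1 to 5 contains likelihood)
    require(1 to 5 contains impact)
    def score: Int = likelihood * impact // 1 (negligible) to 25 (critical)
}

// Both inputs are expert guesses, so the resulting ranking inherits all of
// the inconsistency Hubbard's study measured.
val ranked = List(Risk("Vendor slips", 4, 3), Risk("Key hire quits", 2, 5))
    .sortBy(r => -r.score)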

Fixing It

Of all the fields that practice risk management, actuaries are the only ones that can claim a true profession. Actuaries, much like accountants, doctors and scientists, must demonstrate their ability to assess risk using scientifically proven methods. And, like other formal professions, an actuary puts her license on the line when certifying a Statement of Actuarial Opinion. As with doctors and lawyers, if she loses her license she can’t just get another job next door. The industry of risk managers, modelers and assessors outside of the insurance industry would be greatly served by this level of professional standards.

Likewise, organizations such as PMI and NIST should stop promulgating what amounts to crackpot risk management practices. Decades of extensive study have shown that the core principles of risk management integrated into the PMI and NIST standards simply do not work. Worse, in many cases these practices actually cause more harm than good. Scoring methods should be disposed of. Instead, standards should rely on existing bodies of proven risk management and JDM practices.

But in the meantime, attaining a greater awareness of the risks associated with bad risk management practice is our responsibility. Understanding what to look for in risk management, and consulting trained professionals that can employ statistical risk methods is a good starting point. At the very least, firms should consult with formally trained professionals — and look for empirical, statistics-based methods. Anyone proposing a weighted scoring system should be shown the door!

If you would like to learn more about risk management theory and practical methods of assessing and avoiding risk, see Hyrax International’s seminars on these topics. Attendees are welcome at public presentations. If you are interested in hosting a presentation at your firm, contact Hyrax International directly. Introductory seminars are offered at no cost.