Hurricane Katrina is not yet over. While the first round of destruction created gruesome damage, the secondary and tertiary waves are wreaking havoc on our assumptions about uncertainty and risk management. Uncertainty, it appears, is not quite so certain as we had once supposed. Risk management, which we had assumed led to good decisions and therefore to favorable outcomes, might actually have nothing to do with favorable results.
In the aftermath of Katrina, the world started paying attention to the models that predict each year's likely frequency, severity and locations of hurricanes. At the beginning of 2006, those models were predicting a very bad year for those living in the Gulf region. Insurers quickly raised rates, financial markets poured billions of dollars of capacity into Bermuda and everyone battened down the hatches in anticipation of fierce tropical storms.
Somehow, in all of the preparation, someone forgot to tell the hurricanes what they were supposed to do. The season turned out to be mild, and many quickly blamed the models and the modelers. The real problem was that the models were understood by many to be clairvoyant--that somehow having a probability for an event dictated whether that event would actually occur.
For example, consider a coin toss. We know that the probability of the coin landing on heads is 50 percent. That does not mean, however, that if I flip the coin twice, it is certain to land once on heads and once on tails; in fact, two flips produce one of each only half the time. Even with a known probability, the outcome of a coin toss is uncertain.
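The coin-toss point can be checked with a quick simulation (a minimal sketch; the 50 percent figure comes from the article, the trial count and everything else here are illustrative):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

trials = 100_000
one_of_each = 0
for _ in range(trials):
    # Flip a fair coin twice; True counts as heads.
    heads = sum(random.random() < 0.5 for _ in range(2))
    if heads == 1:  # exactly one head and one tail
        one_of_each += 1

# Roughly half of all two-flip sequences give one head and one tail;
# the other half give two heads or two tails.
print(one_of_each / trials)  # close to 0.5, not 1.0
```

A known 50 percent probability, in other words, guarantees nothing about any particular pair of flips.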
Likewise, if there is a 50 percent chance each year of a monster hurricane hitting Houston, that does not mean one will definitely strike in the next two years. Two 50 percent chances do not add up to certainty; they still leave a one-in-four possibility of no strike at all. That was the mistake made with the hurricane predictions of 2006. A high likelihood of a bad hurricane season is not the same as the certainty of a bad hurricane season.
Anticipating near-certain catastrophic wind damage, many purchased expensive risk-transfer policies. When no damage materialized, it was inevitable that some would call this a bad decision: because money was spent on insurance that turned out to be unnecessary, the decision to buy it must have been a bad one.
Outcomes and decisions, in reality, have almost nothing to do with each other. Ron Howard, the Stanford professor who invented "decision analysis," makes this point on the first day of his freshman-level courses.
Suppose, he suggests, that you have given your teenage son the keys to your car and he decides to drive home drunk. The outcome is good--he arrives home safely. But no one would suggest that his decision to drive was a good one simply because the outcome was positive. Good decisions are based on probabilities, not outcomes.
In the face of overwhelming opinion that the 2006 hurricane season was going to be unusually bad, buying insurance, even at inflated rates, was probably a reasonable decision. That the hurricanes did not cooperate by causing a lot of insured damage does not make the decision any less reasonable. Nor does it make the decision to remain uninsured a particularly wise one.
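The decision-versus-outcome distinction can be made concrete with an expected-cost comparison. The numbers below are purely hypothetical (none appear in the article); the point is only that a purchase can be the better bet even in the years when no storm arrives:

```python
# Hypothetical figures for illustration only.
p_storm = 0.5            # assumed chance of a damaging storm this season
loss_if_hit = 1_000_000  # assumed uninsured loss if the storm hits
premium = 300_000        # assumed cost of the "inflated" policy

# Expected cost without insurance: probability-weighted loss.
expected_cost_uninsured = p_storm * loss_if_hit  # 500,000

# Expected cost with insurance: the premium, paid rain or shine.
expected_cost_insured = premium

# Under these assumptions, buying the policy is the better decision,
# even though in a storm-free year the insured party is out 300,000
# while the uninsured party pays nothing.
print(expected_cost_insured < expected_cost_uninsured)
```

Judged at the moment of decision, with only probabilities in hand, the purchase is sound; the mild season that follows changes the outcome, not the quality of the decision.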
Risk management requires that we peer into the vast uncertainty of the future and attempt to divine information about the likelihood of future events. Our decisions regarding risk treatment are based on these predictions. As long as it is understood that risk management deals with probabilities and not clairvoyance, there should be very little second-guessing. However, once the predictions are confused with certainties, risk managers become exposed to hindsight.
lives in Colorado and manages risk for Sun Microsystems Inc.
March 1, 2007
Copyright © 2007 LRP Publications