By CYRIL TUOHY, managing editor of Risk & Insurance®
Forget economics being called the "dismal science." You want to dabble in a dismal science? Try catastrophe modeling. The best CAT modeling can do, indeed the best it has ever done, is to estimate losses from an earthquake or a hurricane. Even at its very best, CAT modeling only comes close. It offers a window of loss, from $10 million to $15 million, for example, or from $1 billion to $1.5 billion. That's not bad, but it's hardly good enough.
What risk managers really need from CAT models and modelers is for them to get the projected losses right, every time. Nobody said it was going to be easy, and CAT modelers, I grant you, may be working with a certain handicap.
Insurance data collection systems, bulk coding, incomplete exposure characteristics, inaccurate valuation of building contents and poor geocoding information all conspire to corrupt the accuracy of models. It's the classic garbage in/garbage out problem, and it has yet to be solved for insurance data.
Modelers know this, though, even before going into the business, so my vote is not to let them off the hook. My vote is to hold their feet to the fire, and come down hard on them when they are off base: lash them to a Miami dock in the midst of a Category 3 storm. How come so many models fell short in estimating the $12.5 billion in insured losses caused by Hurricane Ike, when it roared ashore in Texas last year?
Let's not talk about how Hurricane Katrina put everyone in the modeling community to shame--running up insured losses north of $41 billion when it slammed into New Orleans in 2005, way above anything the CAT community had ever even conceived of.
As every risk manager knows all too well, CAT models matter. They matter because the loss estimates they spit out are used by underwriters to price risk. Pricing risk eventually shows up in the premiums risk managers pay for their insurance policies.
The more Quaker Oats Co. pays for a commercial insurance policy, the higher the price of my Quaker Oats, and let me tell you, I get steamed by the high price of Quaker Oats. Cedants are similarly miffed when the CAT models are off base--either overestimating or underestimating catastrophe loss exposures, and thus the catastrophe reinsurance premiums paid.
The standard response from the CAT modeling crowd is that the models are improving. With every catastrophe, we're told, there's more data collected and parsed through ever more complex algorithms.
I don't buy it, not for a minute. In fact, just the opposite is happening. For every new storm, and for every new data point collected, estimating the losses at risk in the next one becomes more difficult. The more data available, the more difficult the accuracy challenge. Why is certainty so difficult for CAT modelers? The last catastrophic earthquake in the United States, for example, occurred in 1994, in Northridge, Calif. It toppled apartment buildings and buckled freeways, causing $12.5 billion in insured losses.
Since then there have been scores of upgrades to the models. Despite this, there has been no significant opportunity to evaluate claims data specific to the San Fernando Valley, where the earthquake struck.
Even where claims data has been more available, hazard data is still sparse. In Florida, for example, the modeling firms have had only about 80 historical hurricanes to work with, only 10 of which reached Category 4 or Category 5.
Since models must be able to simulate all potential wind-speed combinations, modelers extrapolate from the historical evidence to the full range of possibilities, but that is still not good enough.
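To make the sparse-data problem concrete, here is a deliberately simplified sketch (not any vendor's actual model) of how a stochastic event set might be drawn from a thin historical record like Florida's. The category counts below are hypothetical numbers chosen only to echo the rough proportions described above; real models use far richer physical and statistical machinery.

```python
import random

# Assumed, illustrative category counts: roughly 80 historical storms,
# only 10 of them reaching Category 4 or 5 (per the sparse record
# described in the text). These numbers are hypothetical.
HISTORICAL_COUNTS = {1: 30, 2: 25, 3: 15, 4: 7, 5: 3}

def sample_event_set(n_events, counts, seed=0):
    """Draw n_events synthetic storm categories, weighted by the
    historical frequency of each category."""
    rng = random.Random(seed)
    categories = list(counts)
    weights = [counts[c] for c in categories]
    return [rng.choices(categories, weights)[0] for _ in range(n_events)]

# 10,000 synthetic storms resampled from ~80 historical ones.
events = sample_event_set(10_000, HISTORICAL_COUNTS)
```

The catch, and the column's point, is visible in the code itself: a scheme that only resamples history can never produce an event outside the historical envelope. A storm whose size and intensity exceed everything in the record, as Katrina's did, simply is not in the simulated set.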
We now know, thanks to events of the last four or five years, that it is way too easy to underestimate Mother Nature's potential, and that the relationship between size and intensity, at least for Hurricane Katrina, surpassed that of all simulated events in the existing CAT model.
(To read the other side of the argument from Senior Editor Matthew Brodsky, click here.)
October 15, 2009
Copyright © 2009 LRP Publications