By KAREN CLARK, president and CEO of consultancy Karen Clark & Co.
Catastrophe modelers continually update their models, and each update is typically assumed to be an improvement, even when the modeled loss estimates swing widely, both up and down, from one update to the next. But a newer model is not always a better model, particularly if we define "better" as a model whose loss estimates are more credible, or closer to reality.
If we were getting closer to reality with each update, the variability and volatility in loss estimates would be decreasing, not increasing as we've seen recently.
A great deal of scientific research underlies catastrophe models, but very little settled scientific fact. The science is highly uncertain because of the paucity of data behind most model assumptions. There are many "unknowns" on which no amount of analysis of recent events can shed more light, even after tens of billions of dollars of claims data is incorporated.
For example, all the data from the hurricanes of the last few decades tell us very little about the probabilities of major hurricanes hitting the mid-Atlantic and Northeast regions, because so few storms have struck the coast there. In fact, scientists have reliable recorded information on maximum wind speeds over land for only two Northeast hurricanes, Gloria (1985) and Bob (1991), both Category 2 storms.
If scientists don't know how many major storms have historically struck these regions, how can they estimate the future probabilities? Even more problematic, how can a model pinpoint each insurance company's one-in-100-year probable maximum loss (PML) for hurricanes in these regions? It's easy to see how the model-generated PMLs can swing by 100 percent or more.
When there is so little data, the model assumptions are based on scientific guesstimates that can change significantly from model to model, and update to update. No authoritative source exists to say which set of guesstimates is better than another. If one model says a major hurricane strikes the Northeast once in 90 years, for instance, and another says once in 110 years, which is more accurate? If those same models are updated so that one now says 75 years and the other 125 years, which is better? All four of these estimates are credible and scientifically defensible, but they can lead to large differences in PML estimates.
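To make the arithmetic concrete, consider a deliberately simplified sketch in Python, with hypothetical numbers throughout and no resemblance to any vendor's actual model: treat major Northeast hurricane landfalls as a Poisson process with annual rate equal to one over the assumed return period, give each event a lognormal severity, and solve for the loss whose annual exceedance probability is 1-in-250. (A 1-in-100 PML is not even defined in this toy setup once the assumed event return period exceeds 100 years, which itself illustrates the point.)

    from math import exp
    from statistics import NormalDist

    MU, SIGMA = 0.0, 1.0     # hypothetical lognormal severity parameters
    TARGET_AEP = 1 / 250     # annual exceedance probability defining the PML

    def relative_pml(return_period_years: float) -> float:
        """Loss level whose annual exceedance probability equals TARGET_AEP.

        For rare events, AEP(L) is approximately rate * P(severity > L),
        so the PML is the severity quantile at 1 - TARGET_AEP / rate.
        """
        rate = 1.0 / return_period_years
        quantile = 1.0 - TARGET_AEP / rate
        z = NormalDist().inv_cdf(quantile)
        return exp(MU + SIGMA * z)

    for rp in (75, 90, 110, 125):   # the four "defensible" frequency estimates
        print(f"assumed return period {rp:>3} yr -> relative 250-yr PML {relative_pml(rp):.2f}")

Under these toy assumptions the relative PML runs from about 1.0 at a 125-year return period to about 1.7 at 75 years, a spread of roughly 70 percent driven by the frequency assumption alone, before any severity, vulnerability or exposure uncertainty is considered.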
WHEN UPDATES GO WRONG
Not only is a new model not necessarily a better model; a model update can go very wrong, particularly at the regional and local level. There are several reasons for this. First, while the models are good for assessing relative risk, they can only go so far in distinguishing high-risk from low-risk areas.
While Florida is clearly the region most exposed to hurricane losses, is Florida 200 or 300 percent more exposed than Texas? Is the Florida loss potential five, six or seven times that of the Northeast?
While California is clearly the most earthquake-exposed state, is Northern California more or less risky than Southern California, and by how much?
At higher resolution, it's even more difficult to distinguish the risk from one location to another because secondary causes of loss come into play, such as storm surge and liquefaction. These secondary causes of loss can drive the loss estimates in some areas, but their effects are difficult to quantify and model. Specific areas can be significantly under- or over-penalized by a model.
The second reason model loss estimates can go awry is that the catastrophe models have become overspecified: we are trying to model things we can't even measure. For hurricanes, wind speeds are estimated at individual locations using many assumptions about surface roughness and gust factors when, with very few exceptions, there are no recorded hurricane wind speeds for any of those locations. For earthquakes, ground motion is estimated at individual locations using many assumptions about soil conditions and other factors when there is no recorded data for the vast majority of locations.
High degrees of uncertainty surround all of these assumptions. The problem is compounded by the fact that the model loss estimates are highly sensitive to even small changes in the assumptions. For example, changing an assumption about surface roughness by just 10 percent can cause the hurricane loss estimates to change by over 50 percent. This is a fundamental problem with all the models.
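The sensitivity is easy to demonstrate with a toy calculation. In the Python sketch below, every number and functional form is an illustrative assumption, not any model's actual implementation: a gradient-level wind is reduced to a surface wind through a logarithmic roughness adjustment, a steep vulnerability curve converts wind to a damage ratio, and the roughness length is then perturbed by 10 percent.

    from math import log

    def surface_wind(gradient_wind_mps: float, z0_m: float) -> float:
        """Toy logarithmic wind profile: scale the gradient wind by roughness
        length z0, normalized to a 0.03 m open-terrain reference."""
        return gradient_wind_mps * log(10.0 / z0_m) / log(10.0 / 0.03)

    def damage_ratio(v_mps: float, v_onset: float = 25.0, v_total: float = 70.0) -> float:
        """Toy vulnerability curve: cubic ramp from damage onset to total loss."""
        if v_mps <= v_onset:
            return 0.0
        return min(1.0, ((v_mps - v_onset) / (v_total - v_onset)) ** 3)

    GRADIENT = 55.0                 # hypothetical gradient-level wind, m/s
    for z0 in (0.30, 0.33):         # base roughness length and a +10% perturbation
        v = surface_wind(GRADIENT, z0)
        print(f"z0 = {z0:.2f} m -> wind {v:.1f} m/s, damage ratio {damage_ratio(v):.4f}")

In this toy chain, the 10 percent roughness perturbation moves the surface wind by less than 3 percent but moves the damage ratio by roughly 30 percent; steeper vulnerability curves, or portfolios sitting near deductible thresholds, amplify the effect further.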
Finally, a model can be overcalibrated to one or two events. There is no average hurricane or earthquake. All catastrophes are unique, and the modeling companies have to decide what can be generalized and what is relatively unique about each actual event.
The Northridge earthquake, for example, occurred on a previously unknown fault and caused unexpected ground motion at many locations in the Los Angeles area. Scientists would caution against calibrating all future earthquake events to Northridge.
Hurricane Ike was a very unusual storm, particularly with respect to inland damage. Much of Ike's inland wind damage was caused by meteorological factors external to the storm itself. Occasionally, storms do move far inland, and this should be accounted for in the models. However, calibrating every storm to Hurricane Ike could lead to overestimating inland damage for most storms.
Because model users do not have full transparency into all of these model assumptions, the model-generated loss estimates should be fully vetted before being used for important underwriting and pricing decisions. The model loss estimates can go awry, but there are ways to detect anomalies and test the numbers to make sure they fall within credible and reasonable bounds. No one knows the right answer, but we can certainly weed out the very wrong answers.
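What might such a test look like? One simple first screen, sketched below in Python with hypothetical figures and thresholds, compares each model-generated PML against an independent benchmark, such as a historical event loss trended to current exposure, and flags results that fall outside a tolerance band:

    def within_band(modeled_pml: float, benchmark_pml: float, tolerance: float = 0.5) -> bool:
        """True if the modeled PML sits within +/- tolerance of the benchmark."""
        low = benchmark_pml * (1.0 - tolerance)
        high = benchmark_pml * (1.0 + tolerance)
        return low <= modeled_pml <= high

    # Hypothetical figures, in $ millions: a benchmark built outside the model,
    # the prior update's PML, and the new update's PML.
    BENCHMARK = 800.0
    for label, pml in (("prior update", 750.0), ("new update", 1600.0)):
        verdict = "within band" if within_band(pml, BENCHMARK) else "REVIEW: outside credibility band"
        print(f"{label}: {pml:,.0f} -> {verdict}")

A flagged estimate is not necessarily wrong, but it should not drive underwriting or pricing decisions until the assumptions behind the swing are understood.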
If a model update is not credible for a particular book of business, the update should not be used. Major shifts in underwriting strategy are disruptive to business goals and objectives. The test of a "better" model is whether its loss estimates are more credible for a particular book of business, not whether it reflects the latest research a modeling firm has to offer.
June 1, 2011
Copyright © 2011 LRP Publications