By KAREN CLARK, president and CEO of Karen Clark & Co.
The use of catastrophe models in financial decision-making has increased dramatically over the past 25 years; they have gone from nonexistent to ubiquitous. These models have had a profound impact on the insurance industry, not so much because of advances in science as because of advances in computing power.
Catastrophe models were first developed in the 1980s. While the science underlying the models was well known before then, it was only in the 1980s that computers became accessible and powerful enough to run the models in a timeframe useful for decision-making. Advances in computing capabilities enabled the catastrophe models to simulate thousands of potential scenarios, which meant they could provide information on the probabilities of losses of different sizes rather than just individual scenarios. The models could also account for the actual exposures of insurance companies and could superimpose the simulated events on relatively high-resolution property data to estimate losses.
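To make those mechanics concrete, here is a minimal sketch, in Python, of the kind of calculation described above. The three-event "event set," the ZIP codes, the damage ratios and the exposure values are all invented for illustration; no vendor's actual model works from numbers this simple. The point is only to show how superimposing simulated events on property exposure data yields probabilities of losses of different sizes rather than single scenarios.

```python
# Illustrative sketch only -- not any vendor's model. It uses an invented
# "event set" (each event has an annual rate and a damage ratio per ZIP code)
# and invented exposure values (insured value per ZIP code) to show how
# simulated events superimposed on exposure data yield loss probabilities.
import random

# Hypothetical stochastic event set: (annual_rate, {zip_code: mean_damage_ratio})
EVENT_SET = [
    (0.020, {"33139": 0.30, "33140": 0.22, "33141": 0.15}),  # severe landfall
    (0.050, {"33139": 0.12, "33140": 0.10, "33141": 0.08}),  # moderate landfall
    (0.150, {"33139": 0.03, "33140": 0.02, "33141": 0.02}),  # weak/bypassing storm
]

# Hypothetical exposure data: total insured value by ZIP code (dollars).
EXPOSURE = {"33139": 250_000_000, "33140": 180_000_000, "33141": 120_000_000}

def simulate_annual_losses(n_years: int, seed: int = 42) -> list[float]:
    """Simulate n_years of experience and return the loss for each year."""
    rng = random.Random(seed)
    annual_losses = []
    for _ in range(n_years):
        year_loss = 0.0
        for rate, damage_by_zip in EVENT_SET:
            # Approximate a Poisson(rate) occurrence with a Bernoulli draw,
            # which is reasonable because the annual rates here are small.
            if rng.random() < rate:
                year_loss += sum(EXPOSURE[z] * d for z, d in damage_by_zip.items())
        annual_losses.append(year_loss)
    return annual_losses

losses = simulate_annual_losses(100_000)
# Exceedance probabilities: the chance that annual losses exceed a threshold.
for threshold in (10e6, 50e6, 100e6):
    prob = sum(l > threshold for l in losses) / len(losses)
    print(f"P(annual loss > ${threshold/1e6:.0f}M) = {prob:.3f}")
```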
Catastrophe models are valuable because they provide a framework for bringing together scientific and engineering knowledge with respect to catastrophic events, property exposure data and policy conditions. Using catastrophe models, insurance companies can get reasonable estimates of their potential losses based on current property values. Companies can also get reasonable estimates of the relative risk by geographical area and type of business.
Before catastrophe models, property underwriters employed simplistic formulas and rules of thumb to make pricing and underwriting decisions. The first hurricane catastrophe model indicated that these simplistic approaches were underestimating hurricane loss potential by a factor of 10. Adoption of the models spread quickly after the 1992 hurricane season when Hurricane Andrew revealed the problems with traditional approaches and the much higher credibility of the catastrophe models.
In the early days of catastrophe models, when computing power was still relatively expensive, running even 1,000 event simulations could take several hours or even days, depending on the size of the portfolio. Given the limitations of available computing power, insurance companies primarily used the models to obtain estimates of overall portfolio losses. Because the models were so new, there was also a healthy dose of skepticism, and model users understood they were getting credible, but very approximate, estimates of their potential losses from catastrophes.
By the mid-1990s, a decade after the first catastrophe models were introduced, computing capabilities had increased more than fiftyfold. Model usage expanded to include ratemaking because model output could be produced by territory and even by five-digit ZIP code. Once the models could perform hundreds of thousands of simulations and produce output down to the individual location level, usage expanded to individual risk underwriting and pricing.
The catastrophe models became fully institutionalized in the late 1990s when A.M. Best started considering probable maximum loss (PML) estimates in its Best's Capital Adequacy Ratio (BCAR). These model-derived PMLs began to take on much greater significance in the determination of capital requirements and financial-strength ratings.
By the 2005 hurricane season, the pendulum had swung far in the other direction, from not enough use of models to overreliance on models. This was demonstrated clearly by the industry's surprise and dismay when the models did not accurately assess the losses from Hurricane Katrina. While they are clearly superior to the underwriting rules of thumb used in the 1980s, the catastrophe models are not precision instruments, but rather relatively blunt tools characterized by a high degree of uncertainty. The models provide rough estimates of potential losses and not precise answers.
Despite this fact, catastrophe model output frequently presents loss estimates down to the penny. This false precision has given model users a false sense of model accuracy.
Catastrophe models have become more detailed and complex, not so much because of increased scientific knowledge, but because increases in computing power have enabled model builders to add greater detail for both input and output. Advances in computing capabilities over the past 25 years have far outpaced advances in scientific knowledge with respect to catastrophes. Computing power has increased several orders of magnitude since the first catastrophe models were introduced. From a scientific perspective, what we know today about the frequencies and potential magnitudes of large events has improved somewhat, but has not changed dramatically since the first catastrophe models were developed in the 1980s. This is because scientists do not have much additional data to work with, and the catastrophe models are inherently constrained by the available scientific data.
So while the models can generate reams of numbers and produce those numbers to five decimal places, the model-generated loss estimates are highly uncertain and can easily be off by well more than 50 percent on the low or high side, particularly at the location level. So much focus on and investment in the models has significantly constrained the ability to apply reasonable underwriter judgment and credible adjustments to account for model weaknesses, such as in estimating business-interruption losses and the damage to specific property and occupancy types.
Today, rather than using the models as valuable tools, we have become slaves to them. Even if the models are off by a mile for certain types of risks in certain geographical areas, we are compelled to go with the models, because that is the only way to manage the rating agency-driven one-in-100 and one-in-250 PMLs and, therefore, capital requirements and financial-strength ratings. We now manage PMLs to the models rather than to reality.
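For readers less familiar with the terminology, the one-in-100 and one-in-250 PMLs are, in effect, quantiles of the simulated annual loss distribution: the losses exceeded with 1 percent and 0.4 percent annual probability, respectively. The sketch below shows one simple way such return-period losses could be read off a list of simulated annual losses like the one in the earlier example; the function and the toy numbers are illustrative, not any agency's actual methodology.

```python
# Sketch of reading return-period losses (e.g., the rating agency-driven
# 1-in-100 and 1-in-250 PMLs) from a list of simulated annual losses.
# Purely illustrative: the empirical-quantile approach and toy data below
# are assumptions for this example, not a prescribed methodology.
def return_period_loss(annual_losses: list[float], years: float) -> float:
    """Loss exceeded with annual probability 1/years (empirical quantile)."""
    ranked = sorted(annual_losses, reverse=True)
    # The 1-in-N loss is exceeded in roughly len(annual_losses)/N simulated years.
    index = max(int(len(ranked) / years) - 1, 0)
    return ranked[index]

# Toy simulated losses: 10,000 years, mostly loss-free, with a heavy tail.
simulated = [0.0] * 9_000 + [25e6] * 800 + [80e6] * 150 + [300e6] * 50
print(f"1-in-100 PML: ${return_period_loss(simulated, 100)/1e6:.0f}M")
print(f"1-in-250 PML: ${return_period_loss(simulated, 250)/1e6:.0f}M")
```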
As the developer of the first catastrophe model, I find it gratifying, on one hand, to see the models rise to such prominence in the industry. On the other hand, given what I know about how little data there is to support most of the model assumptions, and how sensitive the model results are to even small changes in assumptions, it is frightening.
The goal now is to find the right balance between using the models as a very valuable framework and improving the loss estimates with other credible information. We also need to start basing risk management decisions on credible, robust ranges of loss estimates rather than on ever-fluctuating, model-generated point estimates.
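As one simple illustration of what a range-based view might look like, and not a prescription for any particular methodology, the sketch below reruns a single point estimate under invented variations of key assumptions and reports the resulting spread. The base PML and the multipliers are assumptions made up for this example.

```python
# Illustrative sensitivity range around a single model-generated point estimate.
# The base PML and the assumption multipliers are invented for illustration;
# in practice the variations would reflect genuine uncertainty in event
# frequencies, damage functions and exposure data quality.
BASE_PML = 120e6  # hypothetical model point estimate, in dollars

ASSUMPTION_SCENARIOS = {
    "lower event frequency": 0.80,
    "model point estimate":  1.00,
    "higher damage ratios":  1.30,
    "exposure data gaps":    1.45,
}

estimates = {name: BASE_PML * mult for name, mult in ASSUMPTION_SCENARIOS.items()}
low, high = min(estimates.values()), max(estimates.values())
print(f"Point estimate: ${BASE_PML/1e6:.0f}M")
print(f"Range under varied assumptions: ${low/1e6:.0f}M to ${high/1e6:.0f}M")
```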
May 1, 2010
Copyright © 2010 LRP Publications