Earthquake models have a couple of things going for them. First, they have steadily improved thanks to better geological and claims data.
Though still limited, geologists' understanding of faults grows as they use ground sensors and GPS to monitor fault activity. Scientists "trench," or dig into, faults to uncover "paleoseismic" evidence of ruptures thousands of years old. The United States Geological Survey gathers what it's learned, as well as what the country's universities and regional earthquake centers have learned, and builds the consensus into its National Seismic Hazard Maps. Modelers build their products on top of the most updated version of the hazard maps.
"So the net result for the industry, in the background they have a tremendous amount of knowledge and expertise that has been distilled," says Andres Mendez, principal scientist, Aon Re Impact Forecasting.
Modelers also tap into claims and engineering data from past quakes, such as the 1994 Northridge and 1985 Mexico City events, to learn, for instance, how structures cope with shaking and how soil types influence it.
The modeling firms apply the lessons from the hurricanes of 2004 and 2005 to their earthquake products. They're tackling the issue of "loss amplification," which goes beyond the supply-and-demand issues of construction and labor post-catastrophe, the so-called demand surge. It accounts for civil commotion, "claims leakage" from the difficulties of mass adjusting, inabilities to discern which peril caused a loss, and the overall economic shock of multiple perils affecting multiple lines in a large, crowded geographical setting.
"That is something that we absolutely applied to the different models," says Don Windeler, earthquake practice lead for Risk Management Solutions Inc.
Insurers recognize these refinements to the models, and understand they're still the best tool out there for certain tasks.
"There's really not much else for a benchmark out there. So to have some kind of an answer that you can interpret to decide if you think that answer's high or low, is incredibly valuable as you evaluate the risk for your portfolio," says Jim TeHenappe, earthquake model expert with reinsurance intermediary Benfield.
But insurers realize the models' limits, even with their new loss amplification feature.
"I question whether what's in the model for this component is adequate, especially for longer return period events," says Dan Loris, senior vice president, property lines, at Zurich's North America commercial operation.
Insurers have learned that modeling is only part of the process when underwriting a risk. "Modeling is part of that, but not the only part of that decision," says George Stratts, senior property executive at Lexington.
Loris talks about assessing portfolio risks from earthquakes "holistically," supplementing the models with "our own real-life experiences." "The end result is that we create uplift factors, meaning we increase the model results by a factor that we think is more representative of the risk," he says.
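The uplift-factor adjustment Loris describes amounts to scaling raw model output by a judgment-based multiplier. A minimal sketch of the idea follows; the factor value, loss figures, and return periods are hypothetical illustrations, not actual Zurich parameters.

```python
def apply_uplift(modeled_losses, uplift_factor):
    """Scale raw model loss estimates by a judgment-based uplift factor.

    modeled_losses: dict mapping a return-period label to a modeled
    loss estimate; uplift_factor: multiplier > 1 reflecting the view
    that the model understates the risk.
    """
    return {event: loss * uplift_factor
            for event, loss in modeled_losses.items()}

# Hypothetical modeled losses (in USD millions) at two return periods
modeled = {"100-year": 250.0, "250-year": 480.0}

# Suppose the insurer judges loss amplification understated by ~15%
adjusted = apply_uplift(modeled, 1.15)
```

The adjusted figures, not the raw model output, would then feed pricing and portfolio decisions, consistent with the "holistic" approach described above.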
FM Global creates its own customized hazard maps to support its engineering and underwriting activities, says Louis Jacobs, assistant vice president, natural hazard perils.
Insurers are also working on a tool that would work like reinsurers' total sum insured, an exposure number for event levels that would take the frequency issue "off the table," says John Beckman, president at Carvill's ReAdvisory service.
And the unanimous mantra from industry sources when it comes to models and portfolio and property valuation data is: "garbage in, garbage out."
April 1, 2007
Copyright © 2007 LRP Publications