Today's forecast from the U.S. Geological Survey: 100 percent chance of an earthquake. Actually, thousands of them. Luckily for property owners and their insurers, chances are today's earthquakes will be so mild as to not even rattle a window or so remote as to not unsettle the daily routine of civilization.
Then again, today could be a hell of a day for the New Madrid temblor that levels both St. Louis and Memphis, Tenn. Or it could be the day for a freak magnitude-7 quake that rumbles through New York. Or, it could herald the much-anticipated San Francisco repeat, or the magnitude-9, tsunami-sending rupture off the Pacific Northwest. Why not? They're all within the realm of possibility.
If today is the day for the Big One, catastrophe models could end up like a tall bookshelf during an earthquake. Insurers won't want to stand by them, unless they want to risk their books of business lying in ruins after the shaking stops.
One problem is that modeling technology is too narrowly focused to capture the Big One.
"They're very specific to the lines of business that you enter into them," says John Beckman, president at Carvill's ReAdvisory service, which last year released a report called "Could an Earthquake Be the Next Katrina?" on the topic.
"These kinds of megacatastrophes have a way of affecting multiple lines of business beyond the ones you just put into your CAT model," he says. Insurers won't know which lines until it happens.
Losses for lines entered into the model could be off too. For instance, life, health and workers' compensation losses are modeled. But accurately?
Probably not, says Paul Nunn, head of exposure management at Lloyd's. The technology for these lines is a handful of years old and untested. And Nunn can already see problematic scenarios.
It is "almost going to be impossible" for models to predict the time of an event, he says. "If it happens at 9:30 in the morning versus lunchtime versus midnight, you're going to end up with very different levels of workers' compensation."
Another line of concern is business interruption.
"I think business interruption is somewhat getting at the soft underbelly," says Dan Loris, senior vice president, property lines, for Zurich's North America commercial operations in Schaumburg, Ill. "There are a lot of additional variables that come into play for business interruption that the model has a more challenging time predicting."
These additional variables--such as supply-line issues or damage to utility distribution networks--interact in more complex ways and hit harder the bigger the earthquake, says Nunn.
Enter contingent BI.
"Contingent BI is a special coverage that no models really account for," says Jayanta Guin, vice president of research and modeling at AIR Worldwide Inc. in Boston, "because you really have to look at the full comprehensive picture of the damage in order to estimate what the contingent BI losses are likely to be."
Part of the reason that modelers have trouble with contingent business interruption, he says, lies with insurers, who can't provide the necessary data on their insureds' supply chains.
Loris at Zurich agrees. "The model is only as good as what goes into it. Does that information adequately represent contingent business income loss potential?" he says. "Do we know and can we quantify exactly all of the contingencies that may occur to an operation in Illinois affected by a client that suffers an earthquake in California?"
Or consider, as ReAdvisory did in its report, a quake shutting down the port of Oakland, Calif., which is the nation's fourth busiest for containers.
Tom Larsen, senior vice president, product management, at Oakland-based modeling vendor Eqecat Inc., envisions an even more far-reaching scenario: a 100-year California earthquake that costs 11 percent of the gross state product and sends the entire state into a recession. In 2005, California represented about 7 percent of the entire U.S. gross domestic product.
Where will the business interruption end? Not at the models.
Insurers could also find business lines unknowingly exposed to the Big One's collateral damage: secondary perils that domino out of control after being triggered by the primary event.
For example, fire-following. Many states, including California, obligate primary fire insurers to cover damage from that peril no matter the cause, with very few exceptions. For insurers like Zurich, which has a "vast number" of small-business and middle-market policyholders with all-risk policies and no stand-alone earthquake cover, the models could underestimate this exposure if a scenario similar to the famous 1906 San Francisco earthquake unfolds, Loris says.
California statutes would require fire carriers to pay out for some portion of certain claims, says Loris. The models might miss this because they are typically more focused on "adding a load" to account more for fire-following in policies with earthquake cover, Loris says.
"The load may not fully account for potential exposure on those policies that don't include earthquake to begin with, but are in harm's way," he says.
Loris also foresees civil disturbance as a secondary peril after the Big One, just as it was an issue after Katrina.
Angry citizens with Molotov cocktails and stolen televisions in their hands pose questions models might not have answers for.
Which line pays the loss and how? Is it under the all-risk quake sublimit? A difference-in-conditions policy? Is it a second occurrence?
"You've got a lot of different hypothetical situations that you have to address," Loris says. "The end result is that oftentimes, the model isn't going to contemplate all of those. History has shown there is additional exposure there."
Chalk up flood damage from sprinkler leakage or, say, a tsunami or a broken dam as "additional exposure." Sure, these last two secondary perils are more likely to appear in a Hollywood hack's script. But they're possible in real life. AIR's Guin recalls the 1971 San Fernando earthquake, when the Lower Van Norman Dam's embankment partially collapsed but, luckily, the dam was not breached.
Or there's the Pacific Northwest earthquake of circa 1700 A.D. Scientists discovered this event only after uncovering Japanese historical accounts of an "orphan" tsunami that arrived with no local earthquake to announce it. They put two and two together with dead forests and sediment patterns in America dated to the same time.
The largest quake in North America in recorded history, the 1964 Alaska quake, generated tsunamis throughout the Pacific, with wave heights reaching an estimated 219 feet in Shoup Bay.
The infrequency of such events, though, is the reason modelers most often cite for leaving them out of their earthquake products. That, and tsunamis aren't easy to model. "It's a fairly complicated process to model," says Don Windeler, earthquake practice lead for Risk Management Solutions Inc. "That's not an excuse. We deal with hard problems all day."
To be fair, modelers are working hard on difficult problems. From all accounts--and not just from the modelers' accounts--earthquake models are still the best tool out there. They're always improving, doing the best they can with limited data. There are far fewer actual events with which to test and refine models relative to, say, hurricanes. Whatever weaknesses models have, their creators are up-front about it.
"We build them to be as accurate as the data allows--but those are areas where there's less confidence just because we don't have as much track record with it," says EQE's Larsen.
Also, the point's been driven home: the models aren't completely to blame for these weaknesses. Insurers haven't been inputting the best data into them. Carriers now accept responsibility for this, regret their knee-jerk attack on models after Katrina, and understand that tools beyond models are available, and needed, to underwrite catastrophes.
Apologies to models aside, they don't succeed at their prime mission. Their goal is to turn earthquake risk into any other business risk, says Larsen--to "translate an abstract 'oh, it could be bad,' to 'how often is it going to occur?' "
Nor might models succeed at this mission any time soon. Their foundation is built on unsettled silt, rather than bedrock. How frequently will a big bad earthquake happen? Where? Scientists cannot say with precision, so neither can the models using that science, and so neither can the insurers when pricing their risks based on those models.
AIR's Guin admits the models' faulty frequency underpinnings:
"There are higher levels of uncertainty in those estimates that compare to other perils like hurricanes," says Guin. "We might have a good understanding of the average frequency of magnitude-7 earthquakes, but we cannot say that by individual faults."
He uses California as an example: "For the entire region of California, with a lot of confidence we can say a magnitude-7 earthquake is likely to occur once every 10 years, for example. But within California, which fault is likely to experience a magnitude 7 is extremely difficult with today's state of the science."
Mind you, California is one of the world's most studied seismic regions. The rule seems to be: the less that is known about a fault's return rate, or frequency, the worse a quake there could potentially be.
"The things with high frequency, we're much more confident," says Steve Jakubowski, executive vice president, chief operating officer, Aon Re Services, "and the things with long return periods, there's more debate in the scientific community."
The New Madrid Seismic Zone has gotten plenty of attention lately. When the last monster occurred there in 1811-1812, legend has it the Mississippi River reversed course, islands disappeared, and church bells rang in Boston and Philadelphia.
Some argue that New Madrid belongs on the list of "well-known" earthquake risks, but many say it is still a question mark. Experts aren't sure how big it could be.
"Problem is, we don't have enough earthquakes in that area to really understand," asks Larsen, "is it really going to cause damage in Boston or Philly?"
Then there's the monster's return period. "There is still a significant amount of difference out there in academic circles as to what the actual probability is," says Nunn.
What's a little disagreement among brainiacs?
"A small change in that frequency would greatly affect what's one in a hundred," says Beckman at Carvill.
In other words, small changes could greatly affect how insurers understand and price risk there.
Windeler explains how the USGS once considered New Madrid a 1,000-year event, but after uncovering further evidence of prior activity, it readjusted the frequency in its latest hazard maps to a 500-year event--double the likelihood.
Some experts contend that the return period could be every 200 years--quintuple the probability.
Is that a small change?
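The stakes of that disagreement can be made concrete with a little arithmetic. The sketch below is illustrative, not drawn from the article's sources beyond the 1,000-, 500- and 200-year figures quoted above, and it assumes a simple Poisson (memoryless) occurrence model, a common simplification in hazard work.

```python
import math

# Illustrative sketch (assumption: Poisson occurrence model), showing
# how a shift in an assumed return period moves the probabilities an
# insurer might price against. The 1,000-, 500- and 200-year figures
# are the estimates quoted in the text.

def annual_probability(return_period_years: float) -> float:
    """Annual chance of at least one event under a Poisson model."""
    return 1 - math.exp(-1 / return_period_years)

def probability_within(return_period_years: float, horizon_years: float) -> float:
    """Chance of at least one event within a given time horizon."""
    return 1 - math.exp(-horizon_years / return_period_years)

for rp in (1000, 500, 200):
    print(f"{rp:>5}-year event: annual p = {annual_probability(rp):.4f}, "
          f"p within 30 years = {probability_within(rp, 30):.3f}")
```

Halving the return period roughly doubles the annual probability, and over a 30-year horizon a 200-year event becomes nearly a one-in-seven proposition rather than one-in-thirty. Small change in the input; very different picture of the risk.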
Lesser-known areas include the Northeast and the Southeast, where even a relatively small quake could cause widespread property damage because building codes there generally fail to take earthquakes into account.
"So future earthquakes could cause lot of damage in these regions, but again, it's uncertain because we cannot very accurately say what is the frequency of these future earthquakes," says Guin. "But the risk is certainly there."
Yes, risk is there--even when experts don't know it. The Puente Hills fault just underneath Los Angeles--which could cause upward of $140 billion in insured losses according to AIR estimates--wasn't discovered until 1999. What other undiscovered faults are out there?
"There are earthquake exposures almost everywhere," says Louis Jacobs, assistant vice president, natural hazard perils, at FM Global.
We'll have to wait and see where. And so will the models.
"There will be inevitably things that we learn as an industry, and the CAT modeling of the sector learns, following an earthquake," says Nunn.
Gee, can't wait.
MATTHEW BRODSKY is associate editor of Risk & Insurance®.
April 1, 2007
Copyright © 2007 LRP Publications