They blame you, risk managers. When insurance and reinsurance companies get together with their catastrophe modeling vendors, they say your property data is to blame for a lot of the inaccuracy (they say "uncertainty") in the models.
One such catastrophe lovefest just happened in February: the Reinsurance Association of America's Catastrophe Modeling 2008 event in Tampa. If you had gone, risk managers, you'd have heard people like Paul VanderMarck, the blond-haired and outspoken chief products officer at the largest modeling vendor, Risk Management Solutions Inc., say things like, "You look at data schedules coming in today, it's astonishing."
"What's clear to everyone: You have to fix the problem upstream," he says.
Upstream. That's you, risk managers.
VanderMarck likes to say how RMS now employs 80 full-time analysts in India just to do data-cleaning. They scrubbed and prepped 10,000 data schedules for input into CAT models in the last year alone. VanderMarck estimates the work included data from about two-thirds of the Fortune 500.
Don't take his word on the extent of the data garbage floating downstream? In an as-yet unpublished survey of reinsurers by Ernst & Young, more than 60 percent of the 12 respondents said they are very concerned about the quality of data from large commercial policyholders.
"It's logical for reinsurers to be much concerned in the large commercial area," said the study's author and actuary, Navid Zarinejad.
Even your own brokers run their hands through their hair (if they have some) at the extent of the problem.
Ben Fidlow, a principal with Integro Insurance Brokers and head of the firm's actuarial and analysis practice, figures as many as half of big commercial accounts have "glaring holes" in the property information provided to insurers. Three out of four have "serious" data issues.
Or take a case George Davis, vice president at modeler AIR Worldwide Corp., has reported: in a recent Montpelier Re study of 18 portfolios from 11 different insurers, "every one of the portfolios showed evidence of being underinsured."
The underlying current here is not just that you're flowing Swiss-cheese data toward insurers and reinsurers. You're passing off bad property risk management too.
"The real focus should be insureds better understanding their risk," says VanderMarck, in such a way that convinces you he knows what he's talking about after handling 10,000 property schedules in the last year.
HOLD UP, WAIT A MINUTE
Did he just say that risk managers don't know what they're really doing when it comes to catastrophe exposure? Are there risk managers out there who really don't have a handle on all of their real estate? Who let some facilities slip through the cracks of a property schedule?
It seems that modelers are realizing a big dirty secret: Insurers have been providing property cover to policyholders based on this incomplete and inaccurate data for years. But now that modeling is part of the "fabric" of the industry, as modelers like to say, it's time the industry clean up its act.
A slight change in a property's construction type or a missing street address can greatly affect the tools' output. For instance, entering steel frame versus light metal frame as the construction type can open up a $50 million difference in loss exposure, depending on whether you're talking quake or 'cane risk, says Bruce Norris, senior vice president and head of modeling at HRH. And if you give a modeler only a zip code instead of an actual street address for a facility, the flood exposure estimates can be worlds apart.
"Literally across the street can be a different flood zone," says Norris.
In a sample test of 3,000 hypothetical properties done by AIR, loss estimates fell by as much as 20 percent when occupancy data was refined from "unknown" to reflect that the facilities were office buildings, says Davis.
Bad data has costs all down the stream. According to the E&Y survey, about 70 percent of the reinsurers said they add a 20 percent charge on cedents with perceived bad data. Some said they won't deal with those cedents at all, reports Zarinejad.
Still, risk managers seem to bristle at the finger pointed at them because their data is inadequate.
"The main problem is that the underwriters have unrealistic expectations for what their policyholders are capable of producing," says risk management consultant John Dempsey of Dempsey Partners LLC. "They provide very little guidance, meaningful guidance, as to the accuracy of the numbers, the basis of the numbers, that the policyholders are expected to provide."
With their focus on ever more technical detail, the modelers' expectations are even more unrealistic, and their guidance even worse.
How can they expect a risk manager at a large multinational with 6,000 to 7,000 properties to provide soil compaction types under each, or to know the distance between nails on each of their roofs?
Risk managers are keeping track of newly acquired properties, says Dempsey, and passing that along to insurers in a timely way. "Most insureds are very, very diligent about reporting," he says.
And modelers agree that some risk managers get it. "A lot of risk managers at major corporations are working to get a handle on their catastrophe risk," says Richard L. Clinton, president of Eqecat Inc.
Take one of Norris' clients, an 800-store retailer he helped with a "huge push" to send engineers to every building for detailed property data.
VanderMarck relates how his firm is doing work with "leading thinkers" in the corporate risk management world. He talks of firms that have hardened facilities and improved business-continuity plans, then turned back around with their data and modeling results to prove to underwriters that they are good risks.
"The mechanism the industry has to recognize that is catastrophe models," says the RMS executive.
But what of unsophisticated risk managers? They seem to be the majority still. "A lot of risk managers haven't kept up with the curve," agrees Integro's Fidlow. Too often these folks don't even provide complete addresses and replacement values for properties.
These essential data points are "ignored," says Norris. "It's not taking the time to do it because everyone is in a rush," he explains.
BLAME GAME: NO WINNERS
But Norris and others in the industry believe the blame shouldn't simply be heaped on these busy risk managers. Blame for dirty data can be dished all around, and fixing the problem will likewise require everyone's help.
Brokers, says Norris, have been "spotty" in helping clients. Norris sees his role as helping to "train" risk managers on how to feed and use models, and on how to "stop the train" when numbers pouring in and out of the models don't look right. Data will never be perfect, the HRH executive says, but at the very least he makes sure clients have good numbers for the big risks in their portfolio, such as distribution centers or headquarters.
As for insurers, who repeat the "upstream" mantra as well as anybody, they aren't exactly pushing insureds to deliver the data. Especially now in the current market, the pressures on carriers actually encourage less data collection, says Karen Clark, founder of AIR and industry bright light who left the modeler to start her own company.
Data collection and cleaning is costly, and demanding tighter data controls on customers could actually drive them to competitors that write without good data. "The trend is toward the lowest common denominator," says Clark.
One of the main pushes of Karen Clark & Co. is to come up with rigorous standards that could be applied to how insurers use catastrophe exposure data, both the input and output. She believes that change will occur when outside forces, like ratings agencies and investors, start wielding such a standard to measure one insurer against another.
"The companies with high-quality data will get recognition of that," she says, "and then it will become not competitive to have bad data."
Only then will it perhaps be possible for carriers to get as aggressive about the upstream issue as modelers would like. Risk managers with bad data will get singled out; underwriters will refuse capacity for their spotty schedules.
Speaking of those modelers, blame rests also with them, says Clark. The reason: They've failed to agree on one standard and open data format. "This is something that not only will lead to improved data quality, but they're also going to save their own clients millions of dollars in their own staff time," says Clark. Until they accept responsibility on this point, she says, "how can the modelers be taken seriously on the data quality issue?"
And they, and insurers, can also be blamed for the lack of guidance that Dempsey complained about earlier. Parties involved, Clark says, should come up with a "middle ground" of data requirements--with modelers and insurers limiting data demands to seven or eight points, not the 30 to 40 currently sought, and then clarifying to risk managers what exactly these points entail. Norris sees a good start in just getting insureds to solidify the two primary characteristics: location and value.
In the meantime, billions of dollars in decisions are made on this drift of bad data. "It's kind of a scary situation out there," says Clark.
is senior editor/Web editor of Risk & Insurance®.
April 1, 2008
Copyright © 2008 LRP Publications