Reinsurers Tackle Catastrophe Risk Management With Data Quality
By NAVID ZARINEJAD, senior actuarial advisor in the Bermuda office of Ernst & Young Ltd, and TRISH CONWAY, actuarial advisor in Ernst & Young LLP's Financial Services Office in New York
A major lesson of Hurricane Katrina is the importance of basing catastrophe risk measurement practices on accurate property exposure data. In response, primary insurers have assessed the quality of their property exposure data and strengthened controls around data collection and management. This is not an overnight fix, however.
Three years later, the industry must continue to focus on improving its data, its models and the alignment between the two. This continuous process of improvement and learning is transforming catastrophe risk management as fundamentally as the credit scoring and direct distribution approaches that emerged several years ago.
IMPACT OF POOR EXPOSURE DATA QUALITY
From an insurance risk measurement perspective, the most dramatic revelation of Katrina was how incomplete and inaccurate the data were that commercial and residential insurance brokers and carriers relied on to estimate losses. These flawed inputs skewed modeling results, leading to an underestimation of both insured values and losses.
The data quality issue is exacerbated by carriers' data aggregation practices. Companies often aggregate customer exposure data by ZIP code rather than by the longitude and latitude of specific properties. Hurricane losses, for example, can differ sharply between the point of coastal landfall and locations even a mile inland, or further up or down the coast, making ZIP codes an unreliable basis for loss estimation.
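The resolution lost to ZIP-level aggregation can be illustrated with a minimal sketch in Python. The coordinates below are entirely hypothetical; the point is that two properties assigned to the same ZIP centroid may sit at very different distances from a landfall point:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))  # 3956 = approximate Earth radius in miles

# Hypothetical landfall point and two properties sharing one ZIP code.
landfall = (29.30, -89.60)
zip_centroid = (29.35, -89.55)   # where ZIP-level aggregation places both risks
property_a = (29.31, -89.59)     # actually near the coast
property_b = (29.41, -89.50)     # actually several miles inland

for name, loc in [("ZIP centroid", zip_centroid),
                  ("Property A", property_a),
                  ("Property B", property_b)]:
    print(f"{name}: {haversine_miles(*landfall, *loc):.1f} miles from landfall")
```

Run against these sample points, both properties would be modeled at the centroid's distance from landfall, even though one sits roughly a mile away and the other roughly ten.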
Because the most significant data-related issue driving loss models is the calculation of the insured value of specific properties and structures, the models place strong emphasis on detailed property variables. This challenges underwriters to improve the accuracy and completeness of data on insured properties, including the following (a schematic exposure record is sketched after the list):
--Year of construction: understanding the implications of how building codes have evolved and obtaining information about structure deterioration or retrofitting and renovation.
--Construction type: distinguishing occupancy versus nonoccupancy classes.
--Multiple use and multiple occupancy: factoring data about resorts, golf courses, corporate campuses and other properties that contain multiple structures and businesses.
--Location geocoding: differentiating site risk with more granularity based on latitude and longitude.
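The level of detail these variables imply can be made concrete with a minimal Python sketch of an exposure record. The field names are illustrative only, not an industry schema:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class PropertyExposure:
    """One insured structure; None marks a field the underwriter could not verify."""
    latitude: Optional[float]         # geocoded site location, not a ZIP centroid
    longitude: Optional[float]
    year_built: Optional[int]         # indicates which building code applied
    construction_type: Optional[str]  # e.g. masonry, wood frame, steel
    occupancy_class: Optional[str]    # e.g. residential, commercial, mixed use
    total_insured_value: Optional[float]

def completeness(record: PropertyExposure) -> float:
    """Share of fields actually populated -- a crude data-quality score."""
    vals = [getattr(record, f.name) for f in fields(record)]
    return sum(v is not None for v in vals) / len(vals)

r = PropertyExposure(29.31, -89.59, 1978, "wood frame", None, 450_000.0)
print(f"Record completeness: {completeness(r):.0%}")  # -> 83%
```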
More reliable and detailed data provide a more accurate snapshot of the vulnerabilities of structures and commercial operations. Perhaps the greatest advantage, though, is that improved quantification enables better decision-making by owners, insurers and reinsurers.
THE ACHILLES HEEL: SETTING CONTROLS
Commercial carriers are aware that the property exposure data collection component of their underwriting process needs improvement. Both independent brokers and underwriters need more training on, and insight into, the purpose and importance of the property data they collect, particularly in relation to catastrophic events. There is sometimes a lack of understanding of how structural details affect the severity of risk.
Modelers share the challenge of maintaining the completeness and currency of the independent data upon which their models rely and of updating the total insured values of the insured properties. Some models lack the ability to red-flag potential data inconsistencies or other data-related problems.
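A red-flag check of the kind described here can be sketched in a few lines, reusing the hypothetical PropertyExposure record above. The thresholds are illustrative assumptions, not model vendor rules:

```python
def red_flags(record: PropertyExposure) -> list:
    """Return human-readable warnings for suspect exposure data."""
    warnings = []
    if record.year_built is not None and not (1800 <= record.year_built <= 2008):
        warnings.append(f"implausible year built: {record.year_built}")
    if record.total_insured_value is not None and record.total_insured_value <= 0:
        warnings.append("non-positive total insured value")
    if record.latitude is None or record.longitude is None:
        warnings.append("missing geocode; risk may be placed at a ZIP centroid")
    return warnings

for w in red_flags(PropertyExposure(None, None, 1850, "masonry", "commercial", 0.0)):
    print("RED FLAG:", w)
```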
HIGH-QUALITY DATA STILL HAS ITS REWARDS
In early 2008, Ernst & Young surveyed leading reinsurers to determine how the quality of property catastrophe data affects reinsurance underwriting decisions. Participants included domestic and offshore companies, both pure-play catastrophe writers and diversified carriers, covering commercial and residential property exposures. The majority of respondents were chief underwriting officers or heads of the property underwriting or catastrophe modeling units at their companies.
The survey found that, while nearly all reinsurers applied surcharges to compensate for data quality deficiencies, many were also open to rewarding companies for better quality data. In fact, the vast majority agreed that if a cedant used strong data collection, enhancement and maintenance controls, the risk would be more attractive to them.
Additionally, seven in 10 companies said they would consider both extending additional capacity and offering premium credits of 5 percent to 15 percent to a cedant that could demonstrate strong controls and policies via an independent report.
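As a rough illustration of what that credit range means in dollar terms, this sketch assumes a purely hypothetical $10 million ceded premium:

```python
gross_premium = 10_000_000  # hypothetical ceded premium, USD
for credit in (0.05, 0.15):
    print(f"{credit:.0%} credit saves ${gross_premium * credit:,.0f}, "
          f"net premium ${gross_premium * (1 - credit):,.0f}")
```

At those rates, a cedant able to demonstrate strong controls would save between $500,000 and $1.5 million on the assumed placement.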
The insurance industry is responding to the call for better data quality. Companies are making upgrades to underwriting systems, strengthening internal controls and seeking ways to ensure the effectiveness of their data controls, including through third-party assurance services that provide:
--Assessment of current controls including review of data quality standards and sample data testing against those standards.
--Evaluation of catastrophe-related broker and underwriter training.
--Evaluation of modeling assumptions, both in relation to the exposure data used and with regard to the hazard itself.
--Evaluation and validation of submissions to reinsurers, including verification that exposure data utilized were subjected to rigorous collection and review practices.
--Evaluation of proposed reinsurer premium risk loads.
Across the financial services industry, it is recognized that independent assurance of the data collection and management controls that companies use in compliance and reporting strengthens the quality of their data and the credibility of their reporting.
Applying similar assurance discipline and consistency to catastrophe exposure data and risk management processes could significantly improve the decision-making and pricing of insurers and reinsurers.
October 15, 2008
Copyright © 2008 LRP Publications