The Global Financial Crisis: Lessons From a Parallel Modeling Universe?
By HEMANT SHAH, CEO of Risk Management Solutions Inc. (RMS)
Following Hurricane Andrew, a review of the pre-1992 mentality concluded that "risk evaluation in Florida was a textbook example of a collective misevaluation, a denial, of risk." Today, a similar observation can be made of the wider wreckage resulting from the global financial crisis.
Yet since Andrew, the global insurance industry has adopted far more sophisticated catastrophe models that describe losses well beyond the historical record. Modelers and insurers can take some confidence in the progress made in understanding and managing catastrophes, even as we shake our heads in disbelief at the financial sector's seemingly reckless disregard for the accumulation and global realization of systemic risk.
It is not that simple, however. The crucial questions asked about financial models in the wake of the banking crisis are relevant to the insurance industry as well, and modelers everywhere have an opportunity to rise to the challenge of addressing them.
THE UPSIDE: THE DIFFERENCE
While the recent crisis offers certain universal lessons, there are crucial differences between catastrophe modeling and financial modeling. Catastrophe models combine science, engineering knowledge, and hazard and claims data to help insurers and reinsurers make informed decisions. Whereas value-at-risk models are robust only within modest deviations from expected losses, catastrophe models are explicitly designed to quantify the fat tail of exposure correlation, measuring risk far beyond probable maximum losses.
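To make that distinction concrete, consider a deliberately simplified sketch in Python. Every parameter below, from the Poisson frequency to the lognormal severity, is an invented assumption rather than output from any real model; the point is only to contrast a mid-range quantile with explicit tail measures:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented annual loss model: Poisson event counts, lognormal severities.
# All parameters are illustrative assumptions, not calibrated to any
# real portfolio.
n_years = 100_000
counts = rng.poisson(lam=2.0, size=n_years)
losses = np.array([rng.lognormal(mean=1.0, sigma=1.5, size=k).sum()
                   for k in counts])

def var(sample, p):
    """Value at risk: the loss level exceeded with probability 1 - p."""
    return np.quantile(sample, p)

def tvar(sample, p):
    """Tail value at risk: the average loss once the VaR level is breached."""
    return sample[sample >= var(sample, p)].mean()

for p in (0.95, 0.99, 0.996):  # 0.996 is roughly a 1-in-250-year view
    print(f"p={p:.3f}  VaR={var(losses, p):9.1f}  TVaR={tvar(losses, p):9.1f}")
```

The widening gap between the two measures at higher percentiles is precisely the tail risk that a single mid-range quantile conceals.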
One reason investors were unaware of how risky certain assets were was the sheer lack of information about them. So little underlying exposure data was attached to mortgage-backed securities and collateralized debt obligations that much uncertainty about their actual value remains even now. Banks and investors relied on the credit ratings assigned to these assets rather than performing their own due diligence.
By contrast, there is far more transparency behind the data that goes into modeling in the insurance industry. Using detailed exposure data, catastrophe models quantify both the risk to each insured location and how these risks correlate across a wide range of simulated events.
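As a hedged illustration of that event-based structure, the sketch below invents a tiny exposure set, three event "footprints," and annual occurrence rates; none of these numbers comes from a real model. The structural point is that locations lose money together because they share events, which is where portfolio correlation comes from:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy exposure data: insured values at four hypothetical locations.
values = np.array([5e6, 2e6, 8e6, 3e6])

# Invented stochastic event set: each row gives one simulated event's mean
# damage ratio at each location (a made-up "footprint", not real hazard data).
footprints = np.array([
    [0.20, 0.15, 0.02, 0.00],  # event striking locations 1 and 2 hard
    [0.00, 0.01, 0.30, 0.25],  # event striking locations 3 and 4 hard
    [0.10, 0.08, 0.12, 0.09],  # broad, moderate event
])
rates = np.array([0.02, 0.01, 0.05])  # assumed annual occurrence probabilities

n_years = 50_000
year_losses = np.zeros(n_years)
for fp, rate in zip(footprints, rates):
    occurs = rng.random(n_years) < rate                 # years the event happens
    noise = rng.lognormal(0.0, 0.3, size=(n_years, len(values)))
    loc_losses = occurs[:, None] * fp * noise * values  # per-location losses
    year_losses += loc_losses.sum(axis=1)               # correlated via shared events

# Exceedance probabilities: chance that annual portfolio loss tops each level.
for level in (1e6, 2e6, 3e6):
    print(f"P(annual loss > {level:,.0f}) = {(year_losses > level).mean():.4f}")
```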
Additionally, the knowledge we have about natural catastrophes is real, built upon a mix of historical forensics dating back decades and centuries, physical theory, computational and laboratory experimentation, and empirical data from actual events.
LESSONS FROM THE DOWNSIDE
With catastrophe modeling, the insurance industry is better equipped to manage rare tail events than the banking sector proved to be. But now is not the time for complacency or self-congratulation. Lessons from the financial crisis should reinforce and reinvigorate efforts already underway to improve how models are designed and used. The industry has come a long way since the early days of catastrophe modeling, but there is still a lot to learn.
Catastrophe models are an approximation of a complex suite of phenomena, and our knowledge of the frequency of the more catastrophic manifestations of events like hurricanes, windstorms, earthquakes and floods is laden with uncertainties. With each event, new knowledge is revealed that must be prudently incorporated into the models. Large events can also reveal unforeseen correlations; exposures previously assumed to be independent are often coupled when a supercatastrophe occurs, much as the banking sector has recently discovered.
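A toy calculation can make that last point vivid. In the sketch below, every distribution and probability is an assumption chosen only to demonstrate the effect: two loss streams that look weakly correlated across all years become tightly coupled in the worst 1% of years, because a rare common shock dominates the tail.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated years

# Two loss streams, independent in ordinary years, plus a rare common
# shock standing in for a "supercatastrophe". All numbers are invented.
a = rng.lognormal(0.0, 0.5, n)
b = rng.lognormal(0.0, 0.5, n)
shock = (rng.random(n) < 0.001) * rng.lognormal(1.5, 0.5, n)
a, b = a + shock, b + shock

total = a + b
tail = total > np.quantile(total, 0.99)  # the worst 1% of years

print(f"correlation across all years: {np.corrcoef(a, b)[0, 1]:.2f}")
print(f"correlation in the worst 1%:  {np.corrcoef(a[tail], b[tail])[0, 1]:.2f}")
```

In this toy setup, the two streams look weakly correlated over all years yet strongly coupled in exactly the extreme years that matter most.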
While the quality of data has improved, many challenges remain. The industry needs more systematic efforts to measure objectively whether information is fit for use, and to determine to what extent uncertainties and inaccuracies in the data impact the quantification of risk.
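What such a measurement might look like in miniature is sketched below; the fields, weights and penalties are invented stand-ins for a real data-quality standard, and the result is nothing more than a crude fitness-for-use score per exposure record:

```python
import pandas as pd

# Toy exposure records; fields and values are invented for illustration.
exposures = pd.DataFrame({
    "value":        [5e6, 2e6, None, 3e6],
    "construction": ["masonry", None, "wood", "steel"],
    "year_built":   [1985, 2001, None, 1950],
    "geocode_res":  ["address", "postcode", "city", "address"],
})

# Share of key underwriting fields populated per record.
completeness = exposures[["value", "construction", "year_built"]].notna().mean(axis=1)

# Penalize coarse geocoding, since location drives hazard lookups.
geo_penalty = exposures["geocode_res"].map({"address": 1.0, "postcode": 0.8, "city": 0.5})

fitness = completeness * geo_penalty  # crude fitness-for-use score per record
print(fitness.round(2).tolist())
```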
In many ways, the most important insights about a model are an acute awareness of how wrong it could be and an understanding of why it is useful despite those limitations. Model users should question how reliable the model's parameters are. They should look for the uncertainty in risk measurements and find out how it varies across their book of business.
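One practical way to act on this advice is elementary sensitivity testing. The sketch below, built entirely on invented frequency and severity assumptions, perturbs an uncertain event-frequency parameter by 30 percent and shows how much a tail-loss estimate moves, and how that sensitivity can differ between a thin-tailed and a fat-tailed segment of a hypothetical book:

```python
import numpy as np

def tail_loss(rate, severity_sigma, n_years=50_000, p=0.99, seed=1):
    """p-th percentile annual loss under assumed frequency and severity."""
    rng = np.random.default_rng(seed)  # shared seed so scenarios are comparable
    counts = rng.poisson(rate, n_years)
    losses = np.array([rng.lognormal(1.0, severity_sigma, k).sum() for k in counts])
    return np.quantile(losses, p)

# Two hypothetical segments of a book: thin-tailed and fat-tailed severities.
for name, sigma in [("segment A (sigma=0.8)", 0.8), ("segment B (sigma=1.6)", 1.6)]:
    base = tail_loss(rate=1.0, severity_sigma=sigma)
    high = tail_loss(rate=1.3, severity_sigma=sigma)  # +30% frequency assumption
    print(f"{name}: 99% loss {base:7.1f} -> {high:7.1f} "
          f"({100 * (high / base - 1):+.0f}% change)")
```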
Gaining these insights is more than a shared responsibility between the users and developers of models. Now more than ever, modelers need to show leadership and truly internalize that their models must be far more transparent, so that those who use them can do so responsibly.
With each catastrophe event and with each new version, models improve as our knowledge increases. But upgrades to models should be viewed not as destinations of revealed truth, but as mileposts along a continuum of understanding.
Catastrophe models are central to critical decisions ranging from the pricing and underwriting of individual accounts to the management and allocation of capital on a global scale, and when appropriately used, they are essential tools for making informed judgments. But it is important to understand the models' limitations in order to fully appreciate the uncertainty behind any one decision.
Common sense and experience are qualities the insurance industry holds dear, and with good reason. Underwriting decisions can make or break an insurance company the next time the wind blows or the ground shakes. In the end, it is people who make decisions, not models.
Yet the modeling community has a crucial role to play. By continually improving the models and remaining accountable and transparent, modelers provide invaluable risk assessment tools that will continue to help underwriters, risk managers and policy-makers make tough decisions every day.
Editor's note: This piece was based on Shah's paper
"A Model Manifesto" published in CEO Risk Forum.
March 1, 2010
Copyright © 2010 LRP Publications