U.S. Hurricane Modeling - Can We Afford to Ignore the Science?
RYAN OGAARD, senior vice president, and Claire Souch, vice president, at Risk Management Solutions
This year, Risk Management Solutions released a major upgrade of its North American hurricane model suite that brought unprecedented changes to companies' loss estimates. As the industry works to understand and adopt an altered view of risk, fundamental questions are surfacing about the use of models in risk management and even about the underlying philosophy of model building.
Should modeling companies seek steady, incremental change so as not to disrupt business plans? Does conforming to a slow and steady status quo endanger the risk management process? Can we find a middle ground?
Within the market, two voices can be heard: the first is that version 11.0 is a valid and more advanced model, while the second says that might be so, but it has changed too much. Both have valid points and concerns. It is tempting to see the "better model" viewpoint as an intellectually superior position, but the model impacts the marketplace. Companies use it to determine how much capital to hold, and where concentrated risk can pose a danger. A change to the model means a change to the business plan, which can be highly disruptive.
GETTING CLOSER TO REALITY?
Catastrophe models are unlike other models of insurance risk. By their nature, catastrophes are rare events, and models, by definition, provide a representation of complex physical phenomena. From a modeler's perspective, the task is to simulate, realistically and adequately, the most important aspects of this very complex system. We conduct extensive and independently reviewed validation at multiple levels and provide users with transparency into the assumptions and results of these validation tests.
Data and peer-reviewed science form the foundations, and every so often researchers make step-changes in understanding. Thanks to advances in computing power, model building today typically involves running hundreds of thousands of simulations. We can also learn from actual events, but long gone are the days when a catastrophe model was built, as the first models were, simply by taking historical storm experience and applying it to other regions.
But are the model changes too complex? Have we now over-analyzed the problem? Some industry voices have said that we need to go back to a basic view of the risk; that in most cases we can simply use a set of scenarios to get a sense of the risk, as was the case 20-plus years ago in the days before catastrophe models existed.
Most experienced risk professionals disagree. During the Florida Commission on Hurricane Loss Projection Methodology's recent public meeting to certify the new RMS model for use in Florida, Dr. Hugh Willoughby, professor at Florida International University, noted, "... a well-written model that's physically based in the directions you guys are going reveals pathways to unexpected events that we wouldn't be able to learn about other than waiting until we look out and see the wreckage in the morning.

"We may have been underestimating the danger of high winds inland just as the Japanese underestimated the danger of big tsunamis north of Tokyo.

"The hope of the models is that they can tell us what the worst possibilities are and how likely they are. So I think if you don't like the uncertainty, that's part of life."
Models will change -- that's a fact -- and it's naïve to think that every change can be made in small increments. If we don't change because we have new science and engineering insight, we'll change when we analyze the "wreckage in the morning." This will cause business plans to become obsolete, underwriting guidelines to shift, risk concentrations to be re-evaluated in retrospect -- all of which are costly and troublesome occurrences for (re)insurers. Better to be ahead of the game.
LIMITING CHANGE DISRUPTION
How disruptive the change is depends on how a company uses the model. Models should not, and generally are not, used simply to pinpoint a 1-in-100 probable maximum loss (PML). However, practices vary between companies. At one end of the spectrum, a company might run multiple sensitivity tests on the model output to set boundary conditions for underwriting and capital allocation. At the other end is detailed optimization against the model itself, where the modeling assumptions become the prime decision criteria for underwriting or capital decisions. In both cases a major change in model results can be disruptive, but a company that has optimized a highly concentrated book of business to the model may literally have to reverse course, and should be aware of this possibility.
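The boundary-condition approach above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical figures (the capital base, the PML estimate, and the 25 percent PML-to-capital threshold are all invented for the example, not drawn from any actual model or company): shock the modeled 1-in-100 loss up and down by 20 percent and test each result against the capital boundary, rather than tuning the plan to one exact model value.

```python
def within_capital_boundary(pml_estimate, capital, max_pml_ratio=0.25):
    """True if the PML estimate stays inside the chosen share of capital.

    The 25% ratio is an illustrative boundary condition, not a standard.
    """
    return pml_estimate <= max_pml_ratio * capital


capital = 1_000.0   # available capital ($M) -- hypothetical
base_pml = 220.0    # modeled 1-in-100 loss ($M) -- hypothetical

# Shock the model output +/-20% and re-test the boundary.
for shock in (-0.20, 0.0, 0.20):
    shocked = base_pml * (1 + shock)
    ok = within_capital_boundary(shocked, capital)
    print(f"shock {shock:+.0%}: PML {shocked:,.0f} -> within boundary: {ok}")
```

With these invented numbers, the plan holds at the base estimate but breaches the boundary under a +20 percent shock, which is exactly the kind of vulnerability to model change the sensitivity test is meant to surface before a model update surfaces it for you.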
As we have worked side-by-side with companies that are digesting the version 11.0 changes, we've encouraged several practices that seem to be helpful in implementing a new model and in recognizing the possibility of future change. These steps should help companies develop a bespoke view of risk that acknowledges the potential for model uncertainty.
- Understand the model you use deeply. Engage with the modelers about their product.
- Understand where the biggest uncertainties and sensitivities are. Within the version 11.0 view of hurricane risk there are in fact three models: the baseline wind and vulnerability model, the storm surge model, and the medium term hurricane rate forecast. Each has its own sensitivities and uncertainties.
- Look at the relative uncertainty within the model as it applies to your specific book(s) of business. The model has more uncertainty around some areas or lines of business than others. Look at uncertainty in terms of the various tranches of risk in the book. Run analyses that isolate the parts of your business that are more subject to uncertainty. Work to develop a view of your company's susceptibility to a surprise from a new event, and develop strategies to minimize this potential.
- Review the ways that the model metrics are geared into your company's financials. If the 1-in-100 loss estimate shifts by 20 percent, what happens to your reinsurance strategy? What would rating agencies say? The answer will be very obvious in some cases, but even this reveals a vulnerability to change. Is your business plan overly dependent on the model being within a close tolerance? On the other hand, is any business being optimized to very specific, model-driven criteria? Are you betting that the model is accurate in terms of age, construction, occupancy?
- Don't rely on a single point on the curve: use tail value-at-risk (TVaR) metrics, scenario events and exposure accumulation analyses to build a broader picture of your tail risk.
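The last point can be made concrete with a short sketch. This is an illustration only, assuming a hypothetical simulated loss distribution (the lognormal draw stands in for a catastrophe model's annual loss output and is not calibrated to anything real): it contrasts the single 1-in-100 point on the loss curve with the TVaR at the same return period, which averages over the whole tail beyond that point.

```python
import numpy as np

# Hypothetical simulated annual losses ($M) -- a stand-in for model output,
# not calibrated to any real portfolio or peril.
rng = np.random.default_rng(seed=42)
losses = rng.lognormal(mean=3.0, sigma=1.2, size=100_000)


def var(losses, return_period):
    """Loss at the 1-in-`return_period` exceedance level (the single PML point)."""
    return np.quantile(losses, 1.0 - 1.0 / return_period)


def tvar(losses, return_period):
    """Average of all losses at or beyond the 1-in-`return_period` level --
    a metric shaped by the entire tail, not one point on the curve."""
    threshold = var(losses, return_period)
    return losses[losses >= threshold].mean()


pml_100 = var(losses, 100)
tvar_100 = tvar(losses, 100)
print(f"1-in-100 PML:  {pml_100:,.1f}")
print(f"1-in-100 TVaR: {tvar_100:,.1f}")
```

Because TVaR averages everything beyond the threshold, it always sits above the corresponding PML point and moves less abruptly when a model change shifts any single quantile, which is why pairing it with scenario events and accumulation analyses gives a more robust picture of tail risk.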
Version 11.0 has undergone the most intense review of any RMS model, and it has been passing those tests; acceptance and adoption of our new view of risk have progressed steadily. Of course, discussion and debate continue, because the subject is large and highly complex, and so are the models.
As we look to the future, our role as modelers will continue to be to bring the most informed view of risk possible, with the consequence that future model changes will happen. With that comes the responsibility to help the industry understand the key uncertainties and to provide an environment of transparency that eases those future transitions.
September 23, 2011
Copyright © 2011 LRP Publications