By MATT REID, a senior director of solutions marketing at Risk Management Solutions (RMS) with more than 20 years of experience in software engineering, product marketing and communications.
Insurance and reinsurance companies face increasing scrutiny from ratings agencies and regulators as they search for profitable and sustainable growth in increasingly complex and competitive markets. Being able to make well-informed decisions in a timely manner and having the most up-to-date view of the entire business are critical to success in such a demanding environment.
For carriers that manage catastrophe risk, that means a growing reliance on modeling and analytics. They must also be able to integrate and automate as much of their business workflow as possible, from initial data acquisition and cleansing through modeling and on to advanced decision support and reporting.
In other industries where a competitive advantage can be gained through sophisticated and thorough analysis of large volumes of data, many market leaders have turned to high-performance computing (HPC) to maintain and expand their edge.
This technology, once the domain of supercomputers and government-funded laboratories, is now available for commercial use and can provide benefits for IT organizations, business users and the enterprises they support, including insurers.
WHAT IS HPC?
High-performance computing consolidates computing resources onto a single grid where those resources can be shared by, and dynamically allocated to, any number of business units, users and tasks. This model typically results in higher utilization of compute resources, lower IT overheads and improved workflow capabilities while also enabling a single, high-priority task to execute very quickly by commandeering a large proportion of the grid until it completes.
High-performance computing environments provide IT organizations with the ability to configure and manage infrastructure from one central location, administer user capabilities, identify and empower priority users, structure resources around user groups and business functions, monitor key performance indicators, and run system diagnostics across the entire infrastructure.
Clearly, HPC can provide demonstrable benefits to the IT organization, but the real value for the insurance industry comes through its ability to deliver significantly improved analytical capabilities to the enterprise.
THE GROWING CHALLENGE
As catastrophe modeling has become more sophisticated and more deeply embedded in many insurers' businesses, there is a fundamental need to understand the models and their underlying assumptions, limitations and capabilities. A clear grasp of model behavior and of the inherent uncertainty in model output is essential to making sound decisions.
At the same time, the high volumes of data and the time it takes to analyze them are introducing "data latency" into many risk management processes. By the time many insurers and reinsurers have finished analyzing all of their data, the portfolio position has changed, as some policies or treaties expire and others are bound in their place.
Using high-performance computing, analysts and underwriters can more efficiently employ hardware to scale and manage catastrophe model performance to match their immediate business requirements and gain deeper insights into their books of business. In addition, analysis run times can be significantly reduced, allowing companies to gain more up-to-date views of portfolio risk and a near real-time understanding of modeled losses.
In enterprises that consist of multiple business units, each with its own team of analysts, it is common to provide each team with its own independent modeling environment. As the complexity of the business and organization increases, these environments proliferate. Each environment is sized to support the peak business load, which results over time in underutilization of the total capacity, even while some units still struggle to process all of their business during peak periods.
Combining resources onto a single high-performance computing grid could allow them to be dynamically shared and allocated to the users experiencing the highest demand at any given point in time. This effectively increases the overall utilization of the analysis environment and allows more analyses to be processed in any given period of time.
With overall capacity increased, it also becomes possible for individual analyses to be run very quickly by commandeering a large portion of the grid when business conditions require a very fast turnaround on a particular job. A single priority job could then actually utilize more compute capacity than would have been available had the business unit maintained its own dedicated analysis environment.
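The pooling idea described above can be sketched in a few lines of code. The following is a toy illustration only, not any vendor's software: it assumes a shared pool of workers is divided among business units in proportion to each unit's current queue depth, so a unit with a surge (or a high-priority job) can temporarily command far more capacity than a dedicated environment would have given it.

```python
def allocate(total_workers, demand):
    """Split a shared worker pool across units in proportion to demand.

    demand: dict mapping unit name -> number of queued analyses.
    Returns a dict mapping unit name -> workers assigned.
    """
    total_demand = sum(demand.values())
    if total_demand == 0:
        return {unit: 0 for unit in demand}
    # Proportional share, rounded down; any leftover workers go to the
    # busiest units first.
    shares = {u: total_workers * d // total_demand for u, d in demand.items()}
    leftover = total_workers - sum(shares.values())
    for u in sorted(demand, key=demand.get, reverse=True)[:leftover]:
        shares[u] += 1
    return shares

# With four dedicated environments, each unit might own 25 of 100 workers.
# On a shared grid, a unit experiencing peak demand can draw far more:
print(allocate(100, {"property": 80, "marine": 10, "casualty": 10, "specialty": 0}))
```

Here the "property" unit receives 80 of the 100 pooled workers while its queue is deep, then releases them back to the pool as demand subsides.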
Customizable job templates allow fast, simple submission of jobs onto the grid, further increasing analyst productivity by reducing the time taken to prepare job submissions and by ensuring that jobs receive the appropriate priority and capacity on the grid based on their size, complexity and urgency. (These templates are featured in RMS' application of high-performance computing to catastrophe modeling, the recently released Enterprise Grid Computing.)
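The article does not detail how such templates are structured, but the general idea can be illustrated with a hypothetical sketch: a template pre-fills the scheduling parameters (priority, capacity ceiling) an analyst would otherwise set by hand, so submission is quick and jobs land on the grid with consistent settings. All names and values below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JobTemplate:
    name: str
    priority: int      # higher-priority jobs are scheduled sooner
    max_workers: int   # ceiling on grid capacity the job may commandeer

# Hypothetical templates for jobs of differing size and urgency.
TEMPLATES = {
    "routine_rollup":  JobTemplate("routine_rollup", priority=1, max_workers=10),
    "renewal_pricing": JobTemplate("renewal_pricing", priority=5, max_workers=40),
    "urgent_event":    JobTemplate("urgent_event", priority=9, max_workers=90),
}

def submit(template_name, portfolio):
    """Build a grid submission from a named template (illustrative only)."""
    t = TEMPLATES[template_name]
    return {"portfolio": portfolio, "priority": t.priority, "max_workers": t.max_workers}

print(submit("urgent_event", "gulf_wind_book"))
```

In this sketch the analyst supplies only a template name and a portfolio; priority and capacity are applied consistently by the template rather than re-entered for every run.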
THE BOTTOM LINE
The ability of HPC environments to maximize computational resources, drive higher volumes of analyses and fast-track priority work enables these crucial activities to be embedded more deeply into the insurance workflow. This allows decision-makers to reach both faster and better judgments based on the most up-to-date views of their overall risk exposure.
Those who effectively utilize the full potential of high-performance computing and the advanced analytics it enables will be able to make the most informed and timely risk-based decisions. And those who make the most informed and timely decisions are those who lead their industries.
October 15, 2010
Copyright © 2010 LRP Publications