By LISA DRAGON, vice president and managing director of Global Professional Services for Aon eSolutions
At this point in the evolution of risk management and insurance technology, systems vendors and risk managers alike share an emerging consensus on the business value that advanced reporting and analytics tools can deliver. Many organizations are realizing value from dashboards, business-intelligence applications and other functionalities; many others have recognized the potential and are considering how to implement these tools or enhance existing ones.
Wherever an organization falls on the reporting and analytics spectrum, however, ensuring data quality and consistency is critical to maximizing the value of those tools to a company's risk and insurance program. Data quality is a key factor in driving better business decisions around the full range of insurance programs, from policy to property to safety--thus optimizing total cost of risk.
Yet ensuring data quality and consistency is not necessarily a straightforward task. Organizations are often faced with challenges and questions when looking to make the best use of a wide range of data aggregated and standardized from disparate systems.
The process of ensuring high-quality data for use in reporting and analytics should begin by accurately identifying the data required for the organization's highest-priority workflow and reporting purposes. This process includes identifying not only the specific data to track but also classifying that data by type.
So how does an organization identify criteria for what, exactly, constitutes high-quality data? How will it measure the effectiveness of data-quality improvement initiatives?
And--perhaps most critical and difficult to achieve--how can the organization make the most of reporting tools to convert raw data into usable information that contributes significantly to business decision-making?
Without examining such questions, achieving the desired levels of quality, consistency and streamlined operations is nearly impossible.
One of the challenges in identifying criteria for high-quality data is that "high-quality data" has many meanings. For instance, when people refer to high-quality data, they might mean tracking required data or meeting reporting and workflow requirements. It can also mean that the data is balanced and makes sense financially, such as transactional data that reconciles with summary balances.
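As a simple illustration of that reconciliation idea, the brief Python sketch below compares the sum of claim-level transactions against a stated summary balance. The field names, the tolerance and the sample figures are hypothetical assumptions, not part of any particular RMIS.

# Hypothetical sketch: reconcile transactional detail against summary balances.
# Field names ("claim_id", "paid_amount") and the tolerance are illustrative only.

from collections import defaultdict

def reconcile(transactions, summaries, tolerance=0.01):
    """Return claims whose transaction totals do not match the summary balance."""
    totals = defaultdict(float)
    for txn in transactions:
        totals[txn["claim_id"]] += txn["paid_amount"]

    exceptions = []
    for claim_id, summary_paid in summaries.items():
        if abs(totals[claim_id] - summary_paid) > tolerance:
            exceptions.append((claim_id, totals[claim_id], summary_paid))
    return exceptions

# Example: one claim reconciles, one does not.
txns = [
    {"claim_id": "C-100", "paid_amount": 500.00},
    {"claim_id": "C-100", "paid_amount": 250.00},
    {"claim_id": "C-200", "paid_amount": 1000.00},
]
summaries = {"C-100": 750.00, "C-200": 900.00}
print(reconcile(txns, summaries))   # e.g., [('C-200', 1000.0, 900.0)]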
Other examples of high-quality data include:
-- Having data with defined standards to allow for effective analytical reporting, such as using a lookup list rather than a free-text field (illustrated in the brief sketch following this list)
-- Eliminating "unknown" values that result when manually entered or electronically loaded data is not "mapped" correctly
-- Putting the appropriate business rules and validations in place to ensure the data is logical and correct
-- Building a database where distinct data objects are related or associated with each other in order to preclude duplicate data entry
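To make the first two points concrete, here is a minimal, hypothetical sketch: values are constrained to a defined lookup list rather than free text, and source codes that cannot be mapped are reported as errors instead of being silently stored as "unknown." The code lists, mappings and field names are assumptions chosen for illustration only.

# Hypothetical sketch: enforce a lookup list and flag unmapped source codes.
# The cause-code list and source-to-standard mapping are illustrative only.

CAUSE_CODES = {"SLIP_FALL", "STRUCK_BY", "MOTOR_VEHICLE", "REPETITIVE_MOTION"}
SOURCE_TO_STANDARD = {"SF": "SLIP_FALL", "SB": "STRUCK_BY", "MV": "MOTOR_VEHICLE"}

def standardize_cause(source_code):
    """Map a source system's cause code to the standard lookup list.

    Returns (standard_code, error). Unmapped codes are reported as errors
    rather than silently stored as "UNKNOWN".
    """
    standard = SOURCE_TO_STANDARD.get(source_code)
    if standard is None:
        return None, f"unmapped cause code: {source_code!r}"
    if standard not in CAUSE_CODES:
        return None, f"mapped value {standard!r} is not in the lookup list"
    return standard, None

print(standardize_cause("SF"))   # ('SLIP_FALL', None)
print(standardize_cause("RM"))   # (None, "unmapped cause code: 'RM'")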
Despite the difficulty of choosing from a broad range of potential criteria, making that choice is a crucial step in implementing new reporting procedures. By taking the time to focus on what information is most valuable to their needs, organizations help ensure the efficacy of their reporting and analytics initiatives, especially those involving advanced technology tools.
Once the criteria for high-quality data are determined, measurement against those criteria becomes critical. Reporting on errors--from both a validation standpoint and a logical one--is the most effective tool for assessing data quality. Because most organizations work from both electronic and manual data feeds, error reporting must be handled in different ways.
Electronic data feeds should have, at a minimum, error handling to log errors on data validation, lookup mapping and business logic validation. This data should be available for user administrators to review, evaluate and define a process for correcting the data. Error handling for electronic feeds should also include "correcting" logic to allow for data to be corrected where possible.
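One way to structure that kind of error handling is sketched below: each record from an electronic feed is checked for basic validation, lookup mapping and business logic; errors are logged by category for administrators to review; and simple "correcting" logic is applied where possible. The specific checks, field names and sample values are hypothetical and would differ by feed.

# Hypothetical sketch: load an electronic feed and log errors by category
# (validation, lookup mapping, business logic) for administrator review.

from datetime import date

VALID_STATES = {"CA", "NY", "TX", "IL"}   # illustrative lookup list

def load_feed(records):
    loaded, error_log = [], []
    for rec in records:
        errors = []

        # Data validation: required fields must be present.
        if not rec.get("claim_id"):
            errors.append(("validation", "missing claim_id"))

        # Lookup mapping: state must resolve to the standard list;
        # "correcting" logic trims and upper-cases before rejecting.
        state = (rec.get("state") or "").strip().upper()
        if state in VALID_STATES:
            rec["state"] = state
        else:
            errors.append(("lookup", f"unmapped state {rec.get('state')!r}"))

        # Business logic: closed date cannot precede the loss date.
        loss, closed = rec.get("loss_date"), rec.get("closed_date")
        if loss and closed and closed < loss:
            errors.append(("business", "closed_date earlier than loss_date"))

        if errors:
            error_log.append((rec.get("claim_id"), errors))
        else:
            loaded.append(rec)
    return loaded, error_log

records = [
    {"claim_id": "C-1", "state": " ca ", "loss_date": date(2010, 5, 1)},
    {"claim_id": "C-2", "state": "ZZ",
     "loss_date": date(2010, 6, 1), "closed_date": date(2010, 5, 1)},
]
good, errors = load_feed(records)
print(errors)   # e.g., [('C-2', [('lookup', ...), ('business', ...)])]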
As for manual data entry, measuring data quality is more difficult because the business logic behind the data-entry screens prevents incorrect data from being entered only to a certain extent. There is usually no validation preventing "unknown" codes from being entered. In some instances, it is advisable to define validation rules so that "unknown" codes are not allowed, based on the values of other data fields. If unknown codes must be allowed so that data can be captured quickly--as with incident intake--then monitoring processes should be defined at the next level to ensure this data is reviewed and corrected so that useful analytics can be generated.
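A rule of that kind might look like the hypothetical sketch below: an "unknown" cause code is tolerated while an incident is still in intake, but it is flagged for review once the record reaches a later status. The status values and field names are assumptions made for illustration.

# Hypothetical sketch: allow "UNKNOWN" cause codes only during incident intake,
# and flag them for review once the record moves past intake.

INTAKE_STATUSES = {"INTAKE", "NEW"}   # illustrative status values

def check_cause_code(record):
    """Return an error message if an unknown code is no longer acceptable."""
    if record.get("cause_code") != "UNKNOWN":
        return None
    if record.get("status") in INTAKE_STATUSES:
        return None                      # acceptable for fast intake entry
    return "cause_code is UNKNOWN on a record past intake; review and correct"

def unknowns_needing_review(records):
    """Monitoring step: list records whose unknown codes must be corrected."""
    return [r["claim_id"] for r in records if check_cause_code(r)]

records = [
    {"claim_id": "C-1", "status": "INTAKE", "cause_code": "UNKNOWN"},
    {"claim_id": "C-2", "status": "OPEN", "cause_code": "UNKNOWN"},
]
print(unknowns_needing_review(records))   # ['C-2']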
While each organization and its data feeds are different, there are six key practices that can help guide effective measurement. Reporting tools should:
1. Have defined fields to allow for the most effective, efficient and accurate data entry
2. Ensure lookup lists are clearly defined
3. Ensure the appropriate business rules and validations are triggered during the manual entry and electronic loading processes
4. Drive clients to standardized NCCI code mapping for fields critical to analytical reporting during the requirements gathering process
5. Enforce ongoing review of data
6. Define metrics for meeting quality requirements
Implementation of an effective quality-improvement initiative begins with an organization's full review of all standard inbound feeds to ensure client-critical data is loading properly and that code mapping is up to date. Data-quality reports that provide users with salient information on any issues found during the data-loading process are helpful. Such reports can be set to run automatically once a production data load has been processed.
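A data-quality report of that kind could be as simple as the hypothetical sketch below, which summarizes a load's error log by feed and error category so administrators can see at a glance where issues occurred. The log structure is an assumption carried over from the earlier loading sketch, and the feed names are invented.

# Hypothetical sketch: summarize post-load errors by feed and error category
# so a data-quality report can run automatically after each production load.

from collections import Counter

def quality_report(error_log):
    """error_log entries: (feed_name, record_id, category, message)."""
    summary = Counter((feed, category) for feed, _, category, _ in error_log)
    lines = [f"{feed}: {count} {category} error(s)"
             for (feed, category), count in sorted(summary.items())]
    return "\n".join(lines)

error_log = [
    ("TPA_MONTHLY", "C-2", "lookup", "unmapped state 'ZZ'"),
    ("TPA_MONTHLY", "C-2", "business", "closed_date earlier than loss_date"),
    ("PROPERTY_VALUES", "LOC-9", "validation", "missing replacement cost"),
]
print(quality_report(error_log))
# PROPERTY_VALUES: 1 validation error(s)
# TPA_MONTHLY: 1 business error(s)
# TPA_MONTHLY: 1 lookup error(s)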
Following are some of the best practices related to improving data quality for reporting and analytics:
Design lookup lists to be hierarchical in nature. This allows for both detailed and summary reporting within a single data field (e.g., major cause and detail cause); a brief sketch follows these practices.
Bring in data from more-varied sources. These include certificates of insurance, bonds and training information. Relate them to standard risk management information system (RMIS) modules for enhanced analytical reporting.
Move allocation processing. Take it from within a report template out to a configured module that consolidates data and performs all the necessary calculations while preserving the historical data that formed the report.
Take the leap from spreadsheet-based processes. Move to automated, RMIS-enabled procedures (e.g., insurance submissions and renewals).
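As an illustration of the hierarchical lookup list mentioned in the first practice above, the hypothetical sketch below stores each detail cause with its major cause, so a single field supports both detail-level and rolled-up reporting. The cause values and claim records are assumptions, not data from any actual system.

# Hypothetical sketch: a hierarchical cause lookup, where each detail cause
# rolls up to a major cause, so one field supports summary and detail reporting.

from collections import Counter

DETAIL_TO_MAJOR = {
    "WET_FLOOR": "SLIP_FALL",
    "ICY_SIDEWALK": "SLIP_FALL",
    "FALLING_OBJECT": "STRUCK_BY",
    "FORKLIFT": "STRUCK_BY",
}

claims = [
    {"claim_id": "C-1", "detail_cause": "WET_FLOOR"},
    {"claim_id": "C-2", "detail_cause": "ICY_SIDEWALK"},
    {"claim_id": "C-3", "detail_cause": "FORKLIFT"},
]

detail_counts = Counter(c["detail_cause"] for c in claims)
major_counts = Counter(DETAIL_TO_MAJOR[c["detail_cause"]] for c in claims)

print(detail_counts)   # detail-level reporting
print(major_counts)    # summary reporting: Counter({'SLIP_FALL': 2, 'STRUCK_BY': 1})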
With criteria identified and measurement tools in place, organizations can then work with their technology providers to launch tailored initiatives designed to leverage data in new ways, improving business processes and overall results. By converting raw data into usable information, reporting tools become key contributors to the business decision-making process.
November 1, 2010
Copyright © 2010 LRP Publications