Avoiding the Hassles of Claims Data Compliance and Conversion
By ROB KOPP, senior vice president of information technology for Columbus, Ohio-based Avizent, a national risk management and claims service provider
In the not-too-distant past, the data demands of claims management were fairly straightforward and easy to meet. The set of required data was finite, relatively small, and simple to gather, track and report. The few who needed to use the information accepted it in the format in which it was reported. That changed quickly.
Now, it seems that everyone--companies, insurers, brokers, third-party administrators and government agencies--not only needs vast quantities of data but also requires that the data comply with its own specifications, whether it's for state EDI reporting, IAIABC standards, first reports of injury, second reports of injury or a myriad of other uses.
That's where the dilemma arises. With data coming into risk management information systems from so many different sources, the input is hardly consistent. And yet for the data output to be universally useful, the disparate data inputs must somehow be corralled into compliance with the requirements of everyone who wants to use the information. The situation isn't exactly the old "garbage in, garbage out" scenario. But in the current environment, the quality of the data that's going in often makes what comes out less than useful.
The consequences can be serious at virtually any point along the data usage continuum. In day-to-day claims processing, for example, noncompliant data can create problems that range from annoying administrative delays and inaccurate payment amounts to more serious liability issues, regulatory noncompliance and even penalties.
Companies often experience similar or even more severe operational consequences when migrating data to new systems, or when switching from one carrier or TPA to another. The difficulty lies in transferring data smoothly and seamlessly from one system to another, especially if the originating data is housed in a legacy or proprietary system. The good news is that there is a solution that really works.
Databases store information in separate fields--each with its own descriptor or identifier. Because not all databases store the same information in the same fields, the data may not map (or match) correctly when transferred from one system to another.
Consider this simple example: If the "last name" field in a TPA's database does not map directly to the "last name" field in the claims-payer's database, the claimant's last name could instead end up in the "street name" field, or the database could read the Social Security number as a procedure code.
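To make the idea concrete, here is a minimal sketch of what such a field map might look like in code. The field names and the Python form are illustrative assumptions, not any particular vendor's schema:

```python
# Hypothetical map from a TPA's field names to a claims-payer's field names.
# All names here are placeholders for illustration only.
TPA_TO_PAYER_FIELD_MAP = {
    "last_nm": "last_name",
    "first_nm": "first_name",
    "ssn": "ssn",
    "street_nm": "street_name",
}

def map_record(tpa_record):
    """Rename each TPA field to the payer's field name; flag anything unmapped."""
    payer_record, unmapped = {}, []
    for field, value in tpa_record.items():
        if field in TPA_TO_PAYER_FIELD_MAP:
            payer_record[TPA_TO_PAYER_FIELD_MAP[field]] = value
        else:
            unmapped.append(field)  # hold out for manual review rather than guessing
    return payer_record, unmapped

# Example: map_record({"last_nm": "Smith", "ssn": "123-45-6789"})
# -> ({"last_name": "Smith", "ssn": "123-45-6789"}, [])
```

The point of the explicit map is that a last name can only land in the field the payer designates for last names; an unmapped field is surfaced for review instead of being forced into the wrong slot.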
The situation becomes much more complicated when different parties--TPAs and carriers for example--use completely different field descriptors, categorize coverage data in different ways, or have different rules about how to express dates and dollar amounts.
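Reconciling those formatting rules is typically the job of normalization routines. The sketch below, with assumed input formats, shows how dates and dollar amounts expressed in different conventions might be coerced into one canonical form:

```python
from datetime import datetime
from decimal import Decimal

# Illustrative list of date conventions seen across source feeds (an assumption).
DATE_FORMATS = ["%m/%d/%Y", "%Y-%m-%d", "%d-%b-%Y"]

def normalize_date(raw):
    """Return an ISO-8601 date string regardless of the input convention."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

def normalize_amount(raw):
    """Strip currency symbols and separators so '$1,250.00' and '1250.00' match."""
    return Decimal(raw.replace("$", "").replace(",", "").strip())
```

Every date and dollar figure that passes through such routines comes out in a single agreed form, so downstream systems no longer need to know which party originated the value.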
Although mapping data between databases isn't all that difficult to understand conceptually, the process itself can be very complex, time-consuming and costly. It becomes even more so as the number of input sources and output users increases. Add to that the complexity of introducing adjusters' notes, diary items and other informal data into the mix.
Given current conditions--the lack of universal data standards and the varying states of technological sophistication among the parties--achieving the ideal state of "clean" compliant data from square one calls for a dynamic, multifaceted solution. It requires a robust system and innovative processes that can quickly and accurately map data from one system to another, regardless of source or platform, and then report that information to the users.
Whether it's for reporting claims information that is compliant with state requirements or making a major transition of an entire claims management program from one TPA to another, today's optimal solution is a comprehensive approach that includes data analysis, mapping, translation, conversion and quality assurance.
It's a fact of life that older technologies simply can't keep up with the data compliance needs of the industry.
For many players, the answer may very well lie in migrating to new systems. This often happens when the band-aid solutions they've applied to older legacy systems become totally inadequate, resulting in the output of yet more incomplete, inaccurate data that further impairs claims management processes.
Making the process run quickly and seamlessly, however, requires the use of advanced technology in the form of "visual" information distribution and EDI software capabilities.
The key to these is a graphical interface that automates previously labor-intensive processes, such as establishing hierarchies, creating maps and entering data from various sources.
The next step is mapping all the various inputs to a predefined structure that establishes the relationships of corresponding data elements to one another to create a single file. That file in turn can be mapped to any number of output files in total compliance with each individual user's needs.
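In code, that hub-and-spoke arrangement might look something like the following sketch; the canonical structure and the output specifications are hypothetical placeholders, not a real product's API:

```python
def to_canonical(record, field_map):
    """Map one source record into the shared canonical structure via its field map."""
    return {canon: record.get(src) for src, canon in field_map.items()}

def write_output(canonical, output_spec):
    """Project the canonical record onto one user's required layout."""
    return {out_name: canonical.get(canon)
            for canon, out_name in output_spec.items()}

# One canonical record can feed any number of output specifications, e.g.:
# state_edi_row   = write_output(canonical, STATE_EDI_SPEC)
# carrier_feed_row = write_output(canonical, CARRIER_SPEC)
```

The design choice is that each new input source or output user requires only one new map against the canonical structure, rather than a separate map against every other party.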
For starters, though, the most efficient, logical solution is to capture all of the data correctly on the front end of the claim. Using this method, data enters the claims management process in a way that can render it completely compliant with all users' requirements even before the adjuster begins working the claim.
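Front-end capture systems typically enforce that with validation rules at intake. The sketch below illustrates the idea with assumed rules (a required-field list and an SSN pattern); these are not any state's actual requirements:

```python
import re

# Assumed intake rules for illustration: which fields must be present,
# and what a well-formed SSN looks like.
REQUIRED_FIELDS = {"claimant_last", "loss_date", "ssn"}
SSN_PATTERN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def validate_intake(record):
    """Return a list of problems; an empty list means the record is clean."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    ssn = record.get("ssn", "")
    if ssn and not SSN_PATTERN.match(ssn):
        problems.append(f"malformed SSN: {ssn!r}")
    return problems
```

A record that fails these checks is corrected at the point of capture, before it ever reaches an adjuster's desk.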
If that doesn't happen, the adjuster becomes a data manager instead of a claims manager, continually digging back into original data sources like an archeologist in search of a missing link. Such misuse of the adjuster's time and expertise can result in claims delays, human errors and increased claims costs.
In one recent case, Finial Reinsurance Co. was facing the dual challenge of a fast-approaching state EDI reporting deadline and a TPA that was going out of business. With more than 33,000 takeover claims that had to be converted to comply with the reporting guidelines, the company, with the help of its new TPA, applied advanced visual technology.
The entire data conversion process took just two days and the quality assurance steps were completed within weeks. This included transferring all data, cleaning up errors from the previous TPA, notifying claimants of the transition, conducting quality assurance and, of course, meeting the compliance deadline.
The takeaway lesson for companies facing the complexity of managing enormous amounts of claims data from a broad range of sources is that the job involves much more than simply collecting the data and sending it on its way.
The challenge today is also about making sure that at every critical juncture in the data life cycle--gathering, converting, balancing and reporting--consistent methods are applied to ensure the data is compliant with the requirements of every user. The keys to that are a comprehensive approach, mastery of advanced technology and impeccable quality assurance throughout.
December 1, 2008
Copyright © 2008 LRP Publications