At the intersection of technology and healthcare, there are innumerable opportunities for automation, machine learning, and artificial intelligence to improve efficiency and outcomes. This is certainly the case for clients involved in the subrogation process.
One Saxony client, a large healthcare services company that represents state-based Medicaid services nationwide, needed an AI intervention to determine how certain claims should be processed, as well as whether cases should close or remain open.
This is the story of how our consultants and digital architects helped that company harness the power of their data and leverage AI in a practical, real-world manner.
First, a note about names
For the sake of privacy, we are keeping the name of the healthcare company (and the states with which this company partners) anonymous. That said, the work we delivered on behalf of our client – data analysis, business process review, building and training of analytics models, API integration, and automated model retraining – can all be easily replicated for any other client with similar needs.
What’s the deal with subrogation?
In short, subrogation occurs when there’s more than one payer involved in paying a claim. If you’re driving a car and another car hits you, there are going to be at least two payers with a stake in the resulting claims. Figuring out which payer is on the hook for what amount (if any) can be a pain, even for private insurers. But when taxpayer money is involved, it ups the ante.
As mentioned, this client represents state-run health systems, which are tax-funded. These systems are designed to be the payer of last resort – that’s what this company was paid to ensure. But determining payer responsibility is not easy. One, these state systems are not uniform – each has its own idiosyncrasies. Two, a single case can generate a lot of claims. Over the lifecycle of a case, a mountain of data can accumulate.
“In the case of our client, we’re talking hundreds of thousands of claims against thousands of cases,” said Alan Stein, Vice President of the Healthcare practice at Saxony Partners. “And these cases take a long time to come to a resolution. In the meantime, they’re collecting tons of data on patients, demographics, and claims.”
How do you begin to sort things out? Our client attacked the data with an army of analysts.
“They’ve got a large staff, many of whom have a clinical background,” Stein said. “They examine each of these claims, analyzing codes, trying to determine whether the claim is related to a case. It’s very time consuming and very inefficient.”
But what if the client could take this mountain of data, pair it with an AI platform, and leverage that platform to help lighten the workload? That’s what Saxony Partners was tasked to do.
Ultimately, the company’s analysts had to answer two questions with their data.
First, is a claim related to a given case?
Back to the unfortunate example of you having a car accident. Immediately after the accident, you are taken by ambulance to an emergency room, where it’s determined that you have a mild neck sprain. Now, a case is open – and those two claims (the ambulance ride and the ER visit) would obviously be associated with this case.
A few weeks later, you go back to the doctor, but this time for a shoulder exam. Is this new claim related to the case? Maybe, maybe not.
Several days after that, you are back at the doctor’s office, but this time for your annual physical. This claim will certainly not be related to the open case.
The client’s large staff was undertaking this analysis without any AI-related assistance. Here’s where Saxony saw an opening.
After aggregating all of the patient, demographic, and claims data – and while maintaining the strict data security guidelines mandated by HIPAA and other regulations – Saxony’s data architects began building an AI model to determine whether a claim was related to a case. By analyzing key factors – elapsed time, medical codes, etc. – our team built a model that achieved a very high degree of accuracy in predicting claim outcomes.
Then, we added an API (application programming interface) that integrated the model with the client’s existing user interfaces. The result? You could run a claim through the model, and the model (via the API) would return a percentage chance that a claim was associated with a given case.
To close the loop on our example from earlier – let’s assume the model examines the ambulance ride claim and concludes there’s a 99.99 percent chance that it is related to the ongoing case. The shoulder exam? A 60 percent chance. The physical? Less than one percent. Armed with that AI-powered information, the humans could make the final call on where to place each claim.
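To make the idea concrete, here is a deliberately tiny sketch of that kind of claim-to-case scorer. Everything in it is hypothetical – the function names, the two features (elapsed time and shared diagnosis codes, the factors called out above), and the hand-picked weights – a real model would learn its weights from historical claims data.

```python
from math import exp

def relatedness_score(days_since_case_opened: float, shared_diagnosis_codes: int) -> float:
    """Illustrative claim-to-case relatedness score in [0, 1].

    Hand-picked weights, purely to show the shape of the output:
    recent claims and claims sharing diagnosis codes with the case
    score high; long gaps push the score toward zero.
    """
    z = 2.0 - 0.05 * days_since_case_opened + 0.8 * shared_diagnosis_codes
    return 1.0 / (1.0 + exp(-z))

def relatedness_percent(days: float, codes: int) -> float:
    """What an API layer might return: the score as a percentage."""
    return round(100 * relatedness_score(days, codes), 2)
```

In the spirit of the example above, a same-day ambulance claim sharing codes with the case would score near 100 percent, while an annual physical months later would score near zero – and the human analyst makes the final call.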
And then there’s the second key question: which is the final claim in a case?
If you did wrench your neck or your shoulder in the accident – who’s to say how long you’ll be seeing the doctor for treatment? What about ongoing physical therapy or the future need for orthopedic specialists? Determining which claim is the final claim determines when a case can be closed. This is important, as restitution may not be paid until a case is officially brought to an end.
Once again, our data architects built and trained a model that assigned a percentage likelihood that any claim would be the final claim in a case. They added an API, which allowed human analysts to integrate that information into their current processes and workflows. The model eliminated yet another layer of tedious guesswork for the client’s teams.
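A hypothetical sketch of that second model, in the same spirit as before – the feature names and weights are invented for illustration, not drawn from the client's actual model:

```python
from math import exp

def final_claim_score(days_since_claim: float, open_referrals: int) -> float:
    """Illustrative probability that a claim is the final claim in a case.

    The intuition: the longer a case sits quiet after a claim, the more
    likely that claim was the last; pending referrals (e.g. to physical
    therapy) pull the score back down. Weights are hand-picked.
    """
    z = -3.0 + 0.04 * days_since_claim - 1.5 * open_referrals
    return 1.0 / (1.0 + exp(-z))
```

Six quiet months with no referrals scores near certainty that the case can close; a week-old claim with two referrals pending scores near zero.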
Show your work!
You may be thinking – “Hey, that’s great, a model that predicts the likelihood that a claim is related to an open case, and a model that gives odds on whether or not a case can be closed. And, because of the API, it is integrated into my already established workflows. Super.”
You may also be thinking – “But, how are the models coming up with these numbers? We’re talking about complicated legal proceedings here, after all. You’re asking for a high degree of trust in this technology.”
We, too, were curious about the methods behind the model. We needed to show our work.
“With regard to artificial intelligence, there are those who think it’s going to be the biggest thing ever, and others who are very skeptical,” Stein said. “For both camps, it’s helpful to be able to show how the model was able to come to the conclusion that it did.
“We were able to reverse engineer the results and rank what were the key factors in that result.”
For example, the single biggest factor in determining whether a claim is associated with a given case was the time that elapsed between the opening of that case and the filing of the claim. However, the reverse-engineered report showed that a host of factors (not just time) played a role in the model’s output.
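One simple way to "show your work" with a linear scoring model is to decompose the score into per-feature contributions (weight times value) and rank them. The sketch below is a hypothetical illustration of that kind of ranking report; real explanation tooling (e.g. SHAP values) generalizes the same idea to non-linear models.

```python
def explain_score(weights: dict, features: dict) -> list:
    """Rank each feature's contribution (weight * value) to a linear score,
    largest absolute contribution first."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical claim: elapsed time dominates the ranking, but other
# factors contribute too -- mirroring the report's finding.
ranking = explain_score(
    weights={"days_elapsed": -0.05, "shared_codes": 0.8, "same_provider": 0.3},
    features={"days_elapsed": 90, "shared_codes": 1, "same_provider": 1},
)
```

For a skeptical analyst, a ranking like this turns a bare percentage into a short, auditable list of reasons.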
Engineering for the future
A worthwhile AI model is one that can adapt to change and perpetuate its value well into the future.
After all, data changes. There are new cases, new patients, new codes, new clients, new laws. If you build a model that cannot adapt to what will be demanded of it in the future, you have less a piece of cutting-edge technology, and more a future paperweight.
When devising both of the models that the client is utilizing, our data architects built in a continuing education component.
“We wanted this model to be set up for retraining,” Stein said. “We built it so that, as new data comes in, the client can run a fairly automated process to update the model and how it weighs the inputs, which in turn affects outcomes and predictions. The model itself has built-in resilience.”
This isn’t just a Saxony innovation – it’s an AI best practice.
“If someone is not doing this, they are not doing AI in the right way,” Stein said. “At the end of the day, you want a malleable model that can be retrained.”
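The retraining loop Stein describes can be sketched as periodic refitting on newly labeled outcomes. Below is a toy batch-gradient-descent update for a logistic model – the names, learning rate, and data shape are all illustrative; a production pipeline would also version the model and validate it on held-out cases before promoting it.

```python
from math import exp

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + exp(-z))

def retrain(weights, bias, labeled_claims, lr=0.1, epochs=200):
    """One automated retraining pass: refit logistic weights on newly
    labeled claims, each a (feature_vector, 0/1 outcome) pair."""
    for _ in range(epochs):
        for x, y in labeled_claims:
            pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            err = pred - y
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias
```

Run on a fresh batch of adjudicated claims, this shifts the model's weights toward whatever the newest data says – new codes, new case patterns, new state rules.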
Results and lessons learned
The client is in the process of rolling these predictive models out to their state-based clients – and, in the process, outsourcing a lot of tedious, high-touch work to the algorithms. The result? Time saved, money saved, and accuracy improved.
“As a result of this project, the client can now process hundreds more claims per month than they could previously,” Stein said. “And cases that were sitting idle for months are now being finalized.”
For Stein, this project affirms the centrality of AI in the future of healthcare.
“We hear a lot about AI and its impact on the provider side, the clinical side,” Stein said. “AI reading mammograms, making a diagnosis, and things like that. But just as impactful is what AI can do for healthcare service providers and payers. By improving efficiency and eliminating manual tasks and waste, you’re going to save a lot of money. And our healthcare system could stand to save some money.”
By leveraging technology and data strategy, Saxony Partners helps our healthcare clients improve outcomes, reduce waste and inefficiencies, and drive performance. We live at the intersection of technology and healthcare. Click here to learn more.