
Autumn 2025 Issue

No Longer in Denial
By Susan Chapman, MA, MFA, PGYT
For The Record
Vol. 37 No. 3 P. 10

AI is proving to be a lifesaver for denial management.

Health care organizations are increasingly turning to AI to address the challenges of tighter reimbursement policies and rising denial rates. The focus is especially strong on the back end of the revenue cycle, where automation can streamline claim follow-up and denial management. AI tools are not necessarily replacements for traditional technology but can instead be complementary innovations that enable more advanced functions. Automating routine tasks can reduce the need for human intervention and allow organizations to bring in more revenue with less demand on staff.

According to Teri Gatchel-Schmidt, vice president of consulting and business development with SYNERGEN Health, denials in health care have increased some 64%, which has created bottlenecks in the payment process for many health care organizations. These holdups are often due to increased volume, along with staff and technological limitations, including poor denial code posting.

“Everyone in the health care industry is feeling the effects of these bottlenecks. Because of staff shortages and the increased volume, denials are often being posted quickly, without a denial code/remark code, codes that provide the reason for the denial. Now, when someone wants to run a report or needs to be able to identify what’s happening to their cash flow, they don’t have that visibility,” Gatchel-Schmidt explains. “Not having that visibility into root causes is significant. I’m still surprised how many health care systems and hospitals cannot report down to the actual denial code.”

As hospitals and other providers have integrated workflow automation with AI, they have realized a significant improvement in efficiency, a 400% increase, and are currently processing up to 80,000 appeals each month. Optical character recognition (OCR), used in conjunction with automation, has improved what was previously manual correspondence processing by 173%. Additionally, using predictive analytics, organizations can draw on up to six months of data to forecast denial trends and expected downstream revenue.
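In rough terms, that kind of forecast can be pictured as a trend projection over recent monthly denial counts. The figures, the six-month window, and the straight-line fit in the sketch below are illustrative assumptions, not a description of any vendor's model.

```python
# Minimal sketch: project next month's denial volume from six months of history.
# The counts and the simple linear-trend model are illustrative assumptions only.
import numpy as np

# Hypothetical monthly denial counts for the past six months (oldest first).
monthly_denials = np.array([4200, 4350, 4600, 4550, 4800, 5050])
months = np.arange(len(monthly_denials))

# Fit a straight line to the six data points and extend it one month forward.
slope, intercept = np.polyfit(months, monthly_denials, deg=1)
next_month_forecast = slope * len(monthly_denials) + intercept

# A rough revenue impact, assuming an average billed amount per denied claim.
avg_billed_per_denial = 310.0  # assumed figure
print(f"Forecast denials next month: {next_month_forecast:,.0f}")
print(f"Estimated billed dollars at risk: ${next_month_forecast * avg_billed_per_denial:,.0f}")
```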

“It’s important that organizations resist investing solely in workflow management solutions because what they’re missing is the ability to incorporate automation and AI to complete the task required,” Gatchel-Schmidt says. “We can automate activities and perform similar tasks in bulk. For example, just imagine having identified by payer, 100 of the same denial code combinations, by CPT and diagnosis codes, that require the same response. Using machine learning, we’ve tracked the appropriate fix based on historical trends. Then, using automation, for instance, we can simply perform appeals in bulk with one click vs working each individual claim within the work queue. The efficiency gained in being able to do that is enormous. That has taken workflow management to another level, and it’s a huge time saver.”
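A bare-bones sketch of that grouping idea might look like the following. The claim fields, the fix lookup, and the submit_bulk_appeal helper are hypothetical placeholders rather than SYNERGEN's actual workflow.

```python
# Minimal sketch: group denied claims that share payer, denial code, CPT, and
# diagnosis, then apply one learned fix to the whole group in a single pass.
# Field names, values, and helper functions are hypothetical placeholders.
from collections import defaultdict

denied_claims = [
    {"claim_id": "C1001", "payer": "PayerA", "denial_code": "CO-197",
     "cpt": "99213", "dx": "E11.9"},
    {"claim_id": "C1002", "payer": "PayerA", "denial_code": "CO-197",
     "cpt": "99213", "dx": "E11.9"},
    # ...many more claims in practice
]

# Historical fixes keyed by the same combination (assumed to come from a
# machine-learning model trained on past resolutions).
recommended_fix = {
    ("PayerA", "CO-197", "99213", "E11.9"): "attach_prior_authorization",
}

def submit_bulk_appeal(claims, fix):
    """Placeholder for the automation step that files one appeal per claim."""
    print(f"Appealing {len(claims)} claims with fix '{fix}'")

groups = defaultdict(list)
for claim in denied_claims:
    key = (claim["payer"], claim["denial_code"], claim["cpt"], claim["dx"])
    groups[key].append(claim)

for key, claims in groups.items():
    fix = recommended_fix.get(key)
    if fix:  # only act where a proven fix exists; otherwise leave for a human
        submit_bulk_appeal(claims, fix)
```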

Keys to Success
Some organizations have been more successful than others when implementing AI-based solutions, and experts agree there are fundamental reasons for those differences in success rates. “If you look within the revenue cycle, there are multiple processes—scheduling, prior authorization, verification of benefits, coding, and denial management posting, for example. So, if you look at all these processes, we see different levels of AI adoption within each of them. There are different phases of implementation,” says Sunil Konda, chief product officer and vice president of products at SYNERGEN Health. “Some organizations are still in the discovery-to-pilot stage, and others have AI solutions that are in production. It can have to do with the solutions area, available expertise/skill set, investments, and the organization’s size. So, different organizations are just on different pages.”

Another issue is the technology itself. Generative AI (GenAI), among AI’s latest iterations, can produce original output based on patterns it has learned from historical data.

“GenAI capabilities that are available now open up a whole new realm of being able to work with unstructured data, like user-entered notes, for example,” explains Rick Stevens, chief technology officer at Vispa. “GenAI can make sense of that data, which can be cryptic to a layperson, and parse through what can be a treasure trove of information. For humans to go through that much data would be incredibly time-consuming, but GenAI gives us the tools to extract information, see what the resolution steps were, and learn what does and doesn’t work.”
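As a rough illustration of that extraction step, the sketch below asks a language model to return structured fields from a free-text follow-up note. The note, the prompt, and the call_llm stand-in are assumptions; a real deployment would call whatever GenAI service the organization has approved.

```python
# Minimal sketch: ask a large language model to pull structured facts out of a
# cryptic, free-text billing follow-up note. All names and text are illustrative.
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real GenAI API call. Returns a canned response here so the
    sketch runs end to end; in practice this would call an approved vendor SDK."""
    return json.dumps({
        "issue": "authorization number missing from claim",
        "resolution_step": "resubmit claim with authorization number",
        "reference_numbers": ["88231-A"],
    })

note = "Spk w/ payer 3/14, auth # missing on claim, rep said resubmit w/ auth 88231-A"

prompt = (
    "Extract the following from this billing follow-up note as JSON with keys "
    "'issue', 'resolution_step', and 'reference_numbers':\n\n" + note
)

extracted = json.loads(call_llm(prompt))
print(extracted["resolution_step"])
```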

However, Konda points out, GenAI has its challenges. There are questions regarding the technology’s reliability, how it fits into an organization’s existing processes, and whether certain processes need reengineering as the technology is deployed.

AI agents, which can stand in for humans and perform human-type tasks, represent another recent evolution in AI technology that health care organizations are beginning to adopt. “Agentic AI is becoming a real focus area we’re going to see a lot more of this year,” Stevens predicts. “We could have AI agents calling payers to talk about denials, and there’s no human on the phone. Those voice capabilities, combined with large language models, are so good that you do not know if you’re talking to a human or not. Now, you could have agents taking the place of humans, waiting on hold and then having conversations.

“But it’s likely that we’ll first see AI agents logging into portals, looking things up,” he says. “That’s where AI is powerful. You can give it a task or a goal and the tools it needs, and then it should be able to do those things. That’s what’s really going to revolutionize the industry. At some point, most, if not all, of denial follow-up could be fully automated with the rise of AI agents.”

Oversight Required
Stuart Newsome, CPCO, vice president of revenue cycle management insights for Infinx, notes, however, that users often have the misconception that an organization can use AI without human oversight. “People often believe they can just deploy AI in some fashion and walk away, that it’s going to eliminate the need for human activity,” he says. “But that’s just not the case, and I think that may play into why some health care organizations are succeeding while some aren’t. The organizations seeing the most success are those that treat AI as a collaborative partner—augmenting their teams, not replacing them.”

To Newsome’s point, experts concur that AI is only as good as what it’s been trained to do. When AI fails, it often does so at scale, making it more dangerous than manual inefficiencies. For example, if an AI agent makes a call and the recipient is certain that what the agent says is incorrect, there needs to be follow-up to ensure the issue is handled correctly. That’s why oversight isn’t optional; it’s structural. Without a human or system-of-checks layer in place, errors can compound instead of resolve. One future solution Stevens posits is AI orchestration, in which AI coordinates and monitors other AI.

“Whereas, right now, human oversight is necessary,” he says. “Someone has to be reviewing the work of the AI agents. But in the future, you could have AI that is purely quality control, reviewing in real time what other AI bots are doing and holding them to a standard.”

“Currently, though, the idea is to leverage AI to learn behaviors and patterns that impact denials and understand how to prevent them,” Newsome says. “Providers can often see the same problem over and over on the back end, and it’s costing them money to resubmit and touch claims repeatedly. This is exactly where AI can deliver value today—helping organizations gain the knowledge to prevent denials from happening to begin with. Instead of reacting to denials, organizations can shift left—using insight to scrub claims better upfront, reduce redundancy, and improve first-pass yield.”
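One simple way to picture that shift left is to mine historical denials for recurring payer and code patterns and turn the most frequent ones into pre-submission checks. The data, the threshold, and the warning text below are illustrative assumptions.

```python
# Minimal sketch: turn recurring denial patterns into front-end claim warnings
# so the same problem stops costing money on the back end. Data are illustrative.
from collections import Counter

# Historical denials: (payer, CPT code, denial reason).
denial_history = [
    ("PayerA", "99213", "missing prior authorization"),
    ("PayerA", "99213", "missing prior authorization"),
    ("PayerA", "99213", "missing prior authorization"),
    ("PayerB", "93000", "invalid diagnosis pointer"),
]

MIN_OCCURRENCES = 3  # assumed cutoff for turning a pattern into a front-end check

counts = Counter(denial_history)
prebill_checks = {
    (payer, cpt): reason
    for (payer, cpt, reason), n in counts.items()
    if n >= MIN_OCCURRENCES
}

def scrub(claim: dict) -> list[str]:
    """Warn before submission if this payer/CPT combination keeps denying."""
    reason = prebill_checks.get((claim["payer"], claim["cpt"]))
    return [f"Review before submitting: history of '{reason}' denials"] if reason else []

print(scrub({"payer": "PayerA", "cpt": "99213"}))
```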

Implementing AI
The first step an organization needs to take when implementing AI is to identify the kinds of solutions or use cases it wants the technology to address. It’s important to clearly define the expected benefits before the project begins.

“I think understanding AI itself is important. It’s not a monolithic technology. The components of AI have been there for decades now, from simple machine learning to deep learning, natural language processing, computer vision, and one of the latest iterations—GenAI,” Konda notes. “Trying to understand what kind of technology is needed to solve a particular problem or to get the outcome that you want is key. I think the first step is making sure that is well defined, basically, the use case as well as the benefit.”

Another significant decision point for an organization is determining whether AI will be utilized in copilot mode, similar to what Newsome describes, or fully autonomously. Konda explains, “You might start off with the assumption that you want a fully autonomous AI handling a particular process, but what we’re seeing is that since it’s not 100% accurate, AI works best in a copilot role. It can significantly enhance the productivity or throughput of a person, especially when used as an assistant.”

Konda offers further insights into how a copilot-based approach can help health care organizations find that sought-after sweet spot between humans and technology. “Ideally, you want to work on the claim once and fix it. You don’t want to be going back to the same claim multiple times because every time you touch it, there is an additional cost. AI can help reduce those instances,” he says. “An AI-based solution can help improve quality, as well. When we use an AI solution and analyze results, we can sometimes see an accuracy rate of 90%. The remaining 10% means that someone has to do a manual check, but you’ve still gotten a 90% or so benefit. So, we’re realizing big improvements in productivity through staffing savings, using a copilot approach, especially in denial management.”
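A minimal sketch of that copilot split, assuming a suggested-fix model that returns a confidence score and a 90% cutoff, might look like this; none of the names or numbers come from SYNERGEN's system.

```python
# Minimal sketch of a copilot workflow: the model proposes a fix with a
# confidence score, high-confidence items go to a one-click review queue,
# and the rest drop into a manual work queue. All names and numbers are assumed.
CONFIDENCE_CUTOFF = 0.90

def model_suggest_fix(claim: dict) -> tuple[str, float]:
    """Placeholder for an AI model; returns (suggested fix, confidence)."""
    return "correct place-of-service code", 0.93

def triage(claims: list[dict]) -> tuple[list, list]:
    assisted, manual = [], []
    for claim in claims:
        fix, confidence = model_suggest_fix(claim)
        target = assisted if confidence >= CONFIDENCE_CUTOFF else manual
        target.append({**claim, "suggested_fix": fix, "confidence": confidence})
    return assisted, manual

assisted_queue, manual_queue = triage([{"claim_id": "C3001"}, {"claim_id": "C3002"}])
print(len(assisted_queue), "claims ready for one-click review,",
      len(manual_queue), "routed to full manual work")
```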

Defining Data
Data readiness is an essential consideration and potential challenge, as well. Because organizations have enormous amounts of data, they have to ensure that the data used for training and validation is clean, properly vetted, and applicable.

“When you operationalize the technology, you also have to ensure that the model you trained is still relevant to the current data,” Konda says. “Therefore, the information itself is a major hurdle. If you look at machine learning and deep learning, a lot depends on the quality of the data used to train the models. Unless you’ve spent a lot of time making sure you have the right data, you might not get the right outcomes when you deploy those solutions.”
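A very simple version of that "is the model still relevant" check is to compare the mix of denial codes in current claims against the mix the model was trained on. The field choice, tolerance, and data below are assumptions for illustration.

```python
# Minimal sketch: flag drift when the mix of denial codes in current claims no
# longer resembles the mix the model was trained on. Field and tolerance assumed.
from collections import Counter

def code_mix(claims):
    counts = Counter(c["denial_code"] for c in claims)
    total = sum(counts.values())
    return {code: n / total for code, n in counts.items()}

def drifted(training_claims, current_claims, tolerance=0.10):
    train, current = code_mix(training_claims), code_mix(current_claims)
    codes = set(train) | set(current)
    return any(abs(train.get(c, 0) - current.get(c, 0)) > tolerance for c in codes)

# Illustrative data only.
training = [{"denial_code": "CO-197"}] * 70 + [{"denial_code": "CO-16"}] * 30
current = [{"denial_code": "CO-197"}] * 40 + [{"denial_code": "CO-16"}] * 60
print("Retraining review needed:", drifted(training, current))
```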

Gatchel-Schmidt agrees and underscores how important it is to have accurate data, particularly on the front end. “A lot of time is spent creating rule engines, which should be done correctly to scrub the claims on the front end before they actually go out. That’s what’s going to give a health care organization a much better first-time payment rate,” she explains. “It’s going to help get the claims out the door and speed cash flow. Being able to understand what is actually kicking out on the back end may be directly related to what’s happening on the front end. Using technology to help scrub on the front end to produce a clean claim is critical. That said, we can also use the technology efficiently downstream, analyzing claim rejections and identifying where there are patterns. We can use AI to help us streamline our processes and create fixes, or we can identify what the root causes are and decide what our next steps should be.”

Other important aspects of implementing AI-based solutions successfully include understanding operational challenges. For example, organizations need to ensure that staff have received adequate training and understand how the solution fits into their operational workflow. “The simple process of going through change control is one of the dimensions that can derail or cause issues when implementing these AI solutions,” Konda says.

The Future of AI
Health care organizations face the challenge of balancing innovation with the industry’s rigorous privacy, audit, and regulatory standards. Security and compliance should be among the first design principles addressed when choosing AI-based solutions, and Konda says hospitals and providers also need to know that the solutions they plan to utilize within the organization follow the same compliance requirements. “You must ensure that the new solutions you’re deploying are fully compliant with HIPAA, HITRUST, or any other relevant compliance standards enforced by the organization,” he says. “Especially with GenAI solutions like ChatGPT, Gemini, or Llama 3, where some of the data may be externalized, it’s critical to have the right safeguards and ensure compliance from the outset. Most of the larger organizations now have security standards in place. So, whenever there is a new application or new solution that’s brought into the organization, be it internally or externally, it goes through compliance checks.”

Many AI models, such as ChatGPT or Gemini, run in the cloud. Konda says that if a health care organization plans to use cloud-based AI technology as part of its solution, it has to ensure that it protects the information going to the cloud or has a business associate agreement with cloud providers. “Making sure that cloud-based AI is compliant is critical. We also recommend having regular security audits, annually or biannually, of all the solutions to ensure their ongoing compliance,” he adds.
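One concrete safeguard along those lines is to strip obvious identifiers from free text before it leaves the organization. The sketch below is only an illustration; production de-identification would rely on a vetted tool and would still be covered by a business associate agreement.

```python
# Minimal sketch: strip obvious identifiers from a note before it leaves the
# organization. Patterns here are illustrative; real de-identification should
# use a vetted tool and remain covered by a business associate agreement.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # SSN-like patterns
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),    # dates
    (re.compile(r"\bMRN[: ]?\d+\b", re.IGNORECASE), "[MRN]"),  # record numbers
]

def redact(text: str) -> str:
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

note = "Pt DOB 4/12/1958, MRN 882341, denied 3/14/2025 for missing auth."
print(redact(note))
```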

Gatchel-Schmidt expects that forecasting trends in denial management through predictive analytics will continue to be a significant benefit of AI implementation. “By having this information, we can find the most efficient ways to correct denial issues,” she says.

Additionally, by tracking denial patterns and successful fixes, Gatchel-Schmidt says organizations can create a customized denial playbook. “From this playbook, we can determine where we can automate tasks, according to payer, CPT, diagnosis, and denial codes. Technology monitors every single claim, all the time, and now we know immediately when something has changed. We don’t have to wait 30, 60, or 90 days,” she says. “Also important is that, with the implementation of the playbook, we’re seeing staff adherence to the correct fix of 98% to 99%, which is very rare. It’s all really exciting, and I think that we’ll start to see more use cases for AI to be applied, which is going to help us continue to navigate challenges across revenue cycle.”
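In code terms, such a playbook can be pictured as a lookup keyed by payer, CPT, diagnosis, and denial code, with an alert when a new combination appears. The entries and the alert rule below are illustrative assumptions.

```python
# Minimal sketch of a denial playbook: a lookup keyed by (payer, CPT, diagnosis,
# denial code) that returns the proven fix, plus an alert when a new combination
# appears. All entries are illustrative assumptions.
playbook = {
    ("PayerA", "99213", "E11.9", "CO-197"): "resubmit with prior authorization",
    ("PayerB", "93000", "I10",   "CO-16"):  "correct diagnosis pointer and rebill",
}

def handle_denial(denial: dict) -> str:
    key = (denial["payer"], denial["cpt"], denial["dx"], denial["denial_code"])
    fix = playbook.get(key)
    if fix is None:
        # New pattern: surface it immediately instead of waiting 30, 60, or 90 days.
        return f"ALERT: no playbook entry for {key}; route to analyst"
    return f"Apply fix: {fix}"

print(handle_denial({"payer": "PayerA", "cpt": "99213", "dx": "E11.9",
                     "denial_code": "CO-197"}))
```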

— Susan Chapman, MA, MFA, PGYT, is a Los Angeles–based freelance writer, editor, and author.