E-News Exclusive
By Elizabeth S. Goar
In February 2026, CMS awarded ePathUSA a $4.28 million contract for a two-year pilot program testing the viability of AI for medical record review (MRR). The program, part of CMS’s push to modernize its medical records management infrastructure, is meant to evaluate the role of optical character recognition, machine learning, natural language processing, and other AI tools in enhancing medical documentation management and processing.
The CMS AI MRR pilot offers several anticipated benefits, says Alexandra Moylan, an attorney, shareholder, and member of Baker Donelson’s Health Law Group and Data Protection, Privacy, and Cybersecurity Team. While AI-enabled MRR has the potential to make Medicare oversight faster, more consistent, and more sustainable for payers, providers, and patients, “HIPAA compliance considerations arise when sensitive medical records are processed through third-party AI systems,” she says.
The Promise of AI MRR
Noting that the benefits of AI MRR “will depend on the extent to which patients, providers, and payers can understand how AI-assisted decisions are made and have mechanisms to seek review when questions arise,” Moylan anticipates benefits across the health care spectrum.
Melissa Levine, a partner with Hogan Lovells, agrees that the benefits of AI-enabled record review, such as accelerated reviews and enhanced efficiencies, can be “quite significant in terms of increased efficiency and reducing overall health care costs.”
Additionally, while it does not fully eliminate humans from the loop, AI MRR does free up personnel for higher-level tasks, she adds. AI has also been shown to catch issues that humans miss and flag higher-risk patients.
“That’s all extremely beneficial to the health care system,” Levine says. “In terms of payment … it can help doctors make sure they're billing with the right codes [and] make sure that they are able to maximize what they are filing in terms of claims. Then, on the flip side … AI is helping [payers] do a more careful and thorough review of the claims that are submitted for payment.”
AI MRR is not without risks, however. “It's the same sorts of risks that we have with AI generally, but given the sensitive nature of medical information, it’s amplified,” Levine says. “There are always risks that AI gets it wrong, and that happens a lot. Someone told me that ‘AI’ stands for ‘almost intelligent,’ which I like, as it sums up the current state of AI.”
HIPAA Implications
HIPAA concerns aren’t unique to the AI MRR pilot, but rather “apply broadly to any health care AI deployment that processes protected health information,” Moylan says, noting four general areas of concern.
Moylan points to the December 2025 HHS AI Strategy and the CMS AI Playbook, which emphasize governance and risk management and require predeployment testing and AI impact assessments for “high impact” AI applications that could significantly affect health outcomes or individual rights.
“These federal frameworks signal that robust HIPAA compliance will be integral to any sanctioned health care AI deployment,” she says.
Melissa Soliz, a partner with Coppersmith Brockelman, PLC, agrees that using AI to review medical records poses unique risks to the privacy, security, and integrity of the records and the output generated from them. The key questions, she says, are whether CMS and its prime contractor “are ensuring that the AI vendors and tools being used are limiting the use of PHI to the minimum amount necessary, there are HIPAA-compliant business associate agreements in place, there are security measures in place to protect against novel security threats, and there are protocols in place to explain and surface the basis for coverage denials.”
Michael Sutton, an associate with Sheppard, says a particular concern arises from the fact that AI is, “at its core, a voracious consumer of information. It learns by ingesting enormous quantities of data in an attempt to replicate human decision making.”
While HIPAA draws “a hard line against using PHI for product development or other commercial purposes,” it does permit its use to train AI models with the patient’s authorization. “Simple enough in concept, but operationally challenging, as obtaining patient consents at scale is a significant lift. Compounding the problem is the pace of AI development itself,” Sutton says.
Companies are racing to market, and “the fastest path to a trained model involves acquiring access to preexisting troves of data,” he notes. “Of course, that data was collected before AI was on anyone’s radar, which means the necessary consents were likely never obtained.”
Mitigating the HIPAA Risks
While the pilot program’s prime contractor, ePathUSA; its partner on the project, baysys.ai; and CMS did not respond to interview requests, Baker Donelson’s Moylan says that publicly available information indicates their AI infrastructure is designed for transparency, producing auditable decision trails that humans can validate.
HIPAA concerns are also addressed by broader policy frameworks established by CMS and HHS, including the December 2025 HHS AI Strategy, which created a “OneHHS” approach with an AI Governance Board and mandatory risk management for high-impact AI applications.
“For organizations deploying AI MRR solutions, practical implementation should align with the organization’s existing AI governance and risk management policies,” Moylan says.
She recommends that organizations take several key compliance steps.
“These steps are achievable but require meaningful compliance investment,” she says.
Levine points out that, as a HIPAA-covered entity, Medicare has specific obligations regarding data privacy, as will any vendors acting on its behalf or at its direction. Beyond that, she says, “we want to make sure that we understand the privacy and security controls [the AI] vendor has in place to protect the data that they will have access to and be utilizing.”
It is also important to understand how the AI tool will function. For example, will it retain PHI? Will it be used to train or retrain the model? Does it have generative AI features that could surface one patient’s PHI in response to a query about another patient?
“Those are all additional considerations that are relative risks on top of the general HIPAA privacy and security ones for an AI tool,” Levine says.
Conclusion
The intersection of AI innovation and health care privacy regulation is evolving rapidly, Moylan says. How CMS structures oversight, transparency, and accountability mechanisms in the AI MRR pilot, and how it balances efficiency gains against privacy and due process considerations, will likely shape federal rulemaking and industry standards for AI-assisted health care administration more broadly.
“Stakeholders should monitor how the pilot addresses algorithmic transparency, appeals processes for AI-flagged claims, and the adequacy of existing HIPAA frameworks for large-scale AI processing of protected health information,” she says.
— Elizabeth S. Goar is a freelance health care writer based in Wisconsin.