Home  |   Subscribe  |   Resources  |   Reprints  |   Writers' Guidelines

E-News Exclusive

AI and Health Care Cybersecurity Risk

By Elizabeth S. Goar

The downside of AI’s transformative impact on the health care landscape is its effect on provider organizations’ cybersecurity risk profiles. Cybercriminals are following in health care’s footsteps and using AI to improve their own workflows, e.g., automating reconnaissance, crafting more effective phishing messages, and altering attack vectors.

And both sides are honing their weapons: The bad guys can now use AI to alter code in real time to avoid detection, while the good guys are working on entropy-based anomaly detection to flag early stages of ransomware encryption. As a result, cybersecurity is now about pitting adaptive algorithms against adaptive defenses.
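
The entropy idea is simple: encrypted bytes look statistically random, so a sharp jump in the Shannon entropy of files being written can flag encryption in progress. Below is a minimal Python sketch; the 7.5 bits-per-byte threshold and the single-sample check are illustrative assumptions, not a production design.

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte, from 0.0 (constant) to 8.0 (random)."""
        if not data:
            return 0.0
        total = len(data)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(data).values())

    # Illustrative threshold: well-encrypted data scores near 8 bits/byte,
    # plain text and most documents far lower.
    SUSPICIOUS_ENTROPY = 7.5

    def looks_encrypted(path: str, sample_size: int = 4096) -> bool:
        """Flag a file whose leading bytes are near-random, as freshly
        ransomware-encrypted files tend to be."""
        with open(path, "rb") as f:
            sample = f.read(sample_size)
        return shannon_entropy(sample) >= SUSPICIOUS_ENTROPY

In practice, defenders watch for entropy spikes across many files in a short window, since compressed images and video are legitimately high-entropy and a single reading proves little.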

“Health care organizations face a convergence of challenges that make AI-driven breaches harder to prevent, including increasingly sophisticated social engineering attacks that are more convincing, faster, and highly targeted,” says Eric O’Neill, founder of Nexasure AI and The Georgetown Group and the former FBI operative who brought down the nation’s first cyber spy, Robert Hanssen.

A recent Censinet survey revealed that 92% of health care organizations experienced AI-related cyberattacks in 2024. LevelBlue’s 2025 Spotlight Report found that just 29% of health care leaders surveyed believed their systems were ready for AI-driven threats, despite 41% expecting targeted attacks in the near future.

Nefarious actors are using AI to up their social engineering game, with increasingly sophisticated methods of identifying targets and crafting realistic phishing emails, smishing text messages, and vishing voice calls to steal credentials and gain system access. Once inside, “most ransomware cybercriminals conduct double extortion attacks that both steal data and encrypt systems.

“The theft is used for leverage to make an organization pay for the decryption keys. Bad guys know that resilient organizations deploy backup solutions. Stealing the information before encrypting it with ransomware ultimately becomes extortion,” O’Neill says.

Social engineering exploits hospitals’ most common vulnerabilities: help desks and identity and access management systems, legacy/unsupported systems, failure to enable two-factor authentication, third-party connectivity to critical data, and “always-on” clinical devices.

AI automates the discovery of undocumented shadow application programming interfaces (APIs) hidden within complex medical record and patient portal systems, says Eric Schwake, director of cybersecurity strategy at Salt Security.

“Attackers use these tools to carry out ‘low-and-slow’ attacks, disguising themselves as legitimate traffic to find vulnerabilities without activating traditional security defenses,” he says. “This creates a critical visibility gap, where sensitive patient data can be stolen through unmonitored connections that are often overlooked by standard security measures.”
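
One practical countermeasure is to diff the endpoints actually appearing in traffic against the endpoints security has documented. The Python sketch below assumes a gateway access log with one JSON record per line and an endpoint inventory exported to JSON; the file names, log format, and inventory are hypothetical stand-ins.

    import json
    import re

    # Hypothetical inputs: documented endpoints (e.g., exported from an
    # OpenAPI spec, with IDs already templated as {id}) and a gateway log.
    with open("documented_endpoints.json") as f:
        documented = set(json.load(f))  # e.g., {"/api/v1/patients/{id}", ...}

    def normalize(path: str) -> str:
        """Collapse numeric IDs so /api/v1/patients/123 matches /api/v1/patients/{id}."""
        return re.sub(r"/\d+", "/{id}", path)

    observed = set()
    with open("gateway_access.log") as f:
        for line in f:
            observed.add(normalize(json.loads(line)["path"]))

    for path in sorted(observed - documented):
        print(f"undocumented endpoint seen in live traffic: {path}")

Endpoints that show up in live traffic but not in the inventory are exactly the shadow APIs Schwake describes, and each one needs an owner, documentation, or a block.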

Risks increase as vendor ecosystems expand, creating data sprawl and making it difficult to track ePHI. Shadow AI compounds these risks, as staff turn to consumer AI tools when approved enterprise solutions are slow or unavailable, increasing the likelihood of unintended data leakage.

Ultimately, however, threat actors can’t compromise what they can’t reach. In that respect, provider organizations are often their own worst enemies.

Rapid AI Adoption Accelerates Risk
“When hospitals and health systems rush to integrate AI tools in their workflows, they are unknowingly implementing tools that are beyond the maturity of their security systems. This, in turn, results in a larger digital attack surface than traditional tools,” says Pujitha Gourabathini, a quality assurance manager in the risk management department at Becton, Dickinson and Company, a global medical technology company.

AI tools’ opacity and lack of explainability can lead to unfounded assumptions about their safety. They also introduce a further layer of vulnerability when security risks are not properly assessed or mitigated in the rush to adopt. Moreover, labeling some AI tools as “productivity enhancers” can foster the misconception that they don’t warrant the same rigor as a regulated device or clinical decision support system.

“The major gap is the gray area where AI is being treated as a convenience feature [rather] than being assessed as a safety-critical technology that touches patient data and clinical workflows,” Gourabathini says.

Lane Sullivan, senior vice president and chief information security and strategy officer at Concentric AI, adds that the issue isn’t AI, but rather “how quickly sensitive data is connected to new tools without clear ownership, access boundaries, or visibility into downstream use. Organizations have failed at implementing effective data governance, and that puts ingesting any data into AI at higher risk.”

The complex nature of health care environments exacerbates the challenge. From massive data sprawl and limited security staffing to competing clinical priorities, “the biggest challenge is visibility: organizations often don’t know where sensitive data lives, how it’s being accessed, or how AI tools are interacting with it,” Sullivan says.

Zachary Lewis, CIO at the University of Health Sciences and Pharmacy, adds that “the biggest problem is not necessarily that AI is insecure, [it’s] that the underlying data itself isn’t properly secured with the proper permissions and access. That’s leading to a higher risk of data breaches.”

Internal Factors
Other internal factors accelerating risk include inadequate controls and insufficient staff training. The industry’s aversion to downtime is problematic, leading to missed patches and delayed updates. Shadow AI is also a significant threat.

“You can say ‘don’t use it,’ but everyone will anyway,” Lewis says. “If you don’t have proper controls, or you don’t know where data is going, it can lead to breaches.”

Even small missteps—a misconfigured API or internal policies that permit overly broad access—can quickly lead to disaster. AI tools that move data between EHRs, imaging systems, billing platforms, and third-party models expose potential attack points in an environment where highly sensitive data is critical to daily operations, Schwake says.

A single misstep can turn “what used to be isolated incidents into widespread problems,” he says. “AI-driven attacks often come from trusted sources, such as compromised accounts, insiders, or rogue automation. This makes traditional defenses and basic authentication less effective. Hospitals also struggle to see where sensitive data flows, especially when AI tools are added quickly to improve efficiency or care. Combine this with staffing shortages, tight budgets, and the complexity of securing AI-generated code or third-party services, and many organizations end up reacting to risks rather than actively preventing them.”

Disruption avoidance, particularly with EHRs and other clinical systems, is another weak point. Delayed updates and patches create vulnerabilities that threat actors can exploit by using AI to generate a list of ways to compromise a system, Lewis says.

“AI is being used not so much to attack systems, but to give [threat actors] ideas on how to go about doing so,” he says.

New Rules of Engagement
AI has created a world in which trust is hard to come by. A 2025 Black Book Research survey of chief information security officers found that 74% view AI, EHR, and cloud vendors as their top emerging cyber risk, and 91% believe their current third-party risk management practices are inadequate for the complexity of modern digital health and AI environments. Their fears are not unfounded, with 63% reporting a vendor-linked incident in the previous 24 months.

“Hospitals are especially vulnerable because they run very complex systems that combine legacy technology, modern cloud platforms, medical devices, and outside vendors,” Schwake says. “Weak spots often include EHR integrations, patient portals, scheduling and billing systems, and APIs connecting clinical systems to analytics or AI tools.”

Further, with AI, attackers can exploit connected medical devices more quickly and efficiently, creating risks that go beyond mere data exposure to endanger patient safety, for example, by changing dosage parameters without authentication, Gourabathini says.

“Infusion pumps are especially vulnerable because they are in every hospital/health care system and connected right from the patient bedside to servers to EHRs,” she says.

Hospital data is increasingly being ingested into AI systems for model training or analytics, creating additional risk. “These AI data stores frequently fall outside traditional security monitoring, creating a third-party risk surface that is attractive to attackers,” Sullivan says.

Data Protection
The first step to protection, he says, is “data awareness: knowing where sensitive data exists, who can access it, and how it flows across systems and AI tools.”

Also, establish policies designed to reduce unnecessary access to sensitive data through basic data hygiene and governance, which, Sullivan says, “often delivers more risk reduction than deploying expensive point tools.”

To protect against shadow APIs, Schwake says a unified inventory and real-time visibility into APIs is essential and should cover their roles in digital transformation and as communication channels for AI agents and Model Context Protocol servers.

“To counter modern threats, health care providers must adopt AI-powered behavioral detection to identify complex malicious intent and block autonomous attacks, while also maintaining immutable audit logs for compliance proof,” he says.

O’Neill advises providing an approved, controlled enterprise IT environment with logging, data loss prevention, tenant isolation, retention controls, and a clear prohibition on entering ePHI into public AI tools. Multifactor authentication should be enforced across all systems, as should least-privilege access, rapid offboarding, and tight management of privileged accounts. Stricter vendor access controls, hardened email defenses, and regular tabletop incident-response exercises aligned to clinical downtime procedures are equally critical, he says.

“These measures are achievable even for smaller organizations if treated as nonnegotiable priorities; the limiting factor is typically execution capacity—time and personnel—rather than a lack of understanding of what needs to be done,” O’Neill says.

Also critical are backups, which should be tested and stored offsite, Lewis says. He also recommends network segmentation to separate systems that hold sensitive data.

“Also do data mapping and classifications … If you can get in there and really see where data is and who’s accessing it, and when, where, and why, it gives you a much better idea of what’s going on,” he says.
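
A first pass at that kind of data mapping can be as modest as walking file shares and recording where PHI-like content lives and when it was last touched. In the rough Python sketch below, the share path and the SSN-shaped pattern are stand-in assumptions; a real program would feed a classification catalog rather than print to the console.

    import os
    import re
    from datetime import datetime, timezone

    # SSN-shaped strings as a crude PHI hint; real classifiers use richer signals.
    PHI_HINT = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

    def map_sensitive_files(root: str):
        """Yield (path, last-modified) for files whose first 64 KB look PHI-like."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        if PHI_HINT.search(f.read(65536)):
                            mtime = datetime.fromtimestamp(
                                os.path.getmtime(path), tz=timezone.utc)
                            yield path, mtime
                except OSError:
                    continue  # unreadable file; skip it

    for path, mtime in map_sensitive_files("/mnt/shared"):  # hypothetical share
        print(f"{path}  last modified {mtime:%Y-%m-%d}")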

While many recommendations come with hefty price tags and resource requirements, there are highly effective, lower-cost measures to reduce the risk of AI-enabled breaches, such as the following:

  • enable phishing-resistant multifactor authentication wherever possible;
  • apply conditional access for remote and administrative logins;
  • fully leverage built-in endpoint detection and response, antivirus, patching, and email security tools already included in most enterprise platforms;
  • disable legacy authentication;
  • implement basic data loss prevention rules to block Social Security numbers and ePHI from leaving via webmail or chat (a rough sketch follows this list);
  • restrict public AI tool access on managed devices;
  • establish clear AI acceptable-use standards;
  • maintain an inventory of all connected devices; and
  • reinforce standards through brief, recurring training.
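
As a rough illustration of the data loss prevention item above, even a pattern-based egress filter catches the most obvious leaks. The patterns and the message hook below are simplified assumptions; commercial DLP layers validation logic and exact-data matching on top of rules like these.

    import re

    # Illustrative patterns only; the MRN format varies by institution.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    MRN_PATTERN = re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE)

    def block_outbound(message_body: str) -> bool:
        """Return True if an outbound webmail/chat message should be blocked."""
        return bool(SSN_PATTERN.search(message_body) or
                    MRN_PATTERN.search(message_body))

    # Example: this message would be stopped at the egress point.
    assert block_outbound("Pt SSN 123-45-6789, MRN: 0048122, discharged today")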

“For smaller organizations, [a risk] assessment pays for itself in terms of better controls that reduce risk, lower the chance of a breach, improve resilience, and lower cybersecurity insurance costs,” O’Neill says.

Finally, treat AI as a data-handling system, “not a gimmick to work faster,” he says. “The fastest way to lose control is to block employees from modern tools and then act surprised when they use public/free AI. Provide a safe, sanctioned AI option, pair it with DLP + logging, and you’ll cut shadow IT while getting the productivity upside—without donating patient data to the internet.”

— Elizabeth S. Goar is a freelance health care writer based in Benton, Wisconsin.