PATH Develops Ethical Guidelines on the Use of AI in Health Care

PATH, the Partnership for Artificial Intelligence, Telemedicine and Robotics in Healthcare (www.pathhealth.com), has developed a set of guidelines for developing and implementing artificial intelligence applications in health care. "The principles were created to help developers and health care professionals assure patients and the public that the emerging use of artificial intelligence in health care will always be dedicated to providing safe, equitable, and highest quality services," says Jonathan Linkous, cofounder and CEO of PATH.

The principles include the following:

  1. First Do No Harm: A guiding principle for both humans and health technology is that, whatever the intervention or procedure, the patient's well-being is the primary consideration.
  2. Human Values: Advanced technologies used to deliver health care should be designed and operated to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  3. Safety: AI systems used in health care should be safe and secure for patients and providers throughout their operational lifetime, verifiably so where applicable and feasible.
  4. Design Transparency: The design and algorithms used in health technology should be open to inspection by regulators.
  5. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  6. Responsibility: Designers and builders of all advanced health care technologies are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  7. Value Alignment: Autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  8. Personal Privacy: Safeguards should be built into the design and deployment of health care AI applications to protect patient privacy including their personal data. Patients have the right to access, manage, and control the data they generate.
  9. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
  10. Shared Benefit: AI technologies should benefit and empower as many people as possible.
  11. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  12. Evolutionary: Given constant innovation and change affecting devices and software, as well as advances in medical research, advanced technologies should be designed in ways that allow them to change in conformance with new discoveries.

The principles were developed by members of PATH with additional guidance from other leaders in health care and have incorporated parts of existing statements such as the Asilomar AI Principles and the Hippocratic Oath.

PATH is an alliance of stakeholders working together to improve care and build efficiencies using advanced technologies. PATH and its members are working to gain the support of decision makers and the public for the use of advanced technology in health care, to move the field beyond research and pilot projects, and to lay out a pathway for the integration and use of advanced technologies in the worldwide ecosystem of medicine.

— Source: PATH