Organization Develops Safety Guidelines for Artificial Intelligence in Healthcare

The Partnership for Artificial Intelligence, Telemedicine and Robotics in Healthcare is working to shape how robotics are used.

The innovation and adoption of automation, robotics, and artificial intelligence (AI) in healthcare settings lag behind other industries, according to the Partnership for Artificial Intelligence, Telemedicine and Robotics in Healthcare (PATH).

The membership-based alliance is looking to shape the adoption of new technologies in healthcare on a global scale.

"With spiraling costs, increased need and decreasing resources, and rapidly advancing new technologies and participants, healthcare lags significantly behind," PATH states on its website. "But innovation alone does not equal adoption and use, especially in healthcare. Innovations must gain the support of a spectrum of decision-makers – healthcare providers, regulators, payers, and consumers."

In order to advance the use of technology in global healthcare, PATH recently released a set of principles for developing and implementing artificial intelligence applications in the industry. The organization's goal is to improve care and build efficiencies using advanced technologies.

"The principles were created to help developers and healthcare professionals assure patients and the public that the emerging use of artificial intelligence in healthcare will always be dedicated to providing safe, equitable and highest quality services," said Jonathan Linkous, co-founder and CEO of PATH.

Members of PATH, with additional guidance from other healthcare leaders, incorporated elements of existing statements such as the Asilomar AI Principles and the Hippocratic Oath.

The principles include:

  1. First Do No Harm: A guiding principle for both humans and health technology is that, whatever the intervention or procedure, the patient's well-being is the primary consideration.
  2. Human Values: Advanced technologies used to deliver healthcare should be designed and operated to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  3. Safety: AI systems used in healthcare should be safe and secure for patients and providers throughout their operational lifetime, verifiably so where applicable and feasible.
  4. Design Transparency: The design and algorithms used in health technology should be open to inspection by regulators.
  5. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  6. Responsibility: Designers and builders of all advanced healthcare technologies are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  7. Value Alignment: Autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  8. Personal Privacy: Safeguards should be built into the design and deployment of healthcare AI applications to protect patient privacy, including personal data. Patients have the right to access, manage, and control the data they generate.
  9. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
  10. Shared Benefit: AI technologies should benefit and empower as many people as possible.
  11. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  12. Evolutionary: Given constant innovation and change affecting devices and software, as well as advances in medical research, advanced technologies should be designed in ways that allow them to change in conformance with new discoveries.

Additional information can be found on the PATH website.
