
Organization Develops Safety Guidelines for Artificial Intelligence in Healthcare

July 10, 2019
The Partnership for Artificial Intelligence, Telemedicine and Robotics in Healthcare is working to shape how robotics are used.

The innovation and adoption of automation, robotics, and artificial intelligence (AI) in healthcare settings lags behind other industries, according to the Partnership for Artificial Intelligence, Telemedicine and Robotics in Healthcare (PATH).

The membership-based alliance is looking to shape the adoption of new technologies in healthcare on a global scale.

"With spiraling costs, increased need and decreasing resources, and rapidly advancing new technologies and participants, healthcare lags significantly behind," PATH states on its website. "But innovation alone does not equal adoption and use, especially in healthcare. Innovations must gain the support of a spectrum of decision-makers – healthcare providers, regulators, payers, and consumers."

In order to advance the use of technology in global healthcare, PATH recently released a set of principles for developing and implementing artificial intelligence applications in the industry. The organization's goal is to improve care and build efficiencies using advanced technologies.

"The principles were created to help developers and healthcare professionals assure patients and the public that the emerging use of artificial intelligence in healthcare will always be dedicated to providing safe, equitable and highest quality services," said Jonathan Linkous, co-founder and CEO of PATH.

Members of PATH, with additional guidance from other healthcare leaders, incorporated elements of existing statements such as the Asilomar AI Principles and the Hippocratic Oath.

The principles include:

  1. First Do No Harm: A guiding principle for both humans and health technology is that, whatever the intervention or procedure, the patient's well-being is the primary consideration.
  2. Human Values: Advanced technologies used to deliver healthcare should be designed and operated to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  3. Safety: AI systems used in healthcare should be safe and secure for patients and providers throughout their operational lifetime, verifiably so where applicable and feasible.
  4. Design Transparency: The design and algorithms used in health technology should be open to inspection by regulators.
  5. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  6. Responsibility: Designers and builders of all advanced healthcare technologies are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  7. Value Alignment: Autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  8. Personal Privacy: Safeguards should be built into the design and deployment of healthcare AI applications to protect patient privacy, including personal data. Patients have the right to access, manage, and control the data they generate.
  9. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
  10. Shared Benefit: AI technologies should benefit and empower as many people as possible.
  11. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  12. Evolutionary: Given constant innovation and change affecting devices and software, as well as advances in medical research, advanced technologies should be designed in ways that allow them to change in conformance with new discoveries.

Additional information can be found on the PATH web site.

About the Author

Stefanie Valentic

Stefanie Valentic was formerly managing editor of EHS Today, and is currently editorial director of Waste360.

