
What the FDA and ONC Have Said About AI in Healthcare

US government organizations responsible for ensuring that healthcare products are safe and effective have proposed rules and are soliciting industry feedback on artificial intelligence and machine learning.

Bill Reid, Office of the CISO

December 5, 2023


In the United States, the Food and Drug Administration (FDA) is responsible for ensuring that healthcare products are safe and effective. Similarly, the Office of the National Coordinator for Health Information Technology (ONC) leads the government's health IT efforts and promotes standards-based information exchange in the healthcare industry.

As new technologies like AI and machine learning play greater roles in healthcare, these organizations are issuing proposed guidelines and inviting public feedback regarding how these technologies will impact patient safety and treatment efficacy.

AI Transparency in Healthcare

In science and healthcare, we want to know why something works and how it works. Science is about explainability and understanding. With that in mind, it makes sense to expect transparency from the tools we use to provide care. It is vital that they can be understood, both in how they are created and in what they recommend.

The FDA has released a discussion paper and is collecting feedback from the industry to inform the regulatory framework it will apply to healthcare AI. The discussion paper focuses on three main areas:

Human-led governance, accountability, and transparency. The FDA stresses the value of having humans involved at each stage of AI model development and deployment to aid in accountability and ensure the model is adhering to legal and ethical guidelines.

Quality, reliability, and representativeness of data. Since an AI model is only as good as the data used to train it, the quality and appropriateness of that training data is vital. Consistent inputs should produce consistent results, and the training data should be sufficiently representative of the intended patient population.

Model development, performance, monitoring, and validation. There are two primary factors to consider when evaluating the risk inherent in using an AI model in healthcare decision making: the model influence (how much weight is given to the model's recommendation in making a decision) and decision consequence (how significant the potential consequences of an incorrect decision may be).
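To make those two factors concrete, here is a minimal sketch of how an organization might score an AI tool against both dimensions and combine them into a coarse risk tier. The scales, labels, and thresholds below are illustrative assumptions for discussion, not values defined by the FDA.

```python
from enum import IntEnum

class ModelInfluence(IntEnum):
    """How much weight the model's output carries in a decision (illustrative scale)."""
    INFORMS = 1    # one input among many, reviewed by a clinician
    DRIVES = 2     # primary basis for the decision, with human confirmation
    AUTOMATES = 3  # acted on with little or no human review

class DecisionConsequence(IntEnum):
    """How severe an incorrect decision could be for the patient (illustrative scale)."""
    MINOR = 1      # e.g., scheduling or administrative impact
    SERIOUS = 2    # e.g., delayed or suboptimal treatment
    CRITICAL = 3   # e.g., potentially life-threatening outcome

def risk_tier(influence: ModelInfluence, consequence: DecisionConsequence) -> str:
    """Combine the two factors into a coarse risk tier (hypothetical thresholds)."""
    score = influence * consequence
    if score >= 6:
        return "high"    # warrants the most rigorous validation and monitoring
    if score >= 3:
        return "medium"
    return "low"

# Example: a model that drives a treatment decision with serious consequences
print(risk_tier(ModelInfluence.DRIVES, DecisionConsequence.SERIOUS))  # -> "medium"
```

In practice, an organization would map tiers like these to correspondingly rigorous validation, monitoring, and human-oversight requirements.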

The FDA has also discussed many specific uses of AI in healthcare in more detail, but the three areas of focus outlined above provide a basic framework for considering and evaluating AI tools in the field.

New ONC Rules for DSIs

The ONC is responsible for certifying electronic health records (EHRs), which are vital to the way the healthcare industry operates. A newly proposed rule known as ONC HTI-1 contains a number of provisions designed to modernize the agency's approach to EHRs and healthcare data management. Though it doesn't address AI/ML explicitly, healthcare organizations should understand that its adjustments to the rules around decision support interventions (DSIs) have significant implications for these new technologies.

The proposed rules define DSIs as "technology intended to support decision-making based on algorithms or models that derive relationships from training or example data and then are used to produce an output or outputs related to, but not limited to, prediction, classification, recommendation, evaluation, or analysis." The new rules specifically set out to make sure that models being used in this way are "fair, appropriate, valid, effective and safe" — or FAVES for short. This would be achieved through increased data transparency requirements and public disclosure regarding how certified health IT developers manage risks related to DSI models.

The ONC is also deeply concerned with equity and fairness: model bias should be avoided, and training data should be broadly sampled and appropriate for the population the model serves. The agency likewise wants clarity and transparency about which input data a model uses to make a recommendation. These ONC concerns align closely with the FDA's focus on "safety and effectiveness," underscoring the importance of these concepts in healthcare technology.

What Healthcare Companies Should Know About Using AI/ML Tools

The proposed rules and discussion papers issued by the ONC and FDA make clear that, when it comes to using AI in healthcare settings, safety and effectiveness are paramount. Those goals can be achieved by ensuring the tools in use are transparent, reliable, and explainable.

While work is underway to ensure that models and tools are developed and used in ways that help deliver safe, effective, and unbiased care, industry leaders must also maintain a clear focus on protecting that work with sound security practices. The FDA has made clear, through section 524B of the Federal Food, Drug, and Cosmetic Act and its recent guidance on medical device cybersecurity, that without security we cannot have safe and effective systems. Security is a critical component of quality.


About the Author(s)

Bill Reid

Office of the CISO, Google Cloud

Bill Reid is part of Google’s Office of the Chief Information Security Officer (CISO) where he serves as a Security Advisor to Google Cloud’s Health and Life Sciences customers, providing guidance on ways to achieve their business goals while adopting a high security bar.  

Prior to Google, he was CISO and VP for National Resilience, a bio-manufacturing company, where he established and ran the Security (Physical, IT, and OT), Privacy, and Compliance organizations. During his tenure, the company grew from several dozen to over 2000 employees with operations in the US and Canada.

Before Resilience, Bill was the Security Leader for Amazon Care, a telehealth and in-person care organization established by AWS.  He built the security and privacy team as part of the launch of the service.  Also at AWS, Bill led the AWS Security Solution Architecture team, working with the company’s enterprise customers, and co-led the global security community of practice.

Earlier, Bill held various CISO roles at healthcare technology and medical device companies.  He was also at Microsoft, where he ran a Microsoft Consulting Services practice, was part of the Trustworthy Computing initiative, and was Director of Product Management for Microsoft Health Solutions Group, working on products like HealthVault, a platform for personal health management.   He began his career in healthcare administration at Group Health Cooperative (now Kaiser) where he served in a number of clinical and financial management roles. 

