What the FDA and ONC Have Said About AI in Healthcare
US government organizations responsible for making sure healthcare products are safe and effective have proposed rules and are soliciting industry feedback regarding artificial intelligence and machine learning.
In the United States, the Food and Drug Administration (FDA) is responsible for ensuring that healthcare products are safe and effective. Similarly, the Office of the National Coordinator for Health Information Technology (ONC) leads the government's health IT efforts and promotes standards-based information exchange in the healthcare industry.
As new technologies like AI and machine learning play greater roles in healthcare, these organizations are issuing proposed guidelines and inviting public feedback regarding how these technologies will impact patient safety and treatment efficacy.
AI Transparency in Healthcare
In science and healthcare, we want to know why something works and how it works; science is about explainability and understanding. With that in mind, it makes sense to expect transparency from the tools we use to provide care. It is vital that we can understand both how those tools are created and what they recommend.
The FDA has released a discussion paper and is collecting feedback from the industry to inform the regulatory framework it will apply to healthcare AI. The discussion paper focuses on three main areas:
Human-led governance, accountability, and transparency. The FDA stresses the value of having humans involved at each stage of AI model development and deployment to aid in accountability and ensure the model is adhering to legal and ethical guidelines.
Quality, reliability, and representativeness of data. Since an AI model is only as good as the data used to train it, the quality and appropriateness of that training data are vital. Consistent inputs should produce consistent results, and the sample data should be sufficiently similar to the intended patient population (a simple representativeness check is sketched after this list).
Model development, performance, monitoring, and validation. There are two primary factors to consider when evaluating the risk inherent in using an AI model in healthcare decision-making: the model influence (how much weight is given to the model's recommendation in making a decision) and the decision consequence (how significant the potential consequences of an incorrect decision may be). A sketch of how these two factors might be combined into a risk tier follows the next paragraph.
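To make the representativeness point concrete, here is a minimal Python sketch of one way to compare subgroup shares in a training set against the intended patient population. The function name, the subgroup labels, and the idea of using the largest share gap as a flag are illustrative assumptions, not requirements from the FDA discussion paper.

```python
from collections import Counter

def representativeness_gap(training_labels, population_shares):
    """Return the largest absolute gap between a subgroup's share of
    the training data and its share of the target population.

    training_labels: one subgroup label per training record
                     (e.g., an age band, sex, or care site).
    population_shares: expected share of each subgroup in the
                       intended patient population (sums to 1.0).
    """
    counts = Counter(training_labels)
    total = sum(counts.values())
    return max(
        abs(counts.get(group, 0) / total - share)
        for group, share in population_shares.items()
    )

# Illustrative use: training data drawn 80/20 from a population
# that is actually split 50/50 across two age bands.
gap = representativeness_gap(
    ["over_65"] * 80 + ["under_65"] * 20,
    {"over_65": 0.5, "under_65": 0.5},
)
print(f"largest subgroup gap: {gap:.2f}")  # 0.30 -- worth flagging
```

A real review would examine far more dimensions than a single share gap, but even a check this simple turns "sufficiently similar to the intended patient population" into something a team can measure and track.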
The FDA has also discussed many specific uses of AI in healthcare in more detail, but the three areas of focus outlined above provide a basic framework for considering and evaluating AI tools in the field.
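To illustrate the model-influence and decision-consequence framing, the sketch below tiers a hypothetical tool on those two axes. The three-level scale, the multiplicative score, and the cutoffs are illustrative assumptions; the FDA paper describes the two dimensions but does not prescribe a scoring scheme.

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def risk_tier(model_influence: Level, decision_consequence: Level) -> str:
    """Combine the two FDA risk dimensions into a coarse tier.

    The multiplicative score and cutoffs are illustrative; a real
    program would calibrate them to its own risk appetite.
    """
    score = model_influence * decision_consequence
    if score >= 6:
        return "high"    # e.g., sole input to an irreversible decision
    if score >= 3:
        return "medium"  # e.g., advisory input to a serious decision
    return "low"         # e.g., one of many inputs to a reversible one

# A model whose output is the sole basis for a high-stakes treatment
# decision lands in the highest tier; a low-influence model feeding a
# low-stakes decision does not.
print(risk_tier(Level.HIGH, Level.HIGH))   # high
print(risk_tier(Level.LOW, Level.MEDIUM))  # low
```

The value of a tiering exercise like this lies less in the exact cutoffs than in forcing both questions, how much the model sways the decision and how bad a wrong decision would be, to be answered explicitly for every tool.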
New ONC Rules for DSIs
The ONC is responsible for certifying electronic health records (EHRs), which are vital to the way the healthcare industry operates. A newly proposed rule known as ONC HTI-1 contains a number of provisions designed to modernize the agency's approach to EHRs and healthcare data management. Though it doesn't address AI/ML explicitly, healthcare organizations should understand that its changes to the rules around decision support interventions (DSIs) have significant implications for these new technologies.
The proposed rules define DSIs as "technology intended to support decision-making based on algorithms or models that derive relationships from training or example data and then are used to produce an output or outputs related to, but not limited to, prediction, classification, recommendation, evaluation, or analysis." The new rules specifically set out to make sure that models being used in this way are "fair, appropriate, valid, effective and safe" — or FAVES for short. This would be achieved through increased data transparency requirements and public disclosure regarding how certified health IT developers manage risks related to DSI models.
The ONC is also deeply concerned with equity and fairness: model bias should be avoided, and training data should be fairly and broadly sampled and appropriate for the population the model is meant to serve. The agency likewise wants clarity and transparency about which inputs a model uses to make a recommendation. These concerns align closely with the FDA's focus on safety and effectiveness, underscoring how central these concepts are to healthcare technology.
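As one illustration of what that transparency could look like in practice, here is a minimal sketch of a disclosure record a health IT developer might publish alongside a DSI model. The dataclass and its field names are hypothetical, chosen for illustration; they are not fields mandated by the proposed rule.

```python
from dataclasses import dataclass, field

@dataclass
class DSIDisclosure:
    """Hypothetical transparency record for a decision support
    intervention; field names are assumed, not taken from HTI-1."""
    model_name: str
    intended_use: str
    inputs: list[str]          # data elements the model consumes
    training_population: str   # who the training data represents
    known_limitations: list[str] = field(default_factory=list)

record = DSIDisclosure(
    model_name="sepsis-risk-v2",
    intended_use="Flag adult inpatients at elevated risk of sepsis",
    inputs=["vital signs", "lab results", "age", "comorbidities"],
    training_population="Adult inpatients at three US academic centers",
    known_limitations=["Not validated for pediatric patients"],
)
print(record.inputs)  # exactly what the model looks at, stated up front
```

Publishing even a simple record like this answers the two questions regulators keep returning to: what the model consumes, and whom its training data actually represents.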
What Healthcare Companies Should Know About Using AI/ML Tools
The proposed rules and discussion papers issued by the ONC and FDA make it clear that, when it comes to using AI in healthcare settings, safety and effectiveness are paramount. Those goals can be achieved by making sure the tools used are transparent, reliable, and explainable.
While work is underway to ensure that models and tools are developed and used in ways that deliver safe, effective, and unbiased care, industry leaders must also maintain a clear focus on protecting that work with sound security practices. It's clear from section 524B of the Federal Food, Drug, and Cosmetic Act and the FDA's recent guidance on medical devices that without security, we cannot have safe and effective systems. Security is a critical component of quality.