FDA Issues Comprehensive Guidance on the Use of AI Models for Regulatory Decision-Making in Drug Development

The U.S. Food and Drug Administration (FDA) has issued a new guidance document clarifying the use of artificial intelligence (AI) models in regulatory decision-making for drug products. The guidance is particularly relevant where AI models are used to generate data or information supporting decisions about the safety, effectiveness, or quality of drugs.

The core of the guidance is a risk-based credibility assessment framework, which is designed to help sponsors ensure the reliability of AI model outputs. This framework emphasizes the importance of establishing credibility through careful planning, organization, and documentation of the AI model’s performance. Key factors such as oversight levels, performance criteria, risk mitigation strategies, and the type of documentation required are all tailored to the risk level of the AI model and its intended context of use (COU).

Notably, the guidance clarifies that it does not address AI applications related to drug discovery or operational efficiencies that do not directly impact patient safety or the reliability of clinical study results. It also encourages sponsors to engage with the FDA early in their development process to assess whether their AI use aligns with the scope of this guidance.

For those seeking a more detailed understanding, the full FDA guidance document is available below.
