Ten contributions to an efficient framework for regulating the use of artificial intelligence in the medicinal product lifecycle

Contributions by Faus Moliner to the EMA’s Reflection Paper on the use of AI in the lifecycle of medicines

Claudia Gonzalo and Laia Rull

Capsulas Nº 247

The European Medicines Agency (EMA) has opened a public consultation on a “reflection paper” on the use of artificial intelligence (AI) in the development and regulation of human and veterinary medicines. Our feedback is as follows:

General considerations: transparency

When it comes to AI and machine learning (ML), it is essential to ensure the transparency and intelligibility of the systems used. One of the most appropriate tools to achieve this is for companies to maintain an adequate legal documentation framework that addresses compliance, risk assessment, data protection and mitigation plans. This need may be more evident in high-risk contexts, but even where AI uses are categorised as lower risk, this documentation work should be considered a good practice worth promoting.

A common taxonomy

It is crucial to work towards a common taxonomy, at least on basic theoretical principles, to avoid situations in which, for example, a system is considered AI by the EMA but not by other bodies or institutions.

In this regard, the EMA should adopt the same definition of an AI system as is adopted in the AI Act. Although at the time of writing the regulation is still at the technical work stage, it appears that the definition of AI system will be the same as that proposed by the OECD. The OECD has defined AI as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.

Clinical trials

In general, we support the EMA’s approach on the need for AI systems used in clinical trials to comply with Good Clinical Practice (ICH Guideline E6 on Good Clinical Practice). In our view, such general guidelines can coexist with specific GCP advice for the most common uses of AI/ML in the context of a clinical trial. In addition, it would be advisable for the EMA to make it explicit that, when AI/ML systems are used for the conduct of clinical trials, data quality standards and metrics should be well documented (the dimensions they refer to, how reliable data are, their mutability, potential for bias, etc.).
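
By way of illustration, the following is a minimal sketch, in Python, of the kind of structured documentation of data quality metrics we have in mind; the field names and values are hypothetical and would need to be adapted to each trial.

```python
from dataclasses import dataclass, field

@dataclass
class DataQualityRecord:
    """Hypothetical record documenting the quality of a dataset
    feeding an AI/ML system used in a clinical trial."""
    dataset_name: str
    dimensions: list[str]          # quality dimensions the metrics refer to
    reliability_score: float       # 0.0-1.0; the estimation method should also be documented
    is_mutable: bool               # whether records can change after collection
    known_bias_risks: list[str] = field(default_factory=list)
    mitigation_plan: str = ""

# Illustrative entry for a dataset of patient-submitted data
record = DataQualityRecord(
    dataset_name="ecoa_submissions_v2",
    dimensions=["completeness", "accuracy", "timeliness"],
    reliability_score=0.92,
    is_mutable=True,
    known_bias_risks=["under-representation of patients aged 75+"],
    mitigation_plan="Quarterly audit of demographic coverage and re-weighting.",
)
```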

Regarding the specific uses of AI in the context of clinical trials, we have observed that AI systems are often used for patient recruitment. Given that the European Health Data Space (“EHDS”) is in the pipeline, there is a need to ensure ethical use of data and to have mechanisms in place to assess the possible effects of recruitment biases that may derive from geographic, demographic or personal characteristics. This will be particularly relevant if the EHDS eventually provides for opt-in or opt-out clauses for secondary uses, as studies suggest that these rights are used unequally across different social groups.
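
As a purely illustrative sketch of the kind of mechanism we have in mind, the following Python snippet compares the composition of a recruited cohort against a reference population; the group labels, figures and tolerance threshold are hypothetical.

```python
def representation_gaps(cohort: dict[str, int],
                        reference: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return the groups whose share in the recruited cohort deviates
    from the reference population share by more than `tolerance`."""
    total = sum(cohort.values())
    gaps = {}
    for group, expected_share in reference.items():
        observed_share = cohort.get(group, 0) / total
        gap = observed_share - expected_share
        if abs(gap) > tolerance:
            gaps[group] = round(gap, 3)
    return gaps

# Illustrative figures: rural patients are under-recruited by 12 points
cohort = {"urban": 820, "rural": 180}          # recruited patients per group
reference = {"urban": 0.70, "rural": 0.30}     # expected population shares
print(representation_gaps(cohort, reference))  # {'urban': 0.12, 'rural': -0.12}
```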

AI systems can also be used to predict potential adverse effects or interactions, or to monitor safety during the conduct of the clinical trial. The EMA’s preliminary analysis differentiates between early-phase clinical trials and pivotal clinical trials. In our view, this approach could be improved if risk allocation depended not only on the type of clinical trial, but also on the risk inherent in the intended use. For example, the risk of using AI/ML to optimise or manage protocol design may vary depending on factors such as the use of real-world evidence to re-evaluate the correctness of the algorithm, or the intention to apply datasets to data-limited populations (e.g. paediatric populations or rare diseases).

In addition, the use of AI/ML can help overcome some of the obstacles of decentralised clinical trials, such as data collection and processing. Since patients are located off-site, they must regularly and consciously submit their own participation data, which can lead to compliance issues and data errors. Sponsors, CROs and medical research institutions can leverage AI to address these problems in a number of ways. For example, they can create algorithms that analyse patient data and make decisions aimed at the desired outcome, in this case consistent patient compliance, and AI can generate and optimally time notifications that prompt patients to complete electronic clinical outcome assessments (eCOAs), producing a more reliable data set. AI programmes can also support patients in the process of submitting their data by analysing data quality in advance. For example, an AI programme can evaluate an image to check whether it meets the requirements of the clinical trial and, if not, ask the patient to retake it with recommendations on image quality, such as lighting or angle. This limits the number of insufficient or substandard submissions, thereby reducing data processing errors.
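
To make the image example concrete, here is a minimal sketch of such a pre-submission check, assuming the Pillow imaging library is available; the thresholds and feedback messages are illustrative, not drawn from any actual trial protocol.

```python
from PIL import Image, ImageStat  # assumes Pillow is installed

def check_image_quality(path: str,
                        min_brightness: float = 40.0,
                        max_brightness: float = 215.0,
                        min_contrast: float = 20.0) -> list[str]:
    """Return human-readable recommendations; an empty list means the image passes."""
    img = Image.open(path).convert("L")   # grayscale, for a simple luminance analysis
    stat = ImageStat.Stat(img)
    brightness, contrast = stat.mean[0], stat.stddev[0]

    feedback = []
    if brightness < min_brightness:
        feedback.append("Image too dark: please retake with more lighting.")
    elif brightness > max_brightness:
        feedback.append("Image overexposed: please reduce lighting or avoid glare.")
    if contrast < min_contrast:
        feedback.append("Image lacks detail: please adjust the angle or focus and retake.")
    return feedback

# Usage: prompt the patient to retake the image until the check passes, e.g.
# for message in check_image_quality("submission.jpg"):
#     print(message)
```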

Upgrade and release management

AI systems are constantly evolving. The process of incorporating upgrades and new versions of existing systems should be feasible, balancing the need to maintain patient safety against the risk of unduly and unnecessarily hampering innovation.

It is understandable that the EMA does not wish to regulate this aspect in detail, given the current pace of technological change. We suggest developing a framework for managing upgrades to existing AI systems that establishes the level of documentary and training support required, as well as the procedural aspects from a regulatory point of view, in line with a risk-based approach.

Product information

The most common risk with AI-generated product information (e.g. a package leaflet) is that the models may produce sentences or information that are grammatically plausible but wrong in terms of content.

Our recommendation is that the work of the EMA should go hand in hand with the development of the electronic product information (“ePI”) foreseen in the proposed revision of the EU pharmaceutical legislation.

For example, if AI/ML models were used to process changes in the safety profile of medicines or to adapt product information to the patient’s profile, risks of errors, biases and misinformation could arise, with likely public health consequences. We therefore suggest that the EMA explore best practices in prevention and work closely with healthcare professionals in this area.

Environmental management

The manufacture and use of medicinal products have a significant impact on the environment. In this respect, the review of the EU’s general pharmaceutical legislation has identified gaps and proposes to strengthen the obligations of manufacturers. We believe that AI can be another strong ally in this area, for example in the identification and calculation of environmental risk impacts or in monitoring. The EMA document should therefore devote a section to how AI can support the reduction of the environmental impact of medicines.

Collaboration between authorities and with the medical devices industry

In the health sector, we must take into account that most uses of AI will be embedded in medical devices categorised as high risk under the future AI Act. We suggest considering that: (i) the need for transparency should be balanced against the fact that medical devices do not enjoy the regulatory data protection afforded to medicinal products, so regulation must be careful not to deter research and innovation through what could be perceived as insufficient protection of commercially or technically sensitive data, such as certain data relating to algorithm performance; and (ii) as far as possible, the conformity assessment of medical devices and the assessment of high-risk uses under the AI Act should be carried out jointly in order to avoid overlaps and/or divergences.

Liability regime

At the time of writing, the Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) is in the pipeline.

Although this is a horizontal Directive, i.e. not specific to the health sciences sector, the EMA should consider how the proposed liability regime for harm caused by AI-enabled products and services could affect the lifecycle of the medicinal product. This assessment will need to be reviewed and, if necessary, updated as the AI Liability Directive progresses.

Towards an integrated health ecosystem

Solutions that promote a coordinated approach to healthcare while helping patients manage their own health are growing rapidly and have real potential to advance healthcare. The EMA’s reflection paper provides considerations on the use of AI/ML in the lifecycle of medicines, including the post-authorisation phase where this care dimension could fit in.

Some legislative proposals are starting to take a similar approach. For example, one of the aspects being considered in the EHDS is the use of data from wearables and wellness apps collected in the patient’s daily life, both for primary and secondary uses. Similarly, the EMA should consider the potential of real-world data in patient monitoring. These uses carry different levels of risk: using an AI/ML system to inform or drive clinical management (low risk) is not the same as using it to treat or diagnose (high risk). The final version of the EMA reflection paper could provide recommendations grouping the most common types of interventions.

Furthermore, AI systems in this context may play a highly relevant role in the early detection of supply problems, through greater use of available data and algorithms.

The importance of AI systems to support the patient journey

At first sight, addressing the use of AI systems throughout the patient journey might appear closely linked to healthcare provision and, therefore, to fall within the competence of Member States under Article 168 of the Treaty on the Functioning of the EU (TFEU).

However, the use of AI systems could help inform relevant decisions on adherence to treatment, changes in interventions or dosing. Given this direct impact on the lifecycle of medicines, we suggest that the EMA further explore this area and provide recommendations on how to integrate these data into post-authorisation decisions.
