Everyone is talking about AI: What is it? Will it take our jobs? Will it save the world or will it ruin everything?

So when I was lucky enough to be invited to a panel discussion called “Regulating AI in Pharmacovigilance”, I jumped at the chance. This is a topic of key relevance to our clients right now, and one I had been feeling rather bewildered by.

We heard from visionary leaders Felix Arellano and Andrew Bate, plus experienced fellow Pharmacovigilance professionals, including Julia Appelskog, who literally wrote the book on this!

Discussion points included:

The biggest issue in PhV right now is recruitment and retention of talent.

AI could help with this by taking on some of the time-consuming, high-volume work. This could free the talented individuals within PhV organisations to apply their brainpower to more strategic activities.

Human oversight remains critical, and the outputs from AI still require review by a qualified individual. But several examples were cited where AI could perform some tasks more efficiently than a human, including reviewing the literature to identify relevant safety data, language translation, MedDRA coding, and processing new or follow-up case reports, which would then be presented for review by one of us.

We have been using Bayesian methods in PhV for many years.

This was cutting edge when it was introduced, and many were wary of it at the time, but this machine learning approach became the accepted method for detecting signals that we now rely on.
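For readers curious what those Bayesian methods look like in practice, here is a minimal sketch of a shrunk Information Component (IC), in the spirit of disproportionality methods such as BCPNN. The function name, the +0.5 shrinkage constant, and the example counts are my own illustrative assumptions, not any agency's actual algorithm:

```python
import math

def information_component(n_drug_event: int, n_drug: int, n_event: int, n_total: int) -> float:
    """Shrunk IC = log2((observed + 0.5) / (expected + 0.5)).

    observed: reports mentioning both the drug and the adverse event
    expected: reports expected if drug and event were independent,
              i.e. n_drug * n_event / n_total
    The +0.5 terms shrink the estimate towards zero when counts are small,
    damping spurious signals from sparse data.
    """
    expected = n_drug * n_event / n_total
    return math.log2((n_drug_event + 0.5) / (expected + 0.5))

# Hypothetical example: 20 reports pair the drug with the event,
# against 5 expected by chance, giving a clearly positive IC.
ic = information_component(n_drug_event=20, n_drug=1000, n_event=500, n_total=100_000)
print(round(ic, 2))
```

A markedly positive IC (well above zero, once uncertainty is accounted for) flags a drug–event pair for human review; the Bayesian shrinkage is what keeps rare, noisy combinations from flooding assessors with false alarms.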

Ongoing regulatory agency work

Agencies are assessing the risks and potential impacts of AI. For example, at the MHRA Symposium in February this year, one of the topics discussed was AI and the MHRA's work with Health Canada in this area.

In March this year, FDA published “Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together,” which represents the FDA's coordinated approach to AI and builds on its previous action plan.

Many other agencies around the world have begun to develop their own versions of GVP, following in the EMA's footsteps, and will be incorporating their ideas on AI into their country-specific regulations.

However, this leads to variability between regions, with consequent increases in the cost of developing new medical interventions that ultimately translate into higher prices for healthcare systems.

CIOMS XIV Working Group

The objectives of this Working Group per their website (CIOMS XIV) are “to establish and promote principles and guidance for the use of artificial intelligence or intelligence augmentation in the field of pharmacovigilance building on and complementing the broader initiatives underway”. It’s great that this group are already discussing the topic and we hope to see some outputs over the next couple of years.

But even as you read this, the technology is advancing at light speed, and one of the key challenges is designing guidelines that can keep pace with all the different types of AI available.

I remain optimistic that the urgent need to regulate AI will bring everyone back to the table and lead to harmonisation.

This panel discussion gave me a great deal of insight and understanding, building on the excellent presentation on AI by Trishan Panch I attended at the Faculty Symposium last year. But the more I learn about AI the more I realise how much there is to learn!