Over the past several years, pharmaceutical companies have been steadily integrating artificial intelligence (AI) into many aspects of clinical development. Today, AI’s impact is being felt from the bench to the clinic and beyond.
According to a survey by the Tufts Center for the Study of Drug Development (Tufts CSDD), one-third of respondents report partial or full implementation of AI to support clinical trial planning, design, execution, and regulatory submission. The same survey found that using AI yields an average time savings of 18% across clinical trial implementation tasks and activities.
Since 2015, 75 AI-discovered molecules have entered the clinic, and 67 of these were in ongoing trials as of 2023. A watershed moment occurred in 2023 when Insilico Medicine’s candidate for treating idiopathic pulmonary fibrosis (IPF), INS018_055, became the first drug discovered and designed by generative AI to enter Phase II clinical trials.
These examples only scratch the surface of AI’s benefits to pharma. McKinsey identified 12 use cases that illustrate the ability of AI to greatly improve quality, speed, and efficiency in clinical development. These use cases showed lower costs, accelerated enrollment, and higher success rates as a result of incorporating AI.
Using AI with care
Those surveyed by Tufts CSDD say that more successful implementations and use cases will help drive AI adoption. Still, success stories alone do not resolve the unique challenges that have hindered the adoption of AI in pharma.
One of the biggest obstacles is the fear that AI could expose sensitive patient data. Companies are addressing this risk in part through anonymization and de-identification, data masking, and pseudonymization, which remove personally identifiable information from datasets before they are used in AI applications. In addition, models trained on de-identified data are often encrypted or otherwise protected to further safeguard patient-level information. For instance, a sensitive data point such as date of birth can be replaced with a coarser proxy such as age.
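A minimal sketch of that proxy-and-pseudonym approach might look like the following; the dataset, column names, and salt are purely illustrative, not any company's actual pipeline.

```python
import hashlib
from datetime import date

import pandas as pd

# Hypothetical patient records; every column name and value is illustrative.
records = pd.DataFrame({
    "patient_id": ["P001", "P002"],
    "date_of_birth": ["1958-03-14", "1982-11-02"],
    "diagnosis": ["IPF", "IPF"],
})

def deidentify(df: pd.DataFrame, as_of: date, salt: str) -> pd.DataFrame:
    """Swap direct identifiers for coarser proxies before data reach an AI pipeline."""
    out = df.copy()
    dob = pd.to_datetime(out["date_of_birth"]).dt.date
    # Proxy: age in whole years instead of the exact date of birth.
    out["age"] = [
        as_of.year - d.year - ((as_of.month, as_of.day) < (d.month, d.day)) for d in dob
    ]
    out = out.drop(columns=["date_of_birth"])
    # Pseudonymize the identifier so records remain linkable without exposing the raw ID.
    out["patient_id"] = [
        hashlib.sha256((salt + pid).encode()).hexdigest()[:12] for pid in out["patient_id"]
    ]
    return out

print(deidentify(records, as_of=date(2025, 1, 1), salt="project-specific-secret"))
```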
Another concern is the varying quality of the data used to train AI-driven models. In clinical development, a model trained on bad data can increase forecasting errors and introduce further delays in trial timelines. It is therefore vital that companies ensure data come from trusted sources with strong data management practices and undergo the required quality assurance checks and transformations before being used to train AI models.
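That kind of pre-training quality gate can start very simply. The sketch below assumes a pandas table with hypothetical column names (subject_id, enrollment_date, study_start) and refuses data that fails basic completeness, validity, and uniqueness checks.

```python
import pandas as pd

def quality_gate(df: pd.DataFrame) -> pd.DataFrame:
    """Run basic pre-training checks and refuse data that would mislead a model.

    Assumes hypothetical columns: subject_id, enrollment_date, study_start.
    """
    issues = []

    # Completeness: flag columns with excessive missing values.
    missing = df.isna().mean()
    issues += [f"{col}: {pct:.0%} missing" for col, pct in missing.items() if pct > 0.10]

    # Validity: enrollment should not precede the study start date.
    bad_dates = pd.to_datetime(df["enrollment_date"]) < pd.to_datetime(df["study_start"])
    if bad_dates.any():
        issues.append(f"{int(bad_dates.sum())} records enrolled before study start")

    # Uniqueness: duplicate subjects inflate the apparent sample size.
    if df["subject_id"].duplicated().any():
        issues.append("duplicate subject_id values")

    if issues:
        raise ValueError("Data failed quality gate: " + "; ".join(issues))
    return df
```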
Human bias in data occurs when one answer or outcome is intentionally or unintentionally favored over another. If an AI model is built with or trained on biased data, it can perpetuate those biases in its outputs and worsen existing inequities. In one study, when researchers deliberately trained an AI assistant on biased data, the accuracy of the assistant’s diagnoses decreased by 11.3%. IBM says awareness of bias must be built into each data processing step, and that ongoing monitoring and testing with real-world data can catch and correct bias before it becomes embedded in the AI model. For instance, data points such as ethnicity or race should be used to filter eligible participants only when the protocol is restrictive for epidemiological reasons; they should not be used as predictors of clinical trial operational metrics, which should instead rest on objective, measurable, and epidemiologically relevant criteria such as diagnoses.
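One simple form of that ongoing monitoring is auditing a model's accuracy across demographic subgroups that are deliberately kept out of its feature set. The sketch below is illustrative only; the column names and the 5% tolerance are assumptions for the example, not a standard.

```python
import pandas as pd

def audit_subgroup_accuracy(eval_df: pd.DataFrame, group_col: str,
                            tolerance: float = 0.05) -> pd.Series:
    """Compare prediction accuracy across subgroups never used as model features.

    Expects hypothetical columns 'y_true' and 'y_pred', plus a demographic column
    (e.g. self-reported race or ethnicity) retained only for auditing, not prediction.
    """
    correct = eval_df["y_true"] == eval_df["y_pred"]
    accuracy_by_group = correct.groupby(eval_df[group_col]).mean()
    gap = accuracy_by_group.max() - accuracy_by_group.min()
    if gap > tolerance:
        # A large gap suggests bias baked into the training data or the model.
        print(f"WARNING: {gap:.1%} accuracy gap across {group_col} subgroups")
    return accuracy_by_group
```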
Pharma companies also need to ensure their compliance tools are up to par so their AI applications stay within regulatory bounds. This can be challenging, given that some regulatory guidelines, such as the FDA’s “Good Machine Learning Practice,” lag behind the rapid advancement of AI, particularly the now-pervasive use of generative AI. Up-to-date, comprehensive compliance tools for assessing, shaping, and monitoring data and AI models will help give AI sturdy regulatory guardrails.
Smooth skies ahead
Pharma will continue to embrace AI in 2025, with companies likely to adopt use at every phase of the development process. According to Fairfield Market Research projections, the global market for AI in pharma will reach revenue of more than $4.45 billion by the end of 2030, with a robust compound annual growth rate (CAGR) of 19.1% from 2023 to 2030.
This deployment of AI will likely be encouraged by the newly ensconced Trump administration, which has made its stance on AI clear from the outset. Shortly after taking office, President Trump helped announce a $500 billion joint venture between OpenAI, Oracle, and SoftBank that will invest in AI infrastructure. During the announcement, Oracle CEO Larry Ellison suggested that part of the project will be linked with digital health records and touted AI’s potential for developing new treatments for diseases such as cancer.
Some of the ways pharma companies will use AI technologies in 2025 include screening compounds and assessing their epidemiological suitability and repurposing potential, planning and optimizing clinical trial design and execution, improving diversity and inclusion in clinical trial enrollment, and streamlining and enhancing the regulatory disclosure process.
In clinical trial planning and optimization, this includes predictive and prescriptive modeling, identifying drug candidates and repositioning approved drugs to treat other indications, and designing and optimizing clinical trial protocols. Protocol design in particular is seen as a promising frontier for AI in drug development, given the time and effort required to write trial protocols and the delays that protocol amendments introduce into clinical trial timelines.
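As a rough illustration of what predictive modeling for trial planning can look like, the sketch below forecasts site-level enrollment from historical performance; every feature, number, and the choice of a gradient-boosting model are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative historical data: one row per site from past trials.
# Features: eligible patients in the catchment area, prior trials run, research staff.
X_history = np.array([
    [1200, 4, 10],
    [300, 1, 3],
    [800, 6, 7],
    [2500, 2, 12],
])
# Target: patients enrolled per month at that site in comparable past trials.
y_history = np.array([14.0, 2.5, 11.0, 18.0])

model = GradientBoostingRegressor(random_state=0)
model.fit(X_history, y_history)

# Forecast monthly enrollment for two candidate sites in a planned trial,
# so the protocol's site count and timeline can be set accordingly.
candidate_sites = np.array([[900, 3, 6], [1800, 5, 9]])
print(model.predict(candidate_sites))
```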
Companies like Merck & Co. are already using AI in development workflows, such as assisting with and accelerating medical writing, and intend to leverage AI agents to automate repetitive tasks such as data cleaning and preliminary analyses.
In clinical trial enrollment, more pharma companies will leverage AI to improve diversity and inclusion, especially in trials that are diverse by design, and to improve the odds of finding patients who fit a protocol’s inclusion/exclusion criteria, accelerating clinical trial timelines.
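At its simplest, that matching step filters de-identified patient summaries against a protocol's criteria and ranks the providers that already treat the most eligible patients, as the sketch below illustrates; all fields, values, and thresholds in it are hypothetical.

```python
import pandas as pd

# Hypothetical de-identified patient summaries, grouped by treating provider.
patients = pd.DataFrame({
    "provider_id": ["A", "A", "B", "C", "C"],
    "age": [64, 58, 71, 49, 62],
    "diagnosis": ["IPF", "IPF", "COPD", "IPF", "IPF"],
    "fvc_percent_predicted": [68, 74, 55, 81, 40],
})

# Illustrative inclusion criteria for a planned IPF trial.
eligible = patients[
    (patients["diagnosis"] == "IPF")
    & patients["age"].between(40, 80)
    & patients["fvc_percent_predicted"].between(45, 90)
]

# Rank providers by how many eligible patients they already treat,
# so enrollment outreach can be prioritized.
print(eligible.groupby("provider_id").size().sort_values(ascending=False))
```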
Johnson & Johnson, for example, rather than waiting for patients to come to its trials, is using AI to locate clinical research sites and investigators with eligible and suitable patients who could be helped by the J&J drugs being studied. J&J is also using data and AI to diversify clinical trials by finding providers where diverse patients are more likely to be treated and prioritizing the enrollment of eligible patients from those providers.

Use of AI in regulatory applications is another area likely to expand in 2025. Pfizer plans to use ML-driven analyses to identify which requests for information government regulators may have and to prepare answers to those queries ahead of time, saving weeks of dialogue. It is also exploring the use of AI to automate the production of the many reports and documents required by regulators.