Being a life sciences researcher today means being inundated with data from every direction. Every day, new studies are published, conferences held and experiments conducted.
Despite rumours that the age of the ‘blockbuster’ drug might be over, we’re currently seeing record-breaking approval numbers for new compounds. On top of this, researchers must also factor in large volumes of data from ‘new’ sources, such as wearable and connected devices or genomic sequencing. All told, lab scientists barely have time to process one source of data before another dozen come along, so it’s understandable that some feel they are drowning in the deluge.
A robotic response
In response to this onslaught, pharma companies are turning to AI to radically bring down the cost of drug discovery and development. Today, whether R&D efforts are directed towards realizing the full potential of precision medicine, identifying drug candidates for repurposing in rare disease treatment, or analyzing the safety and efficacy profiles of compounds in early R&D, AI is likely to be an essential component of any successful outcome.
AI can work at a pace far beyond any human, incorporating information from hundreds of sources at once to deliver the kind of valuable insights pharmaceutical firms are looking for. This not only accelerates R&D but also leaves researchers free to focus on what the data is telling them about their study. Yet the promise of AI has so far not been realized, owing to several key stumbling blocks: the quality of the data available for algorithms to ‘learn’ from, and the generic nature of the AI platforms on offer.
The barriers to successful AI
The first challenge facing any pharma company trying to deploy AI is how to feed it ‘good’ data. In reality, the vast majority of the data firms hold is either incomplete or unusable in its current format, meaning it cannot be used by an AI platform until it has been cleaned and reformatted. Today, data scientists spend around 80 percent of their time on tasks such as cleansing, integrating and formatting data from different sources to make it usable – an enormous undertaking given the volumes involved. As a result, firms frequently see their AI efforts frustrated by multiple data management barriers, leaving them unable to generate the insights they expected.
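As an illustration only, the kind of harmonisation work this involves might look like the sketch below: two sources record the same assay under different field names and units, some rows are incomplete, and everything must be mapped onto one canonical form before any algorithm can use it. The schemas, field names and values here are entirely invented.

```python
# Hypothetical example: harmonising assay records from two sources before
# they can feed an AI pipeline. All field names and values are invented.

RAW_SOURCES = {
    "lims": [
        {"compound": "CMP-001", "ic50_nM": "250"},
        {"compound": "CMP-002", "ic50_nM": None},       # incomplete record
    ],
    "eln": [
        {"Compound ID": "cmp-001", "IC50 (uM)": "0.25"},
        {"Compound ID": "CMP-003", "IC50 (uM)": "1.1"},
    ],
}

def clean(records):
    """Map both schemas onto one canonical form: id + IC50 in nanomolar."""
    out = []
    for rec in records:
        cid = (rec.get("compound") or rec.get("Compound ID") or "").upper()
        raw = rec.get("ic50_nM") or rec.get("IC50 (uM)")
        if not cid or raw is None:
            continue                       # drop unusable rows
        value = float(raw)
        if "IC50 (uM)" in rec:             # convert micromolar to nanomolar
            value *= 1000
        out.append({"compound": cid, "ic50_nM": value})
    return out

cleaned = clean(RAW_SOURCES["lims"] + RAW_SOURCES["eln"])
```

Even in this toy case, the two surviving records for CMP-001 only line up after case-folding the identifier and converting units – a hint of why this work consumes so much of a data scientist’s time at real scale.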
Secondly, researchers need to know whether they are using the right model for the problem they are trying to solve. Different models can produce wildly different outcomes, which affects how far end-user scientists can trust AI systems. AI cannot simply be a magic ‘black box’ that provides answers – it must ‘show its workings’. Firms need to understand fully how their AI systems work so they can be sure the predictions are actually valid. Moreover, drug development involves experimenting on living beings, so transparency about where data comes from and how it contributed to a result is essential if we are to be certain that no unnecessary risks are taken with the safety and well-being of humans or animals.
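One minimal sketch of what ‘showing its workings’ can mean in practice: a prediction that arrives together with a per-feature breakdown of how each input contributed to it, so a scientist can audit the result rather than accept it on faith. The features, weights and values below are invented for illustration, not taken from any real system.

```python
# Illustrative only: a scoring model that returns its prediction alongside
# each input's contribution. Feature names and weights are invented.

WEIGHTS = {"solubility": 0.8, "toxicity_flag": -2.0, "binding_affinity": 1.5}

def predict_with_explanation(features):
    """Return a score plus the contribution each feature made to it."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"solubility": 0.6, "toxicity_flag": 1.0, "binding_affinity": 0.9}
)
# 'why' makes the result auditable: a reviewer can see, for instance,
# that the toxicity flag dragged the score down, and trace that back
# to the data it came from.
```

The same principle scales up to real models through attribution techniques, but the point is unchanged: a prediction without a traceable rationale is hard to trust when safety is at stake.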
Why life sciences is different
These two challenges will significantly impact which AI system firms choose to deploy. AI is currently being implemented in virtually every industry, so it isn’t surprising that there’s been a rush to create platforms capable of ‘doing it all’. The marketing of these one-size-fits-all platforms suggests they can intelligently process any form of data to produce valuable insights, handling drug discovery and call-centre complaints alike. But scientific data is far more complex, and generalist platforms have, to date, proved limited when handling life sciences data because they haven’t been designed with these challenges in mind.
Given the cost of implementing a new AI system, pharma firms should invest in purpose-built platforms designed to handle life sciences data from a range of sources – internal company databases, LIMS, ELNs, instrument data and scholarly literature from numerous disciplines – and to combine them intelligently. This allows researchers to build more accurate predictive models across the drug development chain, including drug efficacy studies, risk-benefit analyses and pharmacovigilance. And because such a platform is attuned to the nuances of scientific research, these connected data streams can be mined productively to draw out the right inferences and uncover potentially lucrative new research avenues.
There is no doubt that AI is integral to the future of pharmaceutical R&D. There is simply too much data for human researchers to tackle alone, and automated solutions will need to be employed if we’re to continue seeing innovative new therapies brought to market. However, AI is not, and will never be, a magic bullet. Unless pharma companies are willing to invest the time and resources required to provide specialist platforms with high-quality, consistent data, AI-fuelled drug discovery will remain a pipe dream.
Tim Miller, Vice President of Life Sciences Platform Solutions at Elsevier
Tim Miller is Vice President of Life Sciences Platform Solutions at Elsevier, a group dedicated to establishing a common data and analytics framework for current and future products and services. He has extensive experience in product development in the life sciences and intellectual property domains, having developed a range of products during his 35 years with Thomson Reuters and spent a year embedded with the Clinical Oncology team at AstraZeneca.
Elsevier is a global information analytics business that helps institutions and professionals progress science, advance healthcare and improve performance for the benefit of humanity. Elsevier provides digital solutions and tools in the areas of strategic research management, R&D performance, clinical decision support, and professional education.