Using Machine Learning to Make Drug Discovery More Efficient
An introduction to how machine learning and AI can help pharmaceutical companies discover new drugs, and how, given enough data, machine learning can assist with real-world problem solving.
New drugs typically take 12 to 14 years to reach the market, and a 2014 report found that the average cost of bringing a new drug to market had ballooned to a whopping $2.6 billion.

It's a topic I've covered before, with a study published earlier this year highlighting how automation could reduce the cost of drug discovery by approximately 70%.

It's an approach that a number of companies are taking to market. For instance, London-based start-up BenevolentAI uses AI to look for patterns in the scientific literature. The company has already identified two potential drug targets for Alzheimer's that have attracted the attention of pharmaceutical companies.
Automating Drug Discovery
A nice example of what could be possible comes from a recent study published in Cell Chemical Biology. The study describes a big-data approach to detecting toxic side effects that would prohibit a drug from being used in humans, before the drug reaches the expensive clinical trial stage.

The approach is appealing because, rather than looking solely at a drug's molecular structure to assess its viability, the researchers look at a range of features related to how the drug binds to its molecular targets.

"We looked more broadly at drug molecule features that drug developers thought were unimportant in predicting drug safety in the past. Then we let the data speak for itself," the authors say.

The approach, known as PrOCTOR, takes its inspiration from the Moneyball method popularized in baseball. The researchers analyze each drug using 48 different features to gauge its safety for clinical use, and the system does all of this automatically using machine learning.
Training the Machine
The algorithm was trained on hundreds of drugs that had already been approved by the FDA, together with drugs that had failed clinical trials because of toxicity problems.

This training allowed the team to develop a so-called PrOCTOR score that distinguishes the drugs approved by the FDA from those that failed on toxicity grounds.
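To make the idea concrete, here is a minimal sketch of training a classifier of this kind. Everything in it is an assumption for illustration: the feature values and labels are synthetic (not the study's data), and the random-forest model is just one plausible choice of learner, not necessarily the one the researchers used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_drugs, n_features = 500, 48  # 48 features per drug, as in the study

# Synthetic stand-ins for drug features (e.g. structural and
# target-binding properties); these are NOT real descriptors.
X = rng.normal(size=(n_drugs, n_features))

# Synthetic label: 1 = FDA-approved, 0 = failed trials for toxicity.
# Here the label depends on a few features plus noise, so the model
# has a real (if artificial) signal to learn.
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_drugs) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# A score analogous in spirit to the PrOCTOR score: how strongly a
# held-out drug resembles the approved class rather than the toxic one.
probs = model.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Given the labeled examples, the model learns which combinations of features separate approved drugs from toxic failures, and the per-drug probability can then be used to screen new candidates before trials begin.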
"We were able to find several features that led to a very predictive model," the team say. "Hopefully this approach could be used to determine whether it's worth pursuing a drug prior to starting human trials."

They also hope their method will be used for post-approval surveillance of drugs that have received FDA approval but still carry a risk of toxicity. For instance, one diabetes drug on the market was flagged by PrOCTOR, and further investigation revealed that it had already been withdrawn from the market in Europe.
This automated approach has a huge amount of potential, both to improve the drug discovery process and to make it cheaper and more effective. With roughly 90% of drugs failing to make it through the trial process, anything that can improve matters is to be welcomed.

Central to such methods, however, is having the data available to run the algorithms. Something like 50% of clinical trial results remain unpublished, and this puts a significant hurdle in the way of big-data approaches like those of PrOCTOR and BenevolentAI.

Give them the data, and the sky really is the limit.
Published at DZone with permission of Adi Gaskell, DZone MVB. See the original article here.