Researchers have developed a platform that combines automated experiments with AI to predict how chemicals will react with one another, which could accelerate the design process for new drugs. Predicting how molecules will react is vital for the discovery and manufacture of new pharmaceuticals, but historically this has been a trial-and-error process, and the reactions often fail. To make such predictions, chemists usually simulate electrons and atoms in simplified models, a computationally expensive and often inaccurate process.
Now, researchers from the University of Cambridge have developed a data-driven approach, inspired by genomics, where automated experiments are combined with machine learning to understand chemical reactivity, greatly speeding up the process. They’ve called their approach, which was validated on a dataset of more than 39,000 pharmaceutically relevant reactions, the chemical “reactome.” Their results, reported in Nature Chemistry, are the product of a collaboration between Cambridge and Pfizer.
“The reactome could change the way we think about organic chemistry,” said Emma King-Smith, PhD, from Cambridge’s Cavendish Laboratory, the paper’s first author. “A deeper understanding of chemistry could enable us to make pharmaceuticals and so many other useful products much faster. But more fundamentally, the understanding we hope to generate will be beneficial to anyone who works with molecules.”
How does the “reactome” approach work?
The reactome approach picks out relevant correlations between reactants, reagents, and performance of the reaction from the data, and points out gaps in the data itself. The data is generated from very fast, or high throughput, automated experiments. “High throughput chemistry has been a game-changer, but we believed there was a way to uncover a deeper understanding of chemical reactions than what can be observed from the initial results of a high throughput experiment,” said King-Smith.
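The paper does not publish its code, but the core idea of mining reactant/reagent–performance links from high-throughput data can be illustrated with a minimal, entirely synthetic sketch. Everything here (the component names, effect sizes, and the simple Pearson-correlation screen) is a hypothetical stand-in for the much richer models and 39,000-reaction dataset described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-throughput screen: each row is one automated reaction,
# encoded as binary component choices, plus a measured yield in percent.
# The "ground truth" below is invented purely for illustration.
n_reactions = 500
features = rng.integers(0, 2, size=(n_reactions, 3))  # catalyst A?, base B?, solvent C?
yields = (
    40.0
    + 25.0 * features[:, 0]   # simulated: catalyst A strongly helps
    - 15.0 * features[:, 1]   # simulated: base B hurts
    + rng.normal(0.0, 5.0, n_reactions)  # measurement noise
).clip(0.0, 100.0)

def reactivity_correlations(X, y):
    """Pearson correlation of each reaction component with yield --
    a toy version of the reactant/reagent-performance relationships
    the reactome approach extracts from high-throughput data."""
    return np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])

corr = reactivity_correlations(features, yields)
```

Running this recovers the planted structure: a strong positive correlation for the helpful catalyst, a negative one for the harmful base, and roughly zero for the inert solvent. The real reactome models go far beyond pairwise correlations, but the input/output shape of the problem is the same.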
“Our approach uncovers the hidden relationships between reaction components and outcomes,” said Alpha Lee, PhD, who led the research. “The dataset we trained the model on is massive—it will help bring the process of chemical discovery from trial-and-error to the age of big data.”
In a related paper, the team developed a machine learning approach that enables chemists to introduce precise transformations to pre-specified regions of a molecule, enabling faster drug design. The approach allows chemists to tweak complex molecules—like a last-minute design change—without having to make them from scratch.
Making a molecule in the lab is typically a multistep process, like building a house. If chemists want to vary the core of a molecule, the conventional way is to rebuild the molecule, like knocking the house down and rebuilding from scratch. However, core variations are important to medicine design.
The power of late-stage functionalization reactions
A class of reactions, known as late-stage functionalization reactions, attempts to directly introduce chemical transformations to the core, avoiding the need to start from scratch. However, it is challenging to make late-stage functionalization selective and controlled—there are typically many regions of the molecule that can react, and it is difficult to predict the outcome. “Late-stage functionalizations can yield unpredictable results, and current methods of modeling, including our own expert intuition, aren’t perfect,” said King-Smith. “A more predictive model would give us the opportunity for better screening.”
The researchers developed a machine learning model that predicts where a molecule would react, and how the site of reaction varies as a function of different reaction conditions. This enables chemists to find ways to precisely tweak the core of a molecule.
“We pretrained the model on a large body of spectroscopic data—effectively teaching the model general chemistry—before finetuning it to predict these intricate transformations,” said King-Smith. This approach allowed the team to overcome the limitation of low data: There are relatively few late-stage functionalization reactions reported in the scientific literature. The team experimentally validated the model on a diverse set of drug-like molecules and was able to accurately predict the sites of reactivity under different conditions.
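The pretrain-then-finetune strategy described above can be sketched in miniature. In this illustrative example (all data synthetic, and plain linear regression standing in for the team's actual neural model), a model first learns general weights from a large surrogate dataset, analogous to the spectroscopic pretraining, and is then warm-started on a tiny task dataset, analogous to the scarce late-stage functionalization reactions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20
w_general = rng.normal(size=d)              # "general chemistry" relationship
w_task = w_general + 0.1 * rng.normal(size=d)  # task is a small perturbation of it

# Large pretraining set (stands in for abundant spectroscopic data).
X_pre = rng.normal(size=(2000, d))
y_pre = X_pre @ w_general + rng.normal(0.0, 0.1, 2000)

# Tiny task set (stands in for scarce late-stage functionalization data).
X_task = rng.normal(size=(15, d))
y_task = X_task @ w_task

def fit_gd(X, y, w0, steps=200, lr=0.1):
    """Plain gradient descent on mean squared error, from initial weights w0."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

w_pre = fit_gd(X_pre, y_pre, np.zeros(d))        # pretrain on the big dataset
w_finetuned = fit_gd(X_task, y_task, w_pre)      # warm-start from pretrained weights
w_scratch = fit_gd(X_task, y_task, np.zeros(d))  # 15 examples, no pretraining

X_test = rng.normal(size=(500, d))
y_test = X_test @ w_task
mse = lambda w: float(np.mean((X_test @ w - y_test) ** 2))
```

With only 15 task examples for 20 unknowns, the from-scratch model is badly underdetermined, while the warm-started model inherits the pretrained weights in the directions the small dataset cannot pin down, so its test error is far lower. That is the low-data advantage the quote describes, in its simplest possible form.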
“The application of machine learning to chemistry is often throttled by the problem that the amount of data is small compared to the vastness of chemical space,” said Lee. “Our approach—designing models that learn from large datasets that are similar but not the same as the problem we are trying to solve—resolves this fundamental low-data challenge and could unlock advances beyond late-stage functionalization.”
- This press release was originally published on the University of Cambridge website