Maximising discovery success

Published: 7-Mar-2016

While the number of drug discovery targets has increased dramatically, unravelling their complex molecular interactions and successfully transforming them into new drugs remains a colossal task. Susan Birks reports on new R&D technology and trends


Many analysts have recently pondered the industry’s apparent lack of innovation and high failure rate in drug discovery. Yet the US Food and Drug Administration (FDA) approved more than 40 new molecular entities (NMEs) in 2015, and the European Medicines Agency (EMA) recommended the approval of 39 new therapeutics last year. In fact, biopharmaceutical drug development has been experiencing one of its most productive periods. In the past decade, NMEs in the R&D pipeline have been rising by 6% each year and now exceed 10,000 active drug candidates targeting unmet and under-served medical needs1.

As the blockbuster model has declined, the market for personalised treatments has grown in size and value. The Personalized Medicine Coalition (PMC) concluded that of the new FDA-approved drugs in 2014, more than 20% were personalised medicines2. Meanwhile, market researcher GlobalData reports that the market value for one of the first personalised medicine segments – HER2-negative breast cancer therapeutics – is forecast to increase more than fourfold, from US$1.45bn in 2013 to an estimated $6.12bn by 2023.3 These figures suggest a different story from the media’s doom and gloom predictions for the pharma industry.

That said, it cannot be denied that the number of new drugs per billion dollars spent has dropped sharply in recent decades. But take into consideration that the industry is now tackling some of the least understood and most complex therapeutic areas, such as oncology and CNS – and that regulators expect more effective, better targeted medicines with fewer side-effects.

Recent Big Pharma moves to increase pipelines and improve drug discovery success
AstraZeneca is employing the pioneering genome-editing technique CRISPR across its entire discovery platform to identify and validate new drug targets in preclinical models that closely resemble human disease. The company says it plans to share cell lines and compounds with its partners, and to publish the findings of its CRISPR work in peer-reviewed journals – contributing to broader scientific progress in the field and building on its ‘open innovation’ approach to R&D.
The company is also diving into the world of the ‘secretome’ in the hunt for new drugs and better ‘cell factories’ for making biotech medicines. The secretome is the collective name for the proteins secreted by cells, and it accounts for around one third of human proteins. As with the human genome, scientists have recently unravelled the full array of proteins involved, and AstraZeneca hopes to benefit from this opportunity through a three-year collaboration with the newly established Wallenberg Centre for Protein Research in Sweden. The new centre is funded primarily by the Wallenberg family, which also owns Investor, the third largest shareholder in AstraZeneca.
Sanofi has announced it is collaborating with Warp Drive Bio to develop novel oncology therapies and antibiotics. The privately held biotechnology company uses its proprietary Small Molecule Assisted Receptor Targeting (SMART) and Genome Mining platforms to discover such candidates.
Roche is to partner with the Dortmund, Germany-based Lead Discovery Centre (LDC) to identify and advance drug discovery projects across several treatment areas. The diseases to be addressed were not disclosed, but the partners will work to pinpoint targets and identify preclinical candidates. Over an initial three-year period, LDC will act as a translational incubator for Roche and carry out small molecule projects in collaboration with the scientific inventors and their academic institutions.
Merck Sharp & Dohme (MSD), known as Merck in North America, recently bought Scottish drug discovery company IOmet Pharma in a deal worth up to US$400m (£280m). Edinburgh-based IOmet specialises in the discovery and development of small molecules for the treatment of cancer and the deal includes its comprehensive pre-clinical pipeline of IDO (indoleamine-2,3-dioxygenase 1), TDO (tryptophan-2,3-dioxygenase), and dual-acting IDO/TDO inhibitors. IOmet will become a wholly-owned subsidiary of MSD.

The sector has, however, had to transform its approach to discovery. According to Research and Markets4, over the past two decades, the industry has seen unprecedented change – attempting to overcome issues of patent expiration, adapting to the shift towards biologics and downsizing internal discovery departments in favour of outsourcing such activities.

That change is also reflected in the way today’s drugs are discovered. Whereas traditional drug discovery (TDD) relied on identifying the active ingredient of a traditional remedy, or on accidental discovery, automation and computerisation have allowed the approach to be expanded, such that whole chemical libraries of synthetic small molecules, natural products or extracts are screened in intact cells or animals to identify substances that have a therapeutic action. Robotics has further increased the speed and reduced the cost of this process. However, the sequencing of the human genome and the recent synthesis of large quantities of purified proteins have produced an explosion of potential targets, making the process unwieldy. How best to sift through these vast data sets in search of the proteins with beneficial therapeutic action is the industry’s main preoccupation.


Phenotypic drug discovery (PDD) – in which many different compounds are tested on a biological system until one produces an observable change in phenotype, without regard to the compounds’ mechanism of action – has probably led to the development of more commercial drugs than the TDD method. Today, however, much of the new development effort goes into further refining these processes to increase their speed and accuracy and to reduce their considerable cost.

In an effort to improve gene-expression profiling, Cambridge, MA-based Genometry recently commercialised a high-throughput gene-expression assay developed at the Broad Institute of MIT and Harvard, which it claims operates at a fraction of the cost of conventional methods.5 It uses measurements of 1,000 genes to estimate, accurately and quickly, the activity of all the 20,000 or so genes expressed in a cell. The fast, low-cost assay reportedly not only allows for much larger experiments than previously possible, but also for gene-expression profiling to be used earlier in the drug discovery process, which may speed up development.
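The underlying idea – that expression levels of thousands of genes are highly correlated, so a small ‘landmark’ subset can predict the rest – can be sketched with a toy linear model. All numbers and variable names below are illustrative assumptions, not Genometry’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy premise: expression of all genes is driven by a handful of latent
# factors, so a model fitted to fully measured reference profiles can
# infer the unmeasured genes from a small measured landmark subset.
N_FACTORS, N_LANDMARK, N_OTHER, N_REF = 10, 100, 2000, 500
loadings = rng.normal(size=(N_FACTORS, N_LANDMARK + N_OTHER))

def profiles(n):
    """n full (noiseless) expression profiles from the latent-factor model."""
    return rng.normal(size=(n, N_FACTORS)) @ loadings

ref = profiles(N_REF)                        # reference set, fully measured once
X, Y = ref[:, :N_LANDMARK], ref[:, N_LANDMARK:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)    # landmark genes -> other genes

new = profiles(50)                           # new samples: only landmarks measured
inferred = new[:, :N_LANDMARK] @ W
err = np.abs(inferred - new[:, N_LANDMARK:]).mean()
print(f"mean inference error: {err:.3g}")
```

Because the toy data are exactly low-rank and noise-free, the inferred values match the held-out genes almost perfectly; with real expression data the mapping is only approximate, which is why the assay is described as an estimate.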

Scientists at the Florida campus of The Scripps Research Institute (TSRI), meanwhile, are looking to reduce costs by using microbeads to miniaturise HTS from a large-scale laboratory process to a bench-top device.6 Current HTS systems that analyse the biological activities of samples typically occupy 900m² of space or more, use high volumes of the samples, and cost millions to operate. This innovation will apparently allow the volumes of the process to be scaled down by a factor of 100,000, opening up the process for smaller-scale research projects.

In another development, researchers from Carnegie Mellon University have created the first robotically driven experimentation system to determine the effects of a large number of drugs on many proteins. The system has been shown to be capable of ‘learning’ as it goes, which can reduce the number of necessary experiments by 70%.7 The model, presented in the journal eLife, is claimed to use an approach that could lead to accurate predictions of the interactions between novel drugs and their targets, helping to reduce the cost of drug discovery. As the machine progressively performs experiments, it identifies more phenotypes and more patterns in how sets of proteins are affected by sets of drugs. The team determined that the algorithm was able to learn a 92% accurate model of how the 96 drugs affected the 96 proteins from only 29% of the possible experiments.
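The trade-off the CMU team describes – accepting a slightly approximate model in exchange for running far fewer experiments – can be illustrated with a minimal active-learning sketch. Everything here (the drug/protein counts, the ‘fingerprint’ strategy, the phenotype model) is a hypothetical toy, not the published eLife system:

```python
import random

random.seed(0)

# Toy ground truth: each drug has a hidden mechanism class, each protein a
# hidden pathway, and the observed phenotype (0, 1 or 2) depends only on the
# (class, pathway) pair - so drugs in the same class behave identically.
N_DRUGS, N_PROTEINS, N_CLASSES, N_PATHWAYS = 24, 24, 4, 4
drug_class = [random.randrange(N_CLASSES) for _ in range(N_DRUGS)]
prot_path = [random.randrange(N_PATHWAYS) for _ in range(N_PROTEINS)]
table = {(c, q): random.randrange(3)
         for c in range(N_CLASSES) for q in range(N_PATHWAYS)}

experiments = 0
def run_experiment(d, p):
    """One simulated 'wet-lab' measurement of drug d against protein p."""
    global experiments
    experiments += 1
    return table[(drug_class[d], prot_path[p])]

PROBES = range(6)   # a few proteins used to fingerprint each new drug
reps = []           # (probe_signature, full_row) for each behaviour seen so far
model = {}          # predicted response row for every drug

for d in range(N_DRUGS):
    sig = tuple(run_experiment(d, p) for p in PROBES)
    match = [row for s, row in reps if s == sig]
    if len(match) == 1:
        model[d] = match[0]          # looks like a known class: reuse its row
    else:
        # Novel (or ambiguous) fingerprint: measure the remaining proteins.
        row = list(sig) + [run_experiment(d, p)
                           for p in range(len(PROBES), N_PROTEINS)]
        reps.append((sig, row))
        model[d] = row

correct = sum(model[d][p] == table[(drug_class[d], prot_path[p])]
              for d in range(N_DRUGS) for p in range(N_PROTEINS))
accuracy = correct / (N_DRUGS * N_PROTEINS)
print(f"experiments: {experiments}/{N_DRUGS * N_PROTEINS}, "
      f"accuracy: {accuracy:.0%}")
```

Because most drugs are recognised from a short fingerprint and inherit a previously measured row, the full drug-by-protein grid is learned from a fraction of the experiments – the same pattern-exploiting principle, in miniature, that the robotic system applies at scale.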


Medicinal chemists have also been increasing the odds of finding potentially useful compounds by labelling chemical compounds with barcode-like fragments of DNA. The resulting DNA-encoded libraries – which can dwarf conventional small-molecule libraries – offer clear advantages for drug discovery. Rather than testing each compound individually, researchers can put all of the DNA-tagged small molecules into a single mixture and then introduce the target protein. Any compounds that bind to the target can be identified easily thanks to their DNA barcodes. Such libraries are slowly gaining popularity. In 2007, GSK acquired the technology through its purchase of Praecis Pharmaceuticals, while Novartis and Roche have reportedly started their own in-house DNA-encoded-library programmes. While DNA-encoded libraries are unlikely to replace the heavily invested-in HTS infrastructure, they do offer a cheap and efficient complementary route to finding chemical structures that bind to new or more challenging targets.
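The pooled selection step described above can be mimicked in a few lines of code. This is a deliberately simplified simulation under stated assumptions (library size, barcode length, binder rate and background carry-over are all invented for illustration), not a model of any real DEL protocol:

```python
import random

random.seed(1)

# Build a toy DNA-encoded library: each compound carries a unique 12-base
# DNA barcode; a small fraction of compounds genuinely bind the target.
BASES = "ACGT"
library = {}                                     # barcode -> binds target?
while len(library) < 10_000:
    barcode = "".join(random.choice(BASES) for _ in range(12))
    library[barcode] = random.random() < 0.001   # ~0.1% true binders

def selection(pool):
    """One affinity-selection round on the whole pool at once: binders are
    retained on the immobilised target; non-binders are washed away, apart
    from a little nonspecific background carry-over."""
    return [bc for bc, binds in pool.items()
            if binds or random.random() < 0.01]

retained = selection(library)
hits = {bc for bc in retained if library[bc]}    # ground truth, for checking
print(f"{len(library)} compounds screened in one pot; "
      f"{len(retained)} barcodes sequenced; {len(hits)} true binders among them")
```

The point the sketch makes is the economics: the entire library is interrogated in a single experiment, and ‘sequencing’ a short list of retained barcodes replaces ten thousand individual assays, at the cost of some background noise that real workflows remove with repeated selection rounds.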

Gaining a better understanding of molecular interactions is a major challenge for the discovery sector, and a new multi-disciplinary team from Dijon University, France, has been working on this problem. They have come up with a robust approach to detecting molecular interactions reliably using flow cytometry – a laser-based biophysical technology used widely to count and sort cells, which works by suspending the cells in a stream of fluid and passing them through electronic detection apparatus.8 By coupling this to Förster resonance energy transfer (FRET) – a non-invasive technique that permits the study of protein interactions with a resolution of 1–10nm in vivo – the researchers believe they have obtained a robust and quick approach to analysing protein interactions in viable cells. The patented software that combines the power of flow cytometry with the FRET technique is called FRETinFLOW.
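What makes FRET usable as a 1–10nm ‘molecular ruler’ is that transfer efficiency falls off with the sixth power of the donor-acceptor distance, so it switches sharply between near-100% and near-zero around the pair-specific Förster radius R0. A quick numerical sketch (the 5nm R0 is a typical illustrative value, not a figure from the Dijon work):

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """FRET efficiency E = 1 / (1 + (r/R0)^6) for donor-acceptor
    distance r; R0 is the distance at which transfer is 50% efficient."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Efficiency drops steeply across the 1-10 nm window:
for r in (2, 5, 8):
    print(f"r = {r} nm: E = {fret_efficiency(r):.3f}")
```

Two proteins close enough to interact (a few nanometres apart) give a strong FRET signal, while proteins merely co-expressed in the same cell give essentially none – which is what lets a flow cytometer score interactions cell by cell.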

High performance computing

Finally, high performance computing (HPC) is playing an increasingly important role in relieving the drug discovery bottleneck. A new paper9 by prominent researchers in China, published online in a recent issue of National Science Review by Oxford Journals, examines China’s growing use of HPC in the pursuit of new drugs. In the West, too, many computational drug discovery services are being offered: Domainex and Cresset Discovery Services, for example, are the latest to form an alliance to provide laboratory-based and computational drug discovery services through a combination of their respective capabilities in chemistry and biology.





4. Research and Markets (drug_discovery)
9. ‘Applying high performance computing in drug discovery and molecular simulation’, National Science Review

All web articles last accessed 24 Feb
