Solid-phase peptide synthesis is a well-established process that’s used to manufacture peptides for both clinical development and commercial supply. Currently available technologies, however, are prone to errors and may not provide all the answers that modern formulators need. Dr Kevin Robinson (KSR) recently caught up with Almac’s Steve McIntyre (SM), Manager, Peptide Process Development, to discuss a novel solution that takes a digitalisation approach to overcoming these hurdles.
The solid-phase peptide synthesis (SPPS) process typically sits on a critical development path for new therapeutics, particularly during the early clinical phases. As such, a tool that accurately predicts how a peptide will perform during SPPS and highlights any potential challenges is highly valuable.
“A number of peptide prediction tools are available,” says Steve, “but these typically rely on the aggregation potential of adjacent amino acids; and, although of some value, they can be inaccurate and often don’t highlight the real issues within peptide synthesis.”
In response, Almac has created a peptide synthesis tool (PREDICTIDE™) that examines an amino acid sequence and interrogates not only pairs of adjacent amino acids but larger groups as well, alongside other factors such as peptide length and resin choice.
“PREDICTIDE will determine the initial starting conditions for a peptide synthesis procedure, as well as areas that are likely to be problematic.” And, notes Steve: “Almac has been in the peptide business for more than 30 years now. We’ve made a lot of them. The PREDICTIDE toolkit uses real data from the synthesis of >10,000 peptides that have been manufactured at Almac and, based on additional data, continues to evolve.”
“Sometimes a client is able to provide some information about how their peptide has been made; sometimes it has to be plucked from a protein and other sections have to be constructed; but, whatever the starting point, making the peptides is a key business for Almac.”
“And there are a number of ways that you can approach peptide synthesis, using all the tools at your disposal — long coupling times, vats of reagents, etc. — which will probably produce a pure peptide in the end. But, it’s not cost-effective … and it might not always work.”
“So, what a lot of people do now is use tools that, based on the sequence of the peptide of interest, predict which particular sections are going to be tough to synthesise. Those traditional tools are based on peptide aggregation.”
“If you have a number of hydrophobic amino acids within the sequence, the peptide itself is likely to aggregate and fold in on itself; as the N terminus grows, it actually becomes less accessible. Coupling is difficult, the peptide is aggregated and access is limited.”
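The traditional aggregation-based screen Steve describes can be illustrated with a short sketch. PREDICTIDE’s internals are not public; the code below simply flags hydrophobic stretches using the well-known Kyte–Doolittle hydropathy scale, with a window size and threshold chosen purely for demonstration.

```python
# Illustrative sketch of a traditional aggregation-potential screen.
# The Kyte-Doolittle scale is standard; the window and threshold are
# assumptions for demonstration, not PREDICTIDE parameters.
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def flag_aggregation_prone(sequence, window=5, threshold=2.0):
    """Return 1-based start positions of sliding windows whose mean
    hydropathy exceeds the threshold, i.e. stretches likely to
    aggregate and couple poorly."""
    flagged = []
    for i in range(len(sequence) - window + 1):
        segment = sequence[i:i + window]
        score = sum(KYTE_DOOLITTLE[aa] for aa in segment) / window
        if score > threshold:
            flagged.append(i + 1)
    return flagged
```

A sequence such as "ILVIL" (all strongly hydrophobic) would be flagged, whereas a polyglycine stretch would not; as the interview notes, this kind of screen sees only hydrophobicity, not the order-dependent effects PREDICTIDE targets.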
“All the peptides we’ve made in the past have started with an assessment of their aggregation potential, what areas might be challenging and how we cope with that. What we’ve realised recently is that we can do a lot of things in parallel with our synthesis equipment.”
“Instead of making one peptide at a time and going through a cycle of discovering and solving a series of issues, which is quite time consuming, we wanted to understand how we could best use the sequence data to identify in advance what these issues were going to be.”
“Instead of just looking at the aggregation potential, we started to datamine all the peptides we’d previously manufactured, examined problematic elements of each of them — particularly the ones that had failed — and looked for common overlaps.”
“It was no longer just about hydrophobic amino acids; it was about the order in which the amino acids occurred and noting that certain sequences tended to lead to more troublesome syntheses … and more failures.”
“Based on those observations, Craig Johnson, Peptide Chemistry Team Leader, and our IT team then built a tool that draws on all the historical information from every peptide we’ve ever manufactured, allowing us to generate a more accurate peptide synthesis predictor.”
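In minimal form, the data-mining approach described above amounts to checking a sequence against sub-sequences that historically correlated with failed syntheses. The motif set below is invented for illustration (DG and NG are commonly cited aspartimide-prone pairs); Almac’s mined dataset and scoring logic are proprietary.

```python
# Illustrative sketch of flagging troublesome sub-sequences against
# motifs mined from historical synthesis records. The motif set here
# is a hypothetical stand-in for the real (proprietary) dataset.
PROBLEM_MOTIFS = {"VV", "II", "DG", "NG"}

def flag_motifs(sequence, motifs=PROBLEM_MOTIFS):
    """Return sorted (1-based position, motif) pairs for every
    occurrence of a known troublesome motif in the sequence."""
    hits = []
    for motif in sorted(motifs):
        start = sequence.find(motif)
        while start != -1:
            hits.append((start + 1, motif))
            start = sequence.find(motif, start + 1)
    return sorted(hits)
```

Unlike a pure hydrophobicity screen, a lookup of this kind is sensitive to the order in which residues occur, which is the shift in emphasis the interview describes.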
“Now when we start to build a peptide, we’re more likely to understand where the problems are. And, because we can run a number of syntheses in parallel, we can reach a more positive endpoint in a shorter period of time. So, rather than going through an iterative problem-solving process, we have a novel system that more accurately predicts any problems, enables us to run multiple syntheses at once and reduces the overall project time from months to a couple of weeks.”
KSR: Obviously there are time-to-market benefits for a technology such as this, but were there any other factors within the industry that prompted the development of PREDICTIDE?
SM: Time-to-market is key. I work in the Process Development Group at Almac on the new chemical entities that our clients need to take into clinical studies. We’re very aware that funding in the biopharma environment may rely on reaching specific phases of clinical development, so timelines are absolutely critical for our customers. As such, getting the peptide synthesis right as quickly as possible plays a crucial role for both them and us.
In addition, Almac is also active in the neoantigen peptide space, which is a slightly different concept compared with classic GMP peptide production. Within that sector, our clients may ask us to make 20–25 peptides, for example, knowing that some of them will be tough — if not impossible — to produce.
Out of those 20–25, you may get, say, 15, which will hopefully be enough to address the needs of the specific application. There is, however, a huge rate of attrition; and, in today’s world, that’s no longer acceptable. Fortunately, we can also use PREDICTIDE to be more efficient and effective in the neoantigen space as well and reduce those attrition rates.
KSR: So, reading between the lines, we’re talking deadlines, cost efficiencies and being more sustainable by not wasting a lot of raw materials.
SM: Absolutely. That’s incredibly important at the moment.
KSR: Perhaps building on that thought, did PREDICTIDE have to go through several iterations before you were able to apply it commercially or did it deliver results straight away?
SM: We’re using it right now but, at the same time, it’s a “living document.” As we make more and more peptides, we update the algorithm. Our approach was to apply PREDICTIDE to the peptides we’ve manufactured before and ask the program: what’s going to go wrong with this synthesis?
And, it’s been much better able to predict the issues we encountered compared with the tools we used historically. It’s certainly working for us; we’ve applied it to a number of different projects and, coupled with being able to do more in parallel, it’s definitely accelerating the first phases of process development.
KSR: Would I be right in thinking that it can help with fail fast, fail early decisions?
SM: Yes. Quite often, clients will come to us with a protein and they want to understand which parts they need to map to get a therapeutic that might work in the body to, perhaps, mimic a specific immune response.
Essentially, they’re looking at chopping up the protein into different sections and manufacturing those peptides. What we can do is advise where to make the cuts with the understanding that certain peptides are going to be much easier to synthesise.
The other advantage we have is that the technology enables us to make more accurate quotations, provides speed benefits (as mentioned) and gives clients peace of mind that we can deliver on our promises. It’s a competitive market and providing a reliable service can be make or break, especially during the early stages of development.
KSR: And how is the peptide market at the moment? Have the events of recent years affected what companies are doing and/or their need for new peptides?
SM: Very much so. In general, the therapeutics pipeline is increasing. And although there’s always been a gap between small and large molecule therapeutics, peptides are starting to close that gap. The clinical pipeline of peptide therapeutics has increased at a faster rate than small molecules. We’re not seeing approvals just yet, but there’s definitely an upward trend.
Recently, Novo Nordisk has marketed an orally available peptide, which is a game-changer.
Many people have shied away from peptides because of the necessity for intravenous administration, which is not ideal for a medicine that needs to be taken every day.
The oligonucleotide space is changing as well, with the rise of mRNA vaccines, and peptides are inherently involved in stimulating some of the T cell-based immune responses. We’re seeing a lot of traction and, certainly at Almac, our peptide business has increased by a factor of 3–4 during the last 5–6 years in terms of revenue.
KSR: Based on this growth, I'm assuming that there weren’t any issues within the highly regulated pharmaceutical industry about adopting this new technology?
SM: None at all. In fact, being able to tackle certain aspects of sustainability may have made it even more attractive. That's such a fundamental consideration nowadays and a true goal within peptide chemistry. PREDICTIDE means we can limit the number of experiments that need to be done, so there have been no issues with the adoption of this tool in our day-to-day business.
KSR: Looking ahead, could such a tool be used to predict the sequence required to produce a peptide therapeutic to address a particular need? If you knew how the peptide had to function, could you reverse engineer the technology?
SM: I’m sure we could. I’d not actually thought about it from that angle; but, we’ve previously collaborated with academic groups and discussed designing peptides from a manufacturability perspective. So, why couldn’t we take it a step further?
Thankfully, peptide medicines are derived from proteins and, at the moment, how you chop that protein up still involves a little bit of trial and error. But, taking a design approach has worked in the past, so I certainly wouldn’t discount being able to custom build a peptide based on its required function in the future.
Perhaps on a related point, we’re currently applying artificial intelligence (AI) to the process. Right now, we’re working on a human database-driven design model to make the system even more predictive, which, we hope, will further reduce the timelines involved and deliver better end products.