(Prote)omics in Next-Gen Drug Development
Collaboration between different omics fields will be key in supporting the next generation of medicines
Mark Rogers | Opinion
Everybody is different. The stark contrast in individual responses to COVID-19 has made this fact painfully clear. We also know that responses vary when it comes to medications; what works for one person won’t necessarily work for another. Next-generation medicines promise a more personalized approach to drug development. When therapeutics are tailored to an individual’s needs, rather than to a broad population, there is a real opportunity to boost efficacy. But where does proteomics come in?
Proteomics plays a crucial role in the discovery of such drugs because it allows the variation in an individual’s response to be characterized at the molecular level. In short, proteomics identifies structural differences in proteins – a major driver of that variation – so that medicines can be designed to specifically target or exploit those structures.
Initial separations in proteomics were based on two-dimensional gel electrophoresis, but the field eventually moved to separation by LC and protein identification by MS. The key difference today, compared with the field’s beginnings in the 1970s, is speed – we can run studies far more quickly. Then there is “nanoproteomics.” Recent advances in technology have made it possible to conduct proteomic studies on far fewer cells, and even to perform single-cell proteomics. The era of nanoproteomics opens up a wealth of new information, including insights into rare cell populations and hard-to-obtain clinical samples.
In addition to improvements in proteomics itself, the move towards data-driven diagnoses and personalized therapies has largely been enabled by advances in other fields. Bioinformatics has seen tremendous progress; it can now handle the extremely large amounts of data that these studies generate – and very quickly. The other development, though not specific to proteomics, is automation. We’ve gone from an almost fully manual process to one that requires almost no human interaction. For example, automation of sample preparation and data interpretation has enabled much more rapid processing at either end of the analytical workflow. In short, we can now both generate and process far more data than ever before.
In the next 10 years or so, I believe artificial intelligence will further help us identify target proteins and pinpoint problem areas. I also believe (or perhaps hope) that data filtering will improve. Around 99 percent of the massive amount of data we collect from proteomic studies is not useful – being able to filter out that 99 percent will prove invaluable.
It’s clear that to obtain a true picture of our responses to drugs, we must look in every corner of biological complexity – from the genome to the proteome and the metabolome (and perhaps beyond). This breadth of coverage highlights a central challenge in drug discovery: many groups work in each of these omics areas, but they rarely come together in one place. We must break down these silos and focus on bringing the different fields together. Only by working holistically can we begin to understand more fully what happens at the cellular and molecular level in response to treatment, edging ever closer to truly personalized medicine.