Proteomics for the People
Sitting Down With… Steven Carr, Proteomics Platform Director at the Broad Institute of MIT and Harvard.
You’re giving Pittcon’s Wallace H. Coulter Plenary lecture – how about a sneak preview?
Essentially, I’ll be “taking the temperature of proteomics” – where the field is today, how it is being applied, and where it is going. I’ll describe how it fits in with our partners in genomics, and why it’s so critical. I’d also like to highlight that, for the very first time, I feel we can put hand over heart and say we are actually ‘doing proteomics.’
Could you qualify that last point?
In genomics, we can define all of the genes in the organism being studied and the expression levels of those genes. In proteomics, until quite recently, large portions of the proteome, as well as post-translational modifications of those proteins (such as phosphorylation and glycosylation), were undetectable ‘dark matter.’ Improvements in sample handling and instrumentation, together with the introduction of quantitative approaches in discovery proteomics, now enable us to confidently measure a high percentage of the proteome and to define differences between one cell or tissue condition and another. That gives us much better information about biological states, disease presence or aggressiveness, and response to treatment – all areas of focus at the Broad Institute.
Where have advances been made?
The dynamic range of protein abundance has been a huge analytical challenge, but materials science, chromatography, and instrumentation have all seen vast improvement. Combined, these advances have enabled us to ramp up coverage to 50–80 percent of the expressed mammalian proteome; in some microbial systems, we can measure the whole thing. The bottleneck is now gradually shifting downstream: we must make sense of all that information.
As we generate high-quality data, we must, in parallel, develop high-throughput methods for building biological understanding. The computational approaches for integrating proteomics information (including the many tens of thousands of modifications we can now detect) with genomics and, ultimately, metabolite profiling are woefully behind.
What is it like working at the Broad Institute?
It’s incredibly interesting. Every day I interact with top scientists in biology and clinical medicine; arriving at new biological insights requires collaborative interaction, and that’s what I enjoy most in my current position. The Broad is unique in the academic world in that we apply technologies and capabilities on a scale similar to a biotechnology or pharmaceutical company, with whom we share many of the same questions: what are the right targets, how do we drug them, and why do current drugs stop working? The difference is that our main objective is to solve scientific questions, such as response and resistance to cancer therapy, through group collaboration within the Broad’s matrix of capabilities.
What part do you play in the matrix?
I run the Proteomics Platform. Platforms sit alongside the institute’s programs (in cancer, infectious disease, chemical biology, and so on) and are led by people with a long history and deep understanding of a particular field. My team and I keep the Proteomics Platform moving forward to address current needs and anticipate future requirements of the entire Broad community. Groups come to me to discuss projects that require proteomics input, but we also connect them with other groups or platforms: not all pieces of their puzzle will be solved by proteomics. In that sense, I help knit the community together.
Proteomics is rapidly evolving, so we need to stay open to innovation. That means being well connected with the technology, data analysis, biological, clinical, and software development fields. And our field is only just catching up with statistical data analysis, so expertise is needed there too.
What are you currently working on?
The Broad’s mission is to leverage the genome to improve patient treatment and quality of life. Knowledge of the proteome (and all its modifications) provides an essential and orthogonal view into cellular function and physiology, and is entirely complementary to genome-based approaches. Much of our research focuses on how the proteome changes under different perturbational conditions. As an example, we just published a paper in Science that describes the mechanism of action of an anticancer drug and, unexpectedly, reveals a new way to develop targeted therapies for this cancer.
Is the collaborative approach sometimes limiting?
In modern science, there’s less and less room for a “my lab does it all” mentality. Disease mechanisms are far too complex. A rich mix of technologists, biologists, and clinicians working collaboratively is needed to provide solutions that improve patient outcomes – and that is what we have at the Broad. This notion will resonate with anyone working in biotech or pharma. And even with all the right skills applied in collaboration, human biology remains human biology: what works in cell culture or an animal model may not translate to patients.
Rich Whitworth completed his studies in medical biochemistry at the University of Leicester, UK, in 1998. To cut a long story short, he escaped to Tokyo to spend five years working for the largest English language publisher in Japan. "Carving out a career in the megalopolis that is Tokyo changed my outlook forever. When seeing life through such a kaleidoscopic lens, it's hard not to get truly caught up in the moment." On returning to the UK, after a few false starts with grey, corporate publishers, Rich was snapped up by Texere Publishing, where he spearheaded the editorial development of The Analytical Scientist. "I feel honored to be part of the close-knit team that forged The Analytical Scientist – we've created a very fresh and forward-thinking publication." Rich is now also Content Director of Texere Publishing, the company behind The Analytical Scientist.