The Power List 2015
Editor-in-Chief, Analytical Chemistry; James R. Eiszner Family Endowed Chair in Chemistry and Director of the School of Chemical Sciences, University of Illinois at Urbana-Champaign, USA.
Most important lesson One of the most important lessons I haven’t yet learnt is how to say “no” to the many requests I receive. I do try, but it never seems to work for me. My colleagues and friends tell me I do too much – but I think it’s simply that I have too many things on my plate. One lesson I have learnt, however, is to pick important but difficult research goals and keep moving towards them no matter the intervening challenges or obstacles – that is what keeps me on target.
Encounters with serendipity I didn’t expect to become editor of Analytical Chemistry. The job fell into my lap without any advance thought or preparation – it just happened! I’m really enjoying the role, although it takes up a lot of my time.
My whole career is really just a collection of what seem to be fairly random events. My passion for analytical chemistry was ignited by working at Lawrence Berkeley National Laboratory (LBNL) for three consecutive summers. Had I not gone there, I might have ended up doing something completely different. Most people could say something similar; I got to where I am now through random events and through mentorship by great people like Richard Zare and Richard Scheller at Stanford University. People like them have been benchmarks for the rest of my career. Research is different, though – I don’t think serendipity has much of a place in it. I consider research to be a continuous effort to solve problems.
Eye on the horizon There are a number of areas that are getting a lot of press attention and are obviously important; for example, the integration of various techniques with nanoscience, portable analysis through microfabrication, advances in mass spectrometry instrumentation, and informatics driving new types of analysis. Then there is “big data”, which all analytical chemists are grappling with. Today, it’s very easy to generate a terabyte of data – but its format and quality are such that we need new ways of managing and analyzing it. Even the journals are questioning how best to archive the data. And there really is no fix in sight, because data quantity and complexity grow faster than anyone can design an adequate solution.
I am on a joint journal task force – with the American Chemical Society (ACS) and various other publishers – that is trying to come up with a big data solution. In narrowly defined fields, such as proteomics, it is possible to create a solution that works, but for larger, more generic big data problems, there isn’t an obvious one. Many of the data archiving systems are setting ground rules for what data they will accept because they cannot cope with the enormity of the task. For example, the US National Institutes of Health (NIH) will take genomic data but will not accept proteomic data. And what about the private repositories? How long will they stay around? If a repository only receives a five-year grant, will its data still be available in 20 years? Even the ACS says that, although it can store data, it lacks the resources to annotate it. Right now, I don’t think there is a global solution to the big data problem.