Trust, but Verify!
Open method sharing – through online repositories – could finally put an end to niggling issues of method irreproducibility
Analytical scientists, like most scientists, are driven by the research process: being inspired, translating this inspiration into an experiment, analyzing results, and drawing conclusions.
My favorite part: the inspiration. We are inspired by our surroundings, and scientific inspiration can come through any medium, from an interesting paper to a talk, webinar, or even a tweet. After the initial idea comes a period of intense method development, tweaking every parameter to get the best results. Sometimes it works. Sometimes we learn – that is part of the game. But what follows a successful experiment?
We publish! As scientists, we have to communicate our findings – primarily to inform the community, but also to support our career development. (One may argue against the last reason, but that’s another topic for another time.) When we begin the writing process, we must ask an important question: what information is essential for reporting my methods?
In answering this question, I find it useful to ask another: “If my method inspires another scientist, have I provided all the information to allow them to implement it directly in their lab?” We are well placed to ease the future experiments of our peers by providing such “plug-and-play” solutions.
Method reporting guidelines represent a growing topic of discussion in the scientific community. Heather Bean explored this problem in The Analytical Scientist fairly recently, emphasizing the need to improve reproducibility. “Give a senior scientist a paper and ask him to reproduce the study based on the method section… he will fail,” she wrote. The sentence is shocking because it is true. What is the purpose of the method section if it does not allow you to replicate the study?
But the reality is that providing such a method walkthrough is easier said than done. Even if you were to communicate all the experimental details, how could you do it efficiently? Initiatives like the Metabolomics Standards Initiative have defined guidelines to help provide an answer (the short answer: lots of lists). Yet it is difficult to generate relevant lists with so many different techniques (and combinations of techniques) being used.
I work mostly in GC×GC-MS. For the last two years, we have organized focus groups on this topic at the Multidimensional Chromatography Workshop and the ISCC & GC×GC Symposium. The main output? Listing all required parameters is a daunting challenge; between instrument specificities, sample preparation steps, separation parameters, detection, and data processing, including every necessary detail is a near-impossible task.
I don’t believe that adding an extra 10 pages of tables and lists to the supplementary materials is a real solution. It would be difficult to review and implement in another lab, even if the same instruments are used. A perfect solution would allow direct transfer of the required information from instruments and software, with automatic manuscript format checking.
Automatic format checking is mandatory: you cannot ask reviewers to verify every single parameter of an entire method, but software could check that every value is listed. We could even add advanced screening that requests comment when a value falls outside a given range (an unusually high temperature, say). Templates for these checks could be generated directly from instrument or processing software, and the resulting files could then be uploaded to online repositories. These repositories could also keep track of replication studies, which would in turn demonstrate the robustness of a published method. Instrument providers could even establish instrument-specific open repositories, where you could find a method for a specific sample type and import it directly. In such cases, the method section could be a brief summary of the protocol that references the relevant repository files.
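To make the idea concrete, here is a minimal sketch of what such a check could look like: a machine-readable template lists the required parameters, and a script flags anything missing or out of range. The parameter names, units, and ranges below are purely illustrative assumptions, not an existing standard or any vendor’s format.

```python
# Illustrative sketch of automated method-completeness checking.
# Each required parameter maps to (unit, plausible range for sanity flagging);
# all names and ranges here are hypothetical examples.
REQUIRED_PARAMETERS = {
    "inlet_temperature": ("degC", (100, 350)),
    "modulation_period": ("s", (1, 10)),
    "column_1_length": ("m", (10, 60)),
    "carrier_gas_flow": ("mL/min", (0.5, 5)),
}

def check_method(method: dict) -> list[str]:
    """Return human-readable issues: missing values and out-of-range flags."""
    issues = []
    for name, (unit, (low, high)) in REQUIRED_PARAMETERS.items():
        if name not in method:
            issues.append(f"MISSING: {name} ({unit})")
        elif not low <= method[name] <= high:
            issues.append(
                f"CHECK: {name} = {method[name]} {unit} "
                f"is outside the expected range {low}-{high} {unit}"
            )
    return issues

# Example: a method export missing one value and containing a suspicious one.
method = {
    "inlet_temperature": 400,  # flagged for comment: unusually high
    "modulation_period": 4,
    "column_1_length": 30,
    # carrier_gas_flow not reported: flagged as missing
}

for issue in check_method(method):
    print(issue)
```

In practice, the template would be exported by the instrument or processing software rather than written by hand, and the same file could travel with the manuscript to the repository; the point is simply that completeness and range checks are mechanical, so reviewers need not perform them.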
Put simply, open method sharing has the potential to improve our trust in publications and to enable replication and cross-laboratory validation. So what are we waiting for?
Lead Scientist and Lecturer, University of Liège, Belgium