
Care to Repeat That?

Over the years, my colleagues and I have amassed a large number of columns, review articles and even books, all dealing with various aspects of analytical method validation (AMV). Type those three words into a Google search and around 4.33 million results pop up; it is clearly a popular area of analytical chemistry. But what does this have to do with the fact that much of today’s scientific literature (especially in the biological and medical sciences) appears to be of questionable reproducibility (1)(2)(3)(4)(5)(6)(7)(8)(9)(10)(11)? Where does this apparent lack of reproducibility come from, and how can it be rectified in the future? These are worrying, pressing and, as yet, not fully addressed questions. And I have many, many more for you...

Much has been written on these subjects but there seems to be some confluence of AMV, reproducibility/repeatability, and publishing poor science in general. Why? And where do the scientific journals (of all types) come into the picture, if at all? Should the burden of responsibility be on authors, journal reviewers, funding agencies, editors, peer review processes, graduate students, postdocs, or elsewhere?

Reviewing reviews

As a reviewer of (mainly) analytical papers for several decades, I receive too many manuscripts that contain little to no true AMV and no serious discussion of the topic – most of the data are single points (n=1), with no evidence of any repeatability or reproducibility. There is, of course, rarely any statistical treatment of said data, because there is simply not enough of it. How do such manuscripts even reach a reviewer (via the editors)? Why would anyone submit such a manuscript for serious consideration by a reputable journal? And why do some reviewers accept such data, allowing the paper to be published after only minor revisions, with no added data or studies?
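To make the point concrete, here is a minimal sketch (in Python, using invented triplicate peak areas) of the basic statistical treatment that even n=3 replicates allow – and that a single point makes impossible:

```python
import statistics

# Hypothetical triplicate peak areas for one analyte concentration.
replicates = [10432.0, 10518.0, 10387.0]

n = len(replicates)
mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)       # sample standard deviation
rsd = 100.0 * sd / mean                 # percent relative standard deviation
t_95 = 4.303                            # Student's t, two-tailed, 95%, df = n - 1 = 2
ci_half_width = t_95 * sd / n ** 0.5    # 95% confidence half-interval on the mean

print(f"n = {n}, mean = {mean:.0f}, SD = {sd:.1f}")
print(f"%RSD = {rsd:.2f}, 95% CI = mean ± {ci_half_width:.0f}")
```

Even this much – a mean, an SD, a %RSD and a confidence interval – gives a reviewer something to judge. A single point gives them nothing.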

Inherent heterogeneity or inherent laziness?

Antibody-based publications appear to demonstrate the least reproducibility of all analytically oriented papers. Antibodies, being proteins, often vary from source to source as a function of how they were expressed and purified – perhaps this explains some of the irreproducibility in such papers, but I believe most of the blame lies squarely with the researchers themselves.

As a practicing academic with an active research group for decades, I was always amazed by how few academic colleagues demanded that their researchers – graduate students, postdocs, visiting scientists and undergraduates – learn as much as possible about AMV and the demonstration of repeatability and reproducibility, and then demonstrate it in all of their studies. It was (and is) as if they never considered such behavior an important part of doing quality research or publishing high-quality papers.

Even if the antibodies themselves are not reproducible, good method validation would reveal that fact – in addition to demonstrating the reproducibility of the overall research. If such studies are neither pursued by authors nor demanded by editors and reviewers, then more and more papers will inevitably be shown to be irreproducible – which is exactly where we currently find ourselves. Is it possible that biologists are never taught anything about AMV? If so, is it also possible that research advisors and mentors do not require their students to know the field, or push them to work harder towards credible publications in the open scientific literature? More remarkable still, even PhD theses specifically focused on analytical chemistry often contain no evidence of true method validation, repeatability or reproducibility.
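As a simple illustration of that point: run the same method in triplicate on several antibody lots, and a one-way ANOVA will show whether lot-to-lot variation exceeds the method’s own repeatability. A minimal sketch, assuming SciPy is available and using invented percent-recovery data:

```python
from scipy import stats  # assumes SciPy is installed

# Hypothetical percent recoveries: the same method, run in triplicate,
# on three different antibody lots.
lot_a = [98.2, 97.9, 98.6]
lot_b = [97.8, 98.4, 98.1]
lot_c = [92.1, 91.6, 92.5]  # a deviant lot that the test should flag

# One-way ANOVA: is the between-lot variance larger than the within-lot
# (that is, method repeatability) variance?
f_stat, p_value = stats.f_oneway(lot_a, lot_b, lot_c)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
# A small p-value points to the reagent varying, not the method failing.
```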

All of the above leaves me with a big question mark over the reproducibility of the vast majority of papers appearing in analytical journals. Should we discount everything with little to no AMV? In any case, we need to find and fix the underlying problem.

Time to change

I think it’s fair to say that the problem lies in our own efforts, not ‘in the stars’. But how do we correct it? How do we ensure a future in which science is not discredited by the suggestion that most of its publications are simply not reproducible or useful? I think we can all agree that if even the originators of a piece of research cannot reproduce their findings, future researchers will also struggle... and that means everyone is just spinning their wheels – wasting time, energy, hope, money and the future of science.

Suffice it to say, everyone who publishes any type of science (or engineering, for that matter) should be required to demonstrate complete AMV in the very first publication using such methods. There is no excuse not to. The field has now been perfected; it is used throughout the pharma/biopharma industries and is a major part of ICH and all other regulatory guidelines around the world for such products. Indeed, scientists in any industry regulated by a government agency (whether FDA, EMA, PMDA or others) must validate all analytical methods, or they cannot submit chemistry, manufacturing and controls (CMC) documentation, an investigational new drug (IND) application, a new drug application (NDA) or any other request to file and market pharma/biopharma products. However, complete AMV has never really been accepted, respected or adopted by the major group of scientists who publish scientific articles – the group commonly known as academics.

While industry scientists toil over replicate experiment after replicate experiment, academics bear no such cross. They simply need to convince the journal editors, peer reviewers and funding agencies that their work is analytically valid and reproducible. In our current system, that burden has been passed on to the journal editors and peer reviewers who determine whether a given manuscript is ready to be published. And if these gatekeepers of the scientific literature also fail to practice, understand or apply the principles of true AMV, then their reviews will be useless or worse.

Better gatekeepers or better gates

We clearly need gatekeepers who understand the science being presented, as well as the method validation requirements that must be met before any manuscript can be accepted for publication. Editors must also take responsibility in all of this by requiring, before any kind of peer review, that all manuscripts demonstrate full and complete method validation data, to the standard required of pharma/biopharma submissions to the FDA/ICH and most other regulatory agencies. Why should journals expect any less of their publications than the regulatory agencies do? Perhaps journal editors are afraid to demand such a fundamental requirement of all submissions because they fear not having enough acceptable papers for the next issue...
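For concreteness, ICH Q2(R1) names the characteristics that a complete validation addresses (which ones apply depends on the method type). The sketch below is a hypothetical screening helper of my own invention, in Python, showing how little effort it would take an editorial office to flag submissions with missing AMV evidence:

```python
# Validation characteristics named in ICH Q2(R1); which ones apply
# depends on the method type (identification, impurity test, assay).
ICH_Q2_CHARACTERISTICS = [
    "specificity",
    "linearity",
    "range",
    "accuracy",
    "precision (repeatability)",
    "precision (intermediate precision)",
    "detection limit",
    "quantitation limit",
    "robustness",
]

def missing_validation_evidence(reported):
    """Return the ICH Q2 characteristics for which no data were reported."""
    return [c for c in ICH_Q2_CHARACTERISTICS if c not in set(reported)]

# Hypothetical manuscript that reports only linearity and repeatability:
gaps = missing_validation_evidence(["linearity", "precision (repeatability)"])
print("No evidence supplied for:", "; ".join(gaps))
```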

There could be a more conspiratorial flaw in the peer review process. If all reviewers simply accept manuscripts without any real validation data, their own submissions, in turn, are less likely to be held to that standard. Let’s hope that the entire system is not so rotten. But it would be very interesting to know how many publications (in any area of science) containing analytical data are accepted without evidence of true and (perhaps) complete method validation. It would certainly account for the apparent lack of reproducibility in so many different areas of science today.

I’ve asked many questions. And now you are most likely thinking: “OK, Ira – you’ve made your point – but how do we rectify the problem?” Rectification comes through due diligence from everyone involved, backed by QA/QC procedures that provide this assurance. Journals must establish required guidelines for all future submissions. To a large extent, both Nature and Science now have such guidelines in place – better late than never (12). These guidelines are designed to ensure that everything needed to reproduce the work is present and that sufficient AMV studies are indicated and verified.

However, if authors are not made to abide by these guidelines, we cannot move on from the present impasse. Editors and peer reviewers must therefore enforce them: if the prerequisite AMV material is not contained within the manuscript, the paper should be rejected outright or accepted only pending revisions that fully meet the guidelines. If the authors then fail to provide the required information, the manuscript must be rejected. ‘Guidelines’ is perhaps the wrong word to use with academics, as it may imply some degree of freedom – ‘mandatory rules’ may be better. In any case, it should clearly be the responsibility of the editors and (especially) the reviewers to ensure suitable and adequate AMV for all accepted manuscripts.

We can do better

We find ourselves at an unprecedented point in the history of scientific publishing, and of science itself: the majority of papers in certain areas cannot be easily reproduced. We have arrived at this terrible juncture because we have been far too lax about what was – and is – required to publish in reputable journals, especially regarding AMV. And though journals may guard the gates, academic institutions and the academics within them have a big role to play. I believe mandatory undergraduate and graduate courses in AMV would make a difference – and, at the very least, mentors and advisors should coach best practice in AMV and expect no less. Funding agencies should not take a back seat either; they should deny future funding to researchers who refuse to perform or report AMV in their papers.

I look forward to a future where peer reviewers assume responsibility for rejecting manuscripts that show a general lack of AMV; where students no longer gain an advanced degree without knowing a great deal about AMV and how to apply it in the real world; and where scientists and their students take AMV seriously, thereby avoiding the publication of papers based on work that was never demonstrated to be reproducible in the first place.

Finally, we analytical scientists should be setting the very best example. If we don’t take AMV seriously, how can we expect scientists in other disciplines to do so? Don’t be afraid to offer guidance when a collaborative project you are involved in is going ‘off the rails’ – other members of the team may not be as well versed in the need for AMV. And don’t be afraid to stand up and decry research or publications that fail to meet even the basic requirements for reproducibility. The whole of science is at stake.

Ira S Krull is Professor Emeritus, Department of Chemistry and Chemical Biology, Northeastern University, Boston, USA.


  1. SA Bustin, “The reproducibility of biomedical research: sleepers awake”, Biomol Detect Quantif, 2, 35–42 (2014).
  2. SA Bustin, T Nolan, “Improving the reliability of peer-reviewed publications: we are all in it together”, Biomol Detect Quantif, 7, A1–A5 (2016).
  3. L Nassi-Calo, “Reproducibility in research results: the challenges of attributing reliability”. Available at: bit.ly/2jTTskr (Accessed 6 February 2017).
  4. DB Allison et al., “Reproducibility: a tragedy of errors”, Nature (Comment), 530, 27–29 (2016).
  5. RM Johnson, “Data integrity: getting back to basics”, PharmTech, 39, 12 (2015).
  6. Editorial, “Enhancing reproducibility”, Nature Methods, 10, 367 (2013).
  7. “Reporting Checklist for Life Sciences Articles”, Nature (May 2013).
  8. NS Blow, “A simple question of reproducibility”, BioTechniques, 56, 8 (2014).
  9. JM Perkel, “The antibody challenge”, BioTechniques, 56, 111–114 (2014).
  10. M Rosenblatt, “An incentive-based approach for improving data reproducibility”, Sci Transl Med, 8, 336 (2016).
  11. D Laframboise, “Peer Review: Why Skepticism is Essential”, GWPF Report 20, The Global Warming Policy Foundation (2016). Available at: bit.ly/2lfKRKH (Accessed 6 February 2017).
  12. “Reporting Checklist for Life Sciences Articles”, Nature (May 2016). Available at: go.nature.com/2kiva78 (Accessed 6 February 2017).

