How to Avoid a Third “AI Winter” 

Bob Pirok is cautiously optimistic about the future of AI-enabled analytical workflows – but analytical science has twice previously fallen into the trap of hype-driven AI development…  

By Henry Thomas | 11/12/2025 | 8 min read



Artificial intelligence promises to reshape analytical science – from peak detection and retention-time alignment to automated method development. But Bob Pirok, Assistant Professor at the University of Amsterdam, urges caution. The field has seen excitement outpace substance before – in the eras of expert systems and early chemometrics – and he warns that the same could happen again if today’s enthusiasm for generative AI overshadows the need for scientific rigor and transparent validation.  

Drawing on lessons discussed in his recent book, Analytical Separation Science (co-authored with Peter Schoenmakers), Pirok argues that lasting progress depends on understanding the statistical and chromatographic foundations that underpin machine learning. In this interview, he explores how AI can make analytical workflows more efficient and reproducible – and what must be done to prevent a third “AI winter.” 

When we talk about “AI” in an analytical science context, what are we actually referring to? 

To answer this question, it’s helpful to first define “AI” and “machine learning.” Broadly speaking, artificial intelligence (AI) refers to the field of study aimed at developing systems that can perform tasks typically requiring human intelligence. In the 1980s and 1990s, machine learning (ML) began to establish itself as a key subfield of AI as the field migrated from its earlier focus on symbolic reasoning and rule-based expert systems. Where AI had previously relied on explicitly programmed logic, machine learning introduced data-driven statistical models able to learn patterns and make predictions from empirical data.

In analytical science, when we refer to AI, we’re typically talking about ML and statistical modeling approaches that can identify patterns in complex datasets, support classification or regression tasks, and assist in decision-making. This is quite different from the generative AI tools that dominate current headlines. Large language models (LLMs) such as ChatGPT and Copilot have brought AI into the public eye, but they often create the impression that AI is omnipotent, or even autonomous in its capabilities.  

In analytical chemistry, by contrast, we deal with structured, quantitative, and domain-specific problems. These require careful calibration, validation, and physical interpretability – demands generative AI is not designed to meet. So while the public may associate AI with creativity and fluent language, in our field it’s more about robustness, reproducibility, and control.  

How long has AI been around in analytical science? Have there been any “false dawns”?  

AI has a surprisingly long history in analytical science. Rule-based expert systems appeared in the 1980s, and by the 1990s, machine learning and multivariate statistical techniques had become central to chemometrics, supporting applications such as spectral deconvolution, pattern recognition, and predictive modeling. Techniques like principal component analysis (PCA) and partial least squares (PLS) aren’t machine learning in the strictest sense, but they are often used in ML workflows, forming part of the broader statistical modeling toolbox of analytical science.

To understand the notion of “false dawns,” it helps to first take a wider look at the history of AI. The term was coined in the 1950s, with early optimism about its potential spurred by demonstrations such as early chess-playing programs. However, progress soon ran into computational and conceptual roadblocks, and veteran researchers like Marvin Minsky and Roger Schank warned, especially in the 1980s, that excessive hype could trigger a backlash.

They forecast a chain reaction: initial disillusionment in the research community would be followed by media skepticism, then funding cuts, and thus an overall decline in research momentum. These predictions proved accurate. The field went through at least two major “AI winters” – one in the mid-1970s and another in the late 1980s to early 1990s – marked by reduced interest and investment. Today, some researchers worry that, if expectations are not managed and real-world limitations are overlooked, the hype surrounding generative AI could lead to a third.

We've experienced parallel moments of inflated expectations in analytical science, particularly when promising algorithms are applied to noisy or poorly curated data. Often, the infrastructure to support robust, validated use of AI has lagged behind. That’s why it's essential – now more than ever – to stay grounded in scientific rigor, transparency, and domain-specific validation, to avoid repeating history.  

Could you give us a general introduction to the ways AI is being applied in analytical science today? 

AI is increasingly being used to automate and enhance decision making at various stages of the archetypal analytical workflow. In chromatography and spectrometry, this includes tasks like peak detection, deconvolution, background correction, retention-time alignment, and peak tracking. More advanced applications involve predicting retention or spectral features, suggesting optimal gradient profiles, and even guiding method development through optimization frameworks.  

That said, many of these methods still rely heavily on supervised learning, which depends on clean, labeled, and realistic data – and such data are not always readily available in analytical laboratories. This data dependency remains one of the key bottlenecks in scaling up AI applications.

Curiously, when Tijmen Bos and I taught our short Introduction to AI in Chromatography course at HPLC2025 in Bruges, we asked participants why they were interested in AI and what they hoped it would deliver. The answer was surprisingly consistent: efficiency. There’s a growing expectation that AI tools will help us extract more insights from our data and generate better data from our instruments. That’s ultimately the goal: to make better use of the systems and information we already have.

What motivated you to write your book Analytical Separation Science? 

The book, written together with Peter Schoenmakers, was born from years of teaching, and the realization that students – and even experienced analysts – struggled to find accessible, integrated resources that brought theory, practice, and modern techniques together. While AI wasn’t the sole motivator, it certainly shaped our thinking. One major barrier to progress in AI is that people often use it without understanding what goes on under the hood. We wanted to provide a solid foundation in chromatography and data analysis, including a large chapter on chemometrics and statistics, to ensure readers are able to critically engage with machine learning tools rather than treating them as black boxes. We also launched an interactive website (ass-ets.org) alongside the book’s release so students can work with real data and test algorithms themselves. 

Could you speak about the ways AI is currently being used – or could be used – to support method development?  

One of the most promising areas for AI is in method development, which traditionally requires deep expertise, trial and error, and many hours of experimentation. There are several research lines in this field, including significant efforts by groups in our community to predict chromatographic selectivity and retention behavior by linking molecular descriptors, often drawn from cheminformatics databases, with experimentally measured retention times. This approach underpins quantitative structure-retention relationship (QSRR) modeling.  
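
To make the QSRR idea concrete, here is a minimal sketch in Python: a regression model is fitted to map molecular descriptors onto retention times. All values are synthetic placeholders, and the descriptor names in the comments are illustrative assumptions, not data or features from any published QSRR study.

```python
# Minimal QSRR sketch: learn a mapping from molecular descriptors to
# retention times. Descriptors and retention times below are synthetic
# stand-ins for values drawn from cheminformatics databases and experiments.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical descriptor matrix: rows = compounds, columns = descriptors
# (imagine logP, molar mass, polar surface area, H-bond donor count).
X = rng.normal(size=(60, 4))

# Synthetic retention times: a smooth function of the descriptors plus noise,
# standing in for experimentally measured values.
t_R = 5 + 2.0 * X[:, 0] + 0.8 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=60)

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Cross-validated error gives a first impression of predictive power; a real
# QSRR study would also validate on held-out compound classes.
scores = cross_val_score(model, X, t_R, cv=5,
                         scoring="neg_root_mean_squared_error")
print(f"CV RMSE: {-scores.mean():.2f} min")
```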

Another significant line is the automation of method development through tools such as reinforcement learning and Bayesian optimization. Groups have demonstrated the capability to automate parts of the method-development process. The goal isn’t to replace the human expert, but rather to give them a decision-support toolkit that accelerates development and reduces redundant work. My group in Amsterdam began exploring this because we noted that state-of-the-art high-resolution separation technology such as 2D-LC is still slow to make its way into analytical labs due to the complexity of method development. The idea is for AI to help experts employ it sooner, so that these state-of-the-art separation systems become accessible to a larger portion of society.
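
A minimal sketch of the Bayesian-optimization idea follows, assuming a single tunable parameter (gradient time) and a toy scoring function that stands in for running a real or simulated separation; it is not a reconstruction of any specific published workflow. A Gaussian-process surrogate proposes each next experiment via expected improvement.

```python
# Toy Bayesian-optimization loop for method development: a Gaussian process
# models method quality as a function of one parameter, and each iteration
# picks the next "experiment" by expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def method_quality(gradient_time):
    # Placeholder for evaluating a chromatographic response function on a
    # real or simulated run; a smooth curve with an optimum near 18 min.
    return float(np.exp(-((gradient_time - 18.0) / 6.0) ** 2))

candidates = np.linspace(5, 60, 200).reshape(-1, 1)   # gradient times (min)
X = np.array([[5.0], [30.0], [60.0]])                 # initial experiments
y = np.array([method_quality(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                              normalize_y=True)
for _ in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Expected improvement: favor candidates likely to beat the current best.
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, method_quality(x_next[0]))

print(f"Best gradient time found: {X[np.argmax(y)][0]:.1f} min")
```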

What are the main barriers to progress in this area?  

Computational sciences and informatics communities have already developed powerful machine learning tools. The barriers truly lie in our own chromatographic community.   

One significant bottleneck is that many machine learning methods require large quantities of high-quality data. This is difficult for us chromatographers for several reasons: experiments are expensive, wasteful, and time-consuming. Moreover, every analytical chemist knows that it takes skill and effort to maintain consistent chromatographic performance: pump seals leak, columns bleed, mass spectrometers respond strongly to slight changes in the chromatographic separation, and so on. If our main objective is to make method development better and more efficient, these challenges must be addressed. With this in mind, our own group has developed simulators that model separations based on the experimental data already fed into the machine learning method. By applying the chromatographic theory our community has developed over decades, we can train our ML tools on realistic data, significantly reducing experimental data consumption.
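
As a heavily simplified illustration of the simulator idea, the sketch below generates synthetic chromatograms (Gaussian peaks on a drifting baseline with noise) whose ground truth is known by construction, so they can serve as training data. The group’s actual simulators embed far more chromatographic theory than this toy generator.

```python
# Simplified chromatogram simulator: Gaussian peaks, baseline drift, and
# noise, with ground-truth peak positions returned alongside the signal.
import numpy as np

def simulate_chromatogram(n_peaks=5, n_points=2000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    t = np.linspace(0, 20, n_points)            # time axis (min)
    positions = rng.uniform(1, 19, n_peaks)     # ground-truth retention times
    heights = rng.uniform(0.1, 1.0, n_peaks)
    widths = rng.uniform(0.03, 0.12, n_peaks)   # peak standard deviations (min)
    signal = np.zeros_like(t)
    for pos, h, w in zip(positions, heights, widths):
        signal += h * np.exp(-0.5 * ((t - pos) / w) ** 2)
    baseline = 0.02 * t                         # linear drift
    noise = rng.normal(scale=0.005, size=n_points)
    return t, signal + baseline + noise, np.sort(positions)

t, y, true_positions = simulate_chromatogram(rng=np.random.default_rng(1))
print(f"Simulated peaks at {true_positions.round(2)} min")
```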

A second challenge relates to the well-known “garbage in, garbage out” paradigm. As a community, we have hundreds of different signal-processing methods at our disposal. There are many strategies to suppress noise, remove background, and detect peaks – but which method works best in which scenario? We already know that no single method is best; performance depends on the properties of the LC(-MS) signal. But how significant is the error? Ultimately, machine learning models depend on the data we feed them, so any errors in the data-processing workflow translate directly into the reliability of the model. This is again a case where a data simulator is useful: as long as it captures realistic features of the signal, it allows us to determine the errors that peak detection methods make in various scenarios.
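
The sketch below shows what such ground-truth benchmarking can look like under simplified assumptions: a generic detector (SciPy’s find_peaks, chosen purely for illustration, not because it is a method the group uses) is run on a simulated chromatogram and scored against the known peak positions.

```python
# Ground-truth benchmarking sketch: detect peaks in a simulated chromatogram
# and score recall and false positives against the known peak positions.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
t = np.linspace(0, 20, 2000)
true_pos = np.sort(rng.uniform(1, 19, 6))       # known ground truth
y = sum(0.5 * np.exp(-0.5 * ((t - p) / 0.05) ** 2) for p in true_pos)
y = y + rng.normal(scale=0.01, size=t.size)     # detector noise

idx, _ = find_peaks(y, height=0.1, distance=20)
detected = t[idx]

# Count a true peak as found if a detection lies within 0.1 min of it.
tolerance = 0.1
found = sum(bool(np.any(np.abs(detected - p) < tolerance)) for p in true_pos)
print(f"Recall: {found}/{len(true_pos)}, false positives: {len(detected) - found}")
```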

The final barrier worth highlighting comes close to something every separation scientist must answer every day: what is the actual purpose of our method? When is the chromatogram good? This may seem like a no-brainer, but it is a question whose proper answer still eludes our community. Why is this important? Well, for any AI tool to help you with method development, you must first equip it with a mathematical expression of your desired outcome. We call this a chromatographic response function. Decades of research by many groups attest that this is a tough nut to crack. Simply targeting the maximum number of separated peaks, for example, may cause the AI to promote peak splitting by solvent mismatch. To make things worse, each laboratory naturally has different desired outcomes: some scientists need to determine the purity of a single compound, whereas others must characterize highly complex samples. For each of these cases, we must solve this puzzle.
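
To show what a chromatographic response function can look like, here is one illustrative and deliberately simple formulation: score a separation by its worst-case pair resolution, with a penalty for long analysis times. This particular expression and its weights are assumptions made for illustration; as argued above, each laboratory’s real objective will differ.

```python
# One possible chromatographic response function (CRF): reward the worst
# adjacent-pair resolution, penalize analysis time. Weights are arbitrary.
import numpy as np

def crf(retention_times, peak_widths, t_max=30.0, time_weight=0.05):
    order = np.argsort(retention_times)
    t = np.asarray(retention_times, dtype=float)[order]
    w = np.asarray(peak_widths, dtype=float)[order]
    # Resolution of each adjacent pair (USP-style, widths at base):
    # Rs = 2 * (t2 - t1) / (w1 + w2)
    rs = 2 * np.diff(t) / (w[:-1] + w[1:])
    # Score: worst pair resolution minus a penalty for total run time.
    return rs.min() - time_weight * (t[-1] / t_max)

print(f"CRF score: {crf([2.1, 5.0, 5.6, 9.8], [0.20, 0.25, 0.25, 0.30]):.2f}")
```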

Are there any recent AI advances that have you especially excited?  

The development of realistic, simulation-based benchmarking frameworks for evaluating signal processing algorithms is a game-changer. We now have the ability to test peak detection or alignment methods against synthetic datasets that behave like real experimental data, but with the added benefit of knowing the ground truth. This enables a far more rigorous comparison of algorithms than we’ve had in the past. I’m also encouraged by the growing use of probabilistic alignment and template-based tracking in 2D-LC and LC-MS, which help overcome issues like modulation drift and co-elution.  

What kind of real-world impact do you think AI could have on analytical science in the near term?  

In the near term, AI could dramatically improve efficiency, reproducibility, and data quality in analytical labs. For example, faster and more accurate peak detection means fewer errors in quantification. Smarter optimization algorithms mean we can develop robust methods in less time, using fewer samples and reagents. Finally, integrated ML tools can help analysts focus on interpretation and decision-making, rather than repetitive data processing. These gains are especially valuable in high-throughput environments, or where expertise is limited. As a community, we have yet to employ 2D-LC methods on a routine basis, despite groups showing that it can separate highly complex samples in under 30 (or even 15) minutes. The technique was invented in the previous century – before I was born!

Instead of developing 3D-LC-MS/MS, we should first help society to benefit from what has already been developed.  

Do you see the role of analytical scientists evolving due to AI?  

Yes, and I think it’s a positive shift. The role of the analytical scientist is moving from purely manual execution toward method development and strategic oversight. The next-generation analytical scientist will be able to use more sophisticated technology in resource-restricted environments. AI won’t replace analytical expertise, but it will reshape how we apply that expertise, with more emphasis on selecting and validating tools, interpreting complex outputs, and designing meaningful experiments.

What is your overall perspective or vision on the potential impact of AI on analytical science over the next 5-10 years?  

I’m cautiously optimistic. I think AI will become an integral part of the analytical workflow, much as autosamplers and diode array detectors have in the past. But for that to happen, we need to stay grounded in scientific principles. What I’m not a fan of is a growing trend I’ve noticed, where manufacturers, industrial labs, and scientists alike feel almost forced to present something with AI simply to affirm that they’re on top of the game. I strongly disagree with that.

We need transparent models, validated tools, and community standards for benchmarking. If we build AI on a solid foundation of domain knowledge, it can help make analytical science more accessible, reproducible, and scalable. If we don’t, we risk falling into the trap of hype-driven development, and possibly another “AI winter.” The AI community has shown us that this is a real risk.  


About the Author(s)

Henry Thomas

Deputy Editor of The Analytical Scientist
