The Analytical Scientist / Issues / 2026 / May / Bridging the Gap Between Untargeted and Quantitative MS
Omics | Metabolomics & Lipidomics | Data and AI

Bridging the Gap Between Untargeted and Quantitative MS

Gary Siuzdak explains how his uMRM workflow could remove one of metabolomics’ biggest bottlenecks

05/06/2026 4 min read

Multiple reaction monitoring (MRM) remains the gold standard for quantitative mass spectrometry – but it comes with a cost: predefined targets, extensive optimization, and reliance on authentic standards. Meanwhile, untargeted LC-MS/MS offers breadth but lacks straightforward routes to robust quantitation. The gap between the two has limited scalability, reproducibility, and cross-laboratory consistency in metabolomics and lipidomics.

In a recent study, Gary Siuzdak and colleagues introduce untargeted multiple reaction monitoring (uMRM), a data-driven framework that converts untargeted MS/MS datasets into optimized MRM transitions. Using pooled-sample acquisition, automated filtering of in-source fragments, and spline-based modeling of fragmentation behavior, the method generates quantitative assays that can be deployed across triple-quadrupole platforms – without compound-specific optimization. Benchmarking across seven instruments showed strong agreement with experimentally optimized methods, highlighting the approach’s transferability.
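The spline-based step can be illustrated with a short, hypothetical sketch (this is not the authors' published code): for each candidate fragment ion, fit a smoothing spline to measured intensity versus collision energy from pooled-sample MS/MS runs, then rank fragments by their modeled peak intensity to nominate quantifier and qualifier transitions. The function name, data layout, and smoothing settings below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def rank_transitions(fragments, n_qualifiers=2):
    """Model each fragment's intensity-vs-collision-energy profile with a
    smoothing spline and rank fragments by their modeled peak intensity.

    fragments: dict mapping fragment m/z -> (collision_energies, intensities),
    both array-like, measured from pooled-sample MS/MS acquisitions.
    Returns (quantifier, qualifiers), each entry being (fragment_mz, optimal_CE).
    """
    ranked = []
    for frag_mz, (ce, inten) in fragments.items():
        ce, inten = np.asarray(ce, float), np.asarray(inten, float)
        # Cubic smoothing spline over the empirical CE profile.
        spline = UnivariateSpline(ce, inten, k=3, s=len(ce))
        grid = np.linspace(ce.min(), ce.max(), 200)  # dense CE grid
        modeled = spline(grid)
        best = int(np.argmax(modeled))
        ranked.append((modeled[best], frag_mz, grid[best]))
    ranked.sort(reverse=True)
    # The most intense modeled fragment becomes the quantifier transition;
    # the next few serve as qualifier transitions.
    quantifier = (ranked[0][1], round(ranked[0][2], 1))
    qualifiers = [(mz, round(ce, 1)) for _, mz, ce in ranked[1:1 + n_qualifiers]]
    return quantifier, qualifiers
```

In a real workflow the ranking would also weight selectivity and reproducibility across runs, not raw intensity alone; this sketch only shows the shape of the spline-and-rank idea.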

Here, Siuzdak – Professor and Director of the Center for Metabolomics and Mass Spectrometry at Scripps Research – explains how uMRM provides a practical path toward scalable and transferable quantitative mass spectrometry.

 

Gary Siuzdak

What inspired you to develop uMRM?

One of the main motivations behind our 2025 MRM and 2026 uMRM Analytical Chemistry papers was seeing how genomics achieved continuity across laboratories in a way that small-molecule science still struggles to achieve. In metabolomics and lipidomics, methods are often difficult to transfer, results can vary from lab to lab, and scaling quantitative measurements remains challenging. My time with the NIH Metabolomics Consortium reinforced that this was not just a technical inconvenience, but a broader need across the field.

This effort also has a long history. Guowang Xu’s 2013 and 2015 Analytical Chemistry papers were important early steps toward linking untargeted discovery with targeted quantitative analysis, and our 2018 Nature Methods MS/MS to MRM generation paper represented another advance in that direction. Together, those studies helped define both the promise and the limitations of the approach.

A particularly important catalyst was our collaboration with Rob Plumb at Waters, which helped clarify the practical requirements needed to make such a workflow viable across instruments. uMRM grew out of that broader perspective: to make the path from discovery to quantitation more systematic, more transferable, and more reproducible. A particularly useful practical advantage is that it converts very large discovery-style datasets into much lighter quantitative outputs that are independent of instrument manufacturer, making it easier to integrate data across platforms and laboratories.

Can you describe the challenges researchers currently face around obtaining standards, and how this shapes or constrains targeted analysis today?

Authentic standards are expensive, often unavailable, and difficult to obtain at scale. As a result, many biologically relevant molecules never make it into targeted assays, not because they lack importance, but because the experimental burden is too high. In practice, this constrains targeted analysis to a relatively small and somewhat biased portion of chemistry. It also contributes to poor continuity across laboratories, because different groups often end up measuring different molecules in different ways.

Your workflow includes an AI refinement layer. At a high level, how did AI support the development process, and what did it enable that traditional modeling alone couldn’t?

AI helped us move from observing fragmentation behavior to systematically learning from it at scale. Rather than relying on predefined rules or manual optimization, we modeled how fragment ion intensities change as a function of collision energy across empirical datasets. This allowed us to prioritize transitions that are not only theoretically plausible, but consistently informative in real experimental conditions.

In that sense, the AI layer is tightly coupled to the experimental design. It builds directly on measured MS/MS data, using spline-based representations of fragmentation behavior to identify optimal quantifier and qualifier ions. Traditional approaches can capture aspects of this behavior, but they tend to break down across chemically diverse molecules. AI made it possible to generalize these patterns while remaining grounded in empirical data.

Importantly, this was not about replacing physical principles, but about scaling them. The goal was to preserve the underlying chemistry while making the system more adaptive, data-driven, and transferable across instruments and laboratories.

It was also very much a team effort, and Winnie Uritboonthai, Aries Aisporna, Linh Hoang, and Elizabeth Billings were all important in helping develop, refine, and implement the workflow in practical terms.

Large language models also played a role in refining the algorithms. How did you incorporate LLMs into the development process, and what did you learn from that experience?

The role of large language models was primarily in development rather than chemical interpretation. They helped accelerate code generation, test alternative implementations, and refine the workflow design, allowing us to move through algorithmic approaches more efficiently.

They were particularly useful for translating conceptual ideas into working code and refining the structure of a fairly complex computational pipeline. While they did not contribute directly to understanding fragmentation chemistry, they improved development speed and flexibility.

What we learned is that LLMs can be effective development partners when guided by domain expertise and validated against experimental data. They complement, rather than replace, scientific understanding.

Where do you see uMRM having the most immediate impact?

Large-cohort studies are an obvious area, because they demand quantitative performance at scale and benefit from better cross-laboratory consistency. Preclinical pharmacology is another strong use case, especially when researchers want to move quickly from discovery into measurement.

Another major advantage is the retrospective value of the approach. Because uMRM is derived from broader discovery-style information, it gives researchers a much better opportunity to return to datasets and interrogate them after the fact, rather than being limited only to what was predefined at the start. It also produces lighter, more portable outputs than full untargeted datasets and reduces dependence on spectral matching during downstream deployment, which makes data easier to share, combine, and integrate across multiple sources.

That translational potential has also become clearer through interactions with collaborators such as Julijana Ivanisevic, Chelsea C. Cates-Gatto, Amanda J. Roberts, Anna Popova, and James R. Williamson, whose work reflects the need for scalable and biologically meaningful quantitative follow-up.

In your view, what will be most important to ensure broad adoption?

The method has to be practical, transferable, and easy to implement across platforms. Just as important, it must be built on rigorous molecular identification and careful data processing. Scaling quantitative workflows only works if the underlying signals correspond to real molecules; otherwise, errors are simply propagated.

That makes high-quality reference data, such as METLIN, and robust filtering strategies, including in-source fragmentation correction, essential. Adoption will ultimately depend on both usability and trust, along with keeping workflows simple, reducing data complexity, and avoiding unnecessary steps when the goal is reliable quantitative deployment.
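The in-source fragmentation correction mentioned above can be sketched in simplified form (a hypothetical illustration, not the published implementation): a feature that co-elutes with a higher-m/z feature and whose chromatographic profile correlates almost perfectly with it is likely an in-source fragment rather than an independent molecule. The data layout and thresholds below are assumptions.

```python
import numpy as np

def flag_in_source_fragments(features, rt_tol=0.05, corr_min=0.9):
    """Flag features that are likely in-source fragments.

    features: list of dicts with 'mz', 'rt' (apex retention time, minutes),
    and 'eic' (aligned extracted-ion-chromatogram intensity array).
    Returns the set of indices flagged as probable in-source fragments.
    """
    flagged = set()
    for i, f in enumerate(features):
        for j, g in enumerate(features):
            if g["mz"] <= f["mz"]:
                continue  # only a larger co-eluting ion can be the parent
            if abs(f["rt"] - g["rt"]) > rt_tol:
                continue  # different elution apex -> different molecule
            # Near-perfect chromatographic correlation with a co-eluting
            # larger ion suggests an in-source fragment, not a metabolite.
            r = np.corrcoef(f["eic"], g["eic"])[0, 1]
            if r >= corr_min:
                flagged.add(i)
    return flagged
```

Removing such features before transition generation keeps the quantitative assay from targeting artifacts, which is the point made above: scaling only helps if the underlying signals correspond to real molecules.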

What limitations or open questions remain, and which ones are you most eager to address next?

A major limitation is that computation can take us only so far; for the most challenging molecules, experimental validation is still essential. Quantitative workflows are also only as strong as the annotations and filtering behind them. If the identifications are wrong, or artifacts are left in, scaling the analysis does not solve the underlying problem. What we are most excited about next is broadening chemical coverage while making identification, filtering, and quantitation more reliable across more molecule classes and more laboratories.

Stepping back, how do you see AI-enabled workflows influencing mass spectrometry-based analysis more broadly over the coming years?

I think AI will increasingly help researchers manage complexity rather than simply automate existing workflows. Mass spectrometry produces enormous, information-rich datasets, and AI is well suited to extracting structure, prioritizing signals, improving annotation, and guiding assay development. Its greatest impact will come when it is integrated with strong empirical data rather than treated as a black box.

Finally, what direction do you see this work taking as it evolves?

I see uMRM moving toward a more seamless continuum between discovery and quantitation. Ideally, researchers should be able to identify signals in untargeted experiments and rapidly convert them into quantitative assays with far less manual effort. The broader direction is to make quantitative mass spectrometry more scalable, more accessible, and less dependent on the traditional bottlenecks that have limited the field for years. Part of that future is also making quantitative outputs lighter, more portable, and less tied to any single vendor ecosystem, so that data generated across instruments, laboratories, and time can be more easily integrated. More personally, one direction that especially interests me is the eventual identification of the biological activities of specific metabolites. Looking ahead, we are also exploring more autonomous, adaptive workflows that further reduce the gap between data generation and quantitative assay development.



Copyright © 2026 Texere Publishing Limited (trading as Conexiant), with registered number 08113419 whose registered office is at Booths No. 1, Booths Park, Chelford Road, Knutsford, England, WA16 8GS.