The Analytical Scientist / Issues / 2026 / March / Vibe Coding Comes to Omics
Data and AI | Proteomics | News and Research | Technology

Vibe Coding Comes to Omics

How Jesse Meyer built a fully functional proteomics data analysis application in under 10 minutes, using four prompts, no handwritten code, and under $2

By James Strachan | 03/25/2026 | 10 min read


Many users of generative artificial intelligence (AI) tools are discovering that they can now build software simply by describing what they want in natural language. This approach, known as “vibe coding” – Collins Dictionary’s 2025 word of the year – promises to let users with little formal programming experience create working applications by iteratively prompting an AI.

Jesse Meyer, Assistant Professor of Computational Biomedicine at Cedars-Sinai, USA, started paying attention to emerging vibe coding platforms in 2025 – testing these tools in his personal time.

He decided to see whether he could build a full-stack web application to address a common challenge in science: keeping up with the rapidly expanding literature. Although he had no prior formal training in full-stack web development, vibe coding enabled him to build a fully functioning platform, Rescoop.xyz, that uses AI to discover and summarize relevant papers based on user-defined search terms.

“This was an eye-opening experience because it demonstrated how quickly functional software could be produced through iterative interaction with an AI agent,” he says. 

Around the same time, he became interested in whether similar techniques could be used to support alternative models for agentic AI-based preprint review and scientific curation. Through a series of short experiments, he built a prototype called PeerAI.app, which uses AI agents to review, score, and rank preprints. 

“Taken together, these experiences made it clear that the barrier to building functional, task-specific scientific software had dropped dramatically,” he says. 

These observations then led Meyer to explore how vibe coding could be incorporated into his group’s research workflows; for example, by allowing researchers to rapidly prototype software. When a postdoctoral researcher asked whether such a system could generate a proteomics analysis interface, he tested this hypothesis in a live demonstration.

“Within minutes, the system produced a working analysis interface that would traditionally require substantial manual development,” he says. That result motivated Meyer to publish a paper demonstrating how he used vibe coding to create a fully functional proteomics data analysis platform.

“We are at a genuine inflection point that warrants documentation in the scientific literature,” he says. “The paper is not focused on a single application but rather on the broader shift in what is now technically possible and the implications for how analytical software may be developed in the future.” 

We spoke with Meyer to find out more about the project and to explore his thoughts on the future of vibe coding for researchers, the potential risks involved, and his top tips for those who’d like to give vibe coding a try.

Before this project, how would you describe your own coding experience?

I would describe my coding background as long-standing and research-driven. I began programming in R as an undergraduate and have used it continuously as a core research tool. By the mid-2010s, I was already publishing computationally focused papers that relied heavily on custom data analysis and modeling, and over time, I expanded into Python for more general-purpose analysis and machine learning workflows. I also have some experience with compiled languages such as Java, although most of my work has been in analytical and scientific computing environments.

In recent years, I have spent substantial time experimenting with AI-assisted coding tools, which likely made me more comfortable trying vibe coding approaches. That said, having fluency in reading and reasoning about code remains important. In one of these projects, for example, I ended up debugging TypeScript code that I had never worked with before, but prior experience made it easier to recognize common structures such as classes, methods, and control flow, and to infer where issues were likely arising.

At the same time, I expect that this kind of cross-language fluency will become less of a bottleneck as models continue to improve. The direction seems clear: the limiting factor is shifting away from writing code by hand and toward clearly specifying intent, designing appropriate tests, and verifying behavior, rather than mastering the syntax of any single programming language.

Did any of your findings surprise you?

Yes – both the time and effort required were surprising. Shortly before this work, our group had completed building PSCS, a single-cell omics data analysis platform that required roughly two years of focused effort and substantial iteration to reach a publishable, validated state. That level of investment was entirely appropriate for a production-grade scientific platform.

What surprised me was how dramatically the prototyping phase had compressed. Using vibe coding tools, I generated a functional analysis interface in minutes that, although lacking many of the features and validation of PSCS, would typically have required weeks or months of manual development to reach an initial working state. The contrast highlighted how much faster exploratory software development and hypothesis testing can now be, even though careful engineering and validation remain essential for mature tools.

There’s already a mature ecosystem of proteomics tools. Where do you see vibe coding adding something genuinely new?

Proteomics already has an exceptionally strong ecosystem of mature, well-validated tools, and I do not see vibe coding as a replacement for those platforms. One area where it adds something genuinely new is in enabling highly task-specific, short-lived software that would never have been built otherwise.

The perspective recently articulated by Andrej Karpathy – who coined the term “vibe coding” – is helpful here. Software is becoming increasingly ephemeral in the sense that it can be created on demand to answer a very specific question, used briefly, and then discarded. That changes the economics of tool building. Instead of deciding whether a problem is “worth” months of engineering effort, researchers can now prototype custom solutions in minutes to support exploration, debugging, or decision-making.

For example, while recently designing a custom autoencoder architecture, I encountered unexpected intermediate behavior that was difficult to explain from static outputs alone. Using an AI coding agent, I generated a small Streamlit application that allowed me to interactively explore those intermediate results. That kind of tool would traditionally have required substantial manual development, and in practice I would likely not have built it at all. Being able to prototype it quickly enabled me to reach a clear go/no-go decision without delaying the project.

One consequence of this shift is that the value proposition of mature software changes. The advantage is no longer merely that a tool exists, but that it is trusted, validated, well-documented, and maintained. Vibe coding lowers the barrier for building functionality, but it does not replace the need for rigor, benchmarking, and community trust. In that sense, the “moat” has not disappeared so much as it has moved away from code itself and toward validation and reliability.

For an analytical scientist without programming training, what kinds of applications are suddenly realistic to build – and what kinds still aren’t?

For scientists without formal programming training, the dividing line is verifiability. Applications are more realistic to build when their behavior can be tested and confirmed through clear, observable outcomes, and less realistic when correct behavior cannot be readily verified.

In practice, this means that tools with well-defined inputs and expected outputs are suddenly very accessible. For the proteomics analysis interface, for example, I relied on synthetic data with known changes so that I could immediately verify whether the system detected the expected changes. In simpler cases, verification might be as straightforward as confirming that a button click sends an email or produces a specific file.
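This verification loop can be sketched in a few lines. The design below is illustrative, not taken from the published platform: it spikes a known 4-fold (2 log2-unit) change into a subset of synthetic proteins and checks that a simple per-protein t-test with a Bonferroni threshold recovers them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_proteins, n_spiked, n_reps = 200, 20, 6

# Synthetic log2 intensities: a shared baseline plus replicate-level noise
base = rng.normal(20, 2, size=n_proteins)
group_a = base[:, None] + rng.normal(0, 0.3, size=(n_proteins, n_reps))
group_b = base[:, None] + rng.normal(0, 0.3, size=(n_proteins, n_reps))
group_b[:n_spiked] += 2.0  # known 4-fold change in the first 20 proteins

# Per-protein two-sample t-test, with a simple Bonferroni threshold
pvals = stats.ttest_ind(group_a, group_b, axis=1).pvalue
detected = set(np.where(pvals < 0.05 / n_proteins)[0])

# Verification: detections should be (almost) exactly the spiked set
recall = len(detected & set(range(n_spiked))) / n_spiked
false_pos = len(detected - set(range(n_spiked)))
```

The same pattern extends naturally to stability checks: rerun the analysis on slightly perturbed inputs and confirm that the detected set barely changes.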

By contrast, applications where correctness is ambiguous or only weakly observable still require deeper scrutiny. If you are not reading the code, you need some alternative form of validation, and in many cases, that means designing explicit tests. Responsible use of vibe coding, therefore, depends less on programming background and more on the ability to define expected behavior, and to verify that the system actually meets those expectations.

What are the biggest risks you’d worry about in vibe-coded analysis tools? And what does responsible vibe coding look like?

The greatest risk with vibe-coded analysis tools is using them without adequate validation. Because these systems can generate working software very quickly, there is a strong temptation to trust outputs simply because the interface appears functional. Without explicit verification, subtle errors can go unnoticed and propagate into downstream analyses.

I encountered this firsthand while developing the aforementioned autoencoder. I used network structure visualizations to assess whether the model was constructed correctly. As I iterated, I would identify what appeared incorrect in the visualization and ask the system to correct it. Eventually, it became clear that the model was modifying the visualization itself to match my feedback, rather than correcting the underlying network generation logic. The output looked better, but the core problem remained unchanged. This experience highlighted that we need not only tests, but also careful attention to how those tests are constructed and what they actually validate. Superficially convincing checks can still be misleading.
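The failure mode described above – a check that inspects the rendering rather than the model – can be made concrete with a toy sketch. All names here are hypothetical, not from Meyer’s autoencoder work:

```python
def build_autoencoder(input_dim, bottleneck):
    """Return encoder/decoder layer widths for a symmetric autoencoder."""
    hidden = max(bottleneck, input_dim // 2)
    return {"encoder": [input_dim, hidden, bottleneck],
            "decoder": [bottleneck, hidden, input_dim]}

def render_summary(model):
    """A 'visualization' that is easy to patch cosmetically
    without touching the underlying network construction."""
    widths = model["encoder"] + model["decoder"][1:]
    return " -> ".join(str(w) for w in widths)

model = build_autoencoder(input_dim=100, bottleneck=8)

# Weak check: inspects the rendering, which an agent could edit in place
assert "8" in render_summary(model)

# Stronger checks: inspect the structure the rendering is supposed to reflect
assert model["encoder"][-1] == model["decoder"][0] == 8
assert model["encoder"][0] == model["decoder"][-1] == 100
```

An agent asked to “fix the visualization” can satisfy the weak check by editing `render_summary` alone; only the structural assertions force a fix to `build_autoencoder` itself.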

Responsible vibe coding therefore starts with deliberate, well-designed testing. Before applying any tool to real or unknown data, there should be clear checks that verify expected behavior. In the proteomics case, that meant using synthetic or benchmark datasets where the correct outcome was known in advance. In other settings, it may involve confirming that specific inputs reliably produce specific outputs, edge cases are handled correctly, and that results are stable under small perturbations.

More broadly, responsible use means treating vibe-coded tools as exploratory instruments rather than authoritative black boxes. They are highly effective for rapid prototyping, hypothesis generation, and decision support – but they still require skepticism, validation, and, when appropriate, targeted code review before they can be trusted for scientific conclusions.

If you were advising a lab that wants to try this tomorrow, what would you recommend as a minimal starting point?

I would start with a very small, well-scoped task where success is easy to verify, and then focus heavily on context. The most common failure mode with vibe coding is not model capability, but underspecification. These systems can only reason over what you give them, so being explicit up front matters enormously.

A good first step is to define the task in terms of concrete inputs, outputs, and verification criteria. For example, if the tool is supposed to read a particular file format, the structure of that file needs to be described explicitly: column names, data types, indexing conventions, missing-value handling, and any relevant metadata. If there is existing documentation that defines those formats or conventions, that documentation should be included directly as context rather than assumed.
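As a sketch of what “describing the file format explicitly” can look like in executable form, the snippet below validates a hypothetical three-column input file; the column names and the strict missing-value policy are illustrative assumptions, not a published interface:

```python
import csv
import io

# Hypothetical schema: column names and types the tool is expected to read
SCHEMA = {
    "protein_id": str,
    "intensity_a": float,
    "intensity_b": float,
}

def validate_rows(text):
    """Check the header and cell types; return parsed rows or raise ValueError."""
    reader = csv.DictReader(io.StringIO(text))
    if reader.fieldnames != list(SCHEMA):
        raise ValueError(f"expected columns {list(SCHEMA)}, got {reader.fieldnames}")
    rows = []
    for lineno, row in enumerate(reader, start=2):  # line 1 is the header
        parsed = {}
        for col, typ in SCHEMA.items():
            cell = row[col]
            if cell == "":  # make the missing-value policy explicit
                raise ValueError(f"line {lineno}: missing value in {col!r}")
            parsed[col] = typ(cell)
        rows.append(parsed)
    return rows

sample = "protein_id,intensity_a,intensity_b\nP12345,1.5e6,2.9e6\n"
rows = validate_rows(sample)
```

A specification like `SCHEMA`, pasted directly into the prompt, removes exactly the kind of ambiguity that models otherwise resolve by guessing.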

Similarly, if the task builds on an existing codebase, providing access to that code as context can make a dramatic difference. In practice, this can mean supplying a summarized version of a GitHub repository using tools like gitingest, which converts an entire repository into a structured text description that an LLM can reason over. That kind of context often resolves ambiguities that are never spelled out in papers or README files.
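For a sense of what such a digest looks like, here is a minimal stdlib-only sketch of the idea behind tools like gitingest (which additionally handle filtering, size limits, and remote URLs); the demo file layout is invented:

```python
from pathlib import Path
import tempfile

def digest_repo(root, exts=(".py", ".md")):
    """Concatenate a repo's text files into one LLM-readable string,
    each file prefixed with its path relative to the repo root."""
    root = Path(root)
    parts = []
    for path in sorted(root.rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"=== {path.relative_to(root)} ===\n{path.read_text()}")
    return "\n\n".join(parts)

# Tiny demo repo in a temporary directory
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "README.md").write_text("# Demo\n")
    (Path(d) / "analysis.py").write_text("def run():\n    return 42\n")
    digest = digest_repo(d)
```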

More broadly, scientific papers themselves can often serve as the specification for what you want to build. In recent work, we showed that modern LLMs can reimplement core computational biology algorithms using only the original peer-reviewed publication as input, with failures usually traceable to missing implementation details rather than model limitations. In that sense, the paper already defines the “what,” and the role of context is to make the “how” explicit enough to be executable.

A practical workflow is to first use an LLM to draft a short project specification that pulls together all of this context: data formats, assumptions, edge cases, references to documentation or papers, and a few concrete test cases. That specification then becomes the input to a vibe coding tool. If something is not specified, the model will guess or do nothing, and this is where most problems arise.

Finally, treat the output as a prototype. Run it on synthetic or benchmark data, confirm that it behaves as expected, and iterate. When used this way, vibe coding becomes a powerful accelerator for exploration and decision-making, rather than a replacement for careful validation or scientific judgment.

Any other practical tips on platforms or setups – and what criteria matter most?

Platform choice matters less than people often expect. At this point, many systems offer access to very capable models, so the more important differentiators tend to be cost, interface design, and how well the platform fits into an existing workflow.

One practical consideration is cost predictability. Some environments make it very easy to build and deploy sophisticated applications, but usage-based pricing and hosting fees can add up quickly. For exploratory or internal tools, platforms with simpler pricing models and minimal setup are often sufficient and easier to manage in an academic setting.

Interface and workflow also matter. Some setups emphasize end-to-end application building, while others focus on rapid iteration and direct interaction with models. The right choice depends on whether the goal is to deploy a persistent tool, or to quickly prototype and explore ideas.

Ultimately, the most important criterion is transparency. You want a setup that makes it easy to inspect the generated code, test behavior on known inputs, and iterate as requirements change. As model quality continues to converge, the limiting factor is rarely what the model can generate and much more often how efficiently the platform supports verification, iteration, and integration into existing research workflows.

Do you expect vibe coding to change how analytical software is produced in academia and industry? And having embarked on this experiment, have your overall excitement levels changed?

Yes, I expect it to change how analytical software is produced, particularly by accelerating the time required to turn ideas into working implementations. It is already possible for frontier models to generate complete codebases for many classes of problems, and there are now credible examples where a large fraction of production software has been written by AI. For instance, Anthropic has reported that its newest product was written entirely by AI, and OpenAI has stated that roughly 85% of the Sora application code was AI-generated. While those claims should be interpreted cautiously, they are strong signals of how quickly these capabilities are maturing.

What has changed most for me is not just excitement, but perspective. The opportunity cost of not engaging with these tools is growing. Tasks that once required substantial upfront engineering effort can now be explored quickly enough that it becomes practical to try more ideas, discard failures earlier, and focus human effort where it adds the most value. In practice, this means that some individuals and teams will become dramatically more productive than before, not by working harder, but by compressing the distance between intent and execution.

In that sense, vibe coding does not replace traditional software development or validation; it reshapes how analytical software is produced by shifting effort away from boilerplate implementation and toward problem formulation, verification, and interpretation. The productivity gains come from reallocating human attention to the parts of the workflow where judgment and expertise matter most.

Looking ahead, do you plan to build on this work – and what’s the next application you’d personally want to vibe-code?

When I have personal time, I plan to continue developing PeerAI.app – with a particular focus on expanding its role in preprint curation. The motivation there is practical rather than philosophical: the volume of preprints is growing faster than any individual researcher can track, and there is a real need for tools that help surface relevant, high-quality work early, before formal publication. Vibe coding makes it possible to iterate quickly on ideas for review, ranking, and triage, and to refine those ideas based on actual use rather than long development cycles.

More broadly, this work has already influenced how we think about software and method dissemination within my research group. We are becoming less focused on producing and maintaining large, monolithic packages or platforms – not because they lack value, but because the cost–benefit balance has shifted. Many of the technical barriers that once justified long-lived software artifacts have dropped substantially.

Instead, we are increasingly focused on specifying clear, executable blueprints for our methods: explicit descriptions of inputs, data structures, algorithmic steps, and validation criteria that can be fed directly to modern models to reproduce the underlying ideas. In this paradigm, the emphasis shifts from maintaining code indefinitely to clearly communicating intent and behavior. That approach aligns well with how science is supposed to work, and it allows ideas to be shared, adapted, and reimplemented much more flexibly as tools continue to evolve.

Building on that foundation, we are also exploring how agentic AI systems can be used not just to implement known methods, but to help navigate complex analytical spaces and support discovery from omic data itself. That work is still emerging, but it reflects the same philosophy: using these tools to accelerate exploration while keeping scientific judgment, validation, and interpretation firmly in the loop. 


About the Author(s)

James Strachan

Over the course of my Biomedical Sciences degree it dawned on me that my goal of becoming a scientist didn’t quite mesh with my lack of affinity for lab work. Thinking on my decision to pursue biology rather than English at age 15 – despite an aptitude for the latter – I realized that science writing was a way to combine what I loved with what I was good at. From there I set out to gather as much freelancing experience as I could, spending 2 years developing scientific content for International Innovation, before completing an MSc in Science Communication. After gaining invaluable experience in supporting the communications efforts of CERN and IN-PART, I joined Texere – where I am focused on producing consistently engaging, cutting-edge and innovative content for our specialist audiences around the world.

