AI-Generated Rat Beats Peer Review
But what are the consequences for academic integrity and the future of scientific writing?
Markella Loi | 3 min read | Opinion
If you scrolled on social media recently, you may have seen the storm of reactions after a paper with (rather outrageously) AI-generated figures was published. It has since been retracted.
It was the images that first caught people’s attention – I mean, how can someone ignore a giant dissected rat with “dislocttal stem ells” [sic]? But it turns out that the entire paper may have been penned by generative AI.
“How this got through peer review really needs to be dissilced [a reference to one of the gobbledygook terms generated by the AI],” tweeted Zoë Ayres – a comment which resonated with (and tickled) many X users.
Last year, Victoria Samanidou wrote a series of articles about the integrity of peer review. “The peer review process should promote integrity in research publications. Unfortunately, the system is often biased, unjustified, incomplete – and sometimes plain insulting, unfair, ignorant, or incorrect,” she said.
Is AI set to make things worse? Well, a quick search on PubPeer for “As an AI language model, I…” demonstrates that this isn’t an isolated incident…
There’s no doubt that many researchers are using AI-powered tools to help them write their papers. In some cases, as our rat example shows, generative AI isn’t much use. But Gianluca Grimaldi and Bruno Ehrler asked GPT-3 for a scientific article on lead toxicity in perovskite photovoltaics – and it delivered exactly that. With minor human supervision and edits, the authors believe the paper could even pass peer review as a Perspective Article. Surprisingly, when they asked the tool whether it could write a scientific paper, it downplayed its own role and abilities.
Their conclusion? AI’s emergence – much like the arrival of computers – is changing how science is performed and communicated. But because the behavior of these systems depends strongly on their training, the greater worry is the emergence of malicious strategies to amplify the relevance of selected opinions.
“Like our forest-dwelling ancestors that discovered fire, we need to be mindful of the unwanted consequences of our exciting advances, to reap their benefits without setting our homes ablaze,” Grimaldi and Ehrler conclude.
Is the use of AI in this context simply a matter of saving time and hassle, allowing researchers to do what they do best: making new discoveries in the lab? Or is generative AI set to exacerbate problems within the peer review process? Could it be that AI is simply papering over a deeper problem: the pressure to publish? Answers to these questions may determine the future of scientific publishing and integrity.
Drop me an email and let me know what you think!
Generative AI in Education
You could tell if your students were using ChatGPT, right? Well, that’s what Christopher Harrison thought – until he decided to test out some simple “calculate the pH of a solution of X” problems. The AI solved the Henderson-Hasselbalch equation for pOH, using the pKb of the acid, despite his request for pH. But then it dawned on him…
“A group of students in my class had used that approach several times to solve similar homework problems, despite me never teaching it to them. Why would I, when that approach is so convoluted for getting a pH when given a pKa? Evidently, they had been using ChatGPT to try to answer my homework problems all semester!” You can read more here.
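The AI’s detour is easy to sketch. Here is a minimal illustration – using acetic acid’s pKa and an arbitrary concentration ratio, neither of which appears in the article – showing that the roundabout pKb/pOH route lands on the same pH as the direct route, which is why the students’ convoluted answers were still numerically correct:

```python
import math

# Illustrative values (not from the article):
pKa = 4.76      # acetic acid
ratio = 2.0     # [A-]/[HA], chosen arbitrarily

# Direct route: pH = pKa + log10([A-]/[HA])
pH_direct = pKa + math.log10(ratio)

# Roundabout route (the one ChatGPT took):
# pKb = 14 - pKa; pOH = pKb + log10([HA]/[A-]); pH = 14 - pOH
pKb = 14 - pKa
pOH = pKb + math.log10(1 / ratio)
pH_roundabout = 14 - pOH

print(round(pH_direct, 2), round(pH_roundabout, 2))  # both ≈ 5.06
```

Algebraically the two routes are identical – the 14s and the sign flip on the log cancel out – so the only giveaway is the style of working, not the answer.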
Will an AI revolution bring positive reform or will the challenges outweigh the rewards? In the second of our series on generative AI in education, Alan Doucette – Professor in the Department of Chemistry at Dalhousie University, Canada – weighs in.
“AI is an invaluable educational tool, just like the internet before it, or books before that – did you know that Socrates felt writing would train the mind to forget?” Read more here!
Associate Editor, The Analytical Scientist