AI Meets AI
Upholding academic integrity (AI) in a world of artificial intelligence (AI)
Alan Doucette | 10 min read | Discussion
I instinctively approach the topic of artificial intelligence (AI) with caution – probably all the sci-fi films I’ve watched over the years. I don’t think the “machines” will rise up and take over society, but AI is increasingly integral to a growing list of applications: social media, security, commerce, gaming, transportation, and more. And we’re just scratching the surface of its full potential. As a chemistry professor, I am excited (and nervous) to see how AI might transform education. Will an AI revolution bring positive reform or will the challenges outweigh the rewards?
The perfect storm…
ChatGPT (1) first came to my department as a warning: “Watch out for this new online tool.” Sure enough, if you feed it a question on a seemingly endless list of topics, it will almost instantly spit out an answer – in simple human language. Though the potential to enhance learning seemed obvious to me, this wasn’t brought up. Rather, our concern was that ChatGPT could be exploited for take-home assignments, lab reports, and online tests. It could even draft an original essay.
Then again, was such a tool really any different from all the others? Most assignment answers are, in principle, just a click away. Or, as was done the old-fashioned way, students can simply collaborate with one another. Why not just pay someone to complete their work? Though some students would never cheat, the reality is that others might, given the opportunity. This is why in-person assessments (tests, exams) employ multiple precautions – checking IDs, distributing multiple test versions, and hiring extra invigilators – to maintain academic integrity. But it is impossible to watch everyone, always.
The 2020–2021 pandemic forced schools to move learning – and assessments – online. Asynchronous lectures, online meetings, virtual classrooms, virtual labs, and online exams were all normalized. Both students and teachers struggled to adapt to this new learning environment. And in terms of maintaining academic integrity? Well, the perfect storm had erupted.
The pandemic created an educational gap for companies to fill. Websites advertised experienced writers who could customize an essay – for a fee. Chegg became popular for its rich repository of questions and answers (2). Students could also “ask an expert” at Chegg and receive near real-time responses. Why not ask a Chegg expert for help on a test question, during the online exam? Though such practice violated Chegg’s terms of use policy, it still happened regularly (ask me how I know!). These online tools are not the cause of academic integrity violations, and many of their providers are willing to work with academic institutions to maintain honest forms of learning; for example, Chegg’s “Honor Shield” program allows instructors to upload test questions ahead of class release so that the program can block them.
And yet, academic integrity violations continue to increase. A recent survey found 95 percent of students admitting to some form of cheating (3). As educators, we encourage peer collaboration, group work, and independent learning. Consider also the pressures faced by students: the need to rise to the top, the increasing demands on students’ time, working to pay rising tuition costs, competing for scholarships, entrance to specialized programs… Sure, all of these could be just excuses, but the motive to cheat is clearly there. Now, more than ever. And so too are the means.
Enter AI
Generative AI is something new. It’s not a search engine. The “generated” responses represent an original answer created for a specific query. So how does it work? I’m not the expert, so I asked ChatGPT to explain:
“You provide it with a prompt or a question, and it generates a response based on its learned knowledge and patterns from the training data. The model generates text by sampling from the probability distribution of possible words, considering the context and relevance to the input.”
ChatGPT continued to explain that it does not “understand” its own answer, nor does it have any awareness of the context. It has just taken advantage of immense computational power to recognize patterns from the training data (in other words, lots and lots of text). From that, it can compute a statistically likely string of words that associate with the input text.
Still confused? Personally, I just think I’m playing a word-association game with the computer.
If I say “up”, you might think “down.”
If I say “Disney?” Your response might be “Mickey Mouse.”
If at first you don’t succeed? …Try, try again.
10? … Number
20? … Double
21? … Blackjack!
These were all answers generated by ChatGPT, because the words are naturally associated. When we feed in a longer string of words, ChatGPT will still calculate and return a set of words that associate with our input. It doesn’t need to understand the words, or their context, just that they go together “like peas and…” (How would ChatGPT respond?)
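The “sampling from a probability distribution” that ChatGPT described can be illustrated with a toy sketch. The word counts below are invented purely for illustration – a real language model learns such patterns, at vastly greater scale, from its training data:

```python
import random

# Invented "training data" statistics: for each prompt word, made-up counts
# of the words that followed it. These numbers are hypothetical.
next_word_counts = {
    "up":   {"down": 8, "stairs": 1, "beat": 1},
    "peas": {"carrots": 9, "pods": 1},
}

def sample_next(word):
    """Pick a likely next word by sampling from the observed frequencies."""
    counts = next_word_counts[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next("up"))    # usually "down", occasionally something else
print(sample_next("peas"))  # usually "carrots"
```

The program no more “understands” why “peas” goes with “carrots” than ChatGPT does – it simply returns a statistically likely association.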
AI to enhance learning?
Though students can search for answers online, the capacity for ChatGPT to “create” original explanations for specific questions is something new. And since ChatGPT is built on human-like language, students can have a back-and-forth conversation with the program, fine-tuning their questions, and teasing a more specific response. Let’s explore...
In my senior undergraduate courses, self-directed learning, class presentations, and written reports are an integral aspect of the curriculum. These presentations keep the class content relevant and enhance science communication. I asked ChatGPT to suggest “modern applications of mass spectrometry” as possible presentation topics for students enrolled in my course on mass spectrometry. Students do find it challenging to select a topic that speaks to their interests, so perhaps ChatGPT could assist.
I was rather impressed with ChatGPT’s initial response: proteomics (my field of study), clinical diagnostics, environmental analysis, sports doping, food safety, MS imaging… In fact, all topics covered by prior students (and also listed among the potential topics I provide as inspiration to the class). So I asked ChatGPT for 20 more topics, but this time asking for more “unusual” applications. Art forgery, forensics, breath analysis, authenticating archeological artifacts... Wonderful! Though again, still part of my own suggestions (it’s a long list). “ChatGPT, give me 20 more topics,” I asked. “List another 20…” I could have gone on. But surely, a student would find some inspiration among the many choices. They would especially find relevance if they fed ChatGPT additional information – their personal interests, career ambitions, or a focused area to explore. These are the same questions I ask my students if they come to me for topic selection advice – I hope the students pick a topic that personally interests them.
I finally settled on a topic: “Oceanography: marine biogeochemistry and ecosystem dynamics.” Putting myself in the shoes of my students, I next asked ChatGPT to “list 5 specific references on the topic of Oceanography & Mass Spectrometry.” The response was five recent, peer-reviewed articles, each from a reputable journal, with interesting titles on a variety of applications in this field. What a perfect start to research my presentation! Or so I thought. I soon discovered that I could not retrieve any of these references online. I asked ChatGPT why:
“I apologize for the confusion. While the references provided were intended to demonstrate the types of research articles available in the field of mass spectrometry in oceanography, they were generated by the language model and may not correspond to specific published articles.”
Translation: ChatGPT made them up.
Limitations and outlook
Though it may sometimes seem like they do, no computer “understands” the question nor the answer – not yet anyway. When I asked for references, ChatGPT returned what looked like references, but they were purely hypothetical. The language model “created” references, in a style that matched the pattern of true references. But it lacked the context to appreciate that a reference connects to a specific study – a published article. Of course, had I simply rephrased my question to ask for “real” references to previously published peer-reviewed articles, ChatGPT could have provided a correct response (when I did, ChatGPT returned an article authored by one of my former students, so I can vouch that it was a real study). Though students might be challenged in navigating the relative truth of AI responses, it does at least provide opportunities to enhance critical thinking. After all, being able to ask the right question is as important as finding the right answers…
In my recent class on analytical separations, the students completed an in-class midterm, after which I allowed the class to take the test home for a second attempt at answering the questions. I wanted to see if the students would revise (improve) their answers, given more time, access to their class materials, and to the internet in general. My test questions were primarily calculation based, but also prompted students to explain various scenarios. Given my cautious awareness of ChatGPT, before conducting this take-home test, I spent some time with the program to see how it would respond to my questions. I’ve heard ChatGPT can pass the bar exam or a medical licensing exam, but apparently it has a lot to learn to become a successful chemist. It could not answer a single one of these questions correctly – even after I redirected my prompts to hint at the answer. Not that the questions were impossibly difficult, but clearly the topic was not a sufficient part of the training data for ChatGPT to provide a meaningful response. Again, not yet.
I now openly discuss ChatGPT in my classes. I wanted the class to use the tool in a way that best assists their learning. I also wanted my class to know that I was aware of the tool. Related to written essays (to complement their topic presentation), I explain how powerful generative AI can be. Other tools, such as SciSpace Copilot (4), can act as a personal assistant to interpret and simplify complex material including published research articles. Need a lay summary of the paper as a whole? An explanation of Figure 2? More background on the equations presented in the paper? The significance of the work? These tools can do all of that. Coincidentally, these are the same things I ask my class to demonstrate through written assignments – not only to research and digest complex facts, but to distill and explain their context to others. Today, I must inform my classes that I am looking to assess “their” written work – not the computer’s. Unfortunately, not every student understood this message, which has forced me to rethink if this exercise can continue. Nevertheless, we still need our students to be able to think and express their ideas – even in a world where AI can do “some” of it for us.
What now?
I’ve always stood by the importance of academic integrity. I believe every student needs a fair and equitable opportunity to demonstrate their success. I also believe every student has the potential to succeed, and that hard work is the key to that success. With that in mind, it’s a silly exercise to ban AI tools from being used in education. AI is an invaluable educational tool, just like the internet before it, or books before that – did you know that Socrates felt writing would train the mind to forget (5)? What parameters do we establish to define how much “help” is acceptable with AI tools? How do we know if students have passed those boundaries? And how do we encourage our students to uphold these limitations? I don’t have these answers. I just know that the education system is changing at such a pace that no one person can keep up.
I already asked ChatGPT – and it told me that communication is key. And that is why I’m asking you for your input.
- OpenAI, “ChatGPT” (2023). Available at: https://bit.ly/3Y4R6nG.
- Chegg, “Honor Shield” (2023). Available at: https://che.gg/3On68lv.
- ICAI, “Facts and Statistics” (2020). Available at: https://bit.ly/3Nl3pXX.
- SciSpace Copilot (2023). Available at: https://scispace.com/.
- Plato, “Phaedrus.” Available at: https://bit.ly/3rycOEr.
Alan Doucette is a Professor in the Department of Chemistry at Dalhousie University, Canada