Your Students Are Already Using ChatGPT
Can you even tell? If so, please let me know how…
Christopher R. Harrison | 3 min read | Opinion
This past fall, I was having a discussion with my fiancée about ChatGPT and the ramifications it could have on teaching. She’s the Chair of an English department, so the impact of the AI on writing assignments was obvious and concerning to her. Smugly, I claimed, “We in chemistry don’t have that problem.” After all, how could an AI that is good at spitting out summaries of classic novels or personal statements be of use to students trying to calculate the exact amount of acid to add to a solution to get it to buffer at the correct pH?
Oh, how wrong I was!
Later that evening, as I thought more about it, I decided to test ChatGPT with one of the simpler “Calculate the pH of a solution of X” problems. What didn’t surprise me was that the AI failed spectacularly at getting the correct answer. What did surprise me was how it failed. It solved the Henderson-Hasselbalch equation for the pOH, using the pKb of the acid, despite my request for the pH.
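For the curious, the two routes do converge on the same number when carried out correctly, which is exactly why the detour is so pointless. Here is a quick sketch in Python using a hypothetical acetate buffer (the pKa and concentrations are illustrative, not from my homework problems):

```python
import math

# Hypothetical buffer: equal concentrations of acetic acid and acetate.
pKa = 4.76          # pKa of acetic acid (illustrative)
acid = 0.10         # [HA] in mol/L
base = 0.10         # [A-] in mol/L

# Direct route: Henderson-Hasselbalch with the pKa.
pH_direct = pKa + math.log10(base / acid)

# The roundabout route ChatGPT took: convert to pKb, solve for pOH,
# then convert back to pH. Note the concentration ratio inverts.
pKb = 14.0 - pKa
pOH = pKb + math.log10(acid / base)
pH_roundabout = 14.0 - pOH

# Same answer either way; the second route just adds two conversions
# and two extra chances to make a sign or ratio error.
```

With equal acid and base concentrations, both routes give a pH equal to the pKa, 4.76; the pKb path simply takes twice the algebra to get there.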
But then it dawned on me. A group of students in my class had used that approach several times to solve similar homework problems, despite my never having taught it to them. Why would I, when that approach is so convoluted for getting a pH when given a pKa? Evidently, they had been using ChatGPT to try to answer my homework problems all semester! The pOH and pKb approach to the buffer problem was the giveaway. When I asked them why they were using this approach, one student offered the explanation that their older sister had learned it this way at Berkeley. I didn’t question it at the time, but maybe I should have pushed – I doubt my colleagues at Berkeley are teaching such painful approaches.
In hindsight it was clear: they were feeding my questions to ChatGPT and copying out the calculations to answer them. But it was only clear because the AI was so grossly wrong. Would I have noticed if it had done things correctly? Or even just a bit closer to right, using the pKa instead of the pKb? More worryingly, had they been using AI for other homework assignments throughout the semester? It bothers me that I cannot know with certainty.
This is where the “danger” of ChatGPT and other AI tools becomes clear. A student could get passing grades on all their assignments with close-to-correct answers from an AI while not learning anything and, most crucially, without ever drawing my attention so that I could intervene and help them. The only evidence of their lack of knowledge would arise on the exams.
When I tasked ChatGPT with writing abstracts for research similar to my own, or summarizing work that I was familiar with, the results were impressive. They truly looked as though a competent undergraduate student had written them – communicating that vague sense of a superficial understanding of the material.
What can we do about this? Is it even a problem?
I think that AIs like ChatGPT pose a potential problem in education, particularly if students are unwilling or unable to critically evaluate the results that the AI generates for them. But they may also be a potential tool – perhaps we can use generative AI as a means of getting students to think critically about the data that they are presented with.
Furthermore, as ChatGPT “learns,” it may become capable of finding the right answers to all the problems. I may have even helped it on that path. I pointed out the errors it was making on the buffer calculation problems and it responded like a student, learning incrementally how to get to the right answer, and now it may do them correctly.
In either case, how should we treat students’ use of ChatGPT and similar AIs? Is this akin to using Google or Wikipedia to find information? Or is it closer to looking up the solutions to problems on Chegg and pure plagiarism?