27 April 2025 - regarding cheating in the age of AI
There's ongoing debate within CS pedagogy about the correct response to AI-based assignment completion, which would traditionally be considered "plagiarism."
On one hand, perhaps a majority of students now produce all of their code via an LLM app, which certainly falls under typical course restrictions on the use of external resources.
On the other hand, LLMs are here to stay, and their output is only getting harder to detect by any objective measure.
(My school uses "interview grading", where we ask students random questions about their code in hopes of catching a cheater. This has roughly a 0% success rate.)
My take is that we already have an analogous pedagogical model: mathematics education.
Math ed, like CS ed, relies largely on 1) graded exercises and 2) synchronous examination. The key difference today is in the weighting: CS heavily upweights the graded exercises (i.e., "homework"), while mathematics upweights examinations, because for the highest level of math most students will ever take, online tools (Wolfram Alpha, say) can perfectly solve any problem you can assign.
I think CS ed is now in a similar spot thanks to ChatGPT. Chat (as it is affectionately known) can whiz through any assignment in something like 90% of CS classes. After all, that's essentially the main thing it was trained to do, at least with respect to coding. Think about the format of a coding instruct prompt--"Write a function that..."--it reads exactly like a homework prompt. LLMs are not just good at homework; they are quite literally trained to be homework machines.
Anyways, the obvious solution is to reweight our grading: massively upweight exams, and make it clear to students that they don't have to work hard on their homework--but slacking will cost them come exam time.
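For concreteness, here is a minimal sketch of what such a reweighting might look like. The specific split (10% homework, 90% exams) is my own hypothetical illustration, not a studied recommendation:

    # Hypothetical grade weighting: homework still exists (and still gets
    # feedback), but exams carry almost all of the credit.
    WEIGHTS = {"homework": 0.10, "midterm": 0.40, "final": 0.50}

    def final_grade(scores: dict[str, float]) -> float:
        """Weighted average of per-category scores, each on a 0-100 scale."""
        return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

    # A student who lets ChatGPT ace the homework (100) but bombs the
    # exams (50s) lands at a 55 -- a failing grade.
    print(final_grade({"homework": 100, "midterm": 50, "final": 50}))  # 55.0

Under weights like these, outsourcing the homework to an LLM stops being a winning strategy on its own.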
mg