Blame Schools, Not AI
It is unproductive to punish students without regard for the systemic pressures that push them towards AI.
With the beginning of the fall term, we’ve all been skimming through—or, hopefully, at least opening—that fresh wave of crisp, new syllabi. And as always, the familiar academic integrity policy prohibiting AI use greets us right near the bottom—the part we typically skip.
It’s not news to anyone at the University of Toronto that the use of generative AI among students has been on the uptick. Late-night essays, overdue problem sets—and let’s be honest, it’s pretty tempting to offload all of the work and stress when that 11:59 pm deadline flashes in front of your eyes. The last thing you’re thinking about is integrity when making your Program of Study (POSt) is on the line.
But this is not lost on the institutions themselves. Canadian universities have rushed to police AI, and some blame this increasing trend on the laziness or dishonesty of the students—after all, how hard can it really be to submit an assignment on your own? But have any of them paused to consider the role of the academic system?
Think back to the last few times you wrote a test: how often did you ask your peers, “What was your grade?” rather than, “Hey, wasn’t learning about DNA replication pretty cool?” Grades have always been the currency of school, prized above actually understanding anything. In a system like this, it’s not surprising that students turn to a tool that can sift through vast amounts of data in seconds and generate exactly the product their educators are demanding.
Terry Underwood’s Substack article on the nuanced relationship between AI usage and academic dishonesty, written from the perspective of an academic and teacher, argues that a “hidden curriculum” is embedded in the education system—one in which grading pretends to measure us objectively but really just assigns us a number. As soon as that graded assessment comes back, we’re pressured to fixate on the score, to view it as the concrete measure of our capability and value. Under this demand, nothing other than the final product and its numerical worth makes a difference. This makes the pivot to using AI—to meet demands quickly and avoid the risk of making mistakes—seem almost natural. AI wouldn’t have the power to devalue integrity and credentials if those credentials weren’t made out to be the be-all and end-all to begin with.
And if grades are the currency, assignments are the transactions—rigid, standardized, and designed for efficiency. That’s one of the reasons why the rise of AI says more about the flawed design of assessments than it does about a lack of academic integrity. The British Educational Research Association (BERA) criticizes the university assessment strategies in place, calling them obsolete—they reward output while discouraging critical thought. It’s a design rooted in capitalistic ideals, where product is favoured over progress. After all, our current model of schooling was designed to train factory workers to become obedient and agreeable. It’s unsurprising that students are assessed on compliance rather than originality when they’re expected to become numbers in the workforce. And the thing about AI is that it is very good at compliance—in fact, it is built for pretty much exactly that.
While it is necessary that students take responsibility for cheating, it is unproductive to punish them without regard for the systemic pressures that push them towards it.
It is also worth considering whether the nature of these assignments reflects the course content effectively. If students immediately resort to using AI rather than attempting assignments on their own, it raises the question: are the students lazy, or are the assignments not prompting actual input? If an assignment can be completed by a machine that simply spits out answers and requires little genuine thought, then the assessment scheme isn’t doing much for students in terms of learning, understanding, or skill development. If schools placed more emphasis on free thought and process, AI would not have to replace the student.
Has AI broken higher education?
Not necessarily—it might have just exposed some of the cracks. Until universities value curiosity over compliance and growth over grades, students will keep looking for shortcuts. Only then might we stop asking, “What did you get?” and start asking, “What did you learn?”

