AI Detection in Schools: What We Are Getting Wrong

  • Writer: Bruce Sarte
  • 1 day ago
  • 4 min read

I was recently working with a student who was walking me through some trouble she was having turning in an assignment in Canvas. She got a little sidetracked and began telling me about the hard work she had put into researching the topic, and how she had agonized over the conclusion of her paper.



As she talked, she hesitated for a moment and said something that stuck with me:

"I’m kind of nervous to turn this in… what if they think I used AI?"


She hadn’t used AI. In fact, she was proud of her work. But instead of feeling confident, she was visibly worried. She was worried that her writing might sound too polished, too structured, or might somehow get “flagged.”


That moment really captures where we are right now in education when it comes to AI.


We’re trying to respond quickly to new technology, but in the process, we may be creating new problems for both teachers and students.


The Truth About AI Detection Tools

Let’s start with the most important point: there is no AI detection tool that works reliably. Not with training, and certainly not on its own.


Not Turnitin. Not GPTZero. Not one of the tools currently available.


All of these systems rely on probabilistic models. They don’t actually know whether AI was used. Instead, they look for patterns in the writing. They look for sentence structure, predictability, vocabulary use, and how formulaic the writing appears.


From there, they generate a likelihood score.


In other words, they’re making an informed guess.
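To make that concrete, here is a deliberately simplified sketch of the kind of surface-pattern scoring described above. Real detectors use trained language models, not hand-written rules like these; the signal names ("burstiness," vocabulary ratio) and thresholds here are illustrative assumptions, not any vendor's actual method. The point is the shape of the process: measure patterns, emit a probability-like number.

```python
import statistics

def toy_ai_likelihood(text: str) -> float:
    """Illustrative only: score text on crude 'formulaic-ness' signals.

    This is NOT how Turnitin or GPTZero actually work internally;
    it just mimics the general idea of pattern-based scoring.
    """
    # Split into rough sentences.
    cleaned = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in cleaned.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.5  # too little signal; return an uninformative middle score

    # Very uniform sentence lengths read as "formulaic" -- detectors
    # call the opposite quality "burstiness."
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pstdev(lengths) / statistics.mean(lengths)

    # A low unique-word ratio suggests repetitive, predictable vocabulary.
    words = text.lower().split()
    vocab_ratio = len(set(words)) / len(words)

    # Combine into a 0-1 "likelihood" -- an informed guess, nothing more.
    return round(1.0 - min(1.0, (burstiness + vocab_ratio) / 2), 2)
```

Notice what this sketch cannot do: it never observes whether AI was used. A careful human writer with even sentence lengths and tidy vocabulary scores exactly like machine output, which is where false positives come from.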


This leads to two unavoidable issues:

  • False positives: Human writing gets flagged as AI

  • False negatives: AI-generated writing goes undetected


And these aren’t rare situations—they are part of how the technology functions.
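A little base-rate arithmetic shows why even a seemingly small error rate produces regular false accusations. All of the numbers below are hypothetical, chosen for illustration; no detector publishes rates this cleanly, and real error rates vary with text type and length.

```python
# Hypothetical numbers for illustration only.
honest_essays = 500         # essays written without AI in a grading cycle
false_positive_rate = 0.01  # assume the detector wrongly flags 1% of human writing

expected_false_accusations = honest_essays * false_positive_rate
print(expected_false_accusations)  # 5.0 honest students flagged

# Bayes' rule makes the same point another way: a flag is weak evidence
# when most students write honestly.
p_ai = 0.10                 # assumed share of essays actually AI-written
p_flag_given_ai = 0.95      # assumed true positive rate
p_flag_given_human = 0.05   # assumed false positive rate

p_flag = p_ai * p_flag_given_ai + (1 - p_ai) * p_flag_given_human
p_ai_given_flag = p_ai * p_flag_given_ai / p_flag
print(round(p_ai_given_flag, 2))  # 0.68 -- roughly 1 in 3 flags is wrong
```

Under these assumptions, a teacher trusting the flag at face value would wrongly accuse an honest student about a third of the time. That is the structural problem: the math, not the teacher, is the weak link.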


Teachers Are Being Asked to Solve an Unsolvable Problem

At the same time, teachers are being asked to figure out when AI is being used inappropriately.


But without reliable tools, that’s incredibly difficult.


Many teachers find themselves in situations where they:

  • Suspect AI use, but don’t have clear proof

  • Feel pressure to rely on detection scores that aren’t definitive

  • Are navigating unclear expectations around what “counts” as misuse


And AI use isn’t always obvious. Students aren’t just submitting fully AI-written essays. They are also:

  • Using AI to edit or improve their writing

  • Generating ideas or outlines

  • Pulling in information that may or may not be accurate


This creates a gray area that teachers are often left to manage on their own.


Students Are Adapting to AI, But Not in the Ways We Hoped

Students are paying attention to how these tools work—and they are adjusting their behavior.


Some of the patterns we’re seeing include:

  • Training AI to match their personal writing style so it blends in

  • Dumbing down their writing so it doesn’t appear “too good”

  • Avoiding strong vocabulary or complex sentence structure

  • Hesitating to revise or improve their work


Think about that for a moment. Instead of encouraging students to grow as writers, we may be unintentionally encouraging them to hold back. To "dumb it down," in essence.


Even more concerning is how students are feeling:

  • Nervous about being falsely accused

  • Unsure how to prove their work is their own

  • Cautious about how much effort they put into assignments


For some students—especially multilingual learners—this concern can be even greater, as their writing may naturally be flagged due to patterns that detectors misinterpret.


When Detection Becomes a Barrier

The use of AI detection tools can quietly shift the tone of a classroom.

Instead of feeling like a space for learning and growth, it can start to feel like a space where students are being monitored and judged—sometimes without clear evidence.


For students who already feel uncertain, this can become a real barrier.


They may:

  • Avoid taking risks in their writing

  • Avoid using tools that could actually help them learn

  • Pull back from engaging fully in assignments

  • Lose trust in the process


And perhaps most importantly, they may feel like they can be accused of something they didn’t do—with no real way to prove otherwise.


So Where Do We Go From Here?

If AI detection tools aren’t reliable—and may even be causing unintended harm—what’s the alternative?


The goal shouldn’t be to ignore AI, but to approach it differently.


Some more effective directions include:

  • Shifting away from “catching” students and toward teaching responsible AI use

  • Focusing more on the process of learning (drafts, revisions, reflections)

  • Creating space for honest conversations about how students are using AI

  • Setting clear expectations about what is acceptable

  • Supporting teachers with guidance—not just tools


Final Thoughts

That student I mentioned at the beginning?

She did turn in her assignment. But she did it with hesitation, not confidence.

And that’s something worth paying attention to.


AI detection tools may seem like a solution, but right now, they are far from that. More importantly, how we use them can shape how students experience school.

If we’re not careful, we risk creating classrooms built on suspicion instead of trust.


And in education, trust matters just as much as anything we’re trying to measure.
