Sep 18 / Katarzyna Truszkowska
Should Students Be Scared and Lecturers Relieved? A New Turnitin Feature, the “Anti-AI Humanizer”
On August 27, 2025, Turnitin unveiled its newest feature: the “AI Bypasser Detection” tool. The claim is ambitious: it can spot text that has been run through “AI humanizers,” services designed to disguise ChatGPT or other machine-generated content and make it read more convincingly as human. Turnitin frames this as a breakthrough for fairness and originality in student submissions. But the question remains: is it really the game-changer it sounds like, or just another tool marketed to play on institutional fears?
Why I’m Skeptical
I remain cautious about sweeping claims like these. For one, this announcement doesn’t feel nearly as revolutionary as the marketing suggests. Many lecturers already rely on advanced grammar and writing support tools such as Grammarly Pro, which provide effective feedback on clarity, structure, and style. If those are already in widespread use, what exactly makes this “bypasser detection” so distinctive? It amounts to one AI tool checking up on another. Really?!
More importantly, the claim that it can definitively distinguish between human-edited AI text and authentic human writing rests on shaky ground. Once a student revises a passage, whether independently or with technological help, there is no clear line left to detect.
And let’s not underestimate the students themselves. They are often far ahead of the technology. I work with students at top UK universities who regularly integrate ChatGPT into their assignments, and they continue to achieve Merits or higher. This isn’t because they are “getting away with it,” but because assessment design still makes it possible to combine AI drafting with individual editing and meet expectations. Against that reality, it is hard to see how another layer of detection restores fairness.
To put it simply:
- Myth: This tool is groundbreaking. Reality: Similar support already exists in other writing technologies.
- Myth: AI vs. human writing is easy to detect. Reality: Once text is edited, the distinction disappears.
- Myth: Students will be caught. Reality: They adapt more quickly than detection tools evolve.
- Myth: More detection equals fairness. Reality: Without transparency and accuracy, it only adds risk.
The bigger issue
Turnitin has always thrived by selling reassurance. Universities invest in its services not only for plagiarism detection, but for the sense of control they create. The new “AI Bypasser Detection” fits that pattern perfectly. It taps into fears that students are ahead of educators, that academic integrity is collapsing, and that technology is accelerating too quickly to manage. But while fear may be profitable, it is not the same as a solution.
This would be less worrying if Turnitin’s track record inspired confidence. When its first AI detector launched in early 2023, it produced so many false positives that entire cohorts were flagged as submitting AI-generated work. A tool that is unreliable doesn’t increase fairness; it creates risk instead. And when the cost of error is an accusation of misconduct, students’ futures are on the line.
Where the Real Solution Lies
If we are serious about supporting both students and educators, the answer does not lie in more detection software. It lies in smarter education:
- Educating educators so they understand how generative AI and bypass tools really work.
- Rethinking assessment design so success depends on originality of thought, critical reasoning, and process, not only polished prose.
- Ensuring transparency and fairness whenever detection is used, so students know how results are calculated, what the margins of error are, and what rights of appeal exist.
Why This Matters to Me
At Oxford Academy of English, I began by tutoring international students who were adjusting to the demands of academic writing. Over time, that work expanded into long-term mentoring that gave me a front-row seat to the ways students adapt, innovate, and succeed in the higher education system. That perspective has convinced me that detection tools are not the answer.
The future of education will depend less on policing and more on guidance: helping both students and lecturers learn how to use AI responsibly and transparently. Trust, pedagogy, and reform are the real game-changers.
The Real Game-Changer
Turnitin’s new feature may well find a market, especially among institutions that feel compelled to “keep up.” But until universities confront the deeper problem, namely the gap between how education is designed and how students are already learning with AI, no detection software will solve the challenge. The path forward lies in building a culture of fairness, adapting pedagogy, and equipping students for the AI-driven reality they already inhabit.
Get in touch
Oxford Academy Of English Ltd
1 & 3 Kings Meadow, Osney Mead, Oxford, OX2 0DP, UK
contact@oaoe.co.uk
+44 (0) 7356 030202