Dec 17 / Katarzyna Truszkowska

Moving Beyond AI Shame: What If AI Use Is a Symptom, Not the Disease?

"Since when do you say 'touch base'? Someone's been using ChatGPT..."

A colleague recently told me about this playful exchange between two teachers. One had written an email using a phrase they'd never used before.

The other called them out gently, but publicly.

The teacher who wrote the email felt instantly embarrassed.

But here's my question: Why? Maybe this teacher will use "touch base" confidently next time, now that it's part of their repertoire. Isn't that just learning?

Yet this tiny moment of shame reflects something much larger. Across education, we're making high-stakes policy decisions about AI and academic integrity without the research we need, often based on assumptions rather than evidence, and almost always wrapped in shame.

Students are accused of cheating.
Teachers are criticised for using AI.
Everyone is uncertain about what's acceptable.

Here's the reframe we need. AI use isn't primarily an integrity problem. It's a symptom of deeper structural issues in how we teach, support, and assess writing.

Address those underlying issues, and many of the "integrity concerns" resolve themselves.

Here are three perspectives that reveal what we're really dealing with.

1. Students: When Practice Doesn't Come Before Performance

I work primarily with international students, which has given me insight into a pattern that affects far more than just this population. Here's what I observe repeatedly:

We lack basic research on who uses AI and why. There is no evidence showing that international students use AI or engage in academic misconduct more than domestic students. We simply don't have the data. Yet policies are often shaped by anecdotal impressions that can reflect bias more than reality.

The real pattern is this:
Students across all backgrounds are turning to AI when they haven't received adequate preparation for the writing we're asking them to produce.

The typical progression looks like this:

  • Several weeks of lectures about academic writing
  • Minimal opportunities for low-stakes practice with feedback
  • The first significant writing attempt happens during assessed coursework
  • Feedback arrives too late to inform the learning process

Assessment measures learning; it doesn't create it. Yet for many students, their only substantive writing practice occurs at the point of measurement.

Add the linguistic dimension.
Many students—both international and domestic—are being asked to produce extended arguments in academic English, which is genuinely no one's native language.

Academic writing is its own specialised register: dense, structured, highly conventional. It requires extensive practice to master, regardless of your first language.

When students turn to AI under these conditions, we might ask: Is this a moral failing, or a predictable response to insufficient scaffolding?
If we provided structured, low-stakes writing practice throughout our programs with timely feedback that students could actually use, would we see the same patterns of AI dependency?

2. Teachers: The Unsustainable Workload Behind AI Adoption

Here's a trend that surprised me. An increasing number of educators are using AI to generate feedback on student writing or assist with marking.

This surprised me not because I'm opposed to AI - I use it extensively in my own work - but because marking is where I learn most about my students' thinking. It never occurred to me to outsource it.

So why are colleagues making different choices?

They're facing impossible conditions.
Marking hundreds of essays in a matter of days isn't unusual. It's the reality of modern higher education staffing models. Time pressure pushes people toward any available efficiency.

Many lack training in efficient, effective feedback practices.
We invest heavily in conversations about academic integrity, but we rarely invest in developing educators' feedback literacy. Questions like "How do I provide meaningful feedback efficiently?" or "How do I calibrate my judgment beyond annual standardisation meetings?" are left for individuals to figure out on their own.

AI fills a real gap, but at a cost.
AI can help draft comments, but it severs a crucial learning loop. When educators engage directly with student writing, they understand their students more deeply, notice patterns in where people struggle, and adapt their teaching accordingly. In other words, AI can potentially teach writing, but only educators who engage directly with student work can truly understand and respond to their writers.

This isn't a story about teachers taking shortcuts. It's a story about under-resourced institutions asking educators to do more than is humanly sustainable and then being surprised when they reach for tools to cope.

3. The Wider Public: Where Do We Draw the Line?

Outside academia, the ethical boundaries become even less clear.

Is it wrong to use AI to draft a difficult email? To express a delicate idea more diplomatically? To write confidently in a second or third language?

We've accepted tools at every stage of the writing process: spell-checkers, grammar tools, translation apps, templates. Yet when the tool becomes more capable, we suddenly attach moral weight to using it.

Clarity shouldn't require an apology.
If someone uses AI to communicate more effectively, professionally, or respectfully, is that unethical? Or is it simply what people have always done: use available tools to express themselves as well as they can?

Language has always evolved alongside technology. AI is simply the latest iteration of that centuries-old pattern.

The question isn't whether people should feel ashamed for using AI. The question is: What are they using it for, and what does that tell us about the support they're not receiving elsewhere?

Moving Forward: What Institutions Can Actually Do

If AI use is a symptom of structural problems, what are the solutions? Here are four starting points for institutional leaders:

1. Build practice into curricula, not just assessment. Create multiple low-stakes writing opportunities with timely feedback before high-stakes assessments. This benefits all students, but especially those navigating linguistic and cultural transitions.

2. Invest in educator development. Provide training in feedback literacy, efficient marking strategies, and how to design assessments that promote learning rather than just measure it. Teachers need support as much as students do.

3. Develop constructive AI policies. Move beyond prohibition toward guidance. Help students understand when AI use supports their learning versus when it displaces necessary skill development. Clear, practical examples work better than abstract principles.

4. Acknowledge linguistic diversity as a resource, not a deficit. Students working across languages bring valuable perspectives. Our policies should recognise this rather than penalising linguistic difference or creating conditions where AI becomes the only viable support.

The Real Question

Shame has dominated too much of the AI and academic integrity conversation. It's not a productive frame, and it's not an accurate one.

Students aren't morally deficient for turning to AI. They're responding rationally to the conditions we've created.

Teachers aren't taking shortcuts. They're trying to survive unsustainable workloads without adequate training.

Professionals aren't cheating. They're using tools to communicate effectively.

The real question isn't "Should people feel ashamed for using AI?"

The real question is: "What gaps in our systems is AI use revealing, and what are we going to do about them?"