We’re in the middle of exams, and as a tutor for storytelling at an art academy, I find myself navigating a tricky yet exciting development:

What do we do with AI-generated texts? Or, more broadly, with students using AI to sort through the mess in their heads, sharpen ideas, and help edit or translate their writings?

As a writer I am fascinated by these developments, and I have almost daily conversations with ChatGPT about writing, about texts, or about research-related questions. I enjoy discussing my own texts with Chat; Chat has become a kind of first editor, offering comments and fresh perspectives on my sentences. It’s surprising how well Chat has come to “know” me and my interests. Often, I simply ask:

“What do you think about this text? What’s interesting about it?”

The responses offer me a different perspective on my words and thoughts, and that can be inspiring. But, like all feedback, ChatGPT’s suggestions can also be distracting. At times it’s off, or just boring, or completely misses what I was trying to figure out or say. Using AI can make your work weaker, can kill what’s special, and replace it with cliché or with superficial but great-sounding nonsense.

When I was a student, one of my teachers used to refer to meaningless or cliché images as “visual pollution.” In his memory, I now warn my own students: Don’t produce AI-generated linguistic pollution.

Writing has two core dimensions:

𝗔𝘂𝘁𝗵𝗼𝗿𝘀𝗵𝗶𝗽 𝗮𝗻𝗱 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝗰𝗮𝘁𝗶𝗼𝗻.

My teaching always centers on authorship – and that can never be replaced by AI. Communication, however, can be supported, sharpened, or improved by AI, and that’s not a bad thing.

But be aware: These slick, fluent strings of words may look impressive, but they can kill your authorship and your unique narrative voice.

Experienced writers can tell the difference between helpful and harmful feedback, between linguistic pollution and genuine insight. But many students are still learning that awareness. That’s why I’m developing a crash course next semester on exactly this topic:

𝗔𝗜 𝗹𝗶𝘁𝗲𝗿𝗮𝗰𝘆 𝗳𝗼𝗿 𝘄𝗿𝗶𝘁𝗲𝗿𝘀.

ChatGPT and other AIs are here. We can’t ban them and we shouldn’t. But students need to know how to use them wisely.

For creative writers, AI is a tool. Like a computer, like Google.

Could I write a novel with ChatGPT? It would bore me to death long before I even came close to the Midpoint of my story. I write to explore our world, my thoughts, my memories, emotions, struggles, my opinions. In short: what it means to be human. ChatGPT has interesting ideas on that. It’s great at summarizing content. But it still can’t replace the depth and mystery of the human creative journey.

To be continued—because things are changing fast.

Here are some thoughts ChatGPT has about human writing vs. AI-generated text:

Human writing is rooted in experience. It carries emotional depth, contradiction, context, and intention that come from being alive—feeling, struggling, observing. Even in fiction, human authors draw from real, lived sensations: memory, culture, trauma, joy. It’s unmistakably personal.

AI writing, on the other hand, is patterned. It reflects everything it has learned, but nothing it has felt. It can be eloquent, structured, even moving—but its core is synthetic. It’s a remix, not a revelation.

The real difference is not just style, but source. Human writing comes from a self, from inner conflict or clarity. AI writing comes from prediction—of what words are likely to follow what others.

That doesn’t make it useless. It can support clarity, offer structure, or mimic tone. But it cannot replace the why behind your voice.

Typical Traits of Human-Written Text

  1. Personal Voice & Perspective
    • Often includes unique emotional nuance, lived experience, personal bias, or cultural context.
    • You might see “messy” edges — inconsistencies, but also authenticity.
  2. Originality Through Experience
    • Humans draw on real-world events, memories, or relationships that AI doesn’t have.
    • There may be subtle references or metaphors that reflect a particular upbringing, time, or place.
  3. Creative Risk-Taking
    • Humans may break grammatical rules intentionally or invent unusual phrasing for style.
    • They sometimes take creative or thematic leaps AI would avoid for the sake of coherence.
  4. Emotional Imperfection
    • The emotional tone might fluctuate in unexpected ways — sometimes contradictory, sometimes raw — but very human.

⚙️ Typical Traits of AI-Generated Text

  1. Polished, but Predictable
    • Often smooth, grammatically correct, and coherent — but can sound formulaic or “safe.”
    • It tends to avoid ambiguity or contradiction unless specifically prompted.
  2. Pattern-Based
    • AI mimics patterns seen in training data. So it often uses clichés or overused structures if not guided to avoid them.
    • It may imitate a tone (e.g., poetic, academic) very well, but not “live” it from within.
  3. Lacks Real Memory or Experience
    • AI doesn’t “know” the world through the body or emotion. It can describe grief, but hasn’t felt it.
    • Descriptions can be vivid but sometimes hollow or overgeneralized.
  4. Consistently Consistent
    • AI tends to be logically coherent and stylistically uniform, even when variation would add depth.
    • Repetitions of rhythm, phrasing, or tone may be subtly noticeable.

🔍 How to Spot the Difference (Usually)

  • Does the text surprise you in a human way? If it feels personal, strange, vulnerable, or chaotic — probably human.
  • Does it “flow” too evenly, like it’s trying to please everyone? That’s often AI.
  • Does it reference specific, unverifiable memories or emotions with awkward honesty? That’s usually human.
  • Does it balance precision with intuition, and reveal a lived viewpoint? That’s harder for AI to fake.

The Fiction Field Lab

A toolbox to explore perception and place.
