AI Confabulates, We Create
There’s something beautifully human about imperfection. A stray brushstroke, an uneven line, a misplaced comma — it’s these little details that give creative work character. These “mistakes” are often what makes art memorable, pulling us in and reminding us that there’s a real person behind the work. Human imperfection isn’t a flaw — it’s a signature. A core feature, if you will, even as we sometimes strive for perfection. It signals that a living, breathing person poured their thoughts, emotions, and effort into the piece.
Hallucinations or Confabulations?
AI doesn’t make mistakes like humans do. Instead, it “hallucinates” — or, as I’d like to suggest, it confabulates. The term hallucination has been widely used in discussions of AI errors, but to me it doesn’t quite capture what’s happening. The AI isn’t on a mushroom trip envisioning dancing condiments from the movie concession stand in the middle of your marketing presentation (I would heartily approve of this left turn if it did). It isn’t dreaming up nonsense out of nowhere. It’s more like someone with a brain injury who can’t access certain memories, so they fill in the gaps with suspicious amounts of plausible-sounding detail. That’s confabulation — the creation of false or inaccurate information, often to fill in memory gaps or jumps in logic, without the intent to deceive. The brain generates real-sounding but ultimately incorrect details to make sense of incomplete information. In AI, confabulation occurs when the system fills gaps with seemingly coherent but false data, producing results that appear convincing but lack true understanding or factual accuracy.
From a conversation I recently had with a brain injury specialist talking about human brains and artificial intelligence:
In brain injury, one of the symptoms that you see when people are confabulating is lack of insight into their own deficits. You see it especially in traumatic brain injuries. No amount of feedback makes any difference, but it usually improves over time. When a patient starts to be aware of their deficits, they frequently get depressed even though it is actually a step forward. Does AI understand its own limitations? Does AI have anosognosia?
Happy Little Accidents
When a human makes a mistake in writing, art, or creative pursuits — unlike when I completely made up an entire verse in French for a Delibes art song, which somehow turned into a collection of bistro menu items paired with high notes — it’s not always an accident. (The bistro menu was definitely a hilarious, happy accident.) These errors are often part of a larger vision, or even the source of unexpected insight. Bob Ross, that eternal advocate for creative freedom, famously called these moments “happy little accidents.” They remind us that imperfection is woven into the creative process itself, carrying meaning, intent, and sometimes lessons that are more valuable than perfection.
Human errors are filled with emotion, with intent. When we fail, we pause, we reflect, and we grow. It’s this process of grappling with our mistakes that transforms a slip of the brush, an out-of-place note, or an incorrect word into something that speaks to our humanity. As Carol Dweck’s research on mindset shows, our responses to failure can lead to significant learning. Those who adopt a growth mindset see mistakes as an opportunity to learn and grow, rather than as a fixed, unchangeable flaw. By embracing these moments, we unlock new possibilities, new approaches, and often stumble upon brilliance we hadn’t even planned. In the end, the human creator may or may not have been aiming for perfection, but the imperfection adds something valuable — it’s part of the work at its core.
On the other hand, when AI generates an unexpected or erroneous result, it’s not about intent or learning from the mistake. The machine doesn’t “mean” to do anything. It simply fills the gaps in its data, producing something that appears coherent on the surface but lacks the depth of deliberate context. AI isn’t making mistakes like we do — it’s fabricating details to avoid a gap, to fill a void it doesn’t even understand exists. And that’s why AI “confabulations” don’t hold the same charm. They’re bugs in the system, not signatures of life.
But maybe that’s the beauty of working with AI: it reveals what is uniquely human in us. As we navigate these new tools, we’re reminded that what truly makes creative work sing is the spark of human error — our happy accidents, our ability to learn and transform. AI might accelerate our process, but it’s still the mess, the imperfection, and the intentional reflection that shape our masterpieces. The heart of creativity still belongs to us.
The Art of Discernment in an AI Era
When you or I admit, “I don’t know,” or “I made a mistake,” it opens the door to reflection, growth, and learning. This reflection drives us to reexamine and refine our work. AI, however, lacks that self-awareness — for now. It delivers answers confidently, and unless we’re paying close attention, we might miss the falsehoods hidden in a coherent package.
This distinction matters. Human imperfection fuels creativity, guiding us toward new ideas and unexpected insights. It keeps our creative work alive with emotion and intent. AI can assist in this process, but thoughtful retooling is necessary to integrate it meaningfully into our lives.
So, how do we teach revision, polish, and curation in this landscape, where generative AI allows us to quickly ideate, generate, and move through foundational processes?
The answer lies in refocusing on what AI can’t provide: discernment, intentionality, and human refinement. While AI accelerates the creative process by producing content in seconds, it lacks the depth and finesse human creators bring to reflection, refinement, and curation. It’s in this space — between what AI generates and what humans intentionally craft — that the real creative magic happens. Teaching revision, polish, and curation in the age of AI means emphasizing quality over quantity and process over product. Working with AI is a process, but it should not be the final process.
Refining AI Processes Through Creative Reflection
And, now, a bit of practical advice in the service of process…
Embrace the Red Pen
AI is fantastic at generating ideas, but more isn’t always better. Developing the discernment to know what stays and what goes is key. Every element should serve a purpose.
Try this: After using AI to generate ideas, imagine you’re curating an art exhibit. Which pieces would make the final cut? Challenge yourself to slash 50% of the output. Which ideas demand attention, and which ones are just filler?
Revision as Process Exploration
While AI offers quick drafts and perspectives, real creativity happens during revision. This is where we explore depth and nuance, moving beyond surface-level results. Revision isn’t just about cleaning up — it’s about exploring possibilities and rethinking the structure critically.
Try this: Take your AI-generated draft and pretend you’re a sculptor with a block of marble. What will you chip away to reveal the masterpiece underneath? Rearrange, cut, and refine. Where’s the soul of the piece waiting to emerge?
Shift the Focus to Craft
AI can generate drafts rapidly, but it can’t grasp the human nuances of emotion and resonance that turn raw material into art. The magic lies in refining these ideas into something meaningful. The power of human craft is in the details — the final touches that bring the work to life.
Try this: Approach your rough AI draft like a composer fine-tuning a melody. Where does it need more harmony, and where can you simplify? Focus on refining emotional tone, clarity, and cohesion. How can you make it sing?
Curation for Impact
When AI generates a flood of ideas, curation becomes crucial. The ability to sift through and choose the most promising elements is key. It’s about finding what holds potential and refining it into something meaningful.
Try this: Picture yourself as a director casting a play. Which characters (ideas) will drive the narrative forward? Choose one or two AI-generated outputs to fully develop. Why are they the stars of the show, and how can you give them more dimension?
Reinforce Human Judgment
AI can’t replicate the intuition and sensitivity needed to know when a piece is truly finished. Trust your instincts to develop your personal style and decision-making skills.
Try this: Step away from your work as if you’re a traveler returning to a favorite city after time away. What new details catch your eye? Take a pause — whether it’s for 10 minutes or a day — and return with fresh perspective. What still resonates, and what now feels out of place?
A Gentle Reminder: Failure Is a Critical Part of Learning
I’ve been thinking about this a lot as I work AI into my creative process for the first time. The more I engage with AI in this context, the less interested I become in perfection. Sure, I can create stunning visuals or manipulate concepts quickly, but the process often feels stripped of the messiness I love most — the part where creativity becomes a conversation, whether with myself or a collaborator. With AI, I’m less likely to fail, and honestly, that makes me uncomfortable.
There’s a misconception that failure is something to avoid, especially in creative fields where public exposure is common. But in reality, failure is a critical part of the creative process — it’s not just a bump in the road; it’s the road itself. Navigating failure is where deep learning happens. Failure can either halt the creative process or prompt further exploration. Encouraging students, learners, peers, and even yourself to embrace failure fosters grit, persistence, and a growth mindset — all of which are necessary for long-term (creative) success.
Unlike AI, which can generate endless iterations that appear polished, human creativity thrives in those messy, iterative cycles of trial, error, and reflection. It’s in that imperfect loop where breakthroughs occur — ideas that require human curiosity to push the boundaries further. When we fail, we pause, reflect, and return to the work with new insights and a deeper appreciation for the process.
In a world increasingly focused on quick and efficient outputs — especially with the rise of generative AI — it’s more important than ever to value the process over the product. Mastery isn’t in the final piece; it’s born from the missteps and revisions along the way. When we focus solely on the end result, we miss out on the valuable learning that comes from asking, “What didn’t work, and why?”
How do we embrace these imperfections — not as flaws, but as creative opportunities? How do we fail with our AI tools, using them not just to generate polished work, but to prompt deeper exploration? What new skills do we need to cultivate to curate more thoughtfully, and which old ones should we keep close to ensure that the process remains at the heart of creation?