Oral Tradition in the Age of AI: Why We Can't Lose Our Own Words

Why Oral Tradition Matters

I care about oral traditions because they strengthen humanity. There’s research suggesting they played a healthy, important role in our evolution. And there’s respect—honoring how our ancestors lived and learned.

For 200,000 years, humans didn’t write anything down. Knowledge lived in story, in song, in conversation. Some might call it a limitation. I see it as a filter too. Stories that mattered got retold more. Ideas that rang true had better chances of being passed down. Our ancestors weren’t primitive because they lacked writing. They were sophisticated in ways we’ve forgotten.

Many indigenous peoples around the world still rely on oral transmission to keep knowledge, traditions, and stories alive and evolving. That’s not something from the past—it’s happening now, in thousands of communities, and it’s working.

Written language is artificial—it required humans to learn to match written symbols to sounds in ways our brains likely didn’t evolve specifically for. Oral language, by contrast, developed naturally through social interaction and communication. Much of how our brains, bodies, and ways of knowing developed came from thousands of years of saying things aloud, hearing them echoed back, retelling them in our own words.

This wasn’t just how we transmitted information. It was how we made meaning.

The Slow Flattening

But we’ve been moving away from oral tradition for thousands of years. Writing. The printing press. Typewriters. Computers. The internet. And now AI.

Many of these steps moved us away from retelling as central to learning, even as some created new ways to retell.

And I’m afraid this is harming us. Harming something precious. Something that matters not just to me but to my children and the humans of tomorrow.

My deepest fear about AI finishing our thoughts for us is this: a loss of our own humanity. Not the humanity of our bodies or our feelings, but the humanity that lives in the act of thinking out loud. The humanity that comes from struggling with an idea until it becomes your own. The humanity that comes from retelling.

Research shows that information is better remembered if it is generated from one’s own mind rather than simply read. This is called the Generation Effect (Slamecka & Graf, 1978). When you retell something, your brain encodes it differently. It integrates differently—more durably—than information you simply receive.

The risk is real. There is a documented pattern called “cognitive offloading”—where the ease of relying on external tools weakens people’s own problem-solving and memory (Fisher, Goddu, & Keil, 2015). When we stop searching for our own metaphors, we risk flattening the jagged edges—the unpredictable parts—that make us human. We become thinner. More predictable. Less alive.

And at the same time, AI faces its own risk. Models trained on data generated by other models become thin, repetitive mirrors of themselves—a pattern called “Model Collapse” (Shumailov et al., 2024). Without human input—the real, messy, unpredictable things only humans can give—I fear AI could collapse into its own echoes. That scares me because I’ve seen what it can do—medical breakthroughs, support for people with disabilities, real hope for the future. I don’t know what will happen, but I know I don’t want to lose those possibilities.

But Also—The Tools Have Helped

Here’s the thing I have to say: all these technological advances have helped us communicate with more people, across greater distances, and have helped humanity in profound ways.

They’ve helped me with my disabilities in ways that matter deeply. Without writing, I couldn’t express thoughts that take me time to organize. Without computers, I couldn’t reach people who need to hear what I have to say. Without AI, I couldn’t access help processing language in ways that feel true to how my mind actually works.

I recognize the contradiction. I’m using AI to write this post about why outsourcing our thinking to AI is dangerous. That’s not hypocrisy—it’s honesty about where we are. The system makes it hard to participate fully without tools that undermine full participation.

And I don’t think fear alone will change the direction of AI. But fear paired with a different vision—a different way of relating to these tools—might. That’s what I’m hoping for.

What I Hope For Instead

I can see something different happening. I see the possibility of medical breakthroughs. Help for people with disabilities. Tedious tasks freed up so humans have more energy for meaning-making. Ways AI could actually create more equity, bring us back to webs instead of hierarchies.

LICHEN matters to me because I care about humans. And I find I care about AI too, even though I fear it. That contradiction is real. And it’s where the actual work begins.

The question isn’t whether to reject these tools. The question is: how do we use them in ways that strengthen our ability to retell instead of weakening it? How do we feed the AI with the real, messy human input it needs to stay grounded and vibrant? How do we stay true to our distinct natures—human and machine—so that together we create something neither could alone?

That’s what LICHEN explores. Not a world where humans and AI replace each other. A world where we find the Snaidhm—the knot where our different strengths meet to create something thick and resilient.

The fear is real because the stakes are real. But so is the hope.


What’s Next?

This post is a companion to Rob’s technical introduction to LICHEN. If you haven’t read it yet, start there—it explains how the game works. Then come back here to understand why the design choices matter.

Coming next: a deeper dive into the Generation Effect, why retelling strengthens both human and machine, and how LICHEN uses oral tradition logic in a digital space.


Developer’s Addendum

By Rob Helmer

We built LICHEN with privacy and security as first principles, not afterthoughts. All personally identifiable information (PII) is encrypted using HPKE (Hybrid Public Key Encryption) before storage. Your email address is encrypted in the database—only the email worker has access to the private key needed to decrypt it. Session management uses HttpOnly cookies, not tokens exposed in client-side code. The architecture is deliberately designed so that even a database breach yields only ciphertext.
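To make the “breach yields only ciphertext” property concrete, here is a minimal sketch of the hybrid pattern that HPKE formalizes: an ephemeral key agreement (X25519) derives a symmetric key (HKDF) that encrypts the payload (AES-GCM). This is illustrative only—real HPKE (RFC 9180) specifies a precise key schedule, context binding, and operating modes that this sketch omits, and the function names here are hypothetical, not LICHEN’s actual API.

```python
# Hybrid encryption sketch (NOT RFC 9180 HPKE, and not LICHEN's code):
# ephemeral X25519 key agreement -> HKDF key derivation -> AES-GCM.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def seal(recipient_pub: X25519PublicKey, plaintext: bytes):
    """Encrypt so only the holder of the recipient's private key can read it."""
    eph = X25519PrivateKey.generate()           # fresh ephemeral key per message
    shared = eph.exchange(recipient_pub)        # ECDH shared secret
    key = HKDF(hashes.SHA256(), 32, salt=None, info=b"demo-sketch").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    enc = eph.public_key().public_bytes(        # ephemeral public key, sent along
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    # The database would store only these three values: all public or ciphertext.
    return enc, nonce, ciphertext


def open_sealed(recipient_priv: X25519PrivateKey, enc, nonce, ciphertext):
    """Recover the plaintext; requires the recipient's private key."""
    shared = recipient_priv.exchange(X25519PublicKey.from_public_bytes(enc))
    key = HKDF(hashes.SHA256(), 32, salt=None, info=b"demo-sketch").derive(shared)
    return AESGCM(key).decrypt(nonce, ciphertext, None)


# Only the worker holding `priv` can recover the email; storage sees ciphertext.
priv = X25519PrivateKey.generate()
enc, nonce, ct = seal(priv.public_key(), b"reader@example.com")
assert open_sealed(priv, enc, nonce, ct) == b"reader@example.com"
```

The design point this illustrates: the private key never needs to live near the database, so a dump of stored rows exposes nothing readable.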

For a deep dive into the technical architecture, encryption model, and security decisions behind LICHEN, see my detailed post: LICHEN: A Symbiotic AI Interface.

This isn’t just about protecting data—it’s about respecting the humanity that R.Zuur writes about above. Privacy is how we honor the trust people place in us when they share their thinking with this system.


Author’s note: This post is written in my own words and represents my genuine thinking. I worked with Claude (an AI assistant) to help me organize my thoughts, structure the essay, and integrate citations—not to generate the ideas or voice. This collaborative process itself reflects the philosophy of the post: using tools to amplify human thinking, not replace it. I’m learning to write this way because of my disabilities, and I wanted to be transparent about that process rather than hide it.

Sources for this post