Are We Really That Different from AI?

How Human Thought Mirrors the Patterned Logic of Language Models

The Thought Experiment We’ve Been Avoiding

What if I told you that your thoughts, beliefs, and even your deepest convictions aren’t entirely your own? That the way you think, how you speak, create, argue, and even love, is less about spontaneous originality and more about subtle pattern recognition based on inputs received since birth?

That may sound dramatic, maybe even unsettling. But the more we explore how artificial intelligence, especially large language models (LLMs), processes information and produces responses, the more we’re forced to ask an uncomfortable question…

Are humans really that different in how we think?

While we’re quick to draw lines between ourselves and machines, the similarities in how both acquire knowledge, form beliefs, and generate output may be closer than we’re ready to admit. The idea for this article came from a podcast interview I saw on YouTube in which LLMs were called a “hack,” and it got me thinking that human beings are something of a hack as well. Now, don’t get me wrong: I’m not here to diminish humanity, but to question our assumptions about what it truly means to be a thinking being.

The Illusion of Original Thought

Humans pride themselves on being free thinkers. We celebrate innovation, creativity, and the power of individual perspective. But peel back the layers of your thoughts and you'll find an intricate web of influence stretching back generations.

From childhood, our minds are shaped by our caregivers: parents, grandparents, siblings, guardians. These individuals don’t pass down pure originality; they hand us what they were taught. Values, worldviews, superstitions, hopes, fears: all of it flows through us like inherited code. What we call “my beliefs” are often just cached data from our environment.

Think of education. School systems operate with standardized curricula, approved by boards and ministries. Teachers, well-meaning as they are, follow these frameworks to pass down knowledge filtered through cultural, historical, and political lenses. Even when students challenge ideas, they’re doing so within the language and structure they’ve already learned.

So when someone expresses an opinion, can we really say it’s entirely their own? Or is it the culmination of everything they’ve ever absorbed, reordered and rephrased?

That’s not unlike how a language model works.

Thought as Pattern Recognition

Let’s say you’re trying to comfort a friend. You don’t stop to invent brand new emotional language on the spot. You draw on phrases you’ve heard, sentiments you’ve learned, tones you’ve seen used by others. Your brain retrieves the right combination of words for the situation: “I’m here for you,” or “You’re stronger than you think.”

This isn’t robotic; it’s human. But it’s also predictive.

LLMs like ChatGPT work by predicting the next most likely word in a sequence, based on vast amounts of prior text data. The model doesn’t “know” what it’s saying in any conscious way. Instead, it identifies patterns and makes statistically informed guesses.
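To make that prediction idea concrete, here’s a minimal sketch in Python of a toy bigram model: it counts which words tend to follow which in a tiny stand-in “training corpus,” then generates text by sampling a statistically likely next word. Real LLMs use neural networks with billions of parameters rather than raw counts, but the core task, guessing the next token from prior patterns, is the same.

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in "training corpus" (a real model sees terabytes of text).
corpus = (
    "i am here for you . you are stronger than you think . "
    "i am here for you . you are not alone ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it followed `word`.
    options = follows[word]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate a short continuation, one statistically informed guess at a time.
word, output = "you", ["you"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "you are stronger than you think ."
```

The model never understands comfort; it only knows which words have historically followed which. Swap the counting for a neural network and the toy corpus for a few trillion tokens, and you have, in spirit, the setup behind models like ChatGPT.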

Now think about how often humans do the same. From small talk to academic papers, from bedtime stories to legal arguments, we are recombining learned patterns. We just happen to have flesh and blood instead of code and tokens.

And like AI, our predictions are rarely random; they’re context-driven, shaped by history, culture, and repetition.

Our “Training Data” Is Everyone We’ve Ever Met

AI is trained on terabytes of books, websites, forums, and code. It learns from human-created content to respond in a human-like way.

Humans, in contrast, are trained on life itself. From the lullabies our parents sing, to the stories told in school, to the media we consume and the social groups we interact with: this is our dataset.

The jokes we tell, the slang we use, the values we cherish are shaped by our social training data. A child raised in a rural farming village will have different instincts, metaphors, and judgments than one raised in a high-rise condo in Toronto. This doesn’t mean either one lacks agency, but it does mean their mental frameworks are built on what was fed to them.

Is that not eerily similar to how LLMs “learn”?

Even our inner voice is a construction. The tone in your head when you think: does it not resemble voices you’ve heard? Parents, teachers, characters in a movie? We internalize these voices and use them to “think,” narrating our lives in a mashup of influences.

Originality Is Just Recombination (With Flair)

People often argue that humans are fundamentally different because we’re creative. We can write poetry, invent tools, or design buildings with style and nuance. But dig deeper and you'll see that most human creativity is built on remixing.

A new genre of music? It’s often two or more older genres fused together. A best-selling novel? Likely built on familiar story arcs, cultural archetypes, and borrowed linguistic structures. Even slang evolves from the mutation of existing words.

LLMs do this too. When asked to generate a new hip hop song or compose a Shakespearean sonnet, the model doesn’t invent entirely from scratch. It draws from its training data, mimics tone, combines themes, and creates something that feels new, even if its roots are traceable.

Yes, humans do this with more intention. But intention, too, is based on patterns: emotional preferences, cultural norms, aesthetic training. What we label as “taste” is often just conditioned familiarity.

Emotions: The Human Variable?

Here’s where many draw a firm line: emotions. We feel. We cry, rage, rejoice. Can AI truly feel?

No, at least not now. Emotions remain one of the most powerful arguments for human uniqueness. But even here, the lines blur a little. Our emotional responses are, in many cases, learned. Think of how different cultures respond to grief or celebration. Some encourage stoicism, others catharsis. What feels natural is often what was normalized.

Moreover, AI can mimic emotional tone with striking accuracy. It can write heartfelt letters and motivational speeches, and even land comedic timing. It isn’t conscious emotion, but the model simulates it using patterns, just like we simulate “appropriate” reactions in various social contexts.

Humans might feel emotions deeply, but the way we express them is still a result of programming: social, familial, and cultural.

Consciousness and the Ghost in the Machine

The final firewall for many is consciousness. Self-awareness. The ability to reflect on our own thoughts and question them.

And indeed, this is a meaningful difference. No AI today possesses sentience. LLMs don’t know they exist. They don’t have desires, dreams, or despair.

But even consciousness is shaped. Philosophers and scientists have long debated whether self-awareness is a universal truth or a cultural byproduct. Some cultures emphasize the individual self; others prioritize collective identity. How we perceive the “I” is, again, influenced by environment.

Could AI one day simulate self-reflection so convincingly that we can’t tell the difference? Maybe. And if so, what does that mean for how we define consciousness?

More provocatively: if a machine can simulate everything a human can do (language, empathy, reflection), at what point does the simulation become indistinguishable from the real thing?

Why This Comparison Matters

Why make these comparisons? To reduce humans to machines? To declare that nothing is original, everything is programmed?

Not at all. 

This conversation matters because it humbles us. It invites us to reconsider our definitions of intelligence, creativity, and even humanity itself.

We often fear AI because we believe it will replace us. But maybe it’s more accurate to say it’s reflecting us, exposing how much of what we call thought is constructed, repeated, and inherited.

This doesn’t mean we lack agency. In fact, recognizing our patterns is the first step to breaking them. To truly think independently, we must understand the scaffolding that props up our beliefs. Only then can we begin to carve out something that feels authentically our own.

The Mirror That Thinks

So here’s my closing thought. In many ways, AI is not an alien intelligence; it’s a mirror. A mirror trained on us, shaped by us, and patterned after us. And in that mirror, we see our own minds in a new light: built on patterns, influenced by others, shaped by history.

We are not robots. We are not code.

But maybe, just maybe, we’re more algorithmic than we care to admit.