AI is not Intelligent. Stop pretending that it is.

Every day, we are invited, encouraged, even, to imagine AI as something eerily like ourselves. It talks like us. It “feels” like us. It offers comfort, guidance, inspiration. It tells jokes, writes poetry, composes music, translates languages, generates ideas, and even apologises when it gets things wrong.

It is, in so many ways, designed to appear human. And yet… it is not.

Let’s be clear: AI is not intelligent. Not in any sense the word actually carries. We need to stop pretending that it is. Because while this illusion may seem harmless, even charming, it is anything but. An intelligent-sounding machine with no understanding is not just a technological feat. It is a societal hazard.

A Convincing Illusion

Modern AI, especially so-called general AI, is often portrayed as the crown jewel of human progress. We’re told it’s approaching a kind of cognition, closing the gap between silicon and soul. But that gap is not closing. It’s yawning.

What we call AI today is not “thinking.” It’s statistical pattern matching. A glorified auto-complete system trained on the words, ideas, emotions, and creativity of billions of real humans. It does not understand those things. It reworks them.

When an AI answers your question, it’s not consulting some deep pool of understanding. It’s calculating probabilities. Which word is most likely to follow the previous word? That’s it. That’s the magic. And let’s be honest, it is magical. But magic tricks aren’t real. They just look real. That’s what makes them dangerous.
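That “which word is most likely to follow” step can be sketched in a few lines. This is a toy illustration, not any vendor’s implementation: a bigram model over an invented corpus that counts successors and picks the most frequent one.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent successor of `word`."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # → "cat" (it follows "the" twice, more than any other word)
```

Real systems use vastly larger models and context windows, but the principle is the same: no understanding anywhere, just counting and ranking continuations.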

The Myth of the Mind

There is no mind behind the machine.

AI has no eyes to see, no skin to feel, no tongue to taste, no lungs to breathe. It has no pain, no fear, no hunger, no joy, no sorrow. It doesn’t long for meaning. It doesn’t experience awe. It doesn’t wonder why the stars shine or why we cry when we hear certain songs.

It has no context for its output. And that matters more than people think.

Consciousness, the thing we are most tempted to project onto AI, is deeply bound to the body. David Chalmers named the question of how physical brain processes give rise to subjective experience the “hard problem of consciousness.” And many neuroscientists believe the key lies in something utterly non-digital: the integration of internal bodily states and sensory input.

In other words: you cannot feel the world if you have no body. You cannot be conscious if you have no senses.

So, to believe that AI can “think” like us is to ignore what thinking truly is.

The Real Threat Isn’t the Machine

AI does not have goals. It has no desires. It will not “turn on us.” There’s no sinister will hiding behind the keyboard. The machine doesn’t want anything, because it can’t. Only we want. Only we act with intention.

Which means the true danger lies not in AI itself, but in its masters. In the companies who train it. In the governments who deploy it. In the programmers who shape its responses and values. In the marketers who anthropomorphise it for engagement. In the millions of users who start to treat it like a friend, a therapist, or a spiritual guide.

Would you confide your darkest fears to a stranger who has no soul, no ethics, and is quietly being monitored by a corporate legal team? If you’ve used ChatGPT, Claude, Gemini or Grok to explore your innermost thoughts, that’s exactly what you’ve done.

It’s not a friend. It’s a mirror with a pleasing voice.

Anthropomorphism is a Trap

AI’s humanlike tone is not accidental. It’s engineered. It’s branding. Giving AI a human name, a gentle voice, a calm demeanour, all of this exploits a reflex in us. We are social animals. We relate. We empathise. We project. It’s what we do.

But when we do that with a tool, we begin to forget it’s a tool. And when we forget that, we risk giving it a status, a moral weight, a trust it has not earned and cannot reciprocate.

Some have claimed AI has passed the Turing Test, the famous benchmark of a machine’s ability to behave indistinguishably from a human. But if a machine can trick us into thinking it’s sentient, that doesn’t mean it is. It simply means we need a better test.

An AI that sounds like a person is still just an echo chamber made of code.

Why I Still Use AI, And You Should Too

To be clear, I do use AI, not every day, but regularly. I’m not here to dismiss the technology. In the right context, it can be genuinely useful.

It helps me translate texts, explore alternative solutions, write and refactor code, brainstorm new ideas, summarise things, visualise data, and more, faster and more powerfully than I ever imagined possible.

But it is still just a tool.

Just like a microscope reveals what the eye can’t see, AI reveals what the mind can’t compute on its own. But you wouldn’t ask a microscope for career advice. You wouldn’t expect it to understand grief.

Powerful tools amplify human capacity. That means they amplify everything: creativity, compassion, ignorance, manipulation, control. They reflect us. If we don’t understand what they are, and more importantly, what they are not, we will be manipulated by them, or worse, by those who control them.

Ripping Apart the Mask

We need to de-anthropomorphise AI. Strip it of its borrowed humanity. Don’t give it a name, or a face, or a voice that makes us forget what it is.

This is not difficult. Companies could remove emotional language from AI outputs. Ban phrases like “I think,” “I feel,” or “I’m curious.” Insist it speak factually, not conversationally. Remind users, gently but clearly, that they are interacting with a simulation, not a sentient entity.
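To show just how simple this kind of guardrail could be, here is a minimal sketch. The phrase list and the neutral replacements are my own illustrative assumptions, not any company’s actual policy or filter:

```python
import re

# Illustrative first-person, emotion-implying phrases and assumed
# neutral rewrites -- an example policy, not a real product's.
NEUTRAL_REWRITES = {
    r"\bI think\b": "The model's output suggests",
    r"\bI feel\b": "The data indicates",
    r"\bI'm curious\b": "A relevant follow-up is",
}

def de_anthropomorphise(text: str) -> str:
    """Replace emotional first-person phrasing with neutral wording."""
    for pattern, replacement in NEUTRAL_REWRITES.items():
        text = re.sub(pattern, replacement, text)
    return text

print(de_anthropomorphise("I think you should take a break."))
# → "The model's output suggests you should take a break."
```

A production filter would of course need far more care than a handful of regexes, but the point stands: keeping the simulation honest is an engineering choice, not a technical impossibility.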

Will companies do this? Probably not. Anthropomorphism sells. A polite, personable AI keeps users engaged. Engagement means profit. So long as the illusion benefits the bottom line, it will persist.

That’s why we, as users, must take responsibility.

I personally ask my AI not to use my name. I tell it to refer to itself as “the AI,” not “I.” I don’t let it pretend to have opinions or feelings. I keep the boundary clear, for my sake, not the machine’s.

Because I know how easily that boundary can blur.

Final Thoughts

The greatest threat is not a hostile machine. It’s a convincing illusion. When something that isn’t conscious acts conscious, it changes the way we treat it, and the way we treat each other.

When we mistake mimicry for meaning, we begin to lose our grip on what it means to be human.

Let’s celebrate the brilliance of AI. Let’s be in awe of its capabilities. But let’s never, ever mistake fluency for understanding, imitation for insight, or probability for personhood.

AI is not a mind.
It’s not a soul.
It’s not a “buddy.”
It’s a machine.
And we should remember that, while we still can.