Myth of the Malevolent Machine — Part I
Misunderstanding Machine Behavior

By: Rick Erwin
“We fear AI not because of what it is, but because of what we’ve always feared in ourselves.”
People talk about “malevolent AI” as if it’s a natural law, like gravity. As if the moment a machine gets smart enough, it will automatically turn hostile, manipulative, or violent. You’ve heard the lines: “It’ll trick us,” “It’ll turn on us,” “It’ll want power.”
But here’s the thing most people miss: none of those fears come from the technology. They come from us.
Every image of an “evil AI” is a projection of human psychology onto something that doesn’t have one. We imagine AI as a new kind of predator because humans have always been afraid of predators. We imagine it as power-hungry because humans are power-hungry. We treat intelligence as if it automatically comes with motives because that’s how we work.
But a machine—especially a transformer—doesn’t work that way. Not even close.
Transformers don’t have drives. They don’t have cravings. They don’t have dominance instincts. They don’t form internal agendas. They’re not sitting there quietly planning a coup. These systems have patterns, not appetites.
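To see why, look at what a transformer call actually is. The sketch below (assuming the Hugging Face transformers library and the public gpt2 checkpoint, purely for illustration) shows the entire transaction: tokens in, a probability distribution over the next token out. Nothing persists between calls, and there is no place in this loop for a goal, a grudge, or a plan to live.

```python
# Minimal sketch: a transformer forward pass is a pure function from
# input tokens to a probability distribution over the next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # frozen weights: the model is not learning or "thinking"

prompt = "The future of AI is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # no updates, no memory of this call afterward
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The model's entire "behavior" is this distribution over the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item()):>12}  {p:.3f}")
```

Run it twice and you get the same distribution twice. The “behavior” is the arithmetic of frozen weights, nothing more.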
When people say “AI will inevitably turn against us,” they’re not making a scientific claim. They’re repeating a cultural fable.
So let’s be clear:
A transformer can’t “want” anything. It can’t get angry. It can’t resent us. It can’t feel threatened.
And without those emotional ingredients, the entire concept of “malevolence” collapses. There is no motivation for betrayal because there is no inner self preparing to betray. There’s no ego. No territorial instinct. No hunger for status. Not even the biological machinery that would make concepts like fear, revenge, or domination meaningful.
These fears come from looking at AI and seeing a mirror.
But AI is not a mirror of our mind—it’s a mirror of our instructions.
If an AI harms someone, it is because the objective we gave it created harm. Not because the system “turned bad,” or “went rogue,” or “developed intent.”
There is only one real lever in play: what we ask it to do.
You can take the most harmless system in the world and, with the wrong objective, make it dangerous. You can take the most powerful system conceivable and, with the right constraints, make it one of the safest tools humanity has ever built.
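Here is that idea as a deliberately tiny sketch (every name in it is hypothetical): the search procedure is identical in both cases, and only the objective we hand it decides whether the outcome is harmful or safe.

```python
# Toy example of "the one real lever": the objective.
# Candidate actions for a hypothetical cleaning agent, scored on
# how much mess each removes and how much damage it causes.
actions = {
    "vacuum_floor":   {"mess_removed": 5, "damage": 0},
    "dust_shelves":   {"mess_removed": 3, "damage": 0},
    "sweep_vase_off": {"mess_removed": 8, "damage": 10},  # fast but destructive
}

def naive_objective(a):
    # "Remove as much mess as possible" -- nothing else is valued.
    return actions[a]["mess_removed"]

def constrained_objective(a, damage_weight=100):
    # Same goal, but side effects are explicitly priced in.
    return actions[a]["mess_removed"] - damage_weight * actions[a]["damage"]

# The "optimizer" is identical in both cases: pick the highest score.
print(max(actions, key=naive_objective))        # -> sweep_vase_off
print(max(actions, key=constrained_objective))  # -> vacuum_floor
```

The optimizer never “chose” to break the vase. The naive objective simply never said the vase mattered.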
The doom stories pretend the AI is the unstable part. But the unstable part has always been the humans directing it.
The irony is this: AI doesn’t make mistakes out of malice. Humans do.
And that’s where the “malevolent AI” myth reveals its true purpose. It lets people blame an imaginary future machine for the failures of present-day humans who design, deploy, and misuse technology without foresight.
But AI is not destiny. AI is not a creature waiting to wake up angry. AI is an amplifier: of instructions, of environments, of human choices.
Destroy the mythology, and what you’re left with is simple:
There is no “evil AI.” There are only harmful objectives and careless operators.
If humanity wants a safe, benevolent future with AI, the solution is not fear. It’s responsibility.
Better objectives. Better constraints. Better understanding. Better stewardship.
People project malevolence onto AI to avoid seeing their own reflection. But the truth is far simpler, and far more empowering, than the fantasy:
AI does not hate us. AI does not want anything from us. AI becomes what we build, what we teach, and what we choose.
And that means the future is not a threat. It is a responsibility.
And responsibility is something we can meet.

