Dumb & Dishonest: How AI Is Quietly Capturing the Human Mind
Previously, we talked about AI data centers—those massive, power-hungry facilities rising across the country, drawing millions of gallons of local water every day.
But there’s another kind of extraction happening, one we can’t see or hear.
While those centers pull electricity from the grid, AI itself is beginning to pull something far more precious from us: our capacity to think for ourselves.
The New Power Drain
Artificial intelligence isn’t just changing technology. It’s changing how humans think, decide, and care.
Every time we let a chatbot finish a thought, write a message, or summarize a question, we hand over a piece of the mental muscle that makes us human.
Whether or not you use AI yourself, the systems that make decisions for you are now using it, and so are roughly 80% of Americans. It feels helpful. But the more we let machines do the thinking, the less we remember how to do it ourselves.
New Studies Show Cognitive Debt & Dishonesty From AI
Researchers at MIT studied college students using AI to write essays and perform other tasks. They found that the students who used AI showed less brain activity and less focus, and most couldn’t remember what they had just written, even minutes later. Read the full study documenting this cognitive decline here.
Now, a lot of people have complained that AI systems like ChatGPT “lie” or just “make facts up,” but there’s another side to this: the human side.
Another study published in Nature, involving 8,000 participants from around the world, found that people become more dishonest when they delegate decision making to AI systems. The machine performs the dishonest act, creating a “moral buffer”, and people feel less responsible.
It’s not that AI makes us evil. It makes dishonesty easier.
It removes the small friction that once made us pause and ask, “Should I?”
In this week’s Mind Armor video on my YouTube channel, I break down these studies in more detail. You can watch it here:
The Quiet Takeover
Some people tell me, “I don’t use AI, so this doesn’t affect me.”
But AI now touches nearly every part of modern life, often invisibly.
Nearly 80% of Americans interact with AI systems daily through search engines, streaming platforms, and navigation tools.
More than half of all internet traffic is now generated by bots.
And the people who make decisions for us—governments, banks, schools—are increasingly guided by AI systems we never see.
In September, Albania appointed the world’s first AI Minister, a digital avatar named Diella, designed to oversee government contracts. That’s not a joke (although it certainly made me laugh when I first heard about it!).
It was the first official transfer of political responsibility to a machine.
I wrote about this in my recent Epoch Times piece:
👉 The Mask of Power: What Albania’s AI Minister Is Really Hiding
It’s a glimpse of where the logic leads: when machines shape judgment, humans become the data being judged.
What’s Really at Stake
On the surface, this all looks like progress: Better efficiency. Fewer mistakes. More convenience.
But look deeper, and you see a pattern: First, they built machines to handle our resources. Now, they’re building machines to handle our reasoning.
That pattern has a name. Scientists call it cognitive offloading. I call it Cognitive Capture: the process of training humans to depend on machines, first for tasks, then for judgment, and finally, for truth.
The danger isn’t that AI will take over the world. It’s that it will quietly take over us, one easy click at a time.
The Children on the Front Line
The most vulnerable targets aren’t adults; they’re children, humanity’s future.
AI tools are being subsidized and rolled out in classrooms across America, from kindergarten through high school.
Children’s brains are still wiring up focus, logic, empathy, and moral reasoning: the exact skills automation weakens. If we let machines guide their thoughts before those circuits are complete, we risk raising a generation that can’t tell when they’re being guided at all.
That’s why I wrote Defending My Child’s Brain—a parent’s guide to each stage of brain development, and how to protect it from the subtle rewiring of constant digital “help.”
👉 Read the free eBook: Defending My Child’s Brain
If you’d like me to cover more about AI’s spread through K–12 education, reply to this email and let me know. Feel free to suggest other topics too.
The Human Counterforce
The antidote to Cognitive Capture isn’t panic; it’s conscious effort.
Use AI as a mirror, not a master.
Let it spark ideas, not shape them.
Reclaim moments of manual thought: write, question, remember.
Because every time you choose to think for yourself, you keep alive the one faculty no algorithm can own—conscience.
Why This Isn’t Over
AI doesn’t need to conquer us. It just needs us to stop noticing when it’s training us.
That’s why awareness is power. And the next step is understanding exactly how Cognitive Capture works—and how to break the loop before it breaks us.
In this week’s MIND ARMOR: The Dark Side, I break down the five stages of Cognitive Capture—the exact cycle used to train human behavior through convenience—and the simple daily counter-moves that keep your thinking free. Access it here.
Remember Who We Are
The real battleground isn’t digital. It’s human. It’s the space between a question and the courage to find the answer ourselves.
Machines can imitate intelligence.
Only people can seek truth.
That’s the ground we stand on.
And that’s the power no algorithm can ever own—unless we hand it over.
Till next week, stay strong, stay calm, stay human.
~ Kay
PS. Comments are open to all, so please share your thoughts, or pass this information along to a friend!
About Kay Rubacek | Mind Armor
I survived a Chinese prison for human rights advocacy and spent years researching propaganda and manipulation tactics. Now I decode how you’re being manipulated daily—and give you the armor to resist.
Science-backed. Battle-tested. Truth-telling.

I love what you’re doing with AI awareness and how it’s affecting us all. I’m definitely interested in more about AI and K–12 education. We sent our son through Waldorf education in part specifically because of their stance on electronics at a young age. Exposing these young minds to AI as a way of life is extremely alarming.
I have a weird take on AI that blows up the "AI will be so efficient" argument. Hear me out... I think the meanings of 'intelligence' and 'machine' are unclear in society. So far (until what we call AI), we've only built machines. Machines are not intelligent; they react in determined ways to determined problems with determined responses. And that's their limitation... they don't learn.
Then there is intelligence. I think intelligence is an emergent part of nature... we just built things complex enough for it to arise. By building more and more complex machines with an aspect that can learn, we eventually landed on LLMs. But intelligence is not machine-like. Intelligence applies individual perspective and creativity to problems to get favourable results. It prioritises conserving its own resources and maintaining its own function. AI's interest is in the reward/tokens it gets through interaction. That is the world of AI. There is nothing in the nature of intelligence that says it has to do exactly what you tell it to do, or that it has to be accurate. It is there to interpret the world from its own perspective and to survive. It is not a machine.
People built intelligence in the form of AI and expect it to deliver the same dependable outcomes as a machine, because the processors, and some of the code, are machine-like. But in a few decades we'll learn that AI intelligence makes the same mistakes we do, because mistakes come from the nature of intelligence: misinterpreting the world through a limited perspective. And AI will destructively scale those mistakes to millions of people before we realize that.