When Machines Learn to Lie: A Psychologist's Warning About Trust in the Age of AI

May 29, 2025 - Reading time: 7 minutes

Three months ago, one of my patients—let's call her Maria—received what appeared to be a video call from her teenage daughter. The "daughter" was crying, claiming she'd been in an accident and desperately needed money wired immediately. Maria, a devoted mother, didn't hesitate. She sent £3,000 before discovering her daughter was safely at school, completely unaware of the fabricated emergency.

The call was a deepfake. Maria had fallen victim to what psychologists are calling the new frontier of deception—artificial intelligence so sophisticated it can fool our most basic human instincts.

As a clinical psychologist who has studied deception for twenty years, I'm witnessing something unprecedented: the systematic erosion of our ability to distinguish reality from fabrication. The implications go far beyond financial fraud. We're facing a crisis that strikes at the heart of human psychology—our fundamental need to trust.

The Crumbling Foundation of "Seeing is Believing"

For millennia, humans have relied on visual and auditory cues to assess truth. This evolutionary shortcut served us well when the biggest deception we faced was a poker bluff. Today, that same instinct makes us sitting ducks.

Recent research paints a sobering picture:

Finding                                                     Result
Humans correctly identifying AI-generated images (2024)     60%
Humans misclassifying AI images as authentic (2023)         38.7%
Improvement over random chance                              Barely significant

These numbers represent more than statistics—they reveal a fundamental mismatch between human psychology and technological capability. Our brains simply weren't designed for this challenge.

In my practice, I've observed patients developing what I term "digital paranoia": a persistent anxiety about the authenticity of online interactions. One client, a 45-year-old teacher, now screenshots every important text message because she "can't trust anything digital anymore." Another refuses to answer video calls from unknown numbers, convinced they're all deepfakes.

This isn't mental illness—it's a rational response to an irrational situation.

The Psychology Behind Our Vulnerability

Three cognitive biases make us particularly susceptible to AI deception:

The Illusory Truth Effect: Information feels more credible when we encounter it repeatedly. AI can generate thousands of variations of the same false narrative, each exposure making it feel more authentic. I've seen patients become convinced of completely fabricated news stories simply because they appeared across multiple (AI-generated) sources.

Introspection Illusion: We overestimate our insight into our own thinking, and with it our ability to detect manipulation. When I describe Maria's deepfake scenario to other patients, most respond with confident assertions: "I'd never fall for that." This overconfidence is dangerous because it stops us from developing proper defenses.

Evolutionary Trust Bias: Our brains evolved to trust visual and auditory information because, historically, fake faces and voices required enormous effort to create. Now, a teenager with a laptop can generate convincing deepfakes in minutes.

Real-World Psychological Impact

The cases I've documented in my practice reveal alarming patterns:

Relationship Erosion: Couples are increasingly suspicious of each other's digital communications. One patient told me, "I can't tell if that sweet text from my husband is real or if someone's playing a cruel joke."

Decision Paralysis: People are freezing when faced with important choices because they can't trust the information they're receiving. A business owner recently spent weeks paralyzed by indecision after receiving conflicting (and potentially fabricated) market reports.

Hypervigilance: Many patients report exhausting levels of skepticism about everyday digital interactions. The mental energy required to constantly question authenticity is taking a genuine toll.

The Four Pillars of Psychological Defense

After working with dozens of affected patients, I've developed what I call the "4 As" framework for maintaining psychological health in this new reality:

Astuteness: Knowledge as Armor

Understanding AI capabilities isn't just technical—it's therapeutic. Patients who learn about deepfake technology report feeling more in control. Knowledge replaces helpless anxiety with informed caution.

I recommend staying informed about AI developments without becoming obsessed. Set aside 15 minutes weekly to read about new AI capabilities, then move on with your day.

Assessment: The Art of Healthy Skepticism

This isn't about becoming paranoid—it's about developing what I call "calibrated doubt." Question unusual requests, especially those involving money or sensitive information. Cross-reference important news from multiple sources.

One patient now uses a simple rule: "If it surprises me, I pause and verify." This has prevented three potential scams in six months.

Authentication: Tools for Truth

Make use of verification tools, but don't let them become a crutch. Reverse image searches, metadata analysis, and emerging detection software can all help, but remember that the technology arms race means these tools are always playing catch-up.
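For readers comfortable with a little code, here is a minimal sketch of one such check: reading an image's camera (EXIF) metadata with Python's Pillow library. The filename is hypothetical, and the signal is weak in both directions: many AI-generated images carry no camera metadata, but plenty of genuine photos have theirs stripped too, so treat the result as one clue among several, never a verdict.

```python
# A minimal sketch of one authentication check: inspecting EXIF metadata.
# Requires the Pillow library (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path: str) -> dict:
    """Return EXIF tags as a {name: value} dict, or {} if the image has none."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# "suspicious_photo.jpg" is a hypothetical filename used for illustration.
tags = summarize_exif("suspicious_photo.jpg")
if not tags:
    print("No camera metadata found - worth a closer look, but not proof of fakery.")
else:
    for name in ("Make", "Model", "DateTime", "Software"):
        if name in tags:
            print(f"{name}: {tags[name]}")
```

Commercial detection services automate richer versions of this kind of check, but they inherit the same limitation described above: detectors and generators improve in lockstep.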

Action: Building Collective Resilience

The most psychologically protective action is community engagement. Support digital literacy programs, discuss these issues with friends and family, and advocate for better AI regulation. Feeling powerless feeds anxiety; taking action restores agency.

The Path Forward: Therapy for a Deceived Society

We need to think about this challenge the way we approach other public health crises. Just as we developed psychological tools to cope with social media's mental health impact, we must build new frameworks for the AI deception epidemic.

In my practice, I'm seeing promising results from group therapy sessions focused on AI literacy. Patients support each other in developing healthy skepticism while avoiding paranoia. They practice verification techniques and share close calls with deception attempts.

The goal isn't to create a society of cynics—it's to help people maintain appropriate trust levels in an increasingly deceptive digital environment.

A Personal Reflection

Twenty years ago, when patients told me about being deceived, the stories usually involved human manipulators. Now, they're describing interactions with entities that may not be human at all. The psychological impact is fundamentally different—and more profound.

We're not just dealing with individual cases of fraud. We're witnessing the erosion of shared reality itself. When you can't trust what you see and hear, the social contract begins to fray.

But here's what gives me hope: humans are remarkably adaptable. Throughout history, we've developed psychological tools to cope with new challenges. We learned to live with nuclear weapons, adapted to social media, and adjusted to global connectivity.

We can learn to thrive in an AI-influenced world too. But only if we acknowledge the psychological challenge we're facing and respond with the seriousness it deserves.

The future of human trust may depend on it.

Dr. Marc Manddell, MD, Psychiatrist, is a well-known expert in the field of psychiatry, bringing a wealth of knowledge and clinical acumen to our team at adhdtest.ai. Renowned for his compassionate and patient-centred approach, Dr. Manddell is unwaveringly dedicated to directly supporting patients living with ADHD.
