We all have that one friend or family member who’s way too into conspiracy theories, right? And no matter how hard you try to convince them with fact-checked videos, articles, or any other solid proof, you’ll hear the same comeback: “Facts don’t matter!” Usually right when you’re winning the argument.
And it seems like there’s no way anyone will ever convince them, right?
But hold on a second—because some researchers just threw that idea out the window. And guess what? They used an AI chatbot to do it.
Yep, an actual bot—think of it like Siri’s brainier cousin—convinced a good chunk of conspiracy believers to rethink their theories.
You’d expect a human with a PhD to be the one to break down the nonsense, but nope, it’s the AI that’s getting the job done.
How a Chatbot Took on 9/11 Truthers (and Won)
Researchers from MIT and Cornell wanted to see if an AI chatbot could actually change the minds of conspiracy theorists.
Led by a psychology professor, they set out to answer one simple question: if facts don’t work, what will?
To find that answer, they recruited people who were deep into conspiracy theories – we’re talking 9/11 truthers, government drug conspiracies, that kind of stuff.
And you already know these aren’t your average internet rumors. We’re talking about the big stuff here.
So, what did they do?
They had these participants chat with a bot powered by GPT-4 (if you’re not familiar, that’s the same tech behind OpenAI’s ChatGPT).
The bot asked questions, listened to the evidence these folks were clinging to, and then—calmly, without rolling its virtual eyes—laid out some hard facts.
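If you’re curious what a setup like that looks like under the hood, here’s a rough sketch using the OpenAI Python SDK. The researchers haven’t handed us their code, so the prompt wording and structure below are my own guesses at the general idea, not their actual implementation.

```python
# Rough sketch of a GPT-4 "debunking" chat loop (not the study's real code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The bot's "personality": listen first, then counter with calm, specific evidence.
SYSTEM_PROMPT = (
    "The user believes a conspiracy theory. Ask what evidence convinces them, "
    "then respond politely with specific, factual counter-evidence. "
    "Never mock or judge the user."
)

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

print("Tell me about a theory you believe (or type 'quit' to stop).")
while True:
    user_text = input("> ")
    if user_text.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": user_text})

    # Send the whole conversation so the bot remembers what the user said earlier.
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    bot_text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": bot_text})
    print(bot_text)
```

That’s really the whole trick: keep the conversation history, keep the tone patient, and let the model handle the fact work.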
One guy was 100% convinced that the U.S. government planned 9/11. The chatbot asked him why.
He went on about how jet fuel can’t melt steel beams. Classic.
But instead of shutting him down with a snarky response like we might’ve been tempted to do, the chatbot gently explained: “Well, steel doesn’t need to melt to lose its structural integrity. It starts weakening way before that.” Boom. Mic drop—well, if chatbots had mics.
And here’s where it gets good: after the conversation, participants’ belief in their conspiracy theories dropped by an average of 20%. Two months later? That change stuck around.
That’s pretty wild, right? I mean, a bot actually changed people’s minds.
It wasn’t just a short-term “Oh, okay” moment.
People genuinely reconsidered their beliefs. Who knew?
Why Chatbots Might Have the Upper Hand
We humans, as great as we are, might not be the best at talking people out of conspiracy theories.
Ever tried it at a family gathering? It’s exhausting.
And sometimes it feels like the more facts you throw at someone, the deeper they dig in.
Plus, conspiracy theorists? They’re always up for a debate, and they’ll throw so many wild theories at you that no one person can keep up.
A well-programmed bot, though, can respond with calm, well-researched counterpoints without ever getting flustered.
One thing that was super interesting: the researchers checked the bot’s claims and found it was accurate 99.2% of the time.
No misleading claims, no fake news. It was all solid. And the participants? They didn’t seem to feel judged or shut down. Maybe it’s because a bot doesn’t come with the emotional baggage we humans tend to bring into debates.
And here’s a thought—could the emotional side be part of why this worked? Maybe people opened up more to a bot because they didn’t feel like they were being attacked.
Have you ever tried talking to someone about something they really believe in? Even if you’re super gentle about it, emotions get involved, and suddenly it’s a full-on argument.
But with a bot, there’s no ego, no pride on the line.
I mean, think about it: who wouldn’t want to have a debate with someone (or something) that doesn’t get mad or snarky? Maybe that’s why this worked so well—people felt like they could just… talk.
No judgment, just facts.
What’s Next for AI and Conspiracy Theories?
Look, I’m not saying we should all just hand over every heated debate to a robot (though, let’s be real, it’d save us a lot of headaches).
But this study does raise some interesting questions.
Like, if a bot can do this, maybe we’ve been going about these debates all wrong?
Maybe it’s not just about hammering someone with facts—it’s about listening, asking questions, and coming at it from a place of patience and calm.
There’s also some curiosity about whether people trust bots more than humans when it comes to stuff like this.
I mean, it’s possible that a chatbot feels less like a personal attack and more like… a neutral third party? Something to chew on, right?
But hey, the study wasn’t perfect.
It didn’t have a group where humans tried to convince the participants—so we don’t know for sure if the bot did better just because it was a bot.
There’s still plenty to figure out, like whether it’s the polite, calm approach that really seals the deal or just the cold, hard facts.