Why You Think Propaganda Only Works on Everyone Else

TL;DR: The third‑person effect is our tendency to believe propaganda and persuasive media influence others far more than ourselves—a bias exploited by political campaigns, advertisers, and social media algorithms. Research shows we systematically overestimate our immunity to manipulation while underestimating others' critical thinking, leading to support for censorship and vulnerability to targeted misinformation. The only effective counter? Recognizing that we're all susceptible, cultivating metacognitive awareness of our own biases, and building media literacy that focuses on internal cognitive processes rather than simply critiquing others' media consumption.
You've probably felt it before: scrolling through your social media feed, you spot a friend sharing yet another dubious political meme or falling for an obvious marketing trick. "How could they believe that?" you wonder. You'd never fall for such obvious manipulation.
Except here's the uncomfortable truth: you probably already have. And the very confidence that tells you otherwise is precisely what makes you vulnerable.
Welcome to the third‑person effect—one of the most pervasive cognitive biases shaping our media landscape. First identified by sociologist W. Phillips Davison in 1983, this phenomenon describes our tendency to believe that mass communications influence the attitudes and behavior of others far more than our own. It's the psychological sleight of hand that lets propaganda work its magic while we remain blissfully unaware.
And in 2025, as AI‑generated content floods our feeds and psychographic targeting grows ever more sophisticated, understanding this bias isn't just academically interesting—it's essential for democratic survival.
Davison's discovery emerged from an unexpected source: World War II. He documented how Japanese forces during the Pacific campaign dropped leaflets on Allied troops, believing the messages would demoralize soldiers. But here's the twist—the soldiers themselves weren't particularly affected. Instead, military commanders worried that other troops would be influenced, leading them to enact policies based on perceived rather than actual vulnerability.
This pattern—overestimating others' susceptibility while underestimating our own—has since been replicated in over 200 studies across dozens of countries. A 2008 meta‑analysis by Sun, Pan, and Shen found an average effect size of r = .13 for the perceptual component, and the bias appears consistently across cultures, demographics, and message types.
The effect grows dramatically stronger when messages are perceived as "undesirable"—violence, pornography, hate speech, or deceptive advertising. For these socially negative messages, a 2024 reanalysis found effect sizes of d = 0.83, meaning people rate themselves as substantially less influenced than others. Paradoxically, for "desirable" messages like public health campaigns or inspirational content, the effect reverses into what researchers call the first‑person effect (d = ‑0.47)—we suddenly believe we're more influenced by positive messages than others are.
This asymmetry reveals something profound: the third‑person effect isn't just about overconfidence. It's rooted in self‑enhancement motivation, the fundamental human drive to see ourselves as smarter, more rational, and more resistant to manipulation than our peers.
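To make those effect sizes concrete, here is a minimal sketch of how the perceptual gap is usually quantified in survey studies: respondents rate a message's influence on themselves and on "the average person," and the standardized difference between the two sets of ratings is the third-person perceptual differential. The ratings below are invented for illustration; they are not data from any study cited here.

```python
import math
import statistics

# Hypothetical 1-7 survey ratings for "How much does this ad influence you?"
# versus "How much does it influence the average person?" (illustrative only).
self_ratings = [3, 4, 2, 5, 3, 4, 2, 5, 3, 4]
other_ratings = [4, 5, 3, 6, 4, 5, 3, 6, 4, 5]

# Third-person perceptual differential: mean(others) minus mean(self).
# A positive gap means respondents see others as more influenced than themselves.
gap = statistics.mean(other_ratings) - statistics.mean(self_ratings)

# Cohen's d standardizes the gap by the pooled standard deviation; this is the
# form in which figures like d = 0.83 (or -0.47 for desirable messages) are reported.
pooled_sd = math.sqrt(
    (statistics.variance(self_ratings) + statistics.variance(other_ratings)) / 2
)
d = gap / pooled_sd

print(f"Perceptual gap: {gap:.2f} scale points, Cohen's d = {d:.2f}")
```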
The third‑person effect didn't begin with social media, or even with television. Its roots stretch back through every major shift in communication technology.
When radio emerged in the 1920s, critics warned that its persuasive power would transform listeners into mindless automatons—other listeners, of course. The critic was always immune. During the 1950s television boom, parents worried that violent programming would corrupt children's minds, leading to the first calls for content regulation based entirely on presumed influence on others.
The printing press sparked similar fears five centuries earlier. Religious and political authorities banned books not because they themselves felt swayed, but because they assumed the masses lacked their sophisticated discernment. Each technological revolution in media has triggered the same pattern: elites perceive new forms of communication as dangerously influential on others while remaining confident in their own immunity.
What's changed in the digital age isn't the existence of the bias—it's the scale and precision with which it can be exploited.
Three interconnected cognitive mechanisms drive the third‑person effect, each amplifying the others in a self‑reinforcing cycle.
Selective Perception and Memory
When we encounter media messages—a news article, an advertisement, a campaign speech—we process information through radically different lenses for ourselves versus others. Research by Vallone, Ross, and Lepper in their classic 1985 study on the "hostile media effect" demonstrated this beautifully: they showed identical news coverage of Middle East conflicts to pro‑Israeli and pro‑Palestinian students. Both groups perceived the same coverage as biased against their position and favorable to the opposing side.
This wasn't dishonesty; it was selective attention. Each group remembered different segments of the coverage, applied stricter standards to evidence supporting the opposing view, and interpreted ambiguous content through their preferred frame. When we imagine how others respond to the same content, we assume they'll be swayed by the elements we dismissed as manipulative.
Motivated Reasoning and Attribution Asymmetry
When we resist a persuasive message, we attribute our resistance to our superior critical thinking, education, or values—stable personal qualities. When others disagree with us or fall for misinformation, we attribute their beliefs to external influences: propaganda, manipulation, lack of intelligence.
This attribution asymmetry is core to what psychologists call self‑enhancement bias. A 2024 study using the Misinformation Susceptibility Test (MIST) found that nearly 40% of participants who claimed they were adept at spotting fake news were actually less capable than average at distinguishing real from false headlines. Generation Z participants, despite being digital natives, performed worse than older cohorts—yet accurately predicted their poor performance, suggesting some awareness of the gap between confidence and competence.
Psychological Distance and Abstract Thinking
Construal Level Theory offers a third explanatory mechanism. The further something is from us—temporally, spatially, socially, or hypothetically—the more abstractly we think about it. When we imagine a distant other person encountering propaganda, we think in broad, stereotypical terms: "vulnerable," "gullible," "easily manipulated." When we imagine ourselves in the same situation, we think concretely about our specific reasoning process: how we'd fact‑check, cross‑reference, and apply skepticism.
This distance‑dependent abstraction means we literally can't imagine others' mental experience with the same granular detail we apply to ourselves. The result? We systematically underestimate others' cognitive sophistication while overestimating our own.
The third‑person effect wouldn't matter much if it remained confined to self‑perception. But the bias has a powerful behavioral component that shapes everything from policy to platform design.
Support for Censorship and Regulation
The strongest predictor of supporting media censorship isn't your own perceived vulnerability to harmful content—it's your belief that others are vulnerable. Research consistently finds that people who exhibit stronger third‑person perceptions are significantly more likely to support restricting pornography, violent video games, hate speech, and even political advertising.
A comprehensive analysis found that the behavioral effect size (d = .646) is actually larger than the perceptual effect. This means the bias doesn't just change what we think—it changes what we do. Politicians and advocacy groups regularly exploit this pattern, framing regulatory proposals around protecting vulnerable others rather than acknowledging universal susceptibility.
The Cambridge Analytica Paradigm
The 2018 Facebook‑Cambridge Analytica scandal offered a masterclass in third‑person exploitation. By harvesting data from 87 million Facebook profiles, Cambridge Analytica built psychographic models that predicted personality traits, political leanings, and vulnerability to specific message frames.
Here's where the third‑person effect entered: the firm crafted "dozens of ad variations on different political themes such as immigration, the economy and gun rights, all tailored to different personality profiles." Each voter saw messages designed to feel personally persuasive while appearing obviously manipulative if shown to someone with a different profile.
Christopher Wylie, a former Cambridge Analytica employee, later confessed: "We exploited Facebook to harvest millions of people's profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on."
The scandal worked because most people recognized in the abstract that targeted advertising influences behavior, but couldn't perceive how the specific ads they personally encountered were designed for them. The third‑person effect created a blind spot in exactly the place where exploitation occurred.
The 2024 Election Misinformation Ecosystem
Data from the 2024 U.S. presidential election revealed stark realities about media consumption: Americans who primarily consumed Fox News and conservative media were significantly more likely to answer factual questions about inflation, crime, and immigration incorrectly than consumers of other cable news networks or national newspapers. Yet these same consumers reported higher confidence in their ability to identify media bias and misinformation.
This isn't about partisan intelligence—it's about information ecosystems. The third‑person effect allows each media bubble to maintain internal coherence: members believe they're seeing through propaganda that everyone else falls for, when in reality they're consuming carefully tailored content designed to feel like objective truth.
Algorithmic Amplification of Bias
Social media platforms' engagement‑driven algorithms create a perfect breeding ground for third‑person effects. Research published in the Proceedings of the National Academy of Sciences found that just 15% of the most habitual Facebook news sharers were responsible for spreading 30–40% of all fake news. These habitual sharers forwarded fake news six times more frequently than occasional users—not because they were less intelligent, but because platform reward systems had conditioned automatic sharing behavior.
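The concentration itself is unsurprising once you model sharing activity as heavy-tailed. The toy simulation below (arbitrary parameters, not fitted to the PNAS data) shows how a small minority of highly active accounts ends up responsible for a large fraction of everything forwarded.

```python
import random

random.seed(42)

# Toy model: each user's sharing rate is drawn from a skewed (lognormal)
# distribution. The parameters are arbitrary illustrations, not estimates
# from the study discussed above.
rates = sorted((random.lognormvariate(0, 0.7) for _ in range(100_000)), reverse=True)

top_15_percent = rates[: len(rates) * 15 // 100]
share_from_top = sum(top_15_percent) / sum(rates)

print(f"Top 15% of sharers account for {share_from_top:.0%} of all shares")
```

Even with this mildly skewed distribution, the most active 15% carry roughly a third of the total volume, which is why conditioning habitual sharing in a small group is enough to shape what everyone else sees.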
The algorithmic curation that determines what you see creates a curated reality that feels comprehensive. A 2023 study found that 55.2% of Americans endorsed the "News‑Finds‑Me" perception—the belief that if important news happens, it will reach them through social media without active searching. This passive consumption breeds overconfidence: if you believe you're seeing everything important, you naturally assume you'd notice propaganda when it appears.
Meanwhile, the platform hides from view both the algorithmic curation shaping your feed and the very different feeds your neighbors see. This invisibility of infrastructure makes the third‑person effect nearly impossible to overcome through individual effort alone.
Acknowledging universal susceptibility doesn't just make us more humble—it opens doors to genuinely effective countermeasures.
Media Literacy That Actually Works
Traditional media literacy focuses on teaching people to identify manipulative techniques in content: recognizing emotional appeals, spotting logical fallacies, checking sources. These skills matter, but research shows they're insufficient because they reinforce the third‑person effect. Students learn to see manipulation in others' favored media while remaining blind to it in their own.
A 2024 study indexed in PubMed tested a different approach: confirmation bias awareness. Researchers exposed participants to material explicitly explaining how confirmation bias operates, including examples of how it affected their own thinking patterns. The intervention worked: participants showed reduced susceptibility to misinformation and improved ability to discern true claims from false ones.
The effect was strongest among participants initially most skeptical of COVID‑19 vaccines—precisely the group most likely to dismiss traditional fact‑checking as biased. Why? Because the intervention targeted internal cognitive processes rather than external content evaluation. It shifted focus from "that message is manipulative" to "my brain systematically processes information in biased ways."
States like New Jersey and Texas have passed legislation making news and media literacy mandatory in schools. Early evidence suggests that teens who receive this training are more likely to actively seek out news rather than rely on passive consumption—breaking the News‑Finds‑Me cycle.
Platform Design for Transparency
Some researchers argue the third‑person effect could be weakened by making algorithmic curation visible. Imagine if social media platforms clearly labeled each post with information like: "This post was selected for you because you previously engaged with similar content from sources rated [credibility score] and sharing this content typically generates [engagement metrics] among users similar to you."
Such transparency wouldn't eliminate bias, but it would make the infrastructure of influence visible. Users could see that they're not receiving a neutral feed but a carefully curated selection designed to maximize their engagement—which might reduce the confidence that fuels third‑person perceptions.
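As a thought experiment, that kind of disclosure could travel as structured metadata attached to each feed item. The sketch below is entirely hypothetical: no platform exposes fields like these today, and every name in it is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FeedTransparencyLabel:
    """Hypothetical per-post disclosure a platform could show alongside content.

    These fields do not correspond to any real platform API; they simply encode
    the kind of label described above.
    """
    selection_reason: str        # why the ranking system surfaced this post
    source_credibility: float    # third-party credibility rating, 0.0 to 1.0
    predicted_engagement: float  # model's estimate that a user like you will engage

label = FeedTransparencyLabel(
    selection_reason="You engaged with three similar posts this week",
    source_credibility=0.42,
    predicted_engagement=0.87,
)

# Rendered next to the post instead of hidden behind the feed:
print(
    f"Why you're seeing this: {label.selection_reason} "
    f"(source credibility {label.source_credibility:.0%}, "
    f"predicted engagement {label.predicted_engagement:.0%})"
)
```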
AI as a Double‑Edged Sword
Recent experimental research offers a surprising finding: when news articles are labeled as AI‑generated rather than human‑authored, readers perceive them as less biased. Two studies with 1,197 participants found that labeling news as AI‑generated significantly reduced hostile media perceptions (HMP), with an effect size of –0.53—but only among readers with negative or moderate prior attitudes toward AI.
The mechanism appears to be the "machine heuristic": people assume machines lack human biases, so content produced by algorithms feels more objective. Paradoxically, this makes AI‑generated content more persuasive even when it contains identical information to human‑written articles.
This represents both promise and peril. If audiences learn to trust AI‑generated analysis specifically because they believe it transcends human bias, bad actors will simply slap "AI‑generated" labels on propaganda. The third‑person effect will migrate: "AI might fool others, but I can still think critically."
Yet if we could genuinely develop AI systems that flag logical fallacies, track claim consistency, and highlight missing perspectives—and if these systems were transparent and auditable—they might offer scaffolding for human cognition, compensating for our systematic blind spots.
Every solution to the third‑person effect faces the same meta‑problem: interventions designed to help people recognize bias can themselves become tools for exploitation.
The Weaponization of Media Literacy
Teaching people to identify propaganda techniques can backfire if they apply those techniques selectively. A media‑literate partisan who learns about "framing" and "loaded language" might become better at dismissing opposing viewpoints as manipulation while remaining blind to identical techniques in their preferred sources.
This isn't hypothetical. Research on the "hostile media effect"—the tendency for partisans on opposite sides to view identical coverage as biased against them—shows that media sophistication can actually increase perceived bias. The more you know about persuasion techniques, the more readily you can spot them in content you disagree with, while remaining blind to them in content that confirms your priors.
The Privacy–Transparency Tradeoff
Making algorithmic curation transparent requires platforms to reveal their recommendation systems—which could enable bad actors to game those systems more effectively. Showing users why specific content was selected for them might also require revealing what the platform knows about them, raising privacy concerns.
Moreover, most users don't actually want transparency if it requires cognitive effort. A platform that clearly explains its algorithmic choices might lose users to competitors offering simpler, more addictive experiences.
Inequality in Cognitive Tools
Media literacy interventions and bias‑awareness training require time, attention, and educational infrastructure. These resources are unequally distributed. If sophisticated bias‑detection tools become available only to educated elites, the third‑person effect might actually worsen: elites will become even more confident in their immunity while vulnerable populations remain susceptible.
The MIST study found that lower educational attainment, older age, and certain political orientations were associated with higher susceptibility to misinformation. But the study also revealed that many highly educated participants overestimated their resistance to fake news. Providing better tools to those who are already overconfident might simply raise their confidence further without improving their actual discernment.
The Metacognitive Infinite Regress
Here's the deepest challenge: once you learn about the third‑person effect, you face an infinite regress of self‑doubt. "I think I'm less influenced than others... but wait, that's exactly what the third‑person effect predicts I'd think... but recognizing that might make me overcorrect and become too susceptible... but that recognition itself might be..."
Perfect calibration—accurately assessing your own susceptibility to influence—may be psychologically impossible. Our self‑enhancement motivation is deeply wired, serving important functions for mental health and confidence. Eliminating it entirely might be neither feasible nor desirable.
While the third‑person effect appears across cultures, its magnitude and behavioral consequences vary in revealing ways.
Individualist Versus Collectivist Societies
Research in East Asian cultures—where collectivist values emphasize social harmony and group welfare—finds weaker third‑person effects for harmful content compared to Western individualist cultures. In collectivist contexts, people are more willing to acknowledge universal vulnerability and support collective protections.
Conversely, individualist cultures show stronger third‑person perceptions, especially for undesirable messages. The self‑enhancement motivation that drives the bias is itself culturally shaped: in societies that prize individual rationality and autonomy, admitting susceptibility to influence threatens core identity.
Media Trust and Authoritarian Contexts
The third‑person effect operates differently in authoritarian regimes with state‑controlled media. Citizens may publicly profess that propaganda affects others while privately remaining skeptical—not because they're immune, but because expressing immunity is socially safer than acknowledging the regime's influence.
Gallup polling in the United States revealed that trust in mass media hit its lowest point in five decades in 2024, with only 31% of Americans reporting "a great deal or a fair amount" of trust in the media. But trust diverged sharply by party: 59% of Republicans said they had no trust in the media at all, a figure far higher than among Democrats.
These trust differentials create asymmetric third‑person effects: Republicans are more likely to believe media influences Democrats, and vice versa. Both groups are probably right—but both groups also underestimate how their own trusted sources shape their beliefs.
Professional Credibility Gaps
Interestingly, perceived credibility varies by medium in ways that moderate the third‑person effect. In 2024, only 13% of Americans rated television reporters as having "very high or high honesty and ethical standards," compared with 17% for newspaper reporters. This professional credibility gap means audiences perceive television as more manipulable—not because TV content is inherently less trustworthy, but because the medium itself carries credibility baggage.
These cultural and professional variations suggest that the third‑person effect isn't a fixed human universal but a pattern modulated by social context. Interventions that work in one culture or media environment might fail or backfire in another.
As media environments grow more complex and AI‑generated content proliferates, protecting yourself from the third‑person effect requires cultivating specific metacognitive habits.
Practice Reflective Thinking
Regularly examine your own thought processes. When you encounter a compelling article or social media post, pause and ask: "What about this message is designed to appeal specifically to someone like me? What would someone with different values see in this same content?"
This isn't about doubting everything—it's about recognizing that persuasive messages work because they're tailored. The posts you find most convincing are often the ones custom‑fit to your existing beliefs.
Employ the Socratic Method on Yourself
Ask questions that challenge your own assumptions: "Why do I believe this? What evidence would change my mind? Am I applying the same standards to sources I agree with as to sources I disagree with?"
Research shows that people with high confidence in their beliefs are actually more likely to seek out contradictory information—but only if they've been trained in self‑questioning techniques. Without that training, confidence leads to selective exposure.
Seek Diverse Perspectives Actively
Algorithmic feeds won't do this for you. Deliberately follow sources that challenge your assumptions, not to adopt opposing views uncritically, but to understand how the same facts can support different interpretations. This isn't about false balance—it's about recognizing that your information diet is always incomplete.
Slow Down Emotional Reactions
Media literacy educators emphasize: "News literacy teaches you to slow down and notice when you have a strong emotional reaction to something you see online." Content that triggers immediate anger, fear, or excitement is designed to bypass critical thinking. The emotion isn't evidence that the content is false, but it's a signal to engage your analytical faculties before sharing or acting.
Understand Platform Incentives
Recognize that social media platforms profit from your engagement, not your enlightenment. The average teen spends five hours daily on social media—time during which algorithms are optimizing for attention, not accuracy. Knowing that your feed is engineered to maximize engagement rather than inform you creates healthy skepticism about the completeness and balance of what you see.
Track Your Own Prediction Accuracy
One powerful debiasing technique: keep a record of confident predictions you make about events, then check back later to see how often you were right. Research consistently shows that people overestimate their predictive accuracy. Confronting your own track record of errors builds the intellectual humility that counteracts third‑person overconfidence.
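A low-tech way to do this is to log each confident prediction with a probability and score yourself once the outcome is known. The snippet below uses the Brier score, a standard calibration measure, on an invented log; the entries are placeholders, not real predictions.

```python
# Personal prediction log: (claim, stated confidence that it is true, actual outcome).
# The entries are invented placeholders; only the scoring method matters here.
predictions = [
    ("Candidate X wins the primary", 0.90, False),
    ("This viral story gets retracted", 0.60, True),
    ("The bill passes before the recess", 0.80, False),
    ("My preferred outlet corrects its claim", 0.30, True),
]

# Brier score: mean squared gap between confidence and outcome.
# 0.0 is perfect; always answering 50% scores 0.25; confident misses cost the most.
brier = sum((conf - float(outcome)) ** 2 for _, conf, outcome in predictions) / len(predictions)

confident_misses = sum(1 for _, conf, outcome in predictions if conf >= 0.7 and not outcome)

print(f"Brier score: {brier:.2f} (lower is better)")
print(f"High-confidence predictions that missed: {confident_misses} of {len(predictions)}")
```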
The third‑person effect won't disappear. It's woven too deeply into human psychology, serving functions that evolution shaped over millennia: maintaining self‑esteem, signaling discernment to potential allies, and navigating complex social hierarchies.
But we can build social and technological infrastructure that compensates for the bias rather than exploits it.
That means media literacy education focused not on identifying bad content out there, but recognizing cognitive vulnerabilities in here. It means platform design that makes algorithmic curation visible and gives users genuine control over their information diets. It means journalism that acknowledges its own framing choices rather than claiming false objectivity. It means research funding for interventions that actually work rather than simply teaching people to critique others' media consumption.
Most importantly, it means embracing a profound humility: we are all vulnerable to influence. The messages that shape us most powerfully are precisely the ones we don't recognize as persuasion. Propaganda works best when it feels like common sense, when it confirms what we already believed, when it makes us feel smart for agreeing.
The next time you see someone sharing obvious misinformation and wonder "How could they fall for that?"—pause. Ask yourself: what obvious manipulation am I currently blind to? What am I sharing that future‑me will recognize as propaganda?
The answer won't come easily. Our brains are designed to resist it. But asking the question is the first step toward seeing through the most powerful bias we face: the unshakeable conviction that we, unlike everyone else, are immune to influence.
We're not. And recognizing that uncomfortable truth might be the only real protection we have.