The Illusion of Explanatory Depth: Why You Understand Less Than You Think

TL;DR: We all overestimate how well we understand complex topics, mistaking familiarity for true comprehension. This cognitive bias fuels political polarization, poor decisions, and overconfidence—but simple techniques like explaining concepts aloud can expose gaps and improve judgment.
Quick test: Can you explain exactly how a flush toilet works? Not just "you push the lever and water goes down," but the actual mechanics—how the ballcock valve operates, why the water level rises and falls, what role the overflow tube plays? If you're like most people, you probably thought you understood it perfectly until you tried to explain it. Welcome to the illusion of explanatory depth.
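To make "the actual mechanics" concrete, here is a deliberately simplified sketch of one flush cycle, written in Python. The sequence of events is the standard tank-and-flapper design; the specific volumes and increments are made-up placeholders, not a plumbing specification.

```python
# A deliberately simplified model of one flush cycle. The sequence of events is
# the standard tank-and-flapper design; the volumes and increments are made up.
TANK_FULL_LITERS = 6.0
TANK_EMPTY_LITERS = 0.5

def flush_cycle():
    level = TANK_FULL_LITERS
    log = ["Lever pressed: the flapper lifts and tank water rushes into the bowl."]

    # Draining: gravity empties the tank through the open flapper.
    while level > TANK_EMPTY_LITERS:
        level -= 1.0
    log.append("Tank nearly empty: the flapper drops back and reseals the outlet.")

    # Refilling: the falling float opened the fill valve; water rises until the
    # float closes it again at the full line.
    while level < TANK_FULL_LITERS:
        level += 0.5
    log.append("Float rises with the water and shuts the fill valve at the full line.")

    # Safety: if the fill valve ever sticks open, excess water escapes down the
    # overflow tube into the bowl instead of flooding the floor.
    log.append("Overflow tube stands ready in case the fill valve fails to close.")
    return log

for event in flush_cycle():
    print(event)
```

If you could have written those four steps from memory, you genuinely understand the device. If not, you just felt the gap this article is about.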
This cognitive trap affects all of us, from voters confidently holding political opinions to professionals making business decisions. We walk through life convinced we understand how things work, but when pressed for details, our knowledge falls apart like wet paper. The scary part? We often don't realize how little we know until someone forces us to explain it.
The illusion of explanatory depth was first identified by psychologists Leonid Rozenblit and Frank Keil at Yale University in 2002. They discovered something unsettling: people systematically overestimate their understanding of how things work, but only for a specific type of knowledge called explanatory knowledge—understanding that involves complex causal patterns.
In their original experiment, 16 Yale undergraduates rated their understanding of everyday devices like bicycles, speedometers, and flush toilets on a seven-point scale. Then researchers asked them to write detailed, step-by-step explanations of how these devices worked. Finally, participants re-rated their understanding.
The results were striking. After trying to explain something, people's confidence scores consistently dropped. They realized they didn't understand as well as they thought. But here's what makes this bias so interesting: it only appears for explanatory knowledge. When researchers tested procedural knowledge (how to tie a shoe), narrative knowledge (plot of a movie), or factual knowledge (capitals of countries), the illusion largely disappeared.
This reveals something fundamental about how our minds work. We mistake familiarity for understanding. Because we've seen toilets flush thousands of times, we assume we grasp the underlying mechanism. Our brains confuse recognition with comprehension.
Cognitive scientists believe this happens because we naturally think in abstractions. We store high-level concepts without the supporting details. When you think about a bicycle, your mind might hold the general idea of "pedals connect to wheels through a chain," without encoding precisely how the chain engages the gear teeth or why different gear ratios matter. This mental shortcut works fine for daily life—you don't need to understand gear ratios to ride to the store. But it creates a dangerous gap between perceived and actual knowledge.
The bias manifests everywhere, but it's particularly problematic in three domains: politics, technology, and complex systems like climate change.
Political opinions might be where the illusion of explanatory depth does its worst damage. A 2018 study conducted around the 2016 U.S. presidential election found that people who showed a stronger illusion of explanatory depth were significantly more likely to endorse conspiracy theories.
Think about hot-button policy issues. Most voters hold strong opinions about healthcare reform, tax policy, or immigration. But when researchers at Duke University asked politically engaged citizens to explain how their preferred policies would actually work—step by step, cause and effect—something remarkable happened. Their extreme positions moderated significantly.
The act of explaining forced people to confront what they didn't know. Someone might feel certain that "single-payer healthcare will reduce costs," but explaining the mechanisms—how price negotiations work, why competition decreases, what happens to innovation incentives—reveals gaps. Confronting those gaps reduces polarization because it injects humility.
Political psychologists now believe the illusion of explanatory depth helps explain why political polarization has intensified. When people believe they fully understand complex issues, they become more extreme in their positions. Social media amplifies this by rewarding confident assertions over nuanced explanations.
Technology presents a unique challenge because it actively feeds our illusion. A 2015 study at Yale found that searching the internet inflates estimates of our internal knowledge. After Googling answers to general knowledge questions, participants rated their own knowledge as higher—even on completely unrelated topics.
The researchers called this the "cognitive merger" between self and internet. We've become so accustomed to instant access to information that we mistake the internet's knowledge for our own. You might think you know how cryptocurrency works because you've read articles about blockchain. But if someone asked you to explain how proof-of-work validation prevents double-spending without access to Google, could you do it?
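To make that challenge concrete, here is a minimal proof-of-work sketch in Python. The block structure, difficulty, and transactions are toy assumptions, not Bitcoin's real format; the point is the mechanism. Because each block commits to the previous block's hash, rewriting an already-spent transaction means redoing this brute-force search for that block and every block after it, faster than the rest of the network.

```python
import hashlib
import json

def mine(block: dict, difficulty: int = 4) -> dict:
    """Find a nonce whose SHA-256 hash starts with `difficulty` hex zeros (toy difficulty)."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        block["nonce"] = nonce
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith(prefix):
            block["hash"] = digest
            return block
        nonce += 1

# Each block commits to the previous block's hash, so altering an old payment
# invalidates every later hash and forces an attacker to redo all of that work.
genesis = mine({"prev": "0" * 64, "tx": "Alice pays Bob 1 coin"})
block_2 = mine({"prev": genesis["hash"], "tx": "Bob pays Carol 1 coin"})
print(genesis["hash"][:16], block_2["hash"][:16])
```

Whether or not you could have produced something like this, trying to is the quickest way to find out how much of your blockchain knowledge actually lives in your own head.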
This gets worse with AI. As systems like ChatGPT become more sophisticated, we're likely to merge even more deeply with external information sources. Future research needs to examine how AI interaction affects metacognition—our ability to accurately assess our own understanding.
Climate change might be the most consequential domain where explanatory depth illusion operates. Most people acknowledge climate change is happening. But public understanding of the mechanisms remains shallow.
Ask someone how carbon dioxide causes warming. Many will say "it traps heat," which is true but incomplete. Do they understand the molecular mechanism—how CO2 molecules absorb infrared radiation at specific wavelengths? Can they explain feedback loops, albedo effects, or why methane is roughly 25 to 30 times as potent a greenhouse gas as CO2 over a 100-year horizon? Most can't, but they feel they understand the issue well enough to have strong policy opinions.
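For contrast, here is what one small piece of mechanism-level knowledge looks like in this domain. A widely used simplified fit puts the extra radiative forcing from added CO2 at about 5.35 × ln(C/C₀) watts per square meter; multiplying by a sensitivity parameter gives a rough equilibrium temperature change. The sketch below uses that textbook approximation with an illustrative sensitivity of 0.8 K per W/m²; it is a back-of-the-envelope aid, not a climate model.

```python
import math

def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Simplified radiative forcing (W/m^2) from raising CO2 from c0 to c (logarithmic fit)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Illustrative sensitivity: ~0.8 K of eventual warming per W/m^2 of forcing,
# which corresponds to roughly 3 K for a doubling of CO2. An assumption for
# the sketch, not a measured constant.
SENSITIVITY_K_PER_WM2 = 0.8

for concentration in (280, 420, 560):  # pre-industrial, roughly today, doubled
    forcing = co2_forcing(concentration)
    print(f"{concentration} ppm: forcing ≈ {forcing:.2f} W/m², "
          f"equilibrium warming ≈ {forcing * SENSITIVITY_K_PER_WM2:.1f} K")
```

Even this toy calculation forces you to confront specifics: why the relationship is logarithmic, what "sensitivity" bundles together, and where feedbacks enter.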
This shallow understanding has serious consequences. People support ineffective solutions while opposing effective ones because they don't grasp the underlying dynamics. Someone might religiously recycle (minimal climate impact) while opposing carbon taxes (high impact) because recycling feels like direct climate action while carbon pricing seems abstract.
Worse, the illusion creates vulnerability to climate denial. When someone doesn't truly understand the mechanisms, they're more susceptible to misleading arguments. A person who superficially understands greenhouse effects might be swayed by "CO2 is plant food" arguments without recognizing why that's irrelevant to atmospheric warming.
The consequences of this bias extend beyond abstract scenarios into real decisions with serious outcomes.
A 2025 study of over 2,000 national security professionals revealed widespread overconfidence. Intelligence analysts consistently overestimated their understanding of geopolitical situations and assigned higher confidence to predictions than accuracy warranted.
This matters because these professionals make recommendations that shape military interventions, trade policy, and diplomatic strategy. When decision-makers suffer from explanatory depth illusion, they miss important variables, discount opposing evidence, and commit to courses of action with unwarranted certainty.
The study found that even after extensive training, experts remained overconfident. The illusion persists because expertise in one area creates a halo effect—if you deeply understand Middle Eastern politics, you might falsely believe you equally understand Asian geopolitics or cybersecurity threats.
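"Higher confidence than accuracy warranted" has a precise, checkable meaning: average the probabilities forecasters attach to their calls and compare that with the fraction that actually came true. The numbers in this sketch are invented purely to show the calculation, not drawn from the study.

```python
# Each tuple: (stated confidence that the prediction is correct, whether it came true).
# The data are invented solely to illustrate the calibration arithmetic.
forecasts = [
    (0.90, True), (0.85, False), (0.95, True), (0.80, False),
    (0.90, False), (0.75, True), (0.85, True), (0.90, False),
]

mean_confidence = sum(conf for conf, _ in forecasts) / len(forecasts)
hit_rate = sum(outcome for _, outcome in forecasts) / len(forecasts)

print(f"Average stated confidence: {mean_confidence:.0%}")
print(f"Actual accuracy:           {hit_rate:.0%}")
print(f"Overconfidence gap:        {mean_confidence - hit_rate:+.0%}")
```

A well-calibrated analyst's gap hovers near zero; an overconfident one shows a stubborn positive gap like the one this toy data produces.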
Writer David Epstein describes his experience with fact-checking as a constant confrontation with explanatory depth illusion. While writing his books, he worked with a fact-checker named Emily who would leave notes in the manuscript: "How do you know?"
These simple questions repeatedly exposed gaps in Epstein's understanding. He'd written confident statements about creativity, psychology, and human performance—his areas of expertise—only to realize he couldn't cite specific evidence. He'd confused general familiarity with the literature for detailed knowledge of particular findings.
What makes this example powerful is that Epstein isn't a casual writer. He's a journalist who specializes in scientific topics and regularly interviews researchers. If someone with his background and training falls victim to this bias, it suggests how pervasive the problem really is.
Entrepreneurs and executives frequently suffer from explanatory depth illusion when evaluating new technologies or market opportunities. A CEO might feel confident they understand "how blockchain could revolutionize our supply chain" after attending a few presentations. They greenlight expensive initiatives without grasping the technical limitations, integration challenges, or why simpler database solutions might work better.
This leads to what venture capitalists call "half knowledge"—the dangerous trap of thinking you know enough to make major decisions. It's why so many corporate digital transformation projects fail. Leaders mistake conceptual familiarity for implementation understanding.
The good news? Simple interventions can reduce this bias. Here are practical techniques you can use immediately.
Take any topic you think you understand and try explaining it to someone with no background knowledge. Better yet, write it out. Don't allow yourself to use jargon or hand-waving phrases like "basically" or "essentially."
This exercise quickly reveals gaps. If you can't explain something simply, you probably don't understand it as well as you think. Richard Feynman, the Nobel Prize-winning physicist, reportedly said that if you can't explain something to a first-year student, you don't really understand it yourself.
When you hold a belief or opinion, force yourself to diagram the causal mechanism. If you believe raising the minimum wage will help low-wage workers, draw the chain of cause and effect:
Higher minimum wage → employers pay more → what happens next? Do they raise prices? Reduce hours? Increase automation? Improve productivity? Each step should be logical and defensible.
This technique exposes gaps because it requires specificity. Vague notions of "how things work" don't survive when you must connect specific causes to specific effects.
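A low-tech way to force that specificity on yourself is to write the chain down as explicit links, each paired with the question you cannot yet answer. A throwaway sketch follows; the policy content is a placeholder, not an economic claim.

```python
# A causal chain written as explicit links. Any link whose open question you
# cannot answer marks a gap in your understanding. The content is a placeholder
# example, not an economic claim.
causal_chain = [
    ("Minimum wage rises", "Employers' labor costs increase",
     "By how much, and for which kinds of businesses?"),
    ("Labor costs increase", "Employers respond",
     "Raise prices? Cut hours? Automate? Absorb the cost?"),
    ("Employers respond", "Workers' take-home income changes",
     "Does the higher wage outweigh any lost hours or jobs?"),
]

for cause, effect, open_question in causal_chain:
    print(f"{cause} -> {effect}")
    print(f"  still unexplained: {open_question}")
```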
For every explanation you hold, generate the strongest possible alternative explanation. If you think your company should invest in AI because "it will improve efficiency," what's the best case against that investment? What could go wrong? What alternative uses of capital might yield better returns?
This mental exercise works because the illusion of explanatory depth creates overconfidence. By deliberately considering rivals, you recognize the limits of your current understanding and identify what you need to learn.
Share your explanations with someone knowledgeable in the domain. Ask them not to be polite but to genuinely probe for gaps and weaknesses. This recreates what fact-checker Emily did for David Epstein—forcing confrontation with knowledge gaps.
The key is finding someone who'll push back. Your spouse saying "that makes sense" doesn't count. You need someone who'll say "wait, explain step three again—I don't follow how A leads to B."
Here's a crucial subtlety: how you ask people to explain matters enormously. Research shows that asking for an "explanation" reduces the illusion, but asking for "justification" can actually make positions more extreme.
When people justify their beliefs, they marshal arguments supporting their position. This reinforces conviction. But when people explain mechanisms, they confront complexity and gaps in understanding. The prompt "justify why you support this policy" produces different results than "explain how this policy would work."
This has practical implications for education and public discourse. Teachers should ask students to explain how things work rather than justify why they believe something. Journalists should press politicians to explain policy mechanisms rather than defend positions. Managers should ask team members to explain implementation details rather than pitch ideas.
One interesting finding: the illusion is significantly weaker among domain experts. Someone who's actually designed bicycles can usually rate their understanding accurately. The bias primarily affects novices who've picked up surface knowledge.
But this creates a dangerous corollary. True experts know what they know in their domain, so they might incorrectly assume their self-assessment is accurate in other domains. A world-class engineer might accurately assess their understanding of thermodynamics but overestimate their grasp of economics or epidemiology.
This helps explain why highly accomplished people sometimes hold confidently wrong opinions outside their expertise. Success in one domain creates a false sense of competence in others. The engineer assumes the same rigorous thinking that makes them excellent at their job transfers to political or economic analysis—but without the deep knowledge, they're just as vulnerable to the illusion.
Understanding explanatory depth illusion should fundamentally change how we approach knowledge and decision-making.
First, it suggests we should be deeply skeptical of our own confidence. When you feel certain about something complex, that certainty is probably a red flag. Real understanding comes with awareness of nuance, exceptions, and limitations.
Second, it elevates the importance of metacognition—thinking about our thinking. The most valuable cognitive skill isn't knowledge itself but accurate assessment of our knowledge. Knowing what you don't know is more valuable than confidently believing wrong things.
Third, it points toward institutional solutions. If individuals struggle to overcome this bias through awareness alone, we need systems that force confrontation with gaps. Peer review in science, fact-checking in journalism, red team exercises in national security—these processes exist partly to counteract explanatory depth illusion.
Perhaps the most important implication is cultural. We live in a society that rewards confidence and punishes uncertainty. Politicians who say "I don't know" lose elections. Business leaders who express doubt lose investor confidence. Social media amplifies the most certain voices, regardless of actual knowledge.
This creates perverse incentives. People learn to fake confidence even when they lack understanding. The illusion of explanatory depth becomes socially adaptive—until it leads to catastrophically bad decisions.
We need to flip this script. Organizations should celebrate intellectual humility. Leaders should model comfort with uncertainty. Education systems should reward students who accurately identify the limits of their knowledge, not just those who project confidence.
This doesn't mean wallowing in skepticism or refusing to make decisions with incomplete information. It means developing a more honest relationship with knowledge—recognizing that true understanding is rare, complex, and hard-won.
The illusion of explanatory depth reveals a fundamental tension in how we navigate the world. We need confidence to act. Excessive self-doubt leads to paralysis. But false confidence leads to disasters.
The solution isn't eliminating confidence but calibrating it accurately. Before making important decisions, force yourself to explain the mechanism. Before forming strong opinions, test your actual understanding. Before judging others, recognize that your own knowledge is probably shallower than it feels.
Most importantly, build systems and relationships that challenge your understanding. Seek out people who'll ask "how do you know?" Find environments that reward accurate self-assessment over projection of confidence. Create habits that force regular confrontation with knowledge gaps.
The good news is that awareness helps, even if it doesn't cure the problem. Once you understand explanatory depth illusion, you'll start noticing it everywhere—in your own thinking, in public discourse, in expert pronouncements. That recognition is the first step toward better decisions.
You might not be able to fully escape the trap of mistaking familiarity for understanding. But you can learn to recognize when you're caught in it, question your own certainty, and build practices that expose gaps before they lead to serious mistakes.
The next time you feel absolutely certain about something complex, pause. Ask yourself: could I actually explain this to someone else? Could I diagram the mechanism? Could I defend it against intelligent criticism? If the answer is no, you've just discovered you know less than you thought.
And that realization—uncomfortable as it is—makes you smarter than you were five minutes ago.