Most “Bizarre Beliefs” Aren’t Real Beliefs
By Jon Scaccia

A health worker in a coastal city scrolls through WhatsApp messages during a lunch break. A voice note says vaccines are dangerous. The sender is not a stranger—it’s a cousin. The message is short, confident, and emotionally loaded.

Now picture a student in an overcrowded lab, trying to decide whether a “new study” posted on X is reliable. The study has charts, a fancy logo, and a thread of people saying, “This is being censored.”

These moments happen everywhere, in every language, across cities and villages. And they reveal a core truth: most people don’t have time—or access—to check claims like professional fact-checkers do.

The study’s big idea: it’s not “post-truth,” it’s “bad inputs”

In It’s Our Epistemic Environment, Not Our Attitude Toward Truth, That Matters, philosopher Neil Levy takes aim at the popular story that we live in a “post-truth” era. That story usually has two parts:

  1. lots of people believe obviously false things, and
  2. they believe them because they don’t respect truth.

Levy argues both parts are shaky. First, surveys and social media can exaggerate the number of people who sincerely believe bizarre things. Second, even when people hold false beliefs, it often isn’t because they “hate facts.” It’s because their belief system is built on trust—trust in some sources more than others.

In plain terms, people can be rational given the information and cues they’re receiving.

An analogy you can feel: cooking with a mislabeled spice jar

Think about cooking. If the jar says “salt,” and it looks like salt, and everyone in your kitchen uses it as salt, you’ll cook with it like salt. If it turns out to be sugar, your soup will taste wrong. But you didn’t “reject cooking reality.” You followed the best signals you had.

Levy’s point is similar: our beliefs respond to subjective evidence—what seems reliable from inside our social world—not always to objective reality. He uses a crime-fiction example: a detective can follow planted evidence and make the wrong arrest, while still being rational in their reasoning.

For mis/disinfo workers, this is a relief and a warning:

  • Relief, because it suggests many people are not hopeless.
  • Warning, because it means the battlefield is the information environment, not just “bad thinkers.”

The twist: many “belief signals” are actually play, trolling, or identity

Levy highlights something that mis/disinfo teams often learn the hard way: people sometimes say they believe things they don’t actually believe. He calls this “endorsement without belief.” Why would someone do that?

Because the internet rewards:

  • expressive responding (“this shows my tribe who I am”),
  • trolling (“watch me wreck the conversation”), and
  • play (“it’s funny to pretend it’s true”).

This matters because a lot of research and monitoring systems treat every “I believe X” as sincere. But if a chunk of those signals is performance, then the numbers can lie—even when the graphs look clean.

And when ridiculous content goes viral, it can create a distorted picture of polarization: it makes it seem like “everyone is crazy,” even if most people are not.

But here’s where it gets interesting…

What this means for people fighting mis/disinfo

If you work in this field, Levy’s argument pushes you toward three practical shifts.

1) Measure behavior, not just stated belief

Instead of only asking “Do you believe this?”, look for signals like:

  • How confident are they?
  • Would they share it publicly?
  • Would they act on it?
  • Do they defer to a trusted authority when one is available?

Because stated beliefs are noisy—especially online—Levy argues we often overread them.

2) Treat trust as the core mechanism

Levy is blunt: false beliefs are often owed to trust in unreliable sources. And trust is not just a personal trait. It’s shaped by identity, institutions, past failures, and whether people feel respected by the system.

He notes how groups can rationally defer to “apparent experts” who share their values, especially when mainstream institutions feel hostile or politicized.

So if your intervention treats people as idiots, you may deepen the very distrust that feeds misinformation.

3) Stop over-investing in “critical thinking” as the main fix

This one stings because critical thinking education feels like the heroic solution. But Levy argues that teaching critical thinking is unlikely to help much with today’s epistemic problems.

Why? Because people don’t evaluate every claim from scratch. They defer, as we all do. If the trust network is broken, “think harder” doesn’t fix it.

The global stakes: bad beliefs don’t need to be believed to do damage

One of the most important lines in the paper is about harm. Levy argues that even endorsing bad beliefs—repeating them, boosting them, laughing along—can warp the whole environment. It can polarize communities and slowly nudge people into vague suspicion: “maybe they’re up to something.”

He connects these distortions to real-world crises such as climate change and vaccine uptake, where beliefs and endorsements shape public support and behavior.

This is especially urgent in places where:

  • health systems are stretched,
  • crises move fast (storms, heat, conflict, outbreaks),
  • reliable information channels are limited,
  • and trust in institutions is fragile.

In those settings, the goal isn’t just “debunk the claim.” The goal is to keep the environment clean enough that people can recognize reliable guidance in time.

The hardest part: cleaning an epistemically polluted world

Levy calls our current reality “epistemically polluted.” The cues we normally use to spot reliable information—credentials, consensus, track record—are now widely mimicked and distorted by actors who benefit from confusion.

He also warns that restoring trust is hard, especially when “merchants of doubt” work to keep it low and when errors by authorities get amplified.

So what’s the hopeful note?

Levy suggests minds can change surprisingly quickly when institutional cues shift and people come to trust more reliable sources. That means interventions that rebuild credibility signals—through transparency, consistent performance, and trusted messengers—can matter a lot.

Let’s Explore Together

  • In your community, what signals make information feel trustworthy—voice notes, local leaders, scientists, family, religious figures?
  • If you were on this research team, what would you measure to separate real belief from trolling or identity performance?
  • What everyday problem (health, money, safety, education) do you wish science could help solve next?
