We Have Virality But No Immune System: Why Every Fix Fails
The Immune System Series | Part 2
Every obvious solution has been tried. Every one has failed.
Fact-checkers? Created. Media literacy programs? Funded. Expert panels? Appointed. Accountability mechanisms? Implemented.
None of it worked. And it failed for predictable, mechanical reasons rooted in system design.
In Part 1, I showed you the five-stage cascade: how I watched economic nonsense become policy in real time. Initial incoherence → media translation → viral spread → social proof → failed corrections. Stage by predictable stage.
You saw the pattern. Now comes the harder question:
If we can see the problem so clearly, why can’t we fix it?
This piece won’t offer false hope. But understanding why fixes fail is the first step toward anything that might actually work.
I. Why Traditional Filters Have Failed
The Fatal Flaw of Institutional Filters
The most intuitive solution sounds reasonable: Create expert panels. Economic advisory councils. Policy review boards. Institutions that must sign off on proposals before they become policy.
It works everywhere else, right? Engineers need licenses. Doctors need board certification. Lawyers pass the bar. Why not require economic coherence from policies?
Because the people who benefit from lies control the institution that would stop the lies.
Look at what’s happening right now. The CDC exists to protect public health based on scientific evidence. But when political appointees who reject vaccine science get positioned to lead Health and Human Services, what happens to that institutional authority? The institution works—until someone with power decides it shouldn’t.
Economic advisors get ignored when they say things politicians don’t like. Any “truth commission” gets immediately labeled partisan, defunded, or captured.
This is the core problem: You cannot create a filter inside a system where the filter’s targets have the power to destroy the filter.
It’s like asking a surgeon to self-regulate while giving them the power to fire the medical board. The moment the board says “you’re doing surgery wrong,” the surgeon fires the board. There’s no higher authority to appeal to.
This isn’t partisan. Both sides do this when convenient. The system design allows institutional capture. There’s no enforcement mechanism beyond political power itself.
Why Media Can’t Be the Filter
“The media should just do better journalism” sounds obvious. Call out nonsense. Fact-check in real time. Stop amplifying lies. Be the fourth estate.
But the media business model is fundamentally incompatible with being an effective filter.
The incentive structure works against accuracy:
Media organizations need three things to survive:
1. Advertising revenue (requires eyeballs)
2. Access to power (requires sources)
3. Appearance of neutrality (requires credibility)
Here’s the problem: Calling out lies threatens #2 and #3, but amplifying lies helps #1.
Conflict generates clicks. Controversy drives engagement. “Trump’s childcare plan sparks debate” gets more traffic than “Candidate gives incoherent answer.” The economically rational choice for a media company is to frame nonsense as “controversial policy” rather than call it nonsense.
And it gets worse. If you’re the one outlet that breaks from the pack to call something incoherent, you risk seeming partisan. You might lose access to sources. Meanwhile, your competitors who played it safe get the exclusive interviews.
This is a prisoner’s dilemma. If all media agreed to stop amplifying nonsense, it might work. But any single outlet that defects gets the competitive advantage. The Nash equilibrium is a race to the bottom.
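To make that incentive structure concrete, here is a minimal payoff-matrix sketch in Python. The payoff numbers are illustrative assumptions, not measurements; what matters is the structure: whatever a competitor does, amplifying the nonsense ("defecting") is each outlet's best response.

```python
# Toy two-outlet game: "cooperate" = refuse to amplify the nonsense,
# "defect" = frame it as exciting controversy. Payoffs are illustrative
# assumptions about traffic, access, and credibility, not real data.
PAYOFFS = {
    # (a_choice, b_choice): (a_payoff, b_payoff)
    ("cooperate", "cooperate"): (3, 3),  # both hold the line; credibility preserved
    ("cooperate", "defect"):    (0, 5),  # B takes the traffic and the exclusives
    ("defect",    "cooperate"): (5, 0),  # A takes the traffic and the exclusives
    ("defect",    "defect"):    (1, 1),  # the race to the bottom
}

def best_response(their_choice: str) -> str:
    """Outlet A's payoff-maximizing move, given what outlet B does."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

for b_choice in ("cooperate", "defect"):
    print(f"If B plays {b_choice}, A's best response is {best_response(b_choice)}")
# Prints "defect" both times: (defect, defect) is the Nash equilibrium,
# even though (cooperate, cooperate) pays both outlets more.
```

Defection dominates no matter what the other outlet chooses, which is exactly why appeals to ethics fail: the equilibrium is a property of the payoffs, not of the players.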
Social media algorithms accelerate this. Outrage engages better than nuance. Conflict performs better than context. Lies spread faster than truth—this isn't theory; MIT researchers documented it at scale in Science (Vosoughi, Roy, and Aral 2018). Media follows where the algorithms drive the audience.
This isn’t about individual journalists being bad people. Many try incredibly hard to do good work. This is about system design. You can’t fix incentives with appeals to ethics.
The Timing Asymmetry of Fact-Checking
Fact-checkers should work, right? Truth is truth. Lies are lies. Show people the evidence. Publish corrections.
But truth is slower than lies by design.
Here’s the speed differential:
Lie: Instant. Spreads at social media velocity.
Fact-check: Takes hours or days. Spreads at journalism velocity.
By the time the correction arrives, the lie has already saturated the discourse.
Remember the pattern I described in Part 1? The firehose of falsehood:
1. Put out a lie
2. Experts formulate response (takes time—they have to verify facts)
3. Publish analysis (takes more time—they have to write it well)
4. Already old news
5. Discourse moved to lie #2 and #3
The goal isn’t making people believe the lie. The goal is exhaustion. You just need people too tired to fight it.
And fact-checks often backfire. Corrections can reinforce the lie through repetition. Telling people they’re wrong triggers psychological defense mechanisms. What’s often called the backfire effect captures this: when confronted with information that challenges core beliefs, people can dig in harder rather than change their minds.
Plus, corrections don’t penetrate information bubbles. People who need corrections don’t see them. People who see corrections didn’t need them. The information landscape is fragmented by design.
You cannot win a race when your opponent starts with a 48-hour head start and you’re required to fact-check every step. This isn’t about effort—it’s about physics.
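Here is a minimal sketch of that race, assuming simple compound sharing: both messages grow exponentially, but the correction starts 48 hours late and (per the MIT finding above) grows more slowly. The growth rates and seed audience are illustrative assumptions, not fitted parameters.

```python
# Toy spread model: lie and correction both grow by compound sharing,
# but the correction launches 48 hours late and spreads more slowly.
# All numbers here are illustrative assumptions.
LIE_RATE = 0.05          # lie's hourly growth rate
CORRECTION_RATE = 0.03   # correction's hourly growth rate (slower)
HEAD_START = 48          # hours before the fact-check is published
SEED = 100               # initial audience for each message

def reach(rate: float, hours: float) -> float:
    """Cumulative audience after `hours` of compound exponential sharing."""
    return 0.0 if hours < 0 else SEED * (1 + rate) ** hours

for t in (24, 48, 72, 168):  # one, two, three days, and one week
    lie, fix = reach(LIE_RATE, t), reach(CORRECTION_RATE, t - HEAD_START)
    print(f"hour {t:3d}: lie ~{lie:>9,.0f}   correction ~{fix:>7,.0f}")
# The gap never closes: the correction is chasing a faster process
# from behind, so the ratio of lie-reach to correction-reach only grows.
```

Under these assumptions the lie's reach is roughly a hundred times the correction's after a week. Tweak the numbers and the shape of the result survives, because it follows from the head start plus the rate gap, not from any particular parameter.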
You might be thinking: “But doesn’t truth eventually win? Doesn’t the arc of history bend toward justice?”
Maybe. But how would we ever know if it didn’t? The truth about Iraq’s WMDs has been documented for two decades—we invaded based on lies. Yet there’s still no public consensus, no accountability, no systemic change to prevent it happening again.
And more importantly: it’s little consolation to the people who suffered while the universe did its bending. If truth takes 20 years to “win,” and by then the damage is done and everyone’s moved on, did truth really win?
The Rational Ignorance Problem
“If people were just better informed, better educated, understood economics—they wouldn’t fall for this.”
Let’s do the cost-benefit analysis that a rational person does:
Cost of being informed about tariffs:
Time: Hours to understand trade policy, tax incidence, general equilibrium effects
Cognitive load: High complexity, multiple interacting concepts
Opportunity cost: Could be working, with family, enjoying life
Social cost: Going against consensus, being “that guy” at Thanksgiving
Psychological cost: Admitting you might have been wrong
Benefit of being informed:
Your vote: One among millions (negligible marginal impact)
Policy: Won’t change based on your individual understanding
Personal outcome: Won’t materially improve your life
Happiness: Might actually decrease (ignorance is bliss)
The rational choice: Stay ignorant.
This isn’t people being stupid. This is people responding rationally to incentive structures. The collective action problem makes individual education functionally worthless for political outcomes.
💡 The Uncomfortable Math of Individual Action
I can educate one person about tariffs. That person’s vote still doesn’t matter. They still can’t change policy. They’ve just spent hours learning something that has no effect on their life except making them more frustrated.
This isn’t cynicism—it’s why collective action and systemic change matter more than individual knowledge.
Why would anyone make that trade?
Economists call this “rational ignorance,” a concept Anthony Downs formalized back in 1957, and it’s rational because the system makes knowledge politically worthless at the individual level.
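Here is that cost-benefit calculation as a back-of-envelope sketch. Every input is an illustrative assumption (the hours, the wage, the odds of casting a pivotal vote), but the conclusion is robust to large changes in any of them.

```python
# Back-of-envelope expected value of becoming informed before voting.
# All inputs are illustrative assumptions, not published estimates.
HOURS_TO_LEARN = 10            # time to genuinely understand tariff policy
VALUE_PER_HOUR = 25.0          # opportunity cost of an hour, in dollars
P_PIVOTAL = 1 / 10_000_000     # chance your single vote decides the outcome
PERSONAL_STAKE = 2_000.0       # dollar benefit to you if the better policy wins

cost = HOURS_TO_LEARN * VALUE_PER_HOUR        # $250.00 spent learning
expected_gain = P_PIVOTAL * PERSONAL_STAKE    # $0.0002 expected via your vote

print(f"cost of getting informed:  ${cost:,.2f}")
print(f"expected benefit of vote:  ${expected_gain:.6f}")
print(f"worth it? {expected_gain > cost}")  # False, by ~6 orders of magnitude
```

Even if you assume a hundred-times-larger personal stake and ten-times-better odds of being pivotal, the expected gain is still under a dollar against a $250 cost. That arithmetic is the mechanism behind Downs's term.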
II. What Actually Works (Sort Of)
Before diving into proposed solutions, let’s acknowledge what has shown some effectiveness—even if insufficient.
Distributed expertise networks: When engineers, economists, and scientists use their platforms to correct misinformation in their domains, it helps. But it’s slow, reaches limited audiences, and requires sustained effort that most experts can’t maintain while doing their actual jobs.
Direct economic consequences: When tariffs actually raise prices, some people notice. But by then the policy is already implemented, political capital is invested in defending it, and the causal chain is muddy enough that most people don’t connect the dots back to the original lie.
Trusted individual voices: People like Brian Riedl (conservative economist) calling out economic nonsense from his own side creates cognitive dissonance that can break through tribal filters. But there aren’t enough trusted voices, and building that trust takes years—trust that can disappear in an instant when someone gets labeled a “Republican in Name Only” or accused of “moving left.” The tribal pressure to maintain loyalty overrides the value of honest analysis.
These approaches help at the margins. They don’t solve the systemic problem. They can’t, because they’re working against the structure of the system itself.
III. Four Proposed Solutions (And Why They’re Inadequate)
Let me walk through the four most common proposals I’ve heard for fixing this, and explain exactly why each falls short.
Approach 1: Make Truth Go Viral
The idea: Create engaging, shareable content that explains the truth. Make fact-checking entertaining. Use social media algorithms in service of accuracy.
Why it’s better than nothing: When it works, it can reach millions. Some creators do excellent work making complex topics accessible.
Why it’s inadequate:
Truth has structural disadvantages that virality can’t overcome:
Lies are simpler (no nuance required)
Lies confirm biases (people want to believe them)
Lies arrive first (timing advantage)
Corrections trigger defensiveness (backfire effect)
You’re trying to make a complex, uncomfortable truth compete with a simple, appealing lie on a platform optimized for engagement, not accuracy. The algorithm doesn’t care about truth—it cares about watch time.
Even when viral truth partially works, you’re fighting with one hand tied behind your back. By the time your viral video explaining tariffs reaches 2 million views, the lie has reached 50 million and is already embedded in political discourse.
Approach 2: Build Credibility Networks
The idea: Develop networks of trusted experts and communicators with proven track records. Build reputation systems that help people identify reliable sources.
Why it’s better than nothing: This is what I’m trying to do with The Statecraft Blueprint. Show my work. Build a track record. Create value that establishes credibility independent of political affiliation.
Why it’s inadequate:
It takes years to build credibility and seconds to destroy it. It reaches limited audiences—people who are already inclined toward complexity and nuance. It doesn’t scale fast enough to counter the firehose of falsehood.
Plus, credibility networks can be attacked as partisan or elite. The moment you correct a lie from someone with tribal loyalty, a portion of that tribe views you as the enemy, regardless of your track record.
This is necessary work. It’s part of the solution. But it’s too slow and too small to solve the problem alone.
Approach 3: Prediction Markets and Accountability
The idea: Create mechanisms where experts and politicians face real consequences for being wrong. Prediction markets, performance bonds, public track records.
Why it’s better than nothing: In theory, accountability creates incentives for accuracy. If your predictions are public and trackable, you face reputational costs for being wrong.
Why it’s inadequate:
Politics doesn’t work on the timescale where accountability matters. By the time the consequences arrive (tariffs raise prices 2 years later), the political moment has passed. Politicians face elections based on vibes and identity, not policy track records.
And prediction markets only work if people check them, trust them, and change votes based on them. Most voters don’t systematically consult any specialized forecasting tools—and surveys show they already struggle to navigate misinformation and don’t trust each other to make informed choices. Even the largest prediction market, Polymarket, saw about $3.3 billion wagered on the 2024 presidential election—which sounds like a lot until you realize it came from a tiny fraction of the electorate.
The rational ignorance problem strikes again.
Accountability is good. We should have more of it. But it doesn’t solve the root cause—it just creates slightly better incentives that still lose to the structure of the system.
Approach 4: Institutional Economic Literacy Bodies
The idea: Create an independent body—like the Congressional Budget Office, but for economic policy coherence. Any major economic proposal must pass review before becoming policy.
This is the most obvious solution. It’s also dead on arrival.
Why it can’t work: We’re back to the fatal flaw from the beginning.
You cannot build a filter where the filter’s targets control the filter.
Who appoints this body? Congress. Who can defund it? Congress. Who can ignore its findings? Congress. The moment it says something politically inconvenient, it gets destroyed, captured, or defanged.
Look at how this plays out:
The Federal Reserve is supposed to be independent. Still faces constant political pressure.
The CBO provides nonpartisan analysis. Gets attacked as biased whenever its numbers are inconvenient.
The CDC provides health guidance. Now faces would-be leadership that doesn’t believe in its core mission.
In engineering, physics enforces standards—you can’t override gravity. In medicine, biology enforces standards—the patient dies, you failed. In politics, there is no enforcement beyond political power itself. And political power can override any institution.
This isn’t pessimism. It’s mechanical reality. The system design prevents the fix.
IV. The Pattern: System Design Is The Problem
Here’s what all these failed or inadequate solutions have in common:
They try to fix the outputs without changing the system that produces those outputs.
Our political system is designed—not metaphorically, but actually designed by incentive structures and institutional rules—to:
Reward virality over accuracy (social media algorithms optimize for engagement)
Reward conflict over nuance (media business model requires controversy)
Allow institutional capture (political appointment power)
Create rational ignorance (collective action problem makes individual knowledge worthless)
Disconnect actions from consequences (delayed feedback, diffuse causation)
This isn’t a bug. It’s not a failure of the design. It’s the design working exactly as structured.
The uncomfortable question: If the system design is the problem, and changing system design requires power, and the people with power benefit from the current design... how do you change it?
I don’t have a complete answer. No one does. Anyone who tells you they do is selling something.
But I can tell you this: Understanding that this is a system problem, not a people problem, is the first step. Seeing the mechanism clearly is necessary before you can begin to build something different.
And that’s where Part 3 comes in.
Looking Ahead: The Stakes Are Higher Than You Think
We’ve covered a lot of ground:
Part 1 showed you the five-stage cascade in action
Part 2 explained why every obvious fix fails
Now we need to ask the bigger question:
If our system can’t handle something as straightforward as “tariffs are a sales tax,” how can it handle:
Climate change (infinitely more complex)
AI regulation (moving faster than any policy cycle)
Healthcare reform (decades of failed attempts)
Infrastructure modernization (requires long-term thinking)
Pandemic response (requires trusting expertise during crisis)
Any genuinely complex policy challenge
This isn’t just about tariffs. This is about whether our 18th-century decision-making system can handle 21st-century problems.
Part 3 will show you how this same cascade applies to every complex challenge we face. Why “21st century problems with an 18th century system” isn’t just a catchy phrase—it’s a civilizational risk. And what we can actually do about it.
Not magic solutions. Not false promises. But honest assessment and realistic paths forward.
Because here’s the thing: The meta-problem affects everything. If we can’t create a political immune system for complexity, every other challenge gets harder.
Understanding why is the first step toward building something better.
Continue to Part 3: “21st Century Problems, 18th Century System—And What We Can Actually Do About It”
The Statecraft Blueprint | Part 2 of The Immune System Series
References
Downs, Anthony. An Economic Theory of Democracy. New York: Harper & Row, 1957.
Inman, Matthew. “You’re Not Going to Believe What I’m About to Tell You.” The Oatmeal, May 2, 2017. https://theoatmeal.com/comics/believe
Paul, Christopher, and Miriam Matthews. “The Russian ‘Firehose of Falsehood’ Propaganda Model: Why It Might Work and Options to Counter It.” RAND Corporation, 2016. https://www.rand.org/pubs/perspectives/PE198.html
The Pew Charitable Trusts. “How Americans View Trust, Facts, and Democracy Today.” Trust Magazine, February 19, 2020. https://www.pew.org/en/trust/archive/winter-2020/how-americans-view-trust-facts-and-democracy-today
“Polymarket.” Wikipedia, November 2024. https://en.wikipedia.org/wiki/Polymarket
Vosoughi, Soroush, Deb Roy, and Sinan Aral. “The spread of true and false news online.” Science 359, no. 6380 (2018): 1146-1151. https://www.science.org/doi/10.1126/science.aap9559



