If AI is genuinely amplifying stupidity at civilizational scale, we should be able to measure it. This chapter examines the evidence: polarization metrics, misinformation spread rates, trust in institutions, epistemic confidence versus actual knowledge, and other indicators that might reveal whether we're in the midst of an intelligence crisis or simply experiencing normal growing pains of new technology.
The challenge is that measuring stupidity is harder than measuring ignorance. We can test knowledge with quizzes and surveys. But how do you quantify the rejection of better information? How do you measure confidence in false beliefs? How do you distinguish between people who don't know and people who refuse to know? This chapter explores both the metrics we have and the gaps in our understanding.
The Polarization Paradox
Political polarization has increased dramatically in recent decades, particularly in countries with high internet and social media adoption. Studies show that Democrats and Republicans in the United States, for example, have grown more ideologically distant, more hostile toward each other, and more likely to view the opposing party as a threat to the nation's wellbeing.
But here's the puzzle: polarization on policy issues hasn't increased nearly as much as polarization on identity and affect. What political scientists call affective polarization (hostility toward the other side) has grown far faster than ideological polarization (distance in actual policy preferences). People haven't become that much more extreme in what they want government to do. What's changed is how much they hate the other side and how certain they are that the other side is wrong about everything.
This suggests that we're not seeing a polarization of ideas so much as a polarization of stupidity—each side using AI-enhanced media ecosystems to generate and consume content that demonizes the other, confirms their own righteousness, and makes compromise impossible. The tools designed to connect us are instead perfecting our ability to hate each other.
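The distinction is measurable. Researchers separate the two by pairing "feeling thermometer" questions (how warmly do you feel toward each party?) with policy-position questions. The sketch below shows the arithmetic on a handful of invented survey responses; the field names and numbers are hypothetical, and real analyses, on datasets such as the American National Election Studies, use survey weights and far larger samples.

```python
# Illustrative sketch: separating affective from ideological polarization
# using feeling-thermometer and policy-position survey items.
# All respondents and values below are invented for illustration.

from statistics import mean

# Each respondent: party, 0-100 warmth toward own party and the other party,
# and a policy score from -1 (strongly liberal) to +1 (strongly conservative).
respondents = [
    {"party": "D", "own_warmth": 85, "other_warmth": 20, "policy": -0.4},
    {"party": "R", "own_warmth": 80, "other_warmth": 15, "policy": 0.5},
    {"party": "D", "own_warmth": 75, "other_warmth": 35, "policy": -0.2},
    {"party": "R", "own_warmth": 90, "other_warmth": 10, "policy": 0.3},
]

def affective_polarization(rows):
    """Mean gap between warmth toward one's own party and the other party."""
    return mean(r["own_warmth"] - r["other_warmth"] for r in rows)

def ideological_polarization(rows):
    """Distance between the two parties' mean policy positions."""
    dem_positions = [r["policy"] for r in rows if r["party"] == "D"]
    rep_positions = [r["policy"] for r in rows if r["party"] == "R"]
    return abs(mean(rep_positions) - mean(dem_positions))

print(f"Affective polarization:   {affective_polarization(respondents):.1f}")
print(f"Ideological polarization: {ideological_polarization(respondents):.2f}")
```

Run over decades of survey waves, the first number climbs steeply while the second barely moves; that divergence is the paradox in miniature.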
Misinformation Velocity
MIT researchers found that false news spreads significantly farther, faster, and more broadly than true news on Twitter: true stories took roughly six times as long as false ones to reach 1,500 people. And the effect was more pronounced for political news than for any other category.
Since that study, AI has made generating misinformation vastly easier. The volume of false content has exploded. Deepfakes, AI-generated articles, synthetic social media accounts—all contribute to an information environment where falsehood has structural advantages over truth.
We can measure this acceleration. False stories go viral faster now than they did five years ago. Debunking reaches fewer people. Corrections are less effective. The window between a lie being published and it becoming "common knowledge" has shrunk from days to hours.
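The velocity metric itself is straightforward: for each story, measure how long its sharing cascade takes to reach a fixed audience size, then compare fact-checked false stories against true ones. Here is a minimal sketch, with invented timestamps standing in for real share data:

```python
# Sketch of the velocity metric behind the MIT finding: for each story cascade,
# how long did it take to reach its first 1,500 unique accounts?
# The two cascades below are invented for illustration.

from datetime import datetime, timedelta

def hours_to_reach(cascade_times, threshold=1500):
    """Hours from the first share until the cascade reaches `threshold` shares,
    or None if it never does. `cascade_times` is a sorted list of datetimes."""
    if len(cascade_times) < threshold:
        return None
    return (cascade_times[threshold - 1] - cascade_times[0]).total_seconds() / 3600

# Hypothetical cascades: a false story shared about once a minute,
# a true story shared about once every six minutes.
t0 = datetime(2024, 1, 1)
false_story = [t0 + timedelta(minutes=i) for i in range(2000)]
true_story = [t0 + timedelta(minutes=6 * i) for i in range(2000)]

print(f"False story: {hours_to_reach(false_story):.1f} h to reach 1,500 accounts")
print(f"True story:  {hours_to_reach(true_story):.1f} h to reach 1,500 accounts")
```

On real data the comparison runs across thousands of fact-checked cascades rather than two invented ones, but the quantity being tracked is the same.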
This velocity matters because it creates what researchers call "zombie facts"—false claims that keep circulating even after being thoroughly debunked. By the time the debunking reaches most people, they've already integrated the false claim into their worldview. The correction arrives too late to matter.
Trust Collapse
Edelman's Trust Barometer, which has tracked trust in institutions for more than two decades, shows declining trust across nearly every category it measures: government, media, business, NGOs. In many countries, fewer than 50% of people trust these institutions to do the right thing.
This trust collapse predates AI, but AI has accelerated it. When anyone can generate content that looks official, when deepfakes can show leaders saying things they never said, when AI can fabricate credible-seeming scandals, why trust anything? Better to trust your gut, your tribe, your personal "research"—all of which can be manipulated by the very AI tools enabling your skepticism.
The paradox is that while trust in institutions collapses, trust in personalized, algorithmically curated content increases. People don't trust "the media" but they trust their social media feed. They don't trust "experts" but they trust the AI-generated content that confirms what they already believe.
This isn't a rational recalibration of trust—it's a pattern we'd expect if stupidity were being systematically amplified. Trust is being withdrawn from accountable institutions and redirected toward unaccountable algorithms.
The Knowledge-Confidence Gap
Perhaps the most direct measure of amplified stupidity is the growing gap between epistemic confidence (how certain people are) and actual knowledge (what they actually know). Multiple studies show people are becoming more confident in their beliefs while not becoming more knowledgeable about the topics they're confident about.
Surveys find that people who score lowest on tests of scientific literacy are often most confident in their scientific opinions. People with minimal understanding of economics express the strongest certainty about economic policy. People who can't locate countries on a map have the firmest views on foreign policy.
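The gap can be turned into a number. One simple approach, sketched below on invented responses, is to convert each person's quiz score and self-rated confidence into percentile ranks within the sample and subtract: a positive gap means someone is more confident than their knowledge warrants.

```python
# Sketch of a confidence-knowledge gap score: self-rated confidence percentile
# minus quiz-score percentile. The responses below are invented for illustration.

def percentile_rank(value, population):
    """Percentage of the population scoring at or below `value`."""
    return 100 * sum(1 for x in population if x <= value) / len(population)

# (quiz_score out of 10, self-rated confidence out of 10) per respondent
responses = [(9, 6), (8, 7), (3, 9), (2, 10), (5, 5), (7, 8), (1, 9)]

quiz_scores = [quiz for quiz, _ in responses]
confidences = [conf for _, conf in responses]

for quiz, conf in responses:
    gap = percentile_rank(conf, confidences) - percentile_rank(quiz, quiz_scores)
    print(f"quiz={quiz:2d}/10  confidence={conf:2d}/10  gap={gap:+6.1f}")
```

In surveys of the kind described above, the largest positive gaps cluster among the lowest scorers, which is exactly the pattern amplified stupidity predicts.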
AI contributes to this gap by making it easy to find information that supports any conclusion. Someone can spend hours "researching" a topic using AI tools and emerge with strong opinions based on systematically biased information. They feel informed because they did research. They're actually more ignorant because their research consisted of AI-curated confirmation bias.
This confidence-knowledge gap is precisely what we'd expect from amplified stupidity: people aren't just wrong—they're certain they're right, armed with AI-generated "evidence" that makes their certainty feel justified.
Conspiracy Theory Proliferation
The number and reach of conspiracy theories have exploded in the AI era. QAnon went from 4chan obscurity to millions of followers in just a few years. Anti-vaccine conspiracies reached unprecedented scale during COVID-19. Election denial took hold among large shares of the electorate in some countries.
We can measure this proliferation. Researchers track conspiracy theory mentions on social media, finding exponential growth in both the variety of conspiracies and the number of people engaging with them. More importantly, they find that people are increasingly believing multiple conspiracy theories simultaneously—a pattern suggesting general epistemic dysfunction rather than isolated misinformation.
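The word "exponential" is doing measurable work here. Researchers fit the logarithm of weekly mention counts against time; the slope gives a growth rate and a doubling time. The counts in the sketch below are invented for illustration:

```python
# Sketch of how exponential growth in conspiracy-theory mentions is quantified:
# fit log(weekly mentions) against week number, then read off a weekly growth
# rate and a doubling time. Mention counts below are invented for illustration.

import math

weekly_mentions = [120, 150, 210, 260, 380, 520, 700, 980]  # hypothetical counts
weeks = list(range(len(weekly_mentions)))
log_counts = [math.log(m) for m in weekly_mentions]

# Ordinary least-squares slope of log(mentions) vs. week.
n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(log_counts) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, log_counts))
    / sum((x - mean_x) ** 2 for x in weeks)
)

growth_per_week = math.exp(slope) - 1
doubling_time = math.log(2) / slope

print(f"Estimated growth: {growth_per_week:.0%} per week")
print(f"Doubling time:    {doubling_time:.1f} weeks")
```

A steady positive slope across many topics and platforms, sustained over years, is what the proliferation claim rests on.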
AI hasn't just made individual conspiracies more prevalent—it's created a conspiracy theory ecosystem where different conspiracies reinforce each other. AI-powered recommendation algorithms connect conspiracy theorists across different domains. Someone interested in one conspiracy gets recommended content about others. The result is communities of people who reject mainstream reality across multiple dimensions simultaneously.
Educational Outcomes
If AI is enabling shallow learning and the illusion of understanding, we should see it in educational outcomes. And we do—though the picture is complex. Test scores in many developed countries have plateaued or declined slightly despite unprecedented access to information.
More tellingly, educators report increasing numbers of students who can't distinguish reliable from unreliable sources, who struggle with tasks requiring sustained attention, who rely on AI to complete assignments without learning the underlying material. Students who use AI extensively for homework perform worse on assessments designed to test actual understanding.
This doesn't mean AI is universally harmful for education—we saw in Chapter 4 how it can amplify learning when used well. But the aggregate data suggests that many students are using AI in ways that amplify ignorance rather than knowledge: getting answers without learning, appearing informed without understanding, passing courses without genuine comprehension.
Media Literacy Decline
Studies of media literacy—the ability to critically evaluate information sources—show concerning trends. People are getting worse at distinguishing real news from fake news, even as awareness of "fake news" as a concept increases. The ability to identify credible sources, evaluate evidence, and assess bias is declining, particularly among younger generations who've grown up in algorithm-curated information environments.
Paradoxically, confidence in one's ability to identify misinformation is increasing. People think they're good at spotting fake news while consistently failing tests designed to measure that ability. This is the classic Dunning-Kruger pattern: the less skilled people are at a task, the more they tend to overestimate their skill.
AI amplifies this effect by making fake content increasingly sophisticated. What once looked obviously fake—poor grammar, low-resolution images, implausible claims—now looks professional. The cues people used to identify misinformation no longer work. But rather than becoming more cautious, people become more confident, trusting their gut rather than developing better evaluation skills.
The Replication Crisis Continues
Science itself provides evidence of epistemic dysfunction. The "replication crisis," in which many published studies fail to replicate, has revealed how much of the research literature is unreliable. This predates AI, but AI makes it worse by enabling mass production of low-quality research, p-hacking at scale, and rapid generation of studies designed to reach predetermined conclusions.
More concerning, AI enables the generation of fake studies that look real. Sophisticated language models can produce scientific-seeming papers complete with methods, results, and discussion sections. Some of this fake research enters the literature. More of it circulates on social media, cited by people who don't realize it's fabricated.
The result is an information environment where even scientific evidence—supposedly our most reliable form of knowledge—becomes suspect. Not because science itself is failing, but because AI has made it trivially easy to create things that look like science but aren't.
What the Metrics Show
Taken together, these metrics paint a troubling picture: increasing polarization without corresponding policy disagreement, accelerating misinformation spread, collapsing institutional trust, growing confidence-knowledge gaps, conspiracy theory proliferation, declining media literacy, and corruption of scientific discourse.
These aren't independent problems—they're interconnected symptoms of amplified stupidity. AI gives people tools to remain ignorant while feeling informed, to reject expertise while believing they're being skeptical, to embrace conspiracy theories while thinking they're doing research.
The evidence suggests we're not just experiencing normal growing pains of new technology. We're in the midst of an epistemic crisis where large populations are losing the ability to distinguish truth from falsehood.
But—and this is critical—the problem isn't uniform. Some people and communities are using AI to amplify wisdom. Others are using it to amplify stupidity. The aggregate metrics reflect this battle, and right now, stupidity appears to be winning on several fronts.
The question for the remaining chapters: what do we do about it?
