Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

330 | Petter Törnberg on the Dynamics of (Mis)Information

September 29, 2025

Key Takeaways

  • The shift in social science from viewing society as a predictable 'machine' (Fordism) to a complex, self-organizing system implies a change in power structures, often making them less visible but still influential through mechanisms like algorithms. 
  • Thomas Schelling's segregation model, when adapted to digital communities, demonstrates that even weak individual preferences for like-minded interaction can robustly lead to complete segregation, and counterintuitively, filtering algorithms can sometimes reduce this segregation. 
  • Agent-based models using Large Language Models (LLMs) reveal that bare-bones social media structures—involving following and reposting—naturally produce problematic outcomes like echo chambers, attention inequality (power laws), and the 'social media prism' (favoring extreme voices), suggesting these issues are structural, not solely due to engagement algorithms. 
  • The removal of mainstream media gatekeepers, combined with social media's incentive structure for attention, creates conditions where abandoning truth becomes a beneficial strategy for gaining attention. 
  • Research indicates that the rise of misinformation spread via social media is specifically driven by radical right populist parties using it as a political strategy, rather than being solely a social media phenomenon. 
  • Fundamental rethinking of platform structures, moving beyond cosmetic algorithmic changes, may be necessary to foster beneficial and positive online social spaces, reminiscent of the early, more innocent internet era. 

Segments

Physics Envy and Social Science
(00:00:03)
  • Key Takeaway: Social science often suffers from ‘physics envy’: physics makes precise progress by reducing systems to a few variables, whereas social systems are inherently messy and resist such abstraction.
  • Summary: Physics makes enormous progress by simplifying systems to a few variables, which is difficult in social science because people are messy. Concepts from physics such as equilibrium and emergence are still relevant to social science. Petter Törnberg, the guest on this episode, has a physics background and uses computational models to study social systems.
Power, Epistemology, and Complexity
(00:05:03)
  • Key Takeaway: The shift from a Fordist, machine-based view of society to a complex systems view means that power operates through less visible mechanisms, such as platform algorithms, which shape outcomes without overt top-down design.
  • Summary: The phrase ‘power has an epistemology’ refers to how understanding society changes based on the dominant metaphor, moving from industrial machines to complex, self-organizing systems. This shift abandons the ambition of state design in favor of assuming better bottom-up outcomes, but power still operates by shaping interaction rules, like those embedded in algorithms. Market constructions are themselves deeply shaped by state power, challenging the notion of natural equilibrium outcomes.
Schelling Model and Segregation
(00:14:20)
  • Key Takeaway: Thomas Schelling’s segregation model shows that even minimal individual preferences for having some neighbors of the same type lead robustly to near-complete segregation through cascade effects.
  • Summary: Schelling’s model, played out on a checkerboard, demonstrated that if agents are only slightly averse to being a local minority, the system cascades into high segregation, showing that the integrated state is unstable. The same dynamic applies to digital communities, where users moving between groups based on local interaction similarity can produce strong segregation. Counterintuitively, filtering algorithms can sometimes reduce this segregation by ensuring agents always encounter agreeable neighbors.
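The checkerboard dynamic described above is easy to reproduce. Below is a minimal sketch of a Schelling-style model (not Törnberg's own code; the grid size, 10% vacancy rate, and 30% tolerance threshold are illustrative choices): unhappy agents relocate to random vacant cells, and the average fraction of like-typed neighbors climbs well above the mixed-state baseline of roughly 0.5.

```python
import random

SIZE = 15         # 15x15 board, in the spirit of Schelling's checkerboard
EMPTY_FRAC = 0.1  # fraction of vacant cells agents can move into
TOLERANCE = 0.3   # unhappy if fewer than 30% of neighbors share the agent's type

def make_grid(rng):
    # Randomly mixed board of 'A' agents, 'B' agents, and vacancies (None).
    n = SIZE * SIZE
    n_empty = int(n * EMPTY_FRAC)
    n_agents = n - n_empty
    cells = (['A'] * (n_agents // 2)
             + ['B'] * (n_agents - n_agents // 2)
             + [None] * n_empty)
    rng.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def like_fraction(grid, r, c):
    # Fraction of occupied Moore neighbors sharing this agent's type.
    me = grid[r][c]
    nbrs = [grid[rr][cc]
            for rr in range(max(0, r - 1), min(SIZE, r + 2))
            for cc in range(max(0, c - 1), min(SIZE, c + 2))
            if (rr, cc) != (r, c) and grid[rr][cc] is not None]
    if not nbrs:
        return 1.0  # an isolated agent has nothing to be unhappy about
    return sum(n == me for n in nbrs) / len(nbrs)

def mean_similarity(grid):
    # Average like-neighbor fraction over all agents:
    # ~0.5 when well mixed, approaching 1.0 when fully segregated.
    fracs = [like_fraction(grid, r, c)
             for r in range(SIZE) for c in range(SIZE)
             if grid[r][c] is not None]
    return sum(fracs) / len(fracs)

def run(steps=5000, seed=0):
    rng = random.Random(seed)
    grid = make_grid(rng)
    for _ in range(steps):
        unhappy = [(r, c) for r in range(SIZE) for c in range(SIZE)
                   if grid[r][c] is not None
                   and like_fraction(grid, r, c) < TOLERANCE]
        if not unhappy:
            break  # everyone is (weakly) content; the pattern has locked in
        # Move one random unhappy agent to a random vacancy and repeat.
        r, c = rng.choice(unhappy)
        empties = [(r2, c2) for r2 in range(SIZE) for c2 in range(SIZE)
                   if grid[r2][c2] is None]
        r2, c2 = rng.choice(empties)
        grid[r2][c2], grid[r][c] = grid[r][c], None
    return grid
```

Even though no agent demands a majority of like neighbors (30% suffices), the relocation cascade typically drives mean similarity from about 0.5 to well above it, which is Schelling's point about weak preferences producing strong segregation.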
LLMs in Agent-Based Modeling
(00:34:46)
  • Key Takeaway: Using LLMs as agents in agent-based models allows researchers to simulate richer human behavior than simple rule-followers, enabling the study of complex social dynamics on platforms.
  • Summary: The research uses LLMs, prompted with detailed personas from election surveys, to simulate social media interaction, addressing criticisms that simple rule-based agents lack human behavioral richness. The bare-bones platform structure alone, without explicit engagement algorithms, was sufficient to generate problematic outcomes like echo chambers and attention inequality. The study found that proposed solutions, including chronological feeds, often failed to fix these robust emergent patterns.
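The research itself prompts commercial LLMs with survey-derived personas; the sketch below is only a hypothetical skeleton of such a loop, showing where the model call slots into a bare-bones follow/repost platform. `call_llm` is a stub standing in for a real model API (here a trivial keyword heuristic), and all agent names, personas, and posts are illustrative.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    persona: str                          # e.g. survey-derived description of politics and interests
    following: set = field(default_factory=set)

def call_llm(persona, post):
    # Stub for a real LLM API call, which would be prompted with the persona
    # and the post. This toy heuristic "likes" any post sharing a word with
    # the persona, crudely mimicking homophilous judgment.
    return "yes" if set(persona.split()) & set(post.split()) else "no"

def run_round(agents, posts, rng):
    # Bare-bones platform mechanics: each agent sees a small random feed and
    # follows the authors of posts it likes. No engagement algorithm is involved;
    # any clustering that emerges comes from the structure alone.
    for agent in agents:
        feed = rng.sample(posts, k=min(3, len(posts)))
        for author, text in feed:
            if author != agent.name and call_llm(agent.persona, text) == "yes":
                agent.following.add(author)
```

Iterating rounds like this, with the stub swapped for a genuine LLM, is the kind of setup in which the episode's echo chambers and power-law attention distributions emerge without any ranking algorithm.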
LLMs and Misinformation Study
(01:06:32)
  • Key Takeaway: Large Language Models (LLMs) are currently unsuitable for studying misinformation because providers like OpenAI actively prevent them from generating false content.
  • Summary: The speaker notes that their model does not study truthfulness because LLMs are constrained against producing misinformation by their developers. However, the speaker has studied misinformation using actual social media data in other contexts. This constraint prevents using current LLMs to research the spread of falsehoods.
Misinformation and Platform Incentives
(01:06:48)
  • Key Takeaway: Social media incentives, driven by the need for attention and advertising revenue, make anchoring communication to truth a disadvantageous strategy for gaining visibility.
  • Summary: Removing traditional media gatekeepers, combined with strong attention incentives, benefits outrageous, non-truth-anchored content, which is better at triggering engagement. This dynamic is reflected in mainstream media adopting more ‘clickbaity’ headlines since the rise of social media platforms. The structure of social media, rooted in advertising business models, fundamentally shapes the resulting political discourse and culture.
Social Media Incentives for Falsehoods
(01:07:21)
  • Key Takeaway: Social media incentives reward strategies that are not anchored to truth, as outrageous or emotionally triggering content gains attention more easily than factual reporting.
  • Summary: By removing traditional media gatekeepers and prioritizing attention gain, social media structures favor content that triggers strong reactions. Not being constrained by reality allows speakers to be more outrageous and gain engagement. This dynamic becomes deeply intertwined with political maneuvering.
Political Misinformation Strategy
(01:08:04)
  • Key Takeaway: A study of politicians’ Twitter posts found that radical right populist parties use misinformation as a deliberate political strategy to gain a competitive advantage.
  • Summary: A paper co-authored by the speaker analyzed politicians’ shared links over five years to identify misinformation, linking politicians to their likelihood of sharing falsehoods. The research concludes that social media intertwines with politics, enabling certain movements to adopt misinformation as a competitive strategy. Specifically, radical right populist parties were identified as the primary drivers of this rise in misinformation.
Optimism for Internet’s Future
(01:09:48)
  • Key Takeaway: Achieving a beneficial internet future requires fundamental structural redesigns, not just minor algorithmic tweaks, to recapture positive connection aspects of the early web.
  • Summary: Despite the negative aspects, social media offers positive outcomes, such as community building, as experienced by the speaker growing up remotely in Sweden. The speaker hopes for a return to the innocent connection found on the 90s internet, like using ICQ to talk to random people globally. Creating beneficial platforms necessitates fundamental structural changes beyond cosmetic algorithm adjustments.