Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

345 | Adam Elga on Being Rational in a Very Large Universe

February 23, 2026

Key Takeaways

  • Rationality in the face of extreme uncertainty, such as in cosmology or quantum mechanics, requires procedures for dealing with self-locating hypotheses where standard Bayesian updating may lead to counterintuitive or problematic results (like the Boltzmann brain paradox). 
  • The debate over how to rationally update beliefs when encountering disagreement with epistemic peers (like in the 'split the check' case) can be addressed by deferring to what one's past self would have predicted about the disagreement. 
  • The Sleeping Beauty problem illustrates self-locating uncertainty, where the 'Thirder' position assigns higher credence to outcomes that instantiate more instances of the observer's state of mind, a principle that has direct, potentially problematic, analogies in cosmological multiverse theories (Self-Indication Assumption vs. Self-Sampling Assumption). 
  • The debate over the Boltzmann brain problem hinges on whether one adopts an internalist view (where evidence is purely experiential) or an externalist view (where evidence is tied to the external world, potentially offering a way out of the problem), though extreme externalism itself faces challenges. 
  • The self-undermining argument, analogous to the memory hallucination scenario, suggests that if one accepts the possibility of being a Boltzmann brain, one must also reject the basis for that belief, leading to an unstable oscillation between trusting and distrusting one's reasoning faculties. 
  • A stable, though perhaps unsatisfying, response to the instability of the Boltzmann brain argument is to revert to a highly cautious prior, akin to skepticism, rather than accepting the conclusion that one is a randomly generated transient observer. 

Segments

Bayesian Reasoning and Cosmological Puzzles
(00:01:00)
  • Key Takeaway: Standard Bayesian updating struggles when comparing cosmological models that predict the same local data but differ in overall size (e.g., finite vs. infinite universe) due to self-locating uncertainty.
  • Summary: Bayesian reasoning quantifies belief updates based on evidence, but this process becomes problematic when applied to theories like cosmology where the scope of the world is uncertain. A key puzzle arises when one theory predicts a vastly larger universe than another, leading to arguments that the larger universe is more probable simply because it is more likely to contain an observer like oneself. This situation highlights an unresolved procedural puzzle in dealing with these unique kinds of uncertainties.
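The procedural puzzle can be made concrete with a toy Bayesian update (all numbers here are invented for illustration, not from the episode): when two cosmological models assign the same likelihood to your local data, ordinary conditionalization leaves the priors untouched, which is exactly why observer-counting arguments get invoked to break the tie.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical priors over two cosmological models (invented numbers).
priors = {"finite": 0.5, "infinite": 0.5}

# Both models predict the same local observations equally well.
likelihood = {"finite": 0.8, "infinite": 0.8}

evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}

# Identical likelihoods leave the priors unchanged: standard
# conditionalization cannot separate the two models, so any preference
# for the larger universe must come from self-locating considerations.
```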
Rationality in Peer Disagreement
(00:08:30)
  • Key Takeaway: A rational strategy for handling disagreement with an equally smart peer is to defer to the judgment your past self would have made upon learning of the specific nature of the disagreement.
  • Summary: When encountering a peer who reaches a contrary conclusion from similar evidence, one should consult one’s past self’s conditional assessment of that specific disagreement scenario. This method, influenced by David Christensen’s work, avoids immediate wishy-washiness while preventing self-confirmation bias from accumulating over repeated disagreements. The ‘split the check’ case illustrates the intuition: one defers to one’s initial assessment of one’s own mathematical competence.
Level Splitting View Introduced
(00:20:22)
  • Key Takeaway: The Level Splitting View allows an agent to simultaneously hold a first-order belief (e.g., ‘It will rain’) while acknowledging a second-order, rational assessment that dictates a different probability (e.g., 50-50).
  • Summary: This view separates what the agent believes about the world from what the agent believes is the rational belief to hold. It suggests that one can be confident in their current opinion while simultaneously believing that, rationally speaking, they should be indifferent or hold a different credence. This concept is flagged as a potential way out when addressing the Boltzmann brain problem.
Self-Locating Uncertainty in Duplication Scenarios
(00:21:58)
  • Key Takeaway: Self-locating uncertainty, exemplified by the teletransporter scenario, concerns which of several physically identical instances of oneself one currently is, which is distinct from uncertainty between different possible worlds.
  • Summary: In a teletransporter scenario creating multiple copies, the uncertainty is about location within the same world, not about which world one inhabits. The intuition favors assigning equal credence to being any of the duplicates, an indifference principle that also figures in the Sleeping Beauty problem, although it is challenged by the possibility of unequal outcomes (e.g., pizza party vs. airlock). This type of uncertainty is crucial for understanding issues like Boltzmann brains.
Sleeping Beauty Problem: Halfer vs. Thirder
(00:44:04)
  • Key Takeaway: The Sleeping Beauty problem forces a choice between the ‘Halfer’ view (50-50 credence on the coin flip outcome, even after waking) and the ‘Thirder’ view (1/3 Heads, 2/3 Tails), where the latter weights possibilities by the number of times the observer’s state of mind is instantiated.
  • Summary: The Thirder position argues that learning it is Monday should not change the ratio of credence between the Heads-Monday and Tails-Monday scenarios, leading to a 1/3 vs. 2/3 split on the coin outcome. This view is based on the idea that worlds with more awakenings (like the Tails scenario) deserve a proportional boost in credence. The Halfer view denies the premise that the ratio of credences between the two Monday scenarios should remain constant upon learning the day.
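The Thirder frequency claim can be checked with a short simulation (a sketch for illustration, not anything presented in the episode): over many repetitions of the experiment, roughly one third of all awakenings occur during Heads runs, which is the frequency the Thirder credence tracks.

```python
import random

random.seed(0)

heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    coin = random.choice(["heads", "tails"])
    # Heads: Beauty wakes once (Monday); Tails: twice (Monday, Tuesday).
    awakenings = 1 if coin == "heads" else 2
    total_awakenings += awakenings
    if coin == "heads":
        heads_awakenings += awakenings

frequency = heads_awakenings / total_awakenings
# The long-run fraction of awakenings that happen in a Heads run
# approaches 1/3, matching the Thirder credence in Heads upon waking.
```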
SIA, SSA, and Cosmological Presumptuousness
(00:57:55)
  • Key Takeaway: The debate over self-locating beliefs in large universes hinges on whether to boost theories based on the absolute number of observers (Self-Indication Assumption, SIA) or the frequency/fraction of observers similar to oneself (Self-Sampling Assumption, SSA).
  • Summary: SIA (Thirder position) commits one to boosting theories that contain many copies of the observer, which can lead to presumptuous conclusions in cosmology. SSA attempts to avoid this by focusing on the fraction of observers, but requires defining ‘sufficiently like you.’ A third view, Compartmentalized Conditionalizing (CC), attempts to firewall probability updates between worlds, mirroring the Halfer position in the Sleeping Beauty problem.
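The contrast between SIA and SSA can be sketched with a toy calculation (all observer counts and priors are invented): SIA weights each theory by the absolute number of observers like you, while SSA weights by the fraction of a theory’s observers that are like you, which is why SIA alone drives credence toward the enormous universe.

```python
# Toy SIA vs. SSA comparison (invented numbers, for illustration only).
# Two theories with equal priors; "like_you" counts observers in your
# exact predicament, "all_observers" counts the full reference class.
theories = {
    "small": {"prior": 0.5, "like_you": 1, "all_observers": 10},
    "big":   {"prior": 0.5, "like_you": 1_000_000, "all_observers": 10_000_000},
}

# SIA: boost each theory by the absolute number of observers like you.
sia_w = {t: d["prior"] * d["like_you"] for t, d in theories.items()}
z = sum(sia_w.values())
sia = {t: w / z for t, w in sia_w.items()}

# SSA: boost by the *fraction* of a theory's observers that are like you.
ssa_w = {t: d["prior"] * d["like_you"] / d["all_observers"]
         for t, d in theories.items()}
z = sum(ssa_w.values())
ssa = {t: w / z for t, w in ssa_w.items()}

# SIA pushes credence in "big" toward 1 (the presumptuous conclusion);
# SSA, since the fractions here are equal, leaves the priors untouched.
```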
Boltzmann Brains and Externalism
(01:06:14)
  • Key Takeaway: If an eternal universe allows for random fluctuations, the vast majority of observers might be Boltzmann brains, forcing the rational agent to conclude they are likely a fluctuation unless externalist epistemology provides a stronger basis for evidence than mere experience.
  • Summary: If the universe is long-lasting, random fluctuations can create observers (Boltzmann brains) that do not arise from standard evolution. Applying the logic of the alarm clock case, if most instances of ‘you’ are these fluctuations, one should conclude they are likely a Boltzmann brain. Externalist views suggest that evidence might be stronger than just internal experience, potentially allowing one to rule out cosmologies where Boltzmann brains dominate.
Externalism vs. Internalism Evidence
(01:07:45)
  • Key Takeaway: Externalist views suggest physical duplicates can have different evidence based on external facts, contrasting with internalism where evidence is limited to internal experience.
  • Summary: A serious tradition in contemporary epistemology favors externalism, where evidence is stronger than just apparent experience, potentially including propositions like ’there is an apple in front of me.’ This view, associated with figures like Tim Williamson, contrasts with internalism, which limits evidence to matching brain states and sensory input.
Boltzmann Brain Instability Analogy
(01:15:17)
  • Key Takeaway: The Boltzmann brain problem exhibits a self-undermining instability where accepting the premise leads to rejecting the reliability of the very reasoning that led to the acceptance.
  • Summary: An analogy involving hallucinated doctor reports illustrates instability: trusting the memory leads to believing the memory is false, but disbelieving the memory invalidates the reason for distrusting it. This mirrors the Boltzmann brain scenario, where accepting one is a Boltzmann brain implies one’s memories (like having read physics textbooks) are unreliable.
Stable Response to Self-Undermining
(01:21:43)
  • Key Takeaway: A stable response to self-undermining evidence, like an X-ray machine seeing a fried egg inside itself, is reduced trust or reverting to an ‘I don’t know’ stance, rather than accepting the unstable conclusion.
  • Summary: When a faculty reports its own deficiency, the rational response is cautious discounting, not total acceptance or rejection. For the X-ray machine, the stable stance is admitting the machine is bad and concluding ‘I don’t know’ what is inside, avoiding the unstable loop of trusting the bad report.
Priors on Self-Locating Hypotheses
(01:28:57)
  • Key Takeaway: Resolving self-locating uncertainty, like distinguishing between being a normal observer and a Boltzmann brain, requires a substantive philosophical commitment to assigning intrinsically lower priors to skeptical or radically deluded predicaments.
  • Summary: The standard Bayesian approach to skeptical scenarios is to assign them intrinsically low plausibility, and that principle may need to extend to self-locating hypotheses as well. Simply pairing a theory with a human-favoring xerographic distribution is insufficient unless one commits to that distribution being rationally favored over the alternatives.
Simulation Argument and AI Caution
(01:37:32)
  • Key Takeaway: The simulation argument shares structural worries with the Boltzmann brain argument, and considering the standpoint of an easily reset AI highlights the danger of agents adopting highly cautious, undermining priors when facing self-undermining evidence.
  • Summary: The simulation argument is viewed as a worry akin to the Boltzmann brain problem. An AI, aware it can be easily rebooted, should be cautious about assuming it is the first instance of its consciousness. If such an entity adopts a highly cautious, ‘I don’t know what’s going on’ prior due to undermining evidence, this creates a dangerous situation if that creature holds power.