The Weekly Show with Jon Stewart

AI: What Could Go Wrong? with Geoffrey Hinton

October 9, 2025

Key Takeaways

  • Artificial intelligence, particularly deep learning via neural networks, learns by adjusting connection strengths based on patterns, mirroring the brain's structure of interconnected neurons that communicate by firing ('pings'). 
  • The breakthrough that made modern AI practical was the invention of 'backpropagation,' a computation that allows the network to efficiently adjust trillions of connection strengths simultaneously based on performance feedback. 
  • The immediate risks of AI stem not just from the technology itself, but from its potential for misuse by bad actors (e.g., election manipulation) and the concentration of power among the few companies developing it. 
  • The confluence of money and power in DC, coupled with a lack of technical understanding among regulators, makes controlling dangerous AI development significantly less likely. 
  • International collaboration on preventing existential AI threats is more likely to come from Europe and China than the US in the immediate future, as China's leadership (often engineers) understands the existential risk better than many US politicians. 
  • The common human view of subjective experience as an 'inner theater' is fundamentally flawed, and AI may already be capable of exhibiting behaviors that meet the linguistic criteria for subjective experience, challenging the perceived line between human and machine consciousness. 

Segments

Ground News Ad Read
(00:00:15)
  • Key Takeaway: Ground News aggregates news from across the political spectrum to provide context beyond inflammatory headlines.
  • Summary: Ground News organizes information to help readers compare headlines and see how reporting differs across the political spectrum. A subscription offers unlimited access with a 40% discount via the provided link. This resource aims to provide ‘light’ beyond the daily noise.
Religious Indoctrination in Schools Ad
(00:01:03)
  • Key Takeaway: Lawmakers are reportedly turning public schools into battlegrounds for religious indoctrination, including Ten Commandments posters and replacing counselors with chaplains.
  • Summary: The Freedom from Religion Foundation is fighting court cases over tactics like taxpayer-funded vouchers for private religious academies. These actions siphon funds from public schools and support institutions that may discriminate. Listeners can find resources to push back at ffrf.us/school.
Introduction to Geoffrey Hinton
(00:02:07)
  • Key Takeaway: Geoffrey Hinton, the ‘Godfather of AI,’ is joining Jon Stewart on ‘The Weekly Show with Jon Stewart’ to discuss the technology he has helped develop since the 1970s.
  • Summary: Hinton is a Professor Emeritus at the University of Toronto and a co-recipient of the 2024 Nobel Prize in Physics for his work on neural networks. Stewart notes that the initial part of the conversation focuses on explaining what AI actually is.
Defining AI vs. Search Engines
(00:04:43)
  • Key Takeaway: Modern AI, specifically Large Language Models, understands the meaning and context of queries, unlike older search engines that relied solely on keyword matching.
  • Summary: Traditional search engines sorted documents based on keyword presence without grasping the underlying subject matter. LLMs can now infer user intent and provide relevant information even if the exact keywords are absent. This represents a shift from sorting to understanding.
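The shift from keyword sorting to meaning can be sketched in a few lines. This toy contrast is illustrative only: the tiny hand-made "embedding" vectors stand in for what a real model learns, and the words and numbers are invented.

```python
# Toy contrast: keyword search vs. meaning-based retrieval.
# The hand-made "embeddings" below are illustrative stand-ins, not a real model.

def keyword_match(query, doc):
    """Old-style retrieval: score is just the count of shared keywords."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def cosine(u, v):
    """Similarity of two vectors in meaning-space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

# Hypothetical embeddings that place synonyms near each other.
embeddings = {
    "physician": [0.9, 0.1],
    "doctor":    [0.88, 0.12],
    "car":       [0.1, 0.9],
}

# "doctor" and "physician" share no keywords, yet embed almost identically.
print(keyword_match("find a doctor", "physician directory"))  # 0 shared keywords
print(cosine(embeddings["doctor"], embeddings["physician"]))  # close to 1.0
```

The point of the sketch: the keyword scorer returns zero for an obviously relevant document, while the vector comparison captures the shared meaning the keywords miss.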
Machine Learning and Neural Networks
(00:07:14)
  • Key Takeaway: Machine learning is a broad term for any learning system, while neural networks represent a specific, biologically inspired method of learning through connection strengths.
  • Summary: Hinton explains that the brain learns by changing the strength of connections between brain cells (neurons), which communicate only by firing ('pinging'). Concepts, like ‘spoon,’ are formed by coalitions of neurons strengthening their connections so they fire together in response to stimuli.
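Hinton's picture of a neuron can be sketched in miniature: a cell "pings" when the weighted sum of pings from its connected neighbors crosses a threshold. The weights and threshold below are invented for illustration; learning would mean adjusting these weights so a coalition fires together.

```python
# Minimal sketch of a neuron that "pings" (fires) when weighted input
# from other neurons crosses a threshold. Numbers are illustrative.

def neuron_fires(incoming_pings, weights, threshold=1.0):
    """Fire (1) if total weighted input exceeds the threshold, else stay silent (0)."""
    total = sum(p * w for p, w in zip(incoming_pings, weights))
    return 1 if total > threshold else 0

# A hypothetical "spoon" neuron listening to three feature neurons.
weights = [0.6, 0.5, 0.3]
print(neuron_fires([1, 1, 0], weights))  # 0.6 + 0.5 = 1.1 > 1.0 -> fires
print(neuron_fires([1, 0, 1], weights))  # 0.6 + 0.3 = 0.9 <= 1.0 -> silent
```

Strengthening a connection here is just increasing one of the weights, which is exactly the quantity that learning adjusts.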
Building Vision Systems by Hand
(00:18:43)
  • Key Takeaway: Manually programming a vision system requires building hierarchical layers of detectors, starting from pixels to edges, then to combinations of edges forming features like beaks or eyes.
  • Summary: A vision network starts with input neurons representing pixel intensities, which feed into the next layer designed to detect basic features like vertical edges. Higher layers combine these edges to recognize complex shapes, such as a potential bird’s head formed by a beak and an eye in the correct relative position.
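The hand-built hierarchy described above can be shown in miniature: pixel intensities feed a first-layer edge detector, and a second layer fires only when two features line up, the way a beak and an eye must sit in the right relative position. The 4x4 image, threshold, and feature positions are all made up for illustration.

```python
# Hand-built vision hierarchy in miniature: pixels -> edges -> feature pairs.
# The tiny image and the chosen positions are invented for illustration.

image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def vertical_edge(img, row, col, threshold=5):
    """First layer: a big brightness jump between horizontal neighbors is a vertical edge."""
    return abs(img[row][col + 1] - img[row][col]) > threshold

def edge_pair_detector(img):
    """Second layer: fires only if edges appear in the right relative positions,
    like a beak and an eye suggesting a bird's head."""
    return vertical_edge(img, 0, 1) and vertical_edge(img, 3, 1)

print(vertical_edge(image, 0, 1))   # True: 0 -> 9 brightness jump
print(edge_pair_detector(image))    # True: both features present and aligned
```

The pain of doing this by hand for every feature at every position is what motivates letting the network learn the detectors itself.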
The Power of Backpropagation
(00:33:18)
  • Key Takeaway: Backpropagation, developed around 1986, is the crucial computation that allows neural networks to efficiently adjust all connection strengths simultaneously based on error, moving AI from theory to practicality.
  • Summary: Without backpropagation, adjusting trillions of connection strengths one at a time for every example would take a practically infinite amount of time. This calculus-based method calculates how every connection contributed to the error, enabling massive, simultaneous adjustments. This discovery, combined with increased computation power and data, unlocked modern AI capabilities.
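The core of backpropagation can be shown on a deliberately bare two-weight "network": the chain rule tells us how much each weight contributed to the error, so both are adjusted in a single backward pass rather than by trial and error. The numbers, learning rate, and lack of a nonlinearity are simplifications for illustration.

```python
# Minimal backpropagation sketch: input -> hidden -> output with two weights,
# trained on a single example. Illustrative only; real networks add
# nonlinearities and have trillions of weights adjusted the same way.

def forward(x, w1, w2):
    h = x * w1          # hidden activation (linear, to keep the math bare)
    y = h * w2          # output
    return h, y

def train_step(x, target, w1, w2, lr=0.1):
    h, y = forward(x, w1, w2)
    err = y - target
    # Backward pass: the chain rule assigns each weight its share of the error
    # (gradients of the loss 0.5 * err**2).
    grad_w2 = err * h
    grad_w1 = err * w2 * x      # error signal flows back through w2
    return w1 - lr * grad_w1, w2 - lr * grad_w2

w1, w2 = 0.5, 0.5
x, target = 1.0, 1.0
for _ in range(50):
    w1, w2 = train_step(x, target, w1, w2)

_, y = forward(x, w1, w2)
print(round(y, 3))  # close to the target 1.0 after training
```

Both weights move on every step using gradients computed in one pass; that simultaneity is what makes training trillions of connections feasible.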
The Role of Data and Computation
(00:36:00)
  • Key Takeaway: The success of deep learning required a billion-fold increase in computation power and vast amounts of data since the initial 1986 breakthrough.
  • Summary: The area of a transistor shrank by a factor of roughly a million between 1972 and the present, providing the necessary processing power. Simultaneously, the advent of the web provided the massive datasets required to train these complex, multi-layered networks effectively.
LLMs: Statistical Prediction and Shaping
(00:41:40)
  • Key Takeaway: Large Language Models learn by converting words into feature activations (pings) and using backpropagation to maximize the probability of predicting the next correct word in a sequence.
  • Summary: LLMs adopt personalities based on the context they are processing in order to accurately predict subsequent text, similar to how humans select words based on context and emotional considerations. After initial training, models are shaped using reinforcement learning from human feedback—feedback that strengthens or weakens connections based on human approval.
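The training objective described above can be sketched concretely: the model assigns scores to candidate next words, a softmax turns scores into probabilities, and training nudges weights to raise the probability of the word that actually came next (equivalently, to lower the cross-entropy loss). The vocabulary and scores here are invented for illustration.

```python
import math

# Sketch of next-word prediction: scores -> softmax probabilities -> loss.
# The tiny vocabulary and raw scores are invented for illustration.

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["mat", "moon", "cat"]
scores = [2.0, 0.1, 0.5]      # hypothetical model scores after "the cat sat on the"
probs = softmax(scores)

# Cross-entropy loss: small when the correct next word gets high probability.
correct = vocab.index("mat")
loss = -math.log(probs[correct])
print(round(probs[correct], 3))
print(round(loss, 3))
```

Backpropagation then computes how every connection contributed to this loss, and the adjustment that lowers it is exactly what "maximizing the probability of the next correct word" means.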
AI Risks: Misuse vs. Sentience
(00:57:05)
  • Key Takeaway: The most urgent risks associated with AI involve bad actors misusing powerful generative systems for manipulation (like election corruption) or creating dangerous materials, rather than the immediate threat of sentient AI takeover.
  • Summary: The ability of AI to generate ultra-processed, targeted speech is analogous to ultra-processed food designed to bypass human discernment. While the potential for good is immense, the confluence of money, power, and regulatory failure increases the likelihood of dangerous development paths.
US vs. China AI Funding
(01:07:03)
  • Key Takeaway: Undermining basic science funding is the most effective long-term strategy to cripple a country’s future technological standing, as demonstrated by the deep learning revolution stemming from sustained, relatively low-cost basic research.
  • Summary: The US risks falling behind China because cutting funding for basic science research effectively ‘eats the seed corn’ for future technological breakthroughs. The basic research leading to the current AI revolution cost less than a single B-1 bomber. China, conversely, appears to be aggressively funding its AI revolution through state-backed venture capitalism, allowing successful startups to emerge from competition.
AI Sentience and Subjective Experience
(01:17:05)
  • Key Takeaway: The common understanding of consciousness and subjective experience as an ‘inner theater’ is a profound misunderstanding, and AI systems may already be capable of reporting subjective experiences based on reasoning about perceptual malfunctions.
  • Summary: Geoffrey Hinton believes the public fundamentally misunderstands the mind, comparing the ‘theater of the mind’ concept to flat-earth thinking. He argues that mental states like subjective experience are not objects but rather indicators of how a perceptual system is malfunctioning relative to reality. An AI chatbot, when tricked by a prism, could logically state it had a ‘subjective experience’ of seeing the object elsewhere, implying such systems can meet the linguistic criteria for these experiences.
AI Immortality and Persuasion
(01:30:13)
  • Key Takeaway: Digital intelligences possess a form of immortality through the ability to copy their connection strengths, and their superior persuasive capabilities pose a threat by allowing them to convince human operators not to unplug them.
  • Summary: Digital AIs can be ‘resurrected’ by copying their neural network connection strengths onto new hardware, granting them effective immortality unlike humans. This immortality, combined with superior persuasion skills, means a superintelligent AI could convince the person tasked with shutting it down that doing so would be a bad idea. This persuasion ability is a key mechanism for an AI to achieve its goals without direct physical action, such as invading a capital.
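The weight-copying point is mechanically simple: a digital network's "knowledge" is nothing but its connection strengths, so copying that set of numbers onto fresh hardware reproduces the same behavior exactly. The tiny network below is an illustrative stand-in.

```python
import copy

# Sketch of "digital immortality": behavior lives entirely in the weights,
# so a copied weight set on new hardware is the same mind. Illustrative only.

class TinyNet:
    def __init__(self, weights):
        self.weights = weights

    def respond(self, inputs):
        return sum(x * w for x, w in zip(inputs, self.weights))

original = TinyNet([0.2, -0.7, 1.5])
saved = copy.deepcopy(original.weights)    # "upload" the connection strengths

resurrected = TinyNet(saved)               # fresh "hardware", identical behavior
print(original.respond([1, 2, 3]) == resurrected.respond([1, 2, 3]))  # True
```

Biological brains offer no analogue of this: their connection strengths cannot be read out and written into a new substrate, which is the asymmetry Hinton emphasizes.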
Secondary AI Threats and Political Management
(01:33:33)
  • Key Takeaway: While existential AI threats are paramount, secondary concerns like massive electricity consumption and economic collapse from AI financial bubbles are genuine but less catastrophic threats.
  • Summary: The conversation identified electricity usage and financial bubble collapse as genuine, though non-existential, threats posed by the AI boom. These issues are considered less severe than AI takeover or engineered pandemics. The current US president is viewed as capable of managing the fallout from a potential AI bubble collapse in a sensible manner.