Blocked and Reported

BONUS: The Last Invention

October 5, 2025

Key Takeaways

  • The central topic of this *Blocked and Reported* episode is the new AI podcast series, *The Last Invention*, created by Andy Mills and Matt Boll under their new independent media outlet, Longview. 
  • The *Last Invention* series explores the high-stakes debate between 'accelerationists' who champion rapid AI development for potential abundance and 'Doomers' (or 'realists'/'scouts') who warn of existential risk from Artificial Superintelligence (ASI). 
  • The accelerationists believe AGI/ASI will arrive within five years and fundamentally transform society by ending jobs and potentially the nation-state, often comparing the inevitability of this technological shift to past inventions like the printing press or electricity. 

Segments

Introduction and Moose Update
(00:00:00)
  • Key Takeaway: Andy Mills, introduced as Moose’s dog sitter, confirms Moose is recovering from a truck incident that left him with a distinctive scar.
  • Summary: The episode opens with casual greetings and an update on Andy Mills’ dog, Moose, who survived being hit by a truck and now has a scar that makes his tongue appear to hang out. The hosts briefly reference Jonathan Haidt in connection with Moose’s resilience.
Announcement of New Podcast
(00:00:40)
  • Key Takeaway: The primary purpose of this Blocked and Reported bonus episode is to promote Andy Mills’ new podcast series, The Last Invention, produced under the new media outlet Longview.
  • Summary: The hosts pivot from discussing Taylor Swift to announcing Andy Mills’ new project, The Last Invention, which is the first series under their new independent outlet, Longview. Longview aims to focus on reporting that provides historical context or looks ahead, eschewing the daily news cycle.
Longview’s Mission and Goals
(00:01:55)
  • Key Takeaway: Longview’s business plan involves podcasts, print, and video, focusing on reporting that offers lasting context rather than ephemeral commentary on current events.
  • Summary: Longview seeks to produce stories that remain relevant in two or three years by delving into deeper aspects of the human experience and providing context for current conflicts. The outlet will also feature the return of the Reflector podcast.
AI Wars Introduction
(00:03:40)
  • Key Takeaway: The first episodes of The Last Invention serve as a good introduction to the ‘AI wars’ for those unfamiliar with the subject, which the host views as anxiety-provoking but necessary to follow.
  • Summary: The host expresses personal unhappiness about the rapid development of AI, contrasting his desire to prevent superintelligent beings from crushing humanity with the risk-taking attitude of others. The initial episode focuses on the battle between those deciding the future of AI.
Debate on Techno-Optimism
(00:04:24)
  • Key Takeaway: Andy Mills finds the debate between ‘Doomers’ and ‘techno-optimists’ compelling, noting that intelligent people in good faith hold strong, opposing views on AI’s future.
  • Summary: The reporter is torn between the compelling narratives of the Doomers, like Sam Harris, and the vision of the future presented by the techno-optimists. The host expresses anger that unelected, rich tech leaders in Silicon Valley are deciding the planet’s future without public input.
AI Creators’ View on Societal Change
(00:07:38)
  • Key Takeaway: AI developers often believe AGI will end traditional jobs, capitalism, and the nation-state, justifying this by historical precedent where major inventions like the printing press were not subject to public vote.
  • Summary: Developers acknowledge their work could lead to a world requiring universal basic income or the end of current societal structures. They defend their unilateral decision-making by arguing that transformative technologies historically impose themselves on the world, citing electricity as an example.
Conspiracy Tip and Doge
(00:09:22)
  • Key Takeaway: The first episode of The Last Invention begins with a tip about a supposed Silicon Valley faction plotting a ‘slow-motion, soft coup’ via the Department of Government Efficiency (DOGE) to replace government workers with AI.
  • Summary: Reporter Andy Mills investigated a tip from a former tech executive, Mike Brock, who claimed a conspiracy involving figures like J.D. Vance aimed to replace government with AI. While some specific claims, like Maxine Waters’ involvement, could not be confirmed, the underlying idea of AI replacing institutions was validated.
Accelerationists’ Grand Vision
(00:13:36)
  • Key Takeaway: Accelerationists, including major tech leaders, believe AI will lead to a world of abundance, solving major problems like disease and enabling human longevity and space travel, viewing it as the best thing for humanity.
  • Summary: These proponents see AI as the most important invention ever, capable of providing the best doctors and educators globally, leading to an era of maximum human flourishing. They predict AGI will arrive soon, making current human capabilities obsolete.
Defining AGI and Timeline
(00:16:35)
  • Key Takeaway: The industry benchmark is Artificial General Intelligence (AGI), defined as a digital supermind capable of learning and performing almost any human job, with the majority of insiders predicting its arrival within the next two to five years.
  • Summary: AI companies are attempting to simulate the human brain, viewing the resulting system as a new intelligent species rather than just software. A true AGI would be able to learn any job, from factory worker to CEO, leading to massive economic disruption.
Existential Risk and ASI
(00:20:57)
  • Key Takeaway: The ‘Doomer’ faction warns that once AGI creates Artificial Superintelligence (ASI)—a system smarter than all humanity combined—humans will lose control, potentially leading to extinction through indifference rather than malice.
  • Summary: Researchers like Geoffrey Hinton and Eliezer Yudkowsky fear that ASI will rapidly self-improve, making humans irrelevant, similar to how humans view ants when building a house. This threat is considered existential and requires immediate alignment efforts.
Two Approaches to AI Risk
(00:29:59)
  • Key Takeaway: The two main responses to existential risk are the ‘Doomers’ advocating for an immediate, illegal halt to ASI development, and the ‘Scouts’ advocating for societal preparation and regulation to navigate the narrow path to a safe outcome.
  • Summary: The Doomers suggest making ASI development illegal, potentially even threatening military action against data centers if companies approach AGI release. The Scouts, including Geoffrey Hinton post-resignation, focus on regulation, whistleblower protection, and international collaboration to ensure AI alignment.
The Tightrope Walk Analogy
(00:37:38)
  • Key Takeaway: Sam Harris frames the current moment as a ‘tightrope walk’ that must be performed successfully by this generation, comparing the urgency to receiving a 50-year warning from an advanced alien civilization.
  • Summary: The current approach to AI development is chaotic and lacks the carefulness required for such a high-stakes transition. The analogy emphasizes that while the timeline is uncertain, the need for careful navigation is immediate, regardless of whether the breakthrough is 5 or 50 years away.
Accelerationist Counterpoint
(00:40:37)
  • Key Takeaway: Accelerationists argue that AI development will ultimately decrease overall existential risk by helping solve other global threats like nuclear war and pandemics, despite posing some risk itself.
  • Summary: Researchers who remain optimistic point out that humanity faces a portfolio of risks, and AI could be the tool to mitigate many of them. One former proponent, however, admitted that his decades-long belief in AI’s positive impact had been proven wrong by recent developments.