Acquired

Google: The AI Company

October 6, 2025

Key Takeaways

  • The entire modern AI revolution is predicated on Google's 2017 Transformer invention; the company had employed nearly all of the field's leading talent and possessed superior infrastructure (TPUs) years before its rivals. 
  • Google's early AI efforts, starting around 2001 with engineers like Georges Harik and Noam Shazeer, were rooted in the contrarian belief that data compression is technically equivalent to understanding, leading to early products like 'did-you-mean' spelling correction. 
  • The 2012 success of AlexNet, which dramatically outperformed competitors in the ImageNet competition using off-the-shelf NVIDIA GPUs, marked the start of the modern deep learning era and set NVIDIA on its path to becoming an AI leader. 
  • The 2014 acquisition of DeepMind, together with the earlier DNN Research deal that brought Ilya Sutskever to Google, concentrated future leaders like Demis Hassabis under one roof and directly catalyzed the talent density that produced the Transformer and subsequent LLM breakthroughs. 
  • Google's acquisition of DeepMind for $550 million in 2014 secured a team whose foundational work, including the creation of the Transformer architecture, would become central to modern AI, despite internal friction with other Google AI efforts like Google Brain. 
  • The development of Google's custom Tensor Processing Unit (TPU) was a direct, urgent response to the massive computational demands of scaling neural networks, demonstrating Google's commitment to building proprietary infrastructure to support its AI ambitions. 
  • The founding of OpenAI in 2015 by key researchers like Ilya Sutskever, fueled by Elon Musk's desire for an open, non-profit AI research lab independent of corporate control (Google/Facebook), positioned it to capitalize on breakthroughs, like the Transformer, that Google later failed to productize. 
  • Google's failure to immediately capitalize on the Transformer, which it invented, created a five-year window that allowed OpenAI, fueled by Microsoft's capital and cloud infrastructure, to become the existential threat to Google Search. 
  • Google's internal hesitation to launch a consumer-facing chatbot stemmed from significant business model risks associated with cannibalizing search advertising revenue and navigating decades of legal scrutiny regarding publisher disintermediation. 
  • Waymo, Google's self-driving car project, validated the software-first, multi-sensor approach pioneered by Sebastian Thrun's Stanford team, achieving massive safety improvements and reaching operational scale that suggests it could become another Google-sized business for Alphabet. 
  • Waymo represents a potential Google-sized opportunity for Alphabet, driven by the massive national cost savings associated with accident reduction. 
  • Google's response to the AI disruption, marked by the rapid development and deployment of multimodal models like Gemini and the unification of its AI research teams, demonstrates an aggressive pivot to operate at 'AI speed.' 
  • Google Cloud's success, driven by embracing multi-cloud strategies and leveraging proprietary TPUs, is strategically vital as it serves as the necessary distribution mechanism for Google's entire AI ecosystem, a pillar no other major AI model developer possesses. 
  • The research for the "Google: The AI Company" episode of Acquired heavily relied on established works like Steven Levy's "In the Plex," Parmy Olson's "Supremacy," and Cade Metz's "Genius Makers." 
  • Several key figures involved in Google's AI history, DeepMind, and the development of Gemini, including Demis Hassabis and Jonathan Ross (original TPU team member and founder of Groq), were acknowledged as research contributors. 
  • The hosts promoted an upcoming 10th-anniversary celebration for Acquired on October 20th and directed listeners to other relevant episodes, including their Alphabet series and the ACQ2 episode featuring Shopify's Tobi Lutke. 

Segments

Google’s Innovator’s Dilemma
(00:01:00)
  • Key Takeaway: Google faces a classic innovator’s dilemma where its highly profitable, monopolistic Search business is threatened by the new, superior product built upon its own foundational AI invention, the Transformer.
  • Summary: The core conflict is whether Google should risk cannibalizing its $140B annual profit engine in Search to fully embrace the AI wave it helped create. The entire current AI revolution, powering systems like ChatGPT, stems from Google’s 2017 Transformer paper. Google possesses the necessary assets—Gemini, TPUs, and massive scale—but hesitates to disrupt its cash cow.
Google’s AI Company DNA
(00:07:55)
  • Key Takeaway: Larry Page viewed Google as an artificial intelligence company from its founding, a perspective influenced by his father’s contrarian AI research background.
  • Summary: PageRank itself can be classified as a statistical AI method, and Larry Page stated in 2000 that the ultimate search engine would require artificial intelligence to perfectly understand user intent. This foundational belief contrasts with the later perception of Google as purely an advertising or search company.
Early Language Model Insights
(00:10:45)
  • Key Takeaway: A 2001 lunch conversation between Google engineers established the profound idea that compressing data is technically equivalent to understanding it, foreshadowing modern LLM concepts.
  • Summary: Georges Harik proposed that successfully compressing and reinstantiating data implies the underlying force understands the information’s meaning. This concept led Noam Shazeer and Harik to develop PHIL (Probabilistic Hierarchical Inferential Learner), Google’s first language model, which was later used to power the ‘did-you-mean’ feature and AdSense content matching. (See the sketch below.)
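To make the compression-equals-understanding idea concrete, here is a minimal, self-contained sketch (the toy model and vocabulary are our own illustration, not PHIL): by Shannon's source-coding result, a token predicted with probability p costs -log2(p) bits under an ideal entropy coder, so a model that predicts text better literally compresses it into fewer bits.

```python
import math

def bits_to_encode(tokens, predict):
    """Bits an ideal entropy coder needs for `tokens` under model `predict`.

    Each token predicted with probability p costs -log2(p) bits, so better
    prediction means fewer bits: compression as a proxy for understanding.
    """
    total, context = 0.0, []
    for token in tokens:
        total += -math.log2(predict(context, token))
        context.append(token)
    return total

vocab = ["the", "cat", "sat"]

def uniform(ctx, tok):
    # Knows nothing: every token is equally likely.
    return 1.0 / len(vocab)

def pattern_aware(ctx, tok):
    # Has "understood" that "cat" tends to follow "the" (0.8 + 0.1 + 0.1 = 1).
    if ctx and ctx[-1] == "the":
        return 0.8 if tok == "cat" else 0.1
    return 1.0 / len(vocab)

text = ["the", "cat", "the", "cat"]
print(bits_to_encode(text, uniform))        # ~6.3 bits
print(bits_to_encode(text, pattern_aware))  # ~3.8 bits: same text, fewer bits
```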
Jeff Dean’s Engineering Impact
(00:16:35)
  • Key Takeaway: Jeff Dean’s engineering prowess enabled the production deployment of early, computationally expensive language models by re-architecting them for massive parallelization on Google’s distributed infrastructure.
  • Summary: Jeff Dean famously rewrote the architecture for the language model used in Google Translate, reducing sentence translation time from 12 hours to 100 milliseconds by enabling parallel processing across CPUs. This success demonstrated that Google’s infrastructure could support large-scale language models, paving the way for future AI applications in search and ads.
Hiring AI Academics
(00:24:05)
  • Key Takeaway: Google formalized the strategy of integrating leading AI academics like Sebastian Thrun and Geoff Hinton into the company part-time, leading directly to the creation of Google X and Google Brain.
  • Summary: Sebastian Thrun, hired in 2007, successfully created Google Maps’ Ground Truth data from scratch, convincing leadership to bring in academics like Geoff Hinton to work on projects while maintaining academic posts. This strategy led to the establishment of Google X and the subsequent formation of Google Brain in 2011.
The Deep Learning Breakthrough
(00:30:58)
  • Key Takeaway: Geoff Hinton’s lineage traces back to George Boole, and his work on deep neural networks, which was initially considered heretical, became viable due to increasing computational power.
  • Summary: Hinton, the great-great-grandson of Boolean algebra inventor George Boole, championed deep learning, which contrasts with traditional symbolic AI logic. His team’s 2012 AlexNet entry in the ImageNet competition achieved a 15% error rate (a 40% relative improvement) by running deep neural networks on NVIDIA GPUs.
Google Brain and the Cat Paper
(00:37:29)
  • Key Takeaway: Google Brain, launched in 2011 by Andrew Ng, Jeff Dean, and Greg Corrado, proved the viability of large-scale deep learning on distributed CPUs via the ‘cat paper,’ unlocking massive revenue potential for YouTube and core products.
  • Summary: The team used the DistBelief system to train a nine-layer neural network on 16,000 CPU cores using unlabeled YouTube data, resulting in a neuron that spontaneously recognized cats. This unsupervised learning breakthrough provided the foundation for modern recommender systems across YouTube, search, and ads, generating hundreds of billions in revenue.
DNN Research Acquisition
(00:52:21)
  • Key Takeaway: Google acquired DNN Research (the company formed by Hinton, Sutskever, and Krizhevsky) for $44 million after a four-way auction involving Baidu, Microsoft, and DeepMind, cementing Google’s lead in deep learning talent.
  • Summary: The researchers chose Google over other bidders because they preferred the environment for continuing their work, stopping the auction process at $44 million. This acquisition immediately turbocharged the efforts within Google Brain, validating the investment in AI research across the company.
DeepMind Acquisition Catalyst
(00:59:56)
  • Key Takeaway: Google’s 2014 acquisition of DeepMind for $550 million was a tectonic shift in the AI landscape, directly leading to the talent concentration that produced the Transformer and subsequent foundational models.
  • Summary: DeepMind, founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman, was an obscure London-based AI company at the time of acquisition, illustrating how far outside mainstream tech AI research was in 2014. The acquisition ultimately brought key figures like Hassabis and Sutskever under one corporate roof at Google, setting the stage for future breakthroughs.
DeepMind Founding and Early Investors
(01:02:12)
  • Key Takeaway: DeepMind was founded in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman with the ambitious goal to ‘solve intelligence’ and secured seed funding from Peter Thiel’s Founders Fund.
  • Summary: DeepMind was founded by Demis Hassabis (a former video game developer), Shane Legg (who popularized the term AGI), and Mustafa Suleyman. Their initial seed round of about $2 million was led by Peter Thiel’s Founders Fund after Hassabis pitched Thiel obliquely through a chess discussion at the Singularity Summit. Elon Musk also became an early investor after Hassabis convinced him of the existential risk posed by AI, even on Mars.
Facebook’s Attempted Acquisition
(01:15:35)
  • Key Takeaway: Mark Zuckerberg attempted to acquire DeepMind for up to $800 million in late 2013, offering founders twice the potential payout compared to Google’s offer.
  • Summary: Mark Zuckerberg, having established FAIR (Facebook AI Research) with Yann LeCun, tried to buy DeepMind, reportedly offering up to $800 million. The founders resisted Facebook’s demand for full control, contrasting with Zuckerberg’s flexibility on other terms like allowing Yann LeCun to remain based in New York. Elon Musk countered the Facebook offer by proposing to buy DeepMind using Tesla stock, but DeepMind rejected this as Elon wanted them focused exclusively on autonomous driving.
Larry Page Secures DeepMind
(01:19:11)
  • Key Takeaway: Larry Page learned about DeepMind serendipitously while flying with Elon Musk and Luke Nosek, leading to a deal based on shared vision rather than product integration.
  • Summary: Larry Page discovered DeepMind when he overheard Musk and an investor reading an update about DeepMind’s Atari Breakout breakthrough on a private jet. Page felt a strong kinship with Hassabis’s pure AI mission, unlike other suitors who wanted the team focused on specific products like autonomous driving. Google acquired DeepMind for $550 million in 2014, establishing an independent oversight board, a move that proved highly beneficial given DeepMind’s subsequent achievements like AlphaGo.
Google’s Infrastructure Response
(01:40:46)
  • Key Takeaway: Google rapidly adopted GPUs and developed the custom Tensor Processing Unit (TPU) to handle the computational load of neural networks, avoiding reliance solely on NVIDIA.
  • Summary: Google initially ran machine learning models on CPUs until researchers like Alex Krizhevsky forced the adoption of GPUs, leading to a $130 million order for 40,000 NVIDIA GPUs in 2014. The massive scaling required for features like speech recognition prompted Google to design the TPU, an ASIC optimized for matrix multiplication, which was deployed in just 15 months by fitting it into existing server racks like a hard drive.
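To make concrete why an ASIC ‘optimized for matrix multiplication’ pays off, here is a back-of-envelope sketch (the layer sizes are illustrative, not Google’s actual workloads): a dense neural-network layer is essentially one large matmul, and counting its multiply-accumulates shows the scale a TPU’s systolic array is built to absorb.

```python
import numpy as np

# A dense layer's core computation is a single matrix multiply.
batch, d_in, d_out = 256, 4096, 4096                  # illustrative sizes
x = np.random.randn(batch, d_in).astype(np.float32)   # activations
W = np.random.randn(d_in, d_out).astype(np.float32)   # weights
y = x @ W                                             # forward pass, whole batch

macs = batch * d_in * d_out                           # multiply-accumulate ops
print(f"{macs / 1e9:.1f} billion MACs")               # ~4.3 billion for ONE layer
# A systolic array streams exactly these MACs through fixed-function hardware,
# which is why a matmul-specialized ASIC outruns general-purpose CPUs here.
```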
The Transformer Architecture Emerges
(01:51:32)
  • Key Takeaway: The 2017 paper ‘Attention Is All You Need’ introduced the Transformer architecture, a highly parallelizable model that superseded LSTMs and became the foundation for modern large language models.
  • Summary: The Transformer was developed by Google Brain researchers seeking an architecture that maintained context better than LSTMs but allowed for greater parallelization to leverage new hardware. Co-author Noam Shazeer was instrumental in rewriting the codebase, making the Transformer decisively outperform LSTMs and demonstrate superior scaling with increased model size. This paper, which became the seventh most cited of the 21st century, was published openly, a decision that paradoxically enabled competitors like OpenAI to leverage Google’s breakthrough. (See the sketch below.)
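For reference, the paper’s core operation is scaled dot-product attention. The minimal NumPy sketch below (toy sizes, a single head, none of the paper’s full multi-head machinery) shows why it parallelizes where an LSTM cannot: the whole sequence is handled in a few matrix multiplies rather than a token-by-token loop.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, per the 2017 paper.

    Every position attends to every other position in one batch of matmuls,
    so the whole sequence is processed in parallel, unlike an LSTM, which
    must step through tokens sequentially. That property suited GPUs/TPUs.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                             # 4 tokens, 8 dims each
out = scaled_dot_product_attention(x, x, x)             # self-attention: Q=K=V=x
print(out.shape)                                        # (4, 8)
```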
OpenAI’s Formation and Early Focus
(02:03:57)
  • Key Takeaway: Elon Musk, frustrated by OpenAI’s initial strategy of copying DeepMind’s game-focused research, issued an ultimatum demanding control; when the board refused, his departure left the non-profit scrambling for funding.
  • Summary: Elon Musk stepped away from OpenAI in early 2018 after becoming frustrated that the organization was merely trying to replicate DeepMind’s research successes in games like Dota 2. The initial funding for the non-profit OpenAI was pledged at $1 billion, though only about $130 million was collected initially, sufficient for early research salaries. The earlier departure of key Google researchers, including Ilya Sutskever, to join OpenAI had been a major blow to Google’s internal AI leadership.
OpenAI’s Post-Transformer Pivot
(02:03:23)
  • Key Takeaway: Elon Musk’s ultimatum and subsequent exit from OpenAI created a financial panic that catalyzed the organization’s pivot toward expensive, large-scale Transformer models.
  • Summary: Following the Transformer paper, Elon Musk became frustrated with OpenAI’s strategy and demanded full control, threatening to pull funding; when the board refused, the resulting financial instability likely forced OpenAI to consider a for-profit structure to fund the massive compute required for LLMs. OpenAI released GPT-1 in June 2018, applying the Transformer architecture to pre-training on general internet text followed by fine-tuning for specific tasks. This strategic shift required capital far exceeding what a nonprofit structure could sustain.
Microsoft’s Strategic Investment
(02:08:51)
  • Key Takeaway: Reid Hoffman leveraged his Microsoft board seat to broker a $1 billion investment from Satya Nadella, establishing OpenAI LP as a captive for-profit entity.
  • Summary: Sam Altman secured crucial funding by meeting with Microsoft CEO Satya Nadella at the Sun Valley conference; the deal, announced in July 2019, involved $1 billion in cash and Azure credits. Microsoft received an exclusive license to use OpenAI’s technology in its products, formalizing the structure of the modern OpenAI nonprofit/for-profit hybrid. This partnership provided OpenAI with the necessary infrastructure and capital to pursue the expensive scaling of large language models.
GPT Era and Productization
(02:15:41)
  • Key Takeaway: GPT-2 and GPT-3 demonstrated the potential of large language models, but productization only began in earnest with Microsoft’s integration of OpenAI’s Codex, a GPT-3 descendant, into GitHub Copilot.
  • Summary: GPT-2 (2019) showed promise but required developer-level interaction, while GPT-3 (2020) was capable enough to approach passing the Turing test, increasing VC interest. The first major productization of OpenAI technology was GitHub Copilot in 2021, which began fundamentally changing software development practices. This success was followed by Microsoft investing another $2 billion into OpenAI.
Google’s Internal AI Stagnation
(02:20:27)
  • Key Takeaway: Google researchers, including Noam Shazeer, had working internal chatbots like Meena and LaMDA years before ChatGPT, but leadership hesitated due to the existential threat to the search advertising revenue model.
  • Summary: The authors of the Transformer paper immediately planned to apply the technology beyond translation, but Google leadership did not pursue a full chatbot interface to search for years. Internal models like Meena and LaMDA existed but were deemed un-shippable due to safety concerns and the direct conflict with Google’s core business of linking users to advertisers. Google’s caution was amplified by the past negative publicity surrounding Microsoft’s Tay chatbot launch.
The ChatGPT Shockwave
(02:28:22)
  • Key Takeaway: ChatGPT achieved unprecedented user adoption, reaching an estimated 100 million users within two months, immediately signaling to Google leadership that AI was a disruptive, existential threat, not just a sustaining innovation.
  • Summary: OpenAI launched the research preview of ChatGPT on November 30, 2022, and it rapidly acquired 1 million users within a week and 100 million by the end of January 2023. This event triggered a ‘Code Red’ at Google, forcing a strategic pivot from viewing AI as an entrenching technology to recognizing it as a disruptive force. Microsoft immediately capitalized on this by announcing a new Bing powered by OpenAI, explicitly stating the race for search had begun.
Google’s AI Reorganization and Gemini
(02:40:20)
  • Key Takeaway: Sundar Pichai consolidated Google Brain and DeepMind into Google DeepMind, appointing Demis Hassabis as CEO of the unified AI division with a mandate to focus on a single, unified model: Gemini.
  • Summary: Following the Bard launch failure and the release of GPT-4, Google unified its two primary AI labs to eliminate internal fragmentation and focus resources. This move put DeepMind at the helm of Google’s AI efforts, signaling a cultural shift toward faster product shipping, in contrast with the previously cautious internal development. Naming the consumer product Gemini after the model itself suggests Google views the offering as fundamentally technology-driven, similar to Gmail.
Waymo’s Autonomous Progress
(02:46:57)
  • Key Takeaway: Waymo’s multi-sensor approach, which proved successful in the 2005 DARPA Grand Challenge, has resulted in a 91% reduction in crashes with serious injuries compared to human drivers in city driving.
  • Summary: Waymo’s early success stemmed from using commodity cameras and clever software algorithms to clean noisy sensor data, a very ‘Googly’ approach compared to hardware-heavy competitors like Carnegie Mellon. After years of development, Waymo launched driverless commercial rides in Phoenix in 2020 and now operates extensively in San Francisco, reportedly exceeding Lyft’s gross bookings in that city. The company’s commitment to redundant sensing (LiDAR, radar, cameras) is positioned as the only path to achieving the necessary safety bar for full autonomy.
Waymo’s Economic Value
(03:05:34)
  • Key Takeaway: Accident cost savings alone could exceed $420 billion annually in the U.S. if Waymo achieves a 10x reduction in serious crashes.
  • Summary: The value of Waymo extends beyond ride-sharing to include the massive economic benefit derived from accident reduction. CDC data suggests total annual crash costs in the U.S. reached $470 billion in 2022. Waymo’s investment of $10 to $15 billion appears small compared to this potential national cost savings.
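As a back-of-envelope check of that claim, using only the figures cited above (the 10x reduction is the hypothetical input, not a reported result):

```python
# Figures as cited in the episode; the 10x reduction is the assumption.
annual_crash_cost = 470e9                        # CDC estimate, U.S., 2022
reduction_factor = 10                            # hypothetical 10x fewer crashes
savings = annual_crash_cost * (1 - 1 / reduction_factor)
print(f"${savings / 1e9:.0f}B saved per year")   # ~$423B, the "$420B+" above
```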
Google AI Reorganization and Gemini
(03:08:16)
  • Key Takeaway: Sundar Pichai mandated the unification of Google Brain and DeepMind into one team standardizing on the Gemini model, with Sergey Brin returning to work on it.
  • Summary: In mid-2023, Google merged its AI research divisions and standardized development around Gemini, signaling an ‘all-hands-on-deck’ approach. Key figures like Jeff Dean and Noam Shazeer became co-technical leads for the Gemini team. Google quickly launched AI Overviews in Search, demonstrating the immediate operational scale of their LLM inference capabilities.
Gemini Rapid Iteration Pace
(03:11:04)
  • Key Takeaway: Google demonstrated an NVIDIA-like pace by releasing Gemini 1.5 with a 1 million token context window just two months after the initial Gemini announcement.
  • Summary: The pace of Gemini development is extremely fast, moving from initial public access in December 2023 to Gemini 1.5 in February 2024. The 1.5 version introduced a market-leading 1 million token context window, enabling new use cases previously unaddressable by competitors. This rapid shipping cadence suggests Google is operating at peak velocity in the AI race.
Google’s Current Financial Strength
(03:17:51)
  • Key Takeaway: Google generated $140 billion in profit over the last 12 months, making it the most profitable tech company, and has shifted from cash accumulation to deploying capital for AI CapEx.
  • Summary: Google’s core business remains exceptionally strong, generating $370 billion in revenue and $140 billion in profit over the trailing twelve months. The company’s cash reserves have decreased as they aggressively deploy capital into AI data center build-outs, similar to Meta and Microsoft. This financial strength allows them to fund the AI race while simultaneously returning capital to shareholders via buybacks and dividends.
Google Cloud’s Strategic Evolution
(03:23:28)
  • Key Takeaway: Google Cloud pivoted from an opinionated Platform-as-a-Service (PaaS) to an Infrastructure-as-a-Service (IaaS) model, which, combined with its TPU advantage, makes it strategically crucial for AI distribution.
  • Summary: After a slow start due to its initial opinionated PaaS approach, Google Cloud adopted an IaaS model and hired enterprise expert Thomas Kurian, leading to $50 billion in annual run rate revenue. Cloud is now the essential distribution mechanism for Google’s AI efforts, allowing external users access to their proprietary TPUs, which are abundant compared to the GPU scarcity faced by competitors.
Bull Case: Google’s AI Pillars
(03:32:52)
  • Key Takeaway: Google possesses all four critical pillars of AI success—model, chips, cloud, and application distribution—a combination unmatched by any other single competitor.
  • Summary: Google’s distribution via Search and YouTube provides an unparalleled front door to funnel users into new AI products like AI Overviews and AI Mode. Furthermore, Google is the only foundational model maker with self-sustaining funding, unlike competitors who rely on external capital. The cost advantage from producing their own TPUs, avoiding the high ‘NVIDIA tax’ paid by others, makes them the potential low-cost producer of AI tokens.
Bear Case: Value Capture Uncertainty
(03:43:12)
  • Key Takeaway: The current product shape of AI chat interfaces has not yet demonstrated a clear path to monetizing users at the high per-user revenue rates achieved by traditional Google Search.
  • Summary: Google monetizes its free search service at roughly $400 per U.S. user annually, a rate difficult to match with current AI subscription tiers or ad models. AI chat queries are longer and more complex, suggesting higher potential ad rates, but the monetization mechanism remains TBD, unlike AdWords which was instantly successful. High-value search verticals like travel and health are already being siphoned off to AI interfaces.
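A rough per-user comparison makes the gap concrete (the subscription price and paid-attach rate below are illustrative assumptions, not figures from the episode):

```python
# Episode's search figure vs. a hypothetical AI-subscription model.
search_rev_per_user = 400.0       # $/year per U.S. user, as cited above
sub_price_month = 20.0            # assumed $20/month consumer AI plan
paid_attach_rate = 0.05           # assume 5% of users ever pay (hypothetical)
sub_rev_per_user = sub_price_month * 12 * paid_attach_rate
print(f"${sub_rev_per_user:.0f} vs ${search_rev_per_user:.0f} per user-year")  # $12 vs $400
```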
Quintessence: Threading the Needle
(03:51:57)
  • Key Takeaway: Google is executing the most difficult balancing act in modern tech history: managing the innovator’s dilemma by aggressively pursuing AI while protecting the core $140B profit engine of Search.
  • Summary: Founders Larry and Sergey have stated they would rather go bankrupt than lose at AI, forcing leadership to make hard, unifying decisions like merging DeepMind and Brain. The company is praised for being rapid but not rash in its AI deployment, successfully navigating the dual mission of serving the core franchise and investing heavily in the future. This management of the transition is considered a fascinating, high-stakes example of the innovator’s dilemma.
Acknowledging Source Material
(04:03:20)
  • Key Takeaway: Steven Levy’s “In the Plex” was a primary source for all three Acquired episodes covering Google.
  • Summary: Steven Levy’s book, “In the Plex,” served as a major source for the Acquired series on Google. Parmy Olson’s “Supremacy” was a main source specifically for this episode concerning DeepMind and OpenAI. Cade Metz’s “Genius Makers” was also cited as a valuable resource.
Research Contributor Thank Yous
(04:04:00)
  • Key Takeaway: Nick Fox was the only person interviewed for all three of the Acquired Google episodes, achieving a ‘hat trick’ of participation.
  • Summary: Numerous individuals were thanked for their research contributions, including Max Ross, Greg Corrado, and Sundar Pichai. Nick Fox was specially recognized for speaking with the hosts across all three Google episodes. Arvind Navaratnam of Worldly Partners was thanked for an Alphabet write-up, and Jonathan Ross, the original TPU team member, was acknowledged.
Acknowledging AI/Tech Experts
(04:04:25)
  • Key Takeaway: Jonathan Ross, founder of inference-chip maker Groq, was an original TPU team member, and MG Siegler is highlighted as an OG TechCrunch writer.
  • Summary: Dmitri Dolgov and Suzanne Fileon from Waymo were thanked, alongside Gavin Baker of Atreides Management. MG Siegler, a writer at Spyglass, was praised as a favorite technology pundit. Ben Idelson was thanked for being a thought partner and for his highly successful recent episode on data centers on the Step Change podcast.
DeepMind and Gemini Contributors
(04:05:11)
  • Key Takeaway: Koray Kavukcuoglu from the DeepMind team was thanked for building the core Gemini models.
  • Summary: Koray Kavukcuoglu was thanked for work on the core Gemini models. Shishir Mehrotra, CEO of Grammarly and former YouTube product lead, was acknowledged. Jim Gao, CEO of Phaidra and former DeepMind team member, received thanks, as did Dwarkesh Patel and Brian Lawrence for economic insights.
Episode Promotion and Wrap Up
(04:05:36)
  • Key Takeaway: Listeners are directed to previous Acquired episodes on early Google history, Microsoft, and NVIDIA, as well as the ACQ2 episode featuring Shopify’s Tobi Lutke.
  • Summary: Listeners are encouraged to revisit the Acquired episode on the early history of Google and the 2010s, along with the series on Microsoft and NVIDIA. The hosts promoted the ACQ2 episode featuring Tobi Lutke and invited listeners to join their 10th-anniversary celebration via Zoom on October 20th at 4 PM Pacific Time.