Debug Information
Processing Details
- VTT File: Skeptoid_EnvironmentallyFriendlyAI_Ep994.vtt
- Processing Time: September 11, 2025 at 02:39 PM
- Total Chunks: 1
- Transcript Length: 21,638 characters
- Caption Count: 175 captions
Prompts Used
Prompt 1: Context Setup
You are an expert data extractor tasked with analyzing a podcast transcript.
I will provide you with part 1 of 1 from a podcast transcript.
I will then ask you to extract different types of information from this content in subsequent messages. Please confirm you have received and understood the transcript content.
Transcript section:
[00:00:03.440 --> 00:00:19.440] The word on the street is that all the new AI engines, artificial intelligence like ChatGPT or MidJourney, are taking up a disproportionate share of the world's computing resources, not unlike cryptocurrency mining was a few years ago.
[00:00:19.440 --> 00:00:27.200] Can our limited infrastructure keep up with the demands that AI places on our power grid?
[00:00:27.200 --> 00:00:33.760] Or will this lead to even more carbon emissions and more environmental impact?
[00:00:33.760 --> 00:00:37.760] That's coming up right now on Skeptoid.
[00:00:43.840 --> 00:00:48.640] Join us for an exclusive three-day exploration of historic Death Valley.
[00:00:48.640 --> 00:00:55.920] From October 21st to 24th, we'll take you from Las Vegas deep into the heart of this rugged, otherworldly landscape.
[00:00:55.920 --> 00:00:59.760] All transportation, lodging, and meals are included.
[00:00:59.760 --> 00:01:02.480] Your guides will be Skeptoid's Brian Dunning.
[00:01:02.480 --> 00:01:04.000] Hey, I know that guy, he's me.
[00:01:04.320 --> 00:01:07.600] And Death Valley expert, geologist Andrew Dunning.
[00:01:07.600 --> 00:01:16.400] Together, they'll lead you to world-famous sites like Badwater Basin and the Artist's Palette, plus hidden gems that you won't find in any guidebook.
[00:01:16.400 --> 00:01:20.960] This year's trip features all new destinations with minimal overlap from last year.
[00:01:20.960 --> 00:01:26.080] And here's a bonus: Skepticamp Las Vegas begins the same evening we return to Las Vegas.
[00:01:26.080 --> 00:01:31.360] Make it a two-for-one trip and stick around to hear me talk about my visit to Area 51.
[00:01:31.360 --> 00:01:34.800] Details at skeptoid.com/events.
[00:01:34.800 --> 00:01:36.640] Spots are very limited.
[00:01:36.640 --> 00:01:38.080] Secure yours today.
[00:01:38.080 --> 00:01:46.160] Email help at skeptoid.com with questions and join the conversation with fellow adventurers at skeptoid.com/discord.
[00:01:46.160 --> 00:01:48.160] Death Valley is calling.
[00:01:48.160 --> 00:01:50.960] Are you ready to answer?
[00:01:54.800 --> 00:01:56.480] You're listening to Skeptoid.
[00:01:56.480 --> 00:02:00.040] I'm Brian Dunning from Skeptoid.com.
[00:01:59.440 --> 00:02:03.640] Making AI Environmentally Friendly.
[00:02:04.920 --> 00:02:17.000] Welcome to the show that separates fact from fiction, science from pseudoscience, real history from fake history, and helps us all make better life decisions by knowing what's real and what's not.
[00:02:17.000 --> 00:02:22.680] In 2017, there were various financial upheavals all around the world.
[00:02:22.680 --> 00:02:27.800] Japan became the first country to recognize Bitcoin as legal tender.
[00:02:27.800 --> 00:02:30.120] Australia followed soon after.
[00:02:30.120 --> 00:02:45.240] The natural result was that Bitcoin's value jumped by 20 times that year, and more entrepreneurs than ever began Bitcoin mining operations in earnest, often with millions of dollars of investor backing.
[00:02:45.240 --> 00:02:46.760] Why did they need it?
[00:02:46.760 --> 00:02:47.800] Two reasons.
[00:02:47.800 --> 00:02:54.440] First, you have to buy a whole bunch of tiny computers that are optimized for these particular types of calculations.
[00:02:54.440 --> 00:02:59.720] And second, you'll quickly learn that your electric bills are going through the roof.
[00:02:59.720 --> 00:03:07.880] These tiny computers, often referred to as mining rigs, suck up an incredible amount of electricity to power their processors.
[00:03:07.880 --> 00:03:19.000] This, in turn, causes those processors to emit a surprising amount of heat, which sends your air conditioning bill into the stratosphere to keep the computers from melting down.
[00:03:19.000 --> 00:03:26.760] And while that was the problem a decade or so ago, its analog has reared the same ugly head again.
[00:03:27.080 --> 00:03:31.480] Not in cryptocurrency, but this time in AI.
[00:03:32.760 --> 00:03:36.440] Just listen to these recent nightmarish reports.
[00:03:36.440 --> 00:03:50.480] A 2024 study by KnownHost found that ChatGPT alone produces 261 tons of CO2 per month, with every page view producing 1.59 grams.
[00:03:50.800 --> 00:03:57.680] But some, such as Writer, produce as much as 10.1 grams of CO2 per page view.
[00:03:58.320 --> 00:04:13.360] Using data from Lawrence Berkeley National Lab, a 2025 article in the MIT Technology Review found that at current trends, AI in the United States alone will consume as much power as 22% of all U.S.
[00:04:13.360 --> 00:04:16.000] homes by 2028.
[00:04:16.320 --> 00:04:27.360] In its 2024 sustainability report, Google revealed that in the past five years, AI has caused their greenhouse gas emissions to increase by 48%.
[00:04:27.360 --> 00:04:32.320] Moreover, they expect their energy use to triple by 2030.
[00:04:32.640 --> 00:04:48.240] At the 2024 World Economic Forum, OpenAI CEO Sam Altman told Bloomberg that the future power needs of AI are currently unreachable and that, quote, there's no way to get there without a breakthrough.
[00:04:50.480 --> 00:04:57.600] All of this negativity surrounding AI's environmental impact is, to put it bluntly, basically true.
[00:04:57.600 --> 00:05:05.600] But like with all burgeoning technologies, it's necessary to go through the steps of doing things badly on the way to learning to do them well.
[00:05:05.600 --> 00:05:11.040] The first airplanes were terrible, but we pushed through in order to learn to make better ones.
[00:05:11.360 --> 00:05:19.040] 150 years ago, medical care was likely to do more harm than good, but that's how we learned to do it so well today.
[00:05:20.000 --> 00:05:33.240] In 2022, Ethereum, the second largest cryptocurrency after Bitcoin, made a fundamental change to the way it works in order to address these environmental concerns and massive costs.
[00:05:29.680 --> 00:05:38.680] This was switching from proof-of-work validation to proof-of-stake validation.
[00:05:38.680 --> 00:05:46.200] And while you don't need to understand what that means, it meant a reduction in power consumption of over 99%.
[00:05:46.520 --> 00:05:52.280] Might there be a fundamental shift like this in the future of AI?
[00:05:53.560 --> 00:05:57.080] In January of 2025, we thought there might be.
[00:05:57.080 --> 00:06:03.880] You may have heard about DeepSeek, a Chinese AI that claimed to be 90% more efficient than other models.
[00:06:04.200 --> 00:06:09.480] A 90% reduction in power consumption would indeed be a game changer.
[00:06:09.480 --> 00:06:12.840] But it turned out this claim was not what everyone hoped.
[00:06:12.840 --> 00:06:18.840] The efficiency gains were only realized during DeepSeek's initial training period.
[00:06:18.840 --> 00:06:30.920] The hardware used for this was indeed highly optimized, but it was only trained for 10% as many GPU hours as those it was being compared against, including Meta's Llama AI.
[00:06:31.240 --> 00:06:35.720] So, obviously, it would have only consumed 10% as much power.
[00:06:35.720 --> 00:06:48.600] But now that it's trained and it's up and running, analysts have found that DeepSeek is actually less efficient at running each query given to it, consuming up to twice as much power as the same query on Llama.
[00:06:48.920 --> 00:06:55.480] So, once again, be skeptical of amazing science news coming from China.
[00:06:56.440 --> 00:07:00.760] Let's take a quick look at how AI is sucking up all this power.
[00:07:00.760 --> 00:07:04.400] There are various kinds of AIs for different applications.
[00:07:04.400 --> 00:07:12.920] Neural networks, natural language processing, generative AI, machine learning, but they all run on basically the same hardware.
[00:07:12.920 --> 00:07:23.680] A conventional data center that runs websites and cloud computing consists of thousands of computers, each with a CPU, memory, and storage, but little else.
[00:07:23.680 --> 00:07:34.240] But like cryptocurrency mining, AI computers are constantly doing complex calculations, and so they are much more reliant on GPUs.
[00:07:35.200 --> 00:07:45.520] Most of us know GPUs, graphics processing units, as that thing our computer needs to drive an extra monitor or to play video games with amazing real-time graphics.
[00:07:45.520 --> 00:07:59.280] While a CPU may have a few powerful cores, a GPU has thousands of tiny cores, allowing it to perform thousands of simpler, repetitive tasks simultaneously, in parallel.
[00:07:59.280 --> 00:08:04.480] This is ideal for the matrix operations that are central to AI processing.
[00:08:04.480 --> 00:08:17.440] So today, these simple yet highly specialized machines, loaded up with powerful GPUs, are what carry the lion's share of AI, and what eat up all that power.
[00:08:18.400 --> 00:08:26.640] However, growing needs drive innovation, and innovation in the land of AI hardware means ever greater efficiency.
[00:08:26.640 --> 00:08:29.680] Higher efficiency serves two needs at the same time.
[00:08:29.680 --> 00:08:37.360] It improves the speed and power of the AIs, and doing more with less also means reduced power consumption.
[00:08:37.360 --> 00:08:44.560] In 2015, Google developed a new kind of chip called a TPU, a tensor processing unit.
[00:08:44.560 --> 00:08:51.520] These are optimized for processing multi-dimensional arrays, a central computing function of AI.
[00:08:51.840 --> 00:08:56.640] They're designed explicitly for this and can't really do anything else.
[00:08:56.640 --> 00:09:01.880] Thus, they work faster and consume less energy than GPUs.
[00:09:02.840 --> 00:09:09.000] A TPU is one kind of ASIC, application-specific integrated circuit.
[00:09:09.000 --> 00:09:14.440] These are chips that are hardwired to perform a single specific function or algorithm.
[00:09:14.440 --> 00:09:24.120] Compared to running that same task in software, an ASIC does it far faster and requires much lower computing and power resources to do so.
[00:09:24.120 --> 00:09:30.440] TPUs are not the only ASICs that have been developed for AI, but you get the idea.
[00:09:30.440 --> 00:09:40.520] The more AI algorithms mature, the more the most resource-intensive part of the AI infrastructure can be made vastly more efficient.
[00:09:44.280 --> 00:09:47.320] Fall is here and Skeptoid has you covered.
[00:09:47.320 --> 00:09:55.960] Literally, our back to school sale is happening all September long with 20% off everything in the Skeptoid store.
[00:09:55.960 --> 00:10:06.200] Grab a cozy hoodie for those chilly mornings, sip your favorite roast from a Skeptoid coffee mug, or sport one of our shirts that proudly promotes critical thinking.
[00:10:06.200 --> 00:10:12.760] Just use the code Skeptoid20 at checkout and save 20% on your entire order.
[00:10:12.760 --> 00:10:15.880] Don't wait, this sale ends September 30th.
[00:10:15.880 --> 00:10:21.560] Head to skeptoid.com/store and get your gear today.
[00:10:25.400 --> 00:10:35.160] Another interesting way that AI can be made more resource-efficient is by methodological improvements like model pruning and quantization.
[00:10:35.160 --> 00:10:39.960] This is something like using heuristics or shortcuts in the way we think about things.
[00:10:39.960 --> 00:10:50.640] If you ask an AI whether you should bring an umbrella this afternoon, there are a million answers it could give you that you don't care about, all of which would require more computing time.
[00:10:50.640 --> 00:10:54.320] How many raindrops per square meter per second are falling?
[00:10:54.320 --> 00:10:58.160] What's the barometric pressure likely to do in the next three hours?
[00:10:58.160 --> 00:11:02.560] No, you only care whether it's raining a lot or hardly at all.
[00:11:02.560 --> 00:11:12.960] Simplifying the math where it makes sense, fewer significant digits, ignoring all but the most important inputs, can cut the size of the job tremendously.
[00:11:12.960 --> 00:11:18.960] Faster results, less power consumed, more useful in every way.
[00:11:20.240 --> 00:11:30.400] A question we might ask at this point is: which one is likely to accelerate faster, improvements in efficiency or demand for more AI?
[00:11:30.720 --> 00:11:35.360] We do have a solid answer for this, and it's not the one we'd like to hear.
[00:11:35.360 --> 00:11:39.280] Remember, those numbers at the top of the show were basically correct.
[00:11:39.280 --> 00:11:48.000] As of now, 2025, the best projections show that global data center power consumption will probably double by 2030.
[00:11:48.000 --> 00:11:50.960] And that's in spite of all the gains in efficiency.
[00:11:50.960 --> 00:11:55.840] But it's also due, in part, to those same gains in efficiency.
[00:11:55.840 --> 00:12:00.000] As we reduce their energy consumption, we're able to do more with them.
[00:12:00.000 --> 00:12:02.320] They become even more useful.
[00:12:02.320 --> 00:12:05.200] And that drives their demand even faster.
[00:12:05.200 --> 00:12:08.560] It's a type of feedback loop we call the snowball effect.
[00:12:08.560 --> 00:12:15.360] The faster it rolls, the more snow it picks up and the heavier it gets, making it roll even faster, and so on.
[00:12:15.680 --> 00:12:22.640] This is called the Jevons paradox after the 19th-century British economist William Stanley Jevons.
[00:12:22.640 --> 00:12:27.200] Increased efficiency leads to increased consumption.
[00:12:28.160 --> 00:12:31.000] But we live in a capitalistic world.
[00:12:31.000 --> 00:12:34.040] Supply and demand are forever intertwined.
[00:12:34.040 --> 00:12:38.120] When demand becomes too great, we have to do one of two things.
[00:12:38.120 --> 00:12:42.280] We increase the supply or we reduce the demand.
[00:12:42.280 --> 00:12:49.960] In this case, we're not physically able to increase the supply, so we do the other thing: reduce the demand.
[00:12:49.960 --> 00:12:52.760] And we do that by raising the price.
[00:12:52.760 --> 00:13:01.160] Expect AI to get more expensive, potentially a lot more, as much as it takes to avoid melting the grid.
[00:13:02.440 --> 00:13:13.800] But wait, you say, a lot of AI is open source, meaning the algorithms and software, or at least analogs comparable to the commercial versions, are freely available to all.
[00:13:13.800 --> 00:13:15.000] That's nice.
[00:13:15.000 --> 00:13:18.120] The hardware and electricity are not.
[00:13:18.120 --> 00:13:21.880] This is a system where water is going to find its own level.
[00:13:22.840 --> 00:13:30.440] One thing nearly all the industry experts are projecting is that data centers are going to turn increasingly to renewable energy.
[00:13:30.440 --> 00:13:34.520] It's the one variable in this equation that's a one-time expenditure.
[00:13:34.520 --> 00:13:41.000] Invest once, now, and the energy needs will be covered for the industry's next phase of growth.
[00:13:42.200 --> 00:13:47.960] And that brings us to a whole other side to this issue that many people don't take into consideration.
[00:13:47.960 --> 00:13:57.160] Powering the AI engines may indeed have a high environmental impact, but some of what the AIs are doing is protecting the environment.
[00:13:57.160 --> 00:14:09.320] An AI can be trained to do just about anything, and whether your application is reducing carbon emissions or protecting old-growth forests, somebody probably has an AI at work on that problem.
[00:14:09.640 --> 00:14:15.000] Is the benefit each program produces worth the cost of generating the power to run it?
[00:14:15.600 --> 00:14:19.680] Well, maybe in some it is, maybe in some it isn't.
[00:14:19.680 --> 00:14:22.480] Let's take a look at a few examples.
[00:14:23.440 --> 00:14:27.200] Here's one that more than directly pays for itself.
[00:14:27.200 --> 00:14:35.440] One way we capture carbon out of smokestacks is to react the flue gas with a limestone slurry, which can absorb the carbon.
[00:14:35.440 --> 00:14:38.240] Pumping that slurry takes a lot of power.
[00:14:38.240 --> 00:14:54.880] The University of Surrey developed an AI which samples the CO2 in the flue in real time, looks at current renewable energy availability and grid energy prices, and dynamically adjusts both the slurry pump rate and the slurry pH.
[00:14:54.880 --> 00:15:04.560] In field trials in India, the system saved 22% in power costs while capturing 17% more CO2.
[00:15:05.840 --> 00:15:16.880] Another way this is done is with the use of MOFs, metal organic frameworks, which are materials that selectively adsorb CO2 directly out of the gases.
[00:15:16.880 --> 00:15:29.760] A team from Argonne National Laboratory, the University of Illinois, and the University of Chicago set up an AI to invent new MOF compounds with a high predicted carbon selectivity.
[00:15:29.760 --> 00:15:39.120] In only 30 minutes, it came up with 120,000 of them, which were then fed to a supercomputer to run molecular dynamics simulations on them.
[00:15:39.120 --> 00:15:44.240] Six were as good as the top industrial adsorbents on the market.
[00:15:44.240 --> 00:15:50.560] That was just in the first 30 minutes, although the supercomputer simulations took considerably longer.
[00:15:51.840 --> 00:16:06.840] A California nonprofit called the Rainforest Connection has developed a novel system consisting of a sensitive microphone, transmitter, mini-computer, and solar panels that is mounted high in the treetops in places like the Brazilian rainforest.
[00:16:06.840 --> 00:16:16.760] The computer uses onboard AI to analyze the sounds being recorded, listening for the telltales of illegal logging operations and also poaching.
[00:16:16.760 --> 00:16:26.280] This allows law enforcement to catch the operators red-handed, whereas before they'd had to rely on random patrols in hopelessly vast areas.
[00:16:27.560 --> 00:16:37.240] Now, of course, these are only three of many, many such initiatives that turn to AI to increase efficiency and cleanliness of processes throughout the world.
[00:16:37.560 --> 00:16:42.760] But so far, they are not nearly enough to offset the costs of running the AIs.
[00:16:42.760 --> 00:16:47.240] Renewable energy helps, but it too is in a losing battle.
[00:16:47.240 --> 00:16:57.320] And so far, nobody sees anything on the horizon comparable to what Ethereum did in 2022 to reduce its consumption by 99%.
[00:16:58.280 --> 00:17:06.600] The bottom line is that gains against the environmental impact of AI are likely to be evolutionary, not revolutionary.
[00:17:06.600 --> 00:17:18.120] In the meantime, we can probably expect economic levers to be about the only effective tool we have, and that means jacking up the cost more and more to reduce the demand.
[00:17:18.760 --> 00:17:25.560] Hopefully, in a few years, I'll have the pleasure of updating this episode with a major new development.
[00:17:26.520 --> 00:17:33.480] We continue with more on how AI is improving weather forecasting in the ad-free and extended premium feed.
[00:17:33.480 --> 00:17:39.560] To access it, become a supporter at skeptoid.com/gopremium.
[00:17:44.040 --> 00:17:53.920] A great big skeptoid shout out to our premium supporters, including James, Ever Hopeful, Brad Fonseca, Marla, and Chris Ling.
[00:17:54.240 --> 00:17:56.960] Did you know you can have Skeptoid come to you?
[00:17:56.960 --> 00:18:02.160] I love doing live shows, whether at meetup clubs, university groups, or conferences.
[00:18:02.160 --> 00:18:09.520] I can show one of our movies like Science Friction, do a live podcast, or just give one of my popular presentations.
[00:18:09.520 --> 00:18:15.280] For more information, come to skeptoid.com and click on Live Shows.
[00:18:15.600 --> 00:18:19.520] Follow us on your favorite social media for even more great content.
[00:18:19.520 --> 00:18:26.240] You'll find both Skeptoid and me, Brian Dunning, on all your favorite social media platforms.
[00:18:26.560 --> 00:18:29.520] Skeptoid is a production of Skeptoid Media.
[00:18:29.520 --> 00:18:33.760] Director of Operations and Tinfoil Hat Counter is Kathy Reitmeyer.
[00:18:33.760 --> 00:18:37.360] Marketing guru and Illuminati liaison is Jake Young.
[00:18:37.360 --> 00:18:41.360] Production Management and All Things Audio by Will McCandless.
[00:18:41.360 --> 00:18:43.520] Music is by Lee Sanders.
[00:18:43.520 --> 00:18:46.880] Researched and written by me, Brian Dunning.
[00:18:46.880 --> 00:18:54.080] Listen to Skeptoid for free on Apple Podcasts, Spotify, Amazon Music, or iHeart.
[00:18:56.000 --> 00:18:59.680] You're listening to Skeptoid, a listener-supported program.
[00:18:59.680 --> 00:19:03.520] I'm Brian Dunning from Skeptoid.com.
[00:19:05.040 --> 00:19:11.040] From the office to a first date, there's a lot of pressure to wear the right clothes for every situation.
[00:19:11.040 --> 00:19:12.880] But clothes don't make the man.
[00:19:12.880 --> 00:19:20.080] With Mack Weldon, you can build a wardrobe that looks polished in any setting and tells the world that you stay you wherever you go.
[00:19:20.080 --> 00:19:28.320] They combine timeless style with modern performance materials to keep you looking good and feeling comfortable all day, no matter what's on today's agenda.
[00:19:28.320 --> 00:19:32.120] Browse everything from shorts and polos to sweats and crews.
[00:19:32.120 --> 00:19:40.600] Go to mackweldon.com and get 25% off your first order of $125 or more with promo code MAC25.
[00:19:40.600 --> 00:19:47.320] That's M-A-C-K-W-E-L-D-O-N.com, promo code MAC25.
[00:19:48.920 --> 00:19:50.600] From P-R-X.
Prompt 2: Key Takeaways
Now please extract the key takeaways from the transcript content I provided.
Extract the most important key takeaways from this part of the conversation. Use a single sentence statement (the key takeaway) rather than milquetoast descriptions like "the hosts discuss...".
Limit the key takeaways to a maximum of 3. The key takeaways should be insightful and knowledge-additive.
IMPORTANT: Return ONLY valid JSON, no explanations or markdown. Ensure:
- All strings are properly quoted and escaped
- No trailing commas
- All braces and brackets are balanced
Format: {"key_takeaways": ["takeaway 1", "takeaway 2"]}
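The response contract above (valid JSON, a single "key_takeaways" key, at most three string items) can be checked with a short sketch. This is a hypothetical helper for illustration, not part of the actual processing pipeline; the function name and limit parameter are assumptions:

```python
import json

def validate_key_takeaways(raw: str, max_items: int = 3) -> list[str]:
    """Parse a model reply and enforce the prompt's contract:
    valid JSON, a "key_takeaways" list of at most max_items strings."""
    data = json.loads(raw)  # raises on invalid JSON, e.g. trailing commas
    takeaways = data["key_takeaways"]
    if not isinstance(takeaways, list) or len(takeaways) > max_items:
        raise ValueError(f"expected a list of at most {max_items} takeaways")
    if not all(isinstance(t, str) for t in takeaways):
        raise ValueError("every takeaway must be a string")
    return takeaways

reply = '{"key_takeaways": ["Efficiency gains can raise total AI energy demand."]}'
print(validate_key_takeaways(reply))
```

Since `json.JSONDecodeError` subclasses `ValueError`, a caller can catch one exception type for both malformed JSON and contract violations.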
Prompt 3: Segments
Now identify 2-4 distinct topical segments from this part of the conversation.
For each segment, identify:
- Descriptive title (3-6 words)
- START timestamp when this topic begins (HH:MM:SS format)
- Double check that the timestamp is accurate - a timestamp will NEVER be greater than the total length of the audio
- Most important Key takeaway from that segment. Key takeaway must be specific and knowledge-additive.
- Brief summary of the discussion
IMPORTANT: The timestamp should mark when the topic/segment STARTS, not a range. Look for topic transitions and conversation shifts.
Return ONLY valid JSON. Ensure all strings are properly quoted, no trailing commas:
{
"segments": [
{
"segment_title": "Topic Discussion",
"timestamp": "01:15:30",
"key_takeaway": "main point from this segment",
"segment_summary": "brief description of what was discussed"
}
]
}
Timestamp format: HH:MM:SS (e.g., 00:05:30, 01:22:45) marking the START of each segment.
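The two timestamp rules above (strict HH:MM:SS format, and never greater than the total audio length) could be enforced with a check like this. The regex and the audio-length bound are assumptions about how such validation might be implemented, not part of the prompt itself:

```python
import re

# HH:MM:SS with minutes and seconds constrained to 00-59
TIMESTAMP_RE = re.compile(r"^(\d{2}):([0-5]\d):([0-5]\d)$")

def timestamp_ok(ts: str, audio_seconds: int) -> bool:
    """Return True if ts is a well-formed HH:MM:SS timestamp
    that does not exceed the audio's total length."""
    m = TIMESTAMP_RE.match(ts)
    if not m:
        return False
    h, mi, s = (int(g) for g in m.groups())
    return h * 3600 + mi * 60 + s <= audio_seconds

# This episode's last caption ends near 00:19:04 (~1144 seconds).
print(timestamp_ok("00:05:30", 1144))  # True
print(timestamp_ok("01:15:30", 1144))  # False: past the end of the audio
```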
Prompt 4: Media Mentions
Now scan the transcript content I provided for ACTUAL mentions of specific media titles:
Find explicit mentions of:
- Books (with specific titles)
- Movies (with specific titles)
- TV Shows (with specific titles)
- Music/Songs (with specific titles)
DO NOT include:
- Websites, URLs, or web services
- Other podcasts or podcast names
IMPORTANT:
- Only include items explicitly mentioned by name. Do not invent titles.
- Valid categories are: "Book", "Movie", "TV Show", "Music"
- Include the exact phrase where each item was mentioned
- Find the nearest proximate timestamp where it appears in the conversation
- THE TIMESTAMP OF THE MEDIA MENTION IS IMPORTANT - DO NOT INVENT TIMESTAMPS AND DO NOT MISATTRIBUTE TIMESTAMPS
- Double check that the timestamp is accurate - a timestamp will NEVER be greater than the total length of the audio
- Timestamps are given as ranges, e.g. 01:13:42.520 --> 01:13:46.720. Use the EARLIER of the 2 timestamps in the range.
Return ONLY valid JSON. Ensure all strings are properly quoted and escaped, no trailing commas:
{
"media_mentions": [
{
"title": "Exact Title as Mentioned",
"category": "Book",
"author_artist": "N/A",
"context": "Brief context of why it was mentioned",
"context_phrase": "The exact sentence or phrase where it was mentioned",
"timestamp": "estimated time like 01:15:30"
}
]
}
If no media is mentioned, return: {"media_mentions": []}
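Two of the rules above are mechanical enough to sketch in code: taking the earlier side of a VTT cue range like `01:13:42.520 --> 01:13:46.720`, and restricting categories to the four valid values. These are hypothetical helpers for illustration; the function names are not from the actual pipeline:

```python
VALID_CATEGORIES = {"Book", "Movie", "TV Show", "Music"}

def start_of_range(cue: str) -> str:
    """Given a VTT cue range like '01:13:42.520 --> 01:13:46.720',
    return the earlier timestamp truncated to HH:MM:SS."""
    earlier = cue.split("-->")[0].strip()
    return earlier.split(".")[0]  # drop the millisecond suffix

def check_category(mention: dict) -> None:
    """Reject any media mention whose category is outside the whitelist."""
    if mention["category"] not in VALID_CATEGORIES:
        raise ValueError(f"invalid category: {mention['category']!r}")

print(start_of_range("01:13:42.520 --> 01:13:46.720"))  # 01:13:42
```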
Full Transcript
[00:00:03.440 --> 00:00:19.440] The word on the street is that all the new AI engines, artificial intelligence like ChatGPT or MidJourney, are taking up a disproportionate share of the world's computing resources, not unlike cryptocurrency mining was a few years ago.
[00:00:19.440 --> 00:00:27.200] Can our limited infrastructure keep up with the demands required by AI place on our power grid?
[00:00:27.200 --> 00:00:33.760] Or will this lead to even more carbon emissions and more environmental impact?
[00:00:33.760 --> 00:00:37.760] That's coming up right now on Skeptoid.
[00:00:43.840 --> 00:00:48.640] Join us for an exclusive three-day exploration of historic Death Valley.
[00:00:48.640 --> 00:00:55.920] From October 21st to 24th, we'll take you from Las Vegas deep into the heart of this rugged, otherworldly landscape.
[00:00:55.920 --> 00:00:59.760] All transportation, lodging, and meals are included.
[00:00:59.760 --> 00:01:02.480] Your guides will be Skeptoid's Brian Dunning.
[00:01:02.480 --> 00:01:04.000] Hey, I know that guy, he's me.
[00:01:04.320 --> 00:01:07.600] And Death Valley expert, geologist Andrew Dunning.
[00:01:07.600 --> 00:01:16.400] Together, they'll lead you to world-famous sites like Badwater Basin and the Artist's Palette, plus hidden gems that you won't find in any guidebook.
[00:01:16.400 --> 00:01:20.960] This year's trip features all new destinations with minimal overlap from last year.
[00:01:20.960 --> 00:01:26.080] And here's a bonus: Skepticamp Las Vegas begins the same evening we return to Las Vegas.
[00:01:26.080 --> 00:01:31.360] Make it a two-for-one trip and stick around to hear me talk about my visit to Area 51.
[00:01:31.360 --> 00:01:34.800] Details at skeptoid.com/slash events.
[00:01:34.800 --> 00:01:36.640] Spots are very limited.
[00:01:36.640 --> 00:01:38.080] Secure yours today.
[00:01:38.080 --> 00:01:46.160] Email help at skeptoid.com with questions and join the conversation with fellow adventurers at skeptoid.com/slash discord.
[00:01:46.160 --> 00:01:48.160] Death Valley is calling.
[00:01:48.160 --> 00:01:50.960] Are you ready to answer?
[00:01:54.800 --> 00:01:56.480] You're listening to Skeptoid.
[00:01:56.480 --> 00:02:00.040] I'm Brian Dunning from Skeptoid.com.
[00:01:59.440 --> 00:02:03.640] Making AI Environmentally Friendly.
[00:02:04.920 --> 00:02:17.000] Welcome to the show that separates fact from fiction, science from pseudoscience, real history from fake history, and helps us all make better life decisions by knowing what's real and what's not.
[00:02:17.000 --> 00:02:22.680] In 2017, there were various financial upheavals all around the world.
[00:02:22.680 --> 00:02:27.800] Japan became the first country to recognize Bitcoin as legal tender.
[00:02:27.800 --> 00:02:30.120] Australia followed soon after.
[00:02:30.120 --> 00:02:45.240] The natural result was that Bitcoin's value jumped by 20 times that year, and more entrepreneurs than ever began Bitcoin mining operations in earnest, often with millions of dollars of investor backing.
[00:02:45.240 --> 00:02:46.760] Why did they need it?
[00:02:46.760 --> 00:02:47.800] Two reasons.
[00:02:47.800 --> 00:02:54.440] First, you have to buy a whole bunch of tiny computers that are optimized for these particular types of calculations.
[00:02:54.440 --> 00:02:59.720] And second, you'll quickly learn that your electric bills are going through the roof.
[00:02:59.720 --> 00:03:07.880] These tiny computers, often referred to as mining rigs, suck up an incredible amount of electricity to power their processors.
[00:03:07.880 --> 00:03:19.000] This, in turn, causes those processors to emit a surprising amount of heat, which sends your air conditioning bill into the stratosphere to keep the computers from melting down.
[00:03:19.000 --> 00:03:26.760] And while that was the problem a decade or so ago, its analog has reared the same ugly head again.
[00:03:27.080 --> 00:03:31.480] Not in cryptocurrency, but this time in AI.
[00:03:32.760 --> 00:03:36.440] Just listen to these recent nightmarish reports.
[00:03:36.440 --> 00:03:50.480] A 2024 study by Known Host found that CHAPGPT alone produces 261 tons of CO2 per month, with every page view producing 1.59 grams.
[00:03:50.800 --> 00:03:57.680] But some, such as Writer, produce as much as 10.1 grams of CO2 per page view.
[00:03:58.320 --> 00:04:13.360] Using data from Lawrence Berkeley National Lab, a 2025 article in the MIT Technology Review found that at current trends, AI in the United States alone will consume as much power as 22% of all U.S.
[00:04:13.360 --> 00:04:16.000] homes by 2028.
[00:04:16.320 --> 00:04:27.360] In its 2024 sustainability report, Google revealed that in the past five years, AI has caused their greenhouse gas emissions to increase by 48%.
[00:04:27.360 --> 00:04:32.320] Moreover, they expect their energy use to triple by 2030.
[00:04:32.640 --> 00:04:48.240] At the 2024 World Economic Forum, OpenAI CEO Sam Altman told Bloomberg that the future power needs of AI are currently unreachable and that, quote, there's no way to get there without a breakthrough.
[00:04:50.480 --> 00:04:57.600] All of this negativity surrounding AI's environmental impact is, to put it bluntly, basically true.
[00:04:57.600 --> 00:05:05.600] But like with all burgeoning technologies, it's necessary to go through the steps of doing things badly on the way to learning to do them well.
[00:05:05.600 --> 00:05:11.040] The first airplanes were terrible, but we pushed through in order to learn to make better ones.
[00:05:11.360 --> 00:05:19.040] 150 years ago, medical care was likely to do more harm than good, but that's how we learned to do it so well today.
[00:05:20.000 --> 00:05:33.240] In 2022, Ethereum, the second largest cryptocurrency after Bitcoin, made a fundamental change to the way it works in order to address these environmental concerns and massive costs.
[00:05:29.680 --> 00:05:38.680] This was switching from proof-of-work validation to proof-of-stake validation.
[00:05:38.680 --> 00:05:46.200] And while you don't need to understand what that means, it meant a reduction in power consumption of over 99%.
[00:05:46.520 --> 00:05:52.280] Might there be a fundamental shift like this in the future of AI?
[00:05:53.560 --> 00:05:57.080] In January of 2025, we thought there might be.
[00:05:57.080 --> 00:06:03.880] You may have heard about DeepSeek, a Chinese AI that claimed to be 90% more efficient than other models.
[00:06:04.200 --> 00:06:09.480] A 90% reduction in power consumption would indeed be a game changer.
[00:06:09.480 --> 00:06:12.840] But it turned out this claim was not what everyone hoped.
[00:06:12.840 --> 00:06:18.840] The efficiency gains were only realized during DeepSeek's initial training period.
[00:06:18.840 --> 00:06:30.920] The hardware used for this was indeed highly optimized, but it was only trained for 10% as many GPU hours as those it was being compared against, including Meta's Llama AI.
[00:06:31.240 --> 00:06:35.720] So, obviously, it would have only consumed 10% as much power.
[00:06:35.720 --> 00:06:48.600] But now that it's trained and it's up and running, analysts have found that DeepSeek is actually less efficient at running each query given to it, consuming up to twice as much power as the same query on Llama.
[00:06:48.920 --> 00:06:55.480] So, once again, be skeptical of amazing science news coming from China.
[00:06:56.440 --> 00:07:00.760] Let's take a quick look at how AI is sucking up all this power.
[00:07:00.760 --> 00:07:04.400] There are various kinds of AIs for different applications.
[00:07:04.400 --> 00:07:12.920] Neural networks, natural language processing, generative AI, machine learning, but they all run on basically the same hardware.
[00:07:12.920 --> 00:07:23.680] A conventional data center that runs websites and cloud computing consists of thousands of computers, each with a CPU, memory, and storage, but little else.
[00:07:23.680 --> 00:07:34.240] But like cryptocurrency mining, AI computers are constantly doing complex calculations, and so they are much more reliant on GPUs.
[00:07:35.200 --> 00:07:45.520] Most of us know GPUs, graphics processing units, as that thing our computer needs to drive an extra monitor or to play video games with amazing real-time graphics.
[00:07:45.520 --> 00:07:59.280] While a CPU may have a few powerful cores, a GPU has thousands of tiny cores, allowing it to perform thousands of simpler, repetitive tasks simultaneously, in parallel.
[00:07:59.280 --> 00:08:04.480] This is ideal for the matrix operations that are central to AI processing.
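[Editor's aside: a small sketch of why matrix math parallelizes so well. Each cell of a matrix product is an independent dot product, so a GPU can hand one cell to each of its thousands of tiny cores; the serial loop below does one-by-one what those cores do all at once. This is an illustration of the principle, not actual GPU code.]

```python
import numpy as np

def matmul_cellwise(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Compute C = A @ B one cell at a time: C[i, j] = A[i, :] . B[:, j].
    Every (i, j) cell is independent of every other, which is exactly
    what lets a GPU compute thousands of them simultaneously."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.empty((m, n))
    for i in range(m):          # on a GPU, each (i, j) pair gets its own thread
        for j in range(n):
            C[i, j] = A[i, :] @ B[:, j]
    return C

A = np.arange(6, dtype=float).reshape(2, 3)
B = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(matmul_cellwise(A, B), A @ B)
```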
[00:08:04.480 --> 00:08:17.440] So today, these simple yet highly specialized machines, loaded up with powerful GPUs, are what carry the lion's share of AI, and what eat up all that power.
[00:08:18.400 --> 00:08:26.640] However, growing needs drive innovation, and innovation in the land of AI hardware means ever greater efficiency.
[00:08:26.640 --> 00:08:29.680] Higher efficiency serves two needs at the same time.
[00:08:29.680 --> 00:08:37.360] It improves the speed and power of the AIs, and doing more with less also means reduced power consumption.
[00:08:37.360 --> 00:08:44.560] In 2015, Google developed a new kind of chip called a TPU, a tensor processing unit.
[00:08:44.560 --> 00:08:51.520] These are optimized for processing multi-dimensional arrays, a central computing function of AI.
[00:08:51.840 --> 00:08:56.640] They're designed explicitly for this and can't really do anything else.
[00:08:56.640 --> 00:09:01.880] Thus, they work faster and consume less energy than GPUs.
[00:09:02.840 --> 00:09:09.000] A TPU is one kind of ASIC, application-specific integrated circuit.
[00:09:09.000 --> 00:09:14.440] These are chips that are hardwired to perform a single specific function or algorithm.
[00:09:14.440 --> 00:09:24.120] Compared to running that same task in software, an ASIC does it far faster and requires much lower computing and power resources to do so.
[00:09:24.120 --> 00:09:30.440] TPUs are not the only ASICs that have been developed for AI, but you get the idea.
[00:09:30.440 --> 00:09:40.520] The more AI algorithms mature, the more the most resource-intensive part of the AI infrastructure can be made vastly more efficient.
[00:09:44.280 --> 00:09:47.320] Fall is here and Skeptoid has you covered.
[00:09:47.320 --> 00:09:55.960] Literally, our back to school sale is happening all September long with 20% off everything in the Skeptoid store.
[00:09:55.960 --> 00:10:06.200] Grab a cozy hoodie for those chilly mornings, sip your favorite roast from a Skeptoid coffee mug, or sport one of our shirts that proudly promotes critical thinking.
[00:10:06.200 --> 00:10:12.760] Just use the code Skeptoid20 at checkout and save 20% on your entire order.
[00:10:12.760 --> 00:10:15.880] Don't wait, this sale ends September 30th.
[00:10:15.880 --> 00:10:21.560] Head to skeptoid.com/store and get your gear today.
[00:10:25.400 --> 00:10:35.160] Another interesting way that AI can be made more resource-efficient is by methodological improvements like model pruning and quantization.
[00:10:35.160 --> 00:10:39.960] This is something like using heuristics or shortcuts in the way we think about things.
[00:10:39.960 --> 00:10:50.640] If you ask an AI whether you should bring an umbrella this afternoon, there are a million answers it could give you that you don't care about, all of which would require more computing time.
[00:10:50.640 --> 00:10:54.320] How many raindrops per square meter per second are falling?
[00:10:54.320 --> 00:10:58.160] What's the barometric pressure likely to do in the next three hours?
[00:10:58.160 --> 00:11:02.560] No, you only care whether it's raining a lot or hardly at all.
[00:11:02.560 --> 00:11:12.960] Simplifying the math where it makes sense, fewer significant digits, ignoring all but the most important inputs, can cut the size of the job tremendously.
[00:11:12.960 --> 00:11:18.960] Faster results, less power consumed, more useful in every way.
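[Editor's aside: the "fewer significant digits" idea is what quantization does to a model's weights. Here's a minimal sketch, using hypothetical helper names, of mapping 32-bit floats onto 256 int8 levels: a quarter of the memory and cheaper arithmetic, at the cost of a small, bounded rounding error.]

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8: pick a scale so the largest weight
    lands on +/-127, then round every weight to its nearest level."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 levels."""
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(w)

# Rounding error is bounded by half a quantization step...
assert np.abs(dequantize(q, scale) - w).max() <= scale / 2 + 1e-6
# ...while the weights now occupy a quarter of the memory.
assert q.nbytes == w.nbytes // 4
```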
[00:11:20.240 --> 00:11:30.400] A question we might ask at this point is: which one is likely to accelerate faster, improvements in efficiency or demand for more AI?
[00:11:30.720 --> 00:11:35.360] We do have a solid answer for this, and it's not the one we'd like to hear.
[00:11:35.360 --> 00:11:39.280] Remember, those numbers at the top of the show were basically correct.
[00:11:39.280 --> 00:11:48.000] As of now, 2025, the best projections show that global data center power consumption will probably double by 2030.
[00:11:48.000 --> 00:11:50.960] And that's in spite of all the gains in efficiency.
[00:11:50.960 --> 00:11:55.840] But it's also due, in part, to those same gains in efficiency.
[00:11:55.840 --> 00:12:00.000] As we reduce their energy consumption, we're able to do more with them.
[00:12:00.000 --> 00:12:02.320] They become even more useful.
[00:12:02.320 --> 00:12:05.200] And that drives their demand even faster.
[00:12:05.200 --> 00:12:08.560] It's a type of feedback loop we call the snowball effect.
[00:12:08.560 --> 00:12:15.360] The faster it rolls, the more snow it picks up and the heavier it gets, making it roll even faster, and so on.
[00:12:15.680 --> 00:12:22.640] This is called the Jevons paradox, after the 19th century British economist William Stanley Jevons.
[00:12:22.640 --> 00:12:27.200] Increased efficiency leads to increased consumption.
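[Editor's aside: the Jevons paradox in back-of-the-envelope form, with purely hypothetical numbers chosen for illustration. If efficiency gains cut the energy per query threefold but the resulting cheapness drives tenfold more queries, total consumption still more than triples.]

```python
# Hypothetical baseline figures, for illustration only.
energy_per_query_wh = 3.0        # assumed energy cost per AI query, in Wh
queries_per_day = 1_000_000      # assumed daily query volume

# Efficiency improves 3x, but cheaper queries induce 10x the demand.
improved_energy = energy_per_query_wh / 3
induced_queries = queries_per_day * 10

before = energy_per_query_wh * queries_per_day   # total Wh/day before
after = improved_energy * induced_queries        # total Wh/day after

# Jevons paradox: despite a 3x efficiency gain, net consumption rises.
assert after > before
```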
[00:12:28.160 --> 00:12:31.000] But we live in a capitalistic world.
[00:12:31.000 --> 00:12:34.040] Supply and demand are forever intertwined.
[00:12:34.040 --> 00:12:38.120] When demand becomes too great, we have to do one of two things.
[00:12:38.120 --> 00:12:42.280] We increase the supply or we reduce the demand.
[00:12:42.280 --> 00:12:49.960] In this case, we're not physically able to increase the supply, so we do the other thing: reduce the demand.
[00:12:49.960 --> 00:12:52.760] And we do that by raising the price.
[00:12:52.760 --> 00:13:01.160] Expect AI to get more expensive, potentially a lot more, as much as it takes to avoid melting the grid.
[00:13:02.440 --> 00:13:13.800] But wait, you say, a lot of AI is open source, meaning the algorithms and software, or at least analogs comparable to the commercial versions, are freely available to all.
[00:13:13.800 --> 00:13:15.000] That's nice.
[00:13:15.000 --> 00:13:18.120] The hardware and electricity are not.
[00:13:18.120 --> 00:13:21.880] This is a system where water is going to find its own level.
[00:13:22.840 --> 00:13:30.440] One thing nearly all the industry experts are projecting is that data centers are going to turn increasingly to renewable energy.
[00:13:30.440 --> 00:13:34.520] It's the one variable in this equation that's a one-time expenditure.
[00:13:34.520 --> 00:13:41.000] Invest once, now, and the energy needs will be covered for the industry's next phase of growth.
[00:13:42.200 --> 00:13:47.960] And that brings us to a whole other side to this issue that many people don't take into consideration.
[00:13:47.960 --> 00:13:57.160] Powering the AI engines may indeed have a high environmental impact, but some of what the AIs are doing is protecting the environment.
[00:13:57.160 --> 00:14:09.320] An AI can be trained to do just about anything, and whether your application is reducing carbon emissions or protecting old-growth forests, somebody probably has an AI at work on that problem.
[00:14:09.640 --> 00:14:15.000] Is the benefit each program produces worth the cost of generating the power to run it?
[00:14:15.600 --> 00:14:19.680] Well, maybe in some it is, maybe in some it isn't.
[00:14:19.680 --> 00:14:22.480] Let's take a look at a few examples.
[00:14:23.440 --> 00:14:27.200] Here's one that more than directly pays for itself.
[00:14:27.200 --> 00:14:35.440] One way we capture carbon out of smokestacks is to react the flue gas with a limestone slurry, which can absorb the carbon.
[00:14:35.440 --> 00:14:38.240] Pumping that slurry takes a lot of power.
[00:14:38.240 --> 00:14:54.880] The University of Surrey developed an AI which samples the CO2 in the flue in real time, looks at current renewable energy availability and grid energy prices, and dynamically adjusts both the slurry pump rate and the slurry pH.
[00:14:54.880 --> 00:15:04.560] In field trials in India, the system saved 22% in power costs while capturing 17% more CO2.
[00:15:05.840 --> 00:15:16.880] Another way this is done is with the use of MOFs, metal organic frameworks, which are materials that selectively adsorb CO2 directly out of the gases.
[00:15:16.880 --> 00:15:29.760] A team from Argonne National Laboratory, the University of Illinois, and the University of Chicago set up an AI to invent new MOF compounds with a high predicted carbon selectivity.
[00:15:29.760 --> 00:15:39.120] In only 30 minutes, it came up with 120,000 of them, which were then fed to a supercomputer to run molecular dynamics simulations on them.
[00:15:39.120 --> 00:15:44.240] Six were as good as the top industrial adsorbents on the market.
[00:15:44.240 --> 00:15:50.560] That was just in the first 30 minutes, although the supercomputer simulations took considerably longer.
[00:15:51.840 --> 00:16:06.840] A California nonprofit called the Rainforest Connection has developed a novel system consisting of a sensitive microphone, transmitter, mini-computer, and solar panels that is mounted high in the treetops in places like the Brazilian rainforest.
[00:16:06.840 --> 00:16:16.760] The computer uses onboard AI to analyze the sounds being recorded, listening for the telltales of illegal logging operations and also poaching.
[00:16:16.760 --> 00:16:26.280] This allows law enforcement to catch the operators red-handed, whereas before they'd had to rely on random patrols in hopelessly vast areas.
[00:16:27.560 --> 00:16:37.240] Now, of course, these are only three of many, many such initiatives that turn to AI to increase efficiency and cleanliness of processes throughout the world.
[00:16:37.560 --> 00:16:42.760] But so far, they are not nearly enough to offset the costs of running the AIs.
[00:16:42.760 --> 00:16:47.240] Renewable energy helps, but it too is in a losing battle.
[00:16:47.240 --> 00:16:57.320] And so far, nobody sees anything on the horizon comparable to what Ethereum did in 2022 to reduce its consumption by 99%.
[00:16:58.280 --> 00:17:06.600] The bottom line is that gains against the environmental impact of AI are likely to be evolutionary, not revolutionary.
[00:17:06.600 --> 00:17:18.120] In the meantime, we can probably expect economic levers to be about the only effective tool we have, and that means jacking up the cost more and more to reduce the demand.
[00:17:18.760 --> 00:17:25.560] Hopefully, in a few years, I'll have the pleasure of updating this episode with a major new development.
[00:17:26.520 --> 00:17:33.480] We continue with more on how AI is improving weather forecasting in the ad-free and extended premium feed.
[00:17:33.480 --> 00:17:39.560] To access it, become a supporter at skeptoid.com/gopremium.
[00:17:44.040 --> 00:17:53.920] A great big skeptoid shout out to our premium supporters, including James, Ever Hopeful, Brad Fonseca, Marla, and Chris Ling.
[00:17:54.240 --> 00:17:56.960] Did you know you can have Skeptoid come to you?
[00:17:56.960 --> 00:18:02.160] I love doing live shows, whether at meetup clubs, university groups, or conferences.
[00:18:02.160 --> 00:18:09.520] I can show one of our movies like Science Friction, do a live podcast, or just give one of my popular presentations.
[00:18:09.520 --> 00:18:15.280] For more information, come to skeptoid.com and click on Live Shows.
[00:18:15.600 --> 00:18:19.520] Follow us on your favorite social media for even more great content.
[00:18:19.520 --> 00:18:26.240] You'll find both Skeptoid and me, Brian Dunning, on all your favorite social media platforms.
[00:18:26.560 --> 00:18:29.520] Skeptoid is a production of Skeptoid Media.
[00:18:29.520 --> 00:18:33.760] Director of Operations and Tinfoil Hat Counter is Kathy Reitmeyer.
[00:18:33.760 --> 00:18:37.360] Marketing guru and Illuminati liaison is Jake Young.
[00:18:37.360 --> 00:18:41.360] Production Management and All Things Audio by Will McCandless.
[00:18:41.360 --> 00:18:43.520] Music is by Lee Sanders.
[00:18:43.520 --> 00:18:46.880] Researched and written by me, Brian Dunning.
[00:18:46.880 --> 00:18:54.080] Listen to Skeptoid for free on Apple Podcasts, Spotify, Amazon Music, or iHeart.
[00:18:56.000 --> 00:18:59.680] You're listening to Skeptoid, a listener-supported program.
[00:18:59.680 --> 00:19:03.520] I'm Brian Dunning from Skeptoid.com.
[00:19:05.040 --> 00:19:11.040] From the office to a first date, there's a lot of pressure to wear the right clothes for every situation.
[00:19:11.040 --> 00:19:12.880] But clothes don't make the man.
[00:19:12.880 --> 00:19:20.080] With Mack Weldon, you can build a wardrobe that looks polished in any setting and tells the world that you stay you wherever you go.
[00:19:20.080 --> 00:19:28.320] They combine timeless style with modern performance materials to keep you looking good and feeling comfortable all day, no matter what's on today's agenda.
[00:19:28.320 --> 00:19:32.120] Browse everything from shorts and polos to sweats and crews.
[00:19:32.120 --> 00:19:40.600] Go to mackweldon.com and get 25% off your first order of $125 or more with promo code MAC25.
[00:19:40.600 --> 00:19:47.320] That's M-A-C-K-W-E-L-D-O-N.com, promo code MAC25.
[00:19:48.920 --> 00:19:50.600] From P-R-X.