Debug Information
Processing Details
- VTT File: 7182bd38.vtt
- Processing Time: September 11, 2025 at 01:28 PM
- Total Chunks: 1
- Transcript Length: 19,727 characters
- Caption Count: 158 captions
Prompts Used
Prompt 1: Context Setup
You are an expert data extractor tasked with analyzing a podcast transcript.
I will provide you with part 1 of 1 from a podcast transcript.
I will then ask you to extract different types of information from this content in subsequent messages. Please confirm you have received and understood the transcript content.
Transcript section:
[00:00:00.400 --> 00:00:04.320] Hey, it's Arvid, and this is the Bootstrapped Founder.
[00:00:09.120 --> 00:00:21.360] Today, we will talk about how AI systems, and particularly the ones that do all the work for us, can both massively amplify and hinder our effectiveness at the things that we want to do.
[00:00:21.360 --> 00:00:28.240] This episode is sponsored by Paddle.com, my merchant of record payment provider of choice, and they've been helping me focus on PodScan from day one.
[00:00:28.240 --> 00:00:29.360] That's my own business.
[00:00:29.360 --> 00:00:35.840] They're taking care of all the things that are related to money so that founders like you and me can focus on building the things that only we can build.
[00:00:35.840 --> 00:00:40.400] Paddle then handles the rest: sales tax, credit cards failing, all of that.
[00:00:40.400 --> 00:00:41.440] I highly recommend it.
[00:00:41.440 --> 00:00:44.400] So, please check out paddle.com.
[00:00:46.000 --> 00:00:54.640] Now, picture this: an engineer gets called to fix a massive industrial machine that's been down for hours and that costs the company thousands.
[00:00:54.640 --> 00:01:02.800] They walk in, they look around for maybe two minutes, they pick up a hammer and give one precise tap on a specific component, and the machine whirs back to life.
[00:01:02.800 --> 00:01:03.440] The bill?
[00:01:03.440 --> 00:01:04.560] $10,000.
[00:01:04.720 --> 00:01:06.640] The company obviously is outraged.
[00:01:06.640 --> 00:01:08.720] Well, all you did was hit it once with a hammer.
[00:01:08.720 --> 00:01:11.120] How can you charge $10,000 for that?
[00:01:11.120 --> 00:01:20.400] And then the engineer calmly explains: well, the hammer tap was $5, but knowing exactly where to hit, that's the other $9,995.
[00:01:20.400 --> 00:01:24.720] That's a pretty common story that is often told in the industry about expertise.
[00:01:24.720 --> 00:01:38.160] And this story perfectly captures what I want to talk about today: the nature of expertise and a growing concern that I have about how AI tools might actually be fundamentally changing our ability to develop it.
[00:01:38.480 --> 00:01:48.560] I recently read an article that got me thinking about one of AI's most celebrated features: its ability to remove friction from processes that used to be quite difficult.
[00:01:48.560 --> 00:01:50.080] And I think this sounds wonderful, right?
[00:01:50.080 --> 00:01:53.200] Who doesn't want things to be easier, to be frictionless?
[00:01:53.200 --> 00:02:01.640] But here's the thing that's been nagging at me: what if this removal of friction also removes our capacity for expertise?
[00:02:01.960 --> 00:02:11.960] Think about what happens when we use an AI system, particularly the kind of sophisticated agentic ones that we use for coding and all of that, to do all this work for us.
[00:02:11.960 --> 00:02:25.240] The abstraction of planning, the individual execution of steps, the verification of those steps, and the adaptive behavior to overcome problems and regressions that happen in between, all of this gets kind of taken away from us.
[00:02:25.240 --> 00:02:27.480] It gets abstracted away.
[00:02:27.480 --> 00:02:34.600] And this makes me wonder: can expertise actually still be formed without friction like this at all?
[00:02:34.600 --> 00:02:38.360] Isn't friction a necessary requirement for expertise?
[00:02:38.360 --> 00:02:46.600] Or are there maybe ways to become an expert even when the struggle of understanding the nuance and running into problems over and over again is missing?
[00:02:46.600 --> 00:03:00.200] When it comes to software development, to entrepreneurship, to being a founder, there's so much in that process, that lifestyle, that requires intensely dealing with setbacks and failures and mistakes and errors all the time.
[00:03:00.200 --> 00:03:05.240] Like if you ever tried building anything meaningful, anything new, you will have run into this.
[00:03:05.240 --> 00:03:15.640] And if we have tooling and automations that take every single pain point and every single point of friction away from us, what happens to our development and our capacity to deal with this?
[00:03:15.640 --> 00:03:32.280] My assumption is that we either become much slower at developing skills that are similar to the tools that we're using, or we're prevented completely from building the repertoire of understanding and the kind of behavior that we would call expertise or professional experience.
[00:03:32.280 --> 00:03:38.520] And in the absence of that expertise, we might not be good founders or good entrepreneurs or good developers.
[00:03:38.520 --> 00:03:41.240] We might not even be able to get there anymore.
[00:03:41.240 --> 00:03:44.080] I came across a phrase recently.
[00:03:44.080 --> 00:03:45.360] Not sure who said it originally.
[00:03:44.920 --> 00:03:48.160] It might have been me thinking out loud, but I might have found it on Twitter.
[00:03:48.240 --> 00:03:57.600] But it goes: With AI, people who are already good at a thing get better, and people who are not yet very good at that thing get worse.
[00:03:57.600 --> 00:04:00.160] The good get better, the bad get worse.
[00:04:00.480 --> 00:04:17.760] And the core of this statement lies in how we work through friction: working through criticism or experimentation, failing and trying many other ways, reading about other people's experiences, trying to integrate them into our own and failing at that too, then finding our own way to integrate other people's experience.
[00:04:17.760 --> 00:04:19.760] That's how we learn, right?
[00:04:19.760 --> 00:04:25.360] That's how we build this kind of knowledge of deep understanding of a thing and how to deal with it.
[00:04:25.360 --> 00:04:29.360] And it is through all of this friction that we build the muscle of execution.
[00:04:29.360 --> 00:04:41.200] We build the muscle of comprehension that allows us to develop judgment and discernment and a capacity to understand good from bad, to have taste, to understand this is good and this is bad.
[00:04:41.200 --> 00:04:43.680] You can't really innately understand things like this.
[00:04:43.680 --> 00:04:47.840] You have to try to figure out how they work to see, okay, this is well done.
[00:04:47.840 --> 00:04:50.640] Or see something that looks shiny but isn't.
[00:04:50.720 --> 00:04:54.320] Look into the details of it and see, oh, this is all just cobbled together.
[00:04:54.320 --> 00:04:55.280] This doesn't look nice.
[00:04:55.280 --> 00:04:55.840] I don't want this.
[00:04:55.840 --> 00:04:57.760] I don't want to build things like this.
[00:04:57.760 --> 00:05:06.560] And most of the time, when we think about an expert, we think of someone capable of executing a certain task in a particular field to some kind of standard of excellence, right?
[00:05:06.560 --> 00:05:08.480] Somebody who's good at doing stuff.
[00:05:08.480 --> 00:05:23.920] But I think that's too narrow a definition because an expert is a person who's developed taste, a person who has the capacity to judge tasteful from tasteless, to see good from bad, to see the beautiful in things and see when they are lacking beauty.
[00:05:23.920 --> 00:05:26.240] And these aren't things that are innate to us.
[00:05:26.240 --> 00:05:47.160] An expert has developed taste in the industry for a long time because they've been working in it, and they've developed the capacity for discernment because they had to discern a lot of things and can very quickly apply that taste, discernment, and judgment to any new situation, even ones that they haven't figured out yet because they can extrapolate, because they have so much repertoire.
[00:05:47.160 --> 00:06:01.960] And going back to our engineer story from the beginning, I think that's exactly what expertise looks like: it's knowing what works, knowing what doesn't work, what won't work, and being able to almost immediately discard the many ways that don't work and that you know won't.
[00:06:01.960 --> 00:06:06.920] So that among whatever options you have left, the share of ways that do work is high.
[00:06:06.920 --> 00:06:08.040] That's discernment.
[00:06:08.040 --> 00:06:10.600] And that's the capacity to make a judgment call.
[00:06:10.600 --> 00:06:12.680] Let me give you a personal example here.
[00:06:12.680 --> 00:06:19.800] I've been a developer for 20 years at this point, and I've seen some inkling along the way of what good code might look like.
[00:06:19.800 --> 00:06:23.800] And I know how a seasoned software developer would approach solving a problem.
[00:06:23.800 --> 00:06:28.680] So when I prompt an AI system, I don't tell it to build me this or build me that.
[00:06:28.680 --> 00:06:39.800] I give it a very scoped definition of what I want, the same way that I, as a developer, would want the definition to be told to me by my boss or by somebody whose project I'm building.
[00:06:39.800 --> 00:06:44.040] I tell the system, the AI, this is the application this feature is in.
[00:06:44.040 --> 00:06:45.880] This is the kind of customer it's for.
[00:06:45.880 --> 00:06:47.400] This is the data I will work on.
[00:06:47.400 --> 00:06:49.800] And here are the things that it will need to interface with.
[00:06:49.800 --> 00:06:50.440] Here's the input.
[00:06:50.440 --> 00:06:52.120] Here are the outputs that I expect.
[00:06:52.120 --> 00:06:54.360] Here's probably how I would build this.
[00:06:54.360 --> 00:06:56.440] And here are the steps that I would think about.
[00:06:56.440 --> 00:06:58.760] A couple edge cases to consider as well.
[00:06:58.760 --> 00:07:02.680] I have a lot of understanding of my existing code bases and how new code is written.
[00:07:02.680 --> 00:07:06.760] So, in my prompt, I give all of this information to the AI system.
[00:07:06.760 --> 00:07:11.320] I don't tell it to write me a game where wizards fight monsters or anything like this.
[00:07:11.320 --> 00:07:19.360] I would tell it to build me a roguelike side-scroller that uses sprites from this certain sprite database and has a level design that is specific to this.
[00:07:19.600 --> 00:07:24.320] And then I would tell it every single detail extensively because I would already have built this up in my mind.
[00:07:24.320 --> 00:07:30.560] I have the phrasing, the understanding, and the capacity of knowing what another developer would need to build this.
[00:07:30.560 --> 00:07:36.960] That's how I use this tool, just as if I were to outsource this work and externalize it to another developer.
[00:07:36.960 --> 00:07:38.880] But just in this case, it's a machine.
[00:07:38.880 --> 00:07:43.120] And this makes me faster because I can do this in parallel with many different systems.
[00:07:43.120 --> 00:07:50.880] I can spend five minutes scoping and then I wait 10 minutes for a result versus five minutes scoping for myself and then implementing it for 30 minutes.
[00:07:50.880 --> 00:07:53.360] The machine writes code much quicker than I do.
[00:07:53.360 --> 00:07:59.440] And the better my definition is, my scope is, the more realistic the result, right?
[00:07:59.440 --> 00:08:00.880] The better the actual outcome.
[00:08:00.880 --> 00:08:07.200] And even if there are errors, it's quicker to look and find the bug, the problematic line, and then fix it myself.
[00:08:07.200 --> 00:08:16.560] But here's the crucial part: once I look at the code that comes out, I personally can discern if that looks like code that I would have written.
[00:08:16.560 --> 00:08:18.880] I can judge its quality.
[00:08:18.880 --> 00:08:30.800] And I get to benefit from this AI system in a way that somebody who doesn't know what good code looks like, or whether that code would even work, simply could not.
[00:08:30.800 --> 00:08:42.640] And I've recently talked to a customer who's not technical at all, but they've been trying to build applications using Lovable, this AI tool with which you can prompt your way to a fully capable application.
[00:08:43.040 --> 00:08:43.920] It's really nice.
[00:08:43.920 --> 00:08:45.360] Now, I've used Lovable before.
[00:08:45.360 --> 00:08:52.240] If you look at the PodScan homepage, podscan.fm, you will see what a day or so of hacking around in Lovable can produce.
[00:08:52.240 --> 00:08:59.520] There is a world map that shows whenever new episodes come out and where they have come out, and it shows the thumbnail and the world rotates.
[00:08:59.520 --> 00:09:00.440] It's really cool.
[00:09:00.440 --> 00:09:04.040] I think podscape.fm is the actual landing page.
[00:08:59.920 --> 00:09:05.720] I just pull it into my Podscan homepage.
[00:09:05.800 --> 00:09:09.000] It's really cool what you can build by just coding away there through prompting.
[00:09:09.000 --> 00:09:10.600] Like I didn't write a single line of code.
[00:09:10.600 --> 00:09:12.760] All of this was just me telling it what to build.
[00:09:12.760 --> 00:09:21.400] But this person, my customer, has been trying to integrate the exact same API that I've been using for this Podscape tool into their own Lovable application.
[00:09:21.400 --> 00:09:22.520] And they've been struggling.
[00:09:22.520 --> 00:09:28.920] Whenever new data comes in that doesn't perfectly adhere to the format that they have prompted into the system, it fails.
[00:09:28.920 --> 00:09:34.200] And they've spent hundreds of thousands of tokens trying to get it to work by just telling it to try it differently.
[00:09:34.200 --> 00:09:38.120] But they didn't know what the code should look like or what the data might be structured as.
[00:09:38.120 --> 00:09:40.120] They were just hoping to get it right.
[00:09:40.120 --> 00:09:41.960] That's a waste of time and money.
[00:09:41.960 --> 00:09:46.520] This is somebody who otherwise would likely not have spent two days trying to figure this out.
[00:09:46.520 --> 00:09:50.840] And it made them spend more money and it made their life objectively worse.
[00:09:50.840 --> 00:09:59.800] This AI tool isn't useful for them because it doesn't get them the results they need because they don't know how exactly to get to that result.
[00:09:59.800 --> 00:10:07.640] I could probably help and prompt that tool out of the abyss back into a functioning system, but the tool itself isn't the magical component here.
[00:10:07.640 --> 00:10:14.280] In all things AI, the magical component is the person capable of prompting it effectively.
[00:10:14.280 --> 00:10:24.040] So if AI systems make good people better and bad people worse at coding, at writing, at creating art, creating insights, basically creating anything, what should we do?
[00:10:24.040 --> 00:10:30.440] Honestly, I wouldn't want to learn how to code today by just using vibe coding or prompt-centric software development tools at all.
[00:10:30.440 --> 00:10:40.520] I would definitely want to learn to code using AI, having somebody help me build things, a tool that can help me build, build them with me, maybe, but not build them for me.
[00:10:40.520 --> 00:10:45.000] Because if I don't know how a thing is built, how can I judge the quality of the building?
[00:10:45.360 --> 00:10:53.840] If I don't know how this code came to be, what the alternatives would have been, how can I say that this is the optimal choice or that this choice has a good reason behind it?
[00:10:53.840 --> 00:10:57.840] If that's just taken away from me, abstracted away, I will never understand it.
[00:10:57.840 --> 00:11:03.040] And this is something that we need to be really careful with when we attempt mastery at anything now.
[00:11:03.040 --> 00:11:09.280] Outsourcing the act of struggling through our first and even later experiments in that field is not okay.
[00:11:09.280 --> 00:11:10.800] We need to struggle through this.
[00:11:10.800 --> 00:11:12.080] We need to have friction.
[00:11:12.080 --> 00:11:22.240] We need to fail a little bit every now and then to develop the capacity to overcome challenges, new ones and old ones, to understand why things work one way and don't work another way.
[00:11:22.240 --> 00:11:26.720] And to build the taste that's required to discern a good thing from a bad thing.
[00:11:26.720 --> 00:11:28.480] Through criticism, I think that's important too.
[00:11:28.480 --> 00:11:31.040] Like somebody telling you this is not good.
[00:11:31.040 --> 00:11:32.080] That's okay too.
[00:11:32.080 --> 00:11:32.960] You have to learn from this.
[00:11:32.960 --> 00:11:37.120] You quickly then understand: well, what do you think good looks like and why is this not?
[00:11:37.120 --> 00:11:43.120] How you experiment, how you fail and try many other ways, reading about people's experiences and pulling them in.
[00:11:43.120 --> 00:11:45.600] We build a muscle of comprehension.
[00:11:45.600 --> 00:11:50.400] And this muscle allows us to build judgment, discernment, and the capacity to understand good from bad.
[00:11:50.400 --> 00:11:55.600] It's what separates someone who can execute a task from someone who has true expertise.
[00:11:55.600 --> 00:12:05.840] So whenever you think about outsourcing something to AI, a whole process, think about the fact that that very act might make it harder for you to do the actual thing yourself in the first place.
[00:12:05.920 --> 00:12:07.840] Doesn't mean we should avoid AI tools.
[00:12:07.840 --> 00:12:14.160] They're super powerful, make us way more productive, but we need to be intentional about how we use them.
[00:12:14.160 --> 00:12:24.640] Use AI as a collaborator, a pair programmer, or somebody who helps you, but not as a replacement for your thinking and struggle. You can let it handle the tedious parts.
[00:12:24.640 --> 00:12:29.880] And you still need to maintain involvement in the conceptual and creative aspects of anything you do.
[00:12:29.440 --> 00:12:34.200] You have to keep enough friction in your process to continue building your expertise muscle.
[00:12:34.360 --> 00:12:42.120] Because at the end of the day, the most valuable thing that you can develop isn't just the ability to get AI to do things for you, to delegate.
[00:12:42.120 --> 00:12:46.040] It's the judgment to know whether what it produced is any good.
[00:12:46.040 --> 00:12:52.680] The hammer tap, that's the easy part, but knowing where to hit, that's the expertise that no amount of automation can replace.
[00:12:52.680 --> 00:12:54.920] And that's what I've been thinking a lot about lately.
[00:12:54.920 --> 00:12:57.080] I would love to hear your thoughts on this.
[00:12:57.080 --> 00:13:00.360] Are you finding that AI tools are making you better at what you do?
[00:13:00.360 --> 00:13:01.960] Is my perspective wrong here?
[00:13:01.960 --> 00:13:03.240] Do you actually grow?
[00:13:03.240 --> 00:13:07.240] Or are they creating this dependency that might be limiting your growth?
[00:13:07.240 --> 00:13:08.680] And that's it for today.
[00:13:08.680 --> 00:13:10.840] Thanks so much for listening to the Bootstrapped Founder.
[00:13:10.840 --> 00:13:14.120] You can find me on Twitter at Arvid Kahl, A-R-V-I-D-K-A-H-L.
[00:13:14.120 --> 00:13:24.200] If you want to support me and the show, please share PodScan with your professional peers and those who you think will benefit from tracking mentions of brands, businesses, and names on podcasts out there.
[00:13:24.200 --> 00:13:28.120] We are a near real-time database with a really good API.
[00:13:28.120 --> 00:13:33.480] So please share the word with those who need to stay on top of the podcast ecosystem programmatically.
[00:13:33.480 --> 00:13:34.680] Thank you so much for listening.
[00:13:34.760 --> 00:13:37.480] Have a wonderful day and bye-bye.
Prompt 2: Key Takeaways
Now please extract the key takeaways from the transcript content I provided.
Extract the most important key takeaways from this part of the conversation. Use a single sentence statement (the key takeaway) rather than milquetoast descriptions like "the hosts discuss...".
Limit the key takeaways to a maximum of 3. The key takeaways should be insightful and knowledge-additive.
IMPORTANT: Return ONLY valid JSON, no explanations or markdown. Ensure:
- All strings are properly quoted and escaped
- No trailing commas
- All braces and brackets are balanced
Format: {"key_takeaways": ["takeaway 1", "takeaway 2"]}
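For reference, a response conforming to this format might look like the following. This is an illustrative sketch, with takeaways paraphrased from the transcript above; it is not actual model output:
{"key_takeaways": [
  "Removing friction with AI may also remove the struggle through which expertise, taste, and discernment are formed.",
  "With AI, people who are already good at a thing get better, and people who are not yet good at it get worse.",
  "The value of an AI coding tool depends on the user's ability to scope the problem precisely and to judge the quality of its output."
]}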
Prompt 3: Segments
Now identify 2-4 distinct topical segments from this part of the conversation.
For each segment, identify:
- Descriptive title (3-6 words)
- START timestamp when this topic begins (HH:MM:SS format)
- Double check that the timestamp is accurate - a timestamp will NEVER be greater than the total length of the audio
- Most important Key takeaway from that segment. Key takeaway must be specific and knowledge-additive.
- Brief summary of the discussion
IMPORTANT: The timestamp should mark when the topic/segment STARTS, not a range. Look for topic transitions and conversation shifts.
Return ONLY valid JSON. Ensure all strings are properly quoted, no trailing commas:
{
"segments": [
{
"segment_title": "Topic Discussion",
"timestamp": "01:15:30",
"key_takeaway": "main point from this segment",
"segment_summary": "brief description of what was discussed"
}
]
}
Timestamp format: HH:MM:SS (e.g., 00:05:30, 01:22:45) marking the START of each segment.
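For reference, a conforming response might look like the following. The titles, timestamps, and summaries are illustrative, drawn from topic shifts in the transcript above; they are not actual model output:
{
  "segments": [
    {
      "segment_title": "The Hammer Tap Story",
      "timestamp": "00:00:46",
      "key_takeaway": "Expertise is knowing where to hit, not the hitting itself",
      "segment_summary": "An engineer anecdote frames expertise as judgment: the $5 hammer tap versus the $9,995 of knowing where to apply it."
    },
    {
      "segment_title": "Friction Forms Expertise",
      "timestamp": "00:01:38",
      "key_takeaway": "Frictionless AI workflows may prevent the formation of taste and discernment",
      "segment_summary": "Arvid argues that struggling through setbacks is how developers and founders build judgment, and that agentic AI abstracts that struggle away."
    },
    {
      "segment_title": "Expert Versus Novice Prompting",
      "timestamp": "00:06:10",
      "key_takeaway": "Scoped prompts from an experienced developer outperform trial-and-error prompting by a non-technical user",
      "segment_summary": "A personal example of tightly scoped prompting is contrasted with a non-technical customer burning tokens in Lovable without being able to judge the output."
    }
  ]
}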
Prompt 4: Media Mentions
Now scan the transcript content I provided for ACTUAL mentions of specific media titles:
Find explicit mentions of:
- Books (with specific titles)
- Movies (with specific titles)
- TV Shows (with specific titles)
- Music/Songs (with specific titles)
DO NOT include:
- Websites, URLs, or web services
- Other podcasts or podcast names
IMPORTANT:
- Only include items explicitly mentioned by name. Do not invent titles.
- Valid categories are: "Book", "Movie", "TV Show", "Music"
- Include the exact phrase where each item was mentioned
- Find the nearest proximate timestamp where it appears in the conversation
- THE TIMESTAMP OF THE MEDIA MENTION IS IMPORTANT - DO NOT INVENT TIMESTAMPS AND DO NOT MISATTRIBUTE TIMESTAMPS
- Double check that the timestamp is accurate - a timestamp will NEVER be greater than the total length of the audio
- Timestamps are given as ranges, e.g. 01:13:42.520 --> 01:13:46.720. Use the EARLIER of the 2 timestamps in the range.
Return ONLY valid JSON. Ensure all strings are properly quoted and escaped, no trailing commas:
{
"media_mentions": [
{
"title": "Exact Title as Mentioned",
"category": "Book",
"author_artist": "N/A",
"context": "Brief context of why it was mentioned",
"context_phrase": "The exact sentence or phrase where it was mentioned",
"timestamp": "estimated time like 01:15:30"
}
]
}
If no media is mentioned, return: {"media_mentions": []}
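For this episode, no books, movies, TV shows, or songs are mentioned by name in the transcript, so the expected response is exactly that empty structure.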