Debug Information
Processing Details
- VTT File: 59a029bc.vtt
- Processing Time: September 11, 2025 at 01:31 PM
- Total Chunks: 1
- Transcript Length: 38,062 characters
- Caption Count: 301 captions
Prompts Used
Prompt 1: Context Setup
You are an expert data extractor tasked with analyzing a podcast transcript.
I will provide you with part 1 of 1 from a podcast transcript.
I will then ask you to extract different types of information from this content in subsequent messages. Please confirm you have received and understood the transcript content.
Transcript section:
[00:00:00.320 --> 00:00:04.400] Hey, it's Arvid, and this is The Bootstrapped Founder.
[00:00:08.880 --> 00:00:23.600] Today you will learn exactly how I code, or I guess rather, how I make machines do my bidding, and why that is both highly effective and has changed my coding forever, and it's also surprisingly anxiety-inducing.
[00:00:23.600 --> 00:00:26.320] But first, here's something that reduces my anxiety.
[00:00:26.320 --> 00:00:32.080] The episode you're listening to is sponsored by Paddle.com, my merchant of record payment provider of choice.
[00:00:32.080 --> 00:00:38.720] They're just taking care of all the things related to money so that founders like me and you can focus on building the things that only we can build.
[00:00:38.720 --> 00:00:43.520] And I don't want to have to deal with the anxiety that comes with like taking people's credit cards or whatever.
[00:00:43.520 --> 00:00:45.680] Paddle handles all of that for me.
[00:00:45.680 --> 00:00:49.360] Sales tax, credit cards failing, all of that recovery.
[00:00:49.360 --> 00:00:50.480] I highly recommend it.
[00:00:50.480 --> 00:00:54.240] So please go and check it out at paddle.com.
[00:00:55.520 --> 00:01:02.320] There's something deeply unsettling about being dramatically more productive while also feeling like you're barely working.
[00:01:02.320 --> 00:01:06.720] If you've been using AI over the last couple months, you might have felt like this too.
[00:01:06.720 --> 00:01:10.240] All of a sudden, this thing is doing stuff for you.
[00:01:10.240 --> 00:01:11.600] And what should you do now?
[00:01:11.600 --> 00:01:12.720] Should you also work?
[00:01:12.720 --> 00:01:15.200] Should you do something else or watch it work?
[00:01:15.200 --> 00:01:16.080] It's wild.
[00:01:16.080 --> 00:01:20.640] There's a strange dichotomy that I find myself in every day now with AI-assisted coding.
[00:01:20.640 --> 00:01:21.920] And I use that quite a bit.
[00:01:21.920 --> 00:01:25.680] My output has multiplied significantly over the last few months.
[00:01:25.680 --> 00:01:27.440] I think that might even be an understatement.
[00:01:27.440 --> 00:01:29.120] It's been 5x, 10x.
[00:01:29.120 --> 00:01:30.000] It's a lot.
[00:01:30.000 --> 00:01:34.480] Yet, I often feel like I'm underutilizing my own time.
[00:01:34.480 --> 00:01:42.800] It's probably the most interesting and confusing part of how I build software today and how I would never have thought I would build software just a couple of years ago.
[00:01:42.800 --> 00:01:48.640] So this shift has been so massive that I can barely recognize how I used to work in the past.
[00:01:48.640 --> 00:01:53.840] And the difference isn't just in the tools, it's in the whole role that I play as a developer.
[00:01:53.840 --> 00:02:04.440] I want to share exactly how this works for me at this very point, like in this moment, how I code, because I think we're witnessing this fundamental transformation in what it means to build software.
[00:02:04.440 --> 00:02:10.200] And if you haven't tried it just yet, maybe this is going to inspire you to give it a shot.
[00:02:10.200 --> 00:02:17.400] And if you are trying it, maybe this is going to give you a couple of hints and pointers as to how to optimize it and make it even more magical.
[00:02:17.400 --> 00:02:29.480] So, let me walk you through what I actually did earlier today, before recording this, just to show you how I build software right now, because it perfectly illustrates the new workflow that I've developed.
[00:02:29.480 --> 00:02:46.040] So, whenever I need to build something that extends existing code, whether I wrote it myself or AI wrote it previously through a different kind of prompt, I found that the most effective approach is to draft a very well-defined spec, like a specification of what I want.
[00:02:46.040 --> 00:02:53.480] But here's the key difference to how it used to be in drafting these kinds of things: I don't type these specs, I speak them.
[00:02:53.480 --> 00:02:58.600] I have a mic attached to my computer for podcasting, obviously, and it's always on.
[00:02:58.600 --> 00:03:14.360] I use a tool called Whisperflow on my Mac that lets me hit a key command, just start speaking, and whenever I hit the finish command, which is the same key command, the full transcript of what I just said gets pasted into whatever input field is currently active under my cursor.
[00:03:14.360 --> 00:03:25.080] Whether that's ChatGPT, Claude, or Perplexity, or my coding assistant, or just a text field somewhere, maybe a Twitter input field, anywhere I want to put text, I can just dictate it.
[00:03:25.080 --> 00:03:31.240] And this is so much faster than typing, even though I can type pretty quickly and stuff.
[00:03:31.240 --> 00:03:34.920] Still, like being able to voice it, massive difference.
[00:03:34.920 --> 00:03:44.080] The transcription quality is quite excellent for this tool in particular because Whisperflow has this additional AI step at the end of it that smooths out the transcript.
[00:03:44.080 --> 00:03:54.720] Instead of just a raw transcription, which can have mistakes, it does reduce misspellings and makes the text more coherent, which is particularly important if you do computing stuff, if you do coding things, right?
[00:03:54.720 --> 00:04:06.160] If you say PHP and you want it to actually be PHP and not like some other weird transcript thing that may come out of it, or Ruby on Rails, I don't know what the transcriber might think of this if it doesn't know the term.
[00:04:06.160 --> 00:04:08.880] So it's really nice to have an AI look into this.
[00:04:08.880 --> 00:04:20.240] So when I have a coding task, which is what this whole podcast episode is about, I use Junie right now, which is the AI coding assistant for IntelliJ's PhpStorm platform.
[00:04:20.240 --> 00:04:22.880] It's my IDE of choice, and I want to use it.
[00:04:22.880 --> 00:04:24.400] I just start talking.
[00:04:24.400 --> 00:04:26.400] I might even switch between windows.
[00:04:26.400 --> 00:04:32.880] I look up articles, blog posts, research, related information, all while I verbalize my thoughts.
[00:04:32.880 --> 00:04:36.480] I scroll through my own code base, I name certain functions, right?
[00:04:36.480 --> 00:04:42.240] I talk about stuff that already exists and how to contextualize it, but I just speak it.
[00:04:42.240 --> 00:04:47.440] And once I'm done speaking, I select the Junie window and I paste what I said.
[00:04:47.440 --> 00:04:49.520] And that becomes my prompt.
[00:04:49.520 --> 00:04:55.920] And these spoken prompts typically follow a very specific structure that I have found works best for coding.
[00:04:55.920 --> 00:05:08.880] I start by speaking through where we are right now, what the tool is, what it currently does, what's the current status of the code that I want changed or augmented, which files are relevant to all of this, and what business logic might be impacted by it.
[00:05:08.880 --> 00:05:19.760] I kind of give it an environmental description, and then I describe what I want the changes themselves to look like, what the interface components are, the wording changes, new logic, different outcomes.
[00:05:19.760 --> 00:05:21.920] I try to prompt outcome first.
[00:05:21.920 --> 00:05:26.720] I give as much detail about the desired outcomes as possible.
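To make that structure tangible, here is a compressed, hypothetical example of what such a spoken prompt might read like once transcribed; all names and details are invented for illustration, not an actual PodScan prompt:

```text
We're in the PodScan codebase. The podcast dashboard currently shows
episode counts but no chart history. Relevant files: the Podcast model
and the PodcastDashboardController. Business logic that must not change:
how hidden podcasts are filtered out.

The outcome I want: a new panel on the dashboard that shows chart
positions over time per podcast. It should render nothing when there is
no chart data, and it must respect the hidden flag everywhere.
```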
[00:05:26.720 --> 00:05:33.960] And about half the time, I also provide detailed implementation steps, not just outcomes, but the steps to get there, the process.
[00:05:29.840 --> 00:05:36.760] Because sometimes I know exactly what the solution should look like.
[00:05:36.920 --> 00:05:38.360] I just don't want to type it out.
[00:05:38.360 --> 00:05:41.640] I'll say something like, here's the class that I would create.
[00:05:41.640 --> 00:05:44.200] Here's the job that I want you to create for this.
[00:05:44.200 --> 00:05:50.040] And the job gets invoked and dispatched at this point in that file and that function in this context.
[00:05:50.040 --> 00:05:55.400] I just kind of build what I have in my mind, the mental model that every developer develops of their code.
[00:05:55.400 --> 00:06:00.920] I verbalize it to the AI so it knows exactly how I'm thinking about it.
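As a concrete illustration of that kind of spoken spec, here is a minimal sketch of the described class-plus-job outcome in a Laravel codebase; the class name, property, and dispatch site are hypothetical, not PodScan's actual code:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// "Here's the job that I want you to create for this."
class RefreshPodcastEstimates implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public int $podcastId)
    {
    }

    public function handle(): void
    {
        // The business logic from the spoken spec goes here: load the
        // podcast, recompute demographic estimates, persist the results.
    }
}

// "The job gets invoked and dispatched at this point in that file":
// e.g. inside an existing controller method:
// RefreshPodcastEstimates::dispatch($podcast->id);
```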
[00:06:00.920 --> 00:06:09.160] And after developing this workflow over a couple months, I've noticed that my time breaks down into a consistent pattern here that works best.
[00:06:09.160 --> 00:06:17.000] Roughly 40% of my time is setting up the prompt, talking myself through it and turning this into a verbalized transcript.
[00:06:17.320 --> 00:06:24.280] Then 20% of my time is actually waiting for the code to be generated, waiting for the agent to do the work.
[00:06:24.280 --> 00:06:29.960] And then 40%, the remaining 40% of my time is reviewing and verifying the code.
[00:06:29.960 --> 00:06:38.600] And that 40% upfront investment into this prompt, I think is crucial because I've tried with less and the quality was pretty bad.
[00:06:38.600 --> 00:06:40.440] The things that came out of it just didn't fit.
[00:06:40.600 --> 00:06:42.600] They were either too much or too little.
[00:06:42.600 --> 00:06:53.640] But the moment I spend 20 minutes sometimes just verbalizing my prompt, all of a sudden, what it would have taken me a couple hours to build is then done in 10 minutes by the AI.
[00:06:53.640 --> 00:06:57.480] And if I had only spent five minutes explaining it, the AI would still have done it in 10 minutes.
[00:06:57.560 --> 00:06:58.280] It would have been bad.
[00:06:58.280 --> 00:07:02.200] And I would have to do the whole thing in a couple more 10-minute steps after that.
[00:07:02.200 --> 00:07:12.840] Usually under half an hour, sometimes just 15 minutes, of explaining exactly what you need will allow you to build very, very good first-shot results.
[00:07:12.840 --> 00:07:15.000] So that upfront investment is crucial.
[00:07:15.040 --> 00:07:25.040] And the more time you spend giving the AI context, the less likely you will run into unexpected errors because you've kind of matched every potential scenario and explained what should happen.
[00:07:25.040 --> 00:07:35.280] A highly contextualized prompt will generate code that does not surprise you and doesn't surprise itself because agents are now kind of recursive in how they interact with their own code.
[00:07:35.280 --> 00:07:40.400] So the more you contextualize, the more you reduce errors in the process.
[00:07:40.400 --> 00:07:47.120] I found that being verbose here, and if you listen to this podcast, you know exactly what I mean with this, is super helpful.
[00:07:47.120 --> 00:07:49.520] Just talk, talk, talk, think about everything.
[00:07:49.520 --> 00:07:58.560] I often repeat myself even when I describe what I want, especially for like critical business logic, because the AI doesn't really know what is critical and what is not.
[00:07:58.560 --> 00:08:10.240] So if I'm dealing with important data that could be corrupted or mishandled if I didn't get it right, I lay out every single scenario, what the data should look like, what changes should look like, what's allowed, what's not allowed.
[00:08:10.240 --> 00:08:16.640] I make it almost repetitive to ensure that the AI system understands every case and what's important in it.
[00:08:16.640 --> 00:08:25.280] And this might be weird because we're so trained to be concise in how we communicate as developers, but for an AI, it doesn't really matter if you repeat yourself 10 times.
[00:08:25.280 --> 00:08:31.440] In fact, it helps because it gets to see what you stress as something meaningful and important and valuable.
[00:08:31.440 --> 00:08:43.680] And this level of detail pays off because the AI takes the same care and projects it into other parts of the application that you might not have even thought would be involved in the changes if you hadn't talked about it before.
[00:08:43.680 --> 00:08:57.280] So when I know that multiple files and complex interactions are required, which is often if you're building on top of an existing code base, I do give the AI explicit instructions to be thorough in the planning stage.
[00:08:57.280 --> 00:09:06.440] We have these reasoning models now that do a lot of planning before they actually go into research or thinking, right, before giving you a result during inference.
[00:09:06.440 --> 00:09:20.360] I tell it to search for all files where certain code might be relevant, or where the models being changed, if the task is to change a model or something, are used or implemented, or where there's a related interface.
[00:09:20.360 --> 00:09:26.200] I want it to find every place that needs modification instead of jumping to the first place and forgetting others.
[00:09:26.200 --> 00:09:32.120] And for the agent to do that, I need to tell it to be very thorough in exploring as much as it can.
[00:09:32.120 --> 00:09:39.560] To help with this, inside my IDE, I open two or three core files that I know will be involved before running the AI.
[00:09:39.560 --> 00:09:43.480] Like if I were to do something on my podcast model, for example, right?
[00:09:43.480 --> 00:09:55.240] Say I wanted the AI to add a couple more demographics to the estimator, or I wanted to, you know, build something so that if a podcast is marked as hidden by a user, it gets removed from the database, something like this.
[00:09:55.240 --> 00:10:04.520] Then I open my podcast model file, and maybe the podcast dashboard controller file, and I put it in the context of the prompt.
[00:10:04.520 --> 00:10:11.320] This gives the agent anchor points, and it doesn't have to search the whole code base, which is something that tools like Cursor often do.
[00:10:11.320 --> 00:10:15.320] No, I give it the specific context in which I want it to operate.
[00:10:15.320 --> 00:10:19.960] It can start with these key files, and then it usually finds references from there.
[00:10:19.960 --> 00:10:26.920] I found much better success with this approach than giving the AI either the entire code base or nothing at all.
[00:10:27.000 --> 00:10:27.800] Couple key files.
[00:10:27.800 --> 00:10:29.080] That's all you need to put in.
[00:10:29.080 --> 00:10:31.080] And usually, then I hit generate.
[00:10:31.080 --> 00:10:35.240] The AI runs for five to 10 minutes, depending on the complexity of the task.
[00:10:35.240 --> 00:10:44.120] Sometimes for very large features that require dozens or hundreds of file changes, I need to come back and eventually type continue to finish the implementation.
[00:10:44.280 --> 00:10:51.040] Rarely happens, but for normal mid-scope features, one shot is usually enough to get something usable out of it.
[00:10:51.360 --> 00:10:53.920] And then we're now 60% into this.
[00:10:53.920 --> 00:10:55.920] Then comes the final 40%, which is probably the most...
[00:10:56.240 --> 00:10:59.680] crucial part of this whole workflow: code review.
[00:10:59.680 --> 00:11:02.800] And this is where I investigate every change line by line.
[00:11:02.800 --> 00:11:11.040] I look at code that I didn't write, which requires intense focus, but it certainly beats having to write all the code myself.
[00:11:11.040 --> 00:11:12.400] So I do appreciate it.
[00:11:12.400 --> 00:11:20.240] And since I've given it such a specific scope definition of what I want, the generated code usually aligns well with my expectations.
[00:11:20.240 --> 00:11:22.480] And that's just me talking about Junie here, right?
[00:11:22.480 --> 00:11:29.200] That's the tool that is built into my IDE, that has access to all the automated code intelligence and all of this.
[00:11:29.200 --> 00:11:30.560] Probably helps.
[00:11:30.560 --> 00:11:33.200] I've not done this work with Windsurf or Cursor.
[00:11:33.200 --> 00:11:37.440] I've checked them out, but they might have different levels of integration.
[00:11:37.440 --> 00:11:39.920] I'm telling you what works for me.
[00:11:39.920 --> 00:11:43.920] But I have one non-negotiable rule in all of my code review.
[00:11:43.920 --> 00:11:49.280] Even though the code might be great, I must understand every single line of code written by an AI agent.
[00:11:49.280 --> 00:11:50.560] I have to go through it.
[00:11:50.560 --> 00:11:59.600] Even if it looks good and it works, because I do try to test it, I always test changes immediately, to catch logic errors or when it misses an import or something like this.
[00:11:59.600 --> 00:12:02.720] Even if it looks good and works, I need to understand it.
[00:12:02.720 --> 00:12:04.640] I need to check out every single line.
[00:12:04.640 --> 00:12:10.400] And most of the time, that's probably 80% in my experience, the code works on the first try.
[00:12:10.400 --> 00:12:13.120] When there are issues, though, they're usually small.
[00:12:13.120 --> 00:12:14.960] It forgot an import statement.
[00:12:14.960 --> 00:12:20.000] It just references the class because it knows it's there, but it's not actually imported for the compiler to find.
[00:12:20.000 --> 00:12:23.760] Or there's slightly incorrect syntax or minor logic errors.
[00:12:23.760 --> 00:12:27.520] These typically take no more than two or three changes to fix.
[00:12:27.520 --> 00:12:35.960] And often it's enough to take the output of the error somewhere and just paste it right back into the prompt, and it will figure it out and fix it for you.
[00:12:36.280 --> 00:12:41.000] But code review gets pretty hard when entirely new files are created.
[00:12:41.000 --> 00:12:46.600] When there's a completely new job being defined or something, I actually have to read and understand the entire definition.
[00:12:46.600 --> 00:12:53.400] I can't just look at what changed and verify that it looks right in the context of the existing function.
[00:12:53.400 --> 00:12:55.320] I need to dive deeply into the logic.
[00:12:55.320 --> 00:13:01.640] And that's the only stressful part of that code review, usually: when there's a completely new file, I need to figure out, does it fit?
[00:13:01.960 --> 00:13:03.560] Is it the right smell?
[00:13:03.560 --> 00:13:05.000] Is it the right location?
[00:13:05.000 --> 00:13:06.760] Is it the right connection?
[00:13:06.760 --> 00:13:13.480] And one of the most powerful features there for these AI coding assistants is the ability to set guidelines.
[00:13:13.480 --> 00:13:21.240] Junie, for example, and obviously the others do that too, lets you define coding standards that get applied every time an agent runs.
[00:13:21.240 --> 00:13:32.440] You can tell it to, I don't know, create unit tests for every new method that they create, or integration tests for every new end-to-end job, or define your test suite and how you want it to be run.
[00:13:32.440 --> 00:13:36.520] Tell it about your coding style or the code smell that you want by giving it examples.
[00:13:36.520 --> 00:13:40.200] All of this gets automatically applied if it is well-defined.
[00:13:40.200 --> 00:13:48.760] And in the documentation, IntelliJ even suggests having the AI create these guidelines by investigating your existing code base, which I think is so cool.
[00:13:48.760 --> 00:13:58.680] It's such a cool idea to have an AI that is smart enough to set up guidelines for itself to stick to by investigating the thing that it's supposed to help you with.
[00:13:58.680 --> 00:14:00.920] Like, how is this not magical?
[00:14:00.920 --> 00:14:01.800] I wonder.
[00:14:01.800 --> 00:14:11.800] You can task it to understand how you currently build your product, how your code is structured, how you approach jobs, background jobs, database connectivity, all of that.
[00:14:11.800 --> 00:14:16.960] And then it codifies this understanding into guidelines that it will use every single time it creates code for you.
[00:14:14.840 --> 00:14:19.600] So I recommend always using these guidelines.
[00:14:19.840 --> 00:14:26.160] They're not just useful for code quality, but for providing architectural insight, both for the AI and for yourself.
[00:14:26.160 --> 00:14:30.640] You can tell the system, my back end is Laravel version 12, my front-end is Vue.js.
[00:14:30.640 --> 00:14:32.640] We use Inertia to bridge the two.
[00:14:32.880 --> 00:14:37.040] We try to use as few libraries as possible in this part of the program.
[00:14:37.040 --> 00:14:40.080] And we prefer external services for these kind of features.
[00:14:40.080 --> 00:14:46.480] And if you have this all written down, well, it makes integration and decisions around this much more intelligent, right?
[00:14:46.480 --> 00:14:51.200] It gives the tool the tools to make the right choices for you.
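For example, a project guidelines file of the kind described here might look something like this; Junie reads guidelines from a file in the project, commonly `.junie/guidelines.md`, and every rule below is purely illustrative, not PodScan's actual configuration:

```markdown
# Project guidelines (illustrative example)

- Backend: Laravel 12. Frontend: Vue.js, bridged with Inertia.
- Create a unit test for every new method; write integration tests
  for every new end-to-end job.
- Run the test suite with `php artisan test` before declaring a task done.
- Use as few external libraries as possible in core application code.
- Prefer external services for payments, email, and similar features.
```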
[00:14:51.200 --> 00:14:57.600] So that's my agentic coding experience, my voice-to-code thing, because everything is kind of spoken at this point.
[00:14:57.600 --> 00:15:03.280] But agentic coding isn't the only way I use AI in development or in my business to begin with.
[00:15:03.280 --> 00:15:13.680] For less technical issues, like operational challenges, and maybe even super technical things like server errors and database stuff that aren't coding-related,
[00:15:13.680 --> 00:15:16.960] I use conversational AI like Claude in the browser.
[00:15:16.960 --> 00:15:28.640] So when my servers start throwing 502 errors intermittently, that's a problem that I still have on occasion, because my backend is just hammering the server all the time with new transcripts and stuff.
[00:15:28.640 --> 00:15:33.120] And sometimes errors appear, I can ask, well, what could be the reason, Claude?
[00:15:33.120 --> 00:15:34.240] Where should I start looking?
[00:15:34.240 --> 00:15:36.320] Which log files should I investigate?
[00:15:36.320 --> 00:15:49.280] When I have a large JSON object and I need to extract data with a bash command, or I need a script to convert CSV to JSON, stuff like this, I handle these through back-and-forth conversation rather than integrated agents.
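For flavor, the kind of throwaway converter that comes out of such a back-and-forth conversation might look like this; a generic sketch, not the actual script:

```php
<?php

// Hypothetical sketch: convert a CSV file (first row = headers) to JSON.
// Usage: php csv_to_json.php input.csv > output.json

$handle = fopen($argv[1], 'r');
$headers = fgetcsv($handle);

$rows = [];
while (($fields = fgetcsv($handle)) !== false) {
    // Assumes every row has the same column count as the header row.
    $rows[] = array_combine($headers, $fields);
}
fclose($handle);

echo json_encode($rows, JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES);
```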
[00:15:49.600 --> 00:15:54.560] And recently, I used Claude's artifacts feature for prototyping, front-end prototyping.
[00:15:54.560 --> 00:15:55.280] That's so cool.
[00:15:55.280 --> 00:15:57.200] I highly recommend you try this.
[00:15:57.200 --> 00:16:03.560] I was working on analytics, a visualization for Podscan, because we track podcast charts and rankings over time.
[00:15:59.920 --> 00:16:06.920] So I wanted to show to my customers how the chart position moved.
[00:16:07.000 --> 00:16:08.280] I wanted a graph for this.
[00:16:08.280 --> 00:16:22.760] So I took example data straight from production, just went into the database, copied some metadata from one podcast, pasted that JSON into Claude, and then asked it to generate three different ways of visualizing this data as live React components.
[00:16:22.760 --> 00:16:25.400] And Claude was really good at building React code.
[00:16:25.400 --> 00:16:33.400] It builds an HTML file and gets all the JavaScript in there and all the React, and you can actually test it and use it and run it.
[00:16:33.400 --> 00:16:36.120] So it built three different interactive components.
[00:16:36.120 --> 00:16:45.320] And once I found the one that I liked, I told it to convert that into a Vue.js composition API component, which is what I use in PodScan, for my own project.
[00:16:45.320 --> 00:16:50.920] And then I took that component, threw that into my coding agent, and told it to integrate everything properly.
[00:16:50.920 --> 00:16:51.960] It's so powerful.
[00:16:51.960 --> 00:16:56.360] It's incredibly powerful to have a workflow like this for prototyping and iteration.
[00:16:56.360 --> 00:17:05.240] Everything is done by the machine, yet you have all the joy of interacting with the in-between stages and figuring out where you want to go.
[00:17:05.240 --> 00:17:06.840] It's really powerful.
[00:17:06.840 --> 00:17:16.200] And one of the most impressive applications for this AI stuff recently for me has been documentation generation because nobody likes to write docs.
[00:17:16.200 --> 00:17:20.280] And this week I overhauled the documentation for the PodScan Firehose API.
[00:17:20.280 --> 00:17:23.800] By this week, I mean earlier this morning, and it took me 10 minutes.
[00:17:23.800 --> 00:17:26.600] A customer mentioned some parts were outdated.
[00:17:26.600 --> 00:17:41.600] And the Firehose API for PodScan, in case you haven't followed this podcast religiously for the last 50 or so episodes, is a webhook-based data stream that sends information about every single podcast episode that we transcribe the moment we finish processing it.
[00:17:41.480 --> 00:17:41.800] Right, right?
[00:17:41.800 --> 00:17:45.520] There's like 50,000 shows a day that release a new episode worldwide.
[00:17:44.920 --> 00:17:52.400] We grab them all, we transcribe them, and then we send off the full data through the Firehose to our customers that need this data.
[00:17:52.720 --> 00:17:57.760] It's in the advanced tier, like the most expensive tier of PodScan, to be able to access this information.
[00:17:57.920 --> 00:18:08.000] It contains the full transcript, host and guest identification, sponsor mentions, all the main themes, topics, basically everything we analyze, dispatched as a sizable JSON object.
[00:18:08.000 --> 00:18:12.800] Like, most of the data is just a couple of words, but the transcript, that can be megabytes in size.
[00:18:12.800 --> 00:18:16.160] Imagine Joe Rogan talking for four hours about something.
[00:18:16.160 --> 00:18:19.520] That is a significant transcript, and we just zip it out.
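To picture the payload: a heavily trimmed, hypothetical sketch of what one Firehose webhook body might contain. The field names below are illustrative guesses based on what's described in this episode, not PodScan's documented schema:

```json
{
  "episode_title": "Example Episode",
  "podcast_name": "Example Show",
  "published_at": "2025-09-11T13:31:00Z",
  "hosts": ["Host Name"],
  "guests": ["Guest Name"],
  "sponsor_mentions": ["Example Sponsor"],
  "themes": ["ai-assisted coding", "bootstrapping"],
  "transcript": "Full transcript text, potentially megabytes in size..."
}
```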
[00:18:19.520 --> 00:18:31.040] So to update the documentation, which already had most of this well-defined at that point, I took my existing Markdown documentation from the Notion document in which it's kept.
[00:18:31.040 --> 00:18:37.680] I pointed the Firehose on my own account at a test webhook to collect real data for a bit.
[00:18:37.680 --> 00:18:46.720] And after getting about 30 to 40 episodes worth of actual data, which is like a minute or two, I exported all of this as CSV directly from webhook.site.
[00:18:46.720 --> 00:18:48.240] That's what I use for testing.
[00:18:48.240 --> 00:18:56.640] And then I had Claude create a bash script for me to condense the transcript portion so I could fit more examples into the AI's context.
[00:18:56.640 --> 00:19:06.400] So I had Claude build a little script to take my massive JSON and snip out most of the transcripts from inside the JSON and then create a CSV file again.
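A minimal sketch of that condensing step, assuming the exported payloads were already collected into a JSON array; the field name "transcript" and the file handling are illustrative, and the actual script Claude produced was bash:

```php
<?php

// Hypothetical sketch: truncate the transcript field in each payload so
// more real examples fit into the AI's context window.
// Usage: php condense.php payloads.json condensed.json

$rows = json_decode(file_get_contents($argv[1]), true);

foreach ($rows as &$row) {
    if (isset($row['transcript']) && strlen($row['transcript']) > 500) {
        $row['transcript'] = substr($row['transcript'], 0, 500) . ' [...]';
    }
}
unset($row);

file_put_contents($argv[2], json_encode($rows, JSON_PRETTY_PRINT));
```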
[00:19:06.400 --> 00:19:10.960] I put that into Claude, and I told it: here's the existing documentation, attached as Markdown.
[00:19:10.960 --> 00:19:15.680] Here's the real data showing the actual structure and field variations, attached as CSV.
[00:19:15.680 --> 00:19:18.560] Update the documentation to be comprehensive and accurate.
[00:19:18.560 --> 00:19:20.480] Respond in a Markdown document.
[00:19:20.480 --> 00:19:29.920] And less than a minute later, I had extended documentation that included everything from my prior version that I'd already written manually, like a person from the 1500s, apparently.
[00:19:30.760 --> 00:19:41.480] But it had replaced all the simple examples that I handcrafted with a table-based index of every single field, what types might be expected, when they're present, and what their purpose is.
[00:19:41.480 --> 00:19:43.400] It was 95% correct.
[00:19:43.400 --> 00:19:45.880] It was mostly incorrect in terms of frequency and stuff.
[00:19:45.880 --> 00:19:51.960] So I went through it again, code review, corrected it, and the remaining 5% took very little time to fix.
[00:19:51.960 --> 00:19:54.920] I think like five minutes just to read through the whole thing.
[00:19:54.920 --> 00:19:56.520] And this is how I code now.
[00:19:56.520 --> 00:20:03.080] This is how I run my business, my software-as-a-service podcast database business.
[00:20:03.080 --> 00:20:05.800] It's just me wrangling AIs to do my bidding.
[00:20:05.800 --> 00:20:07.400] It's bizarre that this is a thing.
[00:20:07.400 --> 00:20:11.880] I would never have thought that this was the future that I would live to see.
[00:20:11.880 --> 00:20:16.680] But here's what's most fascinating about this entire transformation.
[00:20:16.680 --> 00:20:28.920] Those 20% moments when the AI is generating code, between the 40% of me telling it what to do and the 40% of me checking, still very much feel like cheating.
[00:20:28.920 --> 00:20:32.120] It feels like somebody else is doing the work for me and I should be working.
[00:20:32.120 --> 00:20:41.960] I have to remind myself that without this process of specifying, executing, and reviewing, 40/20/40, I wouldn't get half the things done that I accomplish in a day.
[00:20:41.960 --> 00:20:53.480] Probably not like even 10% of it at this point, because it is extremely reliable and super fast compared to me spending two hours on a whole thing instead of just half an hour.
[00:20:53.480 --> 00:20:57.400] I'm often humbled by the speed at which these systems generate code.
[00:20:57.400 --> 00:21:01.240] It's not always right, but neither is mine, to be honest, when I write code.
[00:21:01.240 --> 00:21:04.760] I'm confused when people complain about AI writing code that doesn't work.
[00:21:04.760 --> 00:21:11.400] They seem to forget that they themselves write code that doesn't work immediately and always needs debugging or needs at least some kind of iterating process.
[00:21:11.400 --> 00:21:20.480] Coding, for most of us, and I call myself a 0.8x developer, both jokingly and realistically, I'm not the best coder out there,
[00:21:20.720 --> 00:21:26.160] has always been trial and error, for me at least and for many others, until the errors become small enough not to matter.
[00:21:26.160 --> 00:21:28.160] That's the process that I had.
[00:21:28.160 --> 00:21:35.760] And just from watching Junie do the work, from actually reading the individual steps, I think AI systems work the same way internally now.
[00:21:35.760 --> 00:21:37.680] They have self-checking loops.
[00:21:37.680 --> 00:21:45.680] I often see an agentic system test its own work, either linting it or realizing that something doesn't compile, or is interpreted and doesn't run.
[00:21:45.680 --> 00:21:48.560] And it tries again until it finds the right approach.
[00:21:48.560 --> 00:21:51.040] And that's exactly what a developer would do.
[00:21:51.040 --> 00:21:56.800] What we're witnessing here is a transformation from being code writers to code editors.
[00:21:56.800 --> 00:21:58.400] We're no longer writers of code.
[00:21:58.400 --> 00:21:59.840] We are the editors of code.
[00:21:59.840 --> 00:22:03.120] And what we call a code editor now might as well be redefined, right?
[00:22:03.120 --> 00:22:09.280] It's not the program that allows us to type in things, but it's a tool where we say, yeah, I approve of this code.
[00:22:09.280 --> 00:22:10.320] Yes, do this.
[00:22:10.320 --> 00:22:11.440] Oh, you've done this.
[00:22:11.440 --> 00:22:12.400] Okay, it's fine.
[00:22:12.400 --> 00:22:13.040] Or it's not.
[00:22:13.040 --> 00:22:14.080] Do it again.
[00:22:14.080 --> 00:22:19.120] I think it's a fascinating redefinition of terms in our industry that we are looking at right now.
[00:22:19.120 --> 00:22:27.040] And I think this is becoming the norm very quickly, this approach to coding and doing stuff, because we need to understand something important here.
[00:22:27.040 --> 00:22:32.480] Today's version of AI-assisted coding is probably the worst version we'll ever see again.
[00:22:32.480 --> 00:22:37.120] It might be the best we had so far, obviously, but it's also the worst one that we'll have going forward.
[00:22:37.120 --> 00:22:38.080] Everything is going to be better.
[00:22:38.080 --> 00:22:41.120] Tomorrow's version will be better, and the day after that will be even better.
[00:22:41.120 --> 00:22:45.360] At the speed that these things are developed, that might actually be factually the case.
[00:22:45.360 --> 00:22:48.560] And these systems will become more reliable, more autonomous.
[00:22:48.560 --> 00:23:02.040] We'll see fewer interventions needed and more "sure, this works, looks good to me" responses from people, particularly with the advent of people understanding MCP, the Model Context Protocol, and integrating external verification systems.
[00:23:02.520 --> 00:23:11.960] AI will be checking its own work through external tools and internal tools that we built for it and that are built into the IDEs and development systems of the future.
[00:23:11.960 --> 00:23:15.080] So if you're still writing code entirely by hand, I think that's great.
[00:23:15.080 --> 00:23:17.640] That's fine that you can actually still do that.
[00:23:17.640 --> 00:23:20.280] It is still something that we need to be able to do.
[00:23:20.280 --> 00:23:27.240] I occasionally take a step back and code manually only to quickly get frustrated with my own limitations, which have always been there, right?
[00:23:27.240 --> 00:23:29.560] It's not that I've forgotten how to code.
[00:23:29.560 --> 00:23:33.000] It's that it's always been a struggle to get things just right.
[00:23:33.000 --> 00:23:36.120] For perfectionists, this is particularly complicated.
[00:23:36.120 --> 00:23:45.880] But if the alternative is telling someone to write code for you, then understanding that code and saying, yes, this is exactly what I would have written, or no, try again, you missed this thing.
[00:23:45.880 --> 00:23:48.840] Now that is the superpower all in itself.
[00:23:48.840 --> 00:23:53.960] We thought coding was the superpower, but it turns out that the typing part never really mattered.
[00:23:53.960 --> 00:23:59.320] What matters is understanding what good code looks like, what it does, and what it shouldn't do.
[00:23:59.320 --> 00:24:07.240] Being able to discriminate good code from bad code all of a sudden becomes much more important than being able to type good code into an editor in the first place.
[00:24:07.240 --> 00:24:18.200] You still need to understand algorithmic complexity, basic algorithms, data types, and all that so you can prompt effectively, but you don't necessarily need to implement everything by hand anymore.
[00:24:18.200 --> 00:24:19.880] You just need to be able to get it.
[00:24:19.880 --> 00:24:22.680] And this obviously translates into other fields as well.
[00:24:22.680 --> 00:24:33.240] You can apply the same 40-20-40 approach, taking time to get the prompt right, giving as much context as possible, and then expecting mistakes and approaching it from a corrective verification perspective.
[00:24:33.240 --> 00:24:37.640] You can take that to writing, sales, outreach, research, really anything.
[00:24:37.640 --> 00:24:41.720] You just have to know what looks good when you look at the result.
[00:24:41.720 --> 00:24:45.840] And this feels like the inevitable conclusion of automated software development to me.
[00:24:45.840 --> 00:24:49.520] We're experiencing something here that certainly wasn't possible a couple of years ago.
[00:24:44.920 --> 00:24:51.040] And it feels like we're just getting started.
[00:24:51.360 --> 00:24:57.440] And if you're not already experimenting with AI-assisted development, I encourage you to give it a try.
[00:24:57.440 --> 00:24:59.520] See if it fits somewhere into your workflow.
[00:24:59.520 --> 00:25:08.880] Start small, maybe with documentation like I did, or simple scripting tasks, and work your way up to more complex features that you let these agentic systems build by themselves.
[00:25:08.880 --> 00:25:12.800] See how far you can take it without being completely annoyed by it.
[00:25:12.800 --> 00:25:18.320] Because there's always a ceiling to how many mistakes you're willing to let happen in front of your eyes.
[00:25:18.320 --> 00:25:23.200] But I believe the future belongs to those who can effectively collaborate with these systems.
[00:25:23.200 --> 00:25:26.640] And the best way to learn that collaboration is to practice it today.
[00:25:26.960 --> 00:25:35.600] The tools will only get better, but the fundamental skills, knowing how to specify what you want, how to review what you get, and how to iterate towards better results,
[00:25:35.840 --> 00:25:37.600] that's something you can start building right now.
[00:25:37.600 --> 00:25:42.720] And frankly, I think you should, because the people you're competing with, they're figuring this out too.
[00:25:42.720 --> 00:25:46.400] So, what matters isn't whether you can type faster or remember more syntax.
[00:25:46.400 --> 00:25:47.840] That's 90s coding.
[00:25:47.840 --> 00:25:57.840] What matters is whether you can think clearly about problems, communicate effectively with these systems that are building the solutions for you, and recognize quality solutions when you see them.
[00:25:57.840 --> 00:26:04.400] These are the skills that will define the next generation of builders and the next generation of successful software businesses.
[00:26:04.400 --> 00:26:05.600] And that's it for today.
[00:26:05.600 --> 00:26:07.600] Thank you for listening to The Bootstrapped Founder.
[00:26:07.600 --> 00:26:10.720] You can find me on Twitter at arvidkahl, A-R-V-I-D, K-A-H-L.
[00:26:10.880 --> 00:26:22.480] If you want to support me on the show, please talk about podscan.fm to your professional peers and those who you think benefit from tracking mentions of brands, businesses, topics, names on podcasts all over the place.
[00:26:22.480 --> 00:26:25.440] 50,000 episodes a day, we transcribe them all.
[00:26:25.440 --> 00:26:30.760] We have this near real-time podcast database with a really, really solid API and a really good search.
[00:26:30.760 --> 00:26:34.920] So please share the word with those who need to stay on top of the podcast ecosystem.
[00:26:29.120 --> 00:26:35.480] I appreciate it.
[00:26:35.720 --> 00:26:36.920] Thank you so much for listening.
[00:26:36.920 --> 00:26:39.480] Have a wonderful day and bye-bye.
Prompt 2: Key Takeaways
Now please extract the key takeaways from the transcript content I provided.
Extract the most important key takeaways from this part of the conversation. Use a single sentence statement (the key takeaway) rather than milquetoast descriptions like "the hosts discuss...".
Limit the key takeaways to a maximum of 3. The key takeaways should be insightful and knowledge-additive.
IMPORTANT: Return ONLY valid JSON, no explanations or markdown. Ensure:
- All strings are properly quoted and escaped
- No trailing commas
- All braces and brackets are balanced
Format: {"key_takeaways": ["takeaway 1", "takeaway 2"]}
Prompt 3: Segments
Now identify 2-4 distinct topical segments from this part of the conversation.
For each segment, identify:
- Descriptive title (3-6 words)
- START timestamp when this topic begins (HH:MM:SS format)
- Double check that the timestamp is accurate - a timestamp will NEVER be greater than the total length of the audio
- Most important Key takeaway from that segment. Key takeaway must be specific and knowledge-additive.
- Brief summary of the discussion
IMPORTANT: The timestamp should mark when the topic/segment STARTS, not a range. Look for topic transitions and conversation shifts.
Return ONLY valid JSON. Ensure all strings are properly quoted, no trailing commas:
{
"segments": [
{
"segment_title": "Topic Discussion",
"timestamp": "01:15:30",
"key_takeaway": "main point from this segment",
"segment_summary": "brief description of what was discussed"
}
]
}
Timestamp format: HH:MM:SS (e.g., 00:05:30, 01:22:45) marking the START of each segment.
Prompt 4: Media Mentions
Now scan the transcript content I provided for ACTUAL mentions of specific media titles:
Find explicit mentions of:
- Books (with specific titles)
- Movies (with specific titles)
- TV Shows (with specific titles)
- Music/Songs (with specific titles)
DO NOT include:
- Websites, URLs, or web services
- Other podcasts or podcast names
IMPORTANT:
- Only include items explicitly mentioned by name. Do not invent titles.
- Valid categories are: "Book", "Movie", "TV Show", "Music"
- Include the exact phrase where each item was mentioned
- Find the nearest proximate timestamp where it appears in the conversation
- THE TIMESTAMP OF THE MEDIA MENTION IS IMPORTANT - DO NOT INVENT TIMESTAMPS AND DO NOT MISATTRIBUTE TIMESTAMPS
- Double check that the timestamp is accurate - a timestamp will NEVER be greater than the total length of the audio
- Timestamps are given as ranges, e.g. 01:13:42.520 --> 01:13:46.720. Use the EARLIER of the 2 timestamps in the range.
Return ONLY valid JSON. Ensure all strings are properly quoted and escaped, no trailing commas:
{
"media_mentions": [
{
"title": "Exact Title as Mentioned",
"category": "Book",
"author_artist": "N/A",
"context": "Brief context of why it was mentioned",
"context_phrase": "The exact sentence or phrase where it was mentioned",
"timestamp": "estimated time like 01:15:30"
}
]
}
If no media is mentioned, return: {"media_mentions": []}
Full Transcript
[00:00:00.320 --> 00:00:04.400] Hey, it's Arvid and this is the Bootstrap Founder.
[00:00:08.880 --> 00:00:23.600] Today you will learn exactly how I code, or I guess rather, how I make machines do my bidding, and why that is both highly effective and has changed my coding forever, and it's also surprisingly anxiety-inducing.
[00:00:23.600 --> 00:00:26.320] But first, here's something that reduces my anxiety.
[00:00:26.320 --> 00:00:32.080] The episode you're listening to is sponsored by Paddle.com, my merchant of record payment provider of choice.
[00:00:32.080 --> 00:00:38.720] They're just taking care of all the things related to money so that founders like me and you can focus on building the things that only we can build.
[00:00:38.720 --> 00:00:43.520] And I don't want to have to deal with the anxiety that comes with like taking people's credit cards or whatever.
[00:00:43.520 --> 00:00:45.680] Paddle handles all of that for me.
[00:00:45.680 --> 00:00:49.360] Sales tax, credit cards failing, all of that recovery.
[00:00:49.360 --> 00:00:50.480] I highly recommend it.
[00:00:50.480 --> 00:00:54.240] So please go and check it out at paddle.com.
[00:00:55.520 --> 00:01:02.320] There's something deeply unsettling about being dramatically more productive while also feeling like you're barely working.
[00:01:02.320 --> 00:01:06.720] If you've been using AI over the last couple months, you might have felt like this too.
[00:01:06.720 --> 00:01:10.240] All of the sudden, this thing is doing stuff for you.
[00:01:10.240 --> 00:01:11.600] And what should you do now?
[00:01:11.600 --> 00:01:12.720] Should you also work?
[00:01:12.720 --> 00:01:15.200] Should you do something else or watch it work?
[00:01:15.200 --> 00:01:16.080] It's wild.
[00:01:16.080 --> 00:01:20.640] There's a strange dichotomy that I find myself in every day now with AI-assisted coding.
[00:01:20.640 --> 00:01:21.920] And I use that quite a bit.
[00:01:21.920 --> 00:01:25.680] My output has multiplied significantly over the last few months.
[00:01:25.680 --> 00:01:27.440] I think that might even be an understatement.
[00:01:27.440 --> 00:01:29.120] It's been 5x, 10x.
[00:01:29.120 --> 00:01:30.000] It's a lot.
[00:01:30.000 --> 00:01:34.480] Yet, I often feel like I'm underutilizing my own time.
[00:01:34.480 --> 00:01:42.800] It's probably the most interesting and confusing part of how I build software today and how I would never have thought I would build software just a couple of years ago.
[00:01:42.800 --> 00:01:48.640] So this shift has been so massive that I can barely recognize how I used to work in the past.
[00:01:48.640 --> 00:01:53.840] And the difference isn't just in the tools, it's in the whole role that I play as a developer.
[00:01:53.840 --> 00:02:04.440] I want to share exactly how this works for me at this very point, like in this moment, how I code, because I think we're witnessing this fundamental transformation in what it means to build software.
[00:02:04.440 --> 00:02:10.200] And if you haven't tried it just yet, maybe this is going to inspire you to give it a shot.
[00:02:10.200 --> 00:02:17.400] And if you are trying it, maybe this is going to give you a couple of hints and pointers as to how to optimize it and make it even more magical.
[00:02:17.400 --> 00:02:29.480] So, let me walk you through what I actually did today earlier before recording this just to show you how I build software right now because it perfectly illustrates the new workflow that I've developed.
[00:02:29.480 --> 00:02:46.040] So, whenever I need to build something that extends existing code, whether I wrote it myself or AI wrote it previously through a different kind of prompt, I found that the most effective approach is to draft a very well-defined spec, like a specification of what I want.
[00:02:46.040 --> 00:02:53.480] But here's the key difference to how it used to be in drafting these kinds of things: I don't type these specs, I speak them.
[00:02:53.480 --> 00:02:58.600] I have a mic attached to my computer for podcasting, obviously, and it's always on.
[00:02:58.600 --> 00:03:14.360] I use a tool called Whisperflow on my Mac that lets me hit a key command, just start speaking, and whenever I hit the finish command, which is the same key command, the full transcript of what I just said gets pasted into whatever input field is currently active under my cursor.
[00:03:14.360 --> 00:03:25.080] Whether that's ChatGPT or Claude Perplexity or my coding assistant or just a text field somewhere, maybe Twitter input field, anywhere I want to put text, I can just dictate it.
[00:03:25.080 --> 00:03:31.240] And this is so much faster than typing, even though I can type pretty quickly and stuff.
[00:03:31.240 --> 00:03:34.920] Still, like being able to voice it, massive difference.
[00:03:34.920 --> 00:03:44.080] The transcription quality is quite excellent for this tool in particular because Whisperflow has this additional AI step at the end of it that smooths out the transcript.
[00:03:44.080 --> 00:03:54.720] Instead of just a raw transcription, which can have mistakes, it does reduce misspellings and makes the text more coherent, which is particularly important if you do computing stuff, if you do coding things, right?
[00:03:54.720 --> 00:04:06.160] If you say PHP and you want it to actually be PHP and not like some other weird transcript thing that may come out of it, or Ruby on Rails, I don't know what the transcriber might think of this if it doesn't know the term.
[00:04:06.160 --> 00:04:08.880] So it's really nice to have an AI look into this.
[00:04:08.880 --> 00:04:20.240] So when I have a coding task, which is what this whole podcast episode is about, I use Juni right now, which is the AI coding assistant for IntelliJ's PHP Storm platform.
[00:04:20.240 --> 00:04:22.880] It's my IDE of choice, and I want to use it.
[00:04:22.880 --> 00:04:24.400] I just start talking.
[00:04:24.400 --> 00:04:26.400] I might even switch between windows.
[00:04:26.400 --> 00:04:32.880] I look up articles, blog posts, research, related information, all while I verbalize my thoughts.
[00:04:32.880 --> 00:04:36.480] I scroll through my own code base, I name certain functions, right?
[00:04:36.480 --> 00:04:42.240] I talk about stuff that already exists and how to contextualize it, but I just speak it.
[00:04:42.240 --> 00:04:47.440] And once I'm done speaking, I select the Juni window and I paste what I said.
[00:04:47.440 --> 00:04:49.520] And that becomes my prompt.
[00:04:49.520 --> 00:04:55.920] And these spoken prompts typically follow a very specific structure that I have found works best for coding.
[00:04:55.920 --> 00:05:08.880] I start by speaking through where we are right now, what the tool is, what it currently does, what's the current status of the code that I want changed or augmented, which files are relevant to all of this, and what business logic might be impacted by it.
[00:05:08.880 --> 00:05:19.760] I kind of give it an environmental description, and then I describe what I want the changes themselves to look like, what the interface components are, the wording changes, new logic, different outcomes.
[00:05:19.760 --> 00:05:21.920] I try to prompt outcome first.
[00:05:21.920 --> 00:05:26.720] I give as much detail about the desired outcomes as possible.
[00:05:26.720 --> 00:05:33.960] And about half the time, I also provide detailed implementation steps, not just outcomes, but steps there, the process.
[00:05:29.840 --> 00:05:36.760] Because sometimes I know exactly what the solution should look like.
[00:05:36.920 --> 00:05:38.360] I just don't want to type it out.
[00:05:38.360 --> 00:05:41.640] I'll say something like, here's the class that I would create.
[00:05:41.640 --> 00:05:44.200] Here's the job that I want you to create for this.
[00:05:44.200 --> 00:05:50.040] And the job gets invoked and dispatched at this point in that file and that function in this context.
[00:05:50.040 --> 00:05:55.400] I just kind of build what I have in my mind, the mental model that every developer develops of their code.
[00:05:55.400 --> 00:06:00.920] I verbalize it to the AI so it knows exactly how I'm thinking about it.
[00:06:00.920 --> 00:06:09.160] And after developing this workflow over a couple months, I've noticed that my time breaks down into a consistent pattern here that works best.
[00:06:09.160 --> 00:06:17.000] It's roughly 40% of my time is setting up the prompt, it's like talking myself through it and turning this into a verbalized transcript.
[00:06:17.320 --> 00:06:24.280] Then 20% of my time is actually waiting for the code to be generated, waiting for the agent to do the work.
[00:06:24.280 --> 00:06:29.960] And then 40%, the remaining 40% of my time is reviewing and verifying the code.
[00:06:29.960 --> 00:06:38.600] And that 40% upfront investment into this prompt, I think is crucial because I've tried with less and the quality was pretty bad.
[00:06:38.600 --> 00:06:40.440] The things that came out of it just didn't fit.
[00:06:40.600 --> 00:06:42.600] They were too much, too little.
[00:06:42.600 --> 00:06:53.640] But the moment I spend 20 minutes sometimes just verbalizing my prompt, all of a sudden, what it would have taken me a couple hours to build is then done in 10 minutes by the AI.
[00:06:53.640 --> 00:06:57.480] And if I had only spent five minutes explaining it, I would have done it in 10 minutes.
[00:06:57.560 --> 00:06:58.280] It would have been bad.
[00:06:58.280 --> 00:07:02.200] And I would have to do the whole thing in a couple more 10-minute steps after that.
[00:07:02.200 --> 00:07:12.840] Usually, usually half an hour, under half an hour, 15 minutes, something like this, of just explaining exactly what you need will allow you to build very, very good first-shot results.
[00:07:12.840 --> 00:07:15.000] So that upfront investment is crucial.
[00:07:15.040 --> 00:07:25.040] And the more time you spend giving the AI context, the less likely you will run into unexpected errors because you've kind of matched every potential scenario and explained what should happen.
[00:07:25.040 --> 00:07:35.280] A highly contextualized prompt will generate code that does not surprise you and doesn't surprise itself because agents are now kind of recursive in how they interact with their own code.
[00:07:35.280 --> 00:07:40.400] So the more you contextualize, the more you reduce errors in the process.
[00:07:40.400 --> 00:07:47.120] I found that being verbose here, and if you listen to this podcast, you know exactly what I mean with this, is super helpful.
[00:07:47.120 --> 00:07:49.520] Just talk, talk, talk, think about everything.
[00:07:49.520 --> 00:07:58.560] I often repeat myself even when I describe what I want, especially for like critical business logic, because the AI doesn't really know what is critical and what is not.
[00:07:58.560 --> 00:08:10.240] So if I'm dealing with important data that could be corrupted or mishandled if I didn't get it right, I lay out every single scenario, what the data should look like, what changes should look like, what's allowed, what's not allowed.
[00:08:10.240 --> 00:08:16.640] I make it almost repetitive to ensure that the AI system understands every case and what's important in it.
[00:08:16.640 --> 00:08:25.280] And this might be weird because we're so trained to be concise in how we communicate as developers, but for an AI, it doesn't really matter if you repeat yourself 10 times.
[00:08:25.280 --> 00:08:31.440] In fact, it helps because it gets to see what you stress as something meaningful and important and valuable.
[00:08:31.440 --> 00:08:43.680] And this level of detail pays off because the AI takes the same care and projects it into other parts of the application that you might not have even thought would be involved in the changes if you hadn't talked about it before.
[00:08:43.680 --> 00:08:57.280] So when I know that multiple files and complex interactions are required, which is often if you're building on top of an existing code base, I do give the AI explicit instructions to be thorough in the planning stage.
[00:08:57.280 --> 00:09:06.440] We have these reasoning models now that do a lot of planning before they actually go into like research or thinking, right, or giving you a result inferencing.
[00:09:06.440 --> 00:09:20.360] I tell it to search for all files where certain code might be relevant or where models that are being changed, what the task is to change a model or something, are being used or they are being implemented, or there is an interface that's related.
[00:09:20.360 --> 00:09:26.200] I want to find every place that needs modification instead of jumping to the first place and forgetting others.
[00:09:26.200 --> 00:09:32.120] And for the agent to do that, I need to tell it to be very thorough in exploring as much as it can.
[00:09:32.120 --> 00:09:39.560] To help with this, inside my IDE, I open two or three core files that I know will be involved before running the AI.
[00:09:39.560 --> 00:09:43.480] Like if I were to do something on my podcast model, for example, right?
[00:09:43.480 --> 00:09:55.240] I wanted to have the AI add a couple more demographics to the estimator, or I wanted to, you know, build something that if a podcast is marked as hidden by a user, it removes it from the database, something like this.
[00:09:55.240 --> 00:10:04.520] Then I open my podcast model file, and maybe the podcast dashboard controller file, and I put it in the context of the prompt.
[00:10:04.520 --> 00:10:11.320] This gives the agent anchor points, and it doesn't have to search the whole code base, which is something that tools like Cursor often do.
[00:10:11.320 --> 00:10:15.320] No, I give it the specific context in which I want it to operate.
[00:10:15.320 --> 00:10:19.960] It can start with these key files, and then it usually finds references from there.
[00:10:19.960 --> 00:10:26.920] I found much better success with this approach than giving the AI either the entire code base or nothing at all.
[00:10:27.000 --> 00:10:27.800] Couple key files.
[00:10:27.800 --> 00:10:29.080] That's all you need to put in.
[00:10:29.080 --> 00:10:31.080] And usually, then I hit generate.
[00:10:31.080 --> 00:10:35.240] The AI runs for five to 10 minutes, depending on the complexity of the task.
[00:10:35.240 --> 00:10:44.120] Sometimes for very large features that require dozens or hundreds of file changes, I need to come back and eventually type continue to finish the implementation.
[00:10:44.280 --> 00:10:51.040] Rarely happens, but for normal mid-scope features, one shot is usually enough to get something usable out of it.
[00:10:51.360 --> 00:10:53.920] And then we're now 60% into this.
[00:10:53.920 --> 00:10:55.920] Come the 40%, that's probably the most...
[00:10:56.240 --> 00:10:59.680] crucial period of all of this workflow, code review.
[00:10:59.680 --> 00:11:02.800] And this is where I investigate every change line by line.
[00:11:02.800 --> 00:11:11.040] I look at code that I didn't write, which requires intense focus, but it certainly beats having to write all the code myself.
[00:11:11.040 --> 00:11:12.400] So do appreciate it.
[00:11:12.400 --> 00:11:20.240] And since I've given it such a specific scope definition of what I want, the generated code usually aligns well with my expectations.
[00:11:20.240 --> 00:11:22.480] And that's just me talking about Junie here, right?
[00:11:22.480 --> 00:11:29.200] That's the tool that is built into my IDE, that has access to all the automated code intelligence and all of this.
[00:11:29.200 --> 00:11:30.560] Probably helps.
[00:11:30.560 --> 00:11:33.200] I've not done this work with Windsurf or Cursor.
[00:11:33.200 --> 00:11:37.440] I've checked them out, but they might have different levels of integration.
[00:11:37.440 --> 00:11:39.920] I'm telling you what works for me.
[00:11:39.920 --> 00:11:43.920] But I have one non-negotiable rule in all of my code review.
[00:11:43.920 --> 00:11:49.280] Even though the code might be great, I must understand every single line of code written by an AI agent.
[00:11:49.280 --> 00:11:50.560] I have to go through it.
[00:11:50.560 --> 00:11:59.600] Even if it looks good and it works because I try to test it, I always test changes immediately to catch logic errors or when it misses an import or something like this.
[00:11:59.600 --> 00:12:02.720] Even if it looks good and works, I need to understand it.
[00:12:02.720 --> 00:12:04.640] I need to check out every single line.
[00:12:04.640 --> 00:12:10.400] And most of the time, that's probably 80% in my experience, the code works on the first try.
[00:12:10.400 --> 00:12:13.120] When there are issues, though, they're usually small.
[00:12:13.120 --> 00:12:14.960] It forgot an import statement.
[00:12:14.960 --> 00:12:20.000] It just references the class because it knows it's there, but it's not actually imported for the compiler to find.
[00:12:20.000 --> 00:12:23.760] Or there's slightly incorrect syntax or minor logic errors.
[00:12:23.760 --> 00:12:27.520] These typically take no more than two or three changes to fix.
[00:12:27.520 --> 00:12:35.960] And often it's enough to take the output of the error somewhere and just paste it right back into the prompt, and it will figure it out and fix it for you.
[00:12:36.280 --> 00:12:41.000] But code review gets pretty hard when entirely new files are created.
[00:12:41.000 --> 00:12:46.600] When there's a completely new job being defined or something, I actually have to read and understand the entire definition.
[00:12:46.600 --> 00:12:53.400] I can't just look at what changed and verify that it looks right in the context of the existing function.
[00:12:53.400 --> 00:12:55.320] I need to dive deeply into the logic.
[00:12:55.320 --> 00:13:01.640] And that's usually the only stressful part of the code review: when there's a completely new file, I need to figure out, does it fit?
[00:13:01.960 --> 00:13:03.560] Is it the right smell?
[00:13:03.560 --> 00:13:05.000] Is it the right location?
[00:13:05.000 --> 00:13:06.760] Is it the right connection?
[00:13:06.760 --> 00:13:13.480] And one of the most powerful features there for these AI coding assistants is the ability to set guidelines.
[00:13:13.480 --> 00:13:21.240] Junie, for example (the others obviously do this too), lets you define coding standards that get applied every time an agent runs.
[00:13:21.240 --> 00:13:32.440] You can tell it to, I don't know, create unit tests for every new method it writes, or integration tests for every new end-to-end job, or define your test suite and how you want it to be run.
[00:13:32.440 --> 00:13:36.520] Tell it about your coding style or the code smell that you want by giving it examples.
[00:13:36.520 --> 00:13:40.200] All of this gets automatically applied if it is well-defined.
[00:13:40.200 --> 00:13:48.760] And in the documentation, IntelliJ even suggests having the AI create these guidelines by investigating your existing code base, which I think is so cool.
[00:13:48.760 --> 00:13:58.680] It's such a cool idea to have an AI that is smart enough to set up guidelines for itself to stick to by investigating the very thing it's supposed to help you with.
[00:13:58.680 --> 00:14:00.920] Like, how is this not magical?
[00:14:00.920 --> 00:14:01.800] I wonder.
[00:14:01.800 --> 00:14:11.800] You can task it to understand how you currently build your product, how your code is structured, how you approach jobs, background jobs, database connectivity, all of that.
[00:14:11.800 --> 00:14:16.960] And then it codifies this understanding into guidelines that it will use every single time it creates code for you.
[00:14:14.840 --> 00:14:19.600] So I recommend always using these guidelines.
[00:14:19.840 --> 00:14:26.160] They're not just useful for code quality, but for providing architectural insight, both for the AI and for yourself.
[00:14:26.160 --> 00:14:30.640] You can tell the system, my back end is Laravel version 12, my front-end is Vue.js.
[00:14:30.640 --> 00:14:32.640] We use Inertia to bridge them, too.
[00:14:32.880 --> 00:14:37.040] We try to use as few libraries as possible in this part of the program.
[00:14:37.040 --> 00:14:40.080] And we prefer external services for these kind of features.
[00:14:40.080 --> 00:14:46.480] And if you have this all written down, well, it makes integration and decisions around this much more intelligent, right?
[00:14:46.480 --> 00:14:51.200] It gives the tool the tools to make the right choices for you.
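Written down, a guidelines file for a setup like mine might look something like this. Treat it as an illustrative sketch, not my actual file:

```markdown
# Project guidelines (illustrative excerpt)
- Backend: Laravel 12. Frontend: Vue.js, bridged with Inertia.
- Use as few libraries as possible in this part of the program.
- Prefer external services for these kinds of features.
- Write a unit test for every new method and an integration test for every
  new end-to-end job, and run the test suite before considering a task done.
```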
[00:14:51.200 --> 00:14:57.600] So that's my agentic coding experience, my voice-to-code thing, because everything is kind of spoken at this point.
[00:14:57.600 --> 00:15:03.280] But agentic coding isn't the only way I use AI in development or in my business to begin with.
[00:15:03.280 --> 00:15:13.680] For less technical issues, like operational challenges, and maybe even for super technical things like server errors and database stuff that aren't coding-related,
[00:15:13.680 --> 00:15:16.960] I use conversational AI like Claude in the browser.
[00:15:16.960 --> 00:15:28.640] So when my servers start throwing 502 errors intermittently, that's a problem that I still have on occasion, because my backend is just hammering the server all the time with new transcripts and stuff.
[00:15:28.640 --> 00:15:33.120] And sometimes errors appear, I can ask, well, what could be the reason, Claude?
[00:15:33.120 --> 00:15:34.240] Where should I start looking?
[00:15:34.240 --> 00:15:36.320] Which log files should I investigate?
[00:15:36.320 --> 00:15:49.280] When I have a large JSON object and I need to extract data with a bash command, or I need a script to convert CSV to JSON, stuff like this, I handle these through back-and-forth conversation rather than integrated agents.
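What comes out of those conversations is usually something tiny. Here's a sketch of the kind of thing Claude hands back, where the field name and the Miller tool for the CSV conversion are assumptions, just one common way to do it:

```bash
# Pull one field out of every record in a large JSON dump (field name is hypothetical)
jq -r '.episodes[].title' big-dump.json

# Convert CSV to JSON with Miller, one of several tools it might suggest
mlr --icsv --ojson cat export.csv > export.json
```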
[00:15:49.600 --> 00:15:54.560] And recently, I used Claude's artifacts feature for prototyping, front-end prototyping.
[00:15:54.560 --> 00:15:55.280] That's so cool.
[00:15:55.280 --> 00:15:57.200] I highly recommend you try this.
[00:15:57.200 --> 00:16:03.560] I was working on analytics, a visualization for Podscan, because we track podcast charts and rankings over time.
[00:15:59.920 --> 00:16:06.920] So I wanted to show my customers how the chart position moved.
[00:16:07.000 --> 00:16:08.280] I wanted a graph for this.
[00:16:08.280 --> 00:16:22.760] So I took example data straight from production, just went into the database, copied some metadata from one podcast, pasted that JSON into Claude, and then asked it to generate three different ways of visualizing this data as live React components.
[00:16:22.760 --> 00:16:25.400] And Claude was really good at building React code.
[00:16:25.400 --> 00:16:33.400] It built an HTML file with all the JavaScript and all the React in there, and you can actually test it, use it, and run it.
[00:16:33.400 --> 00:16:36.120] So it built three different interactive components.
[00:16:36.120 --> 00:16:45.320] And once I found the one that I liked, I told it to convert that into a Vue.js composition API component, which is what I use in PodScan, for my own project.
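The prompts for that kind of exchange can be as simple as this. Hypothetical wording, not my literal prompts:

```text
Here is real chart-ranking data for one podcast as JSON: [pasted JSON].
Build three different interactive React components that visualize how the
chart position moves over time, so I can compare approaches.

(and after picking a favorite:)

Convert the second one into a Vue.js composition API component.
```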
[00:16:45.320 --> 00:16:50.920] And then I took that component, threw that into my coding agent, and told it to integrate everything properly.
[00:16:50.920 --> 00:16:51.960] It's so powerful.
[00:16:51.960 --> 00:16:56.360] It's incredibly powerful to have a workflow like this for prototyping and iteration.
[00:16:56.360 --> 00:17:05.240] Everything is done by the machine, yet you have all the joy of interacting with the in-between stages and figuring out where you want to go.
[00:17:05.240 --> 00:17:06.840] It's really powerful.
[00:17:06.840 --> 00:17:16.200] And one of the most impressive applications for this AI stuff recently for me has been documentation generation because nobody likes to write docs.
[00:17:16.200 --> 00:17:20.280] And this week I overhauled the documentation for the PodScan Firehose API.
[00:17:20.280 --> 00:17:23.800] By this week, I mean earlier this morning, and it took me 10 minutes.
[00:17:23.800 --> 00:17:26.600] A customer mentioned some parts were outdated.
[00:17:26.600 --> 00:17:41.600] And the Firehose API for PodScan, in case you haven't followed this podcast religiously for the last 50 or so episodes, is a webhook-based data stream that sends information about every single podcast episode that we transcribe the moment we finish processing it.
[00:17:41.480 --> 00:17:41.800] Right, right?
[00:17:41.800 --> 00:17:45.520] There's like 50,000 shows a day that release a new episode worldwide.
[00:17:44.920 --> 00:17:52.400] We grab them all, we transcribe them, and then we send off the full data through the Firehose to our customers that need this data.
[00:17:52.720 --> 00:17:57.760] It's in the advanced tier, like the most expensive tier of PodScan, to be able to access this information.
[00:17:57.920 --> 00:18:08.000] It contains the full transcript, host and guest identification, sponsor mentions, all the main themes and topics, basically everything we analyze, dispatched as a sizable JSON object.
[00:18:08.000 --> 00:18:12.800] Most of the fields are just a couple of words, but the transcript alone can be megabytes in size.
[00:18:12.800 --> 00:18:16.160] Imagine Joe Rogan talking for four hours about something.
[00:18:16.160 --> 00:18:19.520] That is a significant transcript, and we just zip it out.
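Structurally, a payload like that looks something like this. The field names below are illustrative, not the actual schema; documenting the real one is exactly what this exercise was about:

```json
{
  "episode_title": "…",
  "podcast_name": "…",
  "transcript": "…potentially megabytes of text…",
  "hosts": ["…"],
  "guests": ["…"],
  "sponsor_mentions": ["…"],
  "themes": ["…"]
}
```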
[00:18:19.520 --> 00:18:31.040] So to update the documentation, which already has most of this well-defined or had at that point, I took my existing Markdown documentation from the Notion document in which it's kept.
[00:18:31.040 --> 00:18:37.680] I pointed the Firehose on my own account at a test webhook to collect real data for a bit.
[00:18:37.680 --> 00:18:46.720] And after getting about 30 to 40 episodes worth of actual data, which is like a minute or two, I exported all of this as CSV directly from webhook.site.
[00:18:46.720 --> 00:18:48.240] That's what I use for testing.
[00:18:48.240 --> 00:18:56.640] And then I had Claude create a bash script for me to condense the transcript portion so I could fit more examples into the AI's context.
[00:18:56.640 --> 00:19:06.400] So I had Claude build a little script to take my massive JSON and snip out most of the transcripts from inside the JSON and then create a CSV file again.
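A minimal sketch of what such a condensing script can look like, built around jq. The field name, the 500-character cutoff, and the plain-JSON-array input are assumptions; my actual version also dealt with the CSV wrapping:

```bash
#!/usr/bin/env bash
# Truncate the transcript field in every record so that more example
# payloads fit into the model's context window.
jq 'map(.transcript = ((.transcript // "") | .[0:500] + " [truncated]"))' \
  episodes.json > episodes-condensed.json
```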
[00:19:06.400 --> 00:19:10.960] Put that into Claude and I told it: here's the existing documentation attached to Markdown.
[00:19:10.960 --> 00:19:15.680] Here's the real data showing the actual structure and field variations attached to CSV.
[00:19:15.680 --> 00:19:18.560] Update the documentation to be comprehensive and accurate.
[00:19:18.560 --> 00:19:20.480] Respond in a Markdown document.
[00:19:20.480 --> 00:19:29.920] And less than a minute later, I had extended documentation that included everything from my prior version that I'd already written manually, like a person from the 1500s, apparently.
[00:19:30.760 --> 00:19:41.480] But it had replaced all the simple examples that I handcrafted with a table-based index of every single field, what types might be expected, when they're present, and what their purpose is.
[00:19:41.480 --> 00:19:43.400] It was 95% correct.
[00:19:43.400 --> 00:19:45.880] It was mostly incorrect about things like how frequently certain fields appear.
[00:19:45.880 --> 00:19:51.960] So I went through it again, code review, corrected it, and the remaining 5% took very little time to fix.
[00:19:51.960 --> 00:19:54.920] I think like five minutes just to read through the whole thing.
[00:19:54.920 --> 00:19:56.520] And this is how I code now.
[00:19:56.520 --> 00:20:03.080] This is how I run my business, my software as a service, podcast, and database business.
[00:20:03.080 --> 00:20:05.800] It's just me wrangling AIs to do my bidding.
[00:20:05.800 --> 00:20:07.400] It's bizarre that this is a thing.
[00:20:07.400 --> 00:20:11.880] I would never have thought that this was the future that I would live to see.
[00:20:11.880 --> 00:20:16.680] But here's what's most fascinating about this entire transformation.
[00:20:16.680 --> 00:20:28.920] Those 20% moments, when the AI is generating code between the 40% of me telling it what to do and the 40% of me checking, still very much feel like cheating.
[00:20:28.920 --> 00:20:32.120] It feels like somebody else is doing the work for me and I should be working.
[00:20:32.120 --> 00:20:41.960] I have to remind myself that without this 40-20-40 process of specifying, executing, and reviewing, I wouldn't get half the things done that I accomplish in a day.
[00:20:41.960 --> 00:20:53.480] Probably not even 10% of it at this point, because it is extremely reliable and super fast: half an hour instead of the two hours I would spend doing the whole thing myself.
[00:20:53.480 --> 00:20:57.400] I'm often humbled by the speed at which these systems generate code.
[00:20:57.400 --> 00:21:01.240] It's not always right, but neither is mine, to be honest, when I write code.
[00:21:01.240 --> 00:21:04.760] I'm confused when people complain about AI writing code that doesn't work.
[00:21:04.760 --> 00:21:11.400] They seem to forget that they themselves write code that doesn't work immediately and always needs debugging, or at least some kind of iterative process.
[00:21:11.400 --> 00:21:20.480] Coding, for most of us, and I call myself a 0.8x developer, both jokingly and realistically, because I'm not the best coder out there,
[00:21:20.720 --> 00:21:26.160] has always been trial and error, for me at least and for many others, until the errors become small enough not to matter.
[00:21:26.160 --> 00:21:28.160] That's the process that I had.
[00:21:28.160 --> 00:21:35.760] And I think, just from watching Junie do the work, from actually reading the individual steps, AI systems work the same way internally now.
[00:21:35.760 --> 00:21:37.680] They have self-checking loops.
[00:21:37.680 --> 00:21:45.680] I often see an agentic system test its own work, either by linting it or by realizing that something doesn't compile, or is interpreted and doesn't run.
[00:21:45.680 --> 00:21:48.560] And it tries again until it finds the right approach.
[00:21:48.560 --> 00:21:51.040] And that's exactly what a developer would do.
[00:21:51.040 --> 00:21:56.800] What we're witnessing here is a transformation from being code writers to code editors.
[00:21:56.800 --> 00:21:58.400] We're no longer writers of code.
[00:21:58.400 --> 00:21:59.840] We are the editors of code.
[00:21:59.840 --> 00:22:03.120] And what we call a code editor now might as well be redefined, right?
[00:22:03.120 --> 00:22:09.280] It's not the program that allows us to type in things, but it's a tool where we say, yeah, I approve of this code.
[00:22:09.280 --> 00:22:10.320] Yes, do this.
[00:22:10.320 --> 00:22:11.440] Oh, you've done this.
[00:22:11.440 --> 00:22:12.400] Okay, it's fine.
[00:22:12.400 --> 00:22:13.040] Or it's not.
[00:22:13.040 --> 00:22:14.080] Do it again.
[00:22:14.080 --> 00:22:19.120] I think it's a fascinating redefinition of terms in our industry that we are looking at right now.
[00:22:19.120 --> 00:22:27.040] And I think this is becoming the norm very quickly, this approach to coding and doing stuff, because we need to understand something important here.
[00:22:27.040 --> 00:22:32.480] Today's version of AI-assisted coding is probably the worst version we'll ever see again.
[00:22:32.480 --> 00:22:37.120] It might be the best we had so far, obviously, but it's also the worst one that we'll have going forward.
[00:22:37.120 --> 00:22:38.080] Everything is going to be better.
[00:22:38.080 --> 00:22:41.120] Tomorrow's version will be better, and the day after that will be even better.
[00:22:41.120 --> 00:22:45.360] At the speed that these things are developed, that might actually be factually the case.
[00:22:45.360 --> 00:22:48.560] And these systems will become more reliable, more autonomous.
[00:22:48.560 --> 00:23:02.040] We'll see fewer interventions needed and more "sure, this works, looks good to me" responses from people, particularly with the advent of people understanding MCP, the Model Context Protocol, and integrating external verification systems.
[00:23:02.520 --> 00:23:11.960] AI will be checking its own work through external tools and internal tools that we built for it and that are built into the IDEs and development systems of the future.
[00:23:11.960 --> 00:23:15.080] So if you're still writing code entirely by hand, I think that's great.
[00:23:15.080 --> 00:23:17.640] That's fine that you can actually still do that.
[00:23:17.640 --> 00:23:20.280] It is still something that we need to be able to do.
[00:23:20.280 --> 00:23:27.240] I occasionally take a step back and code manually only to quickly get frustrated with my own limitations, which have always been there, right?
[00:23:27.240 --> 00:23:29.560] It's not that I've forgotten how to code.
[00:23:29.560 --> 00:23:33.000] It's that it's always been a struggle to get things just right.
[00:23:33.000 --> 00:23:36.120] For perfectionists, this is particularly complicated.
[00:23:36.120 --> 00:23:45.880] But if the alternative is telling someone to write code for you, then understanding that code and saying, yes, this is exactly what I would have written, or no, try again, you missed this thing.
[00:23:45.880 --> 00:23:48.840] Now that is a superpower all in itself.
[00:23:48.840 --> 00:23:53.960] We thought coding was the superpower, but it turns out that the typing part never really mattered.
[00:23:53.960 --> 00:23:59.320] What matters is understanding what good code looks like, what it does, and what it shouldn't do.
[00:23:59.320 --> 00:24:07.240] Being able to discriminate good code from bad code all of a sudden becomes much more important than being able to type good code into an editor in the first place.
[00:24:07.240 --> 00:24:18.200] You still need to understand algorithmic complexity, basic algorithms, data types, and all that so you can prompt effectively, but you don't necessarily need to implement everything by hand anymore.
[00:24:18.200 --> 00:24:19.880] You just need to be able to get it.
[00:24:19.880 --> 00:24:22.680] And this obviously translates into other fields as well.
[00:24:22.680 --> 00:24:33.240] You can apply the same 40-20-40 approach, taking time to get the prompt right, giving as much context as possible, and then expecting mistakes and approaching it from a corrective verification perspective.
[00:24:33.240 --> 00:24:37.640] You can take that to writing, sales, outreach, research, really anything.
[00:24:37.640 --> 00:24:41.720] You just have to know what looks good when you look at the result.
[00:24:41.720 --> 00:24:45.840] And this feels like the inevitable conclusion of automated software development to me.
[00:24:45.840 --> 00:24:49.520] We're experiencing something here that certainly wasn't possible a couple of years ago.
[00:24:44.920 --> 00:24:51.040] And it feels like we're just getting started.
[00:24:51.360 --> 00:24:57.440] And if you're not already experimenting with AI-assisted development, I encourage you to give it a try.
[00:24:57.440 --> 00:24:59.520] See if it fits somewhere into your workflow.
[00:24:59.520 --> 00:25:08.880] Start small, maybe with documentation like I did, or simple scripting tasks, and work your way up to more complex features that you let these agentic systems build by themselves.
[00:25:08.880 --> 00:25:12.800] See how far you can take it without being completely annoyed by it.
[00:25:12.800 --> 00:25:18.320] Because there's always a ceiling to how many mistakes you're willing to watch happen in front of your eyes.
[00:25:18.320 --> 00:25:23.200] But I believe the future belongs to those who can effectively collaborate with these systems.
[00:25:23.200 --> 00:25:26.640] And the best way to learn that collaboration is to practice it today.
[00:25:26.960 --> 00:25:35.600] The tools will only get better, but the fundamental skills of knowing how to specify what you want, how to review what you get, and how to iterate towards better results,
[00:25:35.840 --> 00:25:37.600] those are skills you can start building right now.
[00:25:37.600 --> 00:25:42.720] And frankly, I think you should, because the people you're competing with, they're figuring this out too.
[00:25:42.720 --> 00:25:46.400] So, what matters isn't whether you can type faster or remember more syntax.
[00:25:46.400 --> 00:25:47.840] That's 90s coding.
[00:25:47.840 --> 00:25:57.840] What matters is whether you can think clearly about problems, communicate effectively with these systems that are building the solutions for you, and recognize quality solutions when you see them.
[00:25:57.840 --> 00:26:04.400] These are the skills that will define the next generation of builders and the next generation of successful software businesses.
[00:26:04.400 --> 00:26:05.600] And that's it for today.
[00:26:05.600 --> 00:26:07.600] Thank you for listening to the Bootstrap Founder.
[00:26:07.600 --> 00:26:10.720] You can find me on Twitter at @arvidkahl, A-R-V-I-D, K-A-H-L.
[00:26:10.880 --> 00:26:22.480] If you want to support me and the show, please talk about podscan.fm to your professional peers and those who you think would benefit from tracking mentions of brands, businesses, topics, and names on podcasts all over the place.
[00:26:22.480 --> 00:26:25.440] 50,000 episodes a day, we transcribe them all.
[00:26:25.440 --> 00:26:30.760] We have this near real-time podcast database with a really, really solid API and a really good search.
[00:26:30.760 --> 00:26:34.920] So please spread the word to those who need to stay on top of the podcast ecosystem.
[00:26:29.120 --> 00:26:35.480] I appreciate it.
[00:26:35.720 --> 00:26:36.920] Thank you so much for listening.
[00:26:36.920 --> 00:26:39.480] Have a wonderful day and bye-bye.