Debug Information
Processing Details
- VTT File: 8e5fdca1.vtt
- Processing Time: September 09, 2025 at 06:05 PM
- Total Chunks: 1
- Transcript Length: 36,052 characters
- Caption Count: 267 captions
Prompts Used
Prompt 1: Context Setup
You are an expert data extractor tasked with analyzing a podcast transcript.
I will provide you with part 1 of 1 from a podcast transcript.
I will then ask you to extract different types of information from this content in subsequent messages. Please confirm you have received and understood the transcript content.
Transcript section:
[00:00:00.480 --> 00:00:04.800] Hey, it's Arvid, and this is the Bootstrap Founder.
[00:00:08.960 --> 00:00:11.520] This week, I want to be as pragmatic as possible.
[00:00:11.520 --> 00:00:26.320] Let's talk about not just AI this and AI that, as I so often do, but the actual applications of generative AI that I personally leverage within PodScan to get my customers to see the value of the product and that as quickly as possible.
[00:00:26.320 --> 00:00:35.680] You know, it took me a pretty long time to actually realize this, but using AI in a SaaS, that's not just something like a chatbot for my customers or something like that.
[00:00:35.680 --> 00:00:42.960] AI can work behind the scenes quite effectively, and that facilitates getting the right stuff in front of the right people.
[00:00:42.960 --> 00:00:49.600] And even just to figure out who people are and how I should talk to them is something that AI can help me with.
[00:00:49.600 --> 00:01:00.640] Today, I want to share exactly what I'm doing, how expensive that is to run at my scale, and how I believe that this can be part of every single software as a service business out there.
[00:01:00.640 --> 00:01:15.760] Even if you don't have any touch point with AI, with the concept of artificial intelligence in that business for those customers at all, even if you don't think you should be offering AI features to your customers, there's a good use, and I'm going to share with you how I'm using it behind the scenes.
[00:01:15.760 --> 00:01:23.920] And speaking of scaling smarter, not just faster, that's exactly what today's sponsor, Paddle, is all about.
[00:01:23.920 --> 00:01:31.200] They're running this exclusive five-part online live series designed for SaaS and digital product leaders who want to scale with precision.
[00:01:31.200 --> 00:01:42.240] It goes like from pricing strategies to exit planning, and each of these sessions is packed with a lot of experts, a lot of insight, a lot of actual data that they pull from their own database, improving strategies.
[00:01:42.240 --> 00:01:57.360] So, if you're serious about really putting more effort into growing a SaaS, check out Paddle.com's event series because, just like the AI strategies that I'm about to share with you today, it's all about working smarter to create these magical moments for your customers.
[00:01:57.360 --> 00:02:00.000] So, go to paddle.com to check this out.
[00:02:02.920 --> 00:02:10.840] Let me start with the internal use of generative AI that has been absolutely transformative for me and PodScan.
[00:02:10.840 --> 00:02:20.440] Six hours after a customer signs up for that trial, exactly six hours after, something magical happens, or I coded it to actually happen, but they don't see it.
[00:02:20.440 --> 00:02:23.480] It happens completely behind the scenes.
[00:02:23.480 --> 00:02:29.400] I let them explore the platform first in those first six hours, but at that six hour mark, I start scoring them.
[00:02:29.400 --> 00:02:36.840] Or rather, I have an LLM that is fed with a lot of information, scored that customer for me on a scale from 0 to 10.
[00:02:36.840 --> 00:02:46.040] So I take everything that I know about this person up until that point and have the AI come up with a number following a rather complicated prompt on how to score them.
[00:02:46.040 --> 00:02:54.840] So here's what I collect for this: I collect the domain of their email address, their full name, the name of the team that they have created.
[00:02:54.840 --> 00:03:01.320] I track what and how often people search and the general themes of the topics that they search for.
[00:03:01.320 --> 00:03:05.160] I collect all the kinds of activities they could possibly have on the page.
[00:03:05.160 --> 00:03:17.160] If they navigate to the dashboard, if they navigate to a search window, if they go to the API docs and read certain things, or just even which podcasts and which categories they've checked out along the way.
[00:03:17.160 --> 00:03:28.520] I take all of these activities that I track very meticulously in my own database that at this point has over 500,000 activities in it from all my customers' active usage of the platform.
[00:03:28.520 --> 00:03:32.200] And I condense all of this into one single prompt.
[00:03:32.200 --> 00:03:36.760] At this point, it's GPT-4.0 still that I sent this to.
[00:03:36.760 --> 00:03:45.000] And I ask it to come up with a JSON object, a data object, that contains a score, a number, and an explanation for why that score was given.
[00:03:45.440 --> 00:03:54.080] I set up the prompt so that it gives me the explanation first and then gives me the score, which is one of the most effective ways of dealing with LLMs.
[00:03:54.080 --> 00:03:58.960] I set up the prompt so that it gives the explanation first because that's a good idea.
[00:03:58.960 --> 00:04:02.400] These systems generate one token after the other.
[00:04:02.400 --> 00:04:15.520] So to always give them a good reason to reason first, now you're going to pull that up front and then later give their summaries, any scoring, any reduction of granularity of information.
[00:04:15.520 --> 00:04:23.040] That's why the explanation should come before the point, the numbers, so they can actually argue their case towards the number.
[00:04:23.040 --> 00:04:37.520] Because if you don't, if you get the score first and then ask it to explain it, it takes the number that it already predicted and then tries to justify that number, which always means that the explanation is not necessarily true.
[00:04:37.520 --> 00:04:44.480] It's just an attempt by the LLM to gaslight you into believing that the number it gave, the score it gave, is actually correct.
[00:04:44.480 --> 00:04:46.080] And that number may be anything.
[00:04:46.080 --> 00:04:55.280] So it's just better kind of inductive method, if you think about it in mathematical dimensions, to get to the result that you want.
[00:04:55.280 --> 00:04:56.880] So I let it score this stuff.
[00:04:56.880 --> 00:05:11.920] And if these people have a score of five and above, then the LLM shouldn't just answer with the score, but also with a full data object containing a lot of information about not just the customer, the potential customer, but how I should interact with them in the future, what I should be doing.
[00:05:11.920 --> 00:05:14.080] And this prompt is incredibly specific.
[00:05:14.080 --> 00:05:21.120] I instruct it along the lines of we're trying to talk mostly to these kinds of customers, PR agencies, marketing customers, and founders.
[00:05:21.120 --> 00:05:25.200] I define who these people are in the prompt and what they like, what they need.
[00:05:25.200 --> 00:05:28.320] And then I tell the LLM to score them higher if they're a founder.
[00:05:28.320 --> 00:05:31.400] And if they're a founder, figure out what projects they're currently working on.
[00:05:31.720 --> 00:05:42.120] So we can talk to them about these projects, suggest particular things on how to use the platform with their specific problems and targets in mind, and do web searches, figure stuff out, scrape it.
[00:05:42.120 --> 00:05:54.120] And if it's a business domain that the email comes from, not just Gmail or Yahoo or whatever people might be using, I instruct the prompt to look up that website, actually look it up, and figure out what kind of business they are.
[00:05:54.120 --> 00:06:06.200] I very intensely, I may clearly instruct the AI to actually check who is signing up, if they know this email address, if they know the name, and can give me more social feeds maybe to reach these people and look for their work.
[00:06:06.200 --> 00:06:08.040] That doesn't always work.
[00:06:08.040 --> 00:06:15.320] Currently, the GPT models still hallucinate a lot of social media profilings, but occasionally it does work.
[00:06:15.320 --> 00:06:22.120] And if you were to do a tool call to an actual API that does this more reliably, that's a way to integrate more reliable data.
[00:06:22.120 --> 00:06:25.960] But for me, it's really just about how should I reach out to these people?
[00:06:25.960 --> 00:06:27.480] Like, what can I talk to them about?
[00:06:27.480 --> 00:06:29.400] What can I suggest to them?
[00:06:29.400 --> 00:06:40.920] I have the message with the score and the explanation and stuff sent to my Slack if it's a high-scoring prospect, so that I or anybody in my sales team can start reaching out manually to those people.
[00:06:40.920 --> 00:06:51.640] And I persist the score to the database, to the user model, the reason for the score and the additional data, so that I can use it later as kind of a CRM approach to this particular customer, right?
[00:06:51.640 --> 00:06:54.600] I know who they are, where they come from, what projects they might be into.
[00:06:54.600 --> 00:06:57.720] So whenever I communicate with them, I can pull up this data.
[00:06:57.720 --> 00:07:04.920] And in my administrative interface, I have a view that lists every single prospect over the last 10 days, which is the length of my trial.
[00:07:04.920 --> 00:07:11.160] And it shows their score, what activities and how many of them they've taken up until now, their email, all of that.
[00:07:11.160 --> 00:07:20.960] And then at any point where I feel like reaching out to them, which is usually around three to six days after they start their trial, I have a generate email button, and that button takes all the activities into account.
[00:07:20.960 --> 00:07:34.160] It takes their score, their previous data extraction, who they are, what we know about them, what they've said about themselves in a profile, which teams they are on, what alerts they have created, what searches they've done, all the usage data.
[00:07:34.160 --> 00:07:46.000] It takes that all into account and creates a follow-up email that looks at all the things they've done and then tries to find the next best thing that they should be doing on PodScan to see the value of the platform.
[00:07:46.000 --> 00:07:48.480] That is my value-nurturing approach.
[00:07:48.480 --> 00:07:59.840] I've instructed that particular email-generating prompt to come up with the best possible keywords for an alert or the best search terms for a search and suggest that all in an email.
[00:07:59.840 --> 00:08:01.440] The email introduces me as the founder.
[00:08:01.440 --> 00:08:05.840] It says, Hey, I'm Arvid, I'm the founder and CEO of PodScan, and this is the next step you should take.
[00:08:05.840 --> 00:08:06.560] Any questions?
[00:08:06.560 --> 00:08:07.920] Respond to this email.
[00:08:07.920 --> 00:08:11.200] And then I send that manually by taking content, putting it into my email window.
[00:08:11.200 --> 00:08:16.560] I have a little button that automatically pre-fills my hey.com interface with this, and I just send it out.
[00:08:16.560 --> 00:08:28.560] And this has been extremely helpful in getting people who have done little on the platform to actually set up their first alert and getting people who've already seen a lot of value by searching to persist that search into an alert.
[00:08:28.560 --> 00:08:32.320] Because that's my main goal with PodScan, to get people to set up an alert.
[00:08:32.320 --> 00:08:38.160] Because an alert that generates this kind of value memory, like value-nurturing notifications, right?
[00:08:38.160 --> 00:08:43.440] If somebody puts in their brand and they get one or two mentions of a podcast every single day, well, that's great.
[00:08:43.440 --> 00:08:46.800] They get an email from me every single day with something interesting.
[00:08:46.800 --> 00:08:47.600] That is valuable.
[00:08:47.600 --> 00:08:52.960] That is a value-nurturing that just happens through the internal mechanisms of the platform itself.
[00:08:52.960 --> 00:08:54.000] So that's what I want.
[00:08:54.000 --> 00:09:01.160] And for that, I need people to actually sign up or after sign-up, do an alert, like set up this alert that really works for them.
[00:09:01.160 --> 00:09:03.320] And this has been running for a couple months at this point.
[00:08:59.680 --> 00:09:04.920] It's been extremely helpful.
[00:09:05.160 --> 00:09:11.880] It has led to a lot of interesting conversations and conversations even with people who were just checking it out, not doing much.
[00:09:11.880 --> 00:09:15.640] But a day or two later, I reached out to them with a suggestion and they set up the alert.
[00:09:15.640 --> 00:09:18.280] And now all of a sudden, they're extremely happy with the product.
[00:09:18.280 --> 00:09:21.800] They see why they should be using it and they are paying for it.
[00:09:21.800 --> 00:09:24.200] It's a great conversion tool.
[00:09:24.200 --> 00:09:25.720] So there's a second step too.
[00:09:25.720 --> 00:09:27.640] And that actually comes much earlier.
[00:09:27.640 --> 00:09:33.560] And it's one that I only recently implemented because I realized that I've been missing out on this all along.
[00:09:33.560 --> 00:09:39.640] And that is during the first onboarding wizard that I have that new signups get to see on PodScan.
[00:09:39.640 --> 00:09:44.360] Now, I had onboarding for over a year and the business is a year and a half old.
[00:09:44.360 --> 00:09:46.760] So it came to the product pretty early.
[00:09:46.760 --> 00:09:52.600] Whenever you come to the dashboard for the first time, a full screen overlay comes up that says, Hey, what do you want to do?
[00:09:52.600 --> 00:09:53.960] Do you want to look for mentions?
[00:09:53.960 --> 00:09:54.760] Do you want to search?
[00:09:54.840 --> 00:09:56.360] Do you just want to check out the product?
[00:09:56.360 --> 00:09:58.760] Click any of these options and it takes you to that section.
[00:09:58.760 --> 00:10:02.600] It's kind of an orientation tool, but more here are the options that you have.
[00:10:02.600 --> 00:10:05.240] And I'm going to just show you to the right gate, right?
[00:10:05.240 --> 00:10:09.080] I'm going to show you where you need to go to get this started right away.
[00:10:09.080 --> 00:10:19.000] And if you clicked on monitoring, which is my alerting system, and that was the default up until a couple of days ago, you would then be able to set up a couple of filters or keywords that you might find interesting.
[00:10:19.000 --> 00:10:23.640] You would have to put them in yourself into a little list and click create my first alert.
[00:10:23.640 --> 00:10:29.960] But it still involved a lot of manual work from people who just came to this product like a couple minutes ago.
[00:10:29.960 --> 00:10:32.440] And I think they want to see what it can do for them.
[00:10:32.440 --> 00:10:42.840] And initially, I took the name of the person, because that's one of the first things that they actually gave me in terms of data, and put that in into the alert, but that's not enough.
[00:10:42.840 --> 00:10:58.640] So instead of having people come up with and I guess write their own keywords into the platform, I implemented an automated background process that would fetch or generate the right keywords for this person while they are still going through the onboarding process.
[00:10:58.640 --> 00:11:03.520] Because during registration, I have this step where I ask people to self-classify.
[00:11:03.520 --> 00:11:10.640] They can classify as a founder, as somebody who owns a podcast, as somebody who's a data analyst, PR marketing, that kind of stuff.
[00:11:10.640 --> 00:11:13.680] Or alternatively, I just want to check it out and look at transcripts.
[00:11:13.680 --> 00:11:21.280] And between these four groups, the one that I'm most interested in are obviously the data analyst and the founder, because these are my ICPs.
[00:11:21.280 --> 00:11:25.920] But the others might also just be on their first step into becoming a customer.
[00:11:25.920 --> 00:11:32.960] So I have this text field additionally during registration where they can say what their project is, why they come to PodScan.
[00:11:32.960 --> 00:11:35.600] Some people tell me, I'm looking for this particular transcript.
[00:11:35.600 --> 00:11:37.120] And then I know they're just, they have to check it out.
[00:11:37.120 --> 00:11:37.680] That's fine.
[00:11:37.680 --> 00:11:38.480] We'll see.
[00:11:38.480 --> 00:11:41.040] But some people say, I work for a PR agency.
[00:11:41.040 --> 00:11:43.280] We're trying to track what people say about our clients.
[00:11:43.280 --> 00:11:45.600] And this is who I want to use this product.
[00:11:45.600 --> 00:11:51.360] People put that in there, maybe in 20% of all cases, but the people who do help a lot with the system.
[00:11:51.360 --> 00:11:52.560] So I can see what's going on.
[00:11:52.560 --> 00:12:01.760] So I take this information, I take the domain name again from the email that they signed up with and their full name, and I throw this into a very fast model.
[00:12:01.760 --> 00:12:05.520] I think this is the GPT-40 mini at this point.
[00:12:05.520 --> 00:12:12.720] No, actually, it's the GPT-5 mini that I kind of got to have low reasoning and low verbosity to make it fast.
[00:12:12.720 --> 00:12:23.280] And I task it to create three good keywords or keyword groups that might be good first alert keywords that will be potentially useful to this particular person.
[00:12:23.280 --> 00:12:26.560] And this selection is incredibly effective.
[00:12:26.560 --> 00:12:34.600] If you sign up from a Google domain and you're saying you're just looking for a transcript, it just suggests an alert that might have to do with the podcast you might be interested in.
[00:12:34.600 --> 00:12:49.800] But if you come from an educational domain, you work at a university and you say you're a data analyst, well, then it suggests some keywords, maybe even some key personnel names from a university that you might track, like the dean or whoever is currently working in the department you're in, it pulls that out of the domain.
[00:12:49.800 --> 00:13:03.160] And most importantly, if you come from a marketing company that the AI is aware of, an agency that has certain clients, the AI will quickly check their website, figure out who their biggest clients are, and then suggest tracking those clients immediately.
[00:13:03.160 --> 00:13:04.920] And that is the magic moment.
[00:13:04.920 --> 00:13:17.080] That is what the LLM can do with tool calling and with scraping websites, can actually fetch meaningful information that helps onboard the customer to their specific use case that they might not even know of just yet.
[00:13:17.080 --> 00:13:27.960] They might have an inclination of what it might be, but the AI can pull this out of the context, all of that from just a self-classifier, maybe a project description, and their email domain.
[00:13:27.960 --> 00:13:29.400] I think this is very powerful.
[00:13:29.400 --> 00:13:35.720] And this is one of the things that you could implement for your SaaS right now and not have any other AI feature in the product.
[00:13:35.720 --> 00:13:46.120] Just something that helps people see this spark, this moment of enjoyment, this value moment much earlier because they see, oh, you actually care to do research about them.
[00:13:46.120 --> 00:13:52.840] And your product can be customized to this very specific use case that they have or might not even be aware of.
[00:13:52.920 --> 00:13:54.600] It's really, really powerful.
[00:13:54.600 --> 00:14:04.360] Obviously, now that they are coming into the product with such AI-assisted strategy, I built AI into the product as a generator as well.
[00:14:04.360 --> 00:14:08.520] The third way that I use generative AI is in the dashboard itself.
[00:14:08.520 --> 00:14:12.680] Some people just don't do or don't like onboarding.
[00:14:12.680 --> 00:14:14.120] They say, no, I'm going to do this myself.
[00:14:14.120 --> 00:14:17.840] They click skip and then they go and try to figure out the product.
[00:14:18.000 --> 00:14:27.280] But I still wanted that magic of having the system create an alert for you to be part of the regular experience of PodScan, to be a repeatable thing that people can do whenever they need it.
[00:14:27.280 --> 00:14:34.160] It's not just for their first alert that they get to create via AI, it should also be every other one that they do in the future.
[00:14:34.160 --> 00:14:36.720] So I implemented an alert builder.
[00:14:36.720 --> 00:14:39.360] You can just kind of free form, write about whatever you want.
[00:14:39.360 --> 00:14:44.080] You can say, hey, I want an alert that tracks keywords from this particular industry that I'm working in.
[00:14:44.080 --> 00:14:45.280] I'm working for this company.
[00:14:45.280 --> 00:14:46.320] I'm doing this kind of work.
[00:14:46.320 --> 00:14:49.440] I'm using this to create a newsletter or whatever.
[00:14:49.440 --> 00:15:07.040] And then the AI, tasked with a lot of information about the platform, and it has a lot of information about the platform in the context window, creates the best possible keywords and other specific things like context-aware questions for that particular filter for that particular alert.
[00:15:07.360 --> 00:15:23.040] And that is really, really fun, particularly with the context-aware question filter is extremely powerful because that feature is an AI-assisted check that runs on every single transcript that gets certain keywords mentioned, just to make sure that the context is exactly what the user wants, right?
[00:15:23.040 --> 00:15:30.400] Because if John Smith is looking for a mention, then there's a lot of John Smith that that guy might not be interested in being mentioned around.
[00:15:30.400 --> 00:15:41.280] So having a specific keyword filter that then checks what the context of the keyword is, where you could just phrase it, like, is this episode talking about John Smith, the professional basketball player?
[00:15:41.600 --> 00:15:44.640] That will get a lot of false positives out of the way.
[00:15:44.640 --> 00:15:49.120] And that filter, just like the keywords, can be pre-suggested by AI.
[00:15:49.120 --> 00:15:55.760] And I've trained this system through prompting to give really good, solid results that work really well with PodScan's data.
[00:15:55.760 --> 00:16:01.560] The alert builder is available in the dashboard for everybody, like paid accounts, trial accounts, anybody can use it.
[00:16:01.560 --> 00:16:16.920] And the list builder, which is a similar feature, it builds lists of podcasts, has a couple of AI-assisted creation features to it, where you could just say, I want all podcasts that talk about Star Trek the next generation, like rewatch podcasts, humor comedy podcasts that have a sci-fi angle.
[00:16:16.920 --> 00:16:22.520] You just put that in, you click on create list, and the list gets automatically generated.
[00:16:22.520 --> 00:16:28.520] It pulls all of these podcasts from our own API that you can also use, but it uses its own API.
[00:16:28.520 --> 00:16:30.920] It searches for keywords like Star Trek podcast.
[00:16:30.920 --> 00:16:31.720] It pulls those in.
[00:16:31.720 --> 00:16:37.400] It looks for similar podcasts because we have a podcast similarity feature that I built over the last year as well.
[00:16:37.400 --> 00:16:38.840] And it just explores the whole system.
[00:16:38.840 --> 00:16:42.760] It scores all these shows and then returns the list of the best 50 or so items.
[00:16:42.760 --> 00:16:50.840] That's all powered by an AI system that gets the best search terms and finds the best scoring criteria for that process from the text that you wrote.
[00:16:50.840 --> 00:16:51.080] Right?
[00:16:51.080 --> 00:17:00.120] If you say, I want to watch Star Trek the next generation shows or whatever, then it pulls out obviously Star Trek as a search term, but also 90s sci-fi, right?
[00:17:00.120 --> 00:17:02.120] All of that is really, really useful.
[00:17:02.120 --> 00:17:10.520] Things that I could never build just through like a lookup system or just by having keywords automatically figured out that doesn't work.
[00:17:10.520 --> 00:17:13.720] It needs an AI component and it's really, really useful.
[00:17:13.720 --> 00:17:20.360] So for me, generative AI used in a software as a service product doesn't mean having a chatbot in there.
[00:17:20.360 --> 00:17:20.920] I don't have one.
[00:17:20.920 --> 00:17:21.880] I don't need that.
[00:17:21.880 --> 00:17:26.440] It doesn't mean any kind of like overly agentic behavior for the user.
[00:17:26.440 --> 00:17:32.040] For me, using generative AI in PodScan is just the way I communicate with the user.
[00:17:32.040 --> 00:17:40.760] And it's about making it magical for them to feel understood, for them to see that PodScan gets why they are using it and what they're using it for.
[00:17:40.760 --> 00:17:44.880] It understands that for their business, they might be looking for this kind of data.
[00:17:44.880 --> 00:17:46.560] So I'm going to suggest that to them, right?
[00:17:44.680 --> 00:17:49.440] I'm make them see, oh, this is what I should look for.
[00:17:49.680 --> 00:18:04.880] And in a moment like this, where they already get a really good suggestion, they also understand how they should be doing this better in the future, how they should be searching for things more effectively, how they should be phrasing things to make PodScan be used more optimally, more effectively as well.
[00:18:04.880 --> 00:18:14.400] And to be able to do this, for the prompts that I write, to have an understanding of the software, because I can't just say, help people come up with alert keywords.
[00:18:14.400 --> 00:18:15.760] But yeah, but how do they work?
[00:18:15.760 --> 00:18:17.280] Like, how does the system interpret them?
[00:18:17.280 --> 00:18:21.040] Like, ChatGPT doesn't know how PodScan does this internally, right?
[00:18:21.040 --> 00:18:32.960] So I did a very painful walkthrough of the system where I manually recorded myself like audio just explaining every single feature on the platform into transcription software.
[00:18:32.960 --> 00:18:37.920] I did this for, I think it was 45 minutes of just me talking about every single thing.
[00:18:37.920 --> 00:18:44.160] I clicked, I opened Windows, I described them, I said how it works, how it works in the back end, and all of that.
[00:18:44.160 --> 00:18:54.000] And then I had that transcription, took that into Claude, had that boiled down to a markdown document describing very effectively the structure and function of the platform.
[00:18:54.000 --> 00:18:56.960] And that markdown document is a couple thousand lines long.
[00:18:56.960 --> 00:19:04.720] It was condensed then later into a more promptable version that is part of all these helpful prompts, all these steps.
[00:19:04.720 --> 00:19:12.000] These prompts are not small, they're quite huge because they have a lot of context about what PodScan is and how PodScan can best be used.
[00:19:12.320 --> 00:19:18.160] Particularly the prompt that tells people what the next best step is, the one that I manually trigger via email generation.
[00:19:18.160 --> 00:19:21.200] Well, that needs to know what all the potential steps are, right?
[00:19:21.280 --> 00:19:22.800] Can't just make up steps.
[00:19:22.800 --> 00:19:28.240] I'm very firm on this in the prompt to not make up things that I don't offer in the software.
[00:19:28.240 --> 00:19:29.520] It did that in the beginning.
[00:19:29.520 --> 00:19:31.640] And then people asked me, well, where is that feature?
[00:19:31.640 --> 00:19:32.840] And I kind of had to build it.
[00:19:32.840 --> 00:19:37.320] So make sure it only suggests the things that it knows are possible.
[00:19:37.320 --> 00:19:42.120] I had to go through the whole platform, had to describe everything from start to finish.
[00:19:42.120 --> 00:19:43.800] And then that became part of a prompt.
[00:19:43.800 --> 00:19:46.840] It's also super helpful for agentic coding.
[00:19:46.840 --> 00:19:49.480] If the platform is aware of itself, right?
[00:19:49.480 --> 00:19:58.120] If the platform is kind of self-describing, then an AI agent can read that document and figure out where things might semantically fit.
[00:19:58.120 --> 00:20:06.840] Not just where certain kinds of models and controllers live in the code base, but where in the interface do people expect to be able to use certain data?
[00:20:06.840 --> 00:20:08.600] It's quite useful.
[00:20:08.600 --> 00:20:18.280] And looking into this, to whatever kind of software business you might be running, I think is a very, very useful and quite cheap thing for you to do.
[00:20:18.280 --> 00:20:21.800] And I don't just mean like going through your whole product and explaining it.
[00:20:21.800 --> 00:20:31.560] I think that's very useful for all kinds of AI-assisted work, but particularly this kind of scoring that I'm doing, you only ever have to do this once, right?
[00:20:31.560 --> 00:20:34.360] My scoring happens once per customer.
[00:20:34.360 --> 00:20:37.960] It's only one request that comes back with a score and a little bit of text.
[00:20:38.040 --> 00:20:46.680] Doesn't really have too much context other than the name, customer, the email, maybe a couple things they've searched for, whatever, maybe a couple locations they've been in the product, but it's not big.
[00:20:46.760 --> 00:20:50.360] Costs me less than a couple of cents, maybe not even a cent in fees.
[00:20:50.360 --> 00:20:53.640] And the same goes for the email generated to the customer a couple days later.
[00:20:53.720 --> 00:20:59.320] Might also be a cent or two, just it costs to pull all the information and the prompts in there.
[00:20:59.320 --> 00:21:04.200] So that might be a little bit more expensive, but I think it's still like it's very manageable per prospect.
[00:21:04.200 --> 00:21:07.720] And I do it only for certain prospects that score fairly high to begin with.
[00:21:07.720 --> 00:21:11.400] So, on a daily basis, I spent maybe 20 cents on this kind of stuff.
[00:21:11.400 --> 00:21:12.760] Really negligible.
[00:21:12.760 --> 00:21:19.840] And at the same time, you got to be careful with AI in the product because it can be very expensive if people start using it a lot.
[00:21:20.160 --> 00:21:27.840] So, I'm very, very cautious with anything that allows a customer to trigger an AI API request, like the alert builder or the list builder.
[00:21:27.840 --> 00:21:31.360] Those are heavily rate-limited within PodScan's infrastructure.
[00:21:31.360 --> 00:21:39.120] If I see anybody using AI requests of any kind, they all go through the same middleware here more than a couple times an hour.
[00:21:39.120 --> 00:21:46.000] Then, I manually can decide if I should block this customer forever or deactivate the account if they're starting to abuse it.
[00:21:46.000 --> 00:21:57.680] Got to be really careful with this because, with backend processes that trigger AI stuff, it's always good to make sure that you don't overuse AI API calls just for your own sanity and the finances of your business.
[00:21:57.680 --> 00:21:59.440] It can be very expensive.
[00:21:59.440 --> 00:22:03.120] I would track them all and see if they go over a certain threshold per hour.
[00:22:03.120 --> 00:22:15.440] And you should always get an alarm right to your phone, right to your email, if that even goes over by a couple, because you need to know immediately once you exceed certain usage patterns when new patterns occur.
[00:22:15.440 --> 00:22:21.280] Because if people start abusing your system, you can be like out of thousands of dollars within a couple of hours.
[00:22:21.280 --> 00:22:23.040] Really got to be careful here.
[00:22:23.040 --> 00:22:25.200] Yeah, let's kind of summarize it here.
[00:22:25.200 --> 00:22:37.040] I think tracking who your users are, understanding what their needs are, and suggesting initial configuration for the dashboards or for their use of the software that can work in every single niche, can work in every single industry.
[00:22:37.040 --> 00:22:48.800] And I see quite a few cutting-edge founders building this into every single product that they create, every SaaS, because it's always valuable to meet the customer immediately where they're at.
[00:22:49.120 --> 00:22:52.880] And this is easiest to do if you investigate where they're coming from.
[00:22:52.880 --> 00:22:55.120] What information do you have about this person?
[00:22:55.120 --> 00:22:59.280] How can you make it easiest for them to see where the value is?
[00:22:59.280 --> 00:23:10.440] So, inside your product, you can make AI a kind of transmission system between what the customer knows they want and how you need it presented to your database, to your algorithms, your backend systems.
[00:23:10.440 --> 00:23:22.120] AI can communicate between these two because you can put in the effort to completely and correctly describe your system and then task AI to translate between user requirements and your platform requirements.
[00:23:22.120 --> 00:23:25.640] That's kind of what I do with the automatic alert generation, right?
[00:23:25.640 --> 00:23:29.480] I know what a good alert looks like, I know what really functional keywords are.
[00:23:29.480 --> 00:23:31.640] My customer, they only know kind of what they want.
[00:23:31.640 --> 00:23:34.680] They only know, oh, I want to listen to all these podcasts or those.
[00:23:34.680 --> 00:23:36.760] They don't know exactly what words to use.
[00:23:36.760 --> 00:23:38.440] And AI can translate that.
[00:23:38.440 --> 00:23:41.960] So I get the keywords and they get to describe what they want.
[00:23:41.960 --> 00:23:43.560] And that is a magical moment.
[00:23:43.560 --> 00:23:45.720] It's the AI doing work for the user.
[00:23:45.720 --> 00:23:47.320] It's almost agentic, right?
[00:23:47.320 --> 00:23:50.360] The AI becomes the agent that works for your business.
[00:23:50.360 --> 00:23:51.800] It's a different kind of agenda.
[00:23:51.800 --> 00:23:55.400] You have this little transmission process that an AI does for you.
[00:23:55.400 --> 00:24:00.680] This is one way that I think every business can benefit from AI, and I highly recommend you implement it right now.
[00:24:00.680 --> 00:24:03.400] This might even be a business idea to think about it.
[00:24:03.400 --> 00:24:06.840] Like, it feels like this could be something that everybody can integrate.
[00:24:06.840 --> 00:24:14.040] So, you know, take a look at it from that perspective if you're looking for an idea, figuring out who your customer is and suggesting data.
[00:24:14.040 --> 00:24:18.360] My goal is not to wow people with AI features inside of my product.
[00:24:18.360 --> 00:24:23.000] The goal is to use AI to help people reach their own wow moments with my product.
[00:24:23.000 --> 00:24:30.520] When you do that invisibly, kind of behind the scenes, you create these magical moments where everything just works exactly as they had hoped.
[00:24:30.520 --> 00:24:36.600] That's when AI becomes extremely powerful and affordable for Bootstrap founders.
[00:24:36.600 --> 00:24:37.800] And that's it for today.
[00:24:37.800 --> 00:24:39.880] Thank you so much for listening to the Bootstrap Founder.
[00:24:39.880 --> 00:24:43.240] You can find me on Twitter at ArvidKahl, A-RV-I-D-K-A-H-L.
[00:24:43.480 --> 00:24:55.520] If you want to know what everybody's saying about your brand on over 3.8 million podcasts, podscan.fm tracks mentions in near real time with a powerful API that turns podcast conversations into actionable data.
[00:24:55.520 --> 00:25:08.880] And if you're hunting for your next business idea, you can get them delivered fresh from the podcast world at ideas.podscan.fm, where AI extracts the best startup opportunities from hundreds of hours of expert conversations every day.
[00:25:08.880 --> 00:25:10.160] Thanks so much for listening.
[00:25:10.160 --> 00:25:13.040] Have a wonderful day and bye-bye.
Prompt 2: Key Takeaways
Now please extract the key takeaways from the transcript content I provided.
Extract the most important key takeaways from this part of the conversation. Use a single sentence statement (the key takeaway) rather than milquetoast descriptions like "the hosts discuss...".
Limit the key takeaways to a maximum of 3. The key takeaways should be insightful and knowledge-additive.
IMPORTANT: Return ONLY valid JSON, no explanations or markdown. Ensure:
- All strings are properly quoted and escaped
- No trailing commas
- All braces and brackets are balanced
Format: {"key_takeaways": ["takeaway 1", "takeaway 2"]}
Prompt 3: Segments
Now identify 2-4 distinct topical segments from this part of the conversation.
For each segment, identify:
- Descriptive title (3-6 words)
- START timestamp when this topic begins (HH:MM:SS format)
- Double check that the timestamp is accurate - a timestamp will NEVER be greater than the total length of the audio
- Most important Key takeaway from that segment. Key takeaway must be specific and knowledge-additive.
- Brief summary of the discussion
IMPORTANT: The timestamp should mark when the topic/segment STARTS, not a range. Look for topic transitions and conversation shifts.
Return ONLY valid JSON. Ensure all strings are properly quoted, no trailing commas:
{
"segments": [
{
"segment_title": "Topic Discussion",
"timestamp": "01:15:30",
"key_takeaway": "main point from this segment",
"segment_summary": "brief description of what was discussed"
}
]
}
Timestamp format: HH:MM:SS (e.g., 00:05:30, 01:22:45) marking the START of each segment.
Now scan the transcript content I provided for ACTUAL mentions of specific media titles:
Find explicit mentions of:
- Books (with specific titles)
- Movies (with specific titles)
- TV Shows (with specific titles)
- Music/Songs (with specific titles)
DO NOT include:
- Websites, URLs, or web services
- Other podcasts or podcast names
IMPORTANT:
- Only include items explicitly mentioned by name. Do not invent titles.
- Valid categories are: "Book", "Movie", "TV Show", "Music"
- Include the exact phrase where each item was mentioned
- Find the nearest proximate timestamp where it appears in the conversation
- THE TIMESTAMP OF THE MEDIA MENTION IS IMPORTANT - DO NOT INVENT TIMESTAMPS AND DO NOT MISATTRIBUTE TIMESTAMPS
- Double check that the timestamp is accurate - a timestamp will NEVER be greater than the total length of the audio
- Timestamps are given as ranges, e.g. 01:13:42.520 --> 01:13:46.720. Use the EARLIER of the 2 timestamps in the range.
Return ONLY valid JSON. Ensure all strings are properly quoted and escaped, no trailing commas:
{
"media_mentions": [
{
"title": "Exact Title as Mentioned",
"category": "Book",
"author_artist": "N/A",
"context": "Brief context of why it was mentioned",
"context_phrase": "The exact sentence or phrase where it was mentioned",
"timestamp": "estimated time like 01:15:30"
}
]
}
If no media is mentioned, return: {"media_mentions": []}
Full Transcript
[00:00:00.480 --> 00:00:04.800] Hey, it's Arvid, and this is the Bootstrap Founder.
[00:00:08.960 --> 00:00:11.520] This week, I want to be as pragmatic as possible.
[00:00:11.520 --> 00:00:26.320] Let's talk about not just AI this and AI that, as I so often do, but the actual applications of generative AI that I personally leverage within PodScan to get my customers to see the value of the product and that as quickly as possible.
[00:00:26.320 --> 00:00:35.680] You know, it took me a pretty long time to actually realize this, but using AI in a SaaS, that's not just something like a chatbot for my customers or something like that.
[00:00:35.680 --> 00:00:42.960] AI can work behind the scenes quite effectively, and that facilitates getting the right stuff in front of the right people.
[00:00:42.960 --> 00:00:49.600] And even just to figure out who people are and how I should talk to them is something that AI can help me with.
[00:00:49.600 --> 00:01:00.640] Today, I want to share exactly what I'm doing, how expensive that is to run at my scale, and how I believe that this can be part of every single software as a service business out there.
[00:01:00.640 --> 00:01:15.760] Even if you don't have any touch point with AI, with the concept of artificial intelligence in that business for those customers at all, even if you don't think you should be offering AI features to your customers, there's a good use, and I'm going to share with you how I'm using it behind the scenes.
[00:01:15.760 --> 00:01:23.920] And speaking of scaling smarter, not just faster, that's exactly what today's sponsor, Paddle, is all about.
[00:01:23.920 --> 00:01:31.200] They're running this exclusive five-part online live series designed for SaaS and digital product leaders who want to scale with precision.
[00:01:31.200 --> 00:01:42.240] It goes like from pricing strategies to exit planning, and each of these sessions is packed with a lot of experts, a lot of insight, a lot of actual data that they pull from their own database, improving strategies.
[00:01:42.240 --> 00:01:57.360] So, if you're serious about really putting more effort into growing a SaaS, check out Paddle.com's event series because, just like the AI strategies that I'm about to share with you today, it's all about working smarter to create these magical moments for your customers.
[00:01:57.360 --> 00:02:00.000] So, go to paddle.com to check this out.
[00:02:02.920 --> 00:02:10.840] Let me start with the internal use of generative AI that has been absolutely transformative for me and PodScan.
[00:02:10.840 --> 00:02:20.440] Six hours after a customer signs up for that trial, exactly six hours after, something magical happens, or I coded it to actually happen, but they don't see it.
[00:02:20.440 --> 00:02:23.480] It happens completely behind the scenes.
[00:02:23.480 --> 00:02:29.400] I let them explore the platform first in those first six hours, but at that six hour mark, I start scoring them.
[00:02:29.400 --> 00:02:36.840] Or rather, I have an LLM that is fed with a lot of information, scored that customer for me on a scale from 0 to 10.
[00:02:36.840 --> 00:02:46.040] So I take everything that I know about this person up until that point and have the AI come up with a number following a rather complicated prompt on how to score them.
[00:02:46.040 --> 00:02:54.840] So here's what I collect for this: I collect the domain of their email address, their full name, the name of the team that they have created.
[00:02:54.840 --> 00:03:01.320] I track what and how often people search and the general themes of the topics that they search for.
[00:03:01.320 --> 00:03:05.160] I collect all the kinds of activities they could possibly have on the page.
[00:03:05.160 --> 00:03:17.160] If they navigate to the dashboard, if they navigate to a search window, if they go to the API docs and read certain things, or just even which podcasts and which categories they've checked out along the way.
[00:03:17.160 --> 00:03:28.520] I take all of these activities that I track very meticulously in my own database that at this point has over 500,000 activities in it from all my customers' active usage of the platform.
[00:03:28.520 --> 00:03:32.200] And I condense all of this into one single prompt.
[00:03:32.200 --> 00:03:36.760] At this point, it's GPT-4.0 still that I sent this to.
[00:03:36.760 --> 00:03:45.000] And I ask it to come up with a JSON object, a data object, that contains a score, a number, and an explanation for why that score was given.
[00:03:45.440 --> 00:03:54.080] I set up the prompt so that it gives me the explanation first and then gives me the score, which is one of the most effective ways of dealing with LLMs.
[00:03:54.080 --> 00:03:58.960] I set up the prompt so that it gives the explanation first because that's a good idea.
[00:03:58.960 --> 00:04:02.400] These systems generate one token after the other.
[00:04:02.400 --> 00:04:15.520] So to always give them a good reason to reason first, now you're going to pull that up front and then later give their summaries, any scoring, any reduction of granularity of information.
[00:04:15.520 --> 00:04:23.040] That's why the explanation should come before the point, the numbers, so they can actually argue their case towards the number.
[00:04:23.040 --> 00:04:37.520] Because if you don't, if you get the score first and then ask it to explain it, it takes the number that it already predicted and then tries to justify that number, which always means that the explanation is not necessarily true.
[00:04:37.520 --> 00:04:44.480] It's just an attempt by the LLM to gaslight you into believing that the number it gave, the score it gave, is actually correct.
[00:04:44.480 --> 00:04:46.080] And that number may be anything.
[00:04:46.080 --> 00:04:55.280] So it's just better kind of inductive method, if you think about it in mathematical dimensions, to get to the result that you want.
[00:04:55.280 --> 00:04:56.880] So I let it score this stuff.
[00:04:56.880 --> 00:05:11.920] And if these people have a score of five and above, then the LLM shouldn't just answer with the score, but also with a full data object containing a lot of information about not just the customer, the potential customer, but how I should interact with them in the future, what I should be doing.
[00:05:11.920 --> 00:05:14.080] And this prompt is incredibly specific.
[00:05:14.080 --> 00:05:21.120] I instruct it along the lines of we're trying to talk mostly to these kinds of customers, PR agencies, marketing customers, and founders.
[00:05:21.120 --> 00:05:25.200] I define who these people are in the prompt and what they like, what they need.
[00:05:25.200 --> 00:05:28.320] And then I tell the LLM to score them higher if they're a founder.
[00:05:28.320 --> 00:05:31.400] And if they're a founder, figure out what projects they're currently working on.
[00:05:31.720 --> 00:05:42.120] So we can talk to them about these projects, suggest particular things on how to use the platform with their specific problems and targets in mind, and do web searches, figure stuff out, scrape it.
[00:05:42.120 --> 00:05:54.120] And if it's a business domain that the email comes from, not just Gmail or Yahoo or whatever people might be using, I instruct the prompt to look up that website, actually look it up, and figure out what kind of business they are.
[00:05:54.120 --> 00:06:06.200] I very intensely, I may clearly instruct the AI to actually check who is signing up, if they know this email address, if they know the name, and can give me more social feeds maybe to reach these people and look for their work.
[00:06:06.200 --> 00:06:08.040] That doesn't always work.
[00:06:08.040 --> 00:06:15.320] Currently, the GPT models still hallucinate a lot of social media profilings, but occasionally it does work.
[00:06:15.320 --> 00:06:22.120] And if you were to do a tool call to an actual API that does this more reliably, that's a way to integrate more reliable data.
[00:06:22.120 --> 00:06:25.960] But for me, it's really just about how should I reach out to these people?
[00:06:25.960 --> 00:06:27.480] Like, what can I talk to them about?
[00:06:27.480 --> 00:06:29.400] What can I suggest to them?
[00:06:29.400 --> 00:06:40.920] I have the message with the score and the explanation and stuff sent to my Slack if it's a high-scoring prospect, so that I or anybody in my sales team can start reaching out manually to those people.
[00:06:40.920 --> 00:06:51.640] And I persist the score to the database, to the user model, the reason for the score and the additional data, so that I can use it later as kind of a CRM approach to this particular customer, right?
[00:06:51.640 --> 00:06:54.600] I know who they are, where they come from, what projects they might be into.
[00:06:54.600 --> 00:06:57.720] So whenever I communicate with them, I can pull up this data.
[00:06:57.720 --> 00:07:04.920] And in my administrative interface, I have a view that lists every single prospect over the last 10 days, which is the length of my trial.
[00:07:04.920 --> 00:07:11.160] And it shows their score, what activities and how many of them they've taken up until now, their email, all of that.
[00:07:11.160 --> 00:07:20.960] And then at any point where I feel like reaching out to them, which is usually around three to six days after they start their trial, I have a generate email button, and that button takes all the activities into account.
[00:07:20.960 --> 00:07:34.160] It takes their score, their previous data extraction, who they are, what we know about them, what they've said about themselves in a profile, which teams they are on, what alerts they have created, what searches they've done, all the usage data.
[00:07:34.160 --> 00:07:46.000] It takes that all into account and creates a follow-up email that looks at all the things they've done and then tries to find the next best thing that they should be doing on PodScan to see the value of the platform.
[00:07:46.000 --> 00:07:48.480] That is my value-nurturing approach.
[00:07:48.480 --> 00:07:59.840] I've instructed that particular email-generating prompt to come up with the best possible keywords for an alert or the best search terms for a search and suggest that all in an email.
[00:07:59.840 --> 00:08:01.440] The email introduces me as the founder.
[00:08:01.440 --> 00:08:05.840] It says, Hey, I'm Arvid, I'm the founder and CEO of PodScan, and this is the next step you should take.
[00:08:05.840 --> 00:08:06.560] Any questions?
[00:08:06.560 --> 00:08:07.920] Respond to this email.
[00:08:07.920 --> 00:08:11.200] And then I send that manually by taking content, putting it into my email window.
[00:08:11.200 --> 00:08:16.560] I have a little button that automatically pre-fills my hey.com interface with this, and I just send it out.
[00:08:16.560 --> 00:08:28.560] And this has been extremely helpful in getting people who have done little on the platform to actually set up their first alert and getting people who've already seen a lot of value by searching to persist that search into an alert.
[00:08:28.560 --> 00:08:32.320] Because that's my main goal with PodScan, to get people to set up an alert.
[00:08:32.320 --> 00:08:38.160] Because an alert that generates this kind of value memory, like value-nurturing notifications, right?
[00:08:38.160 --> 00:08:43.440] If somebody puts in their brand and they get one or two mentions of a podcast every single day, well, that's great.
[00:08:43.440 --> 00:08:46.800] They get an email from me every single day with something interesting.
[00:08:46.800 --> 00:08:47.600] That is valuable.
[00:08:47.600 --> 00:08:52.960] That is a value-nurturing that just happens through the internal mechanisms of the platform itself.
[00:08:52.960 --> 00:08:54.000] So that's what I want.
[00:08:54.000 --> 00:09:01.160] And for that, I need people to actually sign up or after sign-up, do an alert, like set up this alert that really works for them.
[00:09:01.160 --> 00:09:03.320] And this has been running for a couple months at this point.
[00:08:59.680 --> 00:09:04.920] It's been extremely helpful.
[00:09:05.160 --> 00:09:11.880] It has led to a lot of interesting conversations and conversations even with people who were just checking it out, not doing much.
[00:09:11.880 --> 00:09:15.640] But a day or two later, I reached out to them with a suggestion and they set up the alert.
[00:09:15.640 --> 00:09:18.280] And now all of a sudden, they're extremely happy with the product.
[00:09:18.280 --> 00:09:21.800] They see why they should be using it and they are paying for it.
[00:09:21.800 --> 00:09:24.200] It's a great conversion tool.
[00:09:24.200 --> 00:09:25.720] So there's a second step too.
[00:09:25.720 --> 00:09:27.640] And that actually comes much earlier.
[00:09:27.640 --> 00:09:33.560] And it's one that I only recently implemented because I realized that I've been missing out on this all along.
[00:09:33.560 --> 00:09:39.640] And that is during the first onboarding wizard that I have that new signups get to see on PodScan.
[00:09:39.640 --> 00:09:44.360] Now, I had onboarding for over a year and the business is a year and a half old.
[00:09:44.360 --> 00:09:46.760] So it came to the product pretty early.
[00:09:46.760 --> 00:09:52.600] Whenever you come to the dashboard for the first time, a full screen overlay comes up that says, Hey, what do you want to do?
[00:09:52.600 --> 00:09:53.960] Do you want to look for mentions?
[00:09:53.960 --> 00:09:54.760] Do you want to search?
[00:09:54.840 --> 00:09:56.360] Do you just want to check out the product?
[00:09:56.360 --> 00:09:58.760] Click any of these options and it takes you to that section.
[00:09:58.760 --> 00:10:02.600] It's kind of an orientation tool, but more here are the options that you have.
[00:10:02.600 --> 00:10:05.240] And I'm going to just show you to the right gate, right?
[00:10:05.240 --> 00:10:09.080] I'm going to show you where you need to go to get this started right away.
[00:10:09.080 --> 00:10:19.000] And if you clicked on monitoring, which is my alerting system, and that was the default up until a couple of days ago, you would then be able to set up a couple of filters or keywords that you might find interesting.
[00:10:19.000 --> 00:10:23.640] You would have to put them in yourself into a little list and click create my first alert.
[00:10:23.640 --> 00:10:29.960] But it still involved a lot of manual work from people who just came to this product like a couple minutes ago.
[00:10:29.960 --> 00:10:32.440] And I think they want to see what it can do for them.
[00:10:32.440 --> 00:10:42.840] And initially, I took the name of the person, because that's one of the first things that they actually gave me in terms of data, and put that in into the alert, but that's not enough.
[00:10:42.840 --> 00:10:58.640] So instead of having people come up with and I guess write their own keywords into the platform, I implemented an automated background process that would fetch or generate the right keywords for this person while they are still going through the onboarding process.
[00:10:58.640 --> 00:11:03.520] Because during registration, I have this step where I ask people to self-classify.
[00:11:03.520 --> 00:11:10.640] They can classify as a founder, as somebody who owns a podcast, as somebody who's a data analyst, PR marketing, that kind of stuff.
[00:11:10.640 --> 00:11:13.680] Or alternatively, I just want to check it out and look at transcripts.
[00:11:13.680 --> 00:11:21.280] And between these four groups, the one that I'm most interested in are obviously the data analyst and the founder, because these are my ICPs.
[00:11:21.280 --> 00:11:25.920] But the others might also just be on their first step into becoming a customer.
[00:11:25.920 --> 00:11:32.960] So I have this text field additionally during registration where they can say what their project is, why they come to PodScan.
[00:11:32.960 --> 00:11:35.600] Some people tell me, I'm looking for this particular transcript.
[00:11:35.600 --> 00:11:37.120] And then I know they're just, they have to check it out.
[00:11:37.120 --> 00:11:37.680] That's fine.
[00:11:37.680 --> 00:11:38.480] We'll see.
[00:11:38.480 --> 00:11:41.040] But some people say, I work for a PR agency.
[00:11:41.040 --> 00:11:43.280] We're trying to track what people say about our clients.
[00:11:43.280 --> 00:11:45.600] And this is who I want to use this product.
[00:11:45.600 --> 00:11:51.360] People put that in there, maybe in 20% of all cases, but the people who do help a lot with the system.
[00:11:51.360 --> 00:11:52.560] So I can see what's going on.
[00:11:52.560 --> 00:12:01.760] So I take this information, I take the domain name again from the email that they signed up with and their full name, and I throw this into a very fast model.
[00:12:01.760 --> 00:12:05.520] I think this is the GPT-40 mini at this point.
[00:12:05.520 --> 00:12:12.720] No, actually, it's the GPT-5 mini that I kind of got to have low reasoning and low verbosity to make it fast.
[00:12:12.720 --> 00:12:23.280] And I task it to create three good keywords or keyword groups that might be good first alert keywords that will be potentially useful to this particular person.
[00:12:23.280 --> 00:12:26.560] And this selection is incredibly effective.
[00:12:26.560 --> 00:12:34.600] If you sign up from a Google domain and you're saying you're just looking for a transcript, it just suggests an alert that might have to do with the podcast you might be interested in.
[00:12:34.600 --> 00:12:49.800] But if you come from an educational domain, you work at a university and you say you're a data analyst, well, then it suggests some keywords, maybe even some key personnel names from a university that you might track, like the dean or whoever is currently working in the department you're in, it pulls that out of the domain.
[00:12:49.800 --> 00:13:03.160] And most importantly, if you come from a marketing company that the AI is aware of, an agency that has certain clients, the AI will quickly check their website, figure out who their biggest clients are, and then suggest tracking those clients immediately.
[00:13:03.160 --> 00:13:04.920] And that is the magic moment.
[00:13:04.920 --> 00:13:17.080] That is what the LLM can do with tool calling and with scraping websites, can actually fetch meaningful information that helps onboard the customer to their specific use case that they might not even know of just yet.
[00:13:17.080 --> 00:13:27.960] They might have an inclination of what it might be, but the AI can pull this out of the context, all of that from just a self-classifier, maybe a project description, and their email domain.
[00:13:27.960 --> 00:13:29.400] I think this is very powerful.
[00:13:29.400 --> 00:13:35.720] And this is one of the things that you could implement for your SaaS right now and not have any other AI feature in the product.
[00:13:35.720 --> 00:13:46.120] Just something that helps people see this spark, this moment of enjoyment, this value moment much earlier because they see, oh, you actually care to do research about them.
[00:13:46.120 --> 00:13:52.840] And your product can be customized to this very specific use case that they have or might not even be aware of.
[00:13:52.920 --> 00:13:54.600] It's really, really powerful.
[00:13:54.600 --> 00:14:04.360] Obviously, now that they are coming into the product with such AI-assisted strategy, I built AI into the product as a generator as well.
[00:14:04.360 --> 00:14:08.520] The third way that I use generative AI is in the dashboard itself.
[00:14:08.520 --> 00:14:12.680] Some people just don't do or don't like onboarding.
[00:14:12.680 --> 00:14:14.120] They say, no, I'm going to do this myself.
[00:14:14.120 --> 00:14:17.840] They click skip and then they go and try to figure out the product.
[00:14:18.000 --> 00:14:27.280] But I still wanted that magic of having the system create an alert for you to be part of the regular experience of PodScan, to be a repeatable thing that people can do whenever they need it.
[00:14:27.280 --> 00:14:34.160] It's not just for their first alert that they get to create via AI, it should also be every other one that they do in the future.
[00:14:34.160 --> 00:14:36.720] So I implemented an alert builder.
[00:14:36.720 --> 00:14:39.360] You can just kind of free form, write about whatever you want.
[00:14:39.360 --> 00:14:44.080] You can say, hey, I want an alert that tracks keywords from this particular industry that I'm working in.
[00:14:44.080 --> 00:14:45.280] I'm working for this company.
[00:14:45.280 --> 00:14:46.320] I'm doing this kind of work.
[00:14:46.320 --> 00:14:49.440] I'm using this to create a newsletter or whatever.
[00:14:49.440 --> 00:15:07.040] And then the AI, tasked with a lot of information about the platform, and it has a lot of information about the platform in the context window, creates the best possible keywords and other specific things like context-aware questions for that particular filter for that particular alert.
[00:15:07.360 --> 00:15:23.040] And that is really, really fun, particularly with the context-aware question filter is extremely powerful because that feature is an AI-assisted check that runs on every single transcript that gets certain keywords mentioned, just to make sure that the context is exactly what the user wants, right?
[00:15:23.040 --> 00:15:30.400] Because if John Smith is looking for a mention, then there's a lot of John Smith that that guy might not be interested in being mentioned around.
[00:15:30.400 --> 00:15:41.280] So having a specific keyword filter that then checks what the context of the keyword is, where you could just phrase it, like, is this episode talking about John Smith, the professional basketball player?
[00:15:41.600 --> 00:15:44.640] That will get a lot of false positives out of the way.
[00:15:44.640 --> 00:15:49.120] And that filter, just like the keywords, can be pre-suggested by AI.
[00:15:49.120 --> 00:15:55.760] And I've trained this system through prompting to give really good, solid results that work really well with PodScan's data.
[00:15:55.760 --> 00:16:01.560] The alert builder is available in the dashboard for everybody, like paid accounts, trial accounts, anybody can use it.
[00:16:01.560 --> 00:16:16.920] And the list builder, which is a similar feature, it builds lists of podcasts, has a couple of AI-assisted creation features to it, where you could just say, I want all podcasts that talk about Star Trek the next generation, like rewatch podcasts, humor comedy podcasts that have a sci-fi angle.
[00:16:16.920 --> 00:16:22.520] You just put that in, you click on create list, and the list gets automatically generated.
[00:16:22.520 --> 00:16:28.520] It pulls all of these podcasts from our own API that you can also use, but it uses its own API.
[00:16:28.520 --> 00:16:30.920] It searches for keywords like Star Trek podcast.
[00:16:30.920 --> 00:16:31.720] It pulls those in.
[00:16:31.720 --> 00:16:37.400] It looks for similar podcasts because we have a podcast similarity feature that I built over the last year as well.
[00:16:37.400 --> 00:16:38.840] And it just explores the whole system.
[00:16:38.840 --> 00:16:42.760] It scores all these shows and then returns the list of the best 50 or so items.
[00:16:42.760 --> 00:16:50.840] That's all powered by an AI system that gets the best search terms and finds the best scoring criteria for that process from the text that you wrote.
[00:16:50.840 --> 00:16:51.080] Right?
[00:16:51.080 --> 00:17:00.120] If you say, I want to watch Star Trek the next generation shows or whatever, then it pulls out obviously Star Trek as a search term, but also 90s sci-fi, right?
[00:17:00.120 --> 00:17:02.120] All of that is really, really useful.
[00:17:02.120 --> 00:17:10.520] Things that I could never build just through like a lookup system or just by having keywords automatically figured out that doesn't work.
[00:17:10.520 --> 00:17:13.720] It needs an AI component and it's really, really useful.
[00:17:13.720 --> 00:17:20.360] So for me, generative AI used in a software as a service product doesn't mean having a chatbot in there.
[00:17:20.360 --> 00:17:20.920] I don't have one.
[00:17:20.920 --> 00:17:21.880] I don't need that.
[00:17:21.880 --> 00:17:26.440] It doesn't mean any kind of like overly agentic behavior for the user.
[00:17:26.440 --> 00:17:32.040] For me, using generative AI in PodScan is just the way I communicate with the user.
[00:17:32.040 --> 00:17:40.760] And it's about making it magical for them to feel understood, for them to see that PodScan gets why they are using it and what they're using it for.
[00:17:40.760 --> 00:17:44.880] It understands that for their business, they might be looking for this kind of data.
[00:17:44.880 --> 00:17:46.560] So I'm going to suggest that to them, right?
[00:17:44.680 --> 00:17:49.440] I'm make them see, oh, this is what I should look for.
[00:17:49.680 --> 00:18:04.880] And in a moment like this, where they already get a really good suggestion, they also understand how they should be doing this better in the future, how they should be searching for things more effectively, how they should be phrasing things to make PodScan be used more optimally, more effectively as well.
[00:18:04.880 --> 00:18:14.400] And to be able to do this, for the prompts that I write, to have an understanding of the software, because I can't just say, help people come up with alert keywords.
[00:18:14.400 --> 00:18:15.760] But yeah, but how do they work?
[00:18:15.760 --> 00:18:17.280] Like, how does the system interpret them?
[00:18:17.280 --> 00:18:21.040] Like, ChatGPT doesn't know how PodScan does this internally, right?
[00:18:21.040 --> 00:18:32.960] So I did a very painful walkthrough of the system where I manually recorded myself like audio just explaining every single feature on the platform into transcription software.
[00:18:32.960 --> 00:18:37.920] I did this for, I think it was 45 minutes of just me talking about every single thing.
[00:18:37.920 --> 00:18:44.160] I clicked, I opened Windows, I described them, I said how it works, how it works in the back end, and all of that.
[00:18:44.160 --> 00:18:54.000] And then I had that transcription, took that into Claude, had that boiled down to a markdown document describing very effectively the structure and function of the platform.
[00:18:54.000 --> 00:18:56.960] And that markdown document is a couple thousand lines long.
[00:18:56.960 --> 00:19:04.720] It was condensed then later into a more promptable version that is part of all these helpful prompts, all these steps.
[00:19:04.720 --> 00:19:12.000] These prompts are not small, they're quite huge because they have a lot of context about what PodScan is and how PodScan can best be used.
[00:19:12.320 --> 00:19:18.160] Particularly the prompt that tells people what the next best step is, the one that I manually trigger via email generation.
[00:19:18.160 --> 00:19:21.200] Well, that needs to know what all the potential steps are, right?
[00:19:21.280 --> 00:19:22.800] Can't just make up steps.
[00:19:22.800 --> 00:19:28.240] I'm very firm on this in the prompt to not make up things that I don't offer in the software.
[00:19:28.240 --> 00:19:29.520] It did that in the beginning.
[00:19:29.520 --> 00:19:31.640] And then people asked me, well, where is that feature?
[00:19:31.640 --> 00:19:32.840] And I kind of had to build it.
[00:19:32.840 --> 00:19:37.320] So make sure it only suggests the things that it knows are possible.
[00:19:37.320 --> 00:19:42.120] I had to go through the whole platform, had to describe everything from start to finish.
[00:19:42.120 --> 00:19:43.800] And then that became part of a prompt.
[00:19:43.800 --> 00:19:46.840] It's also super helpful for agentic coding.
[00:19:46.840 --> 00:19:49.480] If the platform is aware of itself, right?
[00:19:49.480 --> 00:19:58.120] If the platform is kind of self-describing, then an AI agent can read that document and figure out where things might semantically fit.
[00:19:58.120 --> 00:20:06.840] Not just where certain kinds of models and controllers live in the code base, but where in the interface do people expect to be able to use certain data?
[00:20:06.840 --> 00:20:08.600] It's quite useful.
[00:20:08.600 --> 00:20:18.280] And looking into this, to whatever kind of software business you might be running, I think is a very, very useful and quite cheap thing for you to do.
[00:20:18.280 --> 00:20:21.800] And I don't just mean like going through your whole product and explaining it.
[00:20:21.800 --> 00:20:31.560] I think that's very useful for all kinds of AI-assisted work, but particularly this kind of scoring that I'm doing, you only ever have to do this once, right?
[00:20:31.560 --> 00:20:34.360] My scoring happens once per customer.
[00:20:34.360 --> 00:20:37.960] It's only one request that comes back with a score and a little bit of text.
[00:20:38.040 --> 00:20:46.680] Doesn't really have too much context other than the name, customer, the email, maybe a couple things they've searched for, whatever, maybe a couple locations they've been in the product, but it's not big.
[00:20:46.760 --> 00:20:50.360] Costs me less than a couple of cents, maybe not even a cent in fees.
[00:20:50.360 --> 00:20:53.640] And the same goes for the email generated to the customer a couple days later.
[00:20:53.720 --> 00:20:59.320] Might also be a cent or two, just it costs to pull all the information and the prompts in there.
[00:20:59.320 --> 00:21:04.200] So that might be a little bit more expensive, but I think it's still like it's very manageable per prospect.
[00:21:04.200 --> 00:21:07.720] And I do it only for certain prospects that score fairly high to begin with.
[00:21:07.720 --> 00:21:11.400] So, on a daily basis, I spent maybe 20 cents on this kind of stuff.
[00:21:11.400 --> 00:21:12.760] Really negligible.
[00:21:12.760 --> 00:21:19.840] And at the same time, you got to be careful with AI in the product because it can be very expensive if people start using it a lot.
[00:21:20.160 --> 00:21:27.840] So, I'm very, very cautious with anything that allows a customer to trigger an AI API request, like the alert builder or the list builder.
[00:21:27.840 --> 00:21:31.360] Those are heavily rate-limited within PodScan's infrastructure.
[00:21:31.360 --> 00:21:39.120] If I see anybody using AI requests of any kind, they all go through the same middleware here more than a couple times an hour.
[00:21:39.120 --> 00:21:46.000] Then, I manually can decide if I should block this customer forever or deactivate the account if they're starting to abuse it.
[00:21:46.000 --> 00:21:57.680] Got to be really careful with this because, with backend processes that trigger AI stuff, it's always good to make sure that you don't overuse AI API calls just for your own sanity and the finances of your business.
[00:21:57.680 --> 00:21:59.440] It can be very expensive.
[00:21:59.440 --> 00:22:03.120] I would track them all and see if they go over a certain threshold per hour.
[00:22:03.120 --> 00:22:15.440] And you should always get an alarm right to your phone, right to your email, if that even goes over by a couple, because you need to know immediately once you exceed certain usage patterns when new patterns occur.
[00:22:15.440 --> 00:22:21.280] Because if people start abusing your system, you can be like out of thousands of dollars within a couple of hours.
[00:22:21.280 --> 00:22:23.040] Really got to be careful here.
[00:22:23.040 --> 00:22:25.200] Yeah, let's kind of summarize it here.
[00:22:25.200 --> 00:22:37.040] I think tracking who your users are, understanding what their needs are, and suggesting initial configuration for the dashboards or for their use of the software that can work in every single niche, can work in every single industry.
[00:22:37.040 --> 00:22:48.800] And I see quite a few cutting-edge founders building this into every single product that they create, every SaaS, because it's always valuable to meet the customer immediately where they're at.
[00:22:49.120 --> 00:22:52.880] And this is easiest to do if you investigate where they're coming from.
[00:22:52.880 --> 00:22:55.120] What information do you have about this person?
[00:22:55.120 --> 00:22:59.280] How can you make it easiest for them to see where the value is?
[00:22:59.280 --> 00:23:10.440] So, inside your product, you can make AI a kind of transmission system between what the customer knows they want and how you need it presented to your database, to your algorithms, your backend systems.
[00:23:10.440 --> 00:23:22.120] AI can communicate between these two because you can put in the effort to completely and correctly describe your system and then task AI to translate between user requirements and your platform requirements.
[00:23:22.120 --> 00:23:25.640] That's kind of what I do with the automatic alert generation, right?
[00:23:25.640 --> 00:23:29.480] I know what a good alert looks like, I know what really functional keywords are.
[00:23:29.480 --> 00:23:31.640] My customer, they only know kind of what they want.
[00:23:31.640 --> 00:23:34.680] They only know, oh, I want to listen to all these podcasts or those.
[00:23:34.680 --> 00:23:36.760] They don't know exactly what words to use.
[00:23:36.760 --> 00:23:38.440] And AI can translate that.
[00:23:38.440 --> 00:23:41.960] So I get the keywords and they get to describe what they want.
[00:23:41.960 --> 00:23:43.560] And that is a magical moment.
[00:23:43.560 --> 00:23:45.720] It's the AI doing work for the user.
[00:23:45.720 --> 00:23:47.320] It's almost agentic, right?
[00:23:47.320 --> 00:23:50.360] The AI becomes the agent that works for your business.
[00:23:50.360 --> 00:23:51.800] It's a different kind of agenda.
[00:23:51.800 --> 00:23:55.400] You have this little transmission process that an AI does for you.
[00:23:55.400 --> 00:24:00.680] This is one way that I think every business can benefit from AI, and I highly recommend you implement it right now.
[00:24:00.680 --> 00:24:03.400] This might even be a business idea to think about it.
[00:24:03.400 --> 00:24:06.840] Like, it feels like this could be something that everybody can integrate.
[00:24:06.840 --> 00:24:14.040] So, you know, take a look at it from that perspective if you're looking for an idea, figuring out who your customer is and suggesting data.
[00:24:14.040 --> 00:24:18.360] My goal is not to wow people with AI features inside of my product.
[00:24:18.360 --> 00:24:23.000] The goal is to use AI to help people reach their own wow moments with my product.
[00:24:23.000 --> 00:24:30.520] When you do that invisibly, kind of behind the scenes, you create these magical moments where everything just works exactly as they had hoped.
[00:24:30.520 --> 00:24:36.600] That's when AI becomes extremely powerful and affordable for Bootstrap founders.
[00:24:36.600 --> 00:24:37.800] And that's it for today.
[00:24:37.800 --> 00:24:39.880] Thank you so much for listening to the Bootstrap Founder.
[00:24:39.880 --> 00:24:43.240] You can find me on Twitter at ArvidKahl, A-RV-I-D-K-A-H-L.
[00:24:43.480 --> 00:24:55.520] If you want to know what everybody's saying about your brand on over 3.8 million podcasts, podscan.fm tracks mentions in near real time with a powerful API that turns podcast conversations into actionable data.
[00:24:55.520 --> 00:25:08.880] And if you're hunting for your next business idea, you can get them delivered fresh from the podcast world at ideas.podscan.fm, where AI extracts the best startup opportunities from hundreds of hours of expert conversations every day.
[00:25:08.880 --> 00:25:10.160] Thanks so much for listening.
[00:25:10.160 --> 00:25:13.040] Have a wonderful day and bye-bye.