Getting paid to vibe code: Inside the new AI-era job | Lazar Jovanovic (Professional Vibe Coder)
Key Takeaways
- Non-technical backgrounds can be an advantage in the AI era because they foster a 'positively delusional' mindset, unconstrained by traditional technical limitations, leading to novel solutions.
- Elite vibe coders spend 80% of their time on planning, clarity, judgment, and taste rather than execution, recognizing that AI amplifies existing skill and that 'good enough' output is now ubiquitous.
- To combat AI's limited context window and ensure quality, users must proactively manage context by creating dynamic documentation (PRDs, tasks.md, rules.md) and running parallel prototypes to rapidly achieve clarity before committing to a build path.
- Elite engineering skills, particularly in maintenance, scaling infrastructure, and fixing complex issues, will remain scarce and highly necessary to support the massive influx of AI-powered builders.
- The future of development involves the convergence of Product, Engineering, and Design roles, where 'vibe coding' (relying on AI for raw output based on good judgment) becomes the norm, making skills like judgment and taste more valuable than manual coding.
- Lazar Jovanovic employs a '4x4 debugging workflow' (Tool Fix, Awareness Layer/Console Logs, External Facilitator like Codex, and Self-Correction/Prompt Refinement) to systematically resolve issues encountered during AI-assisted development.
Segments
Lazar’s Vibe Coder Role
(00:05:37)
- Key Takeaway: Professional Vibe Coders build both internal and external production-quality products across all departments.
- Summary: Lazar’s day-to-day involves using AI tools to push projects to production, ranging from marketing templates to complex internal tools with integrations. He serves in an ‘ideas role,’ bringing concepts to life quickly with quality and security. The role covers a wide surface area, including shipping public-facing items like integration templates and demo stores.
Advantage of Non-Technical Background
(00:09:51)
- Key Takeaway: Lack of coding background fosters positive delusion, enabling builders to attempt and achieve things technical experts deem impossible.
- Summary: Lazar, having no coding background, is not constrained by what ‘shouldn’t be possible’ with AI tools, leading to successful builds like Chrome extensions or desktop apps. This unbiased approach requires a positive delusion that everything is possible until proven otherwise. This mindset is crucial for excelling in AI-driven development.
Optimizing Time: Planning Over Coding
(01:12:36)
- Key Takeaway: Successful AI utilization requires spending 80% of time on planning and chatting to ensure clarity, not on execution.
- Summary: The core problem AI solves is not coding, but achieving clarity in the ask; therefore, most time should be spent in planning and chat mode. Until AGI arrives, the human must steer the ship by knowing the instructions, treating AI tools as technical co-founders and educators. Success is measured by reading the agent’s output, not the code syntax.
Genie Model: Context and Specificity
(01:14:57)
- Key Takeaway: AI limitations stem from finite token windows and a lack of human-level contextual understanding, demanding extreme specificity in prompts.
- Summary: The ‘Aladdin and the Genie’ model illustrates that AI cannot infer meaning (‘you know what I mean’), leading to dysfunctional outcomes if requests are vague. Users must optimize for the human-level limitation by providing references and context, as they cannot control the machine-level token memory window. Optimizing specificity is the user’s 100% controllable factor for quality output.
Parallel Prototyping for Clarity
(01:22:22)
- Key Takeaway: Running four or five parallel prototypes using different input methods rapidly clarifies the idea and prevents building ‘AI slop.’
- Summary: To achieve clarity, start multiple projects simultaneously: one via brain dump dictation, one via typed prompts, one referencing visual designs (Mobbin/Dribbble), and one referencing existing code snippets. This process forces refinement, saves credits long-term by avoiding rework, and helps the builder develop the necessary taste and judgment.
Dynamic Context Management via Documentation
(01:30:15)
- Key Takeaway: To maintain context across long builds and multiple projects, externalize project knowledge into structured Markdown files for the agent to reference perpetually.
- Summary: Since LLM memory fades, users must provide perpetual context by creating structured documents like a Master Plan, Implementation Plan, Design Guidelines, and tasks.md. These Markdown files serve as the source of truth, allowing the agent to focus its scarce token allocation on execution rather than re-reading the entire conversation history. This delegation enables building five projects simultaneously without losing productivity.
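As a sketch, the document set described above might sit in a project repository like this (the episode names the documents, not the exact file names, so everything besides tasks.md is illustrative):

```
project/
├── master-plan.md         # what the product is, for whom, and why
├── implementation-plan.md # phased build steps the agent follows
├── design-guidelines.md   # typography, color, spacing, tone
└── tasks.md               # current task list the agent checks off
```

Each file is plain Markdown the agent can re-read at the start of any session, so scarce context tokens go to execution rather than to replaying chat history.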
Skill Shift: Judgment Over Output
(01:36:31)
- Key Takeaway: The future rewards better judgment, taste, and design skills, as AI democratizes ‘good enough’ output, making world-class quality the new baseline.
- Summary: In the AI era, the gap between ‘good enough’ and ‘world-class’ is shrinking because everyone can produce mediocre results quickly. Therefore, skills requiring better decision-making—like design, taste, and clarity—are becoming the most valuable differentiators. Elite engineering remains necessary for maintenance and scaling complex systems, but design skills will likely be the next major value driver after PM clarity.
Debugging Workflow Explained
(01:05:37)
- Key Takeaway: The 4x4 debugging framework involves four distinct attempts: relying on the tool’s self-fix, adding manual awareness via console logs, using an external facilitator like Codex, and finally, reverting to the last known good state due to user error.
- Summary: When stuck, the first step is using the tool’s ‘try to fix’ feature; if that fails, the user must manually introduce an awareness layer by adding console logs to observe the environment the AI cannot see. If logs are insufficient, an external tool like Codex can analyze the code and logs for diagnosis without making direct changes. The final step acknowledges that the error is often the user’s fault (a bad prompt) and involves reverting version control to rethink the initial input.
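The ‘awareness layer’ step can be made concrete with a small sketch. The function and values below are hypothetical, not from the episode; the point is the pattern Lazar describes: logging inputs and intermediate state so the agent (and the builder) can see runtime behavior that is otherwise invisible to it.

```javascript
// Hypothetical helper from an AI-generated app. The console.log lines
// are the "awareness layer": they expose the inputs and intermediate
// values the agent cannot observe on its own.
function applyDiscount(cartTotal, discountCode) {
  console.log('[debug] applyDiscount inputs:', { cartTotal, discountCode });

  const discounts = { SAVE10: 0.10, SAVE20: 0.20 };
  // Unknown codes fall back to a zero discount.
  const rate = discounts[discountCode] ?? 0;
  console.log('[debug] resolved discount rate:', rate);

  const finalTotal = cartTotal * (1 - rate);
  console.log('[debug] final total:', finalTotal);
  return finalTotal;
}

applyDiscount(100, 'SAVE10');
```

Pasting the resulting log output back into the chat gives the agent the runtime facts it needs to diagnose the bug instead of guessing.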
Learning from Debugging Failures
(01:11:32)
- Key Takeaway: After resolving a complex bug using multiple steps, the user must immediately ask the agent how to prompt better next time, and then codify that learning into a rules file (e.g., rules.md) to prevent recurrence.
- Summary: The most crucial part of debugging is turning the failure into a learning opportunity by asking the agent for better prompting strategies to solve the issue in one go next time. To ensure this learning persists, the derived solution or rule should be explicitly added to the agent’s context files, effectively allowing the AI to prompt itself better in the future.
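To make the codification step concrete, an entry in rules.md might read like this (the rules themselves are invented examples, not taken from the episode):

```
## Debugging rules (learned from past failures)
- If a fix fails twice, stop and add console logs around the failing
  call before attempting another change.
- After resolving any multi-step bug, ask the agent how the original
  prompt should have been written, and record the answer here.
```

Because the agent reads this file as part of its context, each recorded lesson effectively lets the AI prompt itself better on future builds.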
Agent Output Over Code Observation
(01:15:41)
- Key Takeaway: The primary layer above code is the agent’s thinking process, observable through conversation, which is becoming the new focus for understanding system behavior.
- Summary: Learning how coding systems work is increasingly done by observing the agent’s output and conversation rather than scrutinizing the generated code itself. This conversational layer acts as the new interface, where English dialogue dictates execution, making judgment and understanding the agent’s thought process paramount.
AI Capabilities and User Delusion
(01:17:18)
- Key Takeaway: Attempting to force an AI capability before its API is released (like image generation in early ChatGPT) results in wasted effort, emphasizing the need to understand current tool limitations.
- Summary: A personal failure involved spending a week trying to brute-force an image generation feature before the API was available, highlighting that tools are only as capable as their current release allows. Authentic tools that can browse and reason will explicitly state when a request is not currently possible, making chat mode essential for planning and leveling up judgment.
Future Career Skills and Role Convergence
(01:21:26)
- Key Takeaway: Valuable future skills will center on uniquely human attributes like emotional intelligence, judgment, and high-quality design/copywriting, as deterministic tasks are rapidly commoditized by AI.
- Summary: Roles that involve deterministic tasks, like pure math or translation, are highly susceptible to AI replacement, whereas human-to-human interaction skills will become more valuable. Elite copywriting and great design taste are critical because users can quickly discern low-quality, AI-generated content, meaning writers and designers must train AI to amplify their unique human quality.
Path to Professional Vibe Coder
(01:28:32)
- Key Takeaway: The path to becoming a professional vibe coder involves building in public, sharing knowledge openly, and demonstrating capability by building tools using the target platform (like Lovable) before being hired.
- Summary: Lazar’s non-linear career path was validated by building in public and sharing knowledge, which led to his role at Lovable; candidates can emulate this by sending functional apps built with the company’s tool instead of traditional resumes. The core principle is to already be doing the job professionally before seeking employment, focusing on developing deep judgment and understanding how speed and quality translate in the AI era.
Final Thoughts on Tech Stack Irrelevance
(01:37:15)
- Key Takeaway: In the AI-driven building era, the underlying tech stack (HTML, React, backend choices) is irrelevant; optimization must shift entirely to producing ‘magic’ through quality, taste, and design.
- Summary: Since AI enables anyone to produce ‘good enough’ output, the differentiator is producing ‘magic,’ which requires dedicating more time to learning and exposure than to coding itself. Obsessing over technical decisions is obsolete; the focus must be on the end-user experience, taste, and design quality, as these inputs determine the quality of the AI’s fast output.