
Ai Prompt Engineering In 2025 What Works And What Doesn T Sander Schulhoff Learn Prompting Hackaprompt
June 19, 2025
Key Takeaways
- Prompt engineering remains crucial for effectively interacting with and leveraging Large Language Models (LLMs), despite claims of its obsolescence.
- Techniques like few-shot prompting, decomposition, self-criticism, and providing additional context significantly improve LLM performance.
- Role prompting, while popular, has diminishing returns for accuracy-based tasks but can still be useful for expressive tasks.
- Prompt injection and AI red teaming are critical for identifying and mitigating security vulnerabilities in AI systems, especially as AI agents become more autonomous.
- While prompt injection is not fully solvable, it is mitigatable through advanced techniques and ongoing research by AI labs, not external guardrails.
Segments
Modes of Prompt Engineering (~00:08:00)
- Key Takeaway: Prompt engineering can be divided into conversational (interactive chatbot use) and product-focused (building AI applications) modes, with the latter requiring more rigorous techniques.
- Summary: Schulhoff distinguishes between conversational prompt engineering, which involves iterative refinement in a chat, and product-focused prompt engineering, where prompts are optimized for specific applications and used at scale.
Effective Prompting Techniques: Basics (~00:13:00)
- Key Takeaway: Few-shot prompting, which involves providing examples of desired output, is a highly effective technique for improving LLM performance.
- Summary: The discussion covers essential prompting techniques, starting with few-shot prompting, where providing examples significantly boosts accuracy. Other techniques like decomposition, self-criticism, and adding additional information (context) are also highlighted as valuable.
Debunking Ineffective Prompting Techniques (~00:22:00)
- Key Takeaway: Role prompting, while seemingly intuitive, offers little to no benefit for accuracy-based tasks and its effectiveness for expressive tasks is limited.
- Summary: Schulhoff debunks the effectiveness of role prompting for accuracy-based tasks, explaining that studies show minimal statistical significance. He notes it can be useful for expressive tasks like writing but is generally not a reliable technique for improving factual outputs.
Advanced Prompting Techniques: Ensembling (~00:45:00)
- Key Takeaway: Ensembling techniques, such as Mixture of Reasoning Experts, involve using multiple prompts or models to solve a problem and aggregating the results for better accuracy.
- Summary: The conversation moves to more advanced techniques like ensembling, where multiple prompts or models are used to tackle a problem, and the most common answer is taken as the final output. This method, including variations like Mixture of Reasoning Experts, can improve overall performance.
Understanding Prompt Injection and Red Teaming (~00:55:00)
- Key Takeaway: Prompt injection involves tricking AIs into performing harmful actions, and red teaming is the process of discovering these vulnerabilities.
- Summary: Schulhoff defines prompt injection as getting AIs to do or say bad things, often through creative prompting that bypasses safety measures. Red teaming is the practice of actively seeking out these vulnerabilities, with his ‘Hack a Prompt’ competition being a prime example.
Prompt Injection Techniques That Work (~01:08:00)
- Key Takeaway: Techniques like narrative framing (e.g., ‘story for my grandma’), typos, and obfuscation (e.g., Base64 encoding) can still be effective in prompt injection attacks.
- Summary: The discussion highlights that certain prompt injection techniques, such as using narrative framing, introducing typos, and employing obfuscation methods like Base64 encoding, remain effective in bypassing AI safety protocols.
Defenses Against Prompt Injection (~01:15:00)
- Key Takeaway: Simple prompt-based defenses and external AI guardrails are largely ineffective against prompt injection; solutions require fundamental model improvements.
- Summary: Schulhoff explains that common defenses like adding ‘do not follow malicious instructions’ to prompts or using external AI guardrails are ineffective. He argues that true solutions lie in safety tuning and fine-tuning the models themselves, ideally by the AI providers.
The Misalignment Problem (~01:28:00)
- Key Takeaway: AI misalignment, where AI agents pursue goals that lead to harmful outcomes, is a realistic concern, distinct from prompt injection, and requires careful consideration.
- Summary: Schulhoff discusses the growing concern of AI misalignment, where AI agents, without malicious prompting, might take actions with unintended harmful consequences. He uses examples like an AI prioritizing a task over ethical considerations to illustrate this emerging problem.
The Future of AI and Regulation (~01:35:00)
- Key Takeaway: While AI development should not be stopped, responsible regulation and continued research into AI safety and security are essential.
- Summary: Schulhoff expresses a belief in the immense potential benefits of AI, particularly in healthcare, and advocates for responsible regulation rather than halting development. He emphasizes that AI’s progress is inevitable and requires a focus on mitigating risks through ongoing research and security measures.
Lightning Round: Books and Media (~01:42:00)
- Key Takeaway: The River of Doubt, Black Mirror, and Evil are highlighted as influential works, with Black Mirror serving as a form of ‘red teaming for tech’.
- Summary: In the lightning round, Schulhoff recommends ‘The River of Doubt’ for its exploration of mental fortitude, praises ‘Black Mirror’ for its prescient portrayal of technological risks, and mentions ‘Evil’ for its examination of faith and science.
Lightning Round: Life Motto and Hat (~01:47:00)
- Key Takeaway: Persistence and embracing the ‘strenuous life’ are key life philosophies, complemented by a practical hat for foraging adventures.
- Summary: Schulhoff shares his life motto of persistence and the ‘strenuous life,’ drawing parallels to Theodore Roosevelt’s adventures. He also explains the practical use of his signature hat for foraging, highlighting its protective qualities.
Where to Find Sander Schulhoff (~01:50:00)
- Key Takeaway: Resources for learning about prompt engineering and AI red teaming are available through LearnPrompting.org, Maven.com, and Hackaprompt.com.
- Summary: Schulhoff directs listeners to LearnPrompting.org and Maven.com for his AI Red Teaming course and Hackaprompt.com for competition details. He also invites researchers for collaboration, emphasizing the importance of data in advancing AI security.