Generative AI Prompt Hacking and Its Impact on AI Security & Safety
Manage episode 440639205 series 3461851
Welcome to Season 3 of the MLSecOps Podcast, brought to you by Protect AI!
In this episode, MLSecOps Community Manager Charlie McCarthy speaks with Sander Schulhoff, co-founder and CEO of Learn Prompting. Sander discusses his background in AI research, focusing on the rise of prompt engineering and its critical role in generative AI. He also shares insights into prompt security, the creation of LearnPrompting.org, and its mission to democratize prompt engineering knowledge. The conversation further explores the intricacies of prompting techniques, "prompt hacking," and the impact of competitions like HackAPrompt on improving AI safety and security.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
43 episodes