Testing AI’s “Lethal Trifecta” with Promptfoo
Learn what the lethal trifecta is and how to use promptfoo red teaming to detect prompt injection and data exfiltration risks in AI agents.

Latest Posts

Top 10 Open Datasets for LLM Safety, Toxicity & Bias Evaluation
A comprehensive guide to the most important open-source datasets for evaluating LLM safety, including toxicity detection, bias measurement, and truthfulness benchmarks.

Autonomy and agency in AI: We should secure LLMs with the same fervor spent realizing AGI
Exploring the critical need to secure LLMs with the same urgency and resources dedicated to achieving AGI, focusing on autonomy and agency in AI systems.

Prompt Injection vs Jailbreaking: What's the Difference?
Learn the critical difference between prompt injection and jailbreaking attacks, with real CVEs, production defenses, and test configurations.

AI Safety vs AI Security in LLM Applications: What Teams Must Know
How AI safety and AI security differ in LLM applications, and why teams need both.

Top Open Source AI Red-Teaming and Fuzzing Tools in 2025
Compare the top open source AI red teaming tools in 2025.

Promptfoo Raises $18.4M Series A to Build the Definitive AI Security Stack
We raised $18.4M from Insight Partners with participation from Andreessen Horowitz.

Evaluating political bias in LLMs
How right-leaning is Grok? We've released a new testing methodology alongside a dataset of 2,500 political questions.

Join Promptfoo at Hacker Summer Camp 2025
Join Promptfoo at AI Summit, Black Hat, and DEF CON for demos, workshops, and discussions on LLM security and red teaming.

AI Red Teaming for complete first-timers
A comprehensive guide to AI red teaming for beginners, covering the basics, culture building, and operational feedback loops.