Prompt Injection vs Jailbreaking: What's the Difference?
Learn the critical difference between prompt injection and jailbreaking attacks, with real CVEs, production defenses, and test configurations.

Latest Posts

AI Safety vs AI Security in LLM Applications: What Teams Must Know
Learn the difference between AI safety and AI security in LLM applications, and what teams need to know about each.

Top 5 Open Source AI Red Teaming Tools in 2025
Compare the top open source AI red teaming tools in 2025.

Promptfoo Raises $18.4M Series A to Build the Definitive AI Security Stack
We raised $18.4M from Insight Partners with participation from Andreessen Horowitz.

Evaluating political bias in LLMs
How right-leaning is Grok? We've released a new testing methodology alongside a dataset of 2,500 political questions.

Join Promptfoo at Hacker Summer Camp 2025
Join Promptfoo at AI Summit, Black Hat, and DEF CON for demos, workshops, and discussions on LLM security and red teaming.

AI Red Teaming for complete first-timers
A comprehensive guide to AI red teaming for beginners, covering the basics, culture building, and operational feedback loops.

System Cards Go Hard
Master LLM system cards for responsible AI deployment with transparency, safety documentation, and compliance requirements.

The Promptfoo MCP Proxy: Enterprise MCP Security
Learn about the security risks introduced by MCP servers and how to mitigate them using the Promptfoo MCP Proxy, an enterprise solution for MCP security.

OWASP Top 10 LLM Security Risks (2025) – 5-Minute TLDR
Master OWASP Top 10 LLM security vulnerabilities with practical mitigation strategies in this comprehensive 2025 guide.