Test & secure your LLM apps
Open-source LLM testing used by 40,000+ developers
Automated red teaming for gen AI
Run custom scans that detect security, legal, and brand risk:
npx promptfoo@latest redteam init
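After `init` scaffolds a config, a scan is typically generated, executed, and reviewed with the other `redteam` subcommands. A sketch of that workflow (subcommand behavior may vary by version):

```sh
# Scaffold a promptfooconfig.yaml describing your app and targets
npx promptfoo@latest redteam init

# Generate adversarial test cases and run them against your targets
npx promptfoo@latest redteam run

# Open the vulnerability report for the latest scan
npx promptfoo@latest redteam report
```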
Our probes adapt dynamically to your app and uncover common failures like:
- PII leaks
- Insecure tool use
- Cross-session data leaks
- Direct and indirect prompt injections
- Jailbreaks
- Harmful content
- Specialized medical and legal advice
- and much more

Comprehensive security coverage
Custom probes for your application that identify failures you actually care about, not just generic jailbreaks and prompt injections.
Built for developers
Move quickly with a command-line interface, live reloads, and caching. No SDKs, cloud dependencies, or logins.
Battle-tested, 100% open-source
Used by teams serving millions of users and supported by an active open-source community.
Easy abstractions for complex LLM testing
Simple declarative config
# Test cases are generated to specifically target the system's use case
purpose: 'Budget travel agent'

# Define where payloads are sent
targets:
  - id: 'https://example.com/generate'
    config:
      method: 'POST'
      headers:
        'Content-Type': 'application/json'
      body:
        userInput: '{{prompt}}'
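The same config can carry a `redteam` section that focuses the scan on specific failure classes. A minimal sketch; the plugin and strategy names below are illustrative assumptions drawn from the failure list above, not an exhaustive or verified set:

```yaml
# Illustrative only: check the promptfoo docs for the current plugin/strategy names
redteam:
  purpose: 'Budget travel agent'
  plugins:
    - pii                # PII leaks
    - harmful            # harmful content
  strategies:
    - jailbreak          # single-turn jailbreak attempts
    - prompt-injection   # direct and indirect prompt injections
```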
Detailed, actionable results