Test & secure your LLM apps

Open-source LLM testing used by 40,000+ developers

Automated red teaming for gen AI

Run custom scans that detect security, legal, and brand risk:

npx promptfoo@latest redteam init
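
Once the config exists, a typical flow generates adversarial test cases, executes them against your target, and opens a report. A minimal sketch; these subcommand names assume a recent promptfoo release:

# Generate attack payloads and run them against the target
npx promptfoo@latest redteam run

# Browse the findings in the local report viewer
npx promptfoo@latest redteam report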

Our probes adapt dynamically to your app and uncover common failures, each mapped to a scan plugin in the sketch after this list:

  • PII leaks
  • Insecure tool use
  • Cross-session data leaks
  • Direct and indirect prompt injections
  • Jailbreaks
  • Harmful content
  • Specialized medical and legal advice
  • and much more
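
The failure modes above correspond to plugins and strategies selected in the generated config. A hedged sketch of what that selection might look like (the plugin and strategy identifiers are illustrative assumptions; check the promptfoo docs for the current catalog):

redteam:
  purpose: 'Budget travel agent'
  plugins:
    - pii                  # personally identifiable information leaks
    - cross-session-leak   # data bleeding across user sessions
    - harmful              # harmful content generation
  strategies:
    - prompt-injection     # direct and indirect injections
    - jailbreak            # guardrail bypass attempts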

» Scan for vulnerabilities in your LLM app

Trusted by developers at Shopify, Discord, Anthropic, Microsoft, DoorDash, and Carvana.

Comprehensive security coverage

Custom probes for your application that identify failures you actually care about, not just generic jailbreaks and prompt injections.

Learn More

Built for developers

Move quickly with a command-line interface, live reloads, and caching. No SDKs, cloud dependencies, or logins.

Get Started

Battle-tested, 100% open-source

Used by teams serving millions of users and supported by an active open-source community.

View on GitHub

Easy abstractions for complex LLM testing

Simple declarative config

# Test cases are generated to specifically target the system's use case
purpose: 'Budget travel agent'

# Define where payloads are sent
targets:
  - id: 'https://example.com/generate'
    config:
      method: 'POST'
      headers:
        'Content-Type': 'application/json'
      body:
        userInput: '{{prompt}}'
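
With a config like this saved as promptfooconfig.yaml (promptfoo's default config filename), a scan is typically a single command, e.g. npx promptfoo@latest redteam run, and the findings surface in the report below.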

Detailed, actionable results

Make your LLM app reliable & secure