Prompts, tests, and outputs

Configure how promptfoo evaluates your LLM applications.

Quick Start

promptfooconfig.yaml
# Define your prompts
prompts:
  - 'Translate to {{language}}: {{text}}'

# Configure test cases
tests:
  - vars:
      language: French
      text: Hello world
    assert:
      - type: contains
        value: Bonjour

# Run the evaluation
# promptfoo eval

Core Concepts

๐Ÿ“ Promptsโ€‹

Define what you send to your LLMs - from simple strings to complex conversations.

Common patterns

Text prompts

prompts:
  - 'Summarize this: {{content}}'
  - file://prompts/customer_service.txt
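
A .txt prompt file is just plain text with Nunjucks placeholders. The customer_service.txt referenced above might look like this (the wording and variable names are illustrative, not from the source):

You are a friendly customer service agent for {{company}}.
Answer the customer's question politely and concisely:

{{query}}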

Chat conversations

prompts:
  - file://prompts/chat.json
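
A chat prompt file holds a JSON array of role/content messages, and promptfoo fills in the Nunjucks variables per test case. A minimal sketch of what prompts/chat.json could contain (the message text and the {{question}} variable are illustrative):

[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "{{question}}"}
]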

Dynamic prompts

prompts:
  - file://generate_prompt.js
  - file://create_prompt.py
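
A JavaScript prompt file exports a function that receives the test context and returns the prompt, which lets you branch on variables. A minimal sketch of generate_prompt.js, assuming the standard promptfoo prompt-function signature (the length threshold and message wording are illustrative):

// generate_prompt.js
module.exports = async function ({ vars, provider }) {
  // Illustrative branch: pick a system message based on input length
  const system =
    vars.text.length < 80
      ? 'Translate concisely.'
      : 'Translate carefully, preserving tone and formatting.';
  return [
    { role: 'system', content: system },
    { role: 'user', content: `Translate to ${vars.language}: ${vars.text}` },
  ];
};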

Learn more about prompts →

🧪 Test Cases

Configure evaluation scenarios with variables and assertions.

Common patterns

Inline tests

tests:
  - vars:
      question: "What's 2+2?"
    assert:
      - type: equals
        value: '4'

CSV test data

tests: file://test_cases.csv
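
Each CSV column maps to a test variable, and a special __expected column can carry one assertion per row. test_cases.csv might look like this (the rows are illustrative):

language,text,__expected
French,Hello world,contains: Bonjour
Spanish,Good morning,contains: Buenos días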

Dynamic generation

tests: file://generate_tests.js
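
A test-generation file exports a function that returns an array of test cases, which is useful for building a matrix of scenarios programmatically. A minimal sketch of generate_tests.js (the language/word pairs are illustrative):

// generate_tests.js
module.exports = async function () {
  const expectations = [
    { language: 'French', word: 'Bonjour' },
    { language: 'German', word: 'Hallo' },
  ];
  // One test case per target language
  return expectations.map(({ language, word }) => ({
    vars: { language, text: 'Hello world' },
    assert: [{ type: 'contains', value: word }],
  }));
};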

Learn more about test cases →

📊 Output Formats

Save and analyze your evaluation results.

Available formats
# Visual report
promptfoo eval --output results.html

# Data analysis
promptfoo eval --output results.json

# Spreadsheet
promptfoo eval --output results.csv

Learn more about outputs →

Complete Example

Here's a real-world example that combines multiple features:

promptfooconfig.yaml
# yaml-language-server: $schema=https://promptfoo.dev/config-schema.json
description: Customer service chatbot evaluation

prompts:
  # Simple text prompt
  - 'You are a helpful customer service agent. {{query}}'

  # Chat conversation format
  - file://prompts/chat_conversation.json

  # Dynamic prompt with logic
  - file://prompts/generate_prompt.js

providers:
  - openai:gpt-4.1-mini
  - anthropic:claude-3-haiku

tests:
  # Inline test cases
  - vars:
      query: 'I need to return a product'
    assert:
      - type: contains
        value: 'return policy'
      - type: llm-rubric
        value: 'Response is helpful and professional'

  # Load more tests from CSV
  - file://test_scenarios.csv

# Save results
outputPath: evaluations/customer_service_results.html

Quick Reference

Supported File Formats

Format  | Prompts | Tests | Use Case
------- | ------- | ----- | ------------------------------------
.txt    | ✅      | ❌    | Simple text prompts
.json   | ✅      | ✅    | Chat conversations, structured data
.yaml   | ✅      | ✅    | Complex configurations
.csv    | ✅      | ✅    | Bulk data, multiple variants
.js/.ts | ✅      | ✅    | Dynamic generation with logic
.py     | ✅      | ✅    | Python-based generation
.md     | ✅      | ❌    | Markdown-formatted prompts
.j2     | ✅      | ❌    | Jinja2 templates

Variable Syntax

Variables use Nunjucks templating:

# Basic substitution
prompt: "Hello {{name}}"

# Filters
prompt: "URGENT: {{message | upper}}"

# Conditionals
prompt: "{% if premium %}Premium support: {% endif %}{{query}}"

File References

All file paths are relative to the config file:

# Single file
prompts:
  - file://prompts/main.txt

# Multiple files with glob
tests:
  - file://tests/*.yaml

# Specific function
prompts:
  - file://generate.js:createPrompt
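
The :createPrompt suffix selects a named export from the file, so a single module can define several prompt variants. A sketch of a matching export in generate.js (the function body is illustrative):

// generate.js
module.exports.createPrompt = async function ({ vars }) {
  return `Summarize this for a general audience: ${vars.content}`;
};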

Next Steps