📄️ Guide
The YAML configuration format runs each prompt through a series of example inputs (aka "test cases") and checks whether the outputs meet requirements (aka "assertions").
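For a sense of how those pieces fit together, here is a minimal sketch; the prompt text, provider ID, and assertion value are illustrative assumptions rather than required values:

```yaml
# Minimal promptfooconfig.yaml sketch: one prompt, one test case, one assertion.
prompts:
  - "Summarize this in one sentence: {{text}}"   # {{text}} is filled from each test case's vars
providers:
  - openai:gpt-4o-mini                           # illustrative provider ID
tests:
  - vars:
      text: "promptfoo is a tool for testing and evaluating LLM prompts."
    assert:
      - type: icontains                          # case-insensitive substring check
        value: promptfoo
```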
📄️ Reference
Here is the main structure of the promptfoo configuration file:
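A representative sketch of that structure is below; the values are placeholders, and the exact set of supported keys may differ by version:

```yaml
description: "Translation quality eval"   # optional human-readable label for the run
prompts:                                  # prompt templates (inline strings or file paths)
  - "Translate to {{language}}: {{input}}"
providers:                                # models or APIs to evaluate against
  - openai:gpt-4o-mini                    # illustrative provider ID
defaultTest:                              # vars/assertions applied to every test case
  assert:
    - type: javascript
      value: output.length > 0            # assumes the JS assertion expression receives `output`
tests:                                    # individual test cases
  - vars:
      language: French
      input: "Hello, world"
outputPath: results.csv                   # optional: write results to a file
```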
📄️ Prompts, tests, and outputs
How to configure prompts, test cases, and outputs.
📄️ Chat threads
The prompt file supports messages in OpenAI's JSON chat format. This allows you to set multiple messages, including the system prompt. For example:
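A sketch of such a prompt file (the roles follow OpenAI's chat message schema; the message content and the {{destination}} variable are illustrative):

```json
[
  {
    "role": "system",
    "content": "You are a helpful travel assistant."
  },
  {
    "role": "user",
    "content": "What should I pack for a trip to {{destination}}?"
  }
]
```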
📄️ Dataset generation
Your dataset is the heart of your LLM eval. To the extent possible, it should closely represent the real inputs to your LLM app.
📄️ Scenarios
The scenarios configuration lets you group a set of data (variables) together with the tests that should be run on that data.
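As a sketch, a scenario pairs one or more entries of variables under `config` with the `tests` to run against each of them; the variable names, values, and the use of `{{greeting}}` inside the assertion are assumptions for illustration:

```yaml
scenarios:
  - config:                         # each entry here is one set of variables
      - vars:
          language: Spanish
          greeting: Hola
      - vars:
          language: French
          greeting: Bonjour
    tests:                          # every test runs once per config entry above
      - vars:
          phrase: "Good morning"
        assert:
          - type: icontains
            value: "{{greeting}}"   # assumes variables can be referenced in assertion values
```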
📄️ Caching
promptfoo caches the results of API calls to LLM providers, which saves time and reduces cost on repeated evals.
📄️ Telemetry
promptfoo collects basic anonymous telemetry by default. This telemetry helps us decide how to spend time on development.