📄️ Guide
The YAML configuration format runs each prompt through a series of example inputs (aka "test cases") and checks whether the outputs meet requirements (aka "asserts").
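As a rough illustration, a single test case pairs input variables with one or more asserts. The exact keys follow promptfoo's config schema; this is a minimal sketch, and the prompt text, provider ID, and values are placeholders:

```yaml
prompts:
  - 'Translate this into French: {{text}}'
providers:
  - openai:gpt-4o-mini
tests:
  - vars:
      text: Hello, world
    assert:
      - type: contains
        value: Bonjour
```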
📄️ Reference
Here is the main structure of the promptfoo configuration file:
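Sketched at the top level, the file combines prompts, providers, and tests (field names are drawn from promptfoo's documented schema; treat this as an outline rather than an exhaustive reference):

```yaml
description: Optional description of this eval
prompts:        # one or more prompt strings or file paths
  - 'Summarize: {{document}}'
providers:      # LLM providers to test against
  - openai:gpt-4o-mini
defaultTest:    # defaults applied to every test case (optional)
  assert:
    - type: latency
      threshold: 5000
tests:          # test cases with variables and asserts
  - vars:
      document: 'Some example input'
```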
📄️ Input and output files
Prompts
🗃️ Test assertions
📄️ LLM chains
Prompt chaining is a common pattern used to perform more complex reasoning with LLMs. It's used by libraries like LangChain, and OpenAI offers built-in support via function calling.
📄️ Scenarios
The scenarios configuration lets you group a set of data along with a set of tests that should be run on that data.
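Hedging on the exact schema, a scenario bundles shared variable sets (`config`) with the tests to run over each of them; the language pairs and assert values below are illustrative placeholders:

```yaml
scenarios:
  - config:
      - vars:
          language: French
          expected: Bonjour
      - vars:
          language: Spanish
          expected: Hola
    tests:
      - vars:
          input: Hello
        assert:
          - type: contains
            value: '{{expected}}'
```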
📄️ Caching
promptfoo caches the results of API calls to LLM providers, which saves both time and cost on repeated evals.
📄️ Telemetry
promptfoo collects basic anonymous telemetry by default. This telemetry helps us decide how to spend time on development.