The YAML configuration format runs each prompt through a series of example inputs (known as "test cases") and checks whether the outputs meet requirements (known as "assertions").
Here is the main structure of the promptfoo configuration file:
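As a minimal sketch, a `promptfooconfig.yaml` combines `prompts`, `providers`, and `tests` (the provider id, variable names, and values below are illustrative, not required):

```yaml
# promptfooconfig.yaml (sketch)
prompts:
  - 'Translate the following to {{language}}: {{input}}'
providers:
  - openai:gpt-4o-mini # assumption: any supported provider id works here
tests:
  # Each test case supplies variables and optional assertions
  - vars:
      language: French
      input: Hello, world
    assert:
      - type: contains
        value: Bonjour
```

Each entry under `tests` is a test case; each entry under `assert` is an assertion checked against the model's output for that case.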
📄️ Input and output files
🗃️ Test assertions
📄️ LLM chains
Prompt chaining is a common pattern for performing more complex reasoning with LLMs. It's used by libraries like LangChain, and OpenAI offers built-in support via function calling.
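One way to evaluate a chain is to wrap the multi-step logic in a script and point promptfoo at it as a provider, so the whole chain is scored on its final output. This is a hedged sketch: the `exec:` provider syntax and the `chain.py` script name are assumptions for illustration.

```yaml
# promptfooconfig.yaml (sketch)
# The chain itself lives in chain.py; promptfoo treats the entire
# chain as a single provider and asserts on its final output.
providers:
  - 'exec: python chain.py'
tests:
  - vars:
      question: What is the capital of France?
    assert:
      - type: contains
        value: Paris
```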
The scenarios configuration lets you group a set of data along with a set of tests that should be run on that data.
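For instance, a scenario can pair a block of shared variables (`config`) with the tests that run against each combination. A sketch, with illustrative variable names and values:

```yaml
# Scenarios group shared data with the tests that exercise it (sketch)
scenarios:
  - config:
      # Each entry here is one data configuration to test against
      - vars:
          language: French
          expected_greeting: Bonjour
    tests:
      - vars:
          input: Hello
        assert:
          - type: icontains
            value: '{{expected_greeting}}'
```

Each `config` entry is combined with each test, so adding another language block reruns the same tests against the new data.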
promptfoo caches the results of API calls to LLM providers. This saves time and cost when re-running evaluations.
promptfoo collects basic anonymous telemetry by default. This telemetry helps us prioritize development work.