# OpenLLM
To use OpenLLM with promptfoo, we take advantage of OpenLLM's support for OpenAI-compatible endpoints.
- Start the server using the `openllm start` command.
- Set environment variables:
  - Set `OPENAI_BASE_URL` to `http://localhost:8001/v1`
  - Set `OPENAI_API_KEY` to a dummy value, `foo`.
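For example, in a POSIX shell the two variables can be set like this (the port matches the default shown above, and `foo` is only a placeholder, since OpenLLM does not validate the key):

```shell
# Point promptfoo's OpenAI provider at the local OpenLLM server
export OPENAI_BASE_URL=http://localhost:8001/v1
# OpenLLM ignores the key, but the OpenAI client requires one to be set
export OPENAI_API_KEY=foo
```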
- Depending on your use case, use the `chat` or `completion` model types.

  Chat format example: To run a Llama2 eval using chat-formatted prompts, first start the model:

  ```sh
  openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf
  ```

  Then set the promptfoo configuration:

  ```yaml
  providers:
    - openai:chat:llama2
  ```

  Completion format example: To run a Flan eval using completion-formatted prompts, first start the model:

  ```sh
  openllm start flan-t5 --model-id google/flan-t5-large
  ```

  Then set the promptfoo configuration:

  ```yaml
  providers:
    - openai:completion:flan-t5
  ```
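Putting the pieces together, a minimal `promptfooconfig.yaml` for the chat-formatted Llama2 setup might look like the sketch below; the prompt, test variable, and assertion are illustrative placeholders, not part of the OpenLLM setup:

```yaml
prompts:
  - 'Summarize the following in one sentence: {{text}}'

providers:
  - openai:chat:llama2

tests:
  - vars:
      text: 'OpenLLM exposes an OpenAI-compatible endpoint that promptfoo can query directly.'
    assert:
      - type: contains
        value: OpenLLM
```

Running `promptfoo eval` with this file sends each test case through the local OpenLLM server via the OpenAI-compatible endpoint configured above.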
- See the OpenAI provider documentation for more details.