LLM Providers

Providers in promptfoo are the interfaces to various language models and AI services. This guide will help you understand how to configure and use providers in your promptfoo evaluations.

Quick Start

Here's a basic example of configuring providers in your promptfoo YAML config:

providers:
- openai:gpt-4o-mini
- anthropic:messages:claude-3-5-sonnet-20241022
- vertex:gemini-pro
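
Once your API keys are set (see Configuring Providers below), run the evaluation with the promptfoo CLI:

npx promptfoo@latest eval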

Available Providers

| Provider | Description | Syntax & Example |
| --- | --- | --- |
| OpenAI | GPT models including GPT-4 and GPT-3.5 | openai:o1-preview |
| Anthropic | Claude models | anthropic:messages:claude-3-5-sonnet-20241022 |
| HTTP | Generic HTTP-based providers | https://api.example.com/v1/chat/completions |
| JavaScript | Custom: JavaScript file | file://path/to/custom_provider.js |
| Python | Custom: Python file | file://path/to/custom_provider.py |
| Shell Command | Custom: script-based providers | exec: python chain.py |
| AI21 Labs | Jurassic and Jamba models | ai21:jamba-1.5-mini |
| AWS Bedrock | AWS-hosted models from various providers | bedrock:us.meta.llama3-2-90b-instruct-v1:0 |
| Azure OpenAI | Azure-hosted OpenAI models | azureopenai:gpt-4o-custom-deployment-name |
| Cloudflare AI | Cloudflare's AI platform | cloudflare-ai:@cf/meta/llama-3-8b-instruct |
| Cohere | Cohere's language models | cohere:command |
| fal.ai | Image generation provider | fal:image:fal-ai/fast-sdxl |
| Google AI Studio (PaLM) | Gemini and PaLM models | google:gemini-pro |
| Google Vertex AI | Google Cloud's AI platform | vertex:gemini-pro |
| Groq | High-performance inference API | groq:llama3-70b-8192-tool-use-preview |
| Hugging Face | Access thousands of models | huggingface:text-generation:gpt2 |
| IBM BAM | IBM's foundation models | bam:chat:ibm/granite-13b-chat-v2 |
| LiteLLM | Unified interface for multiple providers | Compatible with OpenAI syntax |
| Mistral AI | Mistral's language models | mistral:open-mistral-nemo |
| OpenLLM | BentoML's model serving framework | Compatible with OpenAI syntax |
| OpenRouter | Unified API for multiple providers | openrouter:mistral/7b-instruct |
| Perplexity AI | Specialized in question-answering | Compatible with OpenAI syntax |
| Replicate | Various hosted models | replicate:stability-ai/sdxl |
| Together AI | Various hosted models | Compatible with OpenAI syntax |
| Voyage AI | Specialized embedding models | voyage:voyage-3 |
| vLLM | Local inference | Compatible with OpenAI syntax |
| Ollama | Local inference | ollama:llama3.2:latest |
| LocalAI | Local inference | localai:gpt4all-j |
| llama.cpp | Local inference | llama:7b |
| WebSocket | WebSocket-based providers | ws://example.com/ws |
| Echo | Custom: for testing purposes | echo |
| Manual Input | Custom: CLI manual entry | promptfoo:manual-input |
| Go | Custom: Go file | file://path/to/your/script.go |
| Web Browser | Custom: automate web browser interactions | browser |
| Text Generation WebUI | Gradio WebUI | Compatible with OpenAI syntax |
| WatsonX | IBM's WatsonX | watsonx:ibm/granite-13b-chat-v2 |
| X.AI | X.AI's models | xai:grok-2 |

Provider Syntax

Providers can be specified in any of the following ways:

  1. Simple string format:

    provider_name:model_name

    Example: openai:gpt-4o-mini or anthropic:claude-3-sonnet-20240229

  2. Object format with configuration:

    - id: provider_name:model_name
      config:
        option1: value1
        option2: value2

    Example:

    - id: openai:gpt-4o-mini
      config:
        temperature: 0.7
        max_tokens: 150
  3. File-based configuration:

    - file://path/to/provider_config.yaml
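
The referenced YAML file uses the same id/config shape as the inline object format. A minimal sketch of what such a provider_config.yaml might contain (the option values are illustrative, not required):

id: openai:gpt-4o-mini
config:
  temperature: 0.7
  max_tokens: 150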

Configuring Providers

Most providers use environment variables for authentication:

export OPENAI_API_KEY=your_api_key_here
export ANTHROPIC_API_KEY=your_api_key_here

You can also specify API keys in your configuration file:

providers:
  - id: openai:gpt-4o-mini
    config:
      apiKey: your_api_key_here

Custom Integrations

promptfoo supports several types of custom integrations:

  1. File-based providers:

    providers:
    - file://path/to/provider_config.yaml
  2. JavaScript providers:

    providers:
    - file://path/to/custom_provider.js
  3. Python providers (see the provider file sketch after this list):

    providers:
    - id: file://path/to/custom_provider.py
  4. HTTP/HTTPS API:

    providers:
      - id: https://api.example.com/v1/chat/completions
        config:
          headers:
            Authorization: 'Bearer your_api_key'
  5. WebSocket:

    providers:
      - id: ws://example.com/ws
        config:
          messageTemplate: '{"prompt": "{{prompt}}"}'
  6. Custom scripts:

    providers:
    - 'exec: python chain.py'
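
For JavaScript and Python providers, the referenced file exposes a call function that promptfoo invokes once per prompt. Here is a minimal sketch of a Python provider; it assumes promptfoo's documented call_api(prompt, options, context) entry point and its dict return value with an output key, and the body is a placeholder rather than a real model integration:

# custom_provider.py - minimal promptfoo Python provider sketch
def call_api(prompt, options, context):
    # options carries the provider's YAML config; context carries test vars.
    config = options.get('config', {})
    temperature = config.get('temperature', 0.0)

    # Placeholder logic: swap in a real model or API call here.
    result = f'[temperature={temperature}] echo: {prompt}'

    # promptfoo reads the "output" key; "error" and "tokenUsage" are
    # also recognized in the returned dict.
    return {'output': result}

Point promptfoo at the file as shown in item 3 above, and it will load and call the function during the eval.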

Common Configuration Options

Many providers support these common configuration options:

  • temperature: Controls randomness (typically 0.0 to 1.0; some providers accept up to 2.0)
  • max_tokens: Maximum number of tokens to generate
  • top_p: Nucleus sampling parameter
  • frequency_penalty: Penalizes tokens in proportion to how often they have already appeared
  • presence_penalty: Penalizes tokens that have already appeared at all, encouraging the model to introduce new topics
  • stop: Sequences where the API will stop generating further tokens

Example:

providers:
  - id: openai:gpt-4o-mini
    config:
      temperature: 0.7
      max_tokens: 150
      top_p: 0.9
      frequency_penalty: 0.5
      presence_penalty: 0.5
      stop: ["\n", 'Human:', 'AI:']