📄️ OpenAI
To use the OpenAI API, set the OPENAI_API_KEY environment variable.
📄️ Anthropic
The anthropic provider supports Anthropic's Claude family of models.
📄️ Azure OpenAI
The azureopenai provider is an interface to OpenAI through Azure. It behaves the same as the OpenAI provider.
📄️ Llama.cpp
The llama provider is compatible with the HTTP server bundled with llama.cpp.
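As a quick sanity check that the server is reachable, you can POST to it directly. This sketch targets the `/completion` endpoint of llama.cpp's bundled example server on its default port (8080); field names may differ across llama.cpp versions:

```typescript
// Minimal sketch: query a local llama.cpp server.
// The /completion endpoint and field names follow llama.cpp's example
// server and may change between versions.
const res = await fetch('http://localhost:8080/completion', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ prompt: 'Write a haiku about testing.', n_predict: 64 }),
});
const data = await res.json();
console.log(data.content); // generated text
```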
📄️ Ollama
The ollama provider is compatible with Ollama, which lets you run open-source LLMs locally.
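For reference, Ollama exposes a simple HTTP API on port 11434. A minimal sketch, assuming the `llama2` model has already been pulled with `ollama pull llama2`:

```typescript
// Minimal sketch: call a local Ollama server (default port 11434).
const res = await fetch('http://localhost:11434/api/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'llama2', prompt: 'Why is the sky blue?', stream: false }),
});
const data = await res.json();
console.log(data.response); // generated text
```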
📄️ Google Vertex
The vertex provider is compatible with Google's Vertex AI platform, which provides access to models such as bison.
📄️ Google PaLM
The palm provider is compatible with Google's PaLM platform, which provides access to models such as text-bison-001.
📄️ Generic webhook
The webhook provider is useful for triggering more complex flows or running prompt chains end-to-end in your app.
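As a sketch of what such a webhook might look like, the handler below assumes promptfoo POSTs a JSON body containing the prompt and reads an output field from the JSON response; the payload shape and port are assumptions here, so check the webhook provider docs for the exact schema:

```typescript
import http from 'node:http';

// Hypothetical webhook target: receives { prompt } and replies { output }.
http
  .createServer((req, res) => {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      const { prompt } = JSON.parse(body);
      const output = `You said: ${prompt}`; // run your chain or flow here
      res.setHeader('Content-Type', 'application/json');
      res.end(JSON.stringify({ output }));
    });
  })
  .listen(8085); // port chosen arbitrarily for this example
```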
📄️ Custom API Provider
To create a custom API provider, implement the ApiProvider interface in a separate module. Here is the interface:
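(Sketched from recent promptfoo versions; check the promptfoo source for the authoritative definition.)

```typescript
export interface ApiProvider {
  // Unique identifier for this provider, e.g. 'openai:gpt-3.5-turbo'.
  id: () => string;
  // Called with the rendered prompt; resolves to the model output or an error.
  callApi: (prompt: string) => Promise<ProviderResponse>;
}

export interface ProviderResponse {
  output?: string;
  error?: string;
  tokenUsage?: { total?: number; prompt?: number; completion?: number };
}
```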
📄️ Custom scripts
You may use any shell command as an API provider. This is particularly useful when you want to use a language or framework that is not directly supported by promptfoo.
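A minimal script provider might look like the TypeScript sketch below. It assumes the common exec-provider convention of receiving the prompt as the final command-line argument and writing the completion to stdout; confirm the exact calling convention in the promptfoo docs:

```typescript
// echo-provider.ts — a hypothetical script provider.
// Assumption: the prompt arrives as the last CLI argument and stdout is
// treated as the model output; verify this against the promptfoo docs.
const prompt = process.argv[process.argv.length - 1];

// Replace this with a call into any model, chain, or framework you like.
const output = `ECHO: ${prompt}`;

process.stdout.write(output);
```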
📄️ HuggingFace
promptfoo includes support for the HuggingFace Inference API, specifically text generation and feature extraction tasks.
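These tasks map onto the standard Inference API endpoints. For example, a raw text-generation request looks roughly like this (the model name is illustrative, and HF_API_TOKEN is assumed to hold your token):

```typescript
// Minimal sketch of a HuggingFace Inference API text-generation call.
const res = await fetch('https://api-inference.huggingface.co/models/gpt2', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.HF_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ inputs: 'Once upon a time' }),
});
const data = await res.json();
console.log(data[0].generated_text);
```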
📄️ LocalAI (Llama, Alpaca, GPT4All, ...)
LocalAI is an OpenAI-compatible API wrapper for open-source LLMs. You can run LocalAI with Llama, Alpaca, Vicuna, GPT4All, RedPajama, and many other models in the ggml format.
📄️ Replicate
Replicate is an API for machine learning models. It currently hosts models like Llama v2.
📄️ OpenLLM
To use OpenLLM with promptfoo, we take advantage of OpenLLM's support for OpenAI-compatible endpoints.
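Because the endpoint speaks the OpenAI protocol, any OpenAI client can target it by overriding the base URL. A sketch using the official openai Node SDK (the port and model name are assumptions that depend on how you started OpenLLM):

```typescript
import OpenAI from 'openai';

// Point the OpenAI client at a local OpenLLM server.
// Port 3000 is an assumption based on OpenLLM defaults; adjust as needed.
const client = new OpenAI({
  baseURL: 'http://localhost:3000/v1',
  apiKey: 'not-needed-for-local', // placeholder; local servers often ignore it
});

const completion = await client.chat.completions.create({
  model: 'llama2', // whatever model your OpenLLM server is serving
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(completion.choices[0].message.content);
```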
📄️ Perplexity
The Perplexity API (pplx-api) offers access to Perplexity, Mistral, Llama, and other models.