
fal.ai

The fal provider supports the fal.ai inference API using the fal-js client, providing a native experience for using fal.ai models in your evaluations.

Setup

  1. Install the fal client as a dependency:

    npm install @fal-ai/serverless-client
  2. Create an API key in the fal dashboard.

  3. Set the FAL_KEY environment variable:

    export FAL_KEY=your_api_key_here

Provider Format

To run a model, specify the model type and model name: fal:<model_type>:<model_name>.

  • fal:image:fal-ai/flux-pro/v1.1-ultra - Professional-grade image generation with up to 2K resolution
  • fal:image:fal-ai/flux/schnell - Fast, high-quality image generation in 1-4 steps
  • fal:image:fal-ai/fast-sdxl - High-speed SDXL with LoRA support
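
To illustrate the format, here is a minimal sketch of referencing one of the models above in a promptfoo configuration (the prompt text is a placeholder, not from the fal docs):

```yaml
# promptfooconfig.yaml
prompts:
  - 'A watercolor painting of a lighthouse at dusk'
providers:
  - fal:image:fal-ai/flux/schnell
```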
info

Browse the complete model gallery for the latest models and detailed specifications. Model availability and capabilities are frequently updated.

Environment Variables

| Variable | Description |
| --- | --- |
| `FAL_KEY` | Your API key for authentication with fal |

Configuration

Configure the fal provider in your promptfoo configuration file. Here's an example using fal-ai/flux/schnell:

info

Configuration parameters vary by model. For example, fast-sdxl supports additional parameters like scheduler and guidance_scale. Always check the model-specific documentation for supported parameters.

providers:
  - id: fal:image:fal-ai/flux/schnell
    config:
      apiKey: your_api_key_here # Alternative to FAL_KEY environment variable
      image_size:
        width: 1024
        height: 1024
      num_inference_steps: 8
      seed: 6252023

Configuration Options

| Parameter | Type | Description |
| --- | --- | --- |
| `apiKey` | string | The API key for authentication with fal |
| `image_size.width` | number | The width of the generated image |
| `image_size.height` | number | The height of the generated image |
| `num_inference_steps` | number | The number of inference steps to run |
| `seed` | number | Sets a seed for reproducible results |
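
Putting the options together, a fuller configuration might look like the following sketch. The prompt, variable names, and seed value are illustrative placeholders; the parameters are the ones documented above:

```yaml
# promptfooconfig.yaml — a hedged end-to-end example
prompts:
  - 'A studio photograph of {{subject}}'
providers:
  - id: fal:image:fal-ai/flux/schnell
    config:
      image_size:
        width: 1024
        height: 1024
      num_inference_steps: 8
      seed: 42 # fixed seed so repeated runs produce comparable images
tests:
  - vars:
      subject: a vintage typewriter
```

Fixing `seed` is useful when comparing models or parameter changes, since it keeps the generation deterministic across evaluation runs.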