Google AI Studio
The `google` provider is compatible with Google AI Studio (formerly known as PaLM), which offers access to Gemini models.
You can use it by specifying one of the available models. Currently, the following models are supported:

- `google:gemini-pro`
- `google:gemini-pro-vision`
- `google:aqa` (attributed question answering)
- `google:chat-bison-001`
:::tip
If you are using Google Vertex, see the `vertex` provider.
:::
Supported environment variables:

- `GOOGLE_API_KEY` (required) - Google AI Studio/PaLM API token
- `GOOGLE_API_HOST` - used to override the Google API host; defaults to `generativelanguage.googleapis.com`
The provider supports various configuration options such as `safetySettings`, `stopSequences`, `temperature`, `maxOutputTokens`, `topP`, and `topK`, which can be used to customize the behavior of the model like so:
```yaml
providers:
  - id: google:gemini-pro
    config:
      temperature: 0
      maxOutputTokens: 1024
```
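The other options mentioned above can be combined in the same `config` block. A sketch, assuming the safety category and threshold names follow the Gemini API's `HarmCategory` and `HarmBlockThreshold` values (verify the exact names against the Gemini API reference before relying on them):

```yaml
providers:
  - id: google:gemini-pro
    config:
      temperature: 0
      maxOutputTokens: 1024
      topP: 0.95
      topK: 40
      stopSequences: ['###']
      safetySettings:
        - category: HARM_CATEGORY_HARASSMENT
          threshold: BLOCK_ONLY_HIGH
```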
You can also pass in a `responseSchema` file:

```yaml
providers:
  - id: google:gemini-pro
    config:
      responseSchema: file://test.json
```
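The referenced file holds a JSON schema describing the expected response shape. As a hypothetical example, a `test.json` that constrains the model to return an object with a single string field might look like this (the `answer` field name is illustrative, not part of the provider's API):

```json
{
  "type": "object",
  "properties": {
    "answer": { "type": "string" }
  },
  "required": ["answer"]
}
```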
If you want more fine-tuned control over the `generationConfig`:

```yaml
providers:
  - id: google:gemini-1.5-flash-002
    config:
      generationConfig:
        response_mime_type: application/json
        response_schema:
          type: object
          properties:
            foo:
              type: string
```
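Because `response_mime_type: application/json` instructs the model to emit JSON matching the schema, downstream code can parse its output directly. A minimal sketch in Python, assuming a response conforming to the schema above (the sample output string is hypothetical):

```python
import json


def parse_model_output(raw: str) -> dict:
    """Parse a JSON model response and check it against the schema above:
    an object whose optional 'foo' property must be a string."""
    data = json.loads(raw)
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    if "foo" in data and not isinstance(data["foo"], str):
        raise ValueError("'foo' must be a string")
    return data


# Hypothetical model output conforming to the schema:
print(parse_model_output('{"foo": "bar"}'))  # {'foo': 'bar'}
```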