All providers are configured under bifrost.providers in your values file. Each provider entry contains a keys list, where each key has a name, value, weight, and optional provider-specific config. There are two ways to supply credentials:
  • Direct value — value: "sk-..." (fine for dev; avoid in production)
  • Kubernetes Secret + env var — store the key in a Secret, inject as an env var, and reference it with value: "env.VAR_NAME"
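For the direct-value style, a minimal dev-only entry looks like this (never commit real keys to a values file):

```yaml
bifrost:
  providers:
    openai:
      keys:
        - name: "dev"
          value: "sk-..."   # literal key; dev only
          weight: 1
```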
The providerSecrets block handles the Secret → env var injection automatically:
bifrost:
  providers:
    openai:
      keys:
        - name: "primary"
          value: "env.OPENAI_API_KEY"   # resolved at runtime
          weight: 1

  providerSecrets:
    openai:
      existingSecret: "my-openai-secret"
      key: "api-key"
      envVar: "OPENAI_API_KEY"          # injected into the pod

OpenAI

Supports multiple keys with weighted load balancing. The key with use_for_batch_api: true is eligible for the Batch API.
Step 1 — Create secret
kubectl create secret generic openai-credentials \
  --from-literal=api-key-1='sk-your-primary-key' \
  --from-literal=api-key-2='sk-your-secondary-key' \
  --from-literal=api-key-batch='sk-your-batch-key'
Step 2 — Values file
# openai-values.yaml
image:
  tag: "v1.4.11"

bifrost:
  providers:
    openai:
      keys:
        - name: "openai-primary"
          value: "env.OPENAI_KEY_1"
          weight: 2               # 50% of traffic
          models: ["*"]
        - name: "openai-secondary"
          value: "env.OPENAI_KEY_2"
          weight: 1               # 25%
          models: ["gpt-4o-mini"] # restrict to cheaper model
        - name: "openai-batch"
          value: "env.OPENAI_KEY_BATCH"
          weight: 1               # 25%
          models: ["*"]
          use_for_batch_api: true

  providerSecrets:
    openai-key-1:
      existingSecret: "openai-credentials"
      key: "api-key-1"
      envVar: "OPENAI_KEY_1"
    openai-key-2:
      existingSecret: "openai-credentials"
      key: "api-key-2"
      envVar: "OPENAI_KEY_2"
    openai-key-batch:
      existingSecret: "openai-credentials"
      key: "api-key-batch"
      envVar: "OPENAI_KEY_BATCH"
Step 3 — Install
helm install bifrost bifrost/bifrost -f openai-values.yaml
Optional — per-provider network config
bifrost:
  providers:
    openai:
      keys:
        - name: "primary"
          value: "env.OPENAI_KEY_1"
          weight: 1
      network_config:
        default_request_timeout_in_seconds: 120
        max_retries: 3
        retry_backoff_initial_ms: 500
        retry_backoff_max_ms: 5000
        max_conns_per_host: 5000
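The chart doesn't spell out the backoff curve; a common reading of retry_backoff_initial_ms / retry_backoff_max_ms is exponential doubling capped at the max (an assumption — check the Bifrost source for the exact behavior):

```python
def backoff_ms(attempt: int, initial_ms: int = 500, max_ms: int = 5000) -> int:
    # Delay before retry `attempt` (0-based): doubles each time, capped at max_ms.
    return min(initial_ms * (2 ** attempt), max_ms)

# With max_retries: 3, the delays would be:
print([backoff_ms(i) for i in range(3)])  # [500, 1000, 2000]
```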

Multi-Provider Example

Combine providers in a single values file:
# multi-provider-values.yaml
image:
  tag: "v1.4.11"

bifrost:
  providers:
    openai:
      keys:
        - name: "openai-primary"
          value: "env.OPENAI_API_KEY"
          weight: 2
          models: ["*"]
    anthropic:
      keys:
        - name: "anthropic-primary"
          value: "env.ANTHROPIC_API_KEY"
          weight: 1
          models: ["*"]
    groq:
      keys:
        - name: "groq-primary"
          value: "env.GROQ_API_KEY"
          weight: 1
          models: ["*"]

  providerSecrets:
    openai-key:
      existingSecret: "provider-keys"
      key: "openai"
      envVar: "OPENAI_API_KEY"
    anthropic-key:
      existingSecret: "provider-keys"
      key: "anthropic"
      envVar: "ANTHROPIC_API_KEY"
    groq-key:
      existingSecret: "provider-keys"
      key: "groq"
      envVar: "GROQ_API_KEY"

  plugins:
    logging:
      enabled: true
    governance:
      enabled: true
# Create a single secret with all provider keys
kubectl create secret generic provider-keys \
  --from-literal=openai='sk-your-openai-key' \
  --from-literal=anthropic='sk-ant-your-anthropic-key' \
  --from-literal=groq='gsk_your-groq-key'

helm install bifrost bifrost/bifrost -f multi-provider-values.yaml