LibreChat is a modern, open-source chat client that supports multiple AI providers. By adding Bifrost as a custom provider, you get access to any model configured in Bifrost through a familiar chat interface, plus governance features like virtual keys and built-in observability.

Setup

1. Install LibreChat

Follow the LibreChat documentation for local setup. There are multiple installation options (Docker, npm, etc.).

2. Add Bifrost as a Custom Provider

Add the following to your `librechat.yaml` file (custom endpoints live under the `endpoints.custom` key):

```yaml
endpoints:
  custom:
    - name: "Bifrost"
      apiKey: "dummy"
      baseURL: "http://localhost:8080/v1"
      models:
        default: ["openai/gpt-4o"]
        fetch: true
      titleConvo: true
      titleModel: "openai/gpt-4o"
      summarize: false
      summaryModel: "openai/gpt-4o"
      forcePrompt: false
      modelDisplayLabel: "Bifrost"
      iconURL: "https://getbifrost.ai/bifrost-logo.png"
```
| Field | Description |
| --- | --- |
| `apiKey` | Bifrost virtual key if authentication is enabled; use `dummy` otherwise |
| `baseURL` | Bifrost gateway URL + `/v1` (LibreChat uses the OpenAI format) |
| `models.default` | Default models to show; use Bifrost model IDs (`provider/model`) |
| `models.fetch` | Set `true` to fetch available models from Bifrost |
| `titleConvo` | Use AI to generate conversation titles |
| `titleModel` | Model used for title generation |
| `summarize` | Enable chat summary generation |
| `summaryModel` | Model used for summaries |
Set `models.fetch: true` to automatically discover all models configured in Bifrost. This keeps your LibreChat model list in sync with your Bifrost provider configuration.
If you're running LibreChat in Docker, it does not automatically use `librechat.yaml`. See Step 1 of the LibreChat custom endpoints guide for how to mount or override the config.

3. Docker Networking

Choose the correct `baseURL` for your setup:

| Setup | baseURL |
| --- | --- |
| LibreChat and Bifrost on the same host | `http://localhost:8080/v1` |
| LibreChat in Docker Desktop, Bifrost on host | `http://host.docker.internal:8080/v1` |
| LibreChat in Docker Engine (Linux), Bifrost on host | Add `--add-host=host.docker.internal:host-gateway` to `docker run`, or `extra_hosts: ["host.docker.internal:host-gateway"]` in Compose, then use `http://host.docker.internal:8080/v1` |
| Both in the same Docker network | `http://bifrost-container-name:8080/v1` |
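For the Docker Engine (Linux) row above, the Compose change might look like the following sketch. The service name `api` matches LibreChat's default Compose file, but adjust it to whatever your deployment uses:

```yaml
# docker-compose.override.yml — service name is illustrative
services:
  api:                                # LibreChat's API service
    extra_hosts:
      - "host.docker.internal:host-gateway"   # maps the host's gateway IP inside the container
```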

4. Run LibreChat

Start LibreChat. Bifrost will appear as a provider with all configured models available.

Virtual Keys

When Bifrost has virtual key authentication enabled, set `apiKey` to your virtual key:

```yaml
apiKey: "bf-your-virtual-key-here"
```
This lets you enforce usage limits, budgets, and access control per user or team. For team deployments, create a separate virtual key for each team or environment — each key can have its own rate limits, budgets, and provider access rules configured in the Bifrost dashboard.
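To keep the key out of version control, LibreChat's custom-endpoint config supports environment-variable substitution in `apiKey`; the variable name below is illustrative (define it in LibreChat's `.env`):

```yaml
apiKey: "${BIFROST_VIRTUAL_KEY}"   # resolved from the environment at startup
```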

Model Selection

LibreChat displays models from the `models.default` list, or fetches them from Bifrost when `models.fetch` is enabled. Use Bifrost model IDs in `provider/model` format to access any configured provider:

```yaml
models:
  default:
    - "openai/gpt-5"
    - "anthropic/claude-sonnet-4-5-20250929"
    - "gemini/gemini-2.5-pro"
    - "groq/llama-3.3-70b-versatile"
  fetch: true
```
- Use powerful models like `openai/gpt-5` or `anthropic/claude-sonnet-4-5-20250929` for complex conversations
- Use fast models like `groq/llama-3.3-70b-versatile` for quick responses
- Set `titleModel` and `summaryModel` to lighter models to reduce the cost of metadata generation
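For example, metadata generation can be pointed at a cheaper model. The model ID below is illustrative; use whatever lightweight model is configured in your Bifrost instance:

```yaml
titleConvo: true
titleModel: "openai/gpt-4o-mini"     # lighter model for conversation titles
summarize: true
summaryModel: "openai/gpt-4o-mini"   # and for chat summaries
```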

Using Multiple Providers

Bifrost routes requests to the correct provider based on the model name. Use the `provider/model-name` format to access any configured provider through the single `/v1` endpoint:

```
anthropic/claude-sonnet-4-5-20250929
openai/gpt-5
gemini/gemini-2.5-pro
mistral/mistral-large-latest
```
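The naming convention above can be sketched as a split on the first slash. This is a hypothetical illustration of how a gateway separates provider from model, not Bifrost's actual routing code:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider/model' ID into (provider, model).

    The provider is everything before the first '/'; the model is the
    remainder, so model names containing slashes are preserved intact.
    """
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model


print(split_model_id("anthropic/claude-sonnet-4-5-20250929"))
# → ('anthropic', 'claude-sonnet-4-5-20250929')
```

Because only the first slash matters, IDs like `openrouter/meta-llama/llama-3-70b` would still resolve to the `openrouter` provider.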

Supported Providers

Bifrost supports the following providers with the `provider/model-name` format: `openai`, `azure`, `gemini`, `vertex`, `bedrock`, `mistral`, `groq`, `cerebras`, `cohere`, `perplexity`, `xai`, `ollama`, `openrouter`, `huggingface`, `nebius`, `parasail`, `replicate`, `vllm`, `sgl`
LibreChat connects to Bifrost via a single OpenAI-compatible endpoint. Bifrost handles routing to the correct provider based on the model name — no per-provider configuration needed in LibreChat.

Observability

All LibreChat traffic through Bifrost is logged. Monitor it at `http://localhost:8080/logs`, where you can filter by provider or model, or search through conversation content to track usage across your team.

Next Steps