Setup
1. Install LibreChat
Follow the LibreChat documentation for local setup. There are multiple installation options (Docker, npm, etc.).
2. Add Bifrost as a Custom Provider
Add the following to your librechat.yaml file:
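A minimal custom endpoint entry might look like the following sketch. The endpoint name, gateway URL, and model IDs are placeholders to adapt to your deployment:

```yaml
endpoints:
  custom:
    - name: "Bifrost"
      apiKey: "dummy"                        # or a Bifrost virtual key if auth is enabled
      baseURL: "http://localhost:8080/v1"    # Bifrost gateway URL + /v1
      models:
        default: ["openai/gpt-5", "anthropic/claude-sonnet-4-5-20250929"]
        fetch: true                          # fetch available models from Bifrost
      titleConvo: true
      titleModel: "groq/llama-3.3-70b-versatile"
      summarize: true
      summaryModel: "groq/llama-3.3-70b-versatile"
```

The fields are described below.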
| Field | Description |
|---|---|
| apiKey | Bifrost virtual key if authentication is enabled; use a dummy value otherwise |
| baseURL | Bifrost gateway URL + /v1 (LibreChat uses the OpenAI format) |
| models.default | Default models to show; use Bifrost model IDs (provider/model) |
| models.fetch | Set to true to fetch available models from Bifrost |
| titleConvo | Use AI for conversation title generation |
| titleModel | Model for title generation |
| summarize | Enable chat summary generation |
| summaryModel | Model for summaries |
If you’re running LibreChat in Docker, it does not automatically use librechat.yaml. See Step 1 of the LibreChat custom endpoints guide for how to mount or override the config.
3. Docker Networking
Choose the correct baseURL for your setup:
| Setup | baseURL |
|---|---|
| LibreChat and Bifrost on same host | http://localhost:8080/v1 |
| LibreChat in Docker Desktop, Bifrost on host | http://host.docker.internal:8080/v1 |
| LibreChat in Docker Engine (Linux), Bifrost on host | Add --add-host=host.docker.internal:host-gateway to docker run, or extra_hosts: ["host.docker.internal:host-gateway"] in Compose, then use http://host.docker.internal:8080/v1 |
| Both in same Docker network | http://bifrost-container-name:8080/v1 |
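For the Docker Engine case, the Compose change is a small sketch like the one below; the service name depends on how your LibreChat compose file names it (commonly api):

```yaml
services:
  api:    # LibreChat's service name in your docker-compose.yml
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

With this in place, containers can reach services on the host via host.docker.internal.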
4. Run LibreChat
Start LibreChat. Bifrost will appear as a provider with all configured models available.
Virtual Keys
When Bifrost has virtual key authentication enabled, set apiKey to your virtual key:
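For example, in the custom endpoint entry (the key value here is a placeholder, not a real key format guarantee):

```yaml
endpoints:
  custom:
    - name: "Bifrost"
      apiKey: "vk-xxxxxxxx"    # placeholder: your Bifrost virtual key
      baseURL: "http://localhost:8080/v1"
```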
Model Selection
LibreChat displays models from the models.default list or fetches them from Bifrost when models.fetch is enabled. Use Bifrost model IDs in provider/model format to access any configured provider:
- Use powerful models like openai/gpt-5 or anthropic/claude-sonnet-4-5-20250929 for complex conversations
- Use fast models like groq/llama-3.3-70b-versatile for quick responses
- Set titleModel and summaryModel to lighter models to reduce cost for metadata generation
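As a sketch of the last point, the title and summary models can be pointed at a cheaper model than the chat defaults (model IDs here are examples):

```yaml
titleConvo: true
titleModel: "groq/llama-3.3-70b-versatile"
summarize: true
summaryModel: "groq/llama-3.3-70b-versatile"
```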
Using Multiple Providers
Bifrost routes requests to the correct provider based on the model name. Use the provider/model-name format to access any configured provider through the single /v1 endpoint:
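As a minimal sketch, the request body is plain OpenAI chat-completion JSON; only the model prefix selects the provider. The gateway URL and model IDs below are illustrative assumptions:

```python
import json

# Illustrative Bifrost gateway endpoint (adjust to your deployment).
BIFROST_CHAT_URL = "http://localhost:8080/v1/chat/completions"

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-format chat completion body.

    The backend provider is chosen purely by the "provider/model"
    prefix, so the same /v1 endpoint reaches any configured provider.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same endpoint, different providers -- only the model string changes.
for model in ("openai/gpt-5", "anthropic/claude-sonnet-4-5-20250929"):
    print(json.dumps(chat_payload(model, "Hello")))
```

POSTing either payload to the same URL is all LibreChat does under the hood; no per-provider endpoint is involved.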
Supported Providers
Bifrost supports the following providers with the provider/model-name format:
openai, azure, gemini, vertex, bedrock, mistral, groq, cerebras, cohere, perplexity, xai, ollama, openrouter, huggingface, nebius, parasail, replicate, vllm, sgl
LibreChat connects to Bifrost via a single OpenAI-compatible endpoint. Bifrost handles routing to the correct provider based on the model name, so no per-provider configuration is needed in LibreChat.
Observability
All LibreChat traffic through Bifrost is logged. Monitor it at http://localhost:8080/logs, where you can filter by provider or model, or search conversation content to track usage across your team.
Next Steps
- Provider Configuration — Configure AI providers in Bifrost
- Virtual Keys — Set up usage limits and access control

