Overview
Bifrost supports a wide range of AI providers, all accessible through a consistent OpenAI-compatible interface. This standardization lets you switch between providers without modifying your application code, as all responses follow the same structure regardless of the underlying provider. Bifrost can also act as a provider-compatible gateway (for example, Anthropic, Google Gemini, Cohere, Bedrock, and others), exposing provider-specific endpoints so you can use existing provider SDKs or integrations with no code changes; see What is an integration? for details.

Provider Support Matrix
The following table summarizes which operations are supported by each provider via Bifrost's unified interface.

| Provider | Models | Text | Text (stream) | Chat | Chat (stream) | Responses | Responses (stream) | Embeddings | TTS | TTS (stream) | STT | STT (stream) | Files | Batch |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Anthropic (anthropic/<model>) | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| Azure (azure/<model>) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
| Bedrock (bedrock/<model>) | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| Cerebras (cerebras/<model>) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Cohere (cohere/<model>) | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| ElevenLabs (elevenlabs/<model>) | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| Gemini (gemini/<model>) | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Groq (groq/<model>) | ✅ | 🟡 | 🟡 | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Hugging Face (huggingface/<model>) | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ |
| Mistral (mistral/<model>) | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |
| Nebius (nebius/<model>) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Ollama (ollama/<model>) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| OpenAI (openai/<model>) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenRouter (openrouter/<model>) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Parasail (parasail/<model>) | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Perplexity (perplexity/<model>) | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| SGL (sgl/<model>) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Vertex AI (vertex/<model>) | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
- ✅ Fully supported by the downstream provider, or internally implemented by Bifrost.
- 🟡 Not supported by the downstream provider, but internally implemented by Bifrost as a fallback.
- ❌ Not supported by the downstream provider, hence not supported by Bifrost.
Some operations marked 🟡 are not supported by the downstream provider, and Bifrost's internal emulation of them is optional. For example, Groq does not support text completions, but Bifrost can emulate them internally using the Chat Completions API. This emulation is disabled by default; it can be enabled by setting the `enable_litellm_fallbacks` flag to true in the client configuration.
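For the HTTP gateway, enabling the flag might look like the following config-file fragment. This is an illustrative sketch: only the `enable_litellm_fallbacks` key comes from the text above, and its exact location within the config schema is an assumption.

```json
{
  "client": {
    "enable_litellm_fallbacks": true
  }
}
```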
We do not recommend relying on such fallbacks, since text completions and chat completions are fundamentally different. However, this option is available to help users migrating from LiteLLM (which does support these fallbacks).

- "Models" refers to the list models operation (`/v1/models`).
- "Text" refers to the classic text completion interface (`/v1/completions`).
- "Responses" refers to the OpenAI-style Responses API (`/v1/responses`). Non-OpenAI providers map this to their native chat API under the hood.
- TTS corresponds to `/v1/audio/speech` and STT to `/v1/audio/transcriptions`.
- "Files" refers to the Files API operations (`/v1/files`) for uploading, listing, retrieving, and deleting files.
- "Batch" refers to the Batch API operations (`/v1/batches`) for creating, listing, retrieving, canceling, and getting results of batch jobs.
Response Format
All providers return responses in the OpenAI-compatible format. Bifrost handles the translation between different provider-specific formats automatically.
Custom Providers
In addition to the built-in providers, Bifrost supports custom provider configurations. Custom providers allow you to create multiple instances of the same base provider with different configurations, request type restrictions, and access patterns. This is useful for environment-specific configurations, role-based access control, and feature testing. Learn more: Custom Providers

Benefits
The consistent interface across providers enables:

- Provider switching without code modifications
- Fallback configurations for improved reliability
- Load balancing across multiple providers
- OpenAI-compatible patterns for all providers
Provider Metadata
Provider information is included in the `extra_fields` section of each response, providing transparency into which provider handled the request and any provider-specific metadata.
Configuration options:
- Go SDK Provider Configuration - Configure `SendBackRawResponse` and other provider settings
- Gateway Provider Configuration - Configure `send_back_raw_response` via API, UI, or config file

