One Format, All Providers
The beauty of Bifrost lies in its unified interface: whether you use OpenAI, Anthropic, AWS Bedrock, Google Vertex, or any other supported provider, you always get the same response format. Your application logic never needs to change when you switch providers. Bifrost standardizes every provider response to the OpenAI-compatible structure, so you write your code once and use it with any provider.

How It Works
When you make a request to any provider through Bifrost, the response always follows the same structure: the familiar OpenAI format that most developers already know. Behind the scenes, Bifrost handles all the complexity of translating between the different provider formats.
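To make this concrete, here is a minimal sketch of the idea. The model names are illustrative assumptions; the point is that the OpenAI-style request body is identical for every provider, and only the "provider/model" string changes:

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion body.

    The body has the same shape for every provider behind Bifrost;
    only the "provider/model" string differs.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same request shape for two different providers:
openai_body = build_chat_request("openai/gpt-4o", "Hello")
claude_body = build_chat_request("anthropic/claude-sonnet-4", "Hello")
assert openai_body["messages"] == claude_body["messages"]
print(json.dumps(openai_body, indent=2))
```

Because the response is also normalized to the same shape, the code that parses it is likewise provider-independent.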
Provider Support Matrix
The following table summarizes which operations each provider supports through Bifrost's unified interface.

| Provider | Models | Text | Text (stream) | Chat | Chat (stream) | Responses | Responses (stream) | Embeddings | TTS | TTS (stream) | STT | STT (stream) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Anthropic (`anthropic/<model>`) | Yes | Yes | No | Yes | Yes | Yes | Yes | No | No | No | No | No |
| Azure OpenAI (`azure/<model>`) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | No |
| Bedrock (`bedrock/<model>`) | Yes | Yes | No | Yes | Yes | Yes | Yes | Yes | No | No | No | No |
| Cerebras (`cerebras/<model>`) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No |
| Cohere (`cohere/<model>`) | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No | No | No |
| Gemini (`gemini/<model>`) | Yes | No | No | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Groq (`groq/<model>`) | Yes | No | No | Yes | Yes | Yes | Yes | No | No | No | No | No |
| Mistral (`mistral/<model>`) | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No | No | No |
| Ollama (`ollama/<model>`) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | No |
| OpenAI (`openai/<model>`) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| OpenRouter (`openrouter/<model>`) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No |
| Parasail (`parasail/<model>`) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No |
| SGL (`sgl/<model>`) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No | No | No |
| Vertex AI (`vertex/<model>`) | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No | No | No |
- “Models” refers to the list models operation (`/v1/models`).
- “Text” refers to the classic text completion interface (`/v1/completions`).
- “Responses” refers to the OpenAI-style Responses API (`/v1/responses`). Non-OpenAI providers map this to their native chat API under the hood.
- TTS corresponds to `/v1/audio/speech` and STT to `/v1/audio/transcriptions`.
The Power of Consistency
This unified approach means you can:

- Switch providers instantly without changing application logic
- Mix and match providers using fallbacks and load balancing
- Future-proof your code as new providers get added
- Use familiar OpenAI patterns regardless of the underlying provider
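The first two points can be sketched in a few lines. This is a hedged illustration, not Bifrost's built-in fallback mechanism: the `send` callable and model names are placeholders, and the point is that a fallback chain reduces to retrying with a different model string because every provider returns the same shape.

```python
def chat_with_fallback(send, models, prompt):
    """Call send(model, prompt) for each model in order.

    Returns the first successful response; re-raises the last error
    if every provider fails. Identical error handling works for all
    providers because the response format is uniform.
    """
    last_err = None
    for model in models:
        try:
            return send(model, prompt)
        except Exception as err:
            last_err = err
    raise last_err

# Demo with a stub transport: the first provider "fails", the second answers.
def fake_send(model, prompt):
    if model.startswith("openai/"):
        raise RuntimeError("provider down")
    return {"model": model, "choices": [{"message": {"content": "Hi!"}}]}

result = chat_with_fallback(
    fake_send, ["openai/gpt-4o", "anthropic/claude-sonnet-4"], "Hello"
)
assert result["model"] == "anthropic/claude-sonnet-4"
```

Note that Bifrost's gateway can perform fallbacks and load balancing for you; the sketch only shows why the uniform format makes such logic trivial on the client side too.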
Provider Transparency
While the response format stays consistent, Bifrost doesn’t hide which provider actually handled your request. Provider information is always available in the `extra_fields` section, along with any provider-specific metadata you might need for debugging or analytics.
This gives you the best of both worlds: consistent application logic with full transparency into the underlying provider behavior.
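A small sketch of what reading that metadata can look like. The exact keys inside `extra_fields` depend on your Bifrost version, so treat the layout below as an assumption:

```python
# Illustrative response shape: OpenAI-compatible fields plus extra_fields.
response = {
    "id": "chatcmpl-123",
    "choices": [{"message": {"role": "assistant", "content": "Hi!"}}],
    "extra_fields": {"provider": "anthropic"},  # assumed key layout
}

# Application logic reads the familiar OpenAI-shaped fields...
text = response["choices"][0]["message"]["content"]

# ...while debugging/analytics can still see which provider answered.
provider = response.get("extra_fields", {}).get("provider", "unknown")
print(text, provider)
```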
Learn more about configuring provider transparency:
- Go SDK Provider Configuration - Configure `SendBackRawResponse` and other provider settings
- Gateway Provider Configuration - Configure `send_back_raw_response` via API, UI, or config file

