Multi-Provider Setup
Configure multiple providers to seamlessly switch between them. This example shows how to configure OpenAI, Anthropic, and Mistral providers. If Bifrost receives a new provider at runtime (i.e., one that is not returned by GetConfiguredProviders() initially on bifrost.Init()), it will set up the provider at runtime using GetConfigForProvider(), which may cause a delay in the first request to that provider.
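Below is a minimal sketch of such an Account implementation. The three method names come from this page; the concrete types and signatures (schemas.ModelProvider, schemas.Key, schemas.ProviderConfig, schemas.BifrostConfig, the Cleanup method) are assumptions, so verify them against the schemas package in your Bifrost version.

```go
package main

import (
	"os"

	bifrost "github.com/maximhq/bifrost/core"
	"github.com/maximhq/bifrost/core/schemas"
)

// MyAccount implements Bifrost's Account interface for three providers.
type MyAccount struct{}

// GetConfiguredProviders lists the providers Bifrost should set up at Init time.
func (a *MyAccount) GetConfiguredProviders() ([]schemas.ModelProvider, error) {
	return []schemas.ModelProvider{schemas.OpenAI, schemas.Anthropic, schemas.Mistral}, nil
}

// GetKeysForProvider returns the API keys for a provider, read here from
// environment variables.
func (a *MyAccount) GetKeysForProvider(provider schemas.ModelProvider) ([]schemas.Key, error) {
	switch provider {
	case schemas.OpenAI:
		return []schemas.Key{{Value: os.Getenv("OPENAI_API_KEY"), Weight: 1.0}}, nil
	case schemas.Anthropic:
		return []schemas.Key{{Value: os.Getenv("ANTHROPIC_API_KEY"), Weight: 1.0}}, nil
	case schemas.Mistral:
		return []schemas.Key{{Value: os.Getenv("MISTRAL_API_KEY"), Weight: 1.0}}, nil
	}
	return nil, nil
}

// GetConfigForProvider returns per-provider settings; defaults are fine here.
func (a *MyAccount) GetConfigForProvider(provider schemas.ModelProvider) (*schemas.ProviderConfig, error) {
	return &schemas.ProviderConfig{}, nil
}

func main() {
	client, err := bifrost.Init(schemas.BifrostConfig{Account: &MyAccount{}})
	if err != nil {
		panic(err)
	}
	defer client.Cleanup()
	// client can now route requests to any of the three providers.
}
```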
Making Requests
Once providers are configured, you can make requests to any specific provider. This example shows how to send a request directly to Mistral's latest vision model. Bifrost handles the provider-specific API formatting automatically.
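A sketch of such a request, assuming the client from the setup above. The client type (*bifrost.Bifrost), the request types (schemas.BifrostRequest, RequestInput, BifrostMessage), and the model name are illustrative assumptions; check the schemas package for the exact shapes, including the image content types for true multimodal input.

```go
// Assumes: import "context", "fmt", "log", and the bifrost/schemas
// imports from the first sketch.
func askMistral(client *bifrost.Bifrost) {
	prompt := "Describe the colors in this scene."
	messages := []schemas.BifrostMessage{{
		Role:    schemas.ModelChatMessageRoleUser, // assumed role constant
		Content: schemas.MessageContent{ContentStr: &prompt},
	}}

	resp, bifrostErr := client.ChatCompletionRequest(context.Background(), &schemas.BifrostRequest{
		Provider: schemas.Mistral,        // route explicitly to Mistral
		Model:    "pixtral-large-latest", // illustrative vision model name
		Input:    schemas.RequestInput{ChatCompletionInput: &messages},
	})
	if bifrostErr != nil {
		log.Fatalf("request failed: %v", bifrostErr)
	}
	fmt.Printf("%+v\n", resp)
}
```

Environment Variables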
Set up your API keys for the providers you want to use, e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY, and MISTRAL_API_KEY, which the sketches above read from the environment.
Advanced Configuration
Weighted Load Balancing
Distribute requests across multiple API keys or providers based on custom weights. This example shows how to split traffic 70/30 between two OpenAI keys, useful for managing rate limits or costs across different accounts.
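A sketch of the weighted variant of GetKeysForProvider; the weighting behavior follows the description above, while the exact field names on schemas.Key remain assumptions.

```go
// Variant of MyAccount.GetKeysForProvider from the first sketch
// (assumes import "os"). Bifrost picks a key for each request in
// proportion to its Weight.
func (a *MyAccount) GetKeysForProvider(provider schemas.ModelProvider) ([]schemas.Key, error) {
	if provider == schemas.OpenAI {
		return []schemas.Key{
			{Value: os.Getenv("OPENAI_API_KEY_PRIMARY"), Weight: 0.7},   // ~70% of traffic
			{Value: os.Getenv("OPENAI_API_KEY_SECONDARY"), Weight: 0.3}, // ~30% of traffic
		}, nil
	}
	// ... other providers as in the basic setup ...
	return nil, nil
}
```

Model-Specific Keys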
Use different API keys for specific models, allowing you to manage access controls and billing separately. This example uses a premium key for advanced reasoning models (o1-preview, o1-mini) and a standard key for regular GPT models.
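A sketch, assuming a Models field on schemas.Key that scopes a key to the listed model names:

```go
// Variant of MyAccount.GetKeysForProvider (assumes import "os").
// Each key serves only the models listed in its Models slice.
func (a *MyAccount) GetKeysForProvider(provider schemas.ModelProvider) ([]schemas.Key, error) {
	if provider == schemas.OpenAI {
		return []schemas.Key{
			{
				Value:  os.Getenv("OPENAI_PREMIUM_KEY"),
				Models: []string{"o1-preview", "o1-mini"}, // advanced reasoning models
				Weight: 1.0,
			},
			{
				Value:  os.Getenv("OPENAI_STANDARD_KEY"),
				Models: []string{"gpt-4o", "gpt-4o-mini"}, // regular GPT models
				Weight: 1.0,
			},
		}, nil
	}
	return nil, nil
}
```

Custom Network Settings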
Customize the network configuration for each provider, including custom base URLs, extra headers, and timeout settings. This example shows how to use a local OpenAI-compatible server with custom headers for user identification.
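A sketch of a GetConfigForProvider that points OpenAI traffic at a local server; the NetworkConfig field names (BaseURL, ExtraHeaders, DefaultRequestTimeoutInSeconds) and the header name are assumptions:

```go
// Variant of MyAccount.GetConfigForProvider from the first sketch.
func (a *MyAccount) GetConfigForProvider(provider schemas.ModelProvider) (*schemas.ProviderConfig, error) {
	if provider == schemas.OpenAI {
		return &schemas.ProviderConfig{
			NetworkConfig: schemas.NetworkConfig{
				BaseURL: "http://localhost:8080/v1", // local OpenAI-compatible server
				ExtraHeaders: map[string]string{
					"X-Org-User": "ml-team", // hypothetical user-identification header
				},
				DefaultRequestTimeoutInSeconds: 30,
			},
		}, nil
	}
	return &schemas.ProviderConfig{}, nil
}
```

Managing Retries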
Configure retry behavior for handling temporary failures and rate limits. This example sets up exponential backoff with up to 5 retries, starting with a 1ms delay and capping at 10 seconds, ideal for handling transient network issues.
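A sketch using assumed retry fields on NetworkConfig (MaxRetries, RetryBackoffInitial, RetryBackoffMax); verify the names against your Bifrost version:

```go
// Variant of MyAccount.GetConfigForProvider (assumes import "time").
// Exponential backoff: up to 5 retries, starting at 1ms, capped at 10s.
func (a *MyAccount) GetConfigForProvider(provider schemas.ModelProvider) (*schemas.ProviderConfig, error) {
	return &schemas.ProviderConfig{
		NetworkConfig: schemas.NetworkConfig{
			MaxRetries:          5,
			RetryBackoffInitial: 1 * time.Millisecond,
			RetryBackoffMax:     10 * time.Second,
		},
	}, nil
}
```

Custom Concurrency and Buffer Size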
Fine-tune performance by adjusting worker concurrency and queue sizes per provider (defaults are 1000 workers and a queue size of 5000). This example gives OpenAI higher limits than Anthropic (100 workers, 500 queue) for high throughput, while Anthropic gets more conservative limits to respect its rate limits.
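A sketch, assuming a ConcurrencyAndBufferSize struct on ProviderConfig; the Anthropic numbers are illustrative:

```go
// Variant of MyAccount.GetConfigForProvider.
func (a *MyAccount) GetConfigForProvider(provider schemas.ModelProvider) (*schemas.ProviderConfig, error) {
	switch provider {
	case schemas.OpenAI:
		return &schemas.ProviderConfig{
			ConcurrencyAndBufferSize: schemas.ConcurrencyAndBufferSize{
				Concurrency: 100, // workers processing requests
				BufferSize:  500, // queued requests before backpressure
			},
		}, nil
	case schemas.Anthropic:
		return &schemas.ProviderConfig{
			ConcurrencyAndBufferSize: schemas.ConcurrencyAndBufferSize{
				Concurrency: 10, // illustrative conservative limits
				BufferSize:  50,
			},
		}, nil
	}
	return &schemas.ProviderConfig{}, nil
}
```

Setting Up a Proxy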
Route requests through proxies for compliance, security, or geographic requirements. This example shows both an HTTP proxy for OpenAI and an authenticated SOCKS5 proxy for Anthropic, useful for corporate environments or regional access.
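A sketch, assuming a ProxyConfig struct (Type, URL, Username, Password) and HTTP/SOCKS5 type constants; the proxy hosts are placeholders:

```go
// Variant of MyAccount.GetConfigForProvider (assumes import "os").
func (a *MyAccount) GetConfigForProvider(provider schemas.ModelProvider) (*schemas.ProviderConfig, error) {
	switch provider {
	case schemas.OpenAI:
		return &schemas.ProviderConfig{
			ProxyConfig: &schemas.ProxyConfig{
				Type: schemas.HttpProxy, // assumed constant name
				URL:  "http://proxy.example.com:8080",
			},
		}, nil
	case schemas.Anthropic:
		return &schemas.ProviderConfig{
			ProxyConfig: &schemas.ProxyConfig{
				Type:     schemas.Socks5Proxy, // assumed constant name
				URL:      "socks5://proxy.example.com:1080",
				Username: os.Getenv("PROXY_USERNAME"),
				Password: os.Getenv("PROXY_PASSWORD"),
			},
		}, nil
	}
	return &schemas.ProviderConfig{}, nil
}
```

Send Back Raw Response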
Include the original provider response alongside Bifrost's standardized response format. This is useful for debugging and for accessing provider-specific metadata; the raw payload is exposed on ExtraFields.RawResponse.
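A sketch: ExtraFields.RawResponse is named on this page, while the SendBackRawResponse config field is an assumption.

```go
// Variant of MyAccount.GetConfigForProvider: attach the provider's
// original payload to every response (field name is an assumption).
func (a *MyAccount) GetConfigForProvider(provider schemas.ModelProvider) (*schemas.ProviderConfig, error) {
	return &schemas.ProviderConfig{
		SendBackRawResponse: true,
	}, nil
}
```

On a response, the original provider payload is then available via resp.ExtraFields.RawResponse alongside the standardized fields.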
Provider-Specific Authentication
Enterprise cloud providers require additional configuration beyond API keys. Configure Azure OpenAI, AWS Bedrock, and Google Vertex with platform-specific authentication details.
- Azure OpenAI
- AWS Bedrock
- Google Vertex
Azure OpenAI requires endpoint URLs, deployment mappings, and API version configuration:
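A sketch of that configuration. Whether these settings live on the key or in a provider-level meta config varies by Bifrost version, so treat the AzureKeyConfig type and its fields as assumptions:

```go
// Variant of MyAccount.GetKeysForProvider (assumes import "os").
func (a *MyAccount) GetKeysForProvider(provider schemas.ModelProvider) ([]schemas.Key, error) {
	if provider == schemas.Azure {
		return []schemas.Key{{
			Value:  os.Getenv("AZURE_OPENAI_API_KEY"),
			Weight: 1.0,
			AzureKeyConfig: &schemas.AzureKeyConfig{
				Endpoint: os.Getenv("AZURE_OPENAI_ENDPOINT"), // e.g. https://your-resource.openai.azure.com
				Deployments: map[string]string{
					"gpt-4o": "your-gpt-4o-deployment", // model name -> deployment name
				},
				APIVersion: "2024-08-01-preview", // illustrative version
			},
		}}, nil
	}
	return nil, nil
}
```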
Best Practices
Performance Considerations
Keys are fetched from your GetKeysForProvider implementation on every request. Ensure your implementation is optimized for speed to avoid adding latency (a sketch follows the list below):
- Cache keys in memory during application startup
- Use simple switch statements or map lookups
- Avoid database queries, file I/O, or network calls
- Keep complex key processing logic outside the request path
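A sketch of that pattern, building on the MyAccount type from the earlier examples: keys are loaded once at startup and every lookup is an in-memory map read.

```go
// Assumes import "os" and the schemas types from the first sketch.
type MyAccount struct {
	keys map[schemas.ModelProvider][]schemas.Key // populated once at startup
}

func NewMyAccount() *MyAccount {
	return &MyAccount{
		keys: map[schemas.ModelProvider][]schemas.Key{
			schemas.OpenAI:    {{Value: os.Getenv("OPENAI_API_KEY"), Weight: 1.0}},
			schemas.Anthropic: {{Value: os.Getenv("ANTHROPIC_API_KEY"), Weight: 1.0}},
			schemas.Mistral:   {{Value: os.Getenv("MISTRAL_API_KEY"), Weight: 1.0}},
		},
	}
}

// GetKeysForProvider is a constant-time map lookup: no I/O on the request path.
func (a *MyAccount) GetKeysForProvider(provider schemas.ModelProvider) ([]schemas.Key, error) {
	return a.keys[provider], nil
}
```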
Next Steps
- Streaming Responses - Real-time response generation
- Tool Calling - Enable AI to use external functions
- Multimodal AI - Process images, audio, and text
- Core Features - Advanced Bifrost capabilities

