What Are Custom Providers?
Custom providers allow you to create multiple instances of the same base provider, each with different configurations and access patterns. The key feature is request type control, which enables you to restrict what operations each custom provider instance can perform.
Think of custom providers as “multiple views” of the same underlying provider — you can create several custom configurations for OpenAI, Anthropic, or any other provider, each optimized for different use cases while sharing the same API keys and base infrastructure.
Key Benefits
- Multiple Provider Instances: Create several configurations of the same base provider (e.g., multiple OpenAI configurations)
- Request Type Control: Restrict which operations (chat, embeddings, speech, etc.) each custom provider can perform
- Custom Naming: Use descriptive names like “openai-production” or “openai-staging”
- Provider Reuse: Maximize the value of your existing provider accounts
Custom providers are configured using the custom_provider_config field, which extends the standard provider configuration. The main purpose is to create multiple instances of the same base provider, each with different request type restrictions.
Important: The allowed_requests field follows a specific behavior:
- Omitted entirely: All operations are allowed (default behavior)
- Partially specified: Only the fields explicitly set to true are allowed; all omitted fields default to false
- Fully specified: Only the operations you explicitly enable are allowed
- Present but an empty object ({}): All fields default to false, so every operation is blocked
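For example, this partially specified value permits only chat completions; every omitted operation (embeddings, speech, transcription, and so on) defaults to false, while an empty object {} would block all operations:

```json
{
  "allowed_requests": {
    "chat_completion": true,
    "chat_completion_stream": true
  }
}
```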
Setting Up a Custom Provider

Custom providers can be configured through the Web UI, the HTTP API, config.json, or the Go SDK. Using the Web UI:

1. Go to http://localhost:8080
2. Navigate to “Providers” in the sidebar
3. Click “Add New Provider”
4. Choose a unique provider name (e.g., “openai-custom”)
5. Select the base provider type (e.g., “openai”)
6. Configure which request types are allowed
7. Save the configuration
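The same setup can be expressed directly in config.json. A minimal sketch, assuming the “openai-custom” name from the steps above and an API key referenced via env.PROVIDER_API_KEY as in the other examples on this page:

```json
{
  "openai-custom": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": true
      }
    }
  }
}
```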
Configuration Options
Allowed Request Types
Control which operations your custom provider can perform. The behavior is:
- If allowed_requests is not specified: All operations are allowed by default
- If allowed_requests is specified: Only the fields set to true are allowed; all others default to false
Available operations:
- text_completion: Legacy text completion requests
- text_completion_stream: Streaming text completion requests
- chat_completion: Standard chat completion requests
- chat_completion_stream: Streaming chat completion responses
- responses: Responses API requests
- responses_stream: Streaming Responses API requests
- embedding: Text embedding generation
- speech: Text-to-speech conversion
- speech_stream: Streaming text-to-speech
- transcription: Speech-to-text conversion
- transcription_stream: Streaming speech-to-text
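For instance, a hypothetical embeddings-only instance would enable just the embedding operation from this list and leave everything else blocked:

```json
{
  "openai-embeddings-only": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": { "embedding": true }
    }
  }
}
```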
Base Provider Types
Custom providers can be built on these supported providers:
- openai - OpenAI API
- anthropic - Anthropic Claude
- bedrock - AWS Bedrock
- cohere - Cohere
- gemini - Gemini
Request Path Overrides
The request_path_overrides field allows you to override the default API endpoint paths for specific request types. This is useful when:
- Connecting to custom or self-hosted model providers
- Integrating with proxies that expect specific URL patterns
- Using provider forks with modified API paths
Not Supported: request_path_overrides is not supported for gemini and bedrock base provider types due to their specialized API implementations.
The field accepts a mapping of request types to custom paths:
```json
{
  "request_path_overrides": {
    "chat_completion": "/v1/chat/completions",
    "chat_completion_stream": "/v1/chat/completions",
    "embedding": "/v1/embeddings",
    "text_completion": "/v1/completions"
  }
}
```
Example: OpenAI-Compatible Endpoint with Custom Paths
```json
{
  "custom-llm": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "network_config": {
      "base_url": "https://your-openai-compatible-endpoint.com"
    },
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": true
      },
      "request_path_overrides": {
        "chat_completion": "/api/v2/chat",
        "chat_completion_stream": "/api/v2/chat"
      }
    }
  }
}
```
In this example, instead of using OpenAI’s default /v1/chat/completions path, chat requests are sent to https://your-openai-compatible-endpoint.com/api/v2/chat.
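The override is transparent to callers: clients still send standard OpenAI-style requests to Bifrost, and only the provider prefix in the model name selects the custom provider (the gpt-4o-mini model name here is illustrative):

```bash
curl --location 'http://localhost:8080/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "custom-llm/gpt-4o-mini",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ]
}'
```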
Use Cases
1. Environment-Specific Configurations
Create different configurations for production, staging, and development environments:
```json
{
  "openai-production": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": true,
        "embedding": true,
        "speech": true,
        "speech_stream": true
      }
    }
  },
  "openai-staging": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": true,
        "embedding": true,
        "speech": false,
        "speech_stream": false
      }
    }
  },
  "openai-dev": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": false,
        "embedding": false,
        "speech": false,
        "speech_stream": false
      }
    }
  }
}
```
2. Role-Based Access Control
Restrict capabilities based on user roles or team permissions. You can then create virtual keys to manage who can access which providers, giving you granular control over team permissions and resource usage. This integrates with Bifrost’s governance features for comprehensive access control and monitoring:
```json
{
  "openai-developers": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": true,
        "embedding": true,
        "text_completion": true
      }
    }
  },
  "openai-analysts": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "embedding": true
      }
    }
  },
  "openai-support": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": false
      }
    }
  }
}
```
3. Feature Testing and Rollouts
Test new features with limited user groups:
```json
{
  "openai-beta-streaming": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": true,
        "embedding": false
      }
    }
  },
  "openai-stable": {
    "keys": [{ "value": "env.PROVIDER_API_KEY", "models": [], "weight": 1.0 }],
    "custom_provider_config": {
      "base_provider_type": "openai",
      "allowed_requests": {
        "chat_completion": true,
        "chat_completion_stream": false,
        "embedding": true
      }
    }
  }
}
```
Making Requests
Use your custom provider name in requests:
```bash
# Request to custom provider
curl --location 'http://localhost:8080/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "openai-custom/gpt-4o-mini",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ]
}'
```
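If a request uses an operation the custom provider does not allow, Bifrost rejects it instead of forwarding it to the base provider. As a sketch, assuming an “openai-custom” provider configured without embedding enabled, a call like the following would be rejected (the exact error payload may vary):

```bash
# Rejected: "embedding" is not in openai-custom's allowed_requests
curl --location 'http://localhost:8080/v1/embeddings' \
--header 'Content-Type: application/json' \
--data '{
  "model": "openai-custom/text-embedding-3-small",
  "input": "Hello!"
}'
```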
Relationship to Provider Configuration
Custom providers extend the standard provider configuration system. They inherit all the capabilities of their base provider while adding request type restrictions.
To learn more, see the provider configuration documentation.
Next Steps
- Fallbacks - Automatic failover between providers
- Load Balancing - Intelligent API key management with weighted load balancing
- Governance - Advanced access control and monitoring