Quick Reference: This guide uses simplified generic examples for clarity. For complete, production-ready implementations:
- OpenAI-compatible providers: See `core/providers/cerebras/` or `core/providers/groq/`
- Custom API providers: See `core/providers/huggingface/` or `core/providers/anthropic/`
Setup
- Fork and Clone:
- Fork the repository: https://github.com/maximhq/bifrost/
- Clone your fork: `git clone https://github.com/<your_github_username>/bifrost/`
- Initialize:
- Run `make dev` at the root of the project to set up dependencies and tools.
Provider Structure
Bifrost acts as a gateway:
- Receives a request in a standard format (defined in `core/schemas/`).
- Converts it to the provider-specific format.
- Sends the request to the provider’s API.
- Receives the provider’s response.
- Converts it back to the standard Bifrost response format.
First, register the new provider in `core/schemas/bifrost.go`:
- Add it to the const declaration of the `ModelProvider` type in the format `[ProviderName] ModelProvider = "[providername]"`, as sketched below.
- Then add it to the `StandardProviders` array in the same file and, if needed, to `SupportedBaseProviders`.
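For instance, registration for a hypothetical provider named "example" might look like the sketch below (the surrounding entries and exact declarations in `core/schemas/bifrost.go` are elided and illustrative):

```go
// In core/schemas/bifrost.go - illustrative sketch for a hypothetical
// "example" provider; existing entries are elided.
const (
	// ... existing providers ...
	Example ModelProvider = "example"
)

// Add the new provider to the standard providers list, and to
// SupportedBaseProviders if it also serves as a base for other providers.
var StandardProviders = []ModelProvider{
	// ... existing providers ...
	Example,
}
```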
Next, create a new folder in `core/providers/` and populate it with specific files following our strict conventions.
Directory Structure
The directory structure differs based on whether the provider is OpenAI API compatible.
Non-OpenAI-compatible Providers
If the provider has a custom API format (not OpenAI-compatible), create a new folder `core/providers/[provider_name]/`.
Complete Reference Structure (see `core/providers/huggingface/`):
- Create `types.go` FIRST - Define all provider-specific request/response structures
- Create `utils.go` SECOND - Define constants, base URLs, and helper functions
- Create feature files (`chat.go`, `embedding.go`, etc.) THIRD - Implement converters
- Create `[provider_name].go` FOURTH - Wire everything together
- Create `[provider_name]_test.go` LAST - Add comprehensive tests
OpenAI-compatible Providers
If the provider is OpenAI API compatible, you only need a minimal structure.
Minimal Reference Structure (see `core/providers/cerebras/`):
A single `[provider_name].go` file (plus its test file) that delegates all conversion logic to `core/providers/openai/`.
File Conventions & Responsibilities
We enforce strict separation of concerns to keep providers maintainable and consistent. Each file has a specific purpose and must follow these rules.
1. types.go (The Data Layer)
CRITICAL RULE: All provider-specific structs (Request/Response DTOs) MUST go here. NEVER define types in other files.
Naming Convention:
- Prefix ALL types with the provider name in PascalCase: `[ProviderName][StructName]`. Examples: `HuggingFaceChatRequest`, `HuggingFaceModel`, `HuggingFaceToolCall`

Type Definition Guidelines:
- Use `json` tags that exactly match the provider's API field names
- Use `omitempty` for optional fields
- Use pointers for nullable fields to distinguish between "not set" and "zero value"
- Group related types together with comments (e.g., `// # CHAT TYPES`, `// # MODELS TYPES`)
- Define request types before response types
- Keep nested types near their parent types
- Use `json.RawMessage` for fields that can be multiple types (string or object/array)
- Use pointers (`*float64`, `*bool`) for optional fields
- Add validation tags when appropriate (`validate:"required"`)
- Include comments for complex or non-obvious types
2. utils.go (The Helper Layer)
CRITICAL RULE: All shared utility functions, constants, and configuration helpers MUST go here.
Constants Naming Convention:
- Use camelCase for unexported constants: `defaultInferenceBaseURL`
- Use SCREAMING_SNAKE_CASE for exported constants: `INFERENCE_PROVIDERS`
- Group related constants together

Function Naming Convention:
- Use camelCase for unexported helpers: `convertTypeToLowerCase`, `parseErrorResponse`
- Use PascalCase for exported utilities: `ConfigureProxy`, `BuildHeaders`

What belongs here:
- Base URLs and API endpoints
- Default values and limits
- Provider-specific constants (like model names, inference providers)
- HTTP request helpers (headers, authentication)
- Error handling utilities
- Data transformation helpers

Best practices:
- Group constants by category (URLs, limits, enums)
- Document the source/reason for constants (API docs, limits)
- Keep helper functions focused and single-purpose
- Include error handling in utility functions
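A matching `utils.go` sketch for the same hypothetical "example" provider, with illustrative constants and helpers:

```go
package example

import "fmt"

// Base URLs and limits (source: the provider's API docs - illustrative here).
const (
	defaultBaseURL        = "https://api.example.com" // hypothetical endpoint
	defaultTimeoutSeconds = 30                        // hypothetical provider-documented limit
)

// buildHeaders assembles the common request headers, including bearer
// authentication. Unexported helper, so camelCase.
func buildHeaders(apiKey string) map[string]string {
	return map[string]string{
		"Authorization": fmt.Sprintf("Bearer %s", apiKey),
		"Content-Type":  "application/json",
	}
}
```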
3. [provider_name].go (The Controller Layer)
CRITICAL RULE: This is the orchestration layer. It coordinates the request flow but delegates all conversion logic to feature files.
Naming Convention:
- Provider struct: `[ProviderName]Provider` (e.g., `HuggingFaceProvider`)
- Constructor: `New[ProviderName]Provider(config *schemas.ProviderConfig, logger schemas.Logger)`
- Methods: Match the interface exactly: `ChatCompletion`, `ChatCompletionStream`, `ListModels`, etc.

Constructor Responsibilities:
- Accept `*schemas.ProviderConfig` and `schemas.Logger`
- Call `config.CheckAndSetDefaults()`
- Initialize `fasthttp.Client` with timeouts and limits
- Configure proxy using `providerUtils.ConfigureProxy`
- Set a default BaseURL if not provided
- Trim trailing slashes from BaseURL
- Pre-warm response pools if using `sync.Pool`
- Return the provider instance (and an error for OpenAI-compatible providers)
Method Flow (each method follows this order; see the sketch below):
- Validation: Check request validity (optional, usually done in converter)
- Convert Request: Call `To[Provider][Feature]Request()` from the feature file
- Build HTTP Request: Construct URL, headers, body
- Execute Request: Use `provider.client.Do()` or streaming logic
- Handle Errors: Parse and convert provider errors to `schemas.BifrostError`
- Convert Response: Call `ToBifrost[Feature]Response()` from the feature file
- Return Result: Return the Bifrost response or error
4. Feature Files (chat.go, embedding.go, speech.go, etc.) (The Converter Layer)
CRITICAL RULE: These files contain pure transformation functions ONLY. No HTTP calls, no logging, no side effects.
File Naming Convention:
- `chat.go` - Chat completion converters
- `embedding.go` - Embedding converters
- `speech.go` - Text-to-speech converters
- `transcription.go` - Speech-to-text converters
- `models.go` - List models converters
- `responses.go` - Response format converters

Function Naming Convention:
- To Provider Format: `To[ProviderName][Feature]Request(bifrostReq *schemas.Bifrost[Feature]Request) *[ProviderName][Feature]Request`
- To Bifrost Format: `ToBifrost[Feature]Response(providerResp *[ProviderName][Feature]Response) (*schemas.Bifrost[Feature]Response, *schemas.BifrostError)`

Examples: `ToHuggingFaceChatCompletionRequest`, `ToBifrostChatResponse`, `ToHuggingFaceEmbeddingRequest`, `ToBifrostEmbeddingResponse`
- Request converter: Bifrost → Provider
- Response converter: Provider → Bifrost
For a complete reference, see `core/providers/huggingface/chat.go`. Best practices for converters (a simplified sketch follows this list):
- Always check for nil inputs at the start
- Pre-allocate slices with known capacity for performance
- Handle optional fields using pointers in types
- Use ExtraParams for provider-specific fields not in standard schema
- Document complex conversions with inline comments
- Keep functions pure - no side effects, no external state
- Return errors when conversion fails (for response converters)
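Here is a simplified converter sketch in that spirit; the Bifrost request fields it reads (`Model`, `Messages`, `Params`) are stand-in assumptions, not the real `schemas` layout:

```go
// ToExampleChatRequest converts a Bifrost chat request into the
// provider's format. Pure function: nil-safe, no I/O, no logging.
// The schemas field names used here are assumptions for illustration.
func ToExampleChatRequest(bifrostReq *schemas.BifrostChatRequest) *ExampleChatRequest {
	if bifrostReq == nil {
		return nil
	}
	// Pre-allocate with known capacity.
	messages := make([]ExampleChatMessage, 0, len(bifrostReq.Messages))
	for _, m := range bifrostReq.Messages {
		// Marshaling a plain string into json.RawMessage cannot fail.
		content, _ := json.Marshal(m.Content)
		messages = append(messages, ExampleChatMessage{
			Role:    m.Role,
			Content: content,
		})
	}
	out := &ExampleChatRequest{
		Model:    bifrostReq.Model,
		Messages: messages,
	}
	// Optional fields stay nil (and are omitted via omitempty) unless
	// explicitly set upstream.
	if bifrostReq.Params != nil {
		out.Temperature = bifrostReq.Params.Temperature
		out.MaxTokens = bifrostReq.Params.MaxTokens
	}
	return out
}
```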
OpenAI-compatible Providers
If you are implementing a provider that is strictly OpenAI API compatible, the implementation is significantly simpler. You reuse all the conversion logic from `core/providers/openai/`.
When to Use This Approach:
- Provider's API is 100% OpenAI-compatible
- Same request/response formats
- Same endpoint paths (`/v1/chat/completions`, `/v1/completions`, etc.)
- Only differences are: base URL, authentication, and possibly some extra headers
Reference implementation: `core/providers/cerebras/cerebras.go`.
Step 1: Create the Provider File
Create `core/providers/[provider_name]/[provider_name].go`:
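A hedged sketch of such a file for a hypothetical "example" provider; the struct fields, config field paths, and default URL are illustrative:

```go
// Package example implements a hypothetical OpenAI-compatible provider.
package example

import (
	"strings"

	"github.com/maximhq/bifrost/core/schemas"
)

const defaultBaseURL = "https://api.example.com" // hypothetical

// ExampleProvider holds only provider-specific configuration; all
// conversion logic is reused from core/providers/openai.
type ExampleProvider struct {
	config *schemas.ProviderConfig
	logger schemas.Logger
}

// NewExampleProvider returns (*ExampleProvider, error): OpenAI-compatible
// constructors return an error, unlike custom providers. The
// config.NetworkConfig field path is an assumption for illustration.
func NewExampleProvider(config *schemas.ProviderConfig, logger schemas.Logger) (*ExampleProvider, error) {
	config.CheckAndSetDefaults()
	if config.NetworkConfig.BaseURL == "" {
		config.NetworkConfig.BaseURL = defaultBaseURL
	}
	// Always trim trailing slashes so path joining stays predictable.
	config.NetworkConfig.BaseURL = strings.TrimRight(config.NetworkConfig.BaseURL, "/")
	return &ExampleProvider{config: config, logger: logger}, nil
}
```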
Step 2: Implement Required Methods Using OpenAI Handlers
For each supported feature, delegate to the corresponding OpenAI handler. Chat Completion (Non-Streaming):
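A hedged delegation sketch; `openai.HandleOpenAIChatCompletionRequest`, its parameter list, and how the API key reaches this method are assumptions based on the `openai.HandleOpenAI*` naming convention, so check the real handler signatures in `core/providers/openai/`:

```go
// ChatCompletion delegates entirely to the shared OpenAI handler; only
// the URL and auth header are provider-specific. The handler name and
// its parameters below are assumptions for illustration.
func (p *ExampleProvider) ChatCompletion(ctx context.Context, key string, req *schemas.BifrostChatRequest) (*schemas.BifrostChatResponse, *schemas.BifrostError) {
	// The auth header is passed to the handler separately from ExtraHeaders.
	authHeader := map[string]string{"Authorization": "Bearer " + key}
	return openai.HandleOpenAIChatCompletionRequest(
		ctx,
		p.config.NetworkConfig.BaseURL+"/v1/chat/completions",
		req,
		authHeader,
	)
}
```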
Step 3: Implement Unsupported Methods
For features not supported by the provider, return appropriate errors (see the sketch under Error Handling below).
Key Points for OpenAI-compatible Providers
Constructor Differences:
- Returns `(*[ProviderName]Provider, error)` instead of just `*[ProviderName]Provider`
- Must set a default `BaseURL` specific to the provider
- Must trim trailing slashes from `BaseURL`

URL Construction:
- Use `provider.networkConfig.BaseURL + "/v1/[endpoint]"` for direct paths
- Use `providerUtils.GetPathFromContext(ctx, "/v1/[endpoint]")` when the path might be overridden in context

Authentication:
- Create `authHeader map[string]string` with `Authorization: Bearer {key}`
- Pass it to OpenAI handlers separately from `ExtraHeaders`

Streaming:
- Pass `nil` for `customStreamParser` if using the standard OpenAI SSE format
- Only implement a custom parser if the provider uses a non-standard streaming format

Error Handling:
- OpenAI handlers return `*schemas.BifrostError` - propagate it directly
- For unsupported features, return a custom error with `StatusNotImplemented` (see the sketch below)
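A hedged sketch of the unsupported-feature pattern; the `schemas.BifrostError` field names here are assumptions, so mirror how existing providers construct the error:

```go
// Speech is not offered by this hypothetical provider, so it returns a
// StatusNotImplemented error. The BifrostError fields below are
// assumptions for illustration.
func (p *ExampleProvider) Speech(ctx context.Context, req *schemas.BifrostSpeechRequest) (*schemas.BifrostSpeechResponse, *schemas.BifrostError) {
	status := fasthttp.StatusNotImplemented
	return nil, &schemas.BifrostError{
		StatusCode: &status,
		Error: schemas.ErrorField{
			Message: "speech is not supported by the example provider",
		},
	}
}
```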
Benefits:
- Automatic updates - benefits from OpenAI handler improvements
- Consistent behavior - same conversion logic as OpenAI
- Easy maintenance - only provider-specific config lives in your file
Implementation Steps
Follow this exact order when implementing a new provider.
For Non-OpenAI-compatible Providers
Phase 1: Research & Planning (Before Writing Code)
- Study the Provider's API Documentation:
  - Identify all supported endpoints (chat, embeddings, speech, etc.)
  - Note the authentication method (API key, bearer token, custom headers)
  - Document the base URL and endpoint paths
  - List all request/response fields
  - Identify provider-specific parameters not in the OpenAI schema
- Create a Mapping Document (recommended): record how each provider field maps to the Bifrost/OpenAI schema before writing code.
Phase 2: Create Directory Structure
- Create the provider directory: `core/providers/[provider_name]/`
Phase 3: Define Types (types.go)
- Create `types.go` and define ALL provider-specific types, request types before response types.

Type Naming Checklist:
- ✅ All types prefixed with provider name: `HuggingFaceChatRequest`
- ✅ JSON tags match provider API exactly: `json:"model_name"`
- ✅ Optional fields use `omitempty`: `json:"temperature,omitempty"`
- ✅ Nullable fields use pointers: `*float64`, `*string`
- ✅ Flexible fields use `json.RawMessage`: `Content json.RawMessage`
- ✅ Required fields have validation tags: `validate:"required"`
Phase 4: Define Utilities (utils.go)
- Create `utils.go` and define constants and helper functions.

Utility Function Checklist:
- ✅ All base URLs defined as constants
- ✅ Helper functions use camelCase (unexported) or PascalCase (exported)
- ✅ Error handling utilities included
- ✅ HTTP header builders included
- ✅ Constants grouped logically with comments
Phase 5: Implement Converters (Feature Files)
- Create feature files in order of complexity (simplest first):
  a. Create `models.go` (if supported)
  b. Create `embedding.go` (if supported)
  c. Create `chat.go` (most complex)

Converter Checklist:
- ✅ Request converter: `To[ProviderName][Feature]Request`
- ✅ Response converter: `ToBifrost[Feature]Response`
- ✅ Nil checks at start of every function
- ✅ Pre-allocate slices with capacity
- ✅ Handle all optional fields with nil checks
- ✅ Map ExtraParams to provider-specific fields
- ✅ Return errors for response converters
- ✅ Document complex transformations
Phase 6: Implement Provider ([provider_name].go)
- Create `[provider_name].go` and wire everything together. See the detailed structure in the "File Conventions & Responsibilities" section above.

Implementation Checklist:
- ✅ Package comment at top
- ✅ All imports organized (stdlib, external, internal)
- ✅ Provider struct with correct field order
- ✅ Response pools (if using `sync.Pool`)
- ✅ Constructor with proper initialization
- ✅ `GetProviderKey()` method
- ✅ All interface methods implemented
- ✅ Each method follows the strict order: convert → execute → handle errors → convert back
Phase 7: Add Tests
- Create `[provider_name]_test.go`. See the "Adding Automated Tests" section below for complete details.
For OpenAI-compatible Providers
For OpenAI-compatible providers, follow the simpler structure shown in the "OpenAI-compatible Providers" section above.

Implementation Checklist:
- ✅ Create `[provider_name].go` only
- ✅ Import `github.com/maximhq/bifrost/core/providers/openai`
- ✅ Implement constructor returning `(*Provider, error)`
- ✅ Set a default BaseURL specific to the provider
- ✅ Delegate all methods to `openai.HandleOpenAI*` functions
- ✅ Return errors for unsupported features
- ✅ Create `[provider_name]_test.go`
Adding to UI
Once your provider is implemented and tested, you need to integrate it into the Bifrost UI and CI/CD pipelines.
Step 1: Update UI Constants
a. Add Model Placeholder (ui/lib/constants/config.ts)
Add a model placeholder example for your provider to help users understand the expected model format:
b. Set Key Requirement (ui/lib/constants/config.ts)
Specify whether your provider requires an API key:
Step 2: Add Provider Icon (ui/lib/constants/icons.tsx)
Create an SVG icon for your provider. You can use the provider’s official brand icon or a placeholder.
- Get the official icon from the provider’s brand assets or press kit
- Ensure the SVG is properly formatted and viewBox is set to “0 0 24 24”
- Use the provider’s brand color for the fill attribute
- Keep the icon simple and recognizable at small sizes
Step 3: Register Provider Name (ui/lib/constants/logs.ts)
a. Add to Known Providers List
b. Add Provider Label
Step 4: Update OpenAPI Specification (docs/openapi/openapi.json)
Add your provider to the API documentation’s provider enum:
"AI model provider" description in docs/openapi/openapi.json and add your provider to the enum array.
Step 5: Update Configuration Schema (transports/config.schema.json)
a. Add Provider to Providers Object
b. Add to Fallback Provider Enum
"fallbacks" in transports/config.schema.json and add your provider to both locations.
Step 6: Update UI README (ui/README.md)
Add your provider to the list of supported providers:
Step 7: Register Provider in Core (core/bifrost.go)
a. Add Provider Import
b. Add Case to createBaseProvider
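As a hedged sketch, the change is an import plus one switch case; the actual `createBaseProvider` signature and neighboring cases in `core/bifrost.go` may differ:

```go
// In core/bifrost.go (illustrative fragment). First import the new
// provider package:
//   "github.com/maximhq/bifrost/core/providers/example"
// then add a case for it:
switch providerKey {
// ... existing cases ...
case schemas.Example: // hypothetical provider registered in core/schemas/bifrost.go
	return example.NewExampleProvider(config, logger)
}
```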
Step 8: Add CI/CD Environment Variables
Add your provider's API key to all GitHub Actions workflow files that run tests.
Files to Update:
- `.github/workflows/pr-tests.yml`
- `.github/workflows/release-pipeline.yml` (multiple jobs)
Changes Required:
Add the environment variable to the `env:` section of each relevant job:
- `core-release` job
- `framework-release` job
- `plugins-release` job
- `bifrost-http-release` job

The corresponding secret must be added by a maintainer under Settings > Secrets and variables > Actions.
UI Integration Checklist
Before submitting your PR, verify all UI changes:
- ✅ Model placeholder added to `ui/lib/constants/config.ts`
- ✅ Key requirement set in `ui/lib/constants/config.ts`
- ✅ Provider icon added to `ui/lib/constants/icons.tsx`
- ✅ Provider name added to `ui/lib/constants/logs.ts` (KnownProvidersNames)
- ✅ Provider label added to `ui/lib/constants/logs.ts` (ProviderLabels)
- ✅ Provider added to OpenAPI spec enum (`docs/openapi/openapi.json`)
- ✅ Provider added to config schema (`transports/config.schema.json`) - 2 locations
- ✅ Provider listed in UI README (`ui/README.md`)
- ✅ Provider import added to `core/bifrost.go`
- ✅ Provider case added to `createBaseProvider` in `core/bifrost.go`
- ✅ Environment variable added to `.github/workflows/pr-tests.yml`
- ✅ Environment variable added to `.github/workflows/release-pipeline.yml` (4 jobs)
Creating Provider Documentation
MANDATORY: Every new provider must have comprehensive documentation in the docs directory. This documentation helps users understand how the provider works, what parameters it supports, and any special considerations.
Documentation File Location
Create a new MDX file at: `docs/providers/supported-providers/[provider_name].mdx`
Example: For a provider named "example", create `docs/providers/supported-providers/example.mdx`.
Documentation Structure
Your provider documentation should follow this structure for consistency. Reference complete examples:- Groq:
docs/providers/supported-providers/groq.mdx(OpenAI-compatible provider) - Bedrock:
docs/providers/supported-providers/bedrock.mdx(Custom API provider with multiple features) - Cerebras:
docs/providers/supported-providers/cerebras.mdx(OpenAI-compatible, simple) - Mistral:
docs/providers/supported-providers/mistral.mdx(Transcription + chat support) - Ollama:
docs/providers/supported-providers/ollama.mdx(Local-first infrastructure)
Required Sections
1. Front Matter
2. Overview Section
Start with a brief overview explaining:
- What the provider is and its key characteristics
- How Bifrost converts requests to/from this provider's format
- The major transformation features
3. Supported Operations Table
Create a table showing which operations are supported.
4. Feature Sections (One per Supported Feature)
For each major feature (Chat Completions, Embeddings, etc.), include:
a. Request Parameters
For each parameter, document:
- OpenAI parameter name
- How it’s transformed for the provider (renamed, dropped, etc.)
- Any special notes or constraints
b. Filtered/Dropped Parameters
c. Special Features
d. Message Conversion
e. Response Conversion
5. Streaming Section (If Supported)
6. Authentication Section
7. Configuration Section
8. Caveats/Important Notes
Use collapsible accordion sections for limitations:
- Unsupported content types (images, audio, etc.)
- Parameter limitations
- Streaming restrictions
- Special handling required
- Breaking behavioral differences from OpenAI standard
9. Warnings/Notes
Use special callouts for important information.
Code Examples in Documentation
Include examples in both formats where applicable.
Test Configuration Requirements
Environment Variables:
- REQUIRED: `[PROVIDER_NAME]_API_KEY` - the API key for the provider
- Optional: `PROVIDER_BASE_URL` - a custom base URL for testing

Test Package:
- Use `package [provider_name]_test` (note the `_test` suffix)
- This ensures tests don't access unexported functions (they exercise external behavior only)
Test Scenarios Configuration
The `testutil.TestScenarios` struct defines which tests to run. Set each field based on provider capabilities:
Core Test Scenarios
| Scenario | Enable if… |
|---|---|
| SimpleChat | Provider supports basic chat completion |
| CompletionStream | Provider supports streaming chat |
| TextCompletion | Provider supports text completions (legacy) |
| TextCompletionStream | Provider supports streaming text completions |
| ToolCalls | Provider supports function/tool calling |
| ToolCallsStreaming | Provider supports streaming with tool calls |
| Embedding | Provider supports text embeddings |
| ListModels | Provider has a list models endpoint |
| ImageURL | Provider accepts image URLs in messages |
| ImageBase64 | Provider accepts base64-encoded images |
Model Configuration
ChatModel (REQUIRED if any chat scenario is enabled). Also configure fallback models:
- Fallbacks are tested if the primary model fails
- This verifies that the fallback mechanism works correctly
Running Tests
Run all tests for your provider with `go test -v` from the provider's package directory (`core/providers/[provider_name]/`).
Test Checklist
Before submitting your provider, ensure:
- ✅ Test file named `[provider_name]_test.go`
- ✅ Package is `[provider_name]_test`
- ✅ `t.Parallel()` called at start
- ✅ API key check with `t.Skip()` if not available
- ✅ All supported scenarios enabled in config
- ✅ All unsupported scenarios disabled (set to `false`)
- ✅ Appropriate models specified (ChatModel, TextModel, EmbeddingModel)
- ✅ Fallback models configured (at least 1-2)
- ✅ `client.Shutdown()` called at end
- ✅ Tests pass locally with valid API key
- ✅ Tests skip gracefully without API key
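Pulling the checklist together, a skeleton test file might look like the sketch below; the `testutil` import path, `SetupClient`, and `RunScenarios` are assumptions, so mirror an existing provider's test (e.g., in `core/providers/cerebras/`) for the real helpers:

```go
// Package example_test uses the _test suffix so only exported behavior
// is exercised.
package example_test

import (
	"os"
	"testing"

	// Import path is an assumption; check existing provider tests.
	"github.com/maximhq/bifrost/core/providers/testutil"
)

func TestExampleProvider(t *testing.T) {
	t.Parallel() // called at the start, per the checklist

	// Skip gracefully when credentials are unavailable (forks, local runs).
	if os.Getenv("EXAMPLE_API_KEY") == "" {
		t.Skip("EXAMPLE_API_KEY not set; skipping provider tests")
	}

	// Enable only what the provider actually supports; everything else
	// stays false. Field names follow the scenarios table above.
	scenarios := testutil.TestScenarios{
		SimpleChat:       true,
		CompletionStream: true,
		ToolCalls:        true,
		Embedding:        false, // hypothetical provider has no embeddings
	}

	// SetupClient and RunScenarios are assumed helpers; use the real
	// setup and runner from an existing provider test.
	client := testutil.SetupClient(t)
	defer client.Shutdown() // required by the checklist
	testutil.RunScenarios(t, client, scenarios)
}
```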
Common Test Failures and Solutions
Test hangs indefinitely:
- Solution: Add a timeout: `go test -v -timeout 2m`
- Cause: Provider not responding or network issue

Tests skipped (missing API key):
- Solution: Export the required environment variable
- Not a failure: Tests correctly skip when credentials are unavailable

Unsupported feature errors:
- Solution: Set the scenario to `false` in `TestScenarios`
- Cause: Test trying to run an unsupported feature

Model not found errors:
- Solution: Update ChatModel/TextModel to a valid model for the provider
- Cause: Model name incorrect or not available

Streaming tests fail:
- Solution: Check the streaming implementation in the provider
- Cause: SSE parsing error or incorrect stream handling

Tool call tests fail:
- Solution: Verify tool/function conversion in `chat.go`
- Cause: Tool format doesn't match the provider's expected structure
CI/CD Integration
After your tests pass locally, ensure they'll run in the CI/CD pipeline.
GitHub Actions Setup
Required Secret: Your provider's API key must be added to GitHub repository secrets by a maintainer:
- Secret name: `PROVIDER_NAME_API_KEY` (uppercase, underscores)
- Examples: `HUGGING_FACE_API_KEY`, `CEREBRAS_API_KEY`

Workflow Files:
- `.github/workflows/pr-tests.yml`
- `.github/workflows/release-pipeline.yml` (4 jobs)
Tests run automatically on:
- Pull requests (PR tests workflow)
- Release builds (release pipeline workflow)
- Manual workflow triggers

If the secret is not configured, tests skip gracefully via the `t.Skip()` check in your test file.
Final Pre-Submission Checklist
Before creating a pull request, verify everything is complete.

Provider Implementation:
- ✅ Provider code follows file structure conventions
- ✅ All supported features implemented correctly
- ✅ Error handling properly converts to `schemas.BifrostError`
- ✅ OpenAI handlers used if provider is compatible
- ✅ Code is well-commented and documented

Testing:
- ✅ Test file created: `[provider_name]_test.go`
- ✅ All supported scenarios enabled
- ✅ All unsupported scenarios disabled
- ✅ Tests pass locally with valid API key
- ✅ Tests skip gracefully without API key
- ✅ Appropriate models configured

Registration:
- ✅ Provider added to `core/schemas/bifrost.go` (ModelProvider type + arrays)
- ✅ Provider registered in `core/bifrost.go` (import + case)

UI Integration:
- ✅ All 7 UI files updated (config.ts, icons.tsx, logs.ts, etc.)
- ✅ Provider icon looks good and is recognizable
- ✅ Model placeholders are helpful examples

CI/CD:
- ✅ Environment variables added to workflow files
- ✅ API key secret name follows convention

Documentation:
- ✅ Provider-specific parameters documented (if any)
- ✅ Example usage added (optional but helpful)
- ✅ Any special setup instructions noted

