If your Allowed Headers setting is already *, you can skip this note. If not, and you run into issues integrating Bifrost with Codex CLI, switch it to * or add the specific headers your client requires. By default, Bifrost whitelists: Content-Type, Authorization, X-Requested-With, X-Stainless-Timeout, and X-Api-Key.

Installing Codex CLI
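If you don't have Codex CLI yet, install it globally. A minimal example assuming you use npm (Homebrew's codex formula also works on macOS):

```bash
# Install Codex CLI globally and confirm it runs
npm install -g @openai/codex
codex --version
```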
Configuring Codex CLI with Bifrost
Update config.toml
Add the Bifrost base URL and credentials to your global ~/.codex/config.toml or a project-specific .codex/config.toml:
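A minimal sketch, assuming Bifrost runs locally on port 8080 and exposes its OpenAI-compatible route under /openai; adjust base_url to match your deployment:

```toml
# Route Codex requests through Bifrost instead of api.openai.com
model_provider = "bifrost"

[model_providers.bifrost]
name = "Bifrost"
# Assumed local gateway URL; point this at your Bifrost deployment's
# OpenAI-compatible base URL.
base_url = "http://localhost:8080/openai/v1"
# Codex reads the API key from this environment variable.
env_key = "OPENAI_API_KEY"
# Talk to Bifrost over the OpenAI Responses API.
wire_api = "responses"
```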
Run codex from the same terminal session where you exported the variables, or restart the terminal after changing your shell profile. GUI-launched terminals and IDEs may not pick up shell-profile exports unless the environment is configured there as well.
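For example, in the shell you're about to launch Codex from (the key value is a placeholder; use whatever key your Bifrost deployment expects):

```bash
# Export the key and start Codex in the same session
export OPENAI_API_KEY="sk-bifrost-..."   # placeholder value
codex
```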
Codex CLI defaults to WebSocket mode for the Responses API and automatically falls back to HTTPS if the WebSocket connection fails. To make Codex CLI use HTTPS by default, add these settings to your config.toml:
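A sketch of what that could look like; the option name below is hypothetical (Codex CLI's transport settings vary by version), so confirm the actual key in your version's configuration reference:

```toml
# Hypothetical option name for illustration only -- replace with the
# actual setting from your Codex CLI version's config reference.
experimental_use_responses_websocket = false
```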
Model Configuration
Use the --model flag to start Codex with a specific model:
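For example (the model name here is illustrative):

```bash
# Start a session pinned to a specific model
codex --model "gpt-4o"
```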
Or switch models during a session with the /model command:
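Typing /model at the prompt inside a running session opens Codex's model picker:

```text
/model
```

This lets you switch models without restarting the session.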
Using Non-OpenAI Models with Codex CLI
Bifrost automatically translates OpenAI API requests for other providers, so you can use Codex CLI with models from Anthropic, Google, Mistral, and more. Use the provider/model-name format to specify any Bifrost-configured model:
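For example, to point a session at an Anthropic model routed through Bifrost (the model name is illustrative; use any model you've configured in Bifrost):

```bash
# One-off session with a non-OpenAI model via Bifrost
codex --model "anthropic/claude-sonnet-4"
```

You can also persist the choice by setting model = "anthropic/claude-sonnet-4" at the top of config.toml.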
Supported Providers
Bifrost supports the following providers with the provider/model-name format:
openai, azure, gemini, vertex, bedrock, mistral, groq, cerebras, cohere, perplexity, xai, ollama, openrouter, huggingface, nebius, parasail, replicate, vllm, sgl

