Lists available models. If provider is not specified, lists all models from all configured providers.
If a virtual key is provided, Bifrost only lists (and only queries) providers allowed by that virtual key.

Response schema:

{
  "data": [
    {
      "id": "<string>",
      "canonical_slug": "<string>",
      "name": "<string>",
      "normalized_name": "<string>",
      "deployment": "<string>",
      "created": 123,
      "context_length": 123,
      "max_input_tokens": 123,
      "max_output_tokens": 123,
      "architecture": {
        "modality": "<string>",
        "tokenizer": "<string>",
        "instruct_type": "<string>",
        "input_modalities": ["<string>"],
        "output_modalities": ["<string>"]
      },
      "pricing": {
        "prompt": "<string>",
        "completion": "<string>",
        "request": "<string>",
        "image": "<string>",
        "web_search": "<string>",
        "internal_reasoning": "<string>",
        "input_cache_read": "<string>",
        "input_cache_write": "<string>"
      },
      "top_provider": {
        "is_moderated": true,
        "context_length": 123,
        "max_completion_tokens": 123
      },
      "per_request_limits": {
        "prompt_tokens": 123,
        "completion_tokens": 123
      },
      "supported_parameters": ["<string>"],
      "default_parameters": {
        "temperature": 123,
        "top_p": 123,
        "frequency_penalty": 123
      },
      "hugging_face_id": "<string>",
      "description": "<string>",
      "owned_by": "<string>",
      "supported_methods": ["<string>"]
    }
  ],
  "extra_fields": {
    "request_type": "<string>",
    "provider": "openai",
    "model_requested": "<string>",
    "model_deployment": "<string>",
    "latency": 123,
    "chunk_index": 123,
    "raw_request": {},
    "raw_response": {},
    "cache_debug": {
      "cache_hit": true,
      "cache_id": "<string>",
      "hit_type": "<string>",
      "requested_provider": "<string>",
      "requested_model": "<string>",
      "provider_used": "<string>",
      "model_used": "<string>",
      "input_tokens": 123,
      "threshold": 123,
      "similarity": 123
    }
  },
  "next_page_token": "<string>"
}
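The response is paginated via next_page_token: a non-empty token means another page exists. A minimal consumption sketch, written against a fetch callable so it stands in for the actual HTTP call (the endpoint URL is not shown on this page, and the page_token request-parameter name is an assumption):

```python
from typing import Callable, Dict, Iterator


def iter_models(fetch: Callable[[dict], dict]) -> Iterator[Dict]:
    """Yield every model across all pages of the list-models response.

    `fetch` takes query parameters and returns one parsed JSON page
    with the "data" and "next_page_token" fields described above.
    """
    params: dict = {}
    while True:
        page = fetch(params)
        yield from page.get("data", [])
        token = page.get("next_page_token")
        if not token:  # empty or missing token ends pagination
            break
        params = {"page_token": token}  # assumed parameter name
```

With a real client, fetch could be something like `lambda p: requests.get(url, headers=auth_headers, params=p).json()`.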
Documentation Index
Fetch the complete documentation index at: https://docs.getbifrost.ai/llms.txt
Use this file to discover all available pages before exploring further.
Authentication
Bearer token authentication. Use your provider API key or Bifrost authentication token.
Virtual keys (prefixed with sk-bf-) can also be passed here.

Query parameters
provider: Filter by AI model provider identifier (e.g., openai, anthropic, bedrock). One of: openai, azure, anthropic, bedrock, cohere, vertex, vllm, mistral, ollama, groq, sgl, parasail, perplexity, replicate, cerebras, gemini, openrouter, elevenlabs, huggingface, nebius, xai, runway, fireworks.
Maximum number of models to return: must be >= 0.
Token for pagination.
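Putting the bearer authentication and the query parameters above together, a request-building sketch. The base URL, the "/v1/models" path, and the "limit"/"page_token" parameter names are assumptions not confirmed by this page; only "provider" is named above:

```python
import urllib.parse

BASE_URL = "https://your-bifrost-host"  # hypothetical deployment host
API_KEY = "sk-bf-example"  # virtual key or provider API key placeholder


def build_list_models_request(provider=None, limit=None, page_token=None):
    """Assemble the URL and headers for a list-models call."""
    params = {}
    if provider is not None:
        params["provider"] = provider
    if limit is not None:
        if limit < 0:
            raise ValueError("limit must be >= 0")  # per the x >= 0 constraint
        params["limit"] = str(limit)  # assumed parameter name
    if page_token is not None:
        params["page_token"] = page_token  # assumed parameter name
    query = urllib.parse.urlencode(params)
    url = f"{BASE_URL}/v1/models" + (f"?{query}" if query else "")
    headers = {"Authorization": f"Bearer {API_KEY}"}
    return url, headers
```

Omitting provider queries all configured providers, matching the endpoint description above.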