Overview
When an LLM returns tool calls in its response, Bifrost does not automatically execute them. Instead, your application explicitly calls the tool execution API, giving you full control over:
- Which tool calls to execute
- User approval workflows
- Security validation
- Audit logging
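
For example, before anything is sent to the execution API, an application might gate tool calls through an approval or allowlist check. The sketch below is hypothetical; `approved`, the allowlist, and the tool names stand in for whatever approval, validation, or audit step you need.

```go
package main

import "fmt"

// approved is a hypothetical gate standing in for user approval,
// security validation, or audit logging before execution.
func approved(toolName string) bool {
	allow := map[string]bool{"filesystem_list_directory": true}
	return allow[toolName]
}

func main() {
	requested := []string{"filesystem_list_directory", "filesystem_delete_file"}
	for _, name := range requested {
		if approved(name) {
			fmt.Println("execute:", name) // forward this call to the tool execution API
		} else {
			fmt.Println("skipped (not approved):", name)
		}
	}
}
```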
End-to-End Example
The walkthrough below targets the Gateway HTTP API; the Go SDK follows the same three-step flow.
Step 1: Send Chat Request
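
A minimal sketch of the initial request, assuming the gateway listens on `http://localhost:8080` and exposes an OpenAI-compatible `/v1/chat/completions` endpoint; the model id `openai/gpt-4o` and the prompt are placeholders, so adjust them to your deployment.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Assumed gateway address and OpenAI-compatible chat endpoint.
	url := "http://localhost:8080/v1/chat/completions"

	body, _ := json.Marshal(map[string]any{
		"model": "openai/gpt-4o", // placeholder provider/model id
		"messages": []map[string]any{
			{"role": "user", "content": "List the files in /tmp"},
		},
	})

	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// When the model decides to use an MCP tool, the assistant message carries
	// tool_calls (e.g. filesystem_list_directory) instead of final content.
	var out map[string]any
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Printf("%+v\n", out)
}
```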
Tool names are prefixed with the MCP client name (e.g., `filesystem_list_directory`). This ensures uniqueness across multiple MCP clients.

Step 2: Execute the Tool
The request body matches the tool call object from the response:
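
A sketch of the execution call. The endpoint path `/v1/mcp/tool/execute` and the port are assumptions about the deployment; the body is the tool call object copied verbatim from the assistant response, with `call_abc123` and the arguments as placeholder values.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// The tool call object, copied as-is from the assistant message's tool_calls.
	toolCall := `{
	  "id": "call_abc123",
	  "type": "function",
	  "function": {
	    "name": "filesystem_list_directory",
	    "arguments": "{\"path\": \"/tmp\"}"
	  }
	}`

	// Assumed tool execution endpoint; adjust to your deployment.
	resp, err := http.Post("http://localhost:8080/v1/mcp/tool/execute",
		"application/json", strings.NewReader(toolCall))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The response is a conversation message (role "tool") that is ready to
	// be appended to the history in Step 3.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```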
Step 3: Continue the Conversation

Assemble the full conversation history and continue:
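
A sketch of the follow-up request, reusing the assumptions above; the tool message in the history is the response from Step 2, shown here with placeholder content.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Full history: the original user turn, the assistant's tool_calls message,
	// and the tool result returned by the execution endpoint.
	messages := []map[string]any{
		{"role": "user", "content": "List the files in /tmp"},
		{
			"role": "assistant",
			"tool_calls": []map[string]any{{
				"id":   "call_abc123",
				"type": "function",
				"function": map[string]any{
					"name":      "filesystem_list_directory",
					"arguments": `{"path": "/tmp"}`,
				},
			}},
		},
		{
			"role":         "tool",
			"tool_call_id": "call_abc123",
			"content":      `["notes.txt", "report.pdf"]`, // output from Step 2
		},
	}

	body, _ := json.Marshal(map[string]any{
		"model":    "openai/gpt-4o", // placeholder provider/model id
		"messages": messages,
	})

	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out map[string]any
	json.NewDecoder(resp.Body).Decode(&out)
	fmt.Printf("%+v\n", out) // final assistant answer that uses the tool output
}
```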
Response Formats

Bifrost supports two API formats for tool execution:
Chat Format (Default)

Use `?format=chat` or omit the parameter; the sketch after the next subsection shows both formats.
Responses Format
Use `?format=responses` for the Responses API format:
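
A sketch contrasting the two formats against the assumed execution endpoint. The response shapes in the comments follow the OpenAI Chat Completions and Responses API conventions and are an assumption about Bifrost's exact payloads.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// execute posts one tool call to the assumed execution endpoint in the requested format.
func execute(format, toolCall string) (string, error) {
	url := "http://localhost:8080/v1/mcp/tool/execute?format=" + format
	resp, err := http.Post(url, "application/json", strings.NewReader(toolCall))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := io.ReadAll(resp.Body)
	return string(out), err
}

func main() {
	toolCall := `{"id":"call_abc123","type":"function","function":{"name":"filesystem_list_directory","arguments":"{\"path\":\"/tmp\"}"}}`

	// format=chat (default): a Chat Completions style tool message, e.g.
	//   {"role": "tool", "tool_call_id": "call_abc123", "content": "..."}
	chat, _ := execute("chat", toolCall)
	fmt.Println(chat)

	// format=responses: a Responses API style output item, e.g.
	//   {"type": "function_call_output", "call_id": "call_abc123", "output": "..."}
	responses, _ := execute("responses", toolCall)
	fmt.Println(responses)
}
```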
Multiple Tool Calls
LLMs often request multiple tools in a single response. Execute them sequentially or in parallel:
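
A sketch of both strategies, built around a hypothetical `executeTool` helper that wraps the assumed execution endpoint from Step 2. Parallel execution is appropriate when the tool calls are independent of each other.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"sync"
)

// executeTool is a hypothetical helper that POSTs one tool call object to the
// assumed execution endpoint and returns the decoded tool message.
func executeTool(toolCall map[string]any) (map[string]any, error) {
	body, err := json.Marshal(toolCall)
	if err != nil {
		return nil, err
	}
	resp, err := http.Post("http://localhost:8080/v1/mcp/tool/execute",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var msg map[string]any
	err = json.NewDecoder(resp.Body).Decode(&msg)
	return msg, err
}

func main() {
	// tool_calls taken from the assistant response.
	var toolCalls []map[string]any

	// Sequential: execute one call at a time, preserving order.
	var sequential []map[string]any
	for _, tc := range toolCalls {
		msg, err := executeTool(tc)
		if err != nil {
			fmt.Println("tool call failed:", err)
			continue
		}
		sequential = append(sequential, msg)
	}

	// Parallel: one goroutine per call, results kept in request order.
	parallel := make([]map[string]any, len(toolCalls))
	var wg sync.WaitGroup
	for i, tc := range toolCalls {
		wg.Add(1)
		go func(i int, tc map[string]any) {
			defer wg.Done()
			if msg, err := executeTool(tc); err == nil {
				parallel[i] = msg
			}
		}(i, tc)
	}
	wg.Wait()

	fmt.Println(len(sequential), len(parallel))
}
```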
Error Handling
Tool execution can fail for various reasons:
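
A sketch of defensive handling around the execution call, reusing the assumed endpoint; the exact error payload depends on the gateway, so treat the non-2xx branch as illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Assumed tool execution endpoint; adjust to your deployment.
	url := "http://localhost:8080/v1/mcp/tool/execute"
	toolCall := strings.NewReader(`{"id":"call_abc123","type":"function","function":{"name":"filesystem_list_directory","arguments":"{\"path\":\"/tmp\"}"}}`)

	resp, err := http.Post(url, "application/json", toolCall)
	if err != nil {
		// Network-level failure: gateway unreachable, timeout, etc.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		// Non-2xx status: the body carries the error details
		// (for example, an unknown tool name or invalid arguments).
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("tool execution failed (%d): %s\n", resp.StatusCode, body)
		return
	}

	var msg map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&msg); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%+v\n", msg)
}
```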
Copy-Pastable Responses

Tool execution responses are designed to be directly appended to your conversation history:
- Correct `role` field (`"tool"`)
- Matching `tool_call_id` for correlation
- Properly formatted `content`
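
For instance, the decoded response can be appended to the running history without any transformation (the field values shown are placeholders):

```go
package main

import "fmt"

func main() {
	// Running conversation history (earlier turns omitted for brevity).
	var messages []map[string]any

	// The execution response already has the shape the history expects.
	toolMsg := map[string]any{
		"role":         "tool",
		"tool_call_id": "call_abc123",
		"content":      `["notes.txt", "report.pdf"]`,
	}
	messages = append(messages, toolMsg)

	fmt.Println(len(messages))
}
```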

