These optimizations apply to Docker, Docker Compose, Kubernetes, and any container runtime using cgroups for resource management.
Quick Start
For most production deployments, add these settings to your container:

Go Runtime Tuning
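The runtime knobs described below are plain environment variables. A minimal Docker Compose sketch (the image name and values are illustrative starting points, not Bifrost defaults):

```yaml
services:
  bifrost:
    image: maximhq/bifrost:latest # image name assumed for illustration
    environment:
      - GOGC=200            # less frequent GC; suits throughput-oriented workloads
      - GOMEMLIMIT=1800MiB  # ~90% of a 2 GB container memory limit
```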
GOMAXPROCS (Automatic)
Bifrost automatically detects container CPU limits using automaxprocs. This sets `GOMAXPROCS` to match your container’s CPU quota from cgroups (v1 and v2).
No configuration needed — this works automatically. You’ll see a log line at startup:
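For example, with a 2-CPU quota the automaxprocs startup line typically looks like this (exact wording may vary by library version):

```
maxprocs: Updating GOMAXPROCS=2: determined from CPU quota
```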
GOGC (Garbage Collection)
GOGC controls garbage collection frequency. The default is 100 (GC triggers when heap grows 100% since last collection).
| Scenario | Recommended GOGC | Trade-off |
|---|---|---|
| Memory constrained | 50-100 | More frequent GC, lower memory |
| High throughput, memory available | 200-400 | Less GC overhead, higher memory |
| Latency sensitive | 50-100 | More predictable latency |
GOMEMLIMIT (Memory Limit)
GOMEMLIMIT sets a soft memory limit for the Go runtime. When approaching this limit, Go becomes more aggressive about garbage collection.
Best practice: Set to ~90% of your container’s memory limit to leave headroom for non-heap memory (goroutine stacks, CGO, etc.).
| Container Memory | Recommended GOMEMLIMIT |
|---|---|
| 512 MB | 450MiB |
| 1 GB | 900MiB |
| 2 GB | 1800MiB |
| 4 GB | 3600MiB |
| 8 GB | 7200MiB |
When using both GOGC and GOMEMLIMIT, Go collects based on whichever trigger fires first. For high-throughput workloads, set GOGC=200 or higher and let GOMEMLIMIT be the primary constraint.

System Limits
File Descriptor Limits (ulimits)
Each HTTP connection requires a file descriptor. The default container limit (often 1024) is too low for high-concurrency workloads.

| Expected Concurrent Connections | Recommended nofile |
|---|---|
| < 1000 | 4096 |
| 1000-5000 | 16384 |
| 5000-10000 | 32768 |
| > 10000 | 65536+ |
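In Docker Compose, the nofile limit can be raised per service; a sketch for the 1000–5000 connection tier (image name assumed):

```yaml
services:
  bifrost:
    image: maximhq/bifrost:latest # image name assumed
    ulimits:
      nofile:
        soft: 16384
        hard: 16384
```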
Resource Limits
Set CPU and memory limits to match your expected workload:

| Expected RPS | Recommended CPUs | Recommended Memory |
|---|---|---|
| 100-500 | 1-2 | 512MB-1GB |
| 500-2000 | 2-4 | 1-2GB |
| 2000-5000 | 4-8 | 2-4GB |
| 5000+ | 8+ | 4GB+ |
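A Docker Compose sketch of the 500–2000 RPS tier from the table (image name assumed; `deploy.resources` limits are honored by Compose v2 and Swarm):

```yaml
services:
  bifrost:
    image: maximhq/bifrost:latest # image name assumed
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 2G
```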
Docker Compose Examples
Development
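A minimal development sketch, with the image name, port, and config path assumed (adjust to your setup):

```yaml
services:
  bifrost:
    image: maximhq/bifrost:latest # image name assumed
    ports:
      - "8080:8080" # port assumed
    volumes:
      - ./config.json:/app/config/config.json # config path assumed
```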
Production (Single Node)
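A single-node production sketch combining the runtime, ulimit, and resource settings above (image name, port, and values are illustrative):

```yaml
services:
  bifrost:
    image: maximhq/bifrost:latest # image name assumed
    ports:
      - "8080:8080" # port assumed
    environment:
      - GOGC=200
      - GOMEMLIMIT=1800MiB # ~90% of the 2 GB limit below
    ulimits:
      nofile:
        soft: 16384
        hard: 16384
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 2G
    restart: unless-stopped
```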
Production (Multi-Node with PostgreSQL)
Kubernetes Configuration
Basic Deployment
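A hedged sketch of a basic Deployment (image name, port, and resource values are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bifrost
spec:
  replicas: 2
  selector:
    matchLabels:
      app: bifrost
  template:
    metadata:
      labels:
        app: bifrost
    spec:
      containers:
        - name: bifrost
          image: maximhq/bifrost:latest # image name assumed
          ports:
            - containerPort: 8080 # port assumed
          env:
            - name: GOGC
              value: "200"
            - name: GOMEMLIMIT
              value: "1800MiB" # ~90% of the 2Gi limit below
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
            limits:
              cpu: "2"
              memory: 2Gi
```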
File Descriptor Limits in Kubernetes
File descriptor limits in Kubernetes are typically set at the node level. Options include:

- Node-level configuration (recommended): Set `fs.file-max` and ulimits in your node configuration
- Init container: Use an init container with elevated privileges to set limits
- Security context: Some clusters allow setting capabilities
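A sketch of the init-container option: it raises the node-wide `fs.file-max` sysctl and requires a privileged security context, which many clusters disallow by policy:

```yaml
spec:
  initContainers:
    - name: raise-file-max
      image: busybox:1.36
      securityContext:
        privileged: true # needed to write node-level sysctls; often restricted
      command: ["sh", "-c", "sysctl -w fs.file-max=1048576"]
```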
Check your current limits inside a container with `cat /proc/sys/fs/file-max` (system-wide) and `ulimit -n` (per-process).

Bifrost Application Settings
Align Bifrost’s internal settings with your container resources:

Concurrency and Buffer Size
Configure per provider in `config.json`:

- `concurrency` = expected RPS per provider
- `buffer_size` = 1.5 × concurrency
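For a provider expected to serve ~1,000 RPS, the settings would look roughly like this (field names follow the list above; the exact nesting of Bifrost's config.json may differ):

```json
{
  "providers": {
    "openai": {
      "concurrency": 1000,
      "buffer_size": 1500
    }
  }
}
```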
Initial Pool Size
Configure globally in `config.json`:
initial_pool_size = 1.5 × total expected RPS across all providers
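For example, with two providers serving 1,000 RPS each (2,000 RPS total), the global setting would be roughly as follows (the surrounding structure is assumed; check Bifrost's configuration reference for the exact key placement):

```json
{
  "client": {
    "initial_pool_size": 3000
  }
}
```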
Tuning Checklist
Set container resource limits
Define CPU and memory limits based on expected workload. Start with 2 CPUs / 2GB for moderate loads.
Align Bifrost settings
Match `concurrency` and `buffer_size` to your container’s CPU count and expected RPS.

Troubleshooting
High Memory Usage
- Reduce `GOGC` (e.g., from 200 to 100)
- Ensure `GOMEMLIMIT` is set
- Reduce `buffer_size` and `initial_pool_size`
High Latency Spikes
- May indicate GC pauses; try reducing `GOGC`
- Check whether the container is hitting its CPU limit
- Verify `GOMAXPROCS` matches the container CPU quota (check startup logs)
Connection Errors Under Load
- Increase the `nofile` ulimit
- Ensure `buffer_size` is large enough for traffic spikes
- Check provider rate limits
Container OOM Killed
- Reduce `GOMEMLIMIT` to 85% of container memory
- Reduce `GOGC` to trigger more frequent GC
- Reduce `buffer_size` and `initial_pool_size`
Related Documentation
- Performance Tuning - Bifrost-specific performance configuration
- Helm Deployment - Kubernetes deployment with Helm
- Multi-Node Setup - Scaling across multiple instances

