Integrate Bifrost LLM gateway, add test suite, implement memory pipeline

- Add Bifrost (maximhq/bifrost) as the LLM gateway: all inference routes
  through bifrost:8080/v1, with retries and observability handled inside the
  gateway (client sketch below); VRAMManager keeps direct Ollama access for
  VRAM flush/prewarm operations
- Switch the medium tier from qwen3:4b to qwen2.5:1.5b, called directly with
  no tools via a _DirectModel wrapper (sketched below); the complex tier keeps
  create_deep_agent with qwen3:8b
- Implement an out-of-agent memory pipeline (flow sketched below):
  _retrieve_memories pre-fetches relevant context and injects it into all
  tiers; _store_memory runs as a background task after each reply, writing
  to openmemory/Qdrant
- Add tests/unit/ with 133 tests covering router, channels, vram_manager,
  agent helpers; move integration test to tests/integration/
- Add bifrost-config.json with GPU Ollama (qwen2.5:0.5b/1.5b, qwen3:4b/8b,
  gemma3:4b) and CPU Ollama providers
- Integration tests: 28/29 pass (only grammy fails, because
  TELEGRAM_BOT_TOKEN is not set)
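
Gateway client, as a minimal sketch (the OpenAI-compatible /v1 endpoint is
from this commit; the client wiring, the complete() helper, and the
provider-prefixed model string are assumptions):

    from openai import OpenAI

    # Bifrost fronts every model call; retries, backoff, and tracing happen
    # inside the gateway, so the caller stays thin.
    gateway = OpenAI(base_url="http://bifrost:8080/v1", api_key="dummy")

    def complete(model: str, prompt: str) -> str:
        # e.g. "ollama/qwen3:8b"; exact model naming follows the gateway config
        resp = gateway.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content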
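
The _DirectModel idea, sketched (the class name and the no-tools direct call
are from this commit; the method shape and defaults are illustrative):

    from openai import AsyncOpenAI

    class _DirectModel:
        # Wraps a chat model so the medium tier answers with one direct
        # completion instead of running the deep-agent tool loop.
        def __init__(self, client: AsyncOpenAI, model: str = "qwen2.5:1.5b"):
            self._client = client
            self._model = model

        async def reply(self, messages: list[dict]) -> str:
            resp = await self._client.chat.completions.create(
                model=self._model,
                messages=messages,
            )
            return resp.choices[0].message.content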
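
The memory flow, as a sketch (_retrieve_memories/_store_memory are this
commit's helpers; handle_message and run_agent are hypothetical stand-ins
for the real call sites):

    import asyncio

    async def handle_message(user_id: str, text: str) -> str:
        # Pre-fetch relevant memories and inject them into the prompt,
        # whichever tier ends up handling the request.
        memories = await _retrieve_memories(user_id, text)
        reply = await run_agent(text, context=memories)
        # Write the exchange to openmemory/Qdrant in the background so the
        # reply is never blocked on the store.
        asyncio.create_task(_store_memory(user_id, text, reply))
        return reply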

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: Alvis
Date:   2026-03-12 13:50:12 +00:00
Parent: ec45d255f0
Commit: f9618a9bbf
16 changed files with 1195 additions and 36 deletions

bifrost-config.json (new file)

@@ -0,0 +1,58 @@
{
  "client": {
    "drop_excess_requests": false
  },
  "providers": {
    "ollama": {
      "keys": [
        {
          "name": "ollama-gpu",
          "value": "dummy",
          "models": [
            "qwen2.5:0.5b",
            "qwen2.5:1.5b",
            "qwen3:4b",
            "gemma3:4b",
            "qwen3:8b"
          ],
          "weight": 1.0
        }
      ],
      "network_config": {
        "base_url": "http://host.docker.internal:11436",
        "default_request_timeout_in_seconds": 300,
        "max_retries": 2,
        "retry_backoff_initial_ms": 500,
        "retry_backoff_max_ms": 10000
      }
    },
    "ollama-cpu": {
      "keys": [
        {
          "name": "ollama-cpu-key",
          "value": "dummy",
          "models": [
            "gemma3:1b",
            "qwen2.5:1.5b",
            "qwen2.5:3b"
          ],
          "weight": 1.0
        }
      ],
      "network_config": {
        "base_url": "http://host.docker.internal:11435",
        "default_request_timeout_in_seconds": 120,
        "max_retries": 2,
        "retry_backoff_initial_ms": 500,
        "retry_backoff_max_ms": 10000
      },
      "custom_provider_config": {
        "base_provider_type": "openai",
        "allowed_requests": {
          "chat_completion": true,
          "chat_completion_stream": true
        }
      }
    }
  }
}
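
Note on the ollama-cpu block, as read from the config: it is registered as a
custom provider with base_provider_type "openai", i.e. Bifrost talks to the
CPU Ollama through its OpenAI-compatible endpoint, and allowed_requests
restricts it to chat completions (plain and streaming).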