Commit Graph

3 Commits

Alvis
f9618a9bbf Integrate Bifrost LLM gateway, add test suite, implement memory pipeline
- Add Bifrost (maximhq/bifrost) as LLM gateway: all inference routes through
  bifrost:8080/v1 with retry logic and observability; VRAMManager keeps direct
  Ollama access for VRAM flush/prewarm operations
- Switch the medium-tier model from qwen3:4b to qwen2.5:1.5b (direct call, no
  tools) via a _DirectModel wrapper; the complex tier keeps create_deep_agent
  with qwen3:8b
- Implement an out-of-agent memory pipeline: _retrieve_memories pre-fetches
  relevant context (injected into all tiers), and _store_memory runs as a
  background task after each reply, writing to openmemory/Qdrant
- Add tests/unit/ with 133 tests covering router, channels, vram_manager,
  agent helpers; move integration test to tests/integration/
- Add bifrost-config.json with GPU Ollama (qwen2.5:0.5b/1.5b, qwen3:4b/8b,
  gemma3:4b) and CPU Ollama providers
- Integration tests: 28/29 pass (only grammy fails, because no
  TELEGRAM_BOT_TOKEN is set)
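
The retry behaviour mentioned above can be sketched as a small helper that wraps each gateway call. This is a minimal illustration, not Bifrost's actual implementation: the helper name, signature, and backoff schedule are assumptions, and the real gateway presumably also inspects HTTP status codes and emits observability data.

```python
import time
from typing import Callable, Optional, TypeVar

T = TypeVar("T")

def with_retry(call: Callable[[], T], retries: int = 3, backoff: float = 0.5) -> T:
    """Run `call`, retrying on exception with exponential backoff.

    Sketch of the retry logic an LLM-gateway client needs; names and
    defaults here are illustrative assumptions, not Bifrost internals.
    """
    last_err: Optional[Exception] = None
    for attempt in range(retries):
        try:
            return call()
        except Exception as err:  # broad on purpose: treat any failure as transient
            last_err = err
            if attempt < retries - 1:
                time.sleep(backoff * 2 ** attempt)  # 0.5s, 1s, 2s, ...
    raise RuntimeError(f"gave up after {retries} attempts") from last_err
```

In this setup, an OpenAI-compatible client pointed at `http://bifrost:8080/v1` would wrap each completion request in `with_retry`, so transient gateway or backend failures are retried before surfacing to the agent.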
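
The out-of-agent memory pipeline described above can be sketched with asyncio: fetch memories before inference, inject them into the prompt, and fire off the store as a background task so it never blocks the reply. Everything below is a stand-in, assuming the commit's `_retrieve_memories`/`_store_memory` names: the bodies are placeholders for the real openmemory/Qdrant calls, and the "LLM call" is a stub.

```python
import asyncio

async def _retrieve_memories(user_msg: str) -> list[str]:
    # Placeholder for a similarity search against openmemory/Qdrant.
    await asyncio.sleep(0)
    return [f"(recalled context for: {user_msg})"]

async def _store_memory(user_msg: str, reply: str, log: list[str]) -> None:
    # Placeholder for the background write to openmemory/Qdrant.
    await asyncio.sleep(0)
    log.append(f"{user_msg} -> {reply}")

async def handle_message(user_msg: str, log: list[str]) -> str:
    memories = await _retrieve_memories(user_msg)      # pre-fetch before inference
    prompt = "\n".join(memories) + "\n" + user_msg     # inject into the tier's prompt
    reply = f"reply to: {prompt.splitlines()[-1]}"     # stand-in for the LLM call
    # Fire-and-forget: the store runs after the reply is already on its way.
    # (A production version should keep a reference to the task so it is not
    # garbage-collected before completing.)
    asyncio.create_task(_store_memory(user_msg, reply, log))
    return reply
```

The key property of this shape is that memory writes happen outside the agent loop, so a slow Qdrant write can never delay a user-facing reply.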

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-12 13:50:12 +00:00
Alvis
ec45d255f0 wiki search people tested pipeline
2026-03-05 11:22:34 +00:00
Alvis
19e2c27976 Switch extraction model to qwen2.5:1.5b, fix mem0migrations dims, update tests
- openmemory: use qwen2.5:1.5b instead of gemma3:1b for fact extraction
- test_pipeline.py: check qwen2.5:1.5b, fix SSE checks, fix Qdrant payload
  parsing, relax SearXNG threshold to 5s, improve marker word test
- potential-directions.md: ranked CPU extraction model candidates
- Root cause: mem0migrations collection had stale 1536-dim vectors causing
  silent dedup failures; recreate both collections at 768 dims

All 18 pipeline tests now pass.
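
The root cause above (a collection holding stale 1536-dim vectors while the new extractor emits 768-dim ones, so similarity-based dedup silently never matched) suggests a fail-fast guard. The sketch below is illustrative, not code from the repo; the function name and the 768 default are assumptions based on the commit message.

```python
def check_embedding_dims(vectors: list[list[float]], expected: int = 768) -> None:
    """Raise on dimension mismatch instead of letting dedup silently break.

    Illustrative guard: a 1536-dim vector left over from an old embedder
    would be caught here before it ever reaches the collection.
    """
    for i, vec in enumerate(vectors):
        if len(vec) != expected:
            raise ValueError(f"vector {i} has {len(vec)} dims, expected {expected}")
```

The actual fix in the commit was to recreate both Qdrant collections at 768 dims; with qdrant-client that is a delete-and-create (or `recreate_collection`) with a 768-size vector config, after which old 1536-dim points are gone and new writes are dimensionally consistent.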

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 05:11:29 +00:00