- Add Bifrost (maximhq/bifrost) as LLM gateway: all inference routes through bifrost:8080/v1 with retry logic and observability; VRAMManager keeps direct Ollama access for VRAM flush/prewarm operations
- Switch medium model from qwen3:4b to qwen2.5:1.5b (direct call, no tools) via the _DirectModel wrapper (sketched after the Dockerfile below); the complex tier keeps create_deep_agent with qwen3:8b
- Implement out-of-agent memory pipeline: _retrieve_memories pre-fetches relevant context (injected into all tiers), _store_memory runs as a background task after each reply, writing to openmemory/Qdrant (see the sketch after this list)
- Add tests/unit/ with 133 tests covering router, channels, vram_manager, and agent helpers; move the integration test to tests/integration/
- Add bifrost-config.json with GPU Ollama (qwen2.5:0.5b/1.5b, qwen3:4b/8b, gemma3:4b) and CPU Ollama providers
- Integration tests: 28/29 pass (only grammy fails; no TELEGRAM_BOT_TOKEN set)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
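A minimal sketch of the out-of-agent memory pipeline described above. The function names `_retrieve_memories` and `_store_memory` come from the commit; the OpenMemory URL, route paths, payload shapes, and the `handle_message` wrapper (including the `agent.run` interface) are illustrative assumptions, not the actual implementation:

```python
import asyncio
import httpx

OPENMEMORY_URL = "http://openmemory:8765"  # assumed endpoint; adjust to the deployment

async def _retrieve_memories(user_id: str, query: str) -> str:
    """Pre-fetch relevant context before the agent runs (injected into all tiers)."""
    async with httpx.AsyncClient() as client:
        # Hypothetical search route; the real OpenMemory API may differ.
        resp = await client.post(
            f"{OPENMEMORY_URL}/api/v1/memories/search",
            json={"user_id": user_id, "query": query},
        )
        resp.raise_for_status()
        hits = resp.json().get("results", [])
    return "\n".join(hit["text"] for hit in hits)

async def _store_memory(user_id: str, user_msg: str, reply: str) -> None:
    """Write the exchange back to openmemory/Qdrant, outside the agent loop."""
    async with httpx.AsyncClient() as client:
        await client.post(
            f"{OPENMEMORY_URL}/api/v1/memories",
            json={"user_id": user_id, "text": f"user: {user_msg}\nassistant: {reply}"},
        )

async def handle_message(user_id: str, text: str, agent) -> str:
    """Illustrative wiring: fetch context, run the agent, store in the background."""
    context = await _retrieve_memories(user_id, text)
    reply = await agent.run(text, context=context)  # assumed agent interface
    # Fire-and-forget so the reply is not blocked on the memory write.
    asyncio.create_task(_store_memory(user_id, text, reply))
    return reply
```

The key design point from the commit is that both steps live outside the agent: retrieval happens before any tier is invoked, and storage never delays the user-facing reply.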
FROM python:3.12-slim

WORKDIR /app

# Agent runtime dependencies: deep-agent framework, LangChain/LangGraph stack,
# FastAPI server, and MCP/community integrations.
RUN pip install --no-cache-dir deepagents langchain-openai langgraph \
    fastapi uvicorn langchain-mcp-adapters langchain-community httpx

# With multiple sources, COPY requires the destination to be a directory
# ending in a slash.
COPY agent.py channels.py vram_manager.py router.py agent_factory.py hello_world.py ./

CMD ["uvicorn", "agent:app", "--host", "0.0.0.0", "--port", "8000"]
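The CMD above serves agent:app, which talks to the gateway rather than to Ollama directly. A minimal sketch of how the _DirectModel wrapper from the commit message might call the medium model through Bifrost's OpenAI-compatible endpoint; the class shape and method names are assumptions, while bifrost:8080/v1, qwen2.5:1.5b, and the direct-call/no-tools behavior come from the commit:

```python
from langchain_openai import ChatOpenAI

class _DirectModel:
    """Assumed shape: a thin wrapper that calls the model directly, with no tools."""

    def __init__(self, model: str = "qwen2.5:1.5b"):
        self._llm = ChatOpenAI(
            model=model,
            base_url="http://bifrost:8080/v1",  # all inference routes through the gateway
            api_key="unused",  # Bifrost fronts local Ollama; the key is a placeholder
        )

    async def reply(self, prompt: str) -> str:
        # Single direct completion; no agent loop, no tool calls.
        result = await self._llm.ainvoke(prompt)
        return result.content
```

Because Bifrost exposes an OpenAI-compatible surface, the medium tier needs no special client: the stock ChatOpenAI class with a swapped base_url is enough, which is what keeps the wrapper this small.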