Update Adolf wiki: current architecture, fast tools, SearXNG tuning

2026-03-15 12:33:57 +00:00
parent 6ffa86e9f0
commit e7fc6d7d35

Adolf.md

# Adolf
Autonomous personal assistant reachable via Telegram and CLI. Three-tier model routing with GPU VRAM management and long-term memory.
## Architecture
```
Telegram / CLI
[grammy] Node.js — port 3001          [cli] Python Rich REPL
  grammY long-poll → POST /message      POST /message + GET /stream SSE
[deepagents] Python FastAPI — port 8000
Pre-check: /think prefix? → force_complex=True, strip prefix
Pre-flight (asyncio.gather — all parallel):
- URL fetch (Crawl4AI)
- Memory retrieval (openmemory)
- Fast tools (WeatherTool, CommuteTool)
Fast tool matched? → deliver reply directly (no LLM)
↓ (if no fast tool)
Router (qwen2.5:1.5b)
- light: simple/conversational → router answers directly (~2–4s)
- medium: default → qwen3:4b single call (~10–20s)
- complex: /think prefix → qwen3:8b + web_search + fetch_url (~60–120s)
└→ background: flush 8b, prewarm 4b+router
channels.deliver() → Telegram / CLI SSE stream
send_telegram_message via Grammy MCP (auto-split if >4000 chars)
asyncio.create_task(store_memory_async) — spin-wait GPU idle → add_memory
↕ MCP SSE ↕ HTTP
[openmemory] Python + mem0 — port 8765 [SearXNG — port 11437]
- MCP tools: add_memory, search_memory, get_all_memories
- extractor: qwen2.5:1.5b on GPU Ollama (11436) — 2–5s
- embedder: nomic-embed-text on CPU Ollama (11435) — 50–150ms
- vector store: Qdrant (port 6333), 768 dims
```
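The pre-flight fan-out is the main latency win: everything that might be needed later runs while the router is still cold. A minimal sketch of the shape, with `fetch_urls`, `retrieve_memories`, and `run_fast_tools` as illustrative stubs for the Crawl4AI, openmemory, and fast-tool calls (not the actual `agent.py` names):
```python
import asyncio

# Illustrative stubs; the real calls go to Crawl4AI, openmemory and fast_tools.py.
async def fetch_urls(text: str) -> str | None:
    return None        # would return fetched page text for any URL in the message

async def retrieve_memories(text: str) -> list[str]:
    return []          # would call openmemory's search_memory

async def run_fast_tools(text: str) -> str | None:
    return None        # would return a reply if WeatherTool / CommuteTool matches

async def handle_message(text: str) -> str:
    force_complex = text.startswith("/think")     # pre-check
    if force_complex:
        text = text[len("/think"):].strip()       # strip the prefix
    # Pre-flight: everything runs in parallel before any LLM call.
    urls, memories, fast_reply = await asyncio.gather(
        fetch_urls(text), retrieve_memories(text), run_fast_tools(text)
    )
    if fast_reply is not None:                    # fast tool matched: no LLM
        return fast_reply
    # otherwise fall through to the router and the tiered agents (not shown)
    return f"[route: force_complex={force_complex}, {len(memories)} memories]"
```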
## Three-Tier Model Routing
| Tier | Model | Trigger | Latency |
|------|-------|---------|---------|
| Fast | — (no LLM) | Fast tool matched (weather, commute) | ~1s |
| Light | qwen2.5:1.5b (router) | Regex or LLM classifies "light" | ~2–4s |
| Medium | qwen3:4b | Default | ~10–20s |
| Complex | qwen3:8b | `/think` prefix only | ~60–120s |
**Normal VRAM**: router/extraction (1.2 GB, shared) + medium (2.5 GB) = ~3.7 GB
**Complex VRAM**: 8b alone = ~5.5 GB — flushes others first
Router uses a regex pre-classifier (greetings, simple patterns), then raw-text LLM classification. The complex tier is locked behind `/think`; an LLM classification of "complex" is downgraded to medium.
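A minimal sketch of that two-stage classification against Ollama's `/api/generate` endpoint; the regex patterns and prompt wording are illustrative, not the actual `router.py` contents:
```python
import re
import requests

OLLAMA = "http://localhost:11436"   # GPU Ollama from the services table

# Pre-classifier: obvious greetings/small talk never reach the LLM (patterns illustrative).
LIGHT_RE = re.compile(r"^(hi|hello|hey|thanks|thank you|ok)\b", re.IGNORECASE)

def classify(text: str, force_complex: bool = False) -> str:
    if force_complex:               # /think is the only path to the complex tier
        return "complex"
    if LIGHT_RE.match(text):
        return "light"
    # Raw-text LLM classification with the 1.5b router at temperature 0.
    resp = requests.post(
        f"{OLLAMA}/api/generate",
        json={
            "model": "qwen2.5:1.5b",
            "prompt": f"Classify this message as light or medium. One word only.\n\n{text}",
            "stream": False,
            "options": {"temperature": 0},
        },
        timeout=30,
    )
    label = resp.json()["response"].strip().lower()
    # Anything that is not clearly "light" (including "complex") runs as medium.
    return "light" if label.startswith("light") else "medium"
```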
## Fast Tools
Pre-flight tools run concurrently before any LLM call. If matched, the result is delivered directly — no LLM involved.
| Tool | Pattern | Source | Latency |
|------|---------|--------|---------|
| `WeatherTool` | weather/forecast/temperature/... | open-meteo.com API (Balashikha, no key) | ~200ms |
| `CommuteTool` | commute/traffic/пробки/... | routecheck:8090 → Yandex Routing API | ~1–2s |
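A sketch of the fast-tool shape using open-meteo's `current_weather` endpoint; the trigger regex and Balashikha coordinates are approximations, not the actual `fast_tools.py` values:
```python
import re
import requests

# Approximate coordinates for Balashikha (assumption; the real location
# config lives in fast_tools.py).
LAT, LON = 55.80, 37.94
WEATHER_RE = re.compile(r"\b(weather|forecast|temperature)\b", re.IGNORECASE)

def try_weather(text: str) -> str | None:
    """Return a reply if the message matches, else None (no LLM involved)."""
    if not WEATHER_RE.search(text):
        return None
    r = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": LAT, "longitude": LON, "current_weather": "true"},
        timeout=5,   # fast tools must stay fast; open-meteo needs no API key
    )
    cw = r.json()["current_weather"]
    return f"Balashikha: {cw['temperature']}°C, wind {cw['windspeed']} km/h"
```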
## Memory Pipeline
openmemory (FastMCP + mem0 + Qdrant + nomic-embed-text):
- **Before routing**: `search_memory` retrieves relevant context injected into system prompt
- **After reply**: `_store_memory()` runs as background task — extraction via `qwen2.5:1.5b`
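In outline, with `search_memory` and `add_memory` as hypothetical stand-ins for the openmemory MCP tools, the pipeline looks like:
```python
import asyncio

# Hypothetical stand-ins for the openmemory MCP tools on port 8765.
async def search_memory(query: str) -> list[str]:
    return []

async def add_memory(text: str) -> None:
    pass

async def build_system_prompt(user_text: str, base: str) -> str:
    # Before routing: retrieved memories are injected into the system prompt.
    memories = await search_memory(user_text)
    if not memories:
        return base
    return base + "\n\nRelevant memories:\n" + "\n".join(f"- {m}" for m in memories)

async def store_after_reply(user_text: str, reply: str) -> None:
    # After delivery: extraction is fire-and-forget, so the user never waits on it.
    await add_memory(f"user: {user_text}\nassistant: {reply}")

# usage inside the request handler:
#   asyncio.create_task(store_after_reply(text, reply))
```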
## VRAM Management
GTX 1070 (8 GB). Flush qwen3:4b before loading qwen3:8b for the complex tier; the explicit flush prevents Ollama spilling the 8b model to CPU.
1. Flush medium + router (`keep_alive=0`)
2. Poll `/api/ps` until evicted (15s timeout)
3. Fallback to medium on timeout
4. After complex reply: flush 8b, pre-warm medium + router
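A sketch of the flush/poll loop against Ollama's `/api/generate` (`keep_alive: 0` unloads a model) and `/api/ps` endpoints; the exact model names and 0.5s poll interval are assumptions:
```python
import time
import requests

OLLAMA = "http://localhost:11436"  # GPU Ollama

def flush(model: str) -> None:
    # keep_alive=0 with no prompt tells Ollama to unload the model.
    requests.post(f"{OLLAMA}/api/generate",
                  json={"model": model, "keep_alive": 0}, timeout=10)

def loaded_models() -> set[str]:
    resp = requests.get(f"{OLLAMA}/api/ps", timeout=5)
    return {m["name"] for m in resp.json().get("models", [])}

def make_room_for_complex(timeout: float = 15.0) -> bool:
    """Flush medium + router, then poll /api/ps until both are evicted."""
    for m in ("qwen3:4b", "qwen2.5:1.5b"):
        flush(m)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not loaded_models() & {"qwen3:4b", "qwen2.5:1.5b"}:
            return True            # VRAM is free; safe to load qwen3:8b
        time.sleep(0.5)
    return False                   # caller falls back to the medium agent
```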
## SearXNG
Port 11437. Used by `web_search` tool in complex tier.
Disabled slow/broken engines: **startpage** (3s timeout), **google news** (timeout), **qwant news/images/videos** (access denied).
Fast enabled engines: bing, duckduckgo, brave, google, yahoo (~300–1000ms).
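A sketch of what the `web_search` tool's SearXNG call might look like; this assumes `json` is enabled under `search.formats` in `settings.yml` (it is off by default):
```python
import requests

def web_search(query: str, max_results: int = 5) -> list[dict]:
    """Query the local SearXNG instance and return title/url/snippet dicts."""
    r = requests.get(
        "http://localhost:11437/search",
        params={"q": query, "format": "json"},
        timeout=10,
    )
    return [
        {"title": hit["title"], "url": hit["url"], "snippet": hit.get("content", "")}
        for hit in r.json()["results"][:max_results]
    ]
```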
## Concurrency
| Semaphore | Guards |
|-----------|--------|
| `_reply_semaphore(1)` | GPU Ollama — one LLM inference at a time |
| `_memory_semaphore(1)` | GPU Ollama — one memory extraction at a time |
Memory extraction spin-waits until `_reply_semaphore` is free (60s timeout).
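A sketch of the two-semaphore scheme; `add_memory` is a stand-in for the openmemory call, and the 1s poll interval is an assumption:
```python
import asyncio

_reply_semaphore = asyncio.Semaphore(1)    # one GPU inference at a time
_memory_semaphore = asyncio.Semaphore(1)   # one memory extraction at a time

async def add_memory(text: str) -> None:   # stand-in for the openmemory call
    pass

async def store_memory_when_gpu_idle(text: str, timeout: float = 60.0) -> None:
    """Spin-wait until no reply is holding the GPU, then run extraction."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while _reply_semaphore.locked():       # a reply generation owns the GPU
        if loop.time() > deadline:
            return                         # give up after the 60s timeout
        await asyncio.sleep(1.0)
    async with _memory_semaphore:
        await add_memory(text)             # qwen2.5:1.5b extraction on the GPU
```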
## External Services (from openai/ stack)
| Service | Host Port | Role |
|---------|-----------|------|
| Ollama GPU | 11436 | Reply inference + extraction (qwen2.5:1.5b) |
| Ollama CPU | 11435 | Memory embedding (nomic-embed-text) |
| Qdrant | 6333 | Vector store for memories |
| SearXNG | 11437 | Web search |
GPU Ollama config: `OLLAMA_MAX_LOADED_MODELS=2`, `OLLAMA_NUM_PARALLEL=1`.
SearXNG config: `/mnt/ssd/ai/searxng/config/settings.yml`
## Compose Stack
Repo: `~/adolf/``http://localhost:3000/alvis/adolf`
```bash
cd ~/adolf
docker compose up --build -d # start all services
docker compose --profile tools run --rm -it cli # interactive CLI
```
Requires `~/adolf/.env`: `TELEGRAM_BOT_TOKEN`, `ROUTECHECK_TOKEN`, `YANDEX_ROUTING_KEY`.
## Files
```
~/adolf/
├── docker-compose.yml   Services: bifrost, deepagents, openmemory, grammy, crawl4ai, routecheck, cli
├── Dockerfile           deepagents container (Python 3.12)
├── agent.py             FastAPI gateway, run_agent_task, fast tool short-circuit, memory pipeline
├── fast_tools.py WeatherTool (open-meteo), CommuteTool (routecheck), FastToolRunner
├── router.py Router — regex + qwen2.5:1.5b classification
├── channels.py Channel registry + deliver()
├── vram_manager.py VRAMManager — flush/poll/prewarm Ollama VRAM
├── agent_factory.py     _DirectModel (medium) / create_deep_agent (complex)
├── test_pipeline.py     Integration tests + benchmark (easy/medium/hard)
├── .env                 TELEGRAM_BOT_TOKEN, ROUTECHECK_TOKEN, YANDEX_ROUTING_KEY (not committed)
├── cli.py Rich Live streaming REPL
├── routecheck/ Yandex Routing API proxy (port 8090)
├── openmemory/ FastMCP + mem0 MCP server (port 8765)
└── grammy/ grammY Telegram bot (port 3001)
```