Update Adolf wiki: current architecture, fast tools, SearXNG tuning

2026-03-15 12:33:57 +00:00
parent 6ffa86e9f0
commit e7fc6d7d35

Adolf.md

# Adolf

Autonomous personal assistant reachable via Telegram and CLI. Three-tier model routing with GPU VRAM management and long-term memory.
## Architecture

```
Telegram / CLI

[grammy] Node.js — port 3001        [cli] Python Rich REPL
grammY long-poll → POST /message    POST /message + GET /stream SSE

        ↓

[deepagents] Python FastAPI — port 8000

Pre-flight (asyncio.gather — all parallel):
  - URL fetch (Crawl4AI)
  - Memory retrieval (openmemory)
  - Fast tools (WeatherTool, CommuteTool)

Fast tool matched? → deliver reply directly (no LLM)
        ↓ (if no fast tool)
Router (qwen2.5:1.5b)
  - light:   simple/conversational → router answers directly (~2–4s)
  - medium:  default → qwen3:4b single call (~10–20s)
  - complex: /think prefix → qwen3:8b + web_search + fetch_url (~60–120s)

channels.deliver() → Telegram / CLI SSE stream
asyncio.create_task(_store_memory()) — background

        ↕ HTTP

[SearXNG — port 11437]
```
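The pre-flight stage can be sketched as a single `asyncio.gather` call, so its added latency is bounded by the slowest branch. The three coroutines below are illustrative stubs standing in for Crawl4AI, openmemory, and the fast tools, not the actual code:

```python
import asyncio

async def fetch_urls(text):
    # stub: Crawl4AI fetch of any URLs found in the message
    return None

async def retrieve_memories(text):
    # stub: openmemory search_memory call
    return []

async def run_fast_tools(text):
    # stub: WeatherTool / CommuteTool pattern match
    return "stub weather reply" if "weather" in text.lower() else None

async def preflight(text):
    # All three checks run concurrently; total latency = slowest branch.
    url_ctx, memories, fast_reply = await asyncio.gather(
        fetch_urls(text), retrieve_memories(text), run_fast_tools(text)
    )
    if fast_reply is not None:
        return ("fast", fast_reply)      # short-circuit: no LLM call at all
    return ("route", (url_ctx, memories))
```

If a fast tool matches, the reply is delivered without touching the router; otherwise the gathered URL context and memories feed the routed LLM call.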
## Three-Tier Model Routing

| Tier | Model | Trigger | Latency |
|------|-------|---------|---------|
| Fast | — (no LLM) | Fast tool matched (weather, commute) | ~1s |
| Light | qwen2.5:1.5b (router) | Regex or LLM classifies "light" | ~2–4s |
| Medium | qwen3:4b | Default | ~10–20s |
| Complex | qwen3:8b | `/think` prefix only | ~60–120s |
Complex tier is locked behind `/think` — LLM classification of "complex" is downgraded to medium.
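The gating rules can be sketched as follows; the regex pattern list and function name are illustrative assumptions, not taken from `router.py`:

```python
import re

# Illustrative pre-classifier patterns (greetings, acknowledgements).
_LIGHT_RE = re.compile(r"^(hi|hello|thanks|ok|привет)\b", re.IGNORECASE)

def classify(text, llm_label="medium"):
    """Return (tier, text). llm_label stands in for the qwen2.5:1.5b verdict."""
    if text.startswith("/think"):
        # /think is the only path to the complex tier (qwen3:8b)
        return "complex", text[len("/think"):].strip()
    if _LIGHT_RE.match(text):
        return "light", text            # regex pre-classifier hit, no LLM needed
    if llm_label == "complex":
        llm_label = "medium"            # downgrade: complex requires /think
    return llm_label, text
```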
## Fast Tools
Pre-flight tools run concurrently before any LLM call. If matched, the result is delivered directly — no LLM involved.
| Tool | Pattern | Source | Latency |
|------|---------|--------|---------|
| `WeatherTool` | weather/forecast/temperature/... | open-meteo.com API (Balashikha, no key) | ~200ms |
| `CommuteTool` | commute/traffic/пробки/... | routecheck:8090 → Yandex Routing API | ~1–2s |
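A minimal sketch of the pattern-gated dispatch; the trigger lists and reply strings are illustrative, and the real tools call the external APIs that are stubbed out here:

```python
import re

FAST_TOOLS = [
    # (trigger pattern, handler); handlers stub out the real API calls
    (re.compile(r"weather|forecast|temperature", re.IGNORECASE),
     lambda: "open-meteo: stub forecast"),     # would call api.open-meteo.com
    (re.compile(r"commute|traffic|пробки", re.IGNORECASE),
     lambda: "routecheck: stub travel time"),  # would call routecheck:8090
]

def match_fast_tool(text):
    """First pattern hit wins; None means fall through to the router."""
    for pattern, run in FAST_TOOLS:
        if pattern.search(text):
            return run()
    return None
```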
## Memory Pipeline
openmemory (FastMCP + mem0 + Qdrant + nomic-embed-text):
- **Before routing**: `search_memory` retrieves relevant context, which is injected into the system prompt
- **After reply**: `_store_memory()` runs as background task — extraction via `qwen2.5:1.5b`
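The fire-and-forget write can be sketched as follows, with stubs in place of the LLM reply and the openmemory call:

```python
import asyncio

stored = []   # stands in for Qdrant via openmemory's add_memory

async def _store_memory(text):
    await asyncio.sleep(0)        # stand-in for qwen2.5:1.5b extraction latency
    stored.append(text)

async def handle(text):
    reply = f"echo: {text}"                      # stand-in for the LLM reply
    asyncio.create_task(_store_memory(text))     # fire-and-forget, not awaited
    return reply                                 # user sees the reply immediately
```

Because the task is never awaited on the reply path, extraction cost is invisible to the user; it only contends for the GPU after delivery.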
## VRAM Management

GTX 1070 (8 GB). Flush qwen3:4b before loading qwen3:8b for the complex tier.

1. Flush medium + router (`keep_alive=0`)
2. Poll `/api/ps` until evicted (15s timeout)
3. Fallback to medium on timeout
4. After complex reply: flush 8b, pre-warm medium + router
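The flush-and-poll sequence above can be sketched with the two Ollama endpoints injected as callables, so the logic is testable without a live server (names and defaults are illustrative, not from `vram_manager.py`):

```python
import time

def free_vram(post, ps, models=("qwen3:4b", "qwen2.5:1.5b"),
              timeout=15.0, interval=0.5):
    """Flush `models`, then poll until Ollama reports them evicted.
    `post`/`ps` wrap POST /api/generate and GET /api/ps. Returns False
    on timeout, in which case the caller falls back to the medium agent."""
    for name in models:
        # keep_alive=0 tells Ollama to unload the model immediately
        post("/api/generate", {"model": name, "keep_alive": 0})
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        loaded = {m["name"] for m in ps().get("models", [])}
        if not loaded & set(models):
            return True          # VRAM free: safe to load qwen3:8b
        time.sleep(interval)
    return False
```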
## SearXNG

Port 11437. Used by the `web_search` tool in the complex tier.

Disabled slow/broken engines: **startpage** (3s timeout), **google news** (timeout), **qwant news/images/videos** (access denied).
Fast enabled engines: bing, duckduckgo, brave, google, yahoo (~300–1000ms).

Config: `/mnt/ssd/ai/searxng/config/settings.yml`
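Assuming JSON output is enabled in settings.yml (`search.formats` must include `json`, which this page does not confirm), the `web_search` query URL could be built like this:

```python
from urllib.parse import urlencode

def searx_url(query, engines=("bing", "duckduckgo", "brave")):
    """Build a SearXNG JSON-API query restricted to the fast engines.
    Endpoint and parameter choices follow SearXNG's search API."""
    params = {"q": query, "format": "json", "engines": ",".join(engines)}
    return "http://localhost:11437/search?" + urlencode(params)
```

Restricting `engines` per request keeps the slow/broken engines out even if they are re-enabled globally.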
## Compose Stack

Repo: `~/adolf/` → `http://localhost:3000/alvis/adolf`

```bash
cd ~/adolf
docker compose up --build -d                      # start all services
docker compose --profile tools run --rm -it cli   # interactive CLI
```

Requires `~/adolf/.env`: `TELEGRAM_BOT_TOKEN`, `ROUTECHECK_TOKEN`, `YANDEX_ROUTING_KEY`.
## Files

```
~/adolf/
├── docker-compose.yml   Services: bifrost, deepagents, openmemory, grammy, crawl4ai, routecheck, cli
├── agent.py             FastAPI gateway, run_agent_task, fast tool short-circuit, memory pipeline
├── fast_tools.py        WeatherTool (open-meteo), CommuteTool (routecheck), FastToolRunner
├── router.py            Router — regex + qwen2.5:1.5b classification
├── channels.py          Channel registry + deliver()
├── vram_manager.py      VRAMManager — flush/poll/prewarm Ollama VRAM
├── agent_factory.py     _DirectModel (medium) / create_deep_agent (complex)
├── cli.py               Rich Live streaming REPL
├── routecheck/          Yandex Routing API proxy (port 8090)
├── openmemory/          FastMCP + mem0 MCP server (port 8765)
└── grammy/              grammY Telegram bot (port 3001)
```