Commit Graph

17 Commits

Author SHA1 Message Date
Alvis
32089ed596 Add routecheck service and CommuteTool fast tool
routecheck/ — FastAPI service (port 8090):
  - Image captcha (PIL: arithmetic problem, noise, wave distortion)
  - POST /api/captcha/new + /api/captcha/solve → short-lived token
  - GET /api/route?from=lat,lon&to=lat,lon&token=... → Yandex Routing API
  - Internal bypass via INTERNAL_TOKEN env var (for CommuteTool)
  - HTTPS proxy settings forwarded into the container so it can reach the Yandex API

CommuteTool (fast_tools.py):
  - Matches commute/traffic/arrival time queries
  - Calls routecheck /api/route with ROUTECHECK_TOKEN
  - Hardcoded route: Balashikha home → Moscow center
  - Returns traffic-adjusted travel time + delay annotation

Needs: YANDEX_ROUTING_KEY + ROUTECHECK_TOKEN in .env

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 07:08:48 +00:00
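
The commit above describes the CommuteTool request path; below is a minimal sketch of that call. The /api/route endpoint, the from/to/token query parameters, and ROUTECHECK_TOKEN come from the commit message, while the coordinates and the JSON response fields (duration_min, delay_min) are illustrative assumptions, not the actual routecheck contract.

```python
import os

import httpx

ROUTECHECK_URL = os.getenv("ROUTECHECK_URL", "http://routecheck:8090")
HOME = "55.80,37.94"    # Balashikha (approximate, illustrative)
CENTER = "55.75,37.62"  # Moscow center (approximate, illustrative)

async def commute_route() -> str:
    params = {
        "from": HOME,
        "to": CENTER,
        "token": os.environ["ROUTECHECK_TOKEN"],  # service token from .env
    }
    async with httpx.AsyncClient(timeout=15) as client:
        resp = await client.get(f"{ROUTECHECK_URL}/api/route", params=params)
        resp.raise_for_status()
        data = resp.json()
    # Assumed response shape: traffic-adjusted duration plus delay vs. free flow.
    return f"~{data['duration_min']} min ({data['delay_min']} min slower than usual)"
```
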
Alvis
d2ca1926f8 WeatherTool: use Russian query for Celsius sources
'погода Балашиха сейчас' ('weather Balashikha now') returns Russian weather
sites (gismeteo, meteotrend) that report in °C, whereas English queries return
Fahrenheit snippets that the model misreads as Celsius.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 06:25:53 +00:00
Alvis
af181ba7ec Rename RealTimeSearchTool → WeatherTool, fetch Balashikha weather via SearXNG
WeatherTool queries SearXNG with a fixed 'weather Balashikha Moscow now'
query instead of passing the user message as-is. SearXNG has external
internet access and returns snippets with actual current conditions.
Direct wttr.in fetch not possible — deepagents container has no external
internet routing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 05:40:10 +00:00
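
A minimal sketch of the fixed-query SearXNG lookup the commit above describes. The in-compose hostname, the format=json output mode, and the snippet field names are assumptions for illustration; the real WeatherTool lives in fast_tools.py.

```python
import httpx

SEARXNG_URL = "http://searxng:8080"  # assumed in-compose hostname

async def fetch_weather_snippets(limit: int = 3) -> list[str]:
    params = {"q": "weather Balashikha Moscow now", "format": "json"}
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.get(f"{SEARXNG_URL}/search", params=params)
        resp.raise_for_status()
        results = resp.json().get("results", [])
    # Keep only snippet text; the agent injects these lines into the prompt.
    return [r["content"] for r in results[:limit] if r.get("content")]
```
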
Alvis
f5fc2e9bfb Introduce FastTools: pre-flight classifier + context enrichment
New fast_tools.py module:
- FastTool base class (matches + run interface)
- RealTimeSearchTool: SearXNG search for weather/news/prices/scores
- FastToolRunner: classifier that checks all tools, runs matching
  ones concurrently and returns combined context

Router accepts FastToolRunner; any_matches() forces medium tier
before LLM classification (replaces _MEDIUM_FORCE_PATTERNS regex).

agent.py: _REALTIME_RE and _searxng_search_async removed; pre-flight
gather now includes fast_tool_runner.run_matching() alongside URL
fetch and memory retrieval.

To add a new fast tool: subclass FastTool and add it to the list in agent.py.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 05:18:44 +00:00
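
A sketch of the FastTool / FastToolRunner shape introduced in the commit above. The method names (matches, run, any_matches, run_matching) follow the commit message; the bodies are illustrative, not the actual fast_tools.py code.

```python
import asyncio

class FastTool:
    """Base interface: a cheap match check plus an async run returning context."""

    def matches(self, message: str) -> bool:
        raise NotImplementedError

    async def run(self, message: str) -> str:
        raise NotImplementedError

class FastToolRunner:
    def __init__(self, tools: list[FastTool]) -> None:
        self.tools = tools

    def any_matches(self, message: str) -> bool:
        # The router uses this to force the medium tier before LLM classification.
        return any(tool.matches(message) for tool in self.tools)

    async def run_matching(self, message: str) -> str:
        matched = [tool for tool in self.tools if tool.matches(message)]
        # Run every matching tool concurrently and join their context blocks.
        results = await asyncio.gather(*(tool.run(message) for tool in matched))
        return "\n\n".join(r for r in results if r)
```
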
Alvis
436299f7e2 Add real-time query handling: pre-search enrichment + routing fix
- router.py: add _MEDIUM_FORCE_PATTERNS to block weather/news/price
  queries from light tier regardless of LLM classification
- agent.py: add _REALTIME_RE and _searxng_search_async(); real-time
  queries now run SearXNG search concurrently with URL fetch + memory
  retrieval, injecting snippets into medium system prompt
- tests/use_cases/weather_now.md: use case test for weather queries

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-13 05:08:08 +00:00
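
An illustrative sketch of the _MEDIUM_FORCE_PATTERNS pre-check added in the commit above. The specific patterns below are assumptions; only the shape of the idea (a regex hit forces the medium tier regardless of LLM classification) comes from the commit.

```python
import re

# Purely illustrative patterns; the real list in router.py is not reproduced here.
_MEDIUM_FORCE_PATTERNS = [
    re.compile(r"\b(weather|forecast|погода)\b", re.IGNORECASE),
    re.compile(r"\b(news|headline)\b", re.IGNORECASE),
    re.compile(r"\b(price|exchange rate|stock)\b", re.IGNORECASE),
]

def force_medium(message: str) -> bool:
    # Real-time queries skip the light tier regardless of LLM classification.
    return any(p.search(message) for p in _MEDIUM_FORCE_PATTERNS)
```
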
Alvis
8cd41940f0 Update docs: streaming, CLI container, use_cases tests
- /stream/{session_id} SSE endpoint replaces /reply/ for CLI
- Medium tier streams per-token via astream() with in_think filtering
- CLI now runs as Docker container (Dockerfile.cli, profile:tools)
- Correct medium model to qwen3:4b with real-time think block filtering
- Add use_cases/ test category to commands section
- Update files tree and services table

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-12 17:31:36 +00:00
Alvis
b04e8a0925 Add Rich token streaming: server SSE + CLI live display + CLI container
Server (agent.py):
- _stream_queues: per-session asyncio.Queue for token chunks
- _push_stream_chunk() / _end_stream() helpers
- Medium tier: astream() with <think> block filtering — real token streaming
- Light tier: full reply pushed as single chunk then [DONE]
- Complex tier: full reply pushed after agent completes then [DONE]
- GET /stream/{session_id} SSE endpoint (data: <chunk>\n\n, data: [DONE]\n\n)
- medium_model promoted to module-level global for astream() access

CLI (cli.py):
- stream_reply(): reads /stream/ SSE, renders tokens live with Rich Live (transient)
- Final reply rendered as Markdown after stream completes
- os.getlogin() replaced with os.getenv("USER") for container compatibility

Dockerfile.cli + docker-compose cli service (profiles: tools):
- Run: docker compose --profile tools run --rm -it cli

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-12 17:26:52 +00:00
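
A skeleton of the per-session SSE plumbing described in the commit above, assuming FastAPI. The helper names (_stream_queues, _push_stream_chunk, _end_stream) and the data:/[DONE] wire format follow the commit; everything else is an illustrative simplification of agent.py.

```python
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
_stream_queues: dict[str, asyncio.Queue] = {}  # one token queue per session

def _push_stream_chunk(session_id: str, chunk: str) -> None:
    _stream_queues.setdefault(session_id, asyncio.Queue()).put_nowait(chunk)

def _end_stream(session_id: str) -> None:
    _push_stream_chunk(session_id, "[DONE]")

@app.get("/stream/{session_id}")
async def stream(session_id: str) -> StreamingResponse:
    queue = _stream_queues.setdefault(session_id, asyncio.Queue())

    async def events():
        # data: <chunk>\n\n per token, then data: [DONE]\n\n to close the stream.
        while True:
            chunk = await queue.get()
            yield f"data: {chunk}\n\n"
            if chunk == "[DONE]":
                break

    return StreamingResponse(events(), media_type="text/event-stream")
```
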
Alvis
edc9a96f7a Add use_cases test category as Claude Code skill instructions
Use cases are markdown files that Claude Code reads, executes step by step
using its tools, and evaluates with its own judgment — not assertion scripts.

- cli_startup.md: pipe EOF into cli.py, verify banner and exit code 0
- apple_pie_research.md: /think query → complex tier → web_search + fetch →
  evaluate recipe quality, sources, and structure

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-12 17:01:13 +00:00
Alvis
a35ba83db7 Add use_cases test category with CLI startup test
tests/use_cases/ holds scenario-driven tests run by the Claude Code agent,
which acts as both the test runner and mock user. Each test prints a
structured transcript; Claude evaluates correctness.

First test: test_cli_startup.py — spawns cli.py in a subprocess, reads
the welcome banner, sends EOF, and verifies exit code 0.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-12 16:10:04 +00:00
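
A sketch of the CLI startup check described above: spawn cli.py in a subprocess, send EOF, and assert a clean exit. The banner wording and the bare cli.py path are assumptions; the real test is tests/use_cases/test_cli_startup.py.

```python
import subprocess
import sys

def test_cli_startup() -> None:
    proc = subprocess.run(
        [sys.executable, "cli.py"],
        input="",              # empty stdin == immediate EOF
        capture_output=True,
        text=True,
        timeout=30,
    )
    print(proc.stdout)         # structured transcript for the evaluating agent
    assert proc.returncode == 0
    assert "welcome" in proc.stdout.lower()  # assumed banner wording
```
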
Alvis
021104f510 Split monolithic test_pipeline.py into focused integration test scripts
- common.py: shared config, URL constants, benchmark questions, all helpers
  (get, post_json, check_sse, qdrant_count, fetch_logs, parse_run_block, wait_for, etc.)
- test_health.py: service health checks (deepagents, bifrost, GPU/CPU Ollama, Qdrant, SearXNG)
- test_memory.py: name store/recall pipeline, memory benchmark (5 facts + 10 recalls), dedup test
- test_routing.py: easy/medium/hard tier routing benchmarks with --easy/medium/hard-only flags
- Removed test_pipeline.py

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-12 16:02:57 +00:00
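
One possible shape for the shared wait_for helper listed above, purely illustrative; the real version (along with get, post_json, check_sse, and the rest) lives in common.py.

```python
import time
from typing import Callable

def wait_for(predicate: Callable[[], bool], timeout: float = 60.0,
             interval: float = 2.0) -> bool:
    """Poll predicate until it returns True or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```
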
Alvis
50097d6092 Embed Crawl4AI at all tiers, restore qwen3:4b medium, update docs
- Pre-routing URL fetch: for any message containing URLs, content is fetched
  asynchronously (httpx.AsyncClient) by _fetch_urls_from_message() before routing
- URL context and memories gathered concurrently with asyncio.gather
- Light tier upgraded to medium when URL content is present
- url_context injected into system prompt for medium and complex agents
- Complex agent retains web_search/fetch_url tools + receives pre-fetched content
- Medium model restored to qwen3:4b (was temporarily qwen2.5:1.5b)
- Unit tests added for _extract_urls
- ARCHITECTURE.md: added Tool Handling, Crawl4AI Integration, Memory Pipeline sections
- CLAUDE.md: updated request flow and Crawl4AI integration docs

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-12 15:49:34 +00:00
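
A minimal sketch of the pre-routing URL fetch described above. _extract_urls and _fetch_urls_from_message are named in the commit; the URL regex and the plain httpx GET (standing in for the Crawl4AI-backed fetch) are simplifications.

```python
import asyncio
import re

import httpx

_URL_RE = re.compile(r"https?://\S+")

def _extract_urls(message: str) -> list[str]:
    return _URL_RE.findall(message)

async def _fetch_urls_from_message(message: str) -> str:
    urls = _extract_urls(message)
    if not urls:
        return ""
    async with httpx.AsyncClient(timeout=20, follow_redirects=True) as client:
        pages = await asyncio.gather(
            *(client.get(u) for u in urls), return_exceptions=True
        )
    # Join whatever fetched successfully; this becomes the url_context injected
    # into the medium/complex system prompts.
    return "\n\n".join(p.text for p in pages if isinstance(p, httpx.Response))
```
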
Alvis
f9618a9bbf Integrate Bifrost LLM gateway, add test suite, implement memory pipeline
- Add Bifrost (maximhq/bifrost) as LLM gateway: all inference routes through
  bifrost:8080/v1 with retry logic and observability; VRAMManager keeps direct
  Ollama access for VRAM flush/prewarm operations
- Switch medium model from qwen3:4b to qwen2.5:1.5b (direct call, no tools)
  via _DirectModel wrapper; complex keeps create_deep_agent with qwen3:8b
- Implement out-of-agent memory pipeline: _retrieve_memories pre-fetches
  relevant context (injected into all tiers); _store_memory runs as a background
  task after each reply, writing to openmemory/Qdrant
- Add tests/unit/ with 133 tests covering router, channels, vram_manager,
  agent helpers; move integration test to tests/integration/
- Add bifrost-config.json with GPU Ollama (qwen2.5:0.5b/1.5b, qwen3:4b/8b,
  gemma3:4b) and CPU Ollama providers
- Integration tests: 28/29 pass (only grammy fails; no TELEGRAM_BOT_TOKEN)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-12 13:50:12 +00:00
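
A sketch of the out-of-agent memory pipeline described above. The function names follow the commit; the openmemory HTTP endpoints and payload shapes shown here are assumptions for illustration only.

```python
import httpx

OPENMEMORY_URL = "http://openmemory:8765"  # assumed in-compose hostname

async def _retrieve_memories(user_id: str, message: str) -> str:
    # Pre-fetch relevant context before routing; injected into every tier.
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.post(
            f"{OPENMEMORY_URL}/search",               # assumed endpoint
            json={"user_id": user_id, "query": message},
        )
        resp.raise_for_status()
    return "\n".join(m.get("memory", "") for m in resp.json().get("results", []))

async def _store_memory(user_id: str, message: str, reply: str) -> None:
    # Runs as a background task after the reply is sent, so it never blocks.
    async with httpx.AsyncClient(timeout=30) as client:
        await client.post(
            f"{OPENMEMORY_URL}/memories",             # assumed endpoint
            json={"user_id": user_id,
                  "text": f"user: {message}\nassistant: {reply}"},
        )

# Typical call site: asyncio.create_task(_store_memory(user_id, message, reply))
```
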
Alvis
ec45d255f0 wiki search people tested pipeline 2026-03-05 11:22:34 +00:00
Alvis
ea77b2308b Add three-tier model routing with VRAM management and benchmark suite
- Three-tier routing: light (router answers directly ~3s), medium (qwen3:4b
  + tools ~60s), complex (/think prefix → qwen3:8b + subagents ~140s)
- Router: qwen2.5:1.5b, temp=0, regex pre-classifier + raw-text LLM classification
- VRAMManager: explicit flush/poll/prewarm to prevent Ollama CPU-spill bug
- agent_factory: build_medium_agent and build_complex_agent using deepagents
  (TodoListMiddleware + SubAgentMiddleware with research/memory subagents)
- Fix: split Telegram replies >4000 chars into multiple messages
- Benchmark: 30 questions (easy/medium/hard) — 10/10/10 verified passing
  easy→light, medium→medium, hard→complex with VRAM flush confirmed

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 17:54:51 +00:00
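
A minimal sketch of the Telegram reply split mentioned above. The 4000-character chunk size comes from the commit; the newline-aware splitting strategy is an illustrative choice.

```python
def split_reply(text: str, limit: int = 4000) -> list[str]:
    """Split a long reply into chunks that each fit in one Telegram message."""
    chunks: list[str] = []
    while len(text) > limit:
        # Prefer breaking at a newline so each message stays readable.
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip("\n")
    if text:
        chunks.append(text)
    return chunks
```
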
Alvis
1718d70203 Fix system prompt: agent now correctly handles memory requests
- Tell agent that memory is saved automatically after every reply
- Instruct agent to never say it cannot store information
- Instruct agent to acknowledge and confirm when user asks to remember something
- Fix misleading startup log (gemma3:1b → qwen2.5:1.5b)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 05:22:08 +00:00
Alvis
19e2c27976 Switch extraction model to qwen2.5:1.5b, fix mem0migrations dims, update tests
- openmemory: use qwen2.5:1.5b instead of gemma3:1b for fact extraction
- test_pipeline.py: check qwen2.5:1.5b, fix SSE checks, fix Qdrant payload
  parsing, relax SearXNG threshold to 5s, improve marker word test
- potential-directions.md: ranked CPU extraction model candidates
- Root cause: mem0migrations collection had stale 1536-dim vectors causing
  silent dedup failures; recreate both collections at 768 dims

All 18 pipeline tests now pass.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 05:11:29 +00:00
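
A sketch of the 768-dim collection rebuild described above, assuming the qdrant-client package. Only mem0migrations is named in the commit; the second collection name below is a placeholder, not the real one.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient(url="http://qdrant:6333")  # assumed in-compose hostname

for name in ("mem0migrations", "SECOND_COLLECTION"):  # second name is a placeholder
    client.recreate_collection(
        collection_name=name,
        vectors_config=VectorParams(size=768, distance=Distance.COSINE),
    )
```
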
Alvis
66ab93aa37 Add Adolf architecture doc and integration test script
- ARCHITECTURE.md: comprehensive pipeline description (copied from Gitea wiki)
- test_pipeline.py: tests all services, memory, async timing, and recall

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 04:52:40 +00:00