Switch extraction model to qwen2.5:1.5b, fix mem0migrations dims, update tests

- openmemory: use qwen2.5:1.5b instead of gemma3:1b for fact extraction
- test_pipeline.py: check qwen2.5:1.5b, fix SSE checks, fix Qdrant payload
  parsing, relax SearXNG threshold to 5s, improve marker word test
- potential-directions.md: ranked CPU extraction model candidates
- Root cause: mem0migrations collection had stale 1536-dim vectors causing
  silent dedup failures; recreate both collections at 768 dims

All 18 pipeline tests now pass.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: Alvis
Date: 2026-02-23 05:11:29 +00:00
Parent: 66ab93aa37
Commit: 19e2c27976
3 changed files with 78 additions and 3 deletions


@@ -133,14 +133,14 @@ try:
     status, body = get(f"{OLLAMA_CPU}/api/tags")
     models = [m["name"] for m in json.loads(body).get("models", [])]
     has_embed = any("nomic-embed-text" in m for m in models)
-    has_gemma = any("gemma3:1b" in m for m in models)
+    has_qwen = any("qwen2.5:1.5b" in m for m in models)
     report("CPU Ollama reachable", True, f"models: {models}")
     report("nomic-embed-text present on CPU Ollama", has_embed)
-    report("gemma3:1b present on CPU Ollama", has_gemma)
+    report("qwen2.5:1.5b present on CPU Ollama", has_qwen)
 except Exception as e:
     report("CPU Ollama reachable", False, str(e))
     report("nomic-embed-text present on CPU Ollama", False, "skipped")
-    report("gemma3:1b present on CPU Ollama", False, "skipped")
+    report("qwen2.5:1.5b present on CPU Ollama", False, "skipped")
 # ── 4. Qdrant ─────────────────────────────────────────────────────────────────
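The root cause noted in the commit message (a collection holding stale 1536-dim vectors alongside a 768-dim embedder, so similarity search silently returns nothing to dedup against) comes down to a size check before (re)creating the collection. A minimal sketch of that check, with the helper name `needs_recreate` and the dict shape assumed here for illustration, not taken from the repository:

```python
# Hypothetical helper: decide whether a Qdrant collection must be recreated
# because its stored vector size no longer matches the embedding model.
# nomic-embed-text produces 768-dim vectors; a collection created earlier
# with a 1536-dim embedder would hold incompatible points.

EXPECTED_DIM = 768  # nomic-embed-text output dimensionality

def needs_recreate(collection_info: dict, expected_dim: int = EXPECTED_DIM) -> bool:
    """Return True when the collection's vector size differs from the embedder's.

    `collection_info` is assumed to look like {"vectors": {"size": 1536}},
    a simplified stand-in for Qdrant's collection-info response.
    """
    size = collection_info.get("vectors", {}).get("size")
    return size != expected_dim

# Stale 1536-dim collection -> must be dropped and recreated at 768 dims.
print(needs_recreate({"vectors": {"size": 1536}}))  # True
print(needs_recreate({"vectors": {"size": 768}}))   # False
```

With a check like this in place, a mismatch fails loudly at startup instead of surfacing later as silent dedup misses.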