# Taskpile
A task manager with force-directed graph visualization and an MLOps-grade semantic feature store.
## Remote

- http://localhost:3000/alvis/taskpile (Gitea, Agap server)
- Push: `git push origin master`
## Architecture
- Frontend: Next.js 14 (App Router) + React 18 + Tailwind CSS 3 + TypeScript
- Backend: Rust (Axum 0.7) + SQLite (via SQLx)
- Graph: `react-force-graph-2d` for force-directed visualization
- ML: Ollama (`nomic-embed-text` embeddings, `qwen2.5:1.5b` descriptions) — async worker, feature store in SQLite
## Project Structure
```
frontend/
  src/app/page.tsx          — Main page: tabs, panels, task state management; auth gate (LoginPage)
  src/app/layout.tsx        — Root layout
  src/components/
    GraphView.tsx           — Force graph; node selection, drag-to-center, pending_count hint
    TaskList.tsx            — Pending/completed task list with selection
    TaskItem.tsx            — Individual task card (items-center, break-words)
    TaskDetailPanel.tsx     — Right panel: full task info including AI context + ID
    LoginPage.tsx           — Login form (auth gate)
    ProjectsPanel.tsx       — Left panel: project filter
    ForceGraphClient.tsx    — ForceGraph2D ref wrapper for dynamic import
  src/lib/
    api.ts                  — API client (fetch wrappers, auth header, getMLStatus)
    types.ts                — TypeScript interfaces (Task.created_at is number/Unix secs)
  src/__tests__/
    unit/                   — Jest unit tests (API, TaskItem)
    e2e/                    — Jest integration tests (full user flows)
backend/
  src/main.rs               — Axum server on port 3001; spawns ML worker on startup
  src/state.rs              — AppState (pool + notify + cfg); FromRef for SqlitePool
  src/models.rs             — Task, GraphNode, GraphEdge, GraphData structs
  src/db.rs                 — SQLite pool; migrations; seeds pending feature rows
  src/ml/
    config.rs               — MLConfig (model IDs, prompt_version, threshold); edge_model_key()
    ollama.rs               — HTTP client; generate_description, get_embedding; render_prompt by version
    features.rs             — content_hash, encode/decode embedding, mark_pending, compute, next_stale
    edges.rs                — recompute_for_task (transactional, canonical source<target ordering)
    worker.rs               — Tokio background loop; drains pending/stale, retries failures after 30s
  src/routes/
    tasks.rs                — CRUD; create/update call mark_pending + notify.notify_one()
    graph.rs                — Pure read: tasks + task_features + task_edges; returns pending_count
    ml.rs                   — GET /api/ml/status (observability: pending/ready/failed counts, last_error)
  tests/integration_test.rs — Axum integration tests; no ML worker spawned, features stay pending
docs/                       — VitePress docs (npm run docs:dev inside docs/)
  guide/getting-started.md
  guide/architecture.md
  mlops/overview.md
  mlops/pipeline.md
  api/reference.md
```
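`edges.rs` recomputes edges transactionally with canonical `source<target` ordering, scored against the `threshold` in `MLConfig`. The repository's actual scoring function is not reproduced here; the following is a minimal sketch assuming cosine similarity over the stored embeddings (`cosine` and `canonical_edge` are illustrative names, not the real API):

```rust
/// Assumed scoring: cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Canonical ordering: store each undirected edge exactly once.
fn canonical_edge(a: &str, b: &str) -> (String, String) {
    if a < b { (a.into(), b.into()) } else { (b.into(), a.into()) }
}

fn main() {
    // Identical vectors score ~1.0; orthogonal vectors score ~0.0.
    assert!(cosine(&[1.0, 0.0], &[1.0, 0.0]) > 0.99);
    assert!(cosine(&[1.0, 0.0], &[0.0, 1.0]).abs() < 1e-6);
    // Either argument order maps to the same row key.
    assert_eq!(canonical_edge("b", "a"), canonical_edge("a", "b"));
}
```

Canonicalizing the pair before insert means each undirected pair maps to one row, which is what keeps repeated recomputes idempotent.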
Running
# Backend (port 3001)
cd backend && cargo run
# Frontend (port 3003, proxies /api to backend)
cd frontend && npm run dev -- -p 3003
# Docs (port 5173 by default)
cd docs && npm install && npm run docs:dev
Port 3000 is used by Gitea on this machine — use 3003 for the frontend.
## Testing

```sh
# Frontend tests
cd frontend && npx jest

# Backend tests
cd backend && cargo test
```
## MLOps Design

The ML pipeline follows three principles: decouple inference from serving, version the feature store, and keep pipelines idempotent.
- `POST /tasks` never calls Ollama. It inserts a `task_features` row with `status='pending'`, wakes the worker via `tokio::sync::Notify`, and returns immediately.
- The worker runs in the background, calls Ollama, writes embeddings + descriptions to `task_features`, then recomputes edges in `task_edges`. `GET /graph` is a pure SQL read — zero Ollama calls.
- Changing `desc_model`, `embed_model`, or `prompt_version` in `MLConfig` causes `next_stale()` to pick up all affected rows on the next worker tick (automatic backfill).
- Failed rows are stamped with the current model IDs to prevent a hot loop; they are retried after 30s.
- `GET /api/ml/status` shows pending/ready/failed counts and the last error message.
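The versioning and idempotency rules above reduce to two pure checks. A hedged sketch of the idea follows — the `ModelKey` struct and std's `DefaultHasher` are illustrative stand-ins, not the actual `config.rs`/`features.rs` code:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// The model configuration a feature row was computed with.
/// Changing any field in MLConfig makes old rows stale.
#[derive(Clone, PartialEq, Debug)]
struct ModelKey {
    desc_model: String,
    embed_model: String,
    prompt_version: u32,
}

/// next_stale()-style check: recompute when the stored key differs
/// from the current configuration.
fn is_stale(row_key: &ModelKey, current: &ModelKey) -> bool {
    row_key != current
}

/// mark_pending-style guard: hash the inputs so that saving a task
/// with unchanged content is a no-op for the ML pipeline.
fn content_hash(title: &str, description: &str) -> u64 {
    let mut h = DefaultHasher::new();
    title.hash(&mut h);
    description.hash(&mut h);
    h.finish()
}

fn main() {
    let v1 = ModelKey {
        desc_model: "qwen2.5:1.5b".into(),
        embed_model: "nomic-embed-text".into(),
        prompt_version: 1,
    };
    let v2 = ModelKey { prompt_version: 2, ..v1.clone() };
    assert!(!is_stale(&v1, &v1.clone())); // same config: nothing to backfill
    assert!(is_stale(&v1, &v2));          // bumped prompt_version: backfill

    let h = content_hash("Buy milk", "");
    assert_eq!(h, content_hash("Buy milk", ""));   // identical content: no re-mark
    assert_ne!(h, content_hash("Buy milk", "2%")); // edited content: re-mark
}
```

Because both checks are pure comparisons, the worker tick can apply them over and over without side effects — the definition of an idempotent pipeline.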
## Key Design Decisions
- Task IDs are UUIDs (TEXT in SQLite, strings from the backend). The frontend `Task.id` is typed as `number` but actually receives strings — selection uses string comparison throughout.
- `Task.created_at` is Unix seconds from the backend — multiply by 1000 before passing to `new Date()`.
- The graph tab and task list are switched via tabs in the center area. The left panel (projects) and right panel (task details) are independently foldable.
- Selecting a task triggers a 3-phase animation: (1) the charge force jumps to -200 so other nodes repel, (2) after 80ms the selected node slides to the canvas center over 800ms with a cubic ease-out, (3) the charge restores to -120 and the graph stabilizes. The node stays pinned (`fx`/`fy`) until a different task is selected.
- Both views (task list and graph) are always mounted using `absolute inset-0` with an opacity/pointer-events toggle — never `hidden`. This ensures `GraphView` always has real canvas dimensions from page load.
- `ForceGraph2D` canvas dimensions are driven by a `ResizeObserver`. The canvas is only mounted after the first measurement to avoid the 300×300 default size.
- The graph re-fits on tab switch and on panel resize. When a node is selected, `zoomToFit` is suppressed to avoid fighting the pin animation.
- `GraphView` shows an amber "analyzing N tasks…" pulse indicator in the legend when `pending_count > 0`.
- `TaskItem` uses `items-center` (not `items-start`) so the checkbox aligns with the vertical center of the text block. Titles and descriptions use `break-words` to prevent overflow.
- `TaskDetailPanel` shows all fields: title, description, project, tags, status, created_at (formatted), AI context (`latent_desc`) with a pending placeholder, and the task ID. The AI context section previously used `text-gray-700` (invisible on a dark background) — now `text-gray-400`.
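The 800ms slide in phase (2) of the animation uses a cubic ease-out. For reference, the standard curve (sketched here in Rust rather than copied from the component) is:

```rust
/// Cubic ease-out: fast start, gentle landing. `t` is normalized
/// progress in [0, 1]; the return value is fraction of distance covered.
fn ease_out_cubic(t: f64) -> f64 {
    let u = 1.0 - t;
    1.0 - u * u * u
}

fn main() {
    assert_eq!(ease_out_cubic(0.0), 0.0);
    assert_eq!(ease_out_cubic(1.0), 1.0);
    // Halfway through the 800ms, the node has covered 87.5% of the distance,
    // which is why the motion reads as a quick slide that settles softly.
    assert!((ease_out_cubic(0.5) - 0.875).abs() < 1e-9);
}
```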
## API
All endpoints are under `/api`, Basic Auth required:
| Method | Path | Description |
|---|---|---|
| GET | `/tasks` | List all tasks |
| POST | `/tasks` | Create task `{title, description?}` — seeds ML feature row |
| PATCH | `/tasks/:id` | Update task `{title?, description?, completed?}` |
| DELETE | `/tasks/:id` | Delete task (cascades to `task_features` + `task_edges`) |
| GET | `/graph` | Nodes + edges + `pending_count` (pure read, no Ollama) |
| GET | `/ml/status` | ML pipeline observability |
## Proxy

Do not use system proxy env vars when testing the app locally — use `curl --noproxy '*'` or equivalent.