# Getting Started

## Prerequisites

| Tool    | Version | Notes |
| ------- | ------- | ----- |
| Rust    | ≥ 1.78  | `rustup update stable` |
| Node.js | ≥ 20    | For the frontend |
| Ollama  | any     | `ollama pull nomic-embed-text && ollama pull qwen2.5:1.5b` |

> **Port note:** Port 3000 is used by Gitea on this machine, so the frontend runs on 3003 and the backend on 3001.
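If your frontend build reads the backend URL from an environment variable (a common Next.js pattern; the variable name below is hypothetical and not taken from this repo), point it at port 3001:

```
# frontend/.env.local — hypothetical variable name, shown only as an illustration
NEXT_PUBLIC_API_URL=http://localhost:3001
```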

## Running locally

```sh
# 1. Backend (Rust + SQLite)
cd backend
cargo run
# → Listening on http://0.0.0.0:3001
```

```sh
# 2. Frontend (Next.js)
cd frontend
npm install
npm run dev -- -p 3003
# → http://localhost:3003
```

The backend auto-creates `taskpile.db` and runs schema migrations on startup. It also seeds pending `task_features` rows for any existing task that doesn't yet have embeddings, then wakes the ML worker to process them.
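You can get a feel for what this seeding looks like with a self-contained sketch against a throwaway SQLite file (don't run this against `taskpile.db`; the `task_features` column names here are assumptions based on the description above, not the real schema):

```sh
db=$(mktemp)

# Stand-in schema: one feature row per task, with a status column (assumed shape)
sqlite3 "$db" "CREATE TABLE task_features (task_id INTEGER PRIMARY KEY, status TEXT NOT NULL);"

# Seed two tasks as 'pending', the way the backend does for tasks without embeddings
sqlite3 "$db" "INSERT INTO task_features (task_id, status) VALUES (1, 'pending'), (2, 'pending');"

# Count rows still waiting for the ML worker
pending=$(sqlite3 "$db" "SELECT COUNT(*) FROM task_features WHERE status = 'pending';")
echo "pending rows: $pending"
# → pending rows: 2

rm -f "$db"
```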

## First login

The default credentials are `admin` / `VQ7q1CzFe3Y` (configured via `ValidateRequestHeaderLayer::basic` in `backend/src/main.rs`).

## Verifying the ML pipeline

```sh
# Check ML status (requires auth)
curl -u admin:VQ7q1CzFe3Y --noproxy '*' http://localhost:3001/api/ml/status | jq
```

You should see `pending` tick down toward 0 as the worker processes tasks. Once `ready` matches your task count, edges will appear in the graph.
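To wait for the worker from a script, you can poll the endpoint until `pending` reaches 0. The sketch below stubs the HTTP call with a canned response so it runs standalone; in practice, replace `fetch_status` with the `curl` command above (the exact JSON field names are assumptions based on this guide):

```sh
# Stub standing in for: curl -s -u admin:VQ7q1CzFe3Y http://localhost:3001/api/ml/status
fetch_status() {
  echo '{"pending":0,"ready":5,"failed":0}'
}

pending=$(fetch_status | jq -r '.pending')
if [ "$pending" -eq 0 ]; then
  echo "worker idle: all tasks analyzed"
else
  echo "still analyzing $pending tasks"
  sleep 2   # then poll again
fi
```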

## Running tests

```sh
# Backend (Rust)
cd backend && cargo test
```

```sh
# Frontend (Jest)
cd frontend && npx jest
```