# Getting Started

## Prerequisites

| Tool | Version | Notes |
|------|---------|-------|
| Rust | ≥ 1.78 | `rustup update stable` |
| Node.js | ≥ 20 | For the frontend |
| Ollama | any | `ollama pull nomic-embed-text && ollama pull qwen2.5:1.5b` |

> **Port note** — Port 3000 is used by Gitea on this machine. The frontend runs on **3003**; the backend on **3001**.

## Running locally

```bash
# 1. Backend (Rust + SQLite)
cd backend
cargo run
# → Listening on http://0.0.0.0:3001

# 2. Frontend (Next.js), in a second terminal
cd frontend
npm install
npm run dev -- -p 3003
# → http://localhost:3003
```

The backend auto-creates `taskpile.db` and runs schema migrations on startup. It also seeds pending `task_features` rows for any existing task that doesn't have embeddings yet, then wakes the ML worker to process them.

## First login

The default credentials are `admin` / `VQ7q1CzFe3Y` (configured via `ValidateRequestHeaderLayer::basic` in `backend/src/main.rs`).

## Verifying the ML pipeline

```bash
# Check ML status (requires auth)
curl -u admin:VQ7q1CzFe3Y --noproxy '*' http://localhost:3001/api/ml/status | jq
```

You should see `pending` tick down toward 0 as the worker processes tasks. Once `ready` matches your task count, edges will appear in the graph.

## Running tests

```bash
# Backend (Rust)
cd backend && cargo test

# Frontend (Jest)
cd frontend && npx jest
```
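If you'd rather not re-run the status `curl` by hand, the check can be scripted as a poll loop. This is a minimal sketch, not part of the repo: it assumes the `/api/ml/status` payload contains a numeric `pending` field (as the example output above suggests) and extracts it with `sed` so that `jq` isn't required.

```shell
#!/usr/bin/env sh
# Poll /api/ml/status until no tasks are pending (sketch; not in the repo).
STATUS_URL="http://localhost:3001/api/ml/status"

while true; do
  body=$(curl -s -u admin:VQ7q1CzFe3Y --noproxy '*' "$STATUS_URL")
  # Assumed payload shape: {"pending": <n>, ...}; pull out the number.
  pending=$(printf '%s' "$body" | sed -n 's/.*"pending":[[:space:]]*\([0-9][0-9]*\).*/\1/p')
  [ "$pending" = "0" ] && break
  echo "still pending: ${pending:-unknown}"
  sleep 2
done
echo "ML pipeline ready"
```

The `sed` extraction is deliberately naive (first `"pending"` key wins); it is only meant for this small, known payload.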