Compare commits

..

23 Commits

Author SHA1 Message Date
Alvis
e04f9059ae Add Matrix homeserver with MatrixRTC calling support
- Synapse + PostgreSQL + coturn + LiveKit + lk-jwt-service
- Caddy entries for mtx.alogins.net, lk.alogins.net, lkjwt.alogins.net
- well-known endpoints for Matrix client/server discovery and RTC transport
- Users: admin, elizaveta, aleksandra

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-15 14:12:13 +00:00
Alvis
002f9863b0 Add users backup script with Zabbix notification
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-09 06:26:32 +00:00
Alvis
77c7cd09aa Update CLAUDE.md: expand Seafile wiki page description 2026-03-08 16:12:13 +00:00
Alvis
b66a74df06 Add Seafile backup script with Zabbix monitoring
- backup.sh: mysqldump all 3 DBs + rsync seafile-data, runs every 3 days
  via root crontab, keeps last 5 backups in /mnt/backups/seafile
- Notifies Zabbix trapper item seafile.backup.ts (id 70369) on AgapHost

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 16:06:14 +00:00
Alvis
b8db06cd21 Fix OnlyOffice→Seafile connectivity (hairpin NAT)
Add extra_hosts: docs.alogins.net:host-gateway so OnlyOffice container
can reach Seafile's callback URL without going through the public IP.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 15:45:08 +00:00
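The workaround described above boils down to a single compose entry; a minimal sketch (the service name `onlyoffice` is an assumption, the actual file is seafile/onlyoffice.yml):

```yaml
services:
  onlyoffice:                     # hypothetical service name
    extra_hosts:
      # Resolve the public Seafile hostname to the Docker host so the
      # callback request skips the public IP (hairpin NAT).
      - "docs.alogins.net:host-gateway"
```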
Alvis
7e889d8530 Add OnlyOffice integration for Seafile
- seafile/onlyoffice.yml: OnlyOffice Document Server 8.1 with JWT auth
- Expose on 127.0.0.1:6233, proxied via Caddy at office.alogins.net
- Caddyfile: add office.alogins.net → localhost:6233
- JWT secret stored in Vaultwarden (ONLYOFFICE_JWT_SECRET)
- seahub_settings.py configured inside container with ENABLE_ONLYOFFICE=True

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 15:36:30 +00:00
Alvis
73ba559593 Update CLAUDE.md: add Seafile wiki page 2026-03-08 15:18:51 +00:00
Alvis
10cb24b7e5 Add Seafile service and update Caddyfile
- seafile/: docker compose setup (seafile-mc 13, mariadb, redis, seadoc, caddy-proxy)
- Expose seafile on 127.0.0.1:8078, proxied via Caddy at docs.alogins.net
- Fix: SEAFILE_SERVER_PROTOCOL=https to avoid CSRF errors
- Fix: TIME_ZONE=Asia/Dubai (Etc/UTC+4 was invalid)
- Caddyfile: add docs.alogins.net → localhost:8078
- .gitignore: exclude seafile/.env (credentials stored in Vaultwarden)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 15:11:08 +00:00
Alvis
20c318b3c1 Update CLAUDE.md: add Vaultwarden service and wiki page 2026-03-08 13:45:34 +00:00
Alvis
8873e441c2 Add Vaultwarden backup script with Zabbix monitoring
- backup.sh: runs every 3 days via root crontab, uses built-in container
  backup command, copies db/config/rsa_key to /mnt/backups/vaultwarden,
  keeps last 5 backups, notifies Zabbix item vaultwarden.backup.ts (id 70368)
- Zabbix trigger fires if no backup received in 4 days

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 13:44:11 +00:00
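The retention pattern these backup scripts share (keep the five newest backups, then notify a Zabbix trapper item) can be sketched as follows; the demo directory and the commented `zabbix_sender` call are illustrative assumptions, not the actual script:

```shell
#!/usr/bin/env bash
# Illustrative sketch: rotate backups, keep the 5 newest, then notify Zabbix.
set -euo pipefail

# The real script would use /mnt/backups/vaultwarden; a temp dir keeps this runnable.
BACKUP_ROOT="${BACKUP_ROOT:-$(mktemp -d)}"

# Simulate seven timestamped backup directories.
for i in 1 2 3 4 5 6 7; do
    mkdir -p "$BACKUP_ROOT/backup-2026030$i"
done

# Rotation: list newest-first by mtime, delete everything after the 5th.
ls -1dt "$BACKUP_ROOT"/backup-* | tail -n +6 | xargs -r rm -rf

# On success the real script notifies the trapper item, roughly:
# zabbix_sender -z 127.0.0.1 -s AgapHost -k vaultwarden.backup.ts -o "$(date +%s)"

echo "retained: $(ls -1d "$BACKUP_ROOT"/backup-* | wc -l)"   # retained: 5
```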
Alvis
d72fd95dfd Add Vaultwarden service and update Caddyfile
- Add vaultwarden/docker-compose.yml (port 8041, data on /mnt/ssd/dbs/vw-data)
- Update Caddyfile with all current services including vw.alogins.net

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-08 13:13:49 +00:00
Alvis
87eb4fb765 Remove adolf — moved to separate repo (alvis/adolf) 2026-03-08 07:06:07 +00:00
Alvis
e2e15009e2 Add Immich backup script
Daily backup at 02:30 via root cron: DB dump + rsync of library/upload/profile
to /mnt/backups/media/. Retains 14 days of DB dumps. Monitored via Zabbix
immich.backup.age item with High trigger if stale >25h.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-07 18:28:58 +00:00
Alvis
5017827af2 cleaning 2026-03-07 17:50:46 +00:00
Alvis
a30936f120 wiki search people tested pipeline 2026-03-05 11:22:34 +00:00
Alvis
09a93c661e Add three-tier model routing with VRAM management and benchmark suite
- Three-tier routing: light (router answers directly ~3s), medium (qwen3:4b
  + tools ~60s), complex (/think prefix → qwen3:8b + subagents ~140s)
- Router: qwen2.5:1.5b, temp=0, regex pre-classifier + raw-text LLM classify
- VRAMManager: explicit flush/poll/prewarm to prevent Ollama CPU-spill bug
- agent_factory: build_medium_agent and build_complex_agent using deepagents
  (TodoListMiddleware + SubAgentMiddleware with research/memory subagents)
- Fix: split Telegram replies >4000 chars into multiple messages
- Benchmark: 30 questions (easy/medium/hard) — 10/10/10 verified passing
  easy→light, medium→medium, hard→complex with VRAM flush confirmed

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-28 17:54:51 +00:00
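The regex pre-classifier stage above can be sketched in shell (function name and patterns are hypothetical; the real router also falls through to a qwen2.5:1.5b LLM classify on raw text):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the regex pre-classifier used for tier routing.
set -euo pipefail

classify() {
    local msg="$1"
    case "$msg" in
        "/think "*) echo complex ;;   # explicit prefix routes to qwen3:8b + subagents
        *)
            # Tool-ish keywords route to the medium tier (qwen3:4b + tools);
            # everything else is answered directly by the light router.
            if printf '%s' "$msg" | grep -qiE 'search|file|calendar|remind'; then
                echo medium
            else
                echo light
            fi
            ;;
    esac
}

classify "/think plan my week"            # complex
classify "search the wiki for backups"    # medium
classify "hi"                             # light
```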
Alvis
ff20f8942d Fix system prompt: agent now correctly handles memory requests
- Tell agent that memory is saved automatically after every reply
- Instruct agent to never say it cannot store information
- Instruct agent to acknowledge and confirm when user asks to remember something
- Fix misleading startup log (gemma3:1b → qwen2.5:1.5b)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 05:22:08 +00:00
Alvis
d61dcfb83e Switch extraction model to qwen2.5:1.5b, fix mem0migrations dims, update tests
- openmemory: use qwen2.5:1.5b instead of gemma3:1b for fact extraction
- test_pipeline.py: check qwen2.5:1.5b, fix SSE checks, fix Qdrant payload
  parsing, relax SearXNG threshold to 5s, improve marker word test
- potential-directions.md: ranked CPU extraction model candidates
- Root cause: mem0migrations collection had stale 1536-dim vectors causing
  silent dedup failures; recreate both collections at 768 dims

All 18 pipeline tests now pass.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 05:11:29 +00:00
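Recreating a collection at the correct dimension looks roughly like this against Qdrant's REST API (dry-run sketch: requests are printed, not sent; the port and collection name come from the message above and may differ):

```shell
#!/usr/bin/env bash
# Dry-run sketch: recreate a Qdrant collection with 768-dim vectors.
set -euo pipefail

QDRANT="${QDRANT:-http://localhost:6333}"
COLLECTION="mem0migrations"
PAYLOAD='{"vectors":{"size":768,"distance":"Cosine"}}'

echo "DELETE $QDRANT/collections/$COLLECTION"
echo "PUT    $QDRANT/collections/$COLLECTION  $PAYLOAD"
# Actual calls would be:
# curl -s -X DELETE "$QDRANT/collections/$COLLECTION"
# curl -s -X PUT "$QDRANT/collections/$COLLECTION" \
#     -H 'Content-Type: application/json' -d "$PAYLOAD"
```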
Alvis
f6714f9392 Add Adolf architecture doc and integration test script
- ARCHITECTURE.md: comprehensive pipeline description (copied from Gitea wiki)
- test_pipeline.py: tests all services, memory, async timing, and recall

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-23 04:52:40 +00:00
Alvis
90cb41ec53 Fix zabbix agent hostnames for correct host assignment
- Container agent: rename from AgapHost to 'Zabbix server' so it monitors
  the Zabbix server container (was conflicting with the host agent)
- Enable passive listeners in container agent (remove ZBX_STARTAGENTS=0)
- Update 'Zabbix server' host interface from 127.0.0.1 to DNS zabbix-agent
  so the server can reach the agent over the backend Docker network

Host zabbix-agent2 (systemd) keeps hostname AgapHost for host monitoring.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-22 10:51:35 +00:00
Alvis
7548ba117f Add Zabbix Docker Compose config, fix agent hostname
Set AGENT_HOSTNAME=AgapHost to match the existing host in Zabbix server
(was agap-server, causing "host not found" errors).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-22 10:40:10 +00:00
Alvis
0848b6f3fb Set Gitea public domain to git.alogins.net
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 13:23:11 +00:00
Alvis
74bdf01989 Add Gitea backup/restore scripts, parameterize configs
- Add gitea/backup.sh and gitea/restore.sh
- Move hardcoded values in gitea/docker-compose.yml to gitea/.env
- Move immich .env from root to immich-app/, update env_file path
- Remove root docker-compose.yml (was only an include alias)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-21 13:19:08 +00:00
32 changed files with 1408 additions and 25 deletions

.gitignore vendored Normal file

@@ -0,0 +1,2 @@
adolf/.env
seafile/.env

CLAUDE.md

@@ -13,6 +13,7 @@ This repository manages Docker Compose configurations for the **Agap** self-host
| `immich-app/` | Immich (photo management) | 2283 | Main compose via root `docker-compose.yml` |
| `gitea/` | Gitea (git hosting) + Postgres | 3000, 222 | Standalone compose |
| `openai/` | Open WebUI + Ollama (AI chat) | 3125 | Requires NVIDIA GPU |
| `vaultwarden/` | Vaultwarden (password manager) | 8041 | Backup script in `vaultwarden/backup.sh` |
## Common Commands
@@ -90,6 +91,8 @@ When changes are made to infrastructure (services, config, setup), update the re
| Home-Assistant | KVM-based Home Assistant setup |
| 3X-UI | VPN proxy panel |
| Gitea | Git hosting Docker service |
| Vaultwarden | Password manager, CLI setup, backup |
| Seafile | File sync, document editing, OnlyOffice, WebDAV |
### Read Wiki Pages (API)
@@ -125,3 +128,100 @@ git push http://alvis:$GITEA_TOKEN@localhost:3000/alvis/AgapHost.wiki.git main
- Remove outdated or redundant content when updating
- Create a new page if a topic doesn't exist yet
- Wiki files are Markdown, named `<PageTitle>.md`
## Home Assistant API
**Instance**: `https://haos.alogins.net`
**Token**: Read from `$HA_TOKEN` environment variable — never hardcode it
**Base URL**: `https://haos.alogins.net/api/`
**Auth header**: `Authorization: Bearer <token>`
### Common Endpoints
```bash
# Health check
curl -s -H "Authorization: Bearer $HA_TOKEN" \
  https://haos.alogins.net/api/

# Get all entity states
curl -s -H "Authorization: Bearer $HA_TOKEN" \
  https://haos.alogins.net/api/states

# Get specific entity
curl -s -H "Authorization: Bearer $HA_TOKEN" \
  https://haos.alogins.net/api/states/<entity_id>

# Call service (e.g., turn on light)
curl -s -X POST \
  -H "Authorization: Bearer $HA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"entity_id":"light.example"}' \
  https://haos.alogins.net/api/services/<domain>/<service>
```
**Note**: a 401 response means the token is invalid or expired.
## HA → Zabbix Alerting
Home Assistant automations push alerts to Zabbix via `history.push` API (Zabbix 7.4 trapper items). No middleware needed.
### Architecture
```
[HA sensor ON] → [HA automation] → [rest_command: HTTP POST] → [Zabbix history.push] → [trapper item] → [trigger] → [Telegram]
```
### Water Leak Sensors
3x HOBEIAN ZG-222Z moisture sensors → Disaster-level Zabbix alert with room name.
| HA Entity | Room |
|-----------|------|
| `binary_sensor.hobeian_zg_222z` | Kitchen |
| `binary_sensor.hobeian_zg_222z_2` | Bathroom |
| `binary_sensor.hobeian_zg_222z_3` | Laundry |
**Zabbix side** (host "HA Agap", hostid 10780):
- Trapper item: `water.leak` (text type) — receives room name or "ok"
- Trigger: `last(/HA Agap/water.leak)<>"ok"` — Disaster (severity 5), manual close
- Trigger name uses `{ITEM.LASTVALUE}` to show room in notification
**HA side** (`configuration.yaml`):
- `rest_command.zabbix_water_leak` — POST to Zabbix `history.push`, accepts `{{ room }}` template variable
- `rest_command.zabbix_water_leak_clear` — pushes "ok" to clear
- Automation "Water Leak Alert" — any sensor ON → sends room name to Zabbix
- Automation "Water Leak Clear" — all sensors OFF → sends "ok"
### Adding a New HA → Zabbix Alert
1. **Zabbix**: Create trapper item (type 2) on "HA Agap" via `item.create` API. Create trigger via `trigger.create`.
2. **HA config**: Add `rest_command` entry in `configuration.yaml` with `history.push` payload. Restart HA.
3. **HA automation**: Create via `POST /api/config/automation/config/<id>` with trigger on sensor state and action calling the rest_command.
4. **Test**: Call `rest_command` via HA API, verify Zabbix problem appears.
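Step 4 can also be exercised directly against Zabbix, bypassing HA; a dry-run sketch (the payload follows the `history.push` host/key addressing form, and the value is an example):

```shell
#!/usr/bin/env bash
# Dry-run sketch: push a test value to the water.leak trapper item.
set -euo pipefail

PAYLOAD='{"jsonrpc":"2.0","method":"history.push","params":[{"host":"HA Agap","key":"water.leak","value":"Kitchen"}],"id":1}'

echo "$PAYLOAD"
# Actual call:
# curl -s -X POST http://localhost:81/api_jsonrpc.php \
#     -H "Content-Type: application/json" \
#     -H "Authorization: Bearer $ZABBIX_TOKEN" \
#     -d "$PAYLOAD"
```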
## Zabbix API
**Instance**: `http://localhost:81` (local), `https://zb.alogins.net` (external)
**Endpoint**: `http://localhost:81/api_jsonrpc.php`
**Token**: Read from `$ZABBIX_TOKEN` environment variable — never hardcode it
**Auth header**: `Authorization: Bearer <token>`
### Common Requests
```bash
# Check API version
curl -s -X POST http://localhost:81/api_jsonrpc.php \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ZABBIX_TOKEN" \
  -d '{"jsonrpc":"2.0","method":"apiinfo.version","params":{},"id":1}'

# Get all hosts
curl -s -X POST http://localhost:81/api_jsonrpc.php \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ZABBIX_TOKEN" \
  -d '{"jsonrpc":"2.0","method":"host.get","params":{"output":"extend"},"id":1}'

# Get problems/issues
curl -s -X POST http://localhost:81/api_jsonrpc.php \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ZABBIX_TOKEN" \
  -d '{"jsonrpc":"2.0","method":"problem.get","params":{"output":"extend"},"id":1}'
```
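Creating the trapper items used by the HA alerting section follows the same request pattern; a dry-run sketch of `item.create` (type 2 is Zabbix trapper, value_type 4 is text; the item name is an example):

```shell
#!/usr/bin/env bash
# Dry-run sketch: create a text trapper item on host 10780 (HA Agap).
set -euo pipefail

PAYLOAD='{"jsonrpc":"2.0","method":"item.create","params":{"name":"Water leak","key_":"water.leak","hostid":"10780","type":2,"value_type":4},"id":1}'

echo "$PAYLOAD"
# Actual call:
# curl -s -X POST http://localhost:81/api_jsonrpc.php \
#     -H "Content-Type: application/json" \
#     -H "Authorization: Bearer $ZABBIX_TOKEN" \
#     -d "$PAYLOAD"
```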

Caddyfile Normal file

@@ -0,0 +1,122 @@
haos.alogins.net {
    reverse_proxy http://192.168.1.141:8123 {
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
    }
}

vi.alogins.net {
    reverse_proxy localhost:2283
}

doc.alogins.net {
    reverse_proxy localhost:11001
}

zb.alogins.net {
    reverse_proxy localhost:81
}

wiki.alogins.net {
    reverse_proxy localhost:8083 {
        header_up Host {http.request.host}
        header_up X-Forwarded-Proto {scheme}
        header_up X-Real-IP {remote_host}
    }
}

nn.alogins.net {
    reverse_proxy localhost:5678
}

git.alogins.net {
    reverse_proxy localhost:3000
}

ds.alogins.net {
    reverse_proxy localhost:3974
}

ai.alogins.net {
    reverse_proxy localhost:3125
}

openpi.alogins.net {
    root * /home/alvis/tmp/files/pi05_droid
    file_server browse
}

vui3.alogins.net {
    @xhttp {
        path /VLSpdG9k/xht*
    }
    handle @xhttp {
        reverse_proxy http://localhost:8445 {
            flush_interval -1
            header_up X-Real-IP {remote_host}
            transport http {
                read_timeout 0
                write_timeout 0
                dial_timeout 10s
            }
        }
    }
    reverse_proxy /gnYCNq4EbYukS5qtOe/* localhost:58959
    respond 401
}

vui4.alogins.net {
    reverse_proxy localhost:58959
}

ntfy.alogins.net {
    reverse_proxy localhost:8840
}

docs.alogins.net {
    reverse_proxy localhost:8078
}

office.alogins.net {
    reverse_proxy localhost:6233
}

vw.alogins.net {
    reverse_proxy localhost:8041
}

mtx.alogins.net {
    handle /.well-known/matrix/client {
        header Content-Type application/json
        header Access-Control-Allow-Origin *
        respond `{"m.homeserver":{"base_url":"https://mtx.alogins.net"},"org.matrix.msc4143.rtc_foci":[{"type":"livekit","livekit_service_url":"https://lkjwt.alogins.net"}]}`
    }
    handle /.well-known/matrix/server {
        header Content-Type application/json
        header Access-Control-Allow-Origin *
        respond `{"m.server":"mtx.alogins.net:443"}`
    }
    handle /_matrix/client/unstable/org.matrix.msc4143/rtc/transports {
        header Content-Type application/json
        header Access-Control-Allow-Origin *
        respond `{"foci":[{"type":"livekit","livekit_service_url":"https://lkjwt.alogins.net"}]}`
    }
    reverse_proxy localhost:8008
}

lkjwt.alogins.net {
    reverse_proxy localhost:8009
}

lk.alogins.net {
    reverse_proxy localhost:7880
}

localhost:8042 {
    reverse_proxy localhost:8041
    tls internal
}

docker-compose.yml (root include alias, removed)

@@ -1,3 +0,0 @@
include:
- path: ./immich-app/docker-compose.yml

gitea/.env Normal file

@@ -0,0 +1,7 @@
GITEA_DATA=/mnt/misc/gitea
SSH_KEY_PATH=/home/git/.ssh
DB_DATA_LOCATION=/mnt/ssd/dbs/gitea/postgres
DB_USER=gitea
DB_PASSWORD=gitea
DB_NAME=gitea
BACKUP_DIR=/mnt/backups/gitea

gitea/backup.sh Executable file

@@ -0,0 +1,39 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/.env"

if [ ! -d "$BACKUP_DIR" ]; then
    echo "Error: BACKUP_DIR does not exist: $BACKUP_DIR" >&2
    exit 1
fi

if ! docker info > /dev/null 2>&1; then
    echo "Error: Docker is not accessible" >&2
    exit 1
fi

cleanup() {
    echo "Restarting all services..."
    docker compose -f "$SCRIPT_DIR/docker-compose.yml" up -d
}
trap cleanup EXIT

echo "Stopping all services..."
docker compose -f "$SCRIPT_DIR/docker-compose.yml" down

echo "Starting database only..."
docker compose -f "$SCRIPT_DIR/docker-compose.yml" up -d db
sleep 5

echo "Running gitea dump..."
docker run --rm \
    --network gitea_gitea \
    -e USER_UID=1001 \
    -e USER_GID=1001 \
    -v "${GITEA_DATA}:/data" \
    -v "${BACKUP_DIR}:/backup" \
    docker.gitea.com/gitea:1.25.3 \
    /bin/sh -c "chown 1001:1001 /tmp && su-exec 1001:1001 /bin/sh -c 'cd /tmp && gitea dump -c /data/gitea/conf/app.ini --tempdir /tmp' > /backup/backup.log 2>&1 && cp /tmp/gitea-dump-*.zip /backup/"

echo "Backup completed successfully"

gitea/docker-compose.yml

@@ -1,5 +1,3 @@
-version: "3"
-
 networks:
   gitea:
     external: false
@@ -13,15 +11,19 @@ services:
       - USER_GID=1001
       - GITEA__database__DB_TYPE=postgres
       - GITEA__database__HOST=db:5432
-      - GITEA__database__NAME=gitea
-      - GITEA__database__USER=gitea
-      - GITEA__database__PASSWD=gitea
+      - GITEA__database__NAME=${DB_NAME}
+      - GITEA__database__USER=${DB_USER}
+      - GITEA__database__PASSWD=${DB_PASSWORD}
+      - GITEA__server__DOMAIN=git.alogins.net
+      - GITEA__server__SSH_DOMAIN=git.alogins.net
+      - GITEA__server__ROOT_URL=https://git.alogins.net/
     restart: always
     networks:
       - gitea
     volumes:
-      - /home/git/.ssh/:/data/git/.ssh
-      - /mnt/misc/gitea:/data
+      - ${SSH_KEY_PATH}:/data/git/.ssh
+      - ${GITEA_DATA}:/data
+      - ${BACKUP_DIR}:/backup
       - /etc/timezone:/etc/timezone:ro
       - /etc/localtime:/etc/localtime:ro
     ports:
@@ -34,10 +36,10 @@ services:
     image: docker.io/library/postgres:14
     restart: always
     environment:
-      - POSTGRES_USER=gitea
-      - POSTGRES_PASSWORD=gitea
-      - POSTGRES_DB=gitea
+      - POSTGRES_USER=${DB_USER}
+      - POSTGRES_PASSWORD=${DB_PASSWORD}
+      - POSTGRES_DB=${DB_NAME}
     networks:
       - gitea
     volumes:
-      - ./postgres:/var/lib/postgresql/data
+      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data

gitea/restore.sh Executable file

@@ -0,0 +1,124 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/.env"

# --- Argument validation ---
if [ $# -lt 1 ]; then
    echo "Usage: $0 <path-to-gitea-dump.zip>" >&2
    exit 1
fi

DUMP_ZIP="$(realpath "$1")"
if [ ! -f "$DUMP_ZIP" ]; then
    echo "Error: dump file not found: $DUMP_ZIP" >&2
    exit 1
fi

if ! docker info > /dev/null 2>&1; then
    echo "Error: Docker is not accessible" >&2
    exit 1
fi

# --- Cleanup trap: always bring services back up ---
cleanup() {
    echo "Starting all services..."
    docker compose -f "$SCRIPT_DIR/docker-compose.yml" up -d
}
trap cleanup EXIT

# --- Stop everything ---
echo "Stopping all services..."
docker compose -f "$SCRIPT_DIR/docker-compose.yml" down

# --- Start only the database ---
echo "Starting database only..."
docker compose -f "$SCRIPT_DIR/docker-compose.yml" up -d db

echo "Waiting for database to be ready..."
for i in $(seq 1 30); do
    if docker compose -f "$SCRIPT_DIR/docker-compose.yml" exec -T db \
        pg_isready -U "$DB_USER" -d "$DB_NAME" > /dev/null 2>&1; then
        break
    fi
    if [ "$i" -eq 30 ]; then
        echo "Error: database not ready after 30 seconds" >&2
        exit 1
    fi
    sleep 1
done

# --- Restore database ---
echo "Restoring database..."
docker compose -f "$SCRIPT_DIR/docker-compose.yml" exec -T db \
    psql -U "$DB_USER" -d postgres -c "DROP DATABASE IF EXISTS \"$DB_NAME\";"
docker compose -f "$SCRIPT_DIR/docker-compose.yml" exec -T db \
    psql -U "$DB_USER" -d postgres -c "CREATE DATABASE \"$DB_NAME\" OWNER \"$DB_USER\";"
unzip -p "$DUMP_ZIP" gitea-db.sql | \
    docker compose -f "$SCRIPT_DIR/docker-compose.yml" exec -T db \
    psql -U "$DB_USER" -d "$DB_NAME"

# --- Restore data files ---
echo "Restoring data files..."
docker run --rm \
    -v "${GITEA_DATA}:/data" \
    -v "${DUMP_ZIP}:/backup/dump.zip:ro" \
    docker.gitea.com/gitea:1.25.3 \
    /bin/sh -c '
        set -e
        apk add --no-cache unzip > /dev/null 2>&1 || true
        mkdir -p /tmp/restore
        unzip -o /backup/dump.zip -d /tmp/restore
        # Clear old data
        rm -rf /data/gitea/attachments /data/gitea/avatars /data/gitea/jwt \
            /data/gitea/indexers /data/gitea/queues /data/gitea/lfs \
            /data/gitea/packages /data/gitea/tmp
        rm -rf /data/git/repositories/*
        # Restore data directory contents
        if [ -d /tmp/restore/data ]; then
            cp -a /tmp/restore/data/* /data/gitea/ 2>/dev/null || true
        fi
        # Restore repositories
        if [ -d /tmp/restore/repos ]; then
            cp -a /tmp/restore/repos/* /data/git/repositories/ 2>/dev/null || true
        fi
        # Restore app.ini
        if [ -f /tmp/restore/app.ini ]; then
            mkdir -p /data/gitea/conf
            cp -a /tmp/restore/app.ini /data/gitea/conf/app.ini
        fi
        # Fix ownership
        chown -R 1001:1001 /data
        rm -rf /tmp/restore
    '

# --- Bring everything up manually ---
# Disable the EXIT trap and start services explicitly, so that hook
# regeneration below runs only after Gitea is confirmed up.
trap - EXIT
echo "Starting all services..."
docker compose -f "$SCRIPT_DIR/docker-compose.yml" up -d

echo "Waiting for Gitea to start..."
for i in $(seq 1 60); do
    if docker exec gitea curl -sf http://localhost:3000/ > /dev/null 2>&1; then
        break
    fi
    if [ "$i" -eq 60 ]; then
        echo "Warning: Gitea not responding after 60s, attempting hook regeneration anyway" >&2
        break
    fi
    sleep 1
done

echo "Regenerating git hooks..."
docker exec gitea gitea admin regenerate hooks

echo "Restore completed successfully"

haos/CLAUDE.md Normal file

@@ -0,0 +1,191 @@
# Home Assistant REST API
## Connection
- **Base URL**: `http://<HA_IP>:8123/api/`
- **Auth header**: `Authorization: Bearer <TOKEN>`
- **Token**: Generate at `http://<HA_IP>:8123/profile` → Long-Lived Access Tokens
- **Response format**: JSON (except `/api/error_log` which is plaintext)
Store token in env var, never hardcode:
```bash
export HA_TOKEN="your_token_here"
export HA_URL="http://<HA_IP>:8123"
```
## Status Codes
| Code | Meaning |
|------|---------|
| 200 | Success (existing resource) |
| 201 | Created (new resource) |
| 400 | Bad request |
| 401 | Unauthorized |
| 404 | Not found |
| 405 | Method not allowed |
## GET Endpoints
```bash
# Health check
GET /api/
# Current HA configuration
GET /api/config
# Loaded components
GET /api/components
# All entity states
GET /api/states
# Specific entity state
GET /api/states/<entity_id>
# Available services
GET /api/services
# Available events
GET /api/events
# Error log (plaintext)
GET /api/error_log
# Camera image
GET /api/camera_proxy/<camera_entity_id>
# All calendar entities
GET /api/calendars
# Calendar events (start and end are required ISO timestamps)
GET /api/calendars/<calendar_entity_id>?start=<ISO>&end=<ISO>
# Historical state changes
GET /api/history/period/<ISO_timestamp>?filter_entity_id=<entity_id>
# Optional params: end_time, minimal_response, no_attributes, significant_changes_only
# Logbook entries
GET /api/logbook/<ISO_timestamp>
# Optional params: entity=<entity_id>, end_time=<ISO>
```
## POST Endpoints
```bash
# Create or update entity state (virtual, not device)
POST /api/states/<entity_id>
{"state": "on", "attributes": {"brightness": 255}}
# Fire an event
POST /api/events/<event_type>
{"optional": "event_data"}
# Call a service
POST /api/services/<domain>/<service>
{"entity_id": "light.living_room"}
# Call service and get its response
POST /api/services/<domain>/<service>?return_response
{"entity_id": "..."}
# Render a Jinja2 template
POST /api/template
{"template": "{{ states('sensor.temperature') }}"}
# Validate configuration
POST /api/config/core/check_config
# Handle an intent
POST /api/intent/handle
{"name": "HassTurnOn", "data": {"name": "lights"}}
```
## DELETE Endpoints
```bash
# Remove an entity
DELETE /api/states/<entity_id>
```
## Example curl Usage
```bash
# Health check
curl -s -H "Authorization: Bearer $HA_TOKEN" $HA_URL/api/

# Get all states
curl -s -H "Authorization: Bearer $HA_TOKEN" $HA_URL/api/states | jq .

# Get specific entity
curl -s -H "Authorization: Bearer $HA_TOKEN" $HA_URL/api/states/light.living_room

# Turn on a light
curl -s -X POST \
  -H "Authorization: Bearer $HA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"entity_id": "light.living_room"}' \
  $HA_URL/api/services/light/turn_on

# Render template
curl -s -X POST \
  -H "Authorization: Bearer $HA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"template": "{{ states(\"sensor.temperature\") }}"}' \
  $HA_URL/api/template
```
## Devices
### Lights
4x Zigbee Tuya lights (TZ3210 TS0505B):
- `light.tz3210_r5afgmkl_ts0505b` (G2)
- `light.tz3210_r5afgmkl_ts0505b_g2` (G22)
- `light.tz3210_r5afgmkl_ts0505b_2`
- `light.tz3210_r5afgmkl_ts0505b_3`
Support: color_temp (2000-6535K), xy color mode, brightness (0-254)
### Vacuum Cleaner
**Entity**: `vacuum.xiaomi_ru_1173505785_ov71gl` (Петя Петя)
**Status**: Docked
**Type**: Xiaomi robot vacuum with mop
**Rooms** (from `sensor.xiaomi_ru_1173505785_ov71gl_room_information_p_2_16`):
- ID 4: Спальня (Bedroom)
- ID 3: Гостиная (Living Room)
- ID 5: Кухня (Kitchen)
- ID 6: Прихожая (Hallway)
- ID 7: Ванная комната (Bathroom)
**Services**:
- `vacuum.start` — Start cleaning
- `vacuum.pause` — Pause
- `vacuum.stop` — Stop
- `vacuum.return_to_base` — Dock
- `vacuum.clean_spot` — Clean spot
- `vacuum.set_fan_speed` — Set fan (param: `fan_speed`)
- `vacuum.send_command` — Raw command (params: `command`, `params`)
- Room-aware: `start_vacuum_room_sweep`, `start_zone_sweep`, `get_room_configs`, `set_room_clean_configs`
**Key attributes**:
- `sensor.xiaomi_ru_1173505785_ov71gl_room_information_p_2_16` — Room data (JSON)
- `sensor.xiaomi_ru_1173505785_ov71gl_zone_ids_p_2_12` — Zone IDs
- `button.xiaomi_ru_1173505785_ov71gl_auto_room_partition_a_10_5` — Auto-detect room boundaries
### Water Leak Sensors
3x HOBEIAN ZG-222Z Zigbee moisture sensors:
- `binary_sensor.hobeian_zg_222z` — Kitchen
- `binary_sensor.hobeian_zg_222z_2` — Bathroom
- `binary_sensor.hobeian_zg_222z_3` — Laundry
Battery sensors: `sensor.hobeian_zg_222z_battery`, `_2`, `_3`
**Automations** (push to Zabbix via `rest_command`):
- "Water Leak Alert" (`water_leak_alert`) — any sensor ON → `rest_command.zabbix_water_leak` with room name
- "Water Leak Clear" (`water_leak_clear`) — all sensors OFF → `rest_command.zabbix_water_leak_clear`
## Notes
- `POST /api/states/<entity_id>` creates a virtual state representation only — it does NOT control physical devices. Use `POST /api/services/...` for actual device control.
- Timestamp format: `YYYY-MM-DDThh:mm:ssTZD` (ISO 8601)
- Using `?return_response` on a service that doesn't support it returns a 400 error
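For the history and calendar endpoints above, the required ISO 8601 timestamps can be built with GNU `date`; a sketch (the entity id is an example, and the actual request is left commented):

```shell
#!/usr/bin/env bash
# Build ISO 8601 timestamps for /api/history/period and /api/calendars queries.
set -euo pipefail

HA_URL="${HA_URL:-http://localhost:8123}"
START="$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)"
END="$(date -u +%Y-%m-%dT%H:%M:%SZ)"

echo "GET $HA_URL/api/history/period/$START?filter_entity_id=binary_sensor.hobeian_zg_222z&end_time=$END"
# curl -s -H "Authorization: Bearer $HA_TOKEN" \
#     "$HA_URL/api/history/period/$START?filter_entity_id=binary_sensor.hobeian_zg_222z&end_time=$END"
```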

immich-app/.env

@@ -6,11 +6,11 @@
 # The location where your uploaded files are stored
 UPLOAD_LOCATION=/mnt/media/upload
-THUMB_LOCATION=/mnt/ssd1/media/thumbs
-ENCODED_VIDEO_LOCATION=/mnt/ssd1/media/encoded-video
+THUMB_LOCATION=/mnt/ssd/media/thumbs
+ENCODED_VIDEO_LOCATION=/mnt/ssd/media/encoded-video
 # The location where your database files are stored. Network shares are not supported for the database
-DB_DATA_LOCATION=/mnt/ssd1/media/postgres
+DB_DATA_LOCATION=/mnt/ssd/media/postgres
 # To set a timezone, uncomment the next line and change Etc/UTC to a TZ identifier from this list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
 # TZ=Etc/UTC

immich-app/backup.sh Executable file

@@ -0,0 +1,30 @@
#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR=/mnt/backups/media
DB_BACKUP_DIR="$BACKUP_DIR/backups"
LOG="$BACKUP_DIR/backup.log"
RETAIN_DAYS=14

mkdir -p "$DB_BACKUP_DIR"
echo "[$(date)] Starting Immich backup" >> "$LOG"

# 1. Database dump (must come before file sync)
DUMP_FILE="$DB_BACKUP_DIR/immich-db-$(date +%Y%m%dT%H%M%S).sql.gz"
docker exec immich_postgres pg_dump --clean --if-exists \
    --dbname=immich --username=postgres | gzip > "$DUMP_FILE"
echo "[$(date)] DB dump: $DUMP_FILE" >> "$LOG"

# 2. Rsync critical asset folders (skip thumbs and encoded-video — regeneratable)
for DIR in library upload profile; do
    rsync -a --delete /mnt/media/upload/$DIR/ "$BACKUP_DIR/$DIR/" >> "$LOG" 2>&1
    echo "[$(date)] Synced $DIR" >> "$LOG"
done

# 3. Remove old DB dumps
find "$DB_BACKUP_DIR" -name "immich-db-*.sql.gz" -mtime +$RETAIN_DAYS -delete
echo "[$(date)] Cleaned dumps older than ${RETAIN_DAYS}d" >> "$LOG"

touch "$BACKUP_DIR/.last_sync"
echo "[$(date)] Immich backup complete" >> "$LOG"

immich-app/docker-compose.yml

@@ -23,7 +23,7 @@ services:
       - ${ENCODED_VIDEO_LOCATION}:/data/encoded-video
       - /etc/localtime:/etc/localtime:ro
     env_file:
-      - ../.env
+      - .env
     ports:
       - '2283:2283'
     depends_on:
@@ -44,7 +44,7 @@ services:
     volumes:
       - model-cache:/cache
     env_file:
-      - ../.env
+      - .env
     restart: always
     healthcheck:
       disable: false

matrix/.env Normal file

@@ -0,0 +1,7 @@
SYNAPSE_DATA=./data/synapse
POSTGRES_DATA=./data/postgres
POSTGRES_USER=synapse
POSTGRES_PASSWORD=OimW4JUSXhZBCtLHE1kFnZ7cWVbESsxynapnJ+PSw/4=
POSTGRES_DB=synapse
LIVEKIT_KEY=devkey
LIVEKIT_SECRET=ef3ef4b903ca8469b09b2dd7ab6af529c4d2f3c95668f53832fc351cf67777a9

matrix/.gitignore vendored Normal file

@@ -0,0 +1 @@
data/

matrix/README.md Normal file

@@ -0,0 +1,105 @@
# Matrix Home Server
Self-hosted Matrix homeserver running on `mtx.alogins.net`.
## Stack
| Service | Purpose |
|---------|---------|
| Synapse | Matrix homeserver |
| PostgreSQL | Synapse database |
| LiveKit | MatrixRTC media server (calls) |
| lk-jwt-service | LiveKit JWT auth for Matrix users |
| coturn | TURN/STUN server (ICE fallback) |
## Clients
- **Element X** (Android/iOS) — recommended, full call support
- **FluffyChat** — messaging only, calls not supported
Connect clients to: `https://mtx.alogins.net`
## Users
| Username | Admin |
|----------|-------|
| admin | yes |
| elizaveta | no |
| aleksandra | no |
## Managing Users
```bash
# Add user
docker exec synapse register_new_matrix_user \
  -c /data/homeserver.yaml \
  -u <username> -p <password> --no-admin \
  http://localhost:8008

# Add admin
docker exec synapse register_new_matrix_user \
  -c /data/homeserver.yaml \
  -u <username> -p <password> -a \
  http://localhost:8008
```
## Start / Stop
```bash
cd /home/alvis/agap_git/matrix
docker compose up -d # start all
docker compose down # stop all
docker compose restart # restart all
docker compose ps # status
docker compose logs -f # logs
```
## Caddy
Entries in `/home/alvis/agap_git/Caddyfile`:
| Domain | Purpose |
|--------|---------|
| `mtx.alogins.net` | Synapse + well-known |
| `lk.alogins.net` | LiveKit SFU |
| `lkjwt.alogins.net` | LiveKit JWT service |
Deploy Caddyfile changes:
```bash
sudo cp /home/alvis/agap_git/Caddyfile /etc/caddy/Caddyfile && sudo systemctl reload caddy
```
## Firewall Ports Required
| Port | Protocol | Service |
|------|----------|---------|
| 443 | TCP | Caddy (HTTPS) |
| 3478 | UDP+TCP | coturn TURN |
| 5349 | UDP+TCP | coturn TURNS |
| 7881 | TCP | LiveKit |
| 49152-65535 | UDP | coturn relay |
| 50100-50200 | UDP | LiveKit media |
## Data Locations
| Data | Path |
|------|------|
| Synapse config & media | `./data/synapse/` |
| PostgreSQL data | `./data/postgres/` |
| LiveKit config | `./livekit/livekit.yaml` |
| coturn config | `./coturn/turnserver.conf` |
## First-Time Setup (reference)
```bash
# Generate Synapse config
docker run --rm \
  -v ./data/synapse:/data \
  -e SYNAPSE_SERVER_NAME=mtx.alogins.net \
  -e SYNAPSE_REPORT_STATS=no \
  matrixdotorg/synapse:latest generate

# Edit database section in data/synapse/homeserver.yaml, then:
docker compose up -d
```

matrix/coturn/turnserver.conf Normal file

@@ -0,0 +1,18 @@
listening-port=3478
tls-listening-port=5349
external-ip=83.99.190.32/192.168.1.3
realm=mtx.alogins.net
server-name=mtx.alogins.net
use-auth-secret
static-auth-secret=144152cc09030796a4fd0109437dfc2089db2d5181b848d38d20c646c1d7a14b
no-multicast-peers
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
log-file=stdout
no-software-attribute

matrix/docker-compose.yml Normal file

@@ -0,0 +1,73 @@
services:
  synapse:
    image: matrixdotorg/synapse:latest
    container_name: synapse
    restart: unless-stopped
    volumes:
      - ${SYNAPSE_DATA}:/data
      - /etc/localtime:/etc/localtime:ro
    environment:
      - SYNAPSE_CONFIG_PATH=/data/homeserver.yaml
    ports:
      - "127.0.0.1:8008:8008"
    depends_on:
      - db
    networks:
      - matrix
      - frontend

  db:
    image: postgres:16-alpine
    container_name: synapse-db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8 --lc-collate=C --lc-ctype=C
    volumes:
      - ${POSTGRES_DATA}:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    networks:
      - matrix

  lk-jwt-service:
    image: ghcr.io/element-hq/lk-jwt-service:latest
    container_name: lk-jwt-service
    restart: unless-stopped
    ports:
      - "127.0.0.1:8009:8080"
    environment:
      - LIVEKIT_JWT_BIND=:8080
      - LIVEKIT_URL=wss://lk.alogins.net
      - LIVEKIT_KEY=${LIVEKIT_KEY}
      - LIVEKIT_SECRET=${LIVEKIT_SECRET}
      - LIVEKIT_FULL_ACCESS_HOMESERVERS=mtx.alogins.net
    extra_hosts:
      - "mtx.alogins.net:host-gateway"
      - "lk.alogins.net:host-gateway"

  livekit:
    image: livekit/livekit-server:latest
    container_name: livekit
    restart: unless-stopped
    network_mode: host
    volumes:
      - ./livekit/livekit.yaml:/etc/livekit.yaml:ro
    command: --config /etc/livekit.yaml

  coturn:
    image: coturn/coturn:latest
    container_name: coturn
    restart: unless-stopped
    network_mode: host
    volumes:
      - ./coturn/turnserver.conf:/etc/coturn/turnserver.conf:ro
      - /etc/localtime:/etc/localtime:ro

networks:
  matrix:
    driver: bridge
    internal: true
  frontend:
    driver: bridge

@@ -0,0 +1,15 @@
port: 7880
rtc:
  tcp_port: 7881
  port_range_start: 50100
  port_range_end: 50200
  use_external_ip: true
keys:
  devkey: ef3ef4b903ca8469b09b2dd7ab6af529c4d2f3c95668f53832fc351cf67777a9
room:
  auto_create: false
logging:
  level: info
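
Since LiveKit runs with `network_mode: host` and listens on 7880, the public hostnames from the commit message would be terminated by the host's Caddy. A hypothetical sketch of the corresponding Caddyfile entries (the actual Caddyfile is not shown in this diff; Caddy proxies the WebSocket upgrade transparently):

```
lk.alogins.net {
    reverse_proxy localhost:7880
}
lkjwt.alogins.net {
    reverse_proxy localhost:8009
}
```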

ntfy/docker-compose.yml Normal file

@@ -0,0 +1,16 @@
services:
  ntfy:
    image: binwiederhier/ntfy
    container_name: ntfy
    command: serve
    environment:
      - NTFY_BASE_URL=https://ntfy.alogins.net
      - NTFY_CACHE_FILE=/var/lib/ntfy/cache.db
      - NTFY_AUTH_FILE=/var/lib/ntfy/auth.db
      - NTFY_AUTH_DEFAULT_ACCESS=deny-all
      - NTFY_BEHIND_PROXY=true
    volumes:
      - /mnt/misc/ntfy:/var/lib/ntfy
    ports:
      - "8840:80"
    restart: unless-stopped

@@ -1,12 +1,42 @@
 services:
+  ollama:
+    image: ollama/ollama
+    container_name: ollama
+    ports:
+      - "11436:11434"
+    volumes:
+      - /mnt/ssd/ai/ollama:/root/.ollama
+      - /mnt/ssd/ai/open-webui:/app/backend/data
+    restart: always
+    environment:
+      # Allow qwen3:8b + qwen2.5:1.5b to coexist in VRAM (~6.7-7.7 GB on 8 GB GPU)
+      - OLLAMA_MAX_LOADED_MODELS=2
+      # One GPU inference at a time — prevents compute contention between models
+      - OLLAMA_NUM_PARALLEL=1
+    deploy:
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              count: all
+              capabilities: [gpu]
+  ollama-cpu:
+    image: ollama/ollama
+    container_name: ollama-cpu
+    ports:
+      - "11435:11434"
+    volumes:
+      - /mnt/ssd/ai/ollama-cpu:/root/.ollama
+    restart: always
   open-webui:
-    image: ghcr.io/open-webui/open-webui:ollama
+    image: ghcr.io/open-webui/open-webui:main
     container_name: open-webui
     ports:
       - "3125:8080"
     volumes:
-      - ollama:/root/.ollama
-      - open-webui:/app/backend/data
+      - /mnt/ssd/ai/open-webui:/app/backend/data
     restart: always
     deploy:
       resources:
@@ -18,6 +48,22 @@ services:
     environment:
       - ANTHROPIC_API_KEY=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
-volumes:
-  ollama:
-  open-webui:
+  searxng:
+    image: docker.io/searxng/searxng:latest
+    container_name: searxng
+    volumes:
+      - /mnt/ssd/ai/searxng/config/:/etc/searxng/
+      - /mnt/ssd/ai/searxng/data/:/var/cache/searxng/
+    restart: always
+    ports:
+      - "11437:8080"
+  qdrant:
+    image: qdrant/qdrant
+    container_name: qdrant
+    ports:
+      - "6333:6333"
+      - "6334:6334"
+    restart: always
+    volumes:
+      - /mnt/ssd/dbs/qdrant:/qdrant/storage:z

otter/docker-compose.yml Normal file

@@ -0,0 +1,9 @@
services:
  otterwiki:
    image: redimp/otterwiki:2
    restart: unless-stopped
    ports:
      - 8083:80
    volumes:
      - /mnt/ssd/dbs/otter/app-data:/app-data

@@ -0,0 +1,58 @@
networks:
  macvlan-br0:
    driver: macvlan
    driver_opts:
      parent: br0
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
          # ip_range: 192.168.1.192/27
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    #ports:
      # DNS Ports
      #- "53:53/tcp"
      #- "53:53/udp"
      # Default HTTP Port
      #- "80:80/tcp"
      # Default HTTPs Port. FTL will generate a self-signed certificate
      #- "443:443/tcp"
      # Uncomment the below if using Pi-hole as your DHCP Server
      #- "67:67/udp"
      # Uncomment the line below if you are using Pi-hole as your NTP server
      #- "123:123/udp"
    dns:
      - 8.8.8.8
      - 1.1.1.1
    networks:
      macvlan-br0:
        ipv4_address: 192.168.1.2
    environment:
      # Set the appropriate timezone for your location from
      # https://en.wikipedia.org/wiki/List_of_tz_database_time_zones, e.g.:
      TZ: 'Europe/Moscow'
      # Set a password to access the web interface. Not setting one will result in a random password being assigned
      FTLCONF_webserver_api_password: 'correct horse 123'
      # If using Docker's default `bridge` network, the dns listening mode should be set to 'ALL'
      FTLCONF_dns_listeningMode: 'ALL'
    # Volumes store your data between container upgrades
    volumes:
      # For persisting Pi-hole's databases and common configuration file
      - '/mnt/ssd/dbs/pihole:/etc/pihole'
      # Uncomment the below if you have custom dnsmasq config files that you want to persist. Not needed for most starting fresh with Pi-hole v6. If you're upgrading from v5 and have used this directory before, you should keep it enabled for the first v6 container start to allow for a complete migration. It can be removed afterwards. Needs environment variable FTLCONF_misc_etc_dnsmasq_d: 'true'
      #- './etc-dnsmasq.d:/etc/dnsmasq.d'
    cap_add:
      # See https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
      # Required if you are using Pi-hole as your DHCP server, else not needed
      - NET_ADMIN
      # Required if you are using Pi-hole as your NTP client to be able to set the host's system time
      - SYS_TIME
      # Optional, if Pi-hole should get some more processing time
      - SYS_NICE
    restart: unless-stopped

seafile/backup.sh Executable file

@@ -0,0 +1,44 @@
#!/bin/bash
# Seafile backup script.
# Backs up MySQL databases and seafile data directory.
# Runs every 3 days via root crontab. Keeps last 5 backups.
# Notifies Zabbix (item seafile.backup.ts, id 70369 on AgapHost) after success.
set -euo pipefail

BACKUP_DIR="/mnt/backups/seafile"
DATA_DIR="/mnt/misc/seafile"
DATE=$(date '+%Y%m%d-%H%M')
DEST="$BACKUP_DIR/$DATE"
mkdir -p "$DEST"

# Dump all three Seafile databases from the running container
for DB in ccnet_db seafile_db seahub_db; do
  docker exec seafile-mysql mysqldump \
    -u seafile -pFWsYYeZa15ro6x \
    --single-transaction "$DB" > "$DEST/${DB}.sql"
  echo "Dumped: $DB"
done

# Copy seafile data (libraries, config — excludes mysql and caddy dirs)
rsync -a --delete \
  --exclude='seafile-mysql/' \
  --exclude='seafile-caddy/' \
  "$DATA_DIR/" "$DEST/data/"
echo "$(date): Backup complete: $DEST"
ls "$DEST/"

# Notify Zabbix
if [[ -f /root/.zabbix_token ]]; then
  ZABBIX_TOKEN=$(cat /root/.zabbix_token)
  curl -s -X POST http://localhost:81/api_jsonrpc.php \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $ZABBIX_TOKEN" \
    -d "{\"jsonrpc\":\"2.0\",\"method\":\"history.push\",\"id\":1,\"params\":{\"itemid\":\"70369\",\"value\":\"$(date '+%Y-%m-%d %H:%M')\"}}" > /dev/null \
    && echo "Zabbix notified."
fi

# Rotate: keep last 5 backups
ls -1dt "$BACKUP_DIR"/[0-9]*-[0-9]* 2>/dev/null | tail -n +6 | xargs -r rm -rf
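
The rotation one-liner keeps only the five newest `YYYYMMDD-HHMM` directories: `ls -1dt` lists them newest first, `tail -n +6` selects everything past the fifth, and `xargs -r rm -rf` deletes that remainder. A self-contained sketch of the same pattern against throwaway directories (the temp paths and fake dates here are illustrative, not the real backup location):

```shell
# Create eight fake backup directories with one-day-apart mtimes.
TMP=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8; do
  d="$TMP/2026030${i}-0100"
  mkdir "$d"
  # Set mtime explicitly (2026-03-0$i 01:00) so sorting by -t is deterministic.
  touch -t "26030${i}0100" "$d"
done

# Same rotation as the script: newest first, drop everything past the fifth.
ls -1dt "$TMP"/[0-9]*-[0-9]* | tail -n +6 | xargs -r rm -rf

# The five newest (20260304-0100 through 20260308-0100) remain.
ls "$TMP"
```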

seafile/caddy.yml Normal file

@@ -0,0 +1,26 @@
services:
  caddy:
    image: ${SEAFILE_CADDY_IMAGE:-lucaslorentz/caddy-docker-proxy:2.9-alpine}
    restart: unless-stopped
    container_name: seafile-caddy
    ports:
      - 8077:80
      - 4433:443
    environment:
      - CADDY_INGRESS_NETWORKS=seafile-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ${SEAFILE_CADDY_VOLUME:-/opt/seafile-caddy}:/data/caddy
    networks:
      - seafile-net
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:2019/metrics || exit 1"]
      start_period: 20s
      interval: 20s
      timeout: 5s
      retries: 3
networks:
  seafile-net:
    name: seafile-net

seafile/onlyoffice.yml Normal file

@@ -0,0 +1,20 @@
services:
  onlyoffice:
    image: ${ONLYOFFICE_IMAGE:-onlyoffice/documentserver:8.1.0.1}
    container_name: seafile-onlyoffice
    restart: unless-stopped
    environment:
      - JWT_ENABLED=true
      - JWT_SECRET=${ONLYOFFICE_JWT_SECRET:?Variable is not set or empty}
    volumes:
      - "${ONLYOFFICE_VOLUME:-/opt/onlyoffice}:/var/lib/onlyoffice"
    ports:
      - "127.0.0.1:6233:80"
    extra_hosts:
      - "docs.alogins.net:host-gateway"
    networks:
      - seafile-net
networks:
  seafile-net:
    name: seafile-net

seafile/seadoc.yml Normal file

@@ -0,0 +1,40 @@
services:
  seadoc:
    image: ${SEADOC_IMAGE:-seafileltd/sdoc-server:2.0-latest}
    container_name: seadoc
    restart: unless-stopped
    volumes:
      - ${SEADOC_VOLUME:-/opt/seadoc-data/}:/shared
    # ports:
    #   - "80:80"
    environment:
      - DB_HOST=${SEAFILE_MYSQL_DB_HOST:-db}
      - DB_PORT=${SEAFILE_MYSQL_DB_PORT:-3306}
      - DB_USER=${SEAFILE_MYSQL_DB_USER:-seafile}
      - DB_PASSWORD=${SEAFILE_MYSQL_DB_PASSWORD:?Variable is not set or empty}
      - DB_NAME=${SEADOC_MYSQL_DB_NAME:-${SEAFILE_MYSQL_DB_SEAHUB_DB_NAME:-seahub_db}}
      - TIME_ZONE=${TIME_ZONE:-Etc/UTC}
      - JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY:?Variable is not set or empty}
      - NON_ROOT=${NON_ROOT:-false}
      - SEAHUB_SERVICE_URL=${SEAFILE_SERVICE_URL:-http://seafile}
    labels:
      caddy: ${SEAFILE_SERVER_PROTOCOL:-http}://${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}
      caddy.@ws.0_header: "Connection *Upgrade*"
      caddy.@ws.1_header: "Upgrade websocket"
      caddy.0_reverse_proxy: "@ws {{upstreams 80}}"
      caddy.1_handle_path: "/socket.io/*"
      caddy.1_handle_path.0_rewrite: "* /socket.io{uri}"
      caddy.1_handle_path.1_reverse_proxy: "{{upstreams 80}}"
      caddy.2_handle_path: "/sdoc-server/*"
      caddy.2_handle_path.0_rewrite: "* {uri}"
      caddy.2_handle_path.1_reverse_proxy: "{{upstreams 80}}"
    depends_on:
      db:
        condition: service_healthy
    networks:
      - seafile-net
networks:
  seafile-net:
    name: seafile-net

seafile/seafile-server.yml Normal file

@@ -0,0 +1,103 @@
services:
  db:
    image: ${SEAFILE_DB_IMAGE:-mariadb:10.11}
    container_name: seafile-mysql
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=${INIT_SEAFILE_MYSQL_ROOT_PASSWORD:-}
      - MYSQL_LOG_CONSOLE=true
      - MARIADB_AUTO_UPGRADE=1
    volumes:
      - "${SEAFILE_MYSQL_VOLUME:-/opt/seafile-mysql/db}:/var/lib/mysql"
    networks:
      - seafile-net
    healthcheck:
      test:
        [
          "CMD",
          "/usr/local/bin/healthcheck.sh",
          "--connect",
          "--mariadbupgrade",
          "--innodb_initialized",
        ]
      interval: 20s
      start_period: 30s
      timeout: 5s
      retries: 10
  redis:
    image: ${SEAFILE_REDIS_IMAGE:-redis}
    container_name: seafile-redis
    restart: unless-stopped
    command:
      - /bin/sh
      - -c
      - redis-server --requirepass "$$REDIS_PASSWORD"
    environment:
      - REDIS_PASSWORD=${REDIS_PASSWORD:-}
    networks:
      - seafile-net
  seafile:
    image: ${SEAFILE_IMAGE:-seafileltd/seafile-mc:13.0-latest}
    container_name: seafile
    restart: unless-stopped
    ports:
      - "127.0.0.1:8078:80"
    volumes:
      - ${SEAFILE_VOLUME:-/opt/seafile-data}:/shared
    environment:
      - SEAFILE_MYSQL_DB_HOST=${SEAFILE_MYSQL_DB_HOST:-db}
      - SEAFILE_MYSQL_DB_PORT=${SEAFILE_MYSQL_DB_PORT:-3306}
      - SEAFILE_MYSQL_DB_USER=${SEAFILE_MYSQL_DB_USER:-seafile}
      - SEAFILE_MYSQL_DB_PASSWORD=${SEAFILE_MYSQL_DB_PASSWORD:?Variable is not set or empty}
      - INIT_SEAFILE_MYSQL_ROOT_PASSWORD=${INIT_SEAFILE_MYSQL_ROOT_PASSWORD:-}
      - SEAFILE_MYSQL_DB_CCNET_DB_NAME=${SEAFILE_MYSQL_DB_CCNET_DB_NAME:-ccnet_db}
      - SEAFILE_MYSQL_DB_SEAFILE_DB_NAME=${SEAFILE_MYSQL_DB_SEAFILE_DB_NAME:-seafile_db}
      - SEAFILE_MYSQL_DB_SEAHUB_DB_NAME=${SEAFILE_MYSQL_DB_SEAHUB_DB_NAME:-seahub_db}
      - TIME_ZONE=${TIME_ZONE:-Etc/UTC}
      - INIT_SEAFILE_ADMIN_EMAIL=${INIT_SEAFILE_ADMIN_EMAIL:-me@example.com}
      - INIT_SEAFILE_ADMIN_PASSWORD=${INIT_SEAFILE_ADMIN_PASSWORD:-asecret}
      - SEAFILE_SERVER_HOSTNAME=${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}
      - SEAFILE_SERVER_PROTOCOL=${SEAFILE_SERVER_PROTOCOL:-http}
      - SITE_ROOT=${SITE_ROOT:-/}
      - NON_ROOT=${NON_ROOT:-false}
      - JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY:?Variable is not set or empty}
      - SEAFILE_LOG_TO_STDOUT=${SEAFILE_LOG_TO_STDOUT:-false}
      - ENABLE_GO_FILESERVER=${ENABLE_GO_FILESERVER:-true}
      - ENABLE_SEADOC=${ENABLE_SEADOC:-true}
      - SEADOC_SERVER_URL=${SEAFILE_SERVER_PROTOCOL:-http}://${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}/sdoc-server
      - CACHE_PROVIDER=${CACHE_PROVIDER:-redis}
      - REDIS_HOST=${REDIS_HOST:-redis}
      - REDIS_PORT=${REDIS_PORT:-6379}
      - REDIS_PASSWORD=${REDIS_PASSWORD:-}
      - MEMCACHED_HOST=${MEMCACHED_HOST:-memcached}
      - MEMCACHED_PORT=${MEMCACHED_PORT:-11211}
      - ENABLE_NOTIFICATION_SERVER=${ENABLE_NOTIFICATION_SERVER:-false}
      - INNER_NOTIFICATION_SERVER_URL=${INNER_NOTIFICATION_SERVER_URL:-http://notification-server:8083}
      - NOTIFICATION_SERVER_URL=${NOTIFICATION_SERVER_URL:-${SEAFILE_SERVER_PROTOCOL:-http}://${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}/notification}
      - ENABLE_SEAFILE_AI=${ENABLE_SEAFILE_AI:-false}
      - ENABLE_FACE_RECOGNITION=${ENABLE_FACE_RECOGNITION:-false}
      - SEAFILE_AI_SERVER_URL=${SEAFILE_AI_SERVER_URL:-http://seafile-ai:8888}
      - SEAFILE_AI_SECRET_KEY=${JWT_PRIVATE_KEY:?Variable is not set or empty}
      - MD_FILE_COUNT_LIMIT=${MD_FILE_COUNT_LIMIT:-100000}
    labels:
      caddy: ${SEAFILE_SERVER_PROTOCOL:-http}://${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}
      caddy.reverse_proxy: "{{upstreams 80}}"
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:80 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - seafile-net
networks:
  seafile-net:
    name: seafile-net

users-backup.sh Executable file

@@ -0,0 +1,25 @@
#!/bin/bash
# Backup /mnt/misc/alvis and /mnt/misc/liza to /mnt/backups/users/
# Runs every 3 days via root crontab.
# Notifies Zabbix (item users.backup.ts, id 70379 on AgapHost) after success.
set -euo pipefail

DEST=/mnt/backups/users
mkdir -p "$DEST/alvis" "$DEST/liza"

rsync -a --delete /mnt/misc/alvis/ "$DEST/alvis/"
rsync -a --delete /mnt/misc/liza/ "$DEST/liza/"
echo "$(date): Backup complete."

# Notify Zabbix (token stored in /root/.zabbix_token)
if [[ -f /root/.zabbix_token ]]; then
  ZABBIX_TOKEN=$(cat /root/.zabbix_token)
  curl -s -X POST http://localhost:81/api_jsonrpc.php \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $ZABBIX_TOKEN" \
    -d "{\"jsonrpc\":\"2.0\",\"method\":\"history.push\",\"id\":1,\"params\":{\"itemid\":\"70379\",\"value\":\"$(date '+%Y-%m-%d %H:%M')\"}}" > /dev/null \
    && echo "Zabbix notified."
fi
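
For reference, the "every 3 days via root crontab" schedule could look like the entry below. The minute, hour, log path, and script location are assumptions for illustration; the actual crontab is not part of this diff.

```
# m  h  dom  mon dow  command
0    4  */3  *   *    /root/users-backup.sh >> /var/log/users-backup.log 2>&1
```

Note that `*/3` in the day-of-month field resets at month boundaries, so the gap between runs can occasionally be one or two days around the 1st.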

vaultwarden/backup.sh Executable file

@@ -0,0 +1,41 @@
#!/bin/bash
# Vaultwarden backup — uses built-in container backup command (safe with live DB).
# Runs every 3 days via root crontab. Keeps last 5 backups.
# Notifies Zabbix (item vaultwarden.backup.ts, id 70368 on AgapHost) after success.
set -euo pipefail

BACKUP_DIR="/mnt/backups/vaultwarden"
DATA_DIR="/mnt/ssd/dbs/vw-data"
DATE=$(date '+%Y%m%d-%H%M')
DEST="$BACKUP_DIR/$DATE"
mkdir -p "$DEST"

# Run built-in backup inside container — writes db_<timestamp>.sqlite3 to /data/ on the host
docker exec vaultwarden /vaultwarden backup 2>&1

# Move the newly created sqlite3 backup file out of the data dir
find "$DATA_DIR" -maxdepth 1 -name 'db_*.sqlite3' -newer "$DATA_DIR/db.sqlite3" | xargs -r mv -t "$DEST/"

# Copy config and RSA keys
cp "$DATA_DIR/config.json" "$DEST/"
cp "$DATA_DIR"/rsa_key* "$DEST/"
[ -d "$DATA_DIR/attachments" ] && cp -r "$DATA_DIR/attachments" "$DEST/"
[ -d "$DATA_DIR/sends" ] && cp -r "$DATA_DIR/sends" "$DEST/"
echo "$(date): Backup complete: $DEST"
ls "$DEST/"

# Notify Zabbix (token stored in /root/.zabbix_token)
if [[ -f /root/.zabbix_token ]]; then
  ZABBIX_TOKEN=$(cat /root/.zabbix_token)
  curl -s -X POST http://localhost:81/api_jsonrpc.php \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $ZABBIX_TOKEN" \
    -d "{\"jsonrpc\":\"2.0\",\"method\":\"history.push\",\"id\":1,\"params\":{\"itemid\":\"70368\",\"value\":\"$(date '+%Y-%m-%d %H:%M')\"}}" > /dev/null \
    && echo "Zabbix notified."
fi

# Rotate: keep last 5 backups
ls -1dt "$BACKUP_DIR"/[0-9]*-[0-9]* 2>/dev/null | tail -n +6 | xargs -r rm -rf

@@ -0,0 +1,12 @@
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      DOMAIN: "https://vw.alogins.net"
      ADMIN_TOKEN: $$argon2id$$v=19$$m=65540,t=3,p=4$$bkE5Y1grLzF4czZiUk9tcWR6WTlGNC9CQmxGeHg0R1JUMFBrY2l0SVZocz0$$hn0snCmQkzDTEBzPYGQxFNmHxTgpxQ+O8OvzOhR3/a0
    volumes:
      - /mnt/ssd/dbs/vw-data/:/data/
    ports:
      - 127.0.0.1:8041:80

zabbix/.env Normal file

@@ -0,0 +1,12 @@
# Zabbix web frontend
WEB_PORT=81
PHP_TZ=Europe/Amsterdam
# Agent
AGENT_HOSTNAME=Zabbix server
# PostgreSQL
POSTGRES_DATA_DIR=/mnt/ssd/dbs/zabbix
POSTGRES_USER=zabbix
POSTGRES_PASSWORD=fefwG11UAFfs110
POSTGRES_DB=zabbix

zabbix/docker-compose.yml Normal file

@@ -0,0 +1,98 @@
services:
  postgres-server:
    image: postgres:16-alpine
    restart: unless-stopped
    volumes:
      - ${POSTGRES_DATA_DIR}:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    networks:
      - database
    stop_grace_period: 1m
  zabbix-server:
    image: zabbix/zabbix-server-pgsql:ubuntu-7.4-latest
    restart: unless-stopped
    ports:
      - "10051:10051"
    environment:
      DB_SERVER_HOST: postgres-server
      DB_SERVER_PORT: 5432
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - /etc/localtime:/etc/localtime:ro
    ulimits:
      nproc: 65535
      nofile:
        soft: 20000
        hard: 40000
    depends_on:
      - postgres-server
    networks:
      - database
      - backend
      - frontend
    stop_grace_period: 30s
  zabbix-web:
    image: zabbix/zabbix-web-apache-pgsql:ubuntu-7.4-latest
    restart: unless-stopped
    ports:
      - "${WEB_PORT}:8080"
    environment:
      ZBX_SERVER_HOST: zabbix-server
      ZBX_SERVER_PORT: 10051
      DB_SERVER_HOST: postgres-server
      DB_SERVER_PORT: 5432
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      PHP_TZ: ${PHP_TZ}
    volumes:
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/ping"]
      interval: 1m30s
      timeout: 3s
      retries: 3
      start_period: 40s
      start_interval: 5s
    depends_on:
      - postgres-server
      - zabbix-server
    networks:
      - database
      - backend
      - frontend
    stop_grace_period: 10s
  zabbix-agent:
    image: zabbix/zabbix-agent:ubuntu-7.4-latest
    restart: unless-stopped
    environment:
      ZBX_HOSTNAME: ${AGENT_HOSTNAME}
      ZBX_SERVER_HOST: zabbix-server
      ZBX_SERVER_ACTIVE: zabbix-server
    volumes:
      - /etc/localtime:/etc/localtime:ro
    privileged: true
    pid: host
    depends_on:
      - zabbix-server
    networks:
      - backend
    stop_grace_period: 5s
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true
  database:
    driver: bridge
    internal: true