feat(maintenance): reminders CLI + systemd timer drop-in
Phase 3-4 of maintenance reminders.

scripts/maintenance-reminders.ts: thin CLI that opens the same
pg pool as the app and calls runRemindersOnce. Flags:
--soon-days N (default 7), --company <id> (default all),
--dry-run (count without firing), --backfill (mark all currently
due as already-sent, mutually exclusive with --dry-run). Prints a
single-line JSON summary so journald/jq handles it cleanly.

package.json: + reminders:check script.

DEPLOYMENT.md: documents the systemd .service + .timer pair for
06:00 daily, with Persistent=true so a missed run during host
downtime still fires on next boot. Includes the first-deploy
protocol (--dry-run to scope, then either let day-one alert or
--backfill for a clean slate).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:15:03 +07:00

Deployment

Guide for running buildfor_life_ops on a Linux server behind a reverse proxy. The build output of @sveltejs/adapter-node is a plain Node HTTP server — nothing exotic.

Stack assumptions:

  • Linux host (Debian/Ubuntu preferred; works on anything with glibc ≥ 2.31)
  • fnm for Node version management — pinned to 24 via .node-version
  • pnpm 9.15.0 via Corepack — pinned in package.json#packageManager
  • PostgreSQL 16+ reachable from the app host (local socket or remote)
  • A reverse proxy terminating TLS (nginx / Caddy / Traefik — examples below use nginx)
  • systemd to supervise the Node process

1. Prepare the host

# As root, one-time
apt update
apt install -y build-essential curl git postgresql-client

# Dedicated service user
adduser --system --group --home /opt/buildfor_life_ops --shell /bin/bash buildfor_life_ops

2. Install fnm and pin Node

# Run as the service user
sudo -iu buildfor_life_ops
curl -fsSL https://fnm.vercel.app/install | bash -s -- --skip-shell

# Activate in current shell and on login
cat >> ~/.bashrc <<'EOF'
export PATH="$HOME/.local/share/fnm:$PATH"
eval "$(fnm env --use-on-cd --shell bash)"
EOF
source ~/.bashrc

# Install the pinned major — fnm resolves .node-version once the repo is cloned
fnm install 24
fnm default 24

3. Enable pnpm via Corepack

Corepack ships with Node, so nothing extra to install:

corepack enable
corepack prepare pnpm@9.15.0 --activate
pnpm --version   # → 9.15.0

4. Clone the repo and install dependencies

cd /opt/buildfor_life_ops
git clone git@gitssh.b4l.co.th:B4L/buildfor_life_ops.git app
cd app

# fnm picks up .node-version automatically on cd; verify:
node --version   # → v24.x.x

# Reproducible install — fails if lockfile drifts
pnpm install --frozen-lockfile

5. Configure .env

cp .env.example .env      # if present — otherwise copy from a trusted source
$EDITOR .env

Required keys (the app refuses to boot without them — see src/lib/server/env.ts):

Key                      Notes
DATABASE_URL             postgres://user:pass@host:5432/buildfor_life_ops
SESSION_SECRET           ≥ 32 hex chars — openssl rand -hex 32
STORAGE_SIGNING_SECRET   ≥ 32 hex chars, independent of SESSION_SECRET
PUBLIC_BASE_URL          External URL, e.g. https://ops.b4l.co.th
STORAGE_BACKEND          local or s3
STORAGE_LOCAL_ROOT       Absolute path to blob root, e.g. /var/lib/buildfor_life_ops/storage

Optional keys:

  • SMTP: SMTP_HOST, SMTP_PORT, SMTP_USER, SMTP_PASS, SMTP_FROM, SMTP_SECURE. Email is silently disabled when any of HOST/PORT/FROM is unset.
  • Matrix: MATRIX_HOMESERVER, MATRIX_ACCESS_TOKEN. Matrix delivery is disabled when either is unset. The per-company room comes from companies.settings.matrix_room_id.
  • OIDC: OIDC_ENABLED=true + the four OIDC_* values when wiring SSO.
  • S3: S3_BUCKET, S3_REGION, optional S3_ENDPOINT for MinIO/compatibles, plus credentials.
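Assembled, a minimal .env for a local-storage deploy looks like this (every value is a placeholder to replace):

```ini
DATABASE_URL=postgres://buildfor_life_ops:change-me@localhost:5432/buildfor_life_ops
SESSION_SECRET=<output of openssl rand -hex 32>
STORAGE_SIGNING_SECRET=<a second, independent openssl rand -hex 32>
PUBLIC_BASE_URL=https://ops.b4l.co.th
STORAGE_BACKEND=local
STORAGE_LOCAL_ROOT=/var/lib/buildfor_life_ops/storage
```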

Lock it down:

chmod 600 .env
chown buildfor_life_ops:buildfor_life_ops .env

6. Create the database

sudo -iu postgres psql <<'SQL'
CREATE USER buildfor_life_ops WITH PASSWORD '<strong-password>';
CREATE DATABASE buildfor_life_ops OWNER buildfor_life_ops;
SQL
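Before moving on, it is worth sanity-checking that the connection string in .env points at the database just created. The parsing below is plain bash; the commented psql probe assumes the postgresql-client installed in step 1:

```shell
# Placeholder URL — substitute the DATABASE_URL value from .env.
DATABASE_URL='postgres://buildfor_life_ops:secret@localhost:5432/buildfor_life_ops'

# Strip the scheme and credentials to show which host/db is about to be hit.
hostpart="${DATABASE_URL#*@}"     # localhost:5432/buildfor_life_ops
echo "connecting to ${hostpart#*/} on ${hostpart%%/*}"

# On the real host (needs the server from step 6 reachable):
# psql "$DATABASE_URL" -c 'select 1;'
```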

7. Run migrations

pnpm run db:migrate

Re-run after every deploy — the migrator is idempotent and skips applied migrations.

8. Bootstrap the first admin

pnpm run create-user -- \
  --email admin@b4l.co.th \
  --password '<strong-password>' \
  --name 'Admin' \
  --company 'B4L' \
  --role admin

9. Build for production

pnpm run build

Output: build/ — a standalone Node server bundle. It still resolves its runtime dependencies from node_modules, so prune to production deps only:

pnpm install --prod --frozen-lockfile

(Do this on the deploy host itself, not on a build machine with a different OS or CPU architecture, so native modules like @node-rs/argon2 and sharp pick the right binaries.)

10. systemd unit

/etc/systemd/system/buildfor_life_ops.service:

[Unit]
Description=buildfor_life_ops (SvelteKit node adapter)
After=network.target postgresql.service
Wants=postgresql.service

[Service]
Type=simple
User=buildfor_life_ops
Group=buildfor_life_ops
WorkingDirectory=/opt/buildfor_life_ops/app
EnvironmentFile=/opt/buildfor_life_ops/app/.env
Environment=NODE_ENV=production
Environment=HOST=127.0.0.1
Environment=PORT=3000
Environment=BODY_SIZE_LIMIT=10M
# fnm-installed Node — path resolved via the service user's shim dir.
# Pin the exact version here so a stray `fnm default` does not change the runtime.
ExecStart=/opt/buildfor_life_ops/.local/state/fnm_multishells/current/bin/node build/index.js
Restart=on-failure
RestartSec=5
# Hardening
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/opt/buildfor_life_ops/app/storage /var/lib/buildfor_life_ops
ProtectHome=true
PrivateTmp=true

[Install]
WantedBy=multi-user.target

If fnm multishell paths are awkward (they rotate per shell), use the canonical alias path instead:

ExecStart=/opt/buildfor_life_ops/.local/share/fnm/aliases/default/bin/node build/index.js

Enable and start:

systemctl daemon-reload
systemctl enable --now buildfor_life_ops
systemctl status buildfor_life_ops
journalctl -u buildfor_life_ops -f

Maintenance reminders timer

The app records next_due_at on every time-based maintenance schedule but does not poll itself. A daily systemd timer runs pnpm run reminders:check, which scans for schedules entering the 7-day warning window or already overdue and fans out via the existing in-app + email + Matrix notifier. Re-runs are idempotent — maintenance_reminders_sent deduplicates per (schedule, kind, due_at).

/etc/systemd/system/buildfor_life_ops-reminders.service:

[Unit]
Description=buildfor_life_ops maintenance reminder cron
After=postgresql.service network.target
Wants=postgresql.service

[Service]
Type=oneshot
User=ops
Group=ops
WorkingDirectory=/home/ops/buildfor_life_ops
EnvironmentFile=/home/ops/buildfor_life_ops/.env
Environment=NODE_ENV=production
ExecStart=/home/ops/.local/share/fnm/aliases/default/bin/pnpm run reminders:check

/etc/systemd/system/buildfor_life_ops-reminders.timer:

[Unit]
Description=Run buildfor_life_ops maintenance reminders daily

[Timer]
OnCalendar=*-*-* 06:00:00
Persistent=true
Unit=buildfor_life_ops-reminders.service

[Install]
WantedBy=timers.target

Enable:

sudo systemctl daemon-reload
sudo systemctl enable --now buildfor_life_ops-reminders.timer
sudo systemctl list-timers buildfor_life_ops-reminders.timer

First-run protocol to avoid a deluge of stale alerts on day one:

# Inspect what would fire without notifying.
sudo -iu ops bash -lc 'cd ~/buildfor_life_ops && pnpm run reminders:check -- --dry-run'

# If the count is reasonable, run normally — the timer will pick up subsequent
# windows automatically. Or, if you want a clean slate, mark everything
# currently-due as already-notified (no fan-out), so day-one alerts only
# new breaches:
sudo -iu ops bash -lc 'cd ~/buildfor_life_ops && pnpm run reminders:check -- --backfill'

Logs end up in journalctl -u buildfor_life_ops-reminders.service. Each run prints a single JSON line ({ ok, scanned, fired, skippedDedup, noRecipients, ... }), so journalctl -u buildfor_life_ops-reminders.service --output=cat | grep '"ok":true' | jq gives a clean trend view.
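If jq is not installed on the host, the flat single-line summary also yields to plain grep. The counter values below are invented for illustration:

```shell
# One summary line as journald would store it (field values are made up).
summary='{"ok":true,"scanned":12,"fired":3,"skippedDedup":9,"noRecipients":0}'

# Pull a single counter without jq:
echo "$summary" | grep -o '"fired":[0-9]*'   # → "fired":3
```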

11. Reverse proxy (nginx)

/etc/nginx/sites-available/buildfor_life_ops:

server {
    listen 80;
    server_name ops.b4l.co.th;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name ops.b4l.co.th;

    ssl_certificate     /etc/letsencrypt/live/ops.b4l.co.th/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ops.b4l.co.th/privkey.pem;

    # Uploads: documents + CSV import. Keep in sync with BODY_SIZE_LIMIT in systemd unit.
    client_max_body_size 10m;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host  $host;
        proxy_set_header Upgrade           $http_upgrade;
        proxy_set_header Connection        "upgrade";
        proxy_read_timeout 60s;
    }
}

SvelteKit trusts X-Forwarded-* when ORIGIN or PROTOCOL_HEADER/HOST_HEADER env vars are set. Recommended:

# Add to the systemd unit [Service] block
Environment=ORIGIN=https://ops.b4l.co.th
Environment=PROTOCOL_HEADER=x-forwarded-proto
Environment=HOST_HEADER=x-forwarded-host

12. Upgrades

Automated (CI-driven)

.gitea/workflows/deploy.yml runs on every push to main. It SSHes into the LXC host, pulls, installs, builds, migrates, and restarts the service. Required Gitea secrets:

Secret        Purpose
DEPLOY_HOST   SSH host of the LXC container
DEPLOY_USER   SSH user (must own $DEPLOY_PATH and have a sudoers entry for systemctl restart buildfor_life_ops)
DEPLOY_KEY    Private SSH key matching an authorized key on the deploy user
DEPLOY_PORT   (optional, default 22)
DEPLOY_PATH   (optional, default /opt/buildfor_life_ops/app)

The repo itself is cloned from https://git.b4l.co.th/B4L/buildfor_life_ops.git (public HTTPS) — no repo deploy key needed, unlike the budget sibling.
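For orientation, a workflow of that shape might look like the sketch below. This is illustrative only, not the actual .gitea/workflows/deploy.yml; the appleboy/ssh-action step and its inputs are assumptions, while the secret names match the table above:

```yaml
# Illustrative sketch — the real .gitea/workflows/deploy.yml is the source of truth.
name: Deploy to LXC
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.DEPLOY_HOST }}
          username: ${{ secrets.DEPLOY_USER }}
          key: ${{ secrets.DEPLOY_KEY }}
          port: ${{ secrets.DEPLOY_PORT }}
          script: |
            cd /opt/buildfor_life_ops/app    # or $DEPLOY_PATH
            git pull --ff-only
            pnpm install --frozen-lockfile
            pnpm run build
            pnpm run db:migrate
            pnpm install --prod --frozen-lockfile
            sudo systemctl restart buildfor_life_ops
```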

Manual

cd /opt/buildfor_life_ops/app
git fetch --tags
git checkout <tag-or-sha>

# Node version may have changed — fnm re-reads .node-version on cd, but force it:
fnm use --install-if-missing

pnpm install --frozen-lockfile
pnpm run build
pnpm run db:migrate
pnpm install --prod --frozen-lockfile

systemctl restart buildfor_life_ops
journalctl -u buildfor_life_ops -n 100 --no-pager

A migration that cannot be applied forward-only (rare — see drizzle/README.md) needs a maintenance window and a DB snapshot first.

13. Rollback

cd /opt/buildfor_life_ops/app
git checkout <previous-tag>
pnpm install --frozen-lockfile
pnpm run build
pnpm install --prod --frozen-lockfile
systemctl restart buildfor_life_ops

Schema rollback is manual. Drizzle does not ship down-migrations. If the previous code cannot read the current schema, restore the DB from the pre-upgrade snapshot before checking out the old tag.

14. Backups

Two things matter:

  • Postgres — pg_dump -Fc buildfor_life_ops > ops-$(date +%F).dump, daily, offsite. Retain ≥ 14 days.
  • Blob storage — when STORAGE_BACKEND=local, STORAGE_LOCAL_ROOT holds all uploaded documents. Snapshot it with the filesystem (ZFS/btrfs) or rsync it alongside the DB dump. When STORAGE_BACKEND=s3, rely on bucket versioning + cross-region replication.

The DB is the source of truth for documents.storage_key → blob mapping. A blob directory without its matching DB rows is unusable.
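The two bullets above can be combined into one nightly script. The sketch below is a starting point, not a finished job: DEST, SRC, and the pg_isready guard are assumptions to adapt, and a production version should alert loudly on a failed dump rather than skip it:

```shell
#!/usr/bin/env bash
# Nightly backup sketch for STORAGE_BACKEND=local.
set -euo pipefail

DEST="${DEST:-$HOME/backups/buildfor_life_ops}"   # point at another disk/host in real use
SRC="${SRC:-/var/lib/buildfor_life_ops/storage}"
mkdir -p "$DEST"

# 1. Database: custom-format dump, restorable with pg_restore.
if command -v pg_dump >/dev/null && pg_isready -q >/dev/null 2>&1; then
  pg_dump -Fc buildfor_life_ops > "$DEST/ops-$(date +%F).dump"
else
  echo "warning: PostgreSQL unreachable, dump skipped" >&2
fi

# 2. Blobs: mirror the local storage root alongside the dumps.
if [ -d "$SRC" ]; then
  rsync -a --delete "$SRC/" "$DEST/storage/"
fi

# 3. Retention: prune dumps older than 14 days.
find "$DEST" -name 'ops-*.dump' -mtime +14 -delete
echo "backup pass complete: $DEST"
```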

15. Health check

There is no dedicated /healthz endpoint yet. For now, probe GET /login — it returns 200 without a session:

curl -fsS -o /dev/null -w '%{http_code}\n' https://ops.b4l.co.th/login

16. Observability

  • Logs: journalctl -u buildfor_life_ops. All app logs go to stdout/stderr.
  • Metrics: not wired yet. If/when added, expose on a separate localhost port so nginx does not proxy them publicly.

17. Common pitfalls

  • Environment validation failed on boot — .env is missing, the EnvironmentFile= path is wrong, or one of the min(32) secrets is too short.
  • sharp fails with could not load the "sharp" module — cross-compiled install. Re-run pnpm install --prod --frozen-lockfile on the deploy host.
  • @node-rs/argon2 prebuilt binary missing — same cause, same fix. If the host is exotic (musl, ARM), set npm_config_build_from_source=true before install.
  • Cookies not setting — PUBLIC_BASE_URL must match the user-facing URL exactly (scheme + host). In production this means HTTPS; the session cookie is Secure.
  • 413 on document upload — bump both client_max_body_size in nginx and BODY_SIZE_LIMIT in the systemd unit; they must agree.
  • fnm picks the wrong Node after a server reboot — ensure fnm default 24 was run for the service user, and the systemd ExecStart= points at the aliases path, not a multishell path.
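For the 413 case, the unit-side limit can be raised with a systemd drop-in instead of editing the unit file in place. 25M here is an arbitrary example value:

```ini
# /etc/systemd/system/buildfor_life_ops.service.d/override.conf
# (created via: systemctl edit buildfor_life_ops)
[Service]
Environment=BODY_SIZE_LIMIT=25M
```

Pair it with client_max_body_size 25m; in the nginx vhost, then restart the service and reload nginx.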