54 Commits

Author SHA1 Message Date
gitea-runner
45417aa7ee flake.lock: update inputs
Some checks failed
Check Flake / check-flake (push) Successful in 2m18s
Auto Update Flake / auto-update (push) Failing after 15s
2026-04-10 23:00:31 -07:00
gitea-runner
50dc0c5cc3 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m25s
Auto Update Flake / auto-update (push) Successful in 9m32s
2026-04-09 23:00:50 -07:00
gitea-runner
a265472def flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m19s
Auto Update Flake / auto-update (push) Successful in 1h50m58s
2026-04-08 23:00:31 -07:00
gitea-runner
528d438d86 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
Auto Update Flake / auto-update (push) Successful in 5m55s
2026-04-07 23:00:31 -07:00
gitea-runner
1663e286bf flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m26s
Auto Update Flake / auto-update (push) Successful in 11m24s
2026-04-06 23:00:38 -07:00
gitea-runner
91c1bef489 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m15s
Auto Update Flake / auto-update (push) Successful in 1h48m34s
2026-04-05 23:00:29 -07:00
gitea-runner
30585d8727 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m16s
Auto Update Flake / auto-update (push) Successful in 5m47s
2026-04-04 23:00:29 -07:00
gitea-runner
00d3bf09d7 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m19s
Auto Update Flake / auto-update (push) Successful in 5m40s
2026-04-03 23:00:30 -07:00
gitea-runner
1ff6894b35 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
Auto Update Flake / auto-update (push) Successful in 6m44s
2026-04-02 23:00:29 -07:00
gitea-runner
2c4d822429 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m20s
Auto Update Flake / auto-update (push) Successful in 9m49s
2026-04-01 23:00:49 -07:00
gitea-runner
4880fbd0e3 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
Auto Update Flake / auto-update (push) Successful in 1h44m45s
2026-03-31 23:00:29 -07:00
gitea-runner
e611a9e1fe flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
Auto Update Flake / auto-update (push) Successful in 5m36s
2026-03-30 23:00:39 -07:00
gitea-runner
b4c26f1b9f flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m16s
Auto Update Flake / auto-update (push) Successful in 18m45s
2026-03-29 23:00:43 -07:00
gitea-runner
70438c74fc flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
Auto Update Flake / auto-update (push) Successful in 17m35s
2026-03-28 23:00:29 -07:00
gitea-runner
de10fa8dbb flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m23s
Auto Update Flake / auto-update (push) Successful in 10m49s
2026-03-27 23:00:28 -07:00
gitea-runner
f114d45601 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m14s
Auto Update Flake / auto-update (push) Successful in 2h23m25s
2026-03-22 23:00:30 -07:00
gitea-runner
a023a12cf1 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m14s
Auto Update Flake / auto-update (push) Successful in 5m37s
2026-03-21 23:00:32 -07:00
gitea-runner
afc4bd44e7 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
Auto Update Flake / auto-update (push) Successful in 5m59s
2026-03-20 23:00:35 -07:00
gitea-runner
7c4997c00b flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m19s
Auto Update Flake / auto-update (push) Successful in 5m34s
2026-03-19 23:00:29 -07:00
gitea-runner
ab1faaba70 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m18s
Auto Update Flake / auto-update (push) Successful in 10m7s
2026-03-18 23:00:38 -07:00
gitea-runner
2b8a0a36d4 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m16s
Auto Update Flake / auto-update (push) Successful in 1h47m14s
2026-03-17 23:00:38 -07:00
gitea-runner
412e317efd flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m14s
Auto Update Flake / auto-update (push) Successful in 7m31s
2026-03-16 23:00:39 -07:00
gitea-runner
454fe3bec6 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m13s
Auto Update Flake / auto-update (push) Successful in 7m10s
2026-03-15 23:00:42 -07:00
gitea-runner
192babbabe flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m19s
Auto Update Flake / auto-update (push) Successful in 8m41s
2026-03-14 23:00:48 -07:00
2762c323e9 Improve gamescope session
All checks were successful
Check Flake / check-flake (push) Successful in 2m16s
Auto Update Flake / auto-update (push) Successful in 1h13m13s
2026-03-14 18:58:29 -07:00
bd71d6e2f5 Don't ntfy for logrotate failures and add container names to ntfy alerts 2026-03-14 18:58:29 -07:00
4899a37a82 Add gamescope (steam) login option 2026-03-14 18:58:29 -07:00
99200dc201 Initial KDE Plasma Bigscreen mode 2026-03-14 18:58:29 -07:00
4fb1c8957a Make PIA connection check more tolerant of hiccups 2026-03-14 18:58:29 -07:00
d2c274fca5 Bump ntfy attachment expiry time 2026-03-14 18:58:29 -07:00
eac627765a Disable bolt for now since I don't use it and it sometimes randomly hangs 2026-03-14 18:58:29 -07:00
63de76572b Log DIMM temperatures on each check run 2026-03-14 18:58:29 -07:00
cbb94d9f4e Fix VPN check alert limiting to only count failures
StartLimitBurst counts all starts (including successes), so the timer
was getting blocked after ~15 min. Replace with a JSON counter file
that resets on success and daily, only triggering OnFailure alerts for
the first 3 failures per day.
2026-03-14 18:58:29 -07:00
84745a3dc7 Remove recyclarr, I'm not using it currently 2026-03-14 18:58:29 -07:00
1d3a931fd0 Add periodic PIA VPN connectivity check
Oneshot service + timer (every 5 min) inside the VPN container that
verifies WireGuard handshake freshness and internet reachability.
Fails on VPN or internet outage, triggering ntfy alert via OnFailure.
Capped at 3 failures per day via StartLimitBurst.
2026-03-14 18:58:29 -07:00
23b0695cf2 Add DDR5 DIMM temperature monitoring with ntfy alerts
Monitors spd5118 sensors every 5 minutes and sends an ntfy
notification if any DIMM exceeds 55°C. Opt-in via
ntfy-alerts.dimmTempCheck.enable, enabled on s0.
2026-03-14 18:58:29 -07:00
b1a26b681f Add Music Assistant to Dashy and Gatus 2026-03-14 18:58:29 -07:00
401ab250f1 Update README 2026-03-14 18:58:29 -07:00
cd864b4061 Remove LanguageTool service 2026-03-14 18:58:29 -07:00
gitea-runner
6d2c5267a4 flake.lock: update inputs
Some checks failed
Check Flake / check-flake (push) Successful in 2m13s
Auto Update Flake / auto-update (push) Failing after 47s
2026-03-10 23:00:31 -07:00
gitea-runner
76bcc114a1 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m24s
Auto Update Flake / auto-update (push) Successful in 12m54s
2026-03-09 23:00:48 -07:00
gitea-runner
f2a482a46f flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m14s
Auto Update Flake / auto-update (push) Successful in 1h42m53s
2026-03-07 22:00:55 -08:00
gitea-runner
969d8d8d5e flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m13s
Auto Update Flake / auto-update (push) Successful in 12m50s
2026-03-06 22:00:31 -08:00
gitea-runner
518a7d0ffb flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m14s
Auto Update Flake / auto-update (push) Successful in 17m40s
2026-03-05 22:00:37 -08:00
gitea-runner
2d6ad9f090 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m12s
Auto Update Flake / auto-update (push) Successful in 8m19s
2026-03-04 22:00:30 -08:00
88cfad2a69 Update flake inputs (nixpkgs, home-manager, claude-code-nix)
All checks were successful
Check Flake / check-flake (push) Successful in 2m12s
Auto Update Flake / auto-update (push) Successful in 7m5s
Remove obsolete libreoffice-noto-fonts-subset.patch — upstream nixpkgs
removed the noto-fonts-subset code from the libreoffice derivation.
2026-03-03 22:54:45 -08:00
86a9f777ad Use the host's overlays in gitea container (for attic patches)
All checks were successful
Check Flake / check-flake (push) Successful in 3m42s
2026-03-03 22:54:14 -08:00
b29e80f3e9 Patch attic-client to retry on push failure
Some checks failed
Check Flake / check-flake (push) Failing after 4m5s
Backport zhaofengli/attic#246 to work around a hyper connection pool
race condition that causes spurious "connection closed before message
completed" errors during cache uploads in CI.
2026-03-03 22:40:27 -08:00
e32834ff7f Prevent ntfy-failure from calling itself
Some checks failed
Check Flake / check-flake (push) Failing after 4m13s
2026-03-03 22:36:58 -08:00
bb39587292 Fix unifi service taking 5+ minutes to shut down
Some checks failed
Check Flake / check-flake (push) Failing after 4m8s
UniFi's Java process crashes during shutdown (Spring context race
condition) leaving mongod orphaned in the cgroup. The upstream module
sets KillSignal=SIGCONT so systemd won't interrupt the graceful
shutdown, but with the default KillMode=control-group this means
mongod also only gets SIGCONT (a no-op) and sits there until the
5-minute timeout triggers SIGKILL.

Switch to KillMode=mixed so the main Java process still gets the
harmless SIGCONT while mongod gets a proper SIGTERM for a clean
database shutdown.
2026-03-03 22:02:21 -08:00
712b52a48d Capture full systemd unit name for ntfy error alerts 2026-03-03 21:46:45 -08:00
c6eeea982e Add ignoredUnits option; skip logrotate failures on s0 because they are spurious 2026-03-03 21:46:19 -08:00
6bd1b4466e Update claude.md 2026-03-03 21:43:36 -08:00
d806d4df0a Increase tinyproxy wait-online timeout to 180s
Some checks failed
Check Flake / check-flake (push) Failing after 5m29s
The bridge takes ~62s to come up on s0, exceeding the 60s timeout
and causing tinyproxy to fail on first start.
2026-03-03 21:04:40 -08:00
29 changed files with 573 additions and 146 deletions


@@ -85,17 +85,3 @@ When adding or removing a web-facing service, update both:
- Always use `--no-link` when running `nix build`
- Don't use `nix build --dry-run` unless you only need evaluation — it skips the actual build
- Avoid `2>&1` on nix commands — it can cause error output to be missed
## Git Worktrees
When the user asks you to "start a worktree" or work in a worktree, **do not create one manually** with `git worktree add`. Instead, tell the user to start a new session with:
```bash
claude --worktree <name>
```
This is the built-in Claude Code worktree workflow. It creates the worktree at `.claude/worktrees/<name>/` with a branch `worktree-<name>` and starts a new Claude session inside it. Cleanup is handled automatically on exit.
When instructed to work in a git worktree (e.g., via `isolation: "worktree"` on a subagent), you **MUST** do so. If you are unable to create or use a git worktree, you **MUST** stop work immediately and report the failure to the user. Do not fall back to working in the main working tree.
When applying work from a git worktree back to the main branch, commit in the worktree first, then use `git cherry-pick` from the main working tree to bring the commit over. Do not use `git checkout` or `git apply` to copy files directly. Do **not** automatically apply worktree work to the main branch — always ask the user for approval first.
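For example, the apply-back flow might look like this (a sketch; `<name>` as in the worktree layout above):
```bash
# Inside the worktree: commit the finished work
git -C .claude/worktrees/<name> commit -am "Describe the change"

# From the main working tree, after the user approves:
git cherry-pick worktree-<name>   # the branch tip is the commit made above
```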


@@ -1,16 +1,26 @@
# NixOS Configuration
A NixOS flake managing multiple machines with role-based configuration, agenix secrets, and sandboxed dev workspaces.
A NixOS flake managing multiple machines with role-based configuration, agenix secrets, sandboxed dev workspaces, and self-hosted services.
## Layout
- `/common` - shared configuration imported by all machines
- `/boot` - bootloaders, CPU microcode, remote LUKS unlock over Tor
- `/network` - Tailscale, VPN tunneling via PIA
- `/network` - Tailscale, PIA VPN with leak-proof containers, sandbox networking
- `/pc` - desktop/graphical config (enabled by the `personal` role)
- `/server` - service definitions and extensions
- `/server` - self-hosted service definitions (Gitea, Matrix, Nextcloud, media stack, etc.)
- `/sandboxed-workspace` - isolated dev environments (VM, container, or Incus)
- `/ntfy` - push notification integration (service failures, SSH logins, ZFS alerts)
- `binary-cache.nix` - nix binary cache configuration (nixos.org, cachix, self-hosted atticd)
- `nix-builder.nix` - distributed build delegation across machines
- `backups.nix` - snapshot-aware restic backups to Backblaze B2
- `/machines` - per-machine config (`default.nix`, `hardware-configuration.nix`, `properties.nix`)
- `fry` - personal desktop
- `howl` - personal laptop
- `ponyo` - web/mail server (Gitea, Nextcloud, LibreChat, mail)
- `storage/s0` - storage/media server (Jellyfin, Home Assistant, monitoring, productivity apps)
- `zoidberg` - media center
- `ephemeral` - minimal config for building install ISOs and kexec images
- `/secrets` - agenix-encrypted secrets, decryptable by machines based on their roles
- `/home` - Home Manager user config
- `/lib` - custom library functions extending nixpkgs lib
@@ -25,8 +35,14 @@ A NixOS flake managing multiple machines with role-based configuration, agenix s
**Remote LUKS unlock over Tor** — Machines with encrypted root disks can be unlocked remotely via SSH. An embedded Tor hidden service starts in the initrd so the machine is reachable even without a known IP, using a separate SSH host key for the boot environment.
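A minimal sketch of the SSH side using the standard NixOS initrd options (the Tor hidden service wiring is this repo's own module and isn't shown; key path and authorized key are placeholders):

```nix
{
  boot.initrd.network.enable = true;
  boot.initrd.network.ssh = {
    enable = true;
    port = 2222; # non-standard port keeps known_hosts entries separate
    # Dedicated host key: the real system's host key stays encrypted on disk
    hostKeys = [ "/etc/secrets/initrd/ssh_host_ed25519_key" ];
    authorizedKeys = [ "ssh-ed25519 AAAA... admin@laptop" ];
  };
}
```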
**VPN containers** — A `vpn-container` module spins up an ephemeral NixOS container with a PIA WireGuard tunnel. The host creates the WireGuard interface and authenticates with PIA, then hands it off to the container's network namespace. This ensures that the container can **never** have direct internet access. Leakage is impossible.
**VPN containers** — A `pia-vpn` module provides leak-proof VPN networking for containers. The host creates a WireGuard interface and runs tinyproxy on a bridge network for PIA API bootstrap. A dedicated VPN container authenticates with PIA via the proxy, configures WireGuard, and masquerades bridge traffic through the tunnel. Service containers default-route exclusively through the VPN container — leakage is impossible by network topology. Supports port forwarding with automatic port assignment.
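To illustrate the "leak-proof by topology" claim, a service container's networkd config boils down to a single default route pointing at the VPN container (a sketch; addresses illustrative):

```nix
{
  systemd.network.networks."10-eth0" = {
    matchConfig.Name = "eth0";
    address = [ "192.168.100.11/24" ];
    # The only route out is via the VPN container; if the tunnel is down,
    # packets simply have nowhere to go.
    routes = [ { Gateway = "192.168.100.2"; } ];
  };
}
```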
**Sandboxed workspaces** — Isolated dev environments backed by microVMs (cloud-hypervisor), systemd-nspawn containers, or Incus. Each workspace gets a static IP on a NAT'd bridge, auto-generated SSH host keys, shell aliases for management, and comes pre-configured with Claude Code. The sandbox network blocks access to the local LAN while allowing internet.
**Sandboxed workspaces** — Isolated dev environments backed by microVMs (cloud-hypervisor), systemd-nspawn containers, or Incus. Each workspace gets a static IP on a NAT'd bridge (`192.168.83.0/24`), auto-generated SSH host keys, shell aliases for management, and comes pre-configured with Claude Code. The sandbox network blocks access to the local LAN while allowing internet.
**Snapshot-aware backups** — Restic backups to Backblaze B2 automatically create ZFS snapshots or btrfs read-only snapshots before backing up, using mount namespaces to bind-mount frozen data over the original paths so restic records correct paths. Each backup group gets a `restic_<group>` CLI wrapper. Supports `.nobackup` marker files.
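The core trick, sketched in shell (illustrative only, not the module's actual code; pool and paths are placeholders):

```bash
zfs snapshot tank/data@restic-tmp
# Private mount namespace: bind the frozen snapshot over the live path so
# restic sees, and records, the original /tank/data paths.
unshare --mount sh -c '
  mount --bind /tank/data/.zfs/snapshot/restic-tmp /tank/data
  restic backup /tank/data
'
zfs destroy tank/data@restic-tmp
```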
**Self-hosted services** — Comprehensive service stack across ponyo and s0: Gitea (git hosting + CI), Nextcloud (files/calendar), Matrix (chat), mail server, Jellyfin/Sonarr/Radarr/Lidarr (media), Home Assistant/Zigbee2MQTT/Frigate (home automation), LibreChat (AI), Gatus (monitoring), and productivity tools (Vikunja, Actual Budget, Outline, Linkwarden, Memos).
**Push notifications** — ntfy integration alerts on systemd service failures, SSH logins, and ZFS pool issues. Gatus monitors all web-facing services and sends alerts via ntfy.
**Remote deployment** — deploy-rs handles remote machine deployments with boot-only or immediate activation modes. A Makefile wraps common operations (`make deploy <host>`, `make deploy-activate <host>`).


@@ -235,7 +235,7 @@ in
after = [ "systemd-networkd.service" ];
requires = [ "systemd-networkd.service" ];
serviceConfig.ExecStartPre = [
"+${pkgs.systemd}/lib/systemd/systemd-networkd-wait-online --interface=${cfg.bridgeName}:no-carrier --timeout=60"
"+${pkgs.systemd}/lib/systemd/systemd-networkd-wait-online --interface=${cfg.bridgeName}:no-carrier --timeout=180"
];
};


@@ -11,6 +11,7 @@ with lib;
let
cfg = config.pia-vpn;
hostName = config.networking.hostName;
mkContainer = name: ctr: {
autoStart = true;
@@ -28,6 +29,9 @@ let
config = { config, pkgs, lib, ... }: {
imports = allModules ++ [ ctr.config ];
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.hostLabel = "${hostName}/${name}";
# Static IP with gateway pointing to VPN container
networking.useNetworkd = true;
systemd.network.enable = true;


@@ -6,6 +6,7 @@ with lib;
let
cfg = config.pia-vpn;
hostName = config.networking.hostName;
scripts = import ./scripts.nix;
# Port forwarding derived state
@@ -98,6 +99,8 @@ in
# Route ntfy alerts through the host proxy (VPN container has no gateway on eth0)
ntfy-alerts.curlExtraArgs = "--proxy http://${cfg.hostAddress}:${toString cfg.proxyPort}";
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.hostLabel = "${hostName}/pia-vpn";
# Enable forwarding so bridge traffic can go through WG
boot.kernel.sysctl."net.ipv4.ip_forward" = 1;
@@ -226,6 +229,93 @@ in
RandomizedDelaySec = "1m";
};
};
# Periodic VPN connectivity check — fails if VPN or internet is down,
# triggering ntfy alert via the OnFailure drop-in.
# Tracks failures with a counter file so only the first 3 failures per
# day trigger an alert (subsequent failures exit 0 to suppress noise).
systemd.services.pia-vpn-check = {
description = "Check PIA VPN connectivity";
after = [ "pia-vpn-setup.service" ];
requires = [ "pia-vpn-setup.service" ];
path = with pkgs; [ wireguard-tools iputils coreutils gawk jq ];
serviceConfig.Type = "oneshot";
script = ''
set -euo pipefail
COUNTER_FILE="/var/lib/pia-vpn/check-fail-count.json"
MAX_ALERTS=3
check_vpn() {
# Check that WireGuard has a peer with a recent handshake (within 3 minutes)
handshake=$(wg show ${cfg.interfaceName} latest-handshakes | awk '{print $2}')
if [ -z "$handshake" ] || [ "$handshake" -eq 0 ]; then
echo "No WireGuard handshake recorded" >&2
return 1
fi
now=$(date +%s)
age=$((now - handshake))
if [ "$age" -gt 180 ]; then
echo "WireGuard handshake is stale (''${age}s ago)" >&2
return 1
fi
# Verify internet connectivity through VPN tunnel
if ! ping -c1 -W10 1.1.1.1 >/dev/null 2>&1; then
echo "Cannot reach internet through VPN" >&2
return 1
fi
echo "PIA VPN connectivity OK (handshake ''${age}s ago)"
return 0
}
MAX_RETRIES=4
for attempt in $(seq 1 $MAX_RETRIES); do
if check_vpn; then
rm -f "$COUNTER_FILE"
exit 0
fi
if [ "$attempt" -lt "$MAX_RETRIES" ]; then
echo "Attempt $attempt/$MAX_RETRIES failed, retrying in 5 minutes..." >&2
sleep 300
fi
done
# All retries failed: read and update the counter (reset if from a previous day)
today=$(date +%Y-%m-%d)
count=0
if [ -f "$COUNTER_FILE" ]; then
stored=$(jq -r '.date // ""' "$COUNTER_FILE")
if [ "$stored" = "$today" ]; then
count=$(jq -r '.count // 0' "$COUNTER_FILE")
fi
fi
count=$((count + 1))
jq -n --arg date "$today" --argjson count "$count" \
'{"date": $date, "count": $count}' > "$COUNTER_FILE"
if [ "$count" -le "$MAX_ALERTS" ]; then
echo "Failure $count/$MAX_ALERTS today alerting" >&2
exit 1
else
echo "Failure $count today suppressing alert (already sent $MAX_ALERTS)" >&2
exit 0
fi
'';
};
systemd.timers.pia-vpn-check = {
description = "Periodic PIA VPN connectivity check";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/30";
RandomizedDelaySec = "30s";
};
};
};
};
};


@@ -5,6 +5,7 @@
./service-failure.nix
./ssh-login.nix
./zfs.nix
./dimm-temp.nix
];
options.ntfy-alerts = {
@@ -19,6 +20,18 @@
default = "";
description = "Extra arguments to pass to curl (e.g. --proxy http://host:port).";
};
ignoredUnits = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "Unit names to skip failure notifications for.";
};
hostLabel = lib.mkOption {
type = lib.types.str;
default = config.networking.hostName;
description = "Label used in ntfy alert titles to identify this host/container.";
};
};
config = lib.mkIf config.thisMachine.hasRole."ntfy" {
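A consumer of these options, matching the container configs elsewhere in this diff, looks roughly like this (label illustrative):

```nix
ntfy-alerts = {
  ignoredUnits = [ "logrotate" ];   # known-spurious failures, don't alert
  hostLabel = "s0/gitea-runner";    # identifies the container in alert titles
};
```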

common/ntfy/dimm-temp.nix (new file)

@@ -0,0 +1,73 @@
{ config, lib, pkgs, ... }:
let
cfg = config.ntfy-alerts;
hasNtfy = config.thisMachine.hasRole."ntfy";
checkScript = pkgs.writeShellScript "dimm-temp-check" ''
PATH="${lib.makeBinPath [ pkgs.lm_sensors pkgs.gawk pkgs.coreutils pkgs.curl ]}"
threshold=55
hot=""
summary=""
while IFS= read -r line; do
case "$line" in
spd5118-*)
chip="$line"
;;
*temp1_input:*)
temp="''${line##*: }"
whole="''${temp%%.*}"
summary="''${summary:+$summary, }$chip: ''${temp}°C"
if [ "$whole" -ge "$threshold" ]; then
hot="$hot"$'\n'" $chip: ''${temp}°C"
fi
;;
esac
done < <(sensors -u 'spd5118-*' 2>/dev/null)
echo "$summary"
if [ -n "$hot" ]; then
message="DIMM temperature above ''${threshold}°C on ${config.networking.hostName}:$hot"
curl \
--fail --silent --show-error \
--max-time 30 --retry 3 \
-H "Authorization: Bearer $NTFY_TOKEN" \
-H "Title: High DIMM temperature on ${config.networking.hostName}" \
-H "Priority: high" \
-H "Tags: thermometer" \
-d "$message" \
"${cfg.serverUrl}/service-failures"
echo "$message" >&2
fi
'';
in
{
options.ntfy-alerts.dimmTempCheck.enable = lib.mkEnableOption "DDR5 DIMM temperature monitoring via spd5118";
config = lib.mkIf (cfg.dimmTempCheck.enable && hasNtfy) {
systemd.services.dimm-temp-check = {
description = "Check DDR5 DIMM temperatures and alert on overheating";
wants = [ "network-online.target" ];
after = [ "network-online.target" ];
serviceConfig = {
Type = "oneshot";
EnvironmentFile = "/run/agenix/ntfy-token";
ExecStart = checkScript;
};
};
systemd.timers.dimm-temp-check = {
description = "Periodic DDR5 DIMM temperature check";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/5";
Persistent = true;
};
};
};
}


@@ -14,6 +14,14 @@ in
EnvironmentFile = "/run/agenix/ntfy-token";
ExecStart = "${pkgs.writeShellScript "ntfy-failure-notify" ''
unit="$1"
# Prevent infinite recursion if this service itself fails
[[ "$unit" == ntfy-failure@* ]] && exit 0
ignored_units=(${lib.concatMapStringsSep " " (u: lib.escapeShellArg u) cfg.ignoredUnits})
for ignored in "''${ignored_units[@]}"; do
if [[ "$unit" == "$ignored" ]]; then
exit 0
fi
done
logfile=$(mktemp)
trap 'rm -f "$logfile"' EXIT
${pkgs.systemd}/bin/journalctl -u "$unit" -n 50 --no-pager -o short > "$logfile" 2>/dev/null \
@@ -24,7 +32,7 @@ in
--max-time 30 --retry 3 \
${cfg.curlExtraArgs} \
-H "Authorization: Bearer $NTFY_TOKEN" \
-H "Title: Service failure on ${config.networking.hostName}" \
-H "Title: Service failure on ${cfg.hostLabel}" \
-H "Priority: high" \
-H "Tags: rotating_light" \
-H "Message: Unit $unit failed at $(date +%c)" \
@@ -40,7 +48,7 @@ in
mkdir -p $out/lib/systemd/system/service.d
cat > $out/lib/systemd/system/service.d/ntfy-on-failure.conf <<'EOF'
[Unit]
OnFailure=ntfy-failure@%p.service
OnFailure=ntfy-failure@%N.service
EOF
'')
];


@@ -9,6 +9,14 @@ in
services.displayManager.sddm.wayland.enable = true;
services.desktopManager.plasma6.enable = true;
services.displayManager.sessionPackages = [
pkgs.plasma-bigscreen
];
# Bigscreen binaries must be on PATH for autostart services, KCMs, and
# internal plasmashell launches (settings, input handler, envmanager, etc.)
environment.systemPackages = [ pkgs.plasma-bigscreen ];
# kde apps
users.users.googlebot.packages = with pkgs; [
# akonadi


@@ -8,6 +8,29 @@ in
programs.steam.enable = true;
hardware.steam-hardware.enable = true; # steam controller
# Login DE Option: Steam Gamescope (Steam Deck-like session)
programs.gamescope = {
enable = true;
};
programs.steam.gamescopeSession = {
enable = true;
args = [
"--hdr-enabled"
"--hdr-itm-enabled"
"--adaptive-sync"
];
steamArgs = [
"-steamos3"
"-gamepadui"
"-pipewire-dmabuf"
];
env = {
STEAM_ENABLE_VOLUME_HANDLER = "1";
STEAM_DISABLE_AUDIO_DEVICE_SWITCHING = "1";
};
};
environment.systemPackages = [ pkgs.gamescope-wsi ];
users.users.googlebot.packages = [
pkgs.steam
];


@@ -270,6 +270,16 @@ in
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Music Assistant";
group = "s0";
url = "http://s0.koi-bebop.ts.net:8095";
interval = "5m";
conditions = [
"[STATUS] == 200"
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Vikunja";
group = "s0";
@@ -320,16 +330,7 @@ in
];
alerts = [{ type = "ntfy"; }];
}
{
name = "LanguageTool";
group = "s0";
url = "https://languagetool.s0.neet.dev";
interval = "5m";
conditions = [
"[STATUS] == 200"
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Unifi";
group = "s0";


@@ -1,4 +1,4 @@
{ config, lib, ... }:
{ config, lib, allModules, ... }:
# Gitea Actions Runner inside a NixOS container.
# The container shares the host's /nix/store (read-only) and nix-daemon socket,
@@ -8,6 +8,7 @@
let
thisMachineIsARunner = config.thisMachine.hasRole."gitea-actions-runner";
hostName = config.networking.hostName;
containerName = "gitea-runner";
giteaRunnerUid = 991;
giteaRunnerGid = 989;
@@ -31,7 +32,10 @@ in
};
config = { config, lib, pkgs, ... }: {
system.stateVersion = "25.11";
imports = allModules;
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.hostLabel = "${hostName}/${containerName}";
services.gitea-actions-runner.instances.inst = {
enable = true;


@@ -18,6 +18,7 @@ in
auth-default-access = "deny-all";
behind-proxy = true;
enable-login = true;
attachment-expiry-duration = "48h";
};
# backups


@@ -13,6 +13,15 @@ in
services.unifi.unifiPackage = pkgs.unifi;
services.unifi.mongodbPackage = pkgs.mongodb-7_0;
# The upstream module sets KillSignal=SIGCONT so systemd doesn't interfere
# with UniFi's self-managed shutdown. But UniFi's Java process crashes during
# shutdown (Spring context already closed) leaving mongod orphaned in the
# cgroup. With the default KillMode=control-group, mongod only gets SIGCONT
# (a no-op) and runs until the 5min timeout triggers SIGKILL.
# KillMode=mixed sends SIGCONT to the main process but SIGTERM to remaining
# children, giving mongod a clean shutdown instead of SIGKILL.
systemd.services.unifi.serviceConfig.KillMode = "mixed";
networking.firewall = lib.mkIf cfg.openMinimalFirewall {
allowedUDPPorts = [
3478 # STUN

flake.lock (generated)

@@ -53,11 +53,11 @@
]
},
"locked": {
"lastModified": 1772252645,
"narHash": "sha256-SVP3BYv/tY19P7mh0aG2Pgq4M/CynQEnV4y+57Ed91g=",
"lastModified": 1775848625,
"narHash": "sha256-y2/PYZu+yAeG+ueAuhjeeAWHOSvZMJfPiNs7pQJ/Wbc=",
"owner": "sadjow",
"repo": "claude-code-nix",
"rev": "42c9207e79f1e6b8b95b54a64c10452275717466",
"rev": "2a665ed3a46cb363630df50150ecf47f45a1d893",
"type": "github"
},
"original": {
@@ -186,11 +186,11 @@
]
},
"locked": {
"lastModified": 1769939035,
"narHash": "sha256-Fok2AmefgVA0+eprw2NDwqKkPGEI5wvR+twiZagBvrg=",
"lastModified": 1774959120,
"narHash": "sha256-Pzk6UbueeWy9WFiDY6iA1aHid+2AMzkS6gg2x2cSkz4=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "a8ca480175326551d6c4121498316261cbb5b260",
"rev": "c06f90f1eb6569bdaf6a4a10cb7e66db4454ac2a",
"type": "github"
},
"original": {
@@ -228,11 +228,11 @@
]
},
"locked": {
"lastModified": 1772380461,
"narHash": "sha256-O3ukj3Bb3V0Tiy/4LUfLlBpWypJ9P0JeUgsKl2nmZZY=",
"lastModified": 1775781825,
"narHash": "sha256-L5yKTpR+alrZU2XYYvIxCeCP4LBHU5jhwSj7H1VAavg=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "f140aa04d7d14f8a50ab27f3691b5766b17ae961",
"rev": "e35c39fca04fee829cecdf839a50eb9b54d8a701",
"type": "github"
},
"original": {
@@ -250,11 +250,11 @@
"spectrum": "spectrum"
},
"locked": {
"lastModified": 1772338235,
"narHash": "sha256-9XcwtSIL/c+pkC3SBNuxCJuSktFOBV1TLvvkhekyB8I=",
"lastModified": 1775847073,
"narHash": "sha256-OyRZOIQZZQNrIDN40jrhY1SFTzTNYURT5MPhZZchSbY=",
"owner": "astro",
"repo": "microvm.nix",
"rev": "9d1ff9b53532908a5eba7707931c9093508b6b92",
"rev": "239045c84aa62c2ce1349fa4c1ceae9eb6ce9e85",
"type": "github"
},
"original": {
@@ -270,11 +270,11 @@
]
},
"locked": {
"lastModified": 1772341813,
"narHash": "sha256-/PQ0ubBCMj/MVCWEI/XMStn55a8dIKsvztj4ZVLvUrQ=",
"lastModified": 1775365369,
"narHash": "sha256-DgH5mveLoau20CuTnaU5RXZWgFQWn56onQ4Du2CqYoI=",
"owner": "Mic92",
"repo": "nix-index-database",
"rev": "a2051ff239ce2e8a0148fa7a152903d9a78e854f",
"rev": "cef5cf82671e749ac87d69aadecbb75967e6f6c3",
"type": "github"
},
"original": {
@@ -285,11 +285,11 @@
},
"nixos-hardware": {
"locked": {
"lastModified": 1771969195,
"narHash": "sha256-qwcDBtrRvJbrrnv1lf/pREQi8t2hWZxVAyeMo7/E9sw=",
"lastModified": 1775490113,
"narHash": "sha256-2ZBhDNZZwYkRmefK5XLOusCJHnoeKkoN95hoSGgMxWM=",
"owner": "NixOS",
"repo": "nixos-hardware",
"rev": "41c6b421bdc301b2624486e11905c9af7b8ec68e",
"rev": "c775c2772ba56e906cbeb4e0b2db19079ef11ff7",
"type": "github"
},
"original": {
@@ -301,11 +301,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1772198003,
"narHash": "sha256-I45esRSssFtJ8p/gLHUZ1OUaaTaVLluNkABkk6arQwE=",
"lastModified": 1775710090,
"narHash": "sha256-ar3rofg+awPB8QXDaFJhJ2jJhu+KqN/PRCXeyuXR76E=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "dd9b079222d43e1943b6ebd802f04fd959dc8e61",
"rev": "4c1018dae018162ec878d42fec712642d214fdfa",
"type": "github"
},
"original": {
@@ -344,11 +344,11 @@
]
},
"locked": {
"lastModified": 1772064816,
"narHash": "sha256-ks1D9Rtmopd5F/8ENjEUJpSYYMxv603/v6TRen9Hq54=",
"lastModified": 1775244324,
"narHash": "sha256-TSAozmLyWCRbUJu6tXQvhTjsDKNj9q1CsEqwhhh9kMU=",
"owner": "simple-nixos-mailserver",
"repo": "nixos-mailserver",
"rev": "ea4dc17f4bc0f65eed082fa394509e4543072b56",
"rev": "c45a1e4385e81b937b353ee4ce97f5cfd60ceff2",
"type": "gitlab"
},
"original": {
@@ -361,11 +361,11 @@
"spectrum": {
"flake": false,
"locked": {
"lastModified": 1759482047,
"narHash": "sha256-H1wiXRQHxxPyMMlP39ce3ROKCwI5/tUn36P8x6dFiiQ=",
"lastModified": 1772189877,
"narHash": "sha256-i1p90Rgssb//aNiTDFq46ZG/fk3LmyRLChtp/9lddyA=",
"ref": "refs/heads/main",
"rev": "c5d5786d3dc938af0b279c542d1e43bce381b4b9",
"revCount": 996,
"rev": "fe39e122d898f66e89ffa17d4f4209989ccb5358",
"revCount": 1255,
"type": "git",
"url": "https://spectrum-os.org/git/spectrum"
},


@@ -139,7 +139,6 @@
src = nixpkgs;
patches = [
./patches/dont-break-nix-serve.patch
./patches/libreoffice-noto-fonts-subset.patch
];
};
patchedNixpkgs = nixpkgs.lib.fix (self: (import "${patchedNixpkgsSrc}/flake.nix").outputs { self = nixpkgs; });


@@ -18,7 +18,7 @@
boot.extraModulePackages = [ ];
# thunderbolt
services.hardware.bolt.enable = true;
services.hardware.bolt.enable = false;
# firmware
firmware.x86_64.enable = true;


@@ -22,7 +22,7 @@
boot.extraModulePackages = [ ];
# thunderbolt
services.hardware.bolt.enable = true;
services.hardware.bolt.enable = false;
# firmware
firmware.x86_64.enable = true;


@@ -401,6 +401,15 @@
statusCheck = false;
id = "5_4201_sandman";
};
music-assistant = {
title = "Music Assistant";
description = "s0:8095";
icon = "hl-music-assistant";
url = "http://s0.koi-bebop.ts.net:8095";
target = "sametab";
statusCheck = false;
id = "6_4201_music-assistant";
};
};
haList = [
haItems.home-assistant
@@ -409,6 +418,7 @@
haItems.frigate
haItems.valetudo
haItems.sandman
haItems.music-assistant
];
in
{
@@ -474,15 +484,6 @@
statusCheck = false;
id = "4_5301_outline";
};
languagetool = {
title = "LanguageTool";
description = "languagetool.s0.neet.dev";
icon = "hl-languagetool";
url = "https://languagetool.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "5_5301_languagetool";
};
};
prodList = [
prodItems.vikunja
@@ -490,7 +491,6 @@
prodItems.linkwarden
prodItems.memos
prodItems.outline
prodItems.languagetool
];
in
{


@@ -9,6 +9,9 @@
networking.hostName = "s0";
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.dimmTempCheck.enable = true;
# system.autoUpgrade.enable = true;
nix.gc.automatic = lib.mkForce false; # allow the nix store to serve as a build cache
@@ -140,30 +143,6 @@
services.lidarr.enable = true;
services.lidarr.user = "public_data";
services.lidarr.group = "public_data";
services.recyclarr = {
enable = true;
configuration = {
radarr.radarr_main = {
api_key = {
_secret = "/run/credentials/recyclarr.service/radarr-api-key";
};
base_url = "http://localhost:7878";
quality_definition.type = "movie";
};
sonarr.sonarr_main = {
api_key = {
_secret = "/run/credentials/recyclarr.service/sonarr-api-key";
};
base_url = "http://localhost:8989";
quality_definition.type = "series";
};
};
};
systemd.services.recyclarr.serviceConfig.LoadCredential = [
"radarr-api-key:/run/agenix/radarr-api-key"
"sonarr-api-key:/run/agenix/sonarr-api-key"
];
users.groups.public_data.gid = 994;
users.users.public_data = {
@@ -174,8 +153,6 @@
};
};
};
age.secrets.radarr-api-key.file = ../../../secrets/radarr-api-key.age;
age.secrets.sonarr-api-key.file = ../../../secrets/sonarr-api-key.age;
# jellyfin
# jellyfin cannot run in the vpn container and use hardware encoding
@@ -251,7 +228,6 @@
(mkVirtualHost "linkwarden.s0.neet.dev" "http://localhost:${toString config.services.linkwarden.port}")
(mkVirtualHost "memos.s0.neet.dev" "http://localhost:${toString config.services.memos.settings.MEMOS_PORT}")
(mkVirtualHost "outline.s0.neet.dev" "http://localhost:${toString config.services.outline.port}")
(mkVirtualHost "languagetool.s0.neet.dev" "http://localhost:${toString config.services.languagetool.port}")
];
tailscaleAuth = {
@@ -275,7 +251,6 @@
"linkwarden.s0.neet.dev"
# "memos.s0.neet.dev" # messes up memos /auth route
# "outline.s0.neet.dev" # messes up outline /auth route
"languagetool.s0.neet.dev"
];
expectedTailnet = "koi-bebop.ts.net";
};
@@ -365,10 +340,5 @@
owner = config.services.outline.user;
};
services.languagetool = {
enable = true;
port = 60613;
};
boot.binfmt.emulatedSystems = [ "aarch64-linux" "armv7l-linux" ];
}


@@ -5,10 +5,6 @@
./hardware-configuration.nix
];
# Login DE Option: Steam
programs.steam.gamescopeSession.enable = true;
# programs.gamescope.capSysNice = true;
# Login DE Option: Kodi
services.xserver.desktopManager.kodi.enable = true;
services.xserver.desktopManager.kodi.package =
@@ -35,7 +31,7 @@
"L+ /opt/rocm/hip - - - - ${pkgs.rocmPackages.clr}"
];
services.displayManager.defaultSession = "plasma";
services.displayManager.defaultSession = "plasma-bigscreen-wayland";
users.users.cris = {
isNormalUser = true;
@@ -54,10 +50,10 @@
uid = 1002;
};
# Auto login into Plasma in john zoidberg account
# Auto login into Plasma Bigscreen in john zoidberg account
services.displayManager.sddm.settings = {
Autologin = {
Session = "plasma";
Session = "plasma-bigscreen-wayland";
User = "john";
};
};


@@ -16,6 +16,14 @@ in
nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ prev.writableTmpDirAsHomeHook ];
});
# Retry on push failure to work around hyper connection pool race condition.
# https://github.com/zhaofengli/attic/pull/246
attic-client = prev.attic-client.overrideAttrs (old: {
patches = (old.patches or [ ]) ++ [
../patches/attic-client-push-retry.patch
];
});
# Add --zeroconf-port support to Spotify Connect plugin so librespot
# binds to a fixed port that can be opened in the firewall.
music-assistant = prev.music-assistant.overrideAttrs (old: {
@@ -23,4 +31,12 @@ in
../patches/music-assistant-zeroconf-port.patch
];
});
# Plasma Bigscreen: TV-optimized KDE shell (not yet packaged in nixpkgs)
plasma-bigscreen = import ./plasma-bigscreen.nix {
inherit (prev.kdePackages)
mkKdeDerivation plasma-workspace plasma-wayland-protocols
qtmultimedia qtwayland qtwebengine qcoro;
inherit (prev) lib fetchFromGitLab pkg-config sdl3 libcec wayland;
};
}


@@ -0,0 +1,79 @@
{
mkKdeDerivation,
lib,
fetchFromGitLab,
pkg-config,
plasma-workspace,
qtmultimedia,
qtwayland,
qtwebengine,
qcoro,
plasma-wayland-protocols,
wayland,
sdl3,
libcec,
}:
mkKdeDerivation {
pname = "plasma-bigscreen";
version = "unstable-2026-03-07";
src = fetchFromGitLab {
domain = "invent.kde.org";
owner = "plasma";
repo = "plasma-bigscreen";
rev = "bd143fea7e386bac1652b8150a3ed3d5ef7cf93c";
hash = "sha256-y439IX7e0+XqxqFj/4+P5le0hA7DiwA+smDsD0UH/fI=";
};
patches = [
../patches/plasma-bigscreen-input-handler-app-id.patch
];
extraNativeBuildInputs = [ pkg-config ];
extraBuildInputs = [
qtmultimedia
qtwayland
qtwebengine
qcoro
plasma-wayland-protocols
wayland
sdl3
libcec
];
# Match project version to installed Plasma release so cmake version checks pass
postPatch = ''
substituteInPlace CMakeLists.txt \
--replace-fail 'set(PROJECT_VERSION "6.5.80")' \
'set(PROJECT_VERSION "${plasma-workspace.version}")'
# Upstream references a nonexistent startplasma-waylandsession binary.
# Fix this in the cmake template (before @KDE_INSTALL_FULL_LIBEXECDIR@ is substituted).
substituteInPlace bin/plasma-bigscreen-wayland.in \
--replace-fail \
'startplasma-wayland --xwayland --libinput --exit-with-session=@KDE_INSTALL_FULL_LIBEXECDIR@/startplasma-waylandsession' \
'startplasma-wayland'
'';
# FIXME: work around Qt 6.10 cmake API changes
cmakeFlags = [ "-DQT_FIND_PRIVATE_MODULES=1" ];
# QML lint fails on missing runtime-only imports (org.kde.private.biglauncher)
# that are only available inside a running Plasma session
dontQmlLint = true;
postFixup = ''
# Session .desktop references $out/libexec/plasma-dbus-run-session-if-needed
# but the binary lives in plasma-workspace
substituteInPlace "$out/share/wayland-sessions/plasma-bigscreen-wayland.desktop" \
--replace-fail \
"$out/libexec/plasma-dbus-run-session-if-needed" \
"${plasma-workspace}/libexec/plasma-dbus-run-session-if-needed"
'';
passthru.providedSessions = [ "plasma-bigscreen-wayland" ];
meta.license = with lib.licenses; [ gpl2Plus ];
}


@@ -0,0 +1,143 @@
diff --git a/attic/src/api/v1/upload_path.rs b/attic/src/api/v1/upload_path.rs
index 5b1231e5..cb90928c 100644
--- a/attic/src/api/v1/upload_path.rs
+++ b/attic/src/api/v1/upload_path.rs
@@ -25,7 +25,7 @@ pub const ATTIC_NAR_INFO_PREAMBLE_SIZE: &str = "X-Attic-Nar-Info-Preamble-Size";
/// Regardless of client compression, the server will always decompress
/// the NAR to validate the NAR hash before applying the server-configured
/// compression again.
-#[derive(Debug, Serialize, Deserialize)]
+#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct UploadPathNarInfo {
/// The name of the binary cache to upload to.
pub cache: CacheName,
diff --git a/client/src/push.rs b/client/src/push.rs
index 309bd4b6..f3951d2b 100644
--- a/client/src/push.rs
+++ b/client/src/push.rs
@@ -560,57 +560,83 @@ pub async fn upload_path(
);
let bar = mp.add(ProgressBar::new(path_info.nar_size));
bar.set_style(style);
- let nar_stream = NarStreamProgress::new(store.nar_from_path(path.to_owned()), bar.clone())
- .map_ok(Bytes::from);
- let start = Instant::now();
- match api
- .upload_path(upload_info, nar_stream, force_preamble)
- .await
- {
- Ok(r) => {
- let r = r.unwrap_or(UploadPathResult {
- kind: UploadPathResultKind::Uploaded,
- file_size: None,
- frac_deduplicated: None,
- });
-
- let info_string: String = match r.kind {
- UploadPathResultKind::Deduplicated => "deduplicated".to_string(),
- _ => {
- let elapsed = start.elapsed();
- let seconds = elapsed.as_secs_f64();
- let speed = (path_info.nar_size as f64 / seconds) as u64;
+ // Create a new stream for each retry attempt
+ let bar_ref = &bar;
+ let nar_stream = move || {
+ NarStreamProgress::new(store.nar_from_path(path.to_owned()), bar_ref.clone())
+ .map_ok(Bytes::from)
+ };
- let mut s = format!("{}/s", HumanBytes(speed));
+ let start = Instant::now();
+ let mut retries = 0;
+ const MAX_RETRIES: u32 = 3;
+ const RETRY_DELAY: Duration = Duration::from_millis(250);
- if let Some(frac_deduplicated) = r.frac_deduplicated {
- if frac_deduplicated > 0.01f64 {
- s += &format!(", {:.1}% deduplicated", frac_deduplicated * 100.0);
+ loop {
+ let result = api
+ .upload_path(upload_info.clone(), nar_stream(), force_preamble)
+ .await;
+ match result {
+ Ok(r) => {
+ let r = r.unwrap_or(UploadPathResult {
+ kind: UploadPathResultKind::Uploaded,
+ file_size: None,
+ frac_deduplicated: None,
+ });
+
+ let info_string: String = match r.kind {
+ UploadPathResultKind::Deduplicated => "deduplicated".to_string(),
+ _ => {
+ let elapsed = start.elapsed();
+ let seconds = elapsed.as_secs_f64();
+ let speed = (path_info.nar_size as f64 / seconds) as u64;
+
+ let mut s = format!("{}/s", HumanBytes(speed));
+
+ if let Some(frac_deduplicated) = r.frac_deduplicated {
+ if frac_deduplicated > 0.01f64 {
+ s += &format!(", {:.1}% deduplicated", frac_deduplicated * 100.0);
+ }
}
+
+ s
}
+ };
- s
+ mp.suspend(|| {
+ eprintln!(
+ "✅ {} ({})",
+ path.as_os_str().to_string_lossy(),
+ info_string
+ );
+ });
+ bar.finish_and_clear();
+
+ return Ok(());
+ }
+ Err(e) => {
+ if retries < MAX_RETRIES {
+ retries += 1;
+ mp.suspend(|| {
+ eprintln!(
+ "❕ {}: Upload failed, retrying ({}/{})...",
+ path.as_os_str().to_string_lossy(),
+ retries,
+ MAX_RETRIES
+ );
+ });
+ tokio::time::sleep(RETRY_DELAY).await;
+ continue;
}
- };
- mp.suspend(|| {
- eprintln!(
- "✅ {} ({})",
- path.as_os_str().to_string_lossy(),
- info_string
- );
- });
- bar.finish_and_clear();
+ mp.suspend(|| {
+ eprintln!("❌ {}: {}", path.as_os_str().to_string_lossy(), e);
+ });
+ bar.finish_and_clear();
- Ok(())
- }
- Err(e) => {
- mp.suspend(|| {
- eprintln!("❌ {}: {}", path.as_os_str().to_string_lossy(), e);
- });
- bar.finish_and_clear();
- Err(e)
+ return Err(e);
+ }
}
}
}


@@ -1,16 +0,0 @@
Fix notoSubset glob for noto-fonts >= 2026.02.01.
noto-fonts switched from variable fonts (NotoSansArabic[wdth,wght].ttf)
to static fonts (NotoSansArabic.ttf). The old glob pattern only matched
files with brackets in the name, causing the cp to fail.
--- a/pkgs/applications/office/libreoffice/default.nix
+++ b/pkgs/applications/office/libreoffice/default.nix
@@ -191,7 +191,7 @@
runCommand "noto-fonts-subset" { } ''
mkdir -p "$out/share/fonts/noto/"
${concatMapStrings (x: ''
- cp "${noto-fonts}/share/fonts/noto/NotoSans${x}["*.[ot]tf "$out/share/fonts/noto/"
+ cp "${noto-fonts}/share/fonts/noto/NotoSans${x}"*.[ot]tf "$out/share/fonts/noto/"
'') suffixes}
'';


@@ -0,0 +1,19 @@
Use the correct app_id when pre-authorizing remote-desktop portal access.
The portal's isAppMegaAuthorized() looks up the caller's specific app_id in
the PermissionStore. An empty string only matches apps the portal cannot
identify; it is not a wildcard. Since the input handler is launched via
KIO::CommandLauncherJob with a desktopName, the portal resolves it to the
desktop file ID, so the empty-string entry never matches.
--- a/inputhandler/xdgremotedesktopsystem.cpp
+++ b/inputhandler/xdgremotedesktopsystem.cpp
@@ -66,7 +67,7 @@
QDBusReply<void> reply = permissionStore.call(QStringLiteral("SetPermission"),
QStringLiteral("kde-authorized"), // table
true, // create table if not exists
QStringLiteral("remote-desktop"), // id
- QLatin1String(""), // app (empty for host applications)
+ QStringLiteral("org.kde.plasma.bigscreen.inputhandler"),
QStringList{QStringLiteral("yes")}); // permissions


@@ -1,11 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 hPp1nw gfVRDt7ReEnz10WvPa8UfBBnsRsiw7sxxXQMuXRnCVs
slBNX9Yc1qSu1P5ioNDNLPd97NGE/LWPS/A+u9QGo4E
-> ssh-ed25519 ZDy34A e5MSY5qDP6WuEgbiK0p5esMQJBb3ScVpb15Ff8sTQgQ
9nsimoUQncnbfiu13AnFWZXcpaiySUYdS1eH5O/3Fgg
-> ssh-ed25519 w3nu8g op1KSUhJgM6w/nlaUssQDiraQpVzgnWd//JMu2vFgms
KvEaJfsB7Qkf+PnzFJdZ3wAxm2qj23IS8RRxyuGN2G4
-> ssh-ed25519 evqvfg 9L6pFuqkcChZq/W4zkATXm1Y76SEK+S4SyaiSlJd+C4
j/UWJvo4Cr/UDfaN2milpJ6rU0w1EWdTAzV3SlrCcW8
--- bdG4zC5dx6cSPetH3DNeHEk6EYCJ5TXGrn8OhUMknNU


@@ -63,8 +63,4 @@ with roles;
# zigbee2mqtt secrets
"zigbee2mqtt.yaml.age".publicKeys = zigbee;
# Sonarr and Radarr secrets
"radarr-api-key.age".publicKeys = media-server;
"sonarr-api-key.age".publicKeys = media-server;
}

Binary file not shown.