39 Commits

Author SHA1 Message Date
2762c323e9 Improve gamescope session
All checks were successful
Check Flake / check-flake (push) Successful in 2m16s
Auto Update Flake / auto-update (push) Successful in 1h13m13s
2026-03-14 18:58:29 -07:00
bd71d6e2f5 Don't ntfy for logrotate failures and add container names to ntfy alerts 2026-03-14 18:58:29 -07:00
4899a37a82 Add gamescope (steam) login option 2026-03-14 18:58:29 -07:00
99200dc201 Initial KDE Plasma Bigscreen mode 2026-03-14 18:58:29 -07:00
4fb1c8957a Make PIA connection check more tolerant to hiccups 2026-03-14 18:58:29 -07:00
d2c274fca5 Bump ntfy attachment expiry time 2026-03-14 18:58:29 -07:00
eac627765a Disable bolt for now since I don't use it and it sometimes randomly hangs 2026-03-14 18:58:29 -07:00
63de76572b Log DIMM temperatures on each check run 2026-03-14 18:58:29 -07:00
cbb94d9f4e Fix VPN check alert limiting to only count failures
StartLimitBurst counts all starts (including successes), so the timer
was getting blocked after ~15 min. Replace with a JSON counter file
that resets on success and daily, only triggering OnFailure alerts for
the first 3 failures per day.
2026-03-14 18:58:29 -07:00
84745a3dc7 Remove recyclarr, I'm not using it currently 2026-03-14 18:58:29 -07:00
1d3a931fd0 Add periodic PIA VPN connectivity check
Oneshot service + timer (every 5 min) inside the VPN container that
verifies WireGuard handshake freshness and internet reachability.
Fails on VPN or internet outage, triggering ntfy alert via OnFailure.
Capped at 3 failures per day via StartLimitBurst.
2026-03-14 18:58:29 -07:00
23b0695cf2 Add DDR5 DIMM temperature monitoring with ntfy alerts
Monitors spd5118 sensors every 5 minutes and sends an ntfy
notification if any DIMM exceeds 55°C. Opt-in via
ntfy-alerts.dimmTempCheck.enable, enabled on s0.
2026-03-14 18:58:29 -07:00
b1a26b681f Add Music Assistant to Dashy and Gatus 2026-03-14 18:58:29 -07:00
401ab250f1 Update README 2026-03-14 18:58:29 -07:00
cd864b4061 Remove LanguageTool service 2026-03-14 18:58:29 -07:00
gitea-runner
6d2c5267a4 flake.lock: update inputs
Some checks failed
Check Flake / check-flake (push) Successful in 2m13s
Auto Update Flake / auto-update (push) Failing after 47s
2026-03-10 23:00:31 -07:00
gitea-runner
76bcc114a1 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m24s
Auto Update Flake / auto-update (push) Successful in 12m54s
2026-03-09 23:00:48 -07:00
gitea-runner
f2a482a46f flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m14s
Auto Update Flake / auto-update (push) Successful in 1h42m53s
2026-03-07 22:00:55 -08:00
gitea-runner
969d8d8d5e flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m13s
Auto Update Flake / auto-update (push) Successful in 12m50s
2026-03-06 22:00:31 -08:00
gitea-runner
518a7d0ffb flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m14s
Auto Update Flake / auto-update (push) Successful in 17m40s
2026-03-05 22:00:37 -08:00
gitea-runner
2d6ad9f090 flake.lock: update inputs
All checks were successful
Check Flake / check-flake (push) Successful in 2m12s
Auto Update Flake / auto-update (push) Successful in 8m19s
2026-03-04 22:00:30 -08:00
88cfad2a69 Update flake inputs (nixpkgs, home-manager, claude-code-nix)
All checks were successful
Check Flake / check-flake (push) Successful in 2m12s
Auto Update Flake / auto-update (push) Successful in 7m5s
Remove obsolete libreoffice-noto-fonts-subset.patch — upstream nixpkgs
removed the noto-fonts-subset code from the libreoffice derivation.
2026-03-03 22:54:45 -08:00
86a9f777ad Use the hosts overlays in gitea container (for attic patches)
All checks were successful
Check Flake / check-flake (push) Successful in 3m42s
2026-03-03 22:54:14 -08:00
b29e80f3e9 Patch attic-client to retry on push failure
Some checks failed
Check Flake / check-flake (push) Failing after 4m5s
Backport zhaofengli/attic#246 to work around a hyper connection pool
race condition that causes spurious "connection closed before message
completed" errors during cache uploads in CI.
2026-03-03 22:40:27 -08:00
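The actual fix is an upstream patch so attic-client retries internally; purely for illustration, the same retry-on-transient-failure idea at the CI-script level could look like this hypothetical wrapper (not what the commit does):

```shell
# Hypothetical retry wrapper: run a command up to N times, pausing
# between failed attempts, and propagate failure only if all attempts fail.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "attempt $i/$attempts failed" >&2
    (( i < attempts )) && sleep 1
  done
  return 1
}

# e.g. retry 3 attic push local:nixos $paths
retry 3 true && echo "ok"
```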
e32834ff7f Prevent ntfy-failure from calling itself
Some checks failed
Check Flake / check-flake (push) Failing after 4m13s
2026-03-03 22:36:58 -08:00
bb39587292 Fix unifi service taking 5+ minutes to shut down
Some checks failed
Check Flake / check-flake (push) Failing after 4m8s
UniFi's Java process crashes during shutdown (Spring context race
condition) leaving mongod orphaned in the cgroup. The upstream module
sets KillSignal=SIGCONT so systemd won't interrupt the graceful
shutdown, but with the default KillMode=control-group this means
mongod also only gets SIGCONT (a no-op) and sits there until the
5-minute timeout triggers SIGKILL.

Switch to KillMode=mixed so the main Java process still gets the
harmless SIGCONT while mongod gets a proper SIGTERM for a clean
database shutdown.
2026-03-03 22:02:21 -08:00
712b52a48d Capture full systemd unit name for ntfy error alerts 2026-03-03 21:46:45 -08:00
c6eeea982e Add ignoredUnits option; skip logrotate failures on s0 because they are spurious 2026-03-03 21:46:19 -08:00
6bd1b4466e Update claude.md 2026-03-03 21:43:36 -08:00
d806d4df0a Increase tinyproxy wait-online timeout to 180s
Some checks failed
Check Flake / check-flake (push) Failing after 5m29s
The bridge takes ~62s to come up on s0, exceeding the 60s timeout
and causing tinyproxy to fail on first start.
2026-03-03 21:04:40 -08:00
8997e996ba See if limiting upload jobs helps with push reliability
Some checks failed
Check Flake / check-flake (push) Successful in 14m14s
Auto Update Flake / auto-update (push) Failing after 19s
2026-03-01 21:36:31 -08:00
9914d03ba2 Embed flake git revision in NixOS configuration
Some checks failed
Check Flake / check-flake (push) Has been cancelled
2026-03-01 19:03:47 -08:00
55204b5074 Upgrade to nextcloud 33
Some checks failed
Check Flake / check-flake (push) Has been cancelled
2026-03-01 18:23:55 -08:00
43ec75741d Fix memos failing to open SQLite database on ZFS
Some checks failed
Check Flake / check-flake (push) Failing after 18s
ProtectSystem=strict with ReadWritePaths fails silently on ZFS submounts
(/var/lib is a separate dataset), leaving the data dir read-only. Downgrade
to ProtectSystem=full, which leaves /var writable while still protecting
/usr and /boot.
2026-03-01 17:54:11 -08:00
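The fix amounts to one hardening knob; a minimal sketch, assuming the unit is named `memos` (the unit name and surrounding option set are assumptions, not taken from the repo):

```nix
# Relax systemd sandboxing for a service whose state lives on a separate
# ZFS dataset under /var/lib. ProtectSystem=strict plus ReadWritePaths
# silently left the data dir read-only on the ZFS submount; "full" keeps
# /usr and /boot read-only while leaving /var writable.
{
  systemd.services.memos.serviceConfig.ProtectSystem = "full";
}
```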
000bbd7f4d Update interface names because usePredictableInterfaceNames is now off 2026-03-01 17:52:42 -08:00
e4f0d065f9 Fix tinyproxy starting before VPN bridge is configured
tinyproxy binds to the bridge IP but had no ordering dependency on
systemd-networkd, so it could start before the bridge existed.
2026-03-01 17:52:35 -08:00
7ec85cb406 Move s0 to using systemd networkd 2026-03-01 12:36:10 -08:00
e9e925eb46 Fix annoying 'refused connection' logs spamming dmesg due to spotify connect 2026-03-01 12:36:10 -08:00
2ed58e1ec5 Update flake inputs; drop navidrome; fix noto-fonts subset glob
- Update nixpkgs (Feb 27), home-manager, microvm, nix-index-database,
  claude-code-nix, dailybot
- Remove navidrome service, nginx proxy, dashy entry, and gatus monitor
- Add noto-fonts-subset patch for libreoffice/collabora (noto-fonts
  2026.02.01 switched from variable to static font filenames)
- Add incus-lts writableTmpDirAsHomeHook overlay for sandbox HOME fix
- Add samba4Full overlay to disable CephFS (ceph pinned to python3.11)
2026-03-01 12:36:10 -08:00
37 changed files with 708 additions and 192 deletions


@@ -1,6 +1,6 @@
---
name: update-flake
description: Update nix flake inputs to latest versions, fix build breakage from upstream changes, build all NixOS machines, and run garbage collection. Use when the user wants to update nixpkgs, update flake inputs, upgrade packages, or refresh the flake lockfile.
description: Update nix flake inputs to latest versions, fix build breakage from upstream changes, and build all NixOS machines. Use when the user wants to update nixpkgs, update flake inputs, upgrade packages, or refresh the flake lockfile.
---
# Update Flake
@@ -35,12 +35,6 @@ nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-lin
Fix any build failures before continuing.
### 4. Garbage Collection
```bash
nix store gc
```
### 5. Summary
### 4. Summary
Report: inputs updated, fixes applied, nextcloud changes, and anything needing user attention.


@@ -26,4 +26,4 @@ paths=$(echo "$toplevels" \
and .value.narSize >= 524288
) | .key] | unique[]')
echo "Pushing $(echo "$paths" | wc -l) unique paths to cache"
echo "$paths" | xargs attic push local:nixos
echo "$paths" | xargs attic push -j 1 local:nixos


@@ -85,17 +85,3 @@ When adding or removing a web-facing service, update both:
- Always use `--no-link` when running `nix build`
- Don't use `nix build --dry-run` unless you only need evaluation — it skips the actual build
- Avoid `2>&1` on nix commands — it can cause error output to be missed
## Git Worktrees
When the user asks you to "start a worktree" or work in a worktree, **do not create one manually** with `git worktree add`. Instead, tell the user to start a new session with:
```bash
claude --worktree <name>
```
This is the built-in Claude Code worktree workflow. It creates the worktree at `.claude/worktrees/<name>/` with a branch `worktree-<name>` and starts a new Claude session inside it. Cleanup is handled automatically on exit.
When instructed to work in a git worktree (e.g., via `isolation: "worktree"` on a subagent), you **MUST** do so. If you are unable to create or use a git worktree, you **MUST** stop work immediately and report the failure to the user. Do not fall back to working in the main working tree.
When applying work from a git worktree back to the main branch, commit in the worktree first, then use `git cherry-pick` from the main working tree to bring the commit over. Do not use `git checkout` or `git apply` to copy files directly. Do **not** automatically apply worktree work to the main branch — always ask the user for approval first.


@@ -1,16 +1,26 @@
# NixOS Configuration
A NixOS flake managing multiple machines with role-based configuration, agenix secrets, and sandboxed dev workspaces.
A NixOS flake managing multiple machines with role-based configuration, agenix secrets, sandboxed dev workspaces, and self-hosted services.
## Layout
- `/common` - shared configuration imported by all machines
- `/boot` - bootloaders, CPU microcode, remote LUKS unlock over Tor
- `/network` - Tailscale, VPN tunneling via PIA
- `/network` - Tailscale, PIA VPN with leak-proof containers, sandbox networking
- `/pc` - desktop/graphical config (enabled by the `personal` role)
- `/server` - service definitions and extensions
- `/server` - self-hosted service definitions (Gitea, Matrix, Nextcloud, media stack, etc.)
- `/sandboxed-workspace` - isolated dev environments (VM, container, or Incus)
- `/ntfy` - push notification integration (service failures, SSH logins, ZFS alerts)
- `binary-cache.nix` - nix binary cache configuration (nixos.org, cachix, self-hosted atticd)
- `nix-builder.nix` - distributed build delegation across machines
- `backups.nix` - snapshot-aware restic backups to Backblaze B2
- `/machines` - per-machine config (`default.nix`, `hardware-configuration.nix`, `properties.nix`)
- `fry` - personal desktop
- `howl` - personal laptop
- `ponyo` - web/mail server (Gitea, Nextcloud, LibreChat, mail)
- `storage/s0` - storage/media server (Jellyfin, Home Assistant, monitoring, productivity apps)
- `zoidberg` - media center
- `ephemeral` - minimal config for building install ISOs and kexec images
- `/secrets` - agenix-encrypted secrets, decryptable by machines based on their roles
- `/home` - Home Manager user config
- `/lib` - custom library functions extending nixpkgs lib
@@ -25,8 +35,14 @@ A NixOS flake managing multiple machines with role-based configuration, agenix s
**Remote LUKS unlock over Tor** — Machines with encrypted root disks can be unlocked remotely via SSH. An embedded Tor hidden service starts in the initrd so the machine is reachable even without a known IP, using a separate SSH host key for the boot environment.
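The initrd-SSH half of this can be sketched with stock NixOS options (a hedged sketch: the Tor hidden-service wiring is repo-specific, and the port and key path below are illustrative assumptions):

```nix
{
  boot.initrd.network.enable = true;
  boot.initrd.network.ssh = {
    enable = true;
    port = 2222;
    # A dedicated host key so the boot environment has its own SSH
    # identity, distinct from the booted system's host key:
    hostKeys = [ "/etc/secrets/initrd/ssh_host_ed25519_key" ];
    authorizedKeys = [ "ssh-ed25519 AAAA... admin" ];
  };
}
```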
**VPN containers** — A `vpn-container` module spins up an ephemeral NixOS container with a PIA WireGuard tunnel. The host creates the WireGuard interface and authenticates with PIA, then hands it off to the container's network namespace. This ensures that the container can **never** have direct internet access. Leakage is impossible.
**VPN containers** — A `pia-vpn` module provides leak-proof VPN networking for containers. The host creates a WireGuard interface and runs tinyproxy on a bridge network for PIA API bootstrap. A dedicated VPN container authenticates with PIA via the proxy, configures WireGuard, and masquerades bridge traffic through the tunnel. Service containers default-route exclusively through the VPN container — leakage is impossible by network topology. Supports port forwarding with automatic port assignment.
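On the service-container side, that default-route arrangement can be sketched as follows (addresses illustrative, not from the repo):

```nix
{
  # The container's sole route points at the VPN container's bridge
  # address; with no other gateway configured, non-tunnel egress has
  # nowhere to go.
  systemd.network.networks."10-eth0" = {
    matchConfig.Name = "eth0";
    address = [ "192.168.84.10/24" ];
    gateway = [ "192.168.84.2" ]; # the pia-vpn container
  };
}
```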
**Sandboxed workspaces** — Isolated dev environments backed by microVMs (cloud-hypervisor), systemd-nspawn containers, or Incus. Each workspace gets a static IP on a NAT'd bridge, auto-generated SSH host keys, shell aliases for management, and comes pre-configured with Claude Code. The sandbox network blocks access to the local LAN while allowing internet.
**Sandboxed workspaces** — Isolated dev environments backed by microVMs (cloud-hypervisor), systemd-nspawn containers, or Incus. Each workspace gets a static IP on a NAT'd bridge (`192.168.83.0/24`), auto-generated SSH host keys, shell aliases for management, and comes pre-configured with Claude Code. The sandbox network blocks access to the local LAN while allowing internet.
**Snapshot-aware backups** — Restic backups to Backblaze B2 automatically create ZFS snapshots or btrfs read-only snapshots before backing up, using mount namespaces to bind-mount frozen data over the original paths so restic records correct paths. Each backup group gets a `restic_<group>` CLI wrapper. Supports `.nobackup` marker files.
**Self-hosted services** — Comprehensive service stack across ponyo and s0: Gitea (git hosting + CI), Nextcloud (files/calendar), Matrix (chat), mail server, Jellyfin/Sonarr/Radarr/Lidarr (media), Home Assistant/Zigbee2MQTT/Frigate (home automation), LibreChat (AI), Gatus (monitoring), and productivity tools (Vikunja, Actual Budget, Outline, Linkwarden, Memos).
**Push notifications** — ntfy integration alerts on systemd service failures, SSH logins, and ZFS pool issues. Gatus monitors all web-facing services and sends alerts via ntfy.
**Remote deployment** — deploy-rs handles remote machine deployments with boot-only or immediate activation modes. A Makefile wraps common operations (`make deploy <host>`, `make deploy-activate <host>`).


@@ -187,8 +187,7 @@ in
# Enable systemd-networkd for bridge management
systemd.network.enable = true;
# TODO: re-enable once primary networking uses networkd
systemd.network.wait-online.enable = false;
systemd.network.wait-online.anyInterface = true;
# Tell NetworkManager to ignore VPN bridge and container interfaces
networking.networkmanager.unmanaged = mkIf config.networking.networkmanager.enable [
@@ -231,7 +230,14 @@ in
Port = cfg.proxyPort;
};
};
systemd.services.tinyproxy.before = [ "container@pia-vpn.service" ];
systemd.services.tinyproxy = {
before = [ "container@pia-vpn.service" ];
after = [ "systemd-networkd.service" ];
requires = [ "systemd-networkd.service" ];
serviceConfig.ExecStartPre = [
"+${pkgs.systemd}/lib/systemd/systemd-networkd-wait-online --interface=${cfg.bridgeName}:no-carrier --timeout=180"
];
};
# WireGuard interface creation (host-side oneshot)
# Creates the interface in the host namespace so encrypted UDP stays in host netns.


@@ -11,6 +11,7 @@ with lib;
let
cfg = config.pia-vpn;
hostName = config.networking.hostName;
mkContainer = name: ctr: {
autoStart = true;
@@ -28,6 +29,9 @@ let
config = { config, pkgs, lib, ... }: {
imports = allModules ++ [ ctr.config ];
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.hostLabel = "${hostName}/${name}";
# Static IP with gateway pointing to VPN container
networking.useNetworkd = true;
systemd.network.enable = true;


@@ -6,6 +6,7 @@ with lib;
let
cfg = config.pia-vpn;
hostName = config.networking.hostName;
scripts = import ./scripts.nix;
# Port forwarding derived state
@@ -98,6 +99,8 @@ in
# Route ntfy alerts through the host proxy (VPN container has no gateway on eth0)
ntfy-alerts.curlExtraArgs = "--proxy http://${cfg.hostAddress}:${toString cfg.proxyPort}";
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.hostLabel = "${hostName}/pia-vpn";
# Enable forwarding so bridge traffic can go through WG
boot.kernel.sysctl."net.ipv4.ip_forward" = 1;
@@ -226,6 +229,93 @@ in
RandomizedDelaySec = "1m";
};
};
# Periodic VPN connectivity check — fails if VPN or internet is down,
# triggering ntfy alert via the OnFailure drop-in.
# Tracks failures with a counter file so only the first 3 failures per
# day trigger an alert (subsequent failures exit 0 to suppress noise).
systemd.services.pia-vpn-check = {
description = "Check PIA VPN connectivity";
after = [ "pia-vpn-setup.service" ];
requires = [ "pia-vpn-setup.service" ];
path = with pkgs; [ wireguard-tools iputils coreutils gawk jq ];
serviceConfig.Type = "oneshot";
script = ''
set -euo pipefail
COUNTER_FILE="/var/lib/pia-vpn/check-fail-count.json"
MAX_ALERTS=3
check_vpn() {
# Check that WireGuard has a peer with a recent handshake (within 3 minutes)
handshake=$(wg show ${cfg.interfaceName} latest-handshakes | awk '{print $2}')
if [ -z "$handshake" ] || [ "$handshake" -eq 0 ]; then
echo "No WireGuard handshake recorded" >&2
return 1
fi
now=$(date +%s)
age=$((now - handshake))
if [ "$age" -gt 180 ]; then
echo "WireGuard handshake is stale (''${age}s ago)" >&2
return 1
fi
# Verify internet connectivity through VPN tunnel
if ! ping -c1 -W10 1.1.1.1 >/dev/null 2>&1; then
echo "Cannot reach internet through VPN" >&2
return 1
fi
echo "PIA VPN connectivity OK (handshake ''${age}s ago)"
return 0
}
MAX_RETRIES=4
for attempt in $(seq 1 $MAX_RETRIES); do
if check_vpn; then
rm -f "$COUNTER_FILE"
exit 0
fi
if [ "$attempt" -lt "$MAX_RETRIES" ]; then
echo "Attempt $attempt/$MAX_RETRIES failed, retrying in 5 minutes..." >&2
sleep 300
fi
done
# All retries failed: read and update the counter (reset if from a previous day)
today=$(date +%Y-%m-%d)
count=0
if [ -f "$COUNTER_FILE" ]; then
stored=$(jq -r '.date // ""' "$COUNTER_FILE")
if [ "$stored" = "$today" ]; then
count=$(jq -r '.count // 0' "$COUNTER_FILE")
fi
fi
count=$((count + 1))
jq -n --arg date "$today" --argjson count "$count" \
'{"date": $date, "count": $count}' > "$COUNTER_FILE"
if [ "$count" -le "$MAX_ALERTS" ]; then
echo "Failure $count/$MAX_ALERTS today; alerting" >&2
exit 1
else
echo "Failure $count today; suppressing alert (already sent $MAX_ALERTS)" >&2
exit 0
fi
'';
};
systemd.timers.pia-vpn-check = {
description = "Periodic PIA VPN connectivity check";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/30";
RandomizedDelaySec = "30s";
};
};
};
};
};


@@ -5,6 +5,7 @@
./service-failure.nix
./ssh-login.nix
./zfs.nix
./dimm-temp.nix
];
options.ntfy-alerts = {
@@ -19,6 +20,18 @@
default = "";
description = "Extra arguments to pass to curl (e.g. --proxy http://host:port).";
};
ignoredUnits = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "Unit names to skip failure notifications for.";
};
hostLabel = lib.mkOption {
type = lib.types.str;
default = config.networking.hostName;
description = "Label used in ntfy alert titles to identify this host/container.";
};
};
config = lib.mkIf config.thisMachine.hasRole."ntfy" {

common/ntfy/dimm-temp.nix (new file, 73 lines)

@@ -0,0 +1,73 @@
{ config, lib, pkgs, ... }:
let
cfg = config.ntfy-alerts;
hasNtfy = config.thisMachine.hasRole."ntfy";
checkScript = pkgs.writeShellScript "dimm-temp-check" ''
PATH="${lib.makeBinPath [ pkgs.lm_sensors pkgs.gawk pkgs.coreutils pkgs.curl ]}"
threshold=55
hot=""
summary=""
while IFS= read -r line; do
case "$line" in
spd5118-*)
chip="$line"
;;
*temp1_input:*)
temp="''${line##*: }"
whole="''${temp%%.*}"
summary="''${summary:+$summary, }$chip: ''${temp}°C"
if [ "$whole" -ge "$threshold" ]; then
hot="$hot"$'\n'" $chip: ''${temp}°C"
fi
;;
esac
done < <(sensors -u 'spd5118-*' 2>/dev/null)
echo "$summary"
if [ -n "$hot" ]; then
message="DIMM temperature above ''${threshold}°C on ${config.networking.hostName}:$hot"
curl \
--fail --silent --show-error \
--max-time 30 --retry 3 \
-H "Authorization: Bearer $NTFY_TOKEN" \
-H "Title: High DIMM temperature on ${config.networking.hostName}" \
-H "Priority: high" \
-H "Tags: thermometer" \
-d "$message" \
"${cfg.serverUrl}/service-failures"
echo "$message" >&2
fi
'';
in
{
options.ntfy-alerts.dimmTempCheck.enable = lib.mkEnableOption "DDR5 DIMM temperature monitoring via spd5118";
config = lib.mkIf (cfg.dimmTempCheck.enable && hasNtfy) {
systemd.services.dimm-temp-check = {
description = "Check DDR5 DIMM temperatures and alert on overheating";
wants = [ "network-online.target" ];
after = [ "network-online.target" ];
serviceConfig = {
Type = "oneshot";
EnvironmentFile = "/run/agenix/ntfy-token";
ExecStart = checkScript;
};
};
systemd.timers.dimm-temp-check = {
description = "Periodic DDR5 DIMM temperature check";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/5";
Persistent = true;
};
};
};
}


@@ -14,6 +14,14 @@ in
EnvironmentFile = "/run/agenix/ntfy-token";
ExecStart = "${pkgs.writeShellScript "ntfy-failure-notify" ''
unit="$1"
# Prevent infinite recursion if this service itself fails
[[ "$unit" == ntfy-failure@* ]] && exit 0
ignored_units=(${lib.concatMapStringsSep " " (u: lib.escapeShellArg u) cfg.ignoredUnits})
for ignored in "''${ignored_units[@]}"; do
if [[ "$unit" == "$ignored" ]]; then
exit 0
fi
done
logfile=$(mktemp)
trap 'rm -f "$logfile"' EXIT
${pkgs.systemd}/bin/journalctl -u "$unit" -n 50 --no-pager -o short > "$logfile" 2>/dev/null \
@@ -24,7 +32,7 @@ in
--max-time 30 --retry 3 \
${cfg.curlExtraArgs} \
-H "Authorization: Bearer $NTFY_TOKEN" \
-H "Title: Service failure on ${config.networking.hostName}" \
-H "Title: Service failure on ${cfg.hostLabel}" \
-H "Priority: high" \
-H "Tags: rotating_light" \
-H "Message: Unit $unit failed at $(date +%c)" \
@@ -40,7 +48,7 @@ in
mkdir -p $out/lib/systemd/system/service.d
cat > $out/lib/systemd/system/service.d/ntfy-on-failure.conf <<'EOF'
[Unit]
OnFailure=ntfy-failure@%p.service
OnFailure=ntfy-failure@%N.service
EOF
'')
];
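The change from `%p` to `%N` matters for instantiated units; a hedged illustration using a hypothetical templated unit:

```ini
# Drop-in as seen by e.g. restic-backups@home.service:
[Unit]
# %p expands to the unit prefix only ("restic-backups"), losing the
# instance; %N expands to the full unit name ("restic-backups@home").
OnFailure=ntfy-failure@%N.service
```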


@@ -86,6 +86,9 @@ in
services.gnome.gnome-keyring.enable = true;
security.pam.services.googlebot.enableGnomeKeyring = true;
# Spotify Connect discovery
networking.firewall.allowedTCPPorts = [ 57621 ];
# Mount personal SMB stores
services.mount-samba.enable = true;


@@ -9,6 +9,14 @@ in
services.displayManager.sddm.wayland.enable = true;
services.desktopManager.plasma6.enable = true;
services.displayManager.sessionPackages = [
pkgs.plasma-bigscreen
];
# Bigscreen binaries must be on PATH for autostart services, KCMs, and
# internal plasmashell launches (settings, input handler, envmanager, etc.)
environment.systemPackages = [ pkgs.plasma-bigscreen ];
# kde apps
users.users.googlebot.packages = with pkgs; [
# akonadi


@@ -8,6 +8,29 @@ in
programs.steam.enable = true;
hardware.steam-hardware.enable = true; # steam controller
# Login DE Option: Steam Gamescope (Steam Deck-like session)
programs.gamescope = {
enable = true;
};
programs.steam.gamescopeSession = {
enable = true;
args = [
"--hdr-enabled"
"--hdr-itm-enabled"
"--adaptive-sync"
];
steamArgs = [
"-steamos3"
"-gamepadui"
"-pipewire-dmabuf"
];
env = {
STEAM_ENABLE_VOLUME_HANDLER = "1";
STEAM_DISABLE_AUDIO_DEVICE_SWITCHING = "1";
};
};
environment.systemPackages = [ pkgs.gamescope-wsi ];
users.users.googlebot.packages = [
pkgs.steam
];


@@ -120,16 +120,6 @@ in
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Navidrome";
group = "services";
url = "https://navidrome.neet.cloud";
interval = "5m";
conditions = [
"[STATUS] == 200"
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Roundcube";
group = "services";
@@ -280,6 +270,16 @@ in
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Music Assistant";
group = "s0";
url = "http://s0.koi-bebop.ts.net:8095";
interval = "5m";
conditions = [
"[STATUS] == 200"
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Vikunja";
group = "s0";
@@ -330,16 +330,7 @@ in
];
alerts = [{ type = "ntfy"; }];
}
{
name = "LanguageTool";
group = "s0";
url = "https://languagetool.s0.neet.dev";
interval = "5m";
conditions = [
"[STATUS] == 200"
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Unifi";
group = "s0";


@@ -1,4 +1,4 @@
{ config, lib, ... }:
{ config, lib, allModules, ... }:
# Gitea Actions Runner inside a NixOS container.
# The container shares the host's /nix/store (read-only) and nix-daemon socket,
@@ -8,6 +8,7 @@
let
thisMachineIsARunner = config.thisMachine.hasRole."gitea-actions-runner";
hostName = config.networking.hostName;
containerName = "gitea-runner";
giteaRunnerUid = 991;
giteaRunnerGid = 989;
@@ -31,7 +32,10 @@ in
};
config = { config, lib, pkgs, ... }: {
system.stateVersion = "25.11";
imports = allModules;
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.hostLabel = "${hostName}/${containerName}";
services.gitea-actions-runner.instances.inst = {
enable = true;


@@ -17,7 +17,7 @@ in
config = lib.mkIf cfg.enable {
services.nextcloud = {
https = true;
package = pkgs.nextcloud32;
package = pkgs.nextcloud33;
hostName = nextcloudHostname;
config.dbtype = "sqlite";
config.adminuser = "jeremy";


@@ -18,6 +18,7 @@ in
auth-default-access = "deny-all";
behind-proxy = true;
enable-login = true;
attachment-expiry-duration = "48h";
};
# backups


@@ -13,6 +13,15 @@ in
services.unifi.unifiPackage = pkgs.unifi;
services.unifi.mongodbPackage = pkgs.mongodb-7_0;
# The upstream module sets KillSignal=SIGCONT so systemd doesn't interfere
# with UniFi's self-managed shutdown. But UniFi's Java process crashes during
# shutdown (Spring context already closed) leaving mongod orphaned in the
# cgroup. With the default KillMode=control-group, mongod only gets SIGCONT
# (a no-op) and runs until the 5min timeout triggers SIGKILL.
# KillMode=mixed sends SIGCONT to the main process but SIGTERM to remaining
# children, giving mongod a clean shutdown instead of SIGKILL.
systemd.services.unifi.serviceConfig.KillMode = "mixed";
networking.firewall = lib.mkIf cfg.openMinimalFirewall {
allowedUDPPorts = [
3478 # STUN

flake.lock (generated, 64 lines changed)

@@ -53,11 +53,11 @@
]
},
"locked": {
"lastModified": 1771632347,
"narHash": "sha256-kNm0YX9RUwf7GZaWQu2F71ccm4OUMz0xFkXn6mGPfps=",
"lastModified": 1773106230,
"narHash": "sha256-ob/uMOU6CyRES+/SIxnMDhDAZUQr228JdBPKkGu8m/c=",
"owner": "sadjow",
"repo": "claude-code-nix",
"rev": "ec90f84b2ea21f6d2272e00d1becbc13030d1895",
"rev": "5cbf0a4eba950cdc7d7982774a9bc189ab21cb99",
"type": "github"
},
"original": {
@@ -76,11 +76,11 @@
]
},
"locked": {
"lastModified": 1739947126,
"narHash": "sha256-JoiddH5H9up8jC/VKU8M7wDlk/bstKoJ3rHj+TkW4Zo=",
"lastModified": 1772394520,
"narHash": "sha256-9c0sHyzoVtvufkSqVNGGydsgjpKv5Zf7062LmOm4Gsc=",
"ref": "refs/heads/master",
"rev": "ea1ad60f1c6662103ef4a3705d8e15aa01219529",
"revCount": 20,
"rev": "d07483c17bf31d416de3642a2faec484ea1810ed",
"revCount": 21,
"type": "git",
"url": "https://git.neet.dev/zuckerberg/dailybot.git"
},
@@ -186,11 +186,11 @@
]
},
"locked": {
"lastModified": 1769939035,
"narHash": "sha256-Fok2AmefgVA0+eprw2NDwqKkPGEI5wvR+twiZagBvrg=",
"lastModified": 1772893680,
"narHash": "sha256-JDqZMgxUTCq85ObSaFw0HhE+lvdOre1lx9iI6vYyOEs=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "a8ca480175326551d6c4121498316261cbb5b260",
"rev": "8baab586afc9c9b57645a734c820e4ac0a604af9",
"type": "github"
},
"original": {
@@ -228,11 +228,11 @@
]
},
"locked": {
"lastModified": 1771756436,
"narHash": "sha256-Tl2I0YXdhSTufGqAaD1ySh8x+cvVsEI1mJyJg12lxhI=",
"lastModified": 1773179137,
"narHash": "sha256-EdW2bwzlfme0vbMOcStnNmKlOAA05Bp6su2O8VLGT0k=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "5bd3589390b431a63072868a90c0f24771ff4cbb",
"rev": "3f98e2bbc661ec0aaf558d8a283d6955f05f1d09",
"type": "github"
},
"original": {
@@ -250,11 +250,11 @@
"spectrum": "spectrum"
},
"locked": {
"lastModified": 1771802632,
"narHash": "sha256-UAH8YfrHRvXAMeFxUzJ4h4B1loz1K1wiNUNI8KiPqOg=",
"lastModified": 1773018425,
"narHash": "sha256-fpgZBmZpKoEXEowBK/6m8g9FcOLWQ4UxhXHqCw2CpSM=",
"owner": "astro",
"repo": "microvm.nix",
"rev": "b67e3d80df3ec35bdfd3a00ad64ee437ef4fcded",
"rev": "25ebda3c558e923720c965832dc9a04f559a055c",
"type": "github"
},
"original": {
@@ -270,11 +270,11 @@
]
},
"locked": {
"lastModified": 1771734689,
"narHash": "sha256-/phvMgr1yutyAMjKnZlxkVplzxHiz60i4rc+gKzpwhg=",
"lastModified": 1772945408,
"narHash": "sha256-PMt48sEQ8cgCeljQ9I/32uoBq/8t8y+7W/nAZhf72TQ=",
"owner": "Mic92",
"repo": "nix-index-database",
"rev": "8f590b832326ab9699444f3a48240595954a4b10",
"rev": "1c1d8ea87b047788fd7567adf531418c5da321ec",
"type": "github"
},
"original": {
@@ -285,11 +285,11 @@
},
"nixos-hardware": {
"locked": {
"lastModified": 1771423359,
"narHash": "sha256-yRKJ7gpVmXbX2ZcA8nFi6CMPkJXZGjie2unsiMzj3Ig=",
"lastModified": 1772972630,
"narHash": "sha256-mUJxsNOrBMNOUJzN0pfdVJ1r2pxeqm9gI/yIKXzVVbk=",
"owner": "NixOS",
"repo": "nixos-hardware",
"rev": "740a22363033e9f1bb6270fbfb5a9574067af15b",
"rev": "3966ce987e1a9a164205ac8259a5fe8a64528f72",
"type": "github"
},
"original": {
@@ -301,11 +301,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1771369470,
"narHash": "sha256-0NBlEBKkN3lufyvFegY4TYv5mCNHbi5OmBDrzihbBMQ=",
"lastModified": 1772963539,
"narHash": "sha256-9jVDGZnvCckTGdYT53d/EfznygLskyLQXYwJLKMPsZs=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "0182a361324364ae3f436a63005877674cf45efb",
"rev": "9dcb002ca1690658be4a04645215baea8b95f31d",
"type": "github"
},
"original": {
@@ -344,11 +344,11 @@
]
},
"locked": {
"lastModified": 1770659507,
"narHash": "sha256-RVZno9CypFN3eHxfULKN1K7mb/Cq0HkznnWqnshxpWY=",
"lastModified": 1773194666,
"narHash": "sha256-YbsbqtTB3q0JjP7/G7GO58ea49cps1+8sb95/Bt7oVs=",
"owner": "simple-nixos-mailserver",
"repo": "nixos-mailserver",
"rev": "781e833633ebc0873d251772a74e4400a73f5d78",
"rev": "489fbc4e0ef987cfdce700476abafe3269ebf3e5",
"type": "gitlab"
},
"original": {
@@ -361,11 +361,11 @@
"spectrum": {
"flake": false,
"locked": {
"lastModified": 1759482047,
"narHash": "sha256-H1wiXRQHxxPyMMlP39ce3ROKCwI5/tUn36P8x6dFiiQ=",
"lastModified": 1772189877,
"narHash": "sha256-i1p90Rgssb//aNiTDFq46ZG/fk3LmyRLChtp/9lddyA=",
"ref": "refs/heads/main",
"rev": "c5d5786d3dc938af0b279c542d1e43bce381b4b9",
"revCount": 996,
"rev": "fe39e122d898f66e89ffa17d4f4209989ccb5358",
"revCount": 1255,
"type": "git",
"url": "https://spectrum-os.org/git/spectrum"
},


@@ -115,6 +115,8 @@
];
networking.hostName = hostname;
# Query with: nixos-version --configuration-revision
system.configurationRevision = self.rev or self.dirtyRev or "unknown";
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;


@@ -18,7 +18,7 @@
boot.extraModulePackages = [ ];
# thunderbolt
services.hardware.bolt.enable = true;
services.hardware.bolt.enable = false;
# firmware
firmware.x86_64.enable = true;


@@ -22,7 +22,7 @@
boot.extraModulePackages = [ ];
# thunderbolt
services.hardware.bolt.enable = true;
services.hardware.bolt.enable = false;
# firmware
firmware.x86_64.enable = true;


@@ -79,12 +79,6 @@
# proxied web services
services.nginx.enable = true;
services.nginx.virtualHosts."navidrome.neet.cloud" = {
enableACME = true;
forceSSL = true;
locations."/".proxyPass = "http://s0.koi-bebop.ts.net:4533";
};
# TODO replace with a proper file hosting service
services.nginx.virtualHosts."tmp.neet.dev" = {
enableACME = true;

View File

@@ -42,5 +42,6 @@
}
];
networking.usePredictableInterfaceNames = true;
networking.interfaces.eth0.useDHCP = true;
}

View File

@@ -115,15 +115,6 @@
statusCheck = false;
id = "5_1956_bazarr";
};
-navidrome = {
-title = "Navidrome";
-description = "Play Music";
-icon = "hl-navidrome";
-url = "https://music.s0.neet.dev";
-target = "sametab";
-statusCheck = false;
-id = "6_1956_navidrome";
-};
transmission = {
title = "Transmission";
description = "Torrenting";
@@ -142,7 +133,6 @@
mediaItems.lidarr
mediaItems.prowlarr
mediaItems.bazarr
-mediaItems.navidrome
mediaItems.transmission
];
in
@@ -411,6 +401,15 @@
statusCheck = false;
id = "5_4201_sandman";
};
music-assistant = {
title = "Music Assistant";
description = "s0:8095";
icon = "hl-music-assistant";
url = "http://s0.koi-bebop.ts.net:8095";
target = "sametab";
statusCheck = false;
id = "6_4201_music-assistant";
};
};
haList = [
haItems.home-assistant
@@ -419,6 +418,7 @@
haItems.frigate
haItems.valetudo
haItems.sandman
haItems.music-assistant
];
in
{
@@ -484,15 +484,6 @@
statusCheck = false;
id = "4_5301_outline";
};
-languagetool = {
-title = "LanguageTool";
-description = "languagetool.s0.neet.dev";
-icon = "hl-languagetool";
-url = "https://languagetool.s0.neet.dev";
-target = "sametab";
-statusCheck = false;
-id = "5_5301_languagetool";
-};
};
prodList = [
prodItems.vikunja
@@ -500,7 +491,6 @@
prodItems.linkwarden
prodItems.memos
prodItems.outline
-prodItems.languagetool
];
in
{

View File

@@ -9,6 +9,9 @@
networking.hostName = "s0";
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.dimmTempCheck.enable = true;
# system.autoUpgrade.enable = true;
nix.gc.automatic = lib.mkForce false; # allow the nix store to serve as a build cache
@@ -41,16 +44,6 @@
# samba
services.samba.enable = true;
-# navidrome
-services.navidrome = {
-enable = true;
-settings = {
-Address = "0.0.0.0";
-Port = 4533;
-MusicFolder = "/data/samba/Public/Media/Music";
-};
-};
# allow access to transmisson data
users.users.googlebot.extraGroups = [ "transmission" ];
users.groups.transmission.gid = config.ids.gids.transmission;
@@ -150,30 +143,6 @@
services.lidarr.enable = true;
services.lidarr.user = "public_data";
services.lidarr.group = "public_data";
-services.recyclarr = {
-enable = true;
-configuration = {
-radarr.radarr_main = {
-api_key = {
-_secret = "/run/credentials/recyclarr.service/radarr-api-key";
-};
-base_url = "http://localhost:7878";
-quality_definition.type = "movie";
-};
-sonarr.sonarr_main = {
-api_key = {
-_secret = "/run/credentials/recyclarr.service/sonarr-api-key";
-};
-base_url = "http://localhost:8989";
-quality_definition.type = "series";
-};
-};
-};
-systemd.services.recyclarr.serviceConfig.LoadCredential = [
-"radarr-api-key:/run/agenix/radarr-api-key"
-"sonarr-api-key:/run/agenix/sonarr-api-key"
-];
users.groups.public_data.gid = 994;
users.users.public_data = {
@@ -184,8 +153,6 @@
};
};
};
-age.secrets.radarr-api-key.file = ../../../secrets/radarr-api-key.age;
-age.secrets.sonarr-api-key.file = ../../../secrets/sonarr-api-key.age;
# jellyfin
# jellyfin cannot run in the vpn container and use hardware encoding
@@ -234,7 +201,6 @@
(mkVirtualHost "prowlarr.s0.neet.dev" "http://servarr.containers:9696")
(mkVirtualHost "transmission.s0.neet.dev" "http://transmission.containers:8080")
(mkVirtualHost "unifi.s0.neet.dev" "https://localhost:8443")
(mkVirtualHost "music.s0.neet.dev" "http://localhost:4533")
(mkVirtualHost "jellyfin.s0.neet.dev" "http://localhost:8096")
(mkStaticHost "s0.neet.dev" config.services.dashy.finalDrv)
{
@@ -262,7 +228,6 @@
(mkVirtualHost "linkwarden.s0.neet.dev" "http://localhost:${toString config.services.linkwarden.port}")
(mkVirtualHost "memos.s0.neet.dev" "http://localhost:${toString config.services.memos.settings.MEMOS_PORT}")
(mkVirtualHost "outline.s0.neet.dev" "http://localhost:${toString config.services.outline.port}")
(mkVirtualHost "languagetool.s0.neet.dev" "http://localhost:${toString config.services.languagetool.port}")
];
tailscaleAuth = {
@@ -275,7 +240,6 @@
"prowlarr.s0.neet.dev"
"transmission.s0.neet.dev"
"unifi.s0.neet.dev"
# "music.s0.neet.dev" # messes up navidrome
"jellyfin.s0.neet.dev"
"s0.neet.dev"
# "ha.s0.neet.dev" # messes up home assistant
@@ -287,7 +251,6 @@
"linkwarden.s0.neet.dev"
# "memos.s0.neet.dev" # messes up memos /auth route
# "outline.s0.neet.dev" # messes up outline /auth route
"languagetool.s0.neet.dev"
];
expectedTailnet = "koi-bebop.ts.net";
};
@@ -353,6 +316,8 @@
enable = true;
settings.MEMOS_PORT = "57643";
};
# ReadWritePaths doesn't work with ProtectSystem=strict on ZFS submounts (/var/lib is a separate dataset)
systemd.services.memos.serviceConfig.ProtectSystem = lib.mkForce "full";
services.outline = {
enable = true;
@@ -375,10 +340,5 @@
owner = config.services.outline.user;
};
-services.languagetool = {
-enable = true;
-port = 60613;
-};
boot.binfmt.emulatedSystems = [ "aarch64-linux" "armv7l-linux" ];
}

View File

@@ -1,4 +1,4 @@
-{ lib, pkgs, modulesPath, ... }:
+{ modulesPath, ... }:
{
imports =
@@ -60,16 +60,55 @@
### networking ###
-# systemd.network.enable = true;
+systemd.network.enable = true;
networking = {
-# useNetworkd = true;
-dhcpcd.enable = true;
-interfaces."eth0".useDHCP = true;
-interfaces."eth1".useDHCP = true;
+useNetworkd = true;
+useDHCP = false;
+dhcpcd.enable = false;
};
-defaultGateway = {
-address = "192.168.1.1";
# eno1 — native VLAN 5 (main), default route, internet
# useDHCP generates the base 40-eno1 networkd unit and drives initrd DHCP for LUKS unlock.
networking.interfaces."eno1".useDHCP = true;
systemd.network.networks."40-eno1" = {
dhcpV4Config.RouteMetric = 100; # prefer eno1 over VLAN interfaces for default route
linkConfig.RequiredForOnline = "routable"; # wait-online succeeds once eno1 has a route
};
# eno2 — trunk port (no IP on the raw interface)
systemd.network.networks."40-eno2" = {
matchConfig.Name = "eno2";
networkConfig = {
VLAN = [ "vlan-iot" "vlan-mgmt" ];
LinkLocalAddressing = "no";
};
linkConfig.RequiredForOnline = "carrier";
};
# VLAN 2 — IoT (cameras, smart home)
systemd.network.netdevs."50-vlan-iot".netdevConfig = { Name = "vlan-iot"; Kind = "vlan"; };
systemd.network.netdevs."50-vlan-iot".vlanConfig.Id = 2;
systemd.network.networks."50-vlan-iot" = {
matchConfig.Name = "vlan-iot";
networkConfig.DHCP = "yes";
dhcpV4Config = {
UseGateway = false;
RouteMetric = 200;
};
linkConfig.RequiredForOnline = "no";
};
# VLAN 4 — Management
systemd.network.netdevs."50-vlan-mgmt".netdevConfig = { Name = "vlan-mgmt"; Kind = "vlan"; };
systemd.network.netdevs."50-vlan-mgmt".vlanConfig.Id = 4;
systemd.network.networks."50-vlan-mgmt" = {
matchConfig.Name = "vlan-mgmt";
networkConfig.DHCP = "yes";
dhcpV4Config = {
UseGateway = false;
RouteMetric = 300;
};
linkConfig.RequiredForOnline = "no";
};
powerManagement.cpuFreqGovernor = "schedutil";
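
The netdev/network pairing above generalizes: each tagged VLAN needs a `50-*` netdev declaring the VLAN id, a matching `50-*` network with its own DHCP and route-metric policy, and an entry in the trunk's `VLAN` list. As an illustration only, a third VLAN would follow the same shape (the name `vlan-guest` and id 7 are hypothetical, not part of this config):

```nix
{
  # Hypothetical example: VLAN 7 ("vlan-guest") following the vlan-iot/vlan-mgmt pattern.
  systemd.network.netdevs."50-vlan-guest".netdevConfig = { Name = "vlan-guest"; Kind = "vlan"; };
  systemd.network.netdevs."50-vlan-guest".vlanConfig.Id = 7;
  systemd.network.networks."50-vlan-guest" = {
    matchConfig.Name = "vlan-guest";
    networkConfig.DHCP = "yes";
    dhcpV4Config = {
      UseGateway = false; # keep the default route on eno1
      RouteMetric = 400;  # lower priority than the existing VLANs
    };
    linkConfig.RequiredForOnline = "no";
  };
  # The trunk's VLAN list on eno2 would also need "vlan-guest" appended.
}
```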

View File

@@ -24,6 +24,10 @@
# Music assistant (must be exposed so local devices can fetch the audio stream from it)
8095
8097
# Music assistant: Spotify Connect zeroconf discovery (one per librespot instance)
44200
44201
];
services.zigbee2mqtt = {

View File

@@ -5,10 +5,6 @@
./hardware-configuration.nix
];
-# Login DE Option: Steam
-programs.steam.gamescopeSession.enable = true;
-# programs.gamescope.capSysNice = true;
# Login DE Option: Kodi
services.xserver.desktopManager.kodi.enable = true;
services.xserver.desktopManager.kodi.package =
@@ -35,7 +31,7 @@
"L+ /opt/rocm/hip - - - - ${pkgs.rocmPackages.clr}"
];
services.displayManager.defaultSession = "plasma";
services.displayManager.defaultSession = "plasma-bigscreen-wayland";
users.users.cris = {
isNormalUser = true;
@@ -54,10 +50,10 @@
uid = 1002;
};
-# Auto login into Plasma in john zoidberg account
+# Auto login into Plasma Bigscreen in john zoidberg account
services.displayManager.sddm.settings = {
Autologin = {
Session = "plasma";
Session = "plasma-bigscreen-wayland";
User = "john";
};
};

View File

@@ -4,4 +4,39 @@ final: prev:
let
system = prev.system;
in
-{ }
+{
# Disable CephFS support in samba to work around upstream nixpkgs bug:
# ceph is pinned to python3.11 which is incompatible with sphinx >= 9.1.0.
# https://github.com/NixOS/nixpkgs/issues/442652
samba4Full = prev.samba4Full.override { enableCephFS = false; };
# Fix incus-lts doc build: `incus manpage` tries to create
# ~/.config/incus, but HOME is /homeless-shelter in the nix sandbox.
incus-lts = prev.incus-lts.overrideAttrs (old: {
nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ prev.writableTmpDirAsHomeHook ];
});
# Retry on push failure to work around hyper connection pool race condition.
# https://github.com/zhaofengli/attic/pull/246
attic-client = prev.attic-client.overrideAttrs (old: {
patches = (old.patches or [ ]) ++ [
../patches/attic-client-push-retry.patch
];
});
# Add --zeroconf-port support to Spotify Connect plugin so librespot
# binds to a fixed port that can be opened in the firewall.
music-assistant = prev.music-assistant.overrideAttrs (old: {
patches = (old.patches or [ ]) ++ [
../patches/music-assistant-zeroconf-port.patch
];
});
# Plasma Bigscreen: TV-optimized KDE shell (not yet packaged in nixpkgs)
plasma-bigscreen = import ./plasma-bigscreen.nix {
inherit (prev.kdePackages)
mkKdeDerivation plasma-workspace plasma-wayland-protocols
qtmultimedia qtwayland qtwebengine qcoro;
inherit (prev) lib fetchFromGitLab pkg-config sdl3 libcec wayland;
};
}

View File

@@ -0,0 +1,79 @@
{
mkKdeDerivation,
lib,
fetchFromGitLab,
pkg-config,
plasma-workspace,
qtmultimedia,
qtwayland,
qtwebengine,
qcoro,
plasma-wayland-protocols,
wayland,
sdl3,
libcec,
}:
mkKdeDerivation {
pname = "plasma-bigscreen";
version = "unstable-2026-03-07";
src = fetchFromGitLab {
domain = "invent.kde.org";
owner = "plasma";
repo = "plasma-bigscreen";
rev = "bd143fea7e386bac1652b8150a3ed3d5ef7cf93c";
hash = "sha256-y439IX7e0+XqxqFj/4+P5le0hA7DiwA+smDsD0UH/fI=";
};
patches = [
../patches/plasma-bigscreen-input-handler-app-id.patch
];
extraNativeBuildInputs = [ pkg-config ];
extraBuildInputs = [
qtmultimedia
qtwayland
qtwebengine
qcoro
plasma-wayland-protocols
wayland
sdl3
libcec
];
# Match project version to installed Plasma release so cmake version checks pass
postPatch = ''
substituteInPlace CMakeLists.txt \
--replace-fail 'set(PROJECT_VERSION "6.5.80")' \
'set(PROJECT_VERSION "${plasma-workspace.version}")'
# Upstream references a nonexistent startplasma-waylandsession binary.
# Fix this in the cmake template (before @KDE_INSTALL_FULL_LIBEXECDIR@ is substituted).
substituteInPlace bin/plasma-bigscreen-wayland.in \
--replace-fail \
'startplasma-wayland --xwayland --libinput --exit-with-session=@KDE_INSTALL_FULL_LIBEXECDIR@/startplasma-waylandsession' \
'startplasma-wayland'
'';
# FIXME: work around Qt 6.10 cmake API changes
cmakeFlags = [ "-DQT_FIND_PRIVATE_MODULES=1" ];
# QML lint fails on missing runtime-only imports (org.kde.private.biglauncher)
# that are only available inside a running Plasma session
dontQmlLint = true;
postFixup = ''
# Session .desktop references $out/libexec/plasma-dbus-run-session-if-needed
# but the binary lives in plasma-workspace
substituteInPlace "$out/share/wayland-sessions/plasma-bigscreen-wayland.desktop" \
--replace-fail \
"$out/libexec/plasma-dbus-run-session-if-needed" \
"${plasma-workspace}/libexec/plasma-dbus-run-session-if-needed"
'';
passthru.providedSessions = [ "plasma-bigscreen-wayland" ];
meta.license = with lib.licenses; [ gpl2Plus ];
}
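
The `passthru.providedSessions` attribute above is what lets a NixOS display manager discover the session file. Wiring it in would look roughly like the following sketch (the `pkgs.plasma-bigscreen` name assumes the overlay earlier in this commit, and the exact option path may vary by nixpkgs version):

```nix
{ pkgs, ... }:
{
  # Hypothetical wiring: expose the session so "plasma-bigscreen-wayland"
  # becomes selectable in SDDM and usable for autologin.
  services.displayManager.sessionPackages = [ pkgs.plasma-bigscreen ];
  services.displayManager.defaultSession = "plasma-bigscreen-wayland";
}
```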

View File

@@ -0,0 +1,143 @@
diff --git a/attic/src/api/v1/upload_path.rs b/attic/src/api/v1/upload_path.rs
index 5b1231e5..cb90928c 100644
--- a/attic/src/api/v1/upload_path.rs
+++ b/attic/src/api/v1/upload_path.rs
@@ -25,7 +25,7 @@ pub const ATTIC_NAR_INFO_PREAMBLE_SIZE: &str = "X-Attic-Nar-Info-Preamble-Size";
/// Regardless of client compression, the server will always decompress
/// the NAR to validate the NAR hash before applying the server-configured
/// compression again.
-#[derive(Debug, Serialize, Deserialize)]
+#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct UploadPathNarInfo {
/// The name of the binary cache to upload to.
pub cache: CacheName,
diff --git a/client/src/push.rs b/client/src/push.rs
index 309bd4b6..f3951d2b 100644
--- a/client/src/push.rs
+++ b/client/src/push.rs
@@ -560,57 +560,83 @@ pub async fn upload_path(
);
let bar = mp.add(ProgressBar::new(path_info.nar_size));
bar.set_style(style);
- let nar_stream = NarStreamProgress::new(store.nar_from_path(path.to_owned()), bar.clone())
- .map_ok(Bytes::from);
- let start = Instant::now();
- match api
- .upload_path(upload_info, nar_stream, force_preamble)
- .await
- {
- Ok(r) => {
- let r = r.unwrap_or(UploadPathResult {
- kind: UploadPathResultKind::Uploaded,
- file_size: None,
- frac_deduplicated: None,
- });
-
- let info_string: String = match r.kind {
- UploadPathResultKind::Deduplicated => "deduplicated".to_string(),
- _ => {
- let elapsed = start.elapsed();
- let seconds = elapsed.as_secs_f64();
- let speed = (path_info.nar_size as f64 / seconds) as u64;
+ // Create a new stream for each retry attempt
+ let bar_ref = &bar;
+ let nar_stream = move || {
+ NarStreamProgress::new(store.nar_from_path(path.to_owned()), bar_ref.clone())
+ .map_ok(Bytes::from)
+ };
- let mut s = format!("{}/s", HumanBytes(speed));
+ let start = Instant::now();
+ let mut retries = 0;
+ const MAX_RETRIES: u32 = 3;
+ const RETRY_DELAY: Duration = Duration::from_millis(250);
- if let Some(frac_deduplicated) = r.frac_deduplicated {
- if frac_deduplicated > 0.01f64 {
- s += &format!(", {:.1}% deduplicated", frac_deduplicated * 100.0);
+ loop {
+ let result = api
+ .upload_path(upload_info.clone(), nar_stream(), force_preamble)
+ .await;
+ match result {
+ Ok(r) => {
+ let r = r.unwrap_or(UploadPathResult {
+ kind: UploadPathResultKind::Uploaded,
+ file_size: None,
+ frac_deduplicated: None,
+ });
+
+ let info_string: String = match r.kind {
+ UploadPathResultKind::Deduplicated => "deduplicated".to_string(),
+ _ => {
+ let elapsed = start.elapsed();
+ let seconds = elapsed.as_secs_f64();
+ let speed = (path_info.nar_size as f64 / seconds) as u64;
+
+ let mut s = format!("{}/s", HumanBytes(speed));
+
+ if let Some(frac_deduplicated) = r.frac_deduplicated {
+ if frac_deduplicated > 0.01f64 {
+ s += &format!(", {:.1}% deduplicated", frac_deduplicated * 100.0);
+ }
}
+
+ s
}
+ };
- s
+ mp.suspend(|| {
+ eprintln!(
+ "✅ {} ({})",
+ path.as_os_str().to_string_lossy(),
+ info_string
+ );
+ });
+ bar.finish_and_clear();
+
+ return Ok(());
+ }
+ Err(e) => {
+ if retries < MAX_RETRIES {
+ retries += 1;
+ mp.suspend(|| {
+ eprintln!(
+ "❕ {}: Upload failed, retrying ({}/{})...",
+ path.as_os_str().to_string_lossy(),
+ retries,
+ MAX_RETRIES
+ );
+ });
+ tokio::time::sleep(RETRY_DELAY).await;
+ continue;
}
- };
- mp.suspend(|| {
- eprintln!(
- "✅ {} ({})",
- path.as_os_str().to_string_lossy(),
- info_string
- );
- });
- bar.finish_and_clear();
+ mp.suspend(|| {
+ eprintln!("❌ {}: {}", path.as_os_str().to_string_lossy(), e);
+ });
+ bar.finish_and_clear();
- Ok(())
- }
- Err(e) => {
- mp.suspend(|| {
- eprintln!("❌ {}: {}", path.as_os_str().to_string_lossy(), e);
- });
- bar.finish_and_clear();
- Err(e)
+ return Err(e);
+ }
}
}
}

View File

@@ -0,0 +1,40 @@
diff --git a/music_assistant/providers/spotify_connect/__init__.py b/music_assistant/providers/spotify_connect/__init__.py
index 1111111..2222222 100644
--- a/music_assistant/providers/spotify_connect/__init__.py
+++ b/music_assistant/providers/spotify_connect/__init__.py
@@ -51,6 +51,7 @@ CONNECT_ITEM_ID = "spotify_connect"
CONF_PUBLISH_NAME = "publish_name"
CONF_ALLOW_PLAYER_SWITCH = "allow_player_switch"
+CONF_ZEROCONF_PORT = "zeroconf_port"
# Special value for auto player selection
PLAYER_ID_AUTO = "__auto__"
@@ -117,6 +118,15 @@ async def get_config_entries(
description="How should this Spotify Connect device be named in the Spotify app?",
default_value="Music Assistant",
),
+ ConfigEntry(
+ key=CONF_ZEROCONF_PORT,
+ type=ConfigEntryType.INTEGER,
+ label="Zeroconf port",
+ description="Fixed TCP port for Spotify Connect discovery (zeroconf). "
+ "Set to a specific port and open it in your firewall to allow "
+ "devices on the network to discover this player. 0 = random port.",
+ default_value=0,
+ ),
# ConfigEntry(
# key=CONF_HANDOFF_MODE,
# type=ConfigEntryType.BOOLEAN,
@@ -677,6 +687,11 @@ class SpotifyConnectProvider(PluginProvider):
"--onevent",
str(EVENTS_SCRIPT),
"--emit-sink-events",
+ *(
+ ["--zeroconf-port", str(zeroconf_port)]
+ if (zeroconf_port := int(self.config.get_value(CONF_ZEROCONF_PORT) or 0)) > 0
+ else []
+ ),
]
self._librespot_proc = librespot = AsyncProcess(
args, stdout=False, stderr=True, name=f"librespot[{self.name}]", env=env

View File

@@ -0,0 +1,19 @@
Use the correct app_id when pre-authorizing remote-desktop portal access.
The portal's isAppMegaAuthorized() looks up the caller's specific app_id in
the PermissionStore. An empty string only matches apps the portal cannot
identify; it is not a wildcard. Since the input handler is launched via
KIO::CommandLauncherJob with a desktopName, the portal resolves it to the
desktop file ID, so the empty-string entry never matches.
--- a/inputhandler/xdgremotedesktopsystem.cpp
+++ b/inputhandler/xdgremotedesktopsystem.cpp
@@ -66,7 +67,7 @@
QDBusReply<void> reply = permissionStore.call(QStringLiteral("SetPermission"),
QStringLiteral("kde-authorized"), // table
true, // create table if not exists
QStringLiteral("remote-desktop"), // id
- QLatin1String(""), // app (empty for host applications)
+ QStringLiteral("org.kde.plasma.bigscreen.inputhandler"),
QStringList{QStringLiteral("yes")}); // permissions

View File

@@ -1,11 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 hPp1nw gfVRDt7ReEnz10WvPa8UfBBnsRsiw7sxxXQMuXRnCVs
slBNX9Yc1qSu1P5ioNDNLPd97NGE/LWPS/A+u9QGo4E
-> ssh-ed25519 ZDy34A e5MSY5qDP6WuEgbiK0p5esMQJBb3ScVpb15Ff8sTQgQ
9nsimoUQncnbfiu13AnFWZXcpaiySUYdS1eH5O/3Fgg
-> ssh-ed25519 w3nu8g op1KSUhJgM6w/nlaUssQDiraQpVzgnWd//JMu2vFgms
KvEaJfsB7Qkf+PnzFJdZ3wAxm2qj23IS8RRxyuGN2G4
-> ssh-ed25519 evqvfg 9L6pFuqkcChZq/W4zkATXm1Y76SEK+S4SyaiSlJd+C4
j/UWJvo4Cr/UDfaN2milpJ6rU0w1EWdTAzV3SlrCcW8
--- bdG4zC5dx6cSPetH3DNeHEk6EYCJ5TXGrn8OhUMknNU
/¶ø+ÏpñR[¤àJ-*@ÌÿŸx0Ú©ò-ä.*&T·™~-i 2€eƒ¡`@ëQ8š<l™à QK0AÕ§

View File

@@ -63,8 +63,4 @@ with roles;
# zigbee2mqtt secrets
"zigbee2mqtt.yaml.age".publicKeys = zigbee;
-# Sonarr and Radarr secrets
-"radarr-api-key.age".publicKeys = media-server;
-"sonarr-api-key.age".publicKeys = media-server;
}

Binary file not shown.