24 Commits

Author SHA1 Message Date
6802dee96f try out hyprland
All checks were successful
Check Flake / check-flake (push) Successful in 4m34s
2026-03-05 23:29:40 -08:00
1e7aa17d3d Log DIMM temperatures on each check run
All checks were successful
Check Flake / check-flake (push) Successful in 3m58s
2026-03-05 21:31:29 -08:00
77415c30fa Fix VPN check alert limiting to only count failures
StartLimitBurst counts all starts (including successes), so the timer
was getting blocked after ~15 min. Replace with a JSON counter file
that resets on success and daily, only triggering OnFailure alerts for
the first 3 failures per day.
2026-03-05 21:28:39 -08:00
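The daily-counter approach this commit describes can be sketched roughly as follows. This is a simplified sketch, not the repo's actual script: state is a plain-text `date count` line rather than the JSON/jq file the commit uses, and `COUNTER_FILE`/`MAX_ALERTS` are illustrative names.

```shell
#!/bin/sh
# Sketch of "alert only on the first N failures per day".
# State is one "YYYY-MM-DD count" line (the real commit stores JSON via jq).
COUNTER_FILE="${COUNTER_FILE:-/tmp/check-fail-count}"
MAX_ALERTS=3

# Returns 0 (send alert) for the first MAX_ALERTS failures today,
# nonzero (suppress) afterwards. The count resets when the date changes.
record_failure() {
  today=$(date +%Y-%m-%d)
  count=0
  if [ -f "$COUNTER_FILE" ]; then
    read -r stored_date stored_count < "$COUNTER_FILE"
    [ "$stored_date" = "$today" ] && count=$stored_count
  fi
  count=$((count + 1))
  printf '%s %s\n' "$today" "$count" > "$COUNTER_FILE"
  [ "$count" -le "$MAX_ALERTS" ]
}

# A successful check clears the counter entirely.
record_success() {
  rm -f "$COUNTER_FILE"
}
```

Unlike `StartLimitBurst`, nothing here counts successful starts, so a healthy 5-minute timer never trips the limit.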
e3f78b460c Remove recyclarr, I'm not using it currently 2026-03-05 21:27:35 -08:00
576ee47246 Add periodic PIA VPN connectivity check
All checks were successful
Check Flake / check-flake (push) Successful in 4m38s
Oneshot service + timer (every 5 min) inside the VPN container that
verifies WireGuard handshake freshness and internet reachability.
Fails on VPN or internet outage, triggering ntfy alert via OnFailure.
Capped at 3 failures per day via StartLimitBurst.
2026-03-04 21:45:07 -08:00
335abe4e65 Add DDR5 DIMM temperature monitoring with ntfy alerts
Monitors spd5118 sensors every 5 minutes and sends an ntfy
notification if any DIMM exceeds 55°C. Opt-in via
ntfy-alerts.dimmTempCheck.enable, enabled on s0.
2026-03-04 21:24:40 -08:00
6267def09b Add Music Assistant to Dashy and Gatus 2026-03-04 21:23:16 -08:00
5342c920a8 Update README 2026-03-04 20:53:46 -08:00
6beaa008e1 Remove LanguageTool service 2026-03-04 20:45:32 -08:00
88cfad2a69 Update flake inputs (nixpkgs, home-manager, claude-code-nix)
All checks were successful
Check Flake / check-flake (push) Successful in 2m12s
Auto Update Flake / auto-update (push) Successful in 7m5s
Remove obsolete libreoffice-noto-fonts-subset.patch — upstream nixpkgs
removed the noto-fonts-subset code from the libreoffice derivation.
2026-03-03 22:54:45 -08:00
86a9f777ad Use the hosts overlays in gitea container (for attic patches)
All checks were successful
Check Flake / check-flake (push) Successful in 3m42s
2026-03-03 22:54:14 -08:00
b29e80f3e9 Patch attic-client to retry on push failure
Some checks failed
Check Flake / check-flake (push) Failing after 4m5s
Backport zhaofengli/attic#246 to work around a hyper connection pool
race condition that causes spurious "connection closed before message
completed" errors during cache uploads in CI.
2026-03-03 22:40:27 -08:00
e32834ff7f Prevent ntfy-failure from calling itself
Some checks failed
Check Flake / check-flake (push) Failing after 4m13s
2026-03-03 22:36:58 -08:00
bb39587292 Fix unifi service taking 5+ minutes to shut down
Some checks failed
Check Flake / check-flake (push) Failing after 4m8s
UniFi's Java process crashes during shutdown (Spring context race
condition) leaving mongod orphaned in the cgroup. The upstream module
sets KillSignal=SIGCONT so systemd won't interrupt the graceful
shutdown, but with the default KillMode=control-group this means
mongod also only gets SIGCONT (a no-op) and sits there until the
5-minute timeout triggers SIGKILL.

Switch to KillMode=mixed so the main Java process still gets the
harmless SIGCONT while mongod gets a proper SIGTERM for a clean
database shutdown.
2026-03-03 22:02:21 -08:00
712b52a48d Capture full systemd unit name for ntfy error alerts 2026-03-03 21:46:45 -08:00
c6eeea982e Add ignoredUnits option; skip logrotate failures on s0 because they are spurious 2026-03-03 21:46:19 -08:00
6bd1b4466e Update claude.md 2026-03-03 21:43:36 -08:00
d806d4df0a Increase tinyproxy wait-online timeout to 180s
Some checks failed
Check Flake / check-flake (push) Failing after 5m29s
The bridge takes ~62s to come up on s0, exceeding the 60s timeout
and causing tinyproxy to fail on first start.
2026-03-03 21:04:40 -08:00
8997e996ba See if limiting upload jobs helps with push reliability
Some checks failed
Check Flake / check-flake (push) Successful in 14m14s
Auto Update Flake / auto-update (push) Failing after 19s
2026-03-01 21:36:31 -08:00
9914d03ba2 Embed flake git revision in NixOS configuration
Some checks failed
Check Flake / check-flake (push) Has been cancelled
2026-03-01 19:03:47 -08:00
55204b5074 Upgrade to nextcloud 33
Some checks failed
Check Flake / check-flake (push) Has been cancelled
2026-03-01 18:23:55 -08:00
43ec75741d Fix memos failing to open SQLite database on ZFS
Some checks failed
Check Flake / check-flake (push) Failing after 18s
ProtectSystem=strict with ReadWritePaths fails silently on ZFS submounts
(/var/lib is a separate dataset), leaving the data dir read-only. Downgrade
to ProtectSystem=full which leaves /var writable while still protecting
/usr and /boot.
2026-03-01 17:54:11 -08:00
000bbd7f4d Update interface names because usePredictableInterfaceNames is now off 2026-03-01 17:52:42 -08:00
e4f0d065f9 Fix tinyproxy starting before VPN bridge is configured
tinyproxy binds to the bridge IP but had no ordering dependency on
systemd-networkd, so it could start before the bridge existed.
2026-03-01 17:52:35 -08:00
28 changed files with 743 additions and 133 deletions


@@ -26,4 +26,4 @@ paths=$(echo "$toplevels" \
and .value.narSize >= 524288
) | .key] | unique[]')
echo "Pushing $(echo "$paths" | wc -l) unique paths to cache"
echo "$paths" | xargs attic push local:nixos
echo "$paths" | xargs attic push -j 1 local:nixos


@@ -85,17 +85,3 @@ When adding or removing a web-facing service, update both:
- Always use `--no-link` when running `nix build`
- Don't use `nix build --dry-run` unless you only need evaluation — it skips the actual build
- Avoid `2>&1` on nix commands — it can cause error output to be missed
## Git Worktrees
When the user asks you to "start a worktree" or work in a worktree, **do not create one manually** with `git worktree add`. Instead, tell the user to start a new session with:
```bash
claude --worktree <name>
```
This is the built-in Claude Code worktree workflow. It creates the worktree at `.claude/worktrees/<name>/` with a branch `worktree-<name>` and starts a new Claude session inside it. Cleanup is handled automatically on exit.
When instructed to work in a git worktree (e.g., via `isolation: "worktree"` on a subagent), you **MUST** do so. If you are unable to create or use a git worktree, you **MUST** stop work immediately and report the failure to the user. Do not fall back to working in the main working tree.
When applying work from a git worktree back to the main branch, commit in the worktree first, then use `git cherry-pick` from the main working tree to bring the commit over. Do not use `git checkout` or `git apply` to copy files directly. Do **not** automatically apply worktree work to the main branch — always ask the user for approval first.
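The commit-then-cherry-pick flow above can be demonstrated end to end in a throwaway repo. The paths and names here (`.wt`, `worktree-demo`, `demo@example.com`) are hypothetical stand-ins; the real worktrees live under `.claude/worktrees/<name>` on a `worktree-<name>` branch.

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"

# Work happens inside the worktree and is committed there first...
git -C "$repo" worktree add "$repo/.wt" -b worktree-demo
echo "change" > "$repo/.wt/file.txt"
git -C "$repo/.wt" add file.txt
git -C "$repo/.wt" -c user.email=demo@example.com -c user.name=demo \
  commit -q -m "work done in worktree"

# ...then, with user approval, the commit is cherry-picked from the
# main working tree (never copied over with `git checkout`/`git apply`).
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
  cherry-pick "$(git -C "$repo/.wt" rev-parse HEAD)"
```

Cherry-picking preserves the commit message and authorship while keeping the main tree's history linear, which is why it is preferred over copying files directly.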


@@ -1,16 +1,26 @@
# NixOS Configuration
A NixOS flake managing multiple machines with role-based configuration, agenix secrets, and sandboxed dev workspaces.
A NixOS flake managing multiple machines with role-based configuration, agenix secrets, sandboxed dev workspaces, and self-hosted services.
## Layout
- `/common` - shared configuration imported by all machines
- `/boot` - bootloaders, CPU microcode, remote LUKS unlock over Tor
- `/network` - Tailscale, VPN tunneling via PIA
- `/network` - Tailscale, PIA VPN with leak-proof containers, sandbox networking
- `/pc` - desktop/graphical config (enabled by the `personal` role)
- `/server` - service definitions and extensions
- `/server` - self-hosted service definitions (Gitea, Matrix, Nextcloud, media stack, etc.)
- `/sandboxed-workspace` - isolated dev environments (VM, container, or Incus)
- `/ntfy` - push notification integration (service failures, SSH logins, ZFS alerts)
- `binary-cache.nix` - nix binary cache configuration (nixos.org, cachix, self-hosted atticd)
- `nix-builder.nix` - distributed build delegation across machines
- `backups.nix` - snapshot-aware restic backups to Backblaze B2
- `/machines` - per-machine config (`default.nix`, `hardware-configuration.nix`, `properties.nix`)
- `fry` - personal desktop
- `howl` - personal laptop
- `ponyo` - web/mail server (Gitea, Nextcloud, LibreChat, mail)
- `storage/s0` - storage/media server (Jellyfin, Home Assistant, monitoring, productivity apps)
- `zoidberg` - media center
- `ephemeral` - minimal config for building install ISOs and kexec images
- `/secrets` - agenix-encrypted secrets, decryptable by machines based on their roles
- `/home` - Home Manager user config
- `/lib` - custom library functions extending nixpkgs lib
@@ -25,8 +35,14 @@ A NixOS flake managing multiple machines with role-based configuration, agenix s
**Remote LUKS unlock over Tor** — Machines with encrypted root disks can be unlocked remotely via SSH. An embedded Tor hidden service starts in the initrd so the machine is reachable even without a known IP, using a separate SSH host key for the boot environment.
**VPN containers** — A `vpn-container` module spins up an ephemeral NixOS container with a PIA WireGuard tunnel. The host creates the WireGuard interface and authenticates with PIA, then hands it off to the container's network namespace. This ensures that the container can **never** have direct internet access. Leakage is impossible.
**VPN containers** — A `pia-vpn` module provides leak-proof VPN networking for containers. The host creates a WireGuard interface and runs tinyproxy on a bridge network for PIA API bootstrap. A dedicated VPN container authenticates with PIA via the proxy, configures WireGuard, and masquerades bridge traffic through the tunnel. Service containers default-route exclusively through the VPN container — leakage is impossible by network topology. Supports port forwarding with automatic port assignment.
**Sandboxed workspaces** — Isolated dev environments backed by microVMs (cloud-hypervisor), systemd-nspawn containers, or Incus. Each workspace gets a static IP on a NAT'd bridge, auto-generated SSH host keys, shell aliases for management, and comes pre-configured with Claude Code. The sandbox network blocks access to the local LAN while allowing internet.
**Sandboxed workspaces** — Isolated dev environments backed by microVMs (cloud-hypervisor), systemd-nspawn containers, or Incus. Each workspace gets a static IP on a NAT'd bridge (`192.168.83.0/24`), auto-generated SSH host keys, shell aliases for management, and comes pre-configured with Claude Code. The sandbox network blocks access to the local LAN while allowing internet.
**Snapshot-aware backups** — Restic backups to Backblaze B2 automatically create ZFS snapshots or btrfs read-only snapshots before backing up, using mount namespaces to bind-mount frozen data over the original paths so restic records correct paths. Each backup group gets a `restic_<group>` CLI wrapper. Supports `.nobackup` marker files.
**Self-hosted services** — Comprehensive service stack across ponyo and s0: Gitea (git hosting + CI), Nextcloud (files/calendar), Matrix (chat), mail server, Jellyfin/Sonarr/Radarr/Lidarr (media), Home Assistant/Zigbee2MQTT/Frigate (home automation), LibreChat (AI), Gatus (monitoring), and productivity tools (Vikunja, Actual Budget, Outline, Linkwarden, Memos).
**Push notifications** — ntfy integration alerts on systemd service failures, SSH logins, and ZFS pool issues. Gatus monitors all web-facing services and sends alerts via ntfy.
**Remote deployment** — deploy-rs handles remote machine deployments with boot-only or immediate activation modes. A Makefile wraps common operations (`make deploy <host>`, `make deploy-activate <host>`).


@@ -230,7 +230,14 @@ in
Port = cfg.proxyPort;
};
};
systemd.services.tinyproxy.before = [ "container@pia-vpn.service" ];
systemd.services.tinyproxy = {
before = [ "container@pia-vpn.service" ];
after = [ "systemd-networkd.service" ];
requires = [ "systemd-networkd.service" ];
serviceConfig.ExecStartPre = [
"+${pkgs.systemd}/lib/systemd/systemd-networkd-wait-online --interface=${cfg.bridgeName}:no-carrier --timeout=180"
];
};
# WireGuard interface creation (host-side oneshot)
# Creates the interface in the host namespace so encrypted UDP stays in host netns.


@@ -226,6 +226,86 @@ in
RandomizedDelaySec = "1m";
};
};
# Periodic VPN connectivity check — fails if VPN or internet is down,
# triggering ntfy alert via the OnFailure drop-in.
# Tracks failures with a counter file so only the first 3 failures per
# day trigger an alert (subsequent failures exit 0 to suppress noise).
systemd.services.pia-vpn-check = {
description = "Check PIA VPN connectivity";
after = [ "pia-vpn-setup.service" ];
requires = [ "pia-vpn-setup.service" ];
path = with pkgs; [ wireguard-tools iputils coreutils gawk jq ];
serviceConfig.Type = "oneshot";
script = ''
set -euo pipefail
COUNTER_FILE="/var/lib/pia-vpn/check-fail-count.json"
MAX_ALERTS=3
check_vpn() {
# Check that WireGuard has a peer with a recent handshake (within 3 minutes)
handshake=$(wg show ${cfg.interfaceName} latest-handshakes | awk '{print $2}')
if [ -z "$handshake" ] || [ "$handshake" -eq 0 ]; then
echo "No WireGuard handshake recorded" >&2
return 1
fi
now=$(date +%s)
age=$((now - handshake))
if [ "$age" -gt 180 ]; then
echo "WireGuard handshake is stale (''${age}s ago)" >&2
return 1
fi
# Verify internet connectivity through VPN tunnel
if ! ping -c1 -W10 1.1.1.1 >/dev/null 2>&1; then
echo "Cannot reach internet through VPN" >&2
return 1
fi
echo "PIA VPN connectivity OK (handshake ''${age}s ago)"
return 0
}
if check_vpn; then
rm -f "$COUNTER_FILE"
exit 0
fi
# Check failed: read and update the counter (reset it if it is from a previous day)
today=$(date +%Y-%m-%d)
count=0
if [ -f "$COUNTER_FILE" ]; then
stored=$(jq -r '.date // ""' "$COUNTER_FILE")
if [ "$stored" = "$today" ]; then
count=$(jq -r '.count // 0' "$COUNTER_FILE")
fi
fi
count=$((count + 1))
jq -n --arg date "$today" --argjson count "$count" \
'{"date": $date, "count": $count}' > "$COUNTER_FILE"
if [ "$count" -le "$MAX_ALERTS" ]; then
echo "Failure $count/$MAX_ALERTS today; alerting" >&2
exit 1
else
echo "Failure $count today; suppressing alert (already sent $MAX_ALERTS)" >&2
exit 0
fi
'';
};
systemd.timers.pia-vpn-check = {
description = "Periodic PIA VPN connectivity check";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/5";
RandomizedDelaySec = "30s";
};
};
};
};
};


@@ -5,6 +5,7 @@
./service-failure.nix
./ssh-login.nix
./zfs.nix
./dimm-temp.nix
];
options.ntfy-alerts = {
@@ -19,6 +20,12 @@
default = "";
description = "Extra arguments to pass to curl (e.g. --proxy http://host:port).";
};
ignoredUnits = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ ];
description = "Unit names to skip failure notifications for.";
};
};
config = lib.mkIf config.thisMachine.hasRole."ntfy" {

common/ntfy/dimm-temp.nix (new file, 73 lines)

@@ -0,0 +1,73 @@
{ config, lib, pkgs, ... }:
let
cfg = config.ntfy-alerts;
hasNtfy = config.thisMachine.hasRole."ntfy";
checkScript = pkgs.writeShellScript "dimm-temp-check" ''
PATH="${lib.makeBinPath [ pkgs.lm_sensors pkgs.gawk pkgs.coreutils pkgs.curl ]}"
threshold=55
hot=""
summary=""
while IFS= read -r line; do
case "$line" in
spd5118-*)
chip="$line"
;;
*temp1_input:*)
temp="''${line##*: }"
whole="''${temp%%.*}"
summary="''${summary:+$summary, }$chip: ''${temp}°C"
if [ "$whole" -ge "$threshold" ]; then
hot="$hot"$'\n'" $chip: ''${temp}°C"
fi
;;
esac
done < <(sensors -u 'spd5118-*' 2>/dev/null)
echo "$summary"
if [ -n "$hot" ]; then
message="DIMM temperature above ''${threshold}°C on ${config.networking.hostName}:$hot"
curl \
--fail --silent --show-error \
--max-time 30 --retry 3 \
-H "Authorization: Bearer $NTFY_TOKEN" \
-H "Title: High DIMM temperature on ${config.networking.hostName}" \
-H "Priority: high" \
-H "Tags: thermometer" \
-d "$message" \
"${cfg.serverUrl}/service-failures"
echo "$message" >&2
fi
'';
in
{
options.ntfy-alerts.dimmTempCheck.enable = lib.mkEnableOption "DDR5 DIMM temperature monitoring via spd5118";
config = lib.mkIf (cfg.dimmTempCheck.enable && hasNtfy) {
systemd.services.dimm-temp-check = {
description = "Check DDR5 DIMM temperatures and alert on overheating";
wants = [ "network-online.target" ];
after = [ "network-online.target" ];
serviceConfig = {
Type = "oneshot";
EnvironmentFile = "/run/agenix/ntfy-token";
ExecStart = checkScript;
};
};
systemd.timers.dimm-temp-check = {
description = "Periodic DDR5 DIMM temperature check";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/5";
Persistent = true;
};
};
};
}


@@ -14,6 +14,14 @@ in
EnvironmentFile = "/run/agenix/ntfy-token";
ExecStart = "${pkgs.writeShellScript "ntfy-failure-notify" ''
unit="$1"
# Prevent infinite recursion if this service itself fails
[[ "$unit" == ntfy-failure@* ]] && exit 0
ignored_units=(${lib.concatMapStringsSep " " (u: lib.escapeShellArg u) cfg.ignoredUnits})
for ignored in "''${ignored_units[@]}"; do
if [[ "$unit" == "$ignored" ]]; then
exit 0
fi
done
logfile=$(mktemp)
trap 'rm -f "$logfile"' EXIT
${pkgs.systemd}/bin/journalctl -u "$unit" -n 50 --no-pager -o short > "$logfile" 2>/dev/null \
@@ -40,7 +48,7 @@ in
mkdir -p $out/lib/systemd/system/service.d
cat > $out/lib/systemd/system/service.d/ntfy-on-failure.conf <<'EOF'
[Unit]
OnFailure=ntfy-failure@%p.service
OnFailure=ntfy-failure@%N.service
EOF
'')
];


@@ -6,6 +6,7 @@ in
{
imports = [
./kde.nix
./hyprland.nix
./yubikey.nix
./chromium.nix
./firefox.nix

common/pc/hyprland.nix (new file, 49 lines)

@@ -0,0 +1,49 @@
{ lib, config, pkgs, ... }:
let
cfg = config.de;
in
{
config = lib.mkIf cfg.enable {
programs.hyprland.enable = true;
programs.hyprland.withUWSM = true;
programs.hyprland.xwayland.enable = true;
xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-hyprland ];
environment.systemPackages = with pkgs; [
# Bar
waybar
# Launcher
wofi
# Notifications
mako
# Lock/idle
hyprlock
hypridle
# Wallpaper
hyprpaper
# Polkit
hyprpolkitagent
# Screenshots
grim
slurp
# Clipboard
wl-clipboard
cliphist
# Color picker
hyprpicker
# Brightness (laptop keybinds)
brightnessctl
];
};
}


@@ -270,6 +270,16 @@ in
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Music Assistant";
group = "s0";
url = "http://s0.koi-bebop.ts.net:8095";
interval = "5m";
conditions = [
"[STATUS] == 200"
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Vikunja";
group = "s0";
@@ -320,16 +330,7 @@ in
];
alerts = [{ type = "ntfy"; }];
}
{
name = "LanguageTool";
group = "s0";
url = "https://languagetool.s0.neet.dev";
interval = "5m";
conditions = [
"[STATUS] == 200"
];
alerts = [{ type = "ntfy"; }];
}
{
name = "Unifi";
group = "s0";


@@ -8,6 +8,7 @@
let
thisMachineIsARunner = config.thisMachine.hasRole."gitea-actions-runner";
hostOverlays = config.nixpkgs.overlays;
containerName = "gitea-runner";
giteaRunnerUid = 991;
giteaRunnerGid = 989;
@@ -32,6 +33,7 @@ in
config = { config, lib, pkgs, ... }: {
system.stateVersion = "25.11";
nixpkgs.overlays = hostOverlays;
services.gitea-actions-runner.instances.inst = {
enable = true;


@@ -17,7 +17,7 @@ in
config = lib.mkIf cfg.enable {
services.nextcloud = {
https = true;
package = pkgs.nextcloud32;
package = pkgs.nextcloud33;
hostName = nextcloudHostname;
config.dbtype = "sqlite";
config.adminuser = "jeremy";


@@ -13,6 +13,15 @@ in
services.unifi.unifiPackage = pkgs.unifi;
services.unifi.mongodbPackage = pkgs.mongodb-7_0;
# The upstream module sets KillSignal=SIGCONT so systemd doesn't interfere
# with UniFi's self-managed shutdown. But UniFi's Java process crashes during
# shutdown (Spring context already closed) leaving mongod orphaned in the
# cgroup. With the default KillMode=control-group, mongod only gets SIGCONT
# (a no-op) and runs until the 5min timeout triggers SIGKILL.
# KillMode=mixed sends SIGCONT to the main process but SIGTERM to remaining
# children, giving mongod a clean shutdown instead of SIGKILL.
systemd.services.unifi.serviceConfig.KillMode = "mixed";
networking.firewall = lib.mkIf cfg.openMinimalFirewall {
allowedUDPPorts = [
3478 # STUN

flake.lock (generated, 18 lines changed)

@@ -53,11 +53,11 @@
]
},
"locked": {
"lastModified": 1772252645,
"narHash": "sha256-SVP3BYv/tY19P7mh0aG2Pgq4M/CynQEnV4y+57Ed91g=",
"lastModified": 1772587858,
"narHash": "sha256-w0/XBU20BdBeEIJ9i3ecr9Lc6c8uQaXUn/ri+aOsyJk=",
"owner": "sadjow",
"repo": "claude-code-nix",
"rev": "42c9207e79f1e6b8b95b54a64c10452275717466",
"rev": "0a5fc14be38fabfcfff18db749b63c9c15726765",
"type": "github"
},
"original": {
@@ -228,11 +228,11 @@
]
},
"locked": {
"lastModified": 1772380461,
"narHash": "sha256-O3ukj3Bb3V0Tiy/4LUfLlBpWypJ9P0JeUgsKl2nmZZY=",
"lastModified": 1772569491,
"narHash": "sha256-bdr6ueeXO1Xg91sFkuvaysYF0mVdwHBpdyhTjBEWv+s=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "f140aa04d7d14f8a50ab27f3691b5766b17ae961",
"rev": "924e61f5c2aeab38504028078d7091077744ab17",
"type": "github"
},
"original": {
@@ -301,11 +301,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1772198003,
"narHash": "sha256-I45esRSssFtJ8p/gLHUZ1OUaaTaVLluNkABkk6arQwE=",
"lastModified": 1772542754,
"narHash": "sha256-WGV2hy+VIeQsYXpsLjdr4GvHv5eECMISX1zKLTedhdg=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "dd9b079222d43e1943b6ebd802f04fd959dc8e61",
"rev": "8c809a146a140c5c8806f13399592dbcb1bb5dc4",
"type": "github"
},
"original": {


@@ -115,6 +115,8 @@
];
networking.hostName = hostname;
# Query with: nixos-version --configuration-revision
system.configurationRevision = self.rev or self.dirtyRev or "unknown";
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
@@ -137,7 +139,6 @@
src = nixpkgs;
patches = [
./patches/dont-break-nix-serve.patch
./patches/libreoffice-noto-fonts-subset.patch
];
};
patchedNixpkgs = nixpkgs.lib.fix (self: (import "${patchedNixpkgsSrc}/flake.nix").outputs { self = nixpkgs; });


@@ -8,6 +8,7 @@ let
thisMachineIsPersonal = osConfig.thisMachine.hasRole."personal";
in
{
imports = [ ./hyprland.nix ];
home.username = "googlebot";
home.homeDirectory = "/home/googlebot";

home/hyprland.nix (new file, 276 lines)

@@ -0,0 +1,276 @@
{ lib, osConfig, pkgs, ... }:
let
thisMachineIsPersonal = osConfig.thisMachine.hasRole."personal";
in
{
config = lib.mkIf thisMachineIsPersonal {
wayland.windowManager.hyprland = {
enable = true;
systemd.enable = false; # Required when using UWSM
settings = {
"$mod" = "SUPER";
"$terminal" = "ghostty";
"$menu" = "wofi --show drun";
general = {
gaps_in = 5;
gaps_out = 10;
border_size = 2;
"col.active_border" = "rgba(33ccffee) rgba(00ff99ee) 45deg";
"col.inactive_border" = "rgba(595959aa)";
layout = "dwindle";
};
decoration = {
rounding = 10;
blur = {
enabled = true;
size = 3;
passes = 1;
};
shadow = {
enabled = true;
range = 4;
render_power = 3;
color = "rgba(1a1a1aee)";
};
};
animations = {
enabled = true;
bezier = "myBezier, 0.05, 0.9, 0.1, 1.05";
animation = [
"windows, 1, 7, myBezier"
"windowsOut, 1, 7, default, popin 80%"
"border, 1, 10, default"
"borderangle, 1, 8, default"
"fade, 1, 7, default"
"workspaces, 1, 6, default"
];
};
dwindle = {
pseudotile = true;
preserve_split = true;
};
input = {
follow_mouse = 1;
touchpad = {
natural_scroll = true;
};
};
misc = {
force_default_wallpaper = 0;
};
bind = [
# Applications
"$mod, Return, exec, $terminal"
"$mod, D, exec, $menu"
"$mod, L, exec, hyprlock"
# Window management
"$mod, Q, killactive"
"$mod, F, fullscreen"
"$mod, V, togglefloating"
"$mod, P, pseudo"
"$mod, J, togglesplit"
# Move focus
"$mod, left, movefocus, l"
"$mod, right, movefocus, r"
"$mod, up, movefocus, u"
"$mod, down, movefocus, d"
# Switch workspaces
"$mod, 1, workspace, 1"
"$mod, 2, workspace, 2"
"$mod, 3, workspace, 3"
"$mod, 4, workspace, 4"
"$mod, 5, workspace, 5"
"$mod, 6, workspace, 6"
"$mod, 7, workspace, 7"
"$mod, 8, workspace, 8"
"$mod, 9, workspace, 9"
"$mod, 0, workspace, 10"
# Move active window to workspace
"$mod SHIFT, 1, movetoworkspace, 1"
"$mod SHIFT, 2, movetoworkspace, 2"
"$mod SHIFT, 3, movetoworkspace, 3"
"$mod SHIFT, 4, movetoworkspace, 4"
"$mod SHIFT, 5, movetoworkspace, 5"
"$mod SHIFT, 6, movetoworkspace, 6"
"$mod SHIFT, 7, movetoworkspace, 7"
"$mod SHIFT, 8, movetoworkspace, 8"
"$mod SHIFT, 9, movetoworkspace, 9"
"$mod SHIFT, 0, movetoworkspace, 10"
# Scroll through workspaces
"$mod, mouse_down, workspace, e+1"
"$mod, mouse_up, workspace, e-1"
# Screenshots
", Print, exec, grim -g \"$(slurp)\" - | wl-copy"
"SHIFT, Print, exec, grim - | wl-copy"
# Clipboard history
"$mod SHIFT, V, exec, cliphist list | wofi --dmenu | cliphist decode | wl-copy"
];
bindm = [
# Move/resize with mouse
"$mod, mouse:272, movewindow"
"$mod, mouse:273, resizewindow"
];
bindel = [
# Volume
", XF86AudioRaiseVolume, exec, wpctl set-volume @DEFAULT_AUDIO_SINK@ 5%+"
", XF86AudioLowerVolume, exec, wpctl set-volume @DEFAULT_AUDIO_SINK@ 5%-"
# Brightness
", XF86MonBrightnessUp, exec, brightnessctl s 10%+"
", XF86MonBrightnessDown, exec, brightnessctl s 10%-"
];
bindl = [
", XF86AudioMute, exec, wpctl set-mute @DEFAULT_AUDIO_SINK@ toggle"
];
exec-once = [
"waybar"
"mako"
"hyprpaper"
"wl-paste --type text --watch cliphist store"
"wl-paste --type image --watch cliphist store"
];
};
};
# Waybar
programs.waybar = {
enable = true;
settings = {
mainBar = {
layer = "top";
position = "top";
height = 30;
modules-left = [ "hyprland/workspaces" ];
modules-center = [ "clock" ];
modules-right = [ "network" "pulseaudio" "battery" "tray" ];
clock = {
format = "{:%H:%M %Y-%m-%d}";
};
battery = {
format = "{capacity}% {icon}";
format-icons = [ "" "" "" "" "" ];
};
network = {
format-wifi = "{essid} ({signalStrength}%) ";
format-ethernet = "{ipaddr}/{cidr} ";
format-disconnected = "Disconnected ";
};
pulseaudio = {
format = "{volume}% {icon}";
format-muted = "";
format-icons = {
default = [ "" "" "" ];
};
on-click = "wpctl set-mute @DEFAULT_AUDIO_SINK@ toggle";
};
tray = {
spacing = 10;
};
};
};
};
# Notifications
services.mako = {
enable = true;
settings = {
default-timeout = 5000;
border-radius = 5;
};
};
# Idle daemon
services.hypridle = {
enable = true;
settings = {
general = {
lock_cmd = "pidof hyprlock || hyprlock";
before_sleep_cmd = "loginctl lock-session";
after_sleep_cmd = "hyprctl dispatch dpms on";
};
listener = [
{
timeout = 300;
on-timeout = "loginctl lock-session";
}
{
timeout = 330;
on-timeout = "hyprctl dispatch dpms off";
on-resume = "hyprctl dispatch dpms on";
}
{
timeout = 1800;
on-timeout = "systemctl suspend";
}
];
};
};
# Lock screen
programs.hyprlock = {
enable = true;
settings = {
general = {
hide_cursor = true;
grace = 5;
};
background = [
{
monitor = "";
color = "rgba(25, 20, 20, 1.0)";
blur_passes = 2;
blur_size = 7;
}
];
input-field = [
{
monitor = "";
size = "200, 50";
outline_thickness = 3;
dots_size = 0.33;
dots_spacing = 0.15;
outer_color = "rgb(151515)";
inner_color = "rgb(200, 200, 200)";
font_color = "rgb(10, 10, 10)";
fade_on_empty = true;
placeholder_text = "<i>Password...</i>";
hide_input = false;
position = "0, -20";
halign = "center";
valign = "center";
}
];
};
};
};
}


@@ -42,5 +42,6 @@
}
];
networking.usePredictableInterfaceNames = true;
networking.interfaces.eth0.useDHCP = true;
}


@@ -401,6 +401,15 @@
statusCheck = false;
id = "5_4201_sandman";
};
music-assistant = {
title = "Music Assistant";
description = "s0:8095";
icon = "hl-music-assistant";
url = "http://s0.koi-bebop.ts.net:8095";
target = "sametab";
statusCheck = false;
id = "6_4201_music-assistant";
};
};
haList = [
haItems.home-assistant
@@ -409,6 +418,7 @@
haItems.frigate
haItems.valetudo
haItems.sandman
haItems.music-assistant
];
in
{
@@ -474,15 +484,6 @@
statusCheck = false;
id = "4_5301_outline";
};
languagetool = {
title = "LanguageTool";
description = "languagetool.s0.neet.dev";
icon = "hl-languagetool";
url = "https://languagetool.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "5_5301_languagetool";
};
};
prodList = [
prodItems.vikunja
@@ -490,7 +491,6 @@
prodItems.linkwarden
prodItems.memos
prodItems.outline
prodItems.languagetool
];
in
{


@@ -9,6 +9,9 @@
networking.hostName = "s0";
ntfy-alerts.ignoredUnits = [ "logrotate" ];
ntfy-alerts.dimmTempCheck.enable = true;
# system.autoUpgrade.enable = true;
nix.gc.automatic = lib.mkForce false; # allow the nix store to serve as a build cache
@@ -140,30 +143,6 @@
services.lidarr.enable = true;
services.lidarr.user = "public_data";
services.lidarr.group = "public_data";
services.recyclarr = {
enable = true;
configuration = {
radarr.radarr_main = {
api_key = {
_secret = "/run/credentials/recyclarr.service/radarr-api-key";
};
base_url = "http://localhost:7878";
quality_definition.type = "movie";
};
sonarr.sonarr_main = {
api_key = {
_secret = "/run/credentials/recyclarr.service/sonarr-api-key";
};
base_url = "http://localhost:8989";
quality_definition.type = "series";
};
};
};
systemd.services.recyclarr.serviceConfig.LoadCredential = [
"radarr-api-key:/run/agenix/radarr-api-key"
"sonarr-api-key:/run/agenix/sonarr-api-key"
];
users.groups.public_data.gid = 994;
users.users.public_data = {
@@ -174,8 +153,6 @@
};
};
};
age.secrets.radarr-api-key.file = ../../../secrets/radarr-api-key.age;
age.secrets.sonarr-api-key.file = ../../../secrets/sonarr-api-key.age;
# jellyfin
# jellyfin cannot run in the vpn container and use hardware encoding
@@ -251,7 +228,6 @@
(mkVirtualHost "linkwarden.s0.neet.dev" "http://localhost:${toString config.services.linkwarden.port}")
(mkVirtualHost "memos.s0.neet.dev" "http://localhost:${toString config.services.memos.settings.MEMOS_PORT}")
(mkVirtualHost "outline.s0.neet.dev" "http://localhost:${toString config.services.outline.port}")
(mkVirtualHost "languagetool.s0.neet.dev" "http://localhost:${toString config.services.languagetool.port}")
];
tailscaleAuth = {
@@ -275,7 +251,6 @@
"linkwarden.s0.neet.dev"
# "memos.s0.neet.dev" # messes up memos /auth route
# "outline.s0.neet.dev" # messes up outline /auth route
"languagetool.s0.neet.dev"
];
expectedTailnet = "koi-bebop.ts.net";
};
@@ -341,6 +316,8 @@
enable = true;
settings.MEMOS_PORT = "57643";
};
# ReadWritePaths doesn't work with ProtectSystem=strict on ZFS submounts (/var/lib is a separate dataset)
systemd.services.memos.serviceConfig.ProtectSystem = lib.mkForce "full";
services.outline = {
enable = true;
@@ -363,10 +340,5 @@
owner = config.services.outline.user;
};
services.languagetool = {
enable = true;
port = 60613;
};
boot.binfmt.emulatedSystems = [ "aarch64-linux" "armv7l-linux" ];
}


@@ -1,4 +1,4 @@
{ lib, pkgs, modulesPath, ... }:
{ modulesPath, ... }:
{
imports =
@@ -67,17 +67,17 @@
dhcpcd.enable = false;
};
# eth0 — native VLAN 5 (main), default route, internet
# useDHCP generates the base 40-eth0 networkd unit and drives initrd DHCP for LUKS unlock.
networking.interfaces."eth0".useDHCP = true;
systemd.network.networks."40-eth0" = {
dhcpV4Config.RouteMetric = 100; # prefer eth0 over VLAN interfaces for default route
linkConfig.RequiredForOnline = "routable"; # wait-online succeeds once eth0 has a route
# eno1 — native VLAN 5 (main), default route, internet
# useDHCP generates the base 40-eno1 networkd unit and drives initrd DHCP for LUKS unlock.
networking.interfaces."eno1".useDHCP = true;
systemd.network.networks."40-eno1" = {
dhcpV4Config.RouteMetric = 100; # prefer eno1 over VLAN interfaces for default route
linkConfig.RequiredForOnline = "routable"; # wait-online succeeds once eno1 has a route
};
# eth1 — trunk port (no IP on the raw interface)
systemd.network.networks."10-eth1" = {
matchConfig.Name = "eth1";
# eno2 — trunk port (no IP on the raw interface)
systemd.network.networks."40-eno2" = {
matchConfig.Name = "eno2";
networkConfig = {
VLAN = [ "vlan-iot" "vlan-mgmt" ];
LinkLocalAddressing = "no";
@@ -86,9 +86,9 @@
   };
   # VLAN 2 — IoT (cameras, smart home)
-  systemd.network.netdevs."20-vlan-iot".netdevConfig = { Name = "vlan-iot"; Kind = "vlan"; };
-  systemd.network.netdevs."20-vlan-iot".vlanConfig.Id = 2;
-  systemd.network.networks."20-vlan-iot" = {
+  systemd.network.netdevs."50-vlan-iot".netdevConfig = { Name = "vlan-iot"; Kind = "vlan"; };
+  systemd.network.netdevs."50-vlan-iot".vlanConfig.Id = 2;
+  systemd.network.networks."50-vlan-iot" = {
     matchConfig.Name = "vlan-iot";
     networkConfig.DHCP = "yes";
     dhcpV4Config = {
@@ -99,9 +99,9 @@
   };
   # VLAN 4 — Management
-  systemd.network.netdevs."20-vlan-mgmt".netdevConfig = { Name = "vlan-mgmt"; Kind = "vlan"; };
-  systemd.network.netdevs."20-vlan-mgmt".vlanConfig.Id = 4;
-  systemd.network.networks."20-vlan-mgmt" = {
+  systemd.network.netdevs."50-vlan-mgmt".netdevConfig = { Name = "vlan-mgmt"; Kind = "vlan"; };
+  systemd.network.netdevs."50-vlan-mgmt".vlanConfig.Id = 4;
+  systemd.network.networks."50-vlan-mgmt" = {
     matchConfig.Name = "vlan-mgmt";
     networkConfig.DHCP = "yes";
     dhcpV4Config = {

View File

@@ -16,6 +16,14 @@ in
     nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ prev.writableTmpDirAsHomeHook ];
   });
+  # Retry on push failure to work around hyper connection pool race condition.
+  # https://github.com/zhaofengli/attic/pull/246
+  attic-client = prev.attic-client.overrideAttrs (old: {
+    patches = (old.patches or [ ]) ++ [
+      ../patches/attic-client-push-retry.patch
+    ];
+  });
   # Add --zeroconf-port support to Spotify Connect plugin so librespot
   # binds to a fixed port that can be opened in the firewall.
   music-assistant = prev.music-assistant.overrideAttrs (old: {

View File

@@ -0,0 +1,143 @@
diff --git a/attic/src/api/v1/upload_path.rs b/attic/src/api/v1/upload_path.rs
index 5b1231e5..cb90928c 100644
--- a/attic/src/api/v1/upload_path.rs
+++ b/attic/src/api/v1/upload_path.rs
@@ -25,7 +25,7 @@ pub const ATTIC_NAR_INFO_PREAMBLE_SIZE: &str = "X-Attic-Nar-Info-Preamble-Size";
/// Regardless of client compression, the server will always decompress
/// the NAR to validate the NAR hash before applying the server-configured
/// compression again.
-#[derive(Debug, Serialize, Deserialize)]
+#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct UploadPathNarInfo {
/// The name of the binary cache to upload to.
pub cache: CacheName,
diff --git a/client/src/push.rs b/client/src/push.rs
index 309bd4b6..f3951d2b 100644
--- a/client/src/push.rs
+++ b/client/src/push.rs
@@ -560,57 +560,83 @@ pub async fn upload_path(
);
let bar = mp.add(ProgressBar::new(path_info.nar_size));
bar.set_style(style);
- let nar_stream = NarStreamProgress::new(store.nar_from_path(path.to_owned()), bar.clone())
- .map_ok(Bytes::from);
- let start = Instant::now();
- match api
- .upload_path(upload_info, nar_stream, force_preamble)
- .await
- {
- Ok(r) => {
- let r = r.unwrap_or(UploadPathResult {
- kind: UploadPathResultKind::Uploaded,
- file_size: None,
- frac_deduplicated: None,
- });
-
- let info_string: String = match r.kind {
- UploadPathResultKind::Deduplicated => "deduplicated".to_string(),
- _ => {
- let elapsed = start.elapsed();
- let seconds = elapsed.as_secs_f64();
- let speed = (path_info.nar_size as f64 / seconds) as u64;
+ // Create a new stream for each retry attempt
+ let bar_ref = &bar;
+ let nar_stream = move || {
+ NarStreamProgress::new(store.nar_from_path(path.to_owned()), bar_ref.clone())
+ .map_ok(Bytes::from)
+ };
- let mut s = format!("{}/s", HumanBytes(speed));
+ let start = Instant::now();
+ let mut retries = 0;
+ const MAX_RETRIES: u32 = 3;
+ const RETRY_DELAY: Duration = Duration::from_millis(250);
- if let Some(frac_deduplicated) = r.frac_deduplicated {
- if frac_deduplicated > 0.01f64 {
- s += &format!(", {:.1}% deduplicated", frac_deduplicated * 100.0);
+ loop {
+ let result = api
+ .upload_path(upload_info.clone(), nar_stream(), force_preamble)
+ .await;
+ match result {
+ Ok(r) => {
+ let r = r.unwrap_or(UploadPathResult {
+ kind: UploadPathResultKind::Uploaded,
+ file_size: None,
+ frac_deduplicated: None,
+ });
+
+ let info_string: String = match r.kind {
+ UploadPathResultKind::Deduplicated => "deduplicated".to_string(),
+ _ => {
+ let elapsed = start.elapsed();
+ let seconds = elapsed.as_secs_f64();
+ let speed = (path_info.nar_size as f64 / seconds) as u64;
+
+ let mut s = format!("{}/s", HumanBytes(speed));
+
+ if let Some(frac_deduplicated) = r.frac_deduplicated {
+ if frac_deduplicated > 0.01f64 {
+ s += &format!(", {:.1}% deduplicated", frac_deduplicated * 100.0);
+ }
}
+
+ s
}
+ };
- s
+ mp.suspend(|| {
+ eprintln!(
+ "✅ {} ({})",
+ path.as_os_str().to_string_lossy(),
+ info_string
+ );
+ });
+ bar.finish_and_clear();
+
+ return Ok(());
+ }
+ Err(e) => {
+ if retries < MAX_RETRIES {
+ retries += 1;
+ mp.suspend(|| {
+ eprintln!(
+ "❕ {}: Upload failed, retrying ({}/{})...",
+ path.as_os_str().to_string_lossy(),
+ retries,
+ MAX_RETRIES
+ );
+ });
+ tokio::time::sleep(RETRY_DELAY).await;
+ continue;
}
- };
- mp.suspend(|| {
- eprintln!(
- "✅ {} ({})",
- path.as_os_str().to_string_lossy(),
- info_string
- );
- });
- bar.finish_and_clear();
+ mp.suspend(|| {
+ eprintln!("❌ {}: {}", path.as_os_str().to_string_lossy(), e);
+ });
+ bar.finish_and_clear();
- Ok(())
- }
- Err(e) => {
- mp.suspend(|| {
- eprintln!("❌ {}: {}", path.as_os_str().to_string_lossy(), e);
- });
- bar.finish_and_clear();
- Err(e)
+ return Err(e);
+ }
}
}
}
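The core of the patch above is that a consumed NAR stream cannot be replayed, so the stream construction is wrapped in a closure and re-invoked on each attempt. A rough shell analogue of that retry loop (with a hypothetical `do_upload` standing in for `api.upload_path`, here simulated to fail on its first two attempts):

```shell
max_retries=3
retries=0
do_upload() { [ "$1" -ge 2 ]; }  # simulated API call: fails on the first two attempts
while true; do
  if do_upload "$retries"; then
    # success: report and stop, mirroring the patch's `return Ok(())`
    echo "uploaded after $retries retries"
    break
  elif [ "$retries" -lt "$max_retries" ]; then
    # transient failure: rebuild the input and try again after a short delay
    retries=$((retries + 1))
    echo "upload failed, retrying ($retries/$max_retries)..."
    sleep 0.25
  else
    # out of retries: surface the final error, mirroring `return Err(e)`
    echo "upload failed permanently" >&2
    exit 1
  fi
done
# prints "uploaded after 2 retries"
```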

View File

@@ -1,16 +0,0 @@
Fix notoSubset glob for noto-fonts >= 2026.02.01.
noto-fonts switched from variable fonts (NotoSansArabic[wdth,wght].ttf)
to static fonts (NotoSansArabic.ttf). The old glob pattern only matched
files with brackets in the name, causing the cp to fail.
--- a/pkgs/applications/office/libreoffice/default.nix
+++ b/pkgs/applications/office/libreoffice/default.nix
@@ -191,7 +191,7 @@
runCommand "noto-fonts-subset" { } ''
mkdir -p "$out/share/fonts/noto/"
${concatMapStrings (x: ''
- cp "${noto-fonts}/share/fonts/noto/NotoSans${x}["*.[ot]tf "$out/share/fonts/noto/"
+ cp "${noto-fonts}/share/fonts/noto/NotoSans${x}"*.[ot]tf "$out/share/fonts/noto/"
'') suffixes}
'';
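The glob change described above can be reproduced in a quick shell session (the temp directory and file names are illustrative): a quoted `[` becomes a literal character in the pattern, so only filenames containing a bracket match the old glob, while the new static font names match the fixed one.

```shell
cd "$(mktemp -d)"
# one old-style variable font, one new-style static font
touch 'NotoSansArabic[wdth,wght].ttf' 'NotoSansArabic.ttf'
# old pattern: literal "[" required, matches the variable font only
old_glob=$(printf '%s\n' NotoSansArabic"["*.[ot]tf)
# fixed pattern: matches both naming schemes
new_glob=$(printf '%s\n' NotoSansArabic*.[ot]tf | sort)
echo "$old_glob"
echo "$new_glob"
```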

View File

@@ -1,11 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 hPp1nw gfVRDt7ReEnz10WvPa8UfBBnsRsiw7sxxXQMuXRnCVs
slBNX9Yc1qSu1P5ioNDNLPd97NGE/LWPS/A+u9QGo4E
-> ssh-ed25519 ZDy34A e5MSY5qDP6WuEgbiK0p5esMQJBb3ScVpb15Ff8sTQgQ
9nsimoUQncnbfiu13AnFWZXcpaiySUYdS1eH5O/3Fgg
-> ssh-ed25519 w3nu8g op1KSUhJgM6w/nlaUssQDiraQpVzgnWd//JMu2vFgms
KvEaJfsB7Qkf+PnzFJdZ3wAxm2qj23IS8RRxyuGN2G4
-> ssh-ed25519 evqvfg 9L6pFuqkcChZq/W4zkATXm1Y76SEK+S4SyaiSlJd+C4
j/UWJvo4Cr/UDfaN2milpJ6rU0w1EWdTAzV3SlrCcW8
--- bdG4zC5dx6cSPetH3DNeHEk6EYCJ5TXGrn8OhUMknNU
/¶ø+ÏpñR[¤àJ-*@ÌÿŸx0Ú©ò-ä.*&T·™~-i 2€eƒ¡`@ëQ8š<l™à QK0AÕ§

View File

@@ -63,8 +63,4 @@ with roles;
# zigbee2mqtt secrets
"zigbee2mqtt.yaml.age".publicKeys = zigbee;
# Sonarr and Radarr secrets
"radarr-api-key.age".publicKeys = media-server;
"sonarr-api-key.age".publicKeys = media-server;
}

Binary file not shown.