312 Commits

Author SHA1 Message Date
dd0a89d5cd trim
Some checks are pending
Check Flake / check-flake (push) Waiting to run
2026-02-20 21:03:58 -08:00
63da381813 try again
Some checks are pending
Check Flake / check-flake (push) Waiting to run
2026-02-19 22:39:52 -08:00
fa5be20f39 All systems
Some checks failed
Check Flake / check-flake (push) Failing after 2m47s
2026-02-19 22:35:25 -08:00
09f461123f Add Attic binary cache and containerize gitea runner
All checks were successful
Check Flake / check-flake (push) Successful in 2m32s
Replace nix-serve-only setup with Attic for managed binary caching with
upstream filtering and GC. Move gitea actions runner from host into an
isolated NixOS container with private networking. nix-serve kept alongside
Attic during migration.
2026-02-19 22:22:29 -08:00
9154595910 Add Incus sandbox on fry that I've already been using for a while now
All checks were successful
Check Flake / check-flake (push) Successful in 3m35s
2026-02-17 21:35:23 -08:00
1b92363b08 Fix rust analyzer in vscode 2026-02-17 21:28:50 -08:00
136f024cf0 Fix tailscale networking when incus is on 2026-02-17 21:28:28 -08:00
3d08a3e9bc Improve nix settings for sandboxed workspaces
All checks were successful
Check Flake / check-flake (push) Successful in 1m15s
2026-02-14 11:29:02 -08:00
99ef62d31a Fix unused vars
All checks were successful
Check Flake / check-flake (push) Successful in 1m21s
2026-02-11 23:12:00 -08:00
298f473ceb Remove unused vscode-server module 2026-02-11 23:00:48 -08:00
546bd08f83 Fix CI build. Ephemeral targets should not be in nixosConfigurations
All checks were successful
Check Flake / check-flake (push) Successful in 17m45s
2026-02-11 22:49:11 -08:00
10f3e3a7bf Remove old stale/unused configuration 2026-02-11 22:47:38 -08:00
d44bd12e17 Update README.md 2026-02-11 21:58:38 -08:00
60e89dfc90 Clean up CLAUDE.md and create the claude skill correctly this time
Some checks failed
Check Flake / check-flake (push) Failing after 6s
2026-02-10 21:08:13 -08:00
869b6af7f7 Block sandbox access to local network
Add nftables forward rules to prevent sandboxed workspaces from
reaching RFC1918 private addresses while allowing public internet
and the host gateway (for DNS/NAT).
2026-02-09 20:16:02 -08:00
d6a0e8ec49 Disable tailscaleAuth for now because it doesn't work with tailscale's ACL tagged group
Some checks failed
Check Flake / check-flake (push) Failing after 35s
2026-02-09 19:57:20 -08:00
8293a7dc2a Rework Claude Code config in sandboxed workspaces
Remove credential passing to sandboxes (didn't work well enough).
Move onboarding config init from host-side setup into base.nix so
each workspace initializes its own Claude config on first boot.
Wrap claude command in VM and Incus workspaces to always skip
permission prompts.
2026-02-09 19:56:11 -08:00
cbf2aedcad Add use flake for fresh claude code 2026-02-09 18:04:09 -08:00
69fc3ad837 Add ZFS/btrfs snapshot support to backups
Creates filesystem snapshots before backup for point-in-time consistency.
Uses mount namespaces to bind mount snapshots over original paths, so
restic records correct paths while reading from frozen snapshot data.

- Auto-detects filesystem type via findmnt
- Deterministic snapshot names using path hash
- Graceful fallback for unsupported filesystems
2026-02-08 20:16:37 -08:00
6041d4d09f Replace nixos-generators with upstream nixpkgs image support 2026-02-08 17:57:16 -08:00
cf71b74d6f Add Incus container support to sandboxed workspaces
- Add incus.nix module for fully declarative Incus/LXC containers
- Build NixOS LXC images using nixpkgs.lib.nixosSystem
- Ephemeral containers: recreated on each start, cleaned up on stop
- Use flock to serialize concurrent container operations
- Deterministic MAC addresses via lib.mkMac to prevent ARP cache issues
- Add veth* to NetworkManager unmanaged interfaces
- Update CLAUDE.md with coding conventions and shared lib docs
2026-02-08 15:16:40 -08:00
5178ea6835 Configure Claude Code for sandboxed workspaces
- Add credentials bind mount in container.nix
- Create claude-credentials-dir service to copy credentials for VMs
- Generate .claude.json with onboarding skipped and workspace trusted
- Add allowUnfree to container config
2026-02-08 14:53:31 -08:00
87db330e5b Add sandboxed-workspace module for isolated dev environments
Provides isolated development environments using either VMs (microvm.nix)
or containers (systemd-nspawn) with a unified configuration interface.

Features:
- Unified options with required type field ("vm" or "container")
- Shared base configuration for networking, SSH, users, packages
- Automatic SSH host key generation and persistence
- Shell aliases for workspace management (start/stop/status/ssh)
- Automatic /etc/hosts entries for workspace hostnames
- restartIfChanged support for both VMs and containers
- Passwordless doas in workspaces

Container backend:
- Uses hostBridge for proper bridge networking with /24 subnet
- systemd-networkd for IP configuration
- systemd-resolved for DNS

VM backend:
- TAP interface with deterministic MAC addresses
- virtiofs shares for workspace directories
- vsock CID generation
2026-02-07 22:43:08 -08:00
70f0064d7b Add claude-code to personal machines 2026-02-07 22:37:35 -08:00
cef8456332 Add CLAUDE.md with project conventions 2026-02-07 22:36:11 -08:00
c22855175a Add logseq and godot-mono
All checks were successful
Check Flake / check-flake (push) Successful in 3m51s
2026-02-06 21:12:18 -08:00
0a06e3c1ae Move vscodium config to home manager and add vscodium profile 2026-02-06 21:11:59 -08:00
eb416ae409 Update nixpkgs for wireless fix https://github.com/nixos/nixpkgs/issues/476906
All checks were successful
Check Flake / check-flake (push) Successful in 3m43s
2026-01-27 19:14:40 -08:00
ae2a62515a Enable scanner support
All checks were successful
Check Flake / check-flake (push) Successful in 5m46s
2026-01-25 13:11:01 -08:00
2810ba1412 Enable flakes in kexec image and comma integration
All checks were successful
Check Flake / check-flake (push) Successful in 20m21s
2026-01-24 15:02:42 -08:00
e42e30d3cc Fix nix-index autogenerated db comma integration 2026-01-24 15:01:16 -08:00
83b5d3b8c2 Update nextcloud occ command syntax 2026-01-24 14:59:57 -08:00
0b604fd99c Add activate deploy command 2026-01-24 14:58:40 -08:00
51fbae98c5 Update digitalocean key
All checks were successful
Check Flake / check-flake (push) Successful in 5m51s
2026-01-14 19:32:21 -08:00
d8eff26864 VLAN workaround for now 2026-01-14 18:56:24 -08:00
5f7335c2a0 Simplify kexec and iso image generation 2026-01-14 18:54:55 -08:00
bab2df5d7e Use programs.ssh.askPassword
All checks were successful
Check Flake / check-flake (push) Successful in 4m56s
2026-01-11 15:24:53 -08:00
adc04d1bc7 Update nixos mailserver
All checks were successful
Check Flake / check-flake (push) Successful in 18m38s
2026-01-11 14:25:17 -08:00
da9a8f8c03 Update nixpkgs 2026-01-11 14:25:03 -08:00
415cbca33e VLAN workaround for now 2026-01-10 23:04:48 -08:00
51272a172b Add system76-keyboard-configurator to fry 2026-01-10 23:03:19 -08:00
f053c677e8 Set up openwebui + ollama 2026-01-10 23:02:43 -08:00
c130ce6edd Don't generate zed user config file for now 2026-01-10 22:55:31 -08:00
4718326cb6 Configure ssh-agent to work with keepassxc ssh keys 2026-01-10 22:53:28 -08:00
61698aa7e2 Add kde connect 2026-01-10 22:52:17 -08:00
e0af023ac9 barrier was removed from nixpkgs 2026-01-10 22:51:09 -08:00
c0088553ff jellyfin-media-player was removed from nixpkgs 2026-01-10 22:49:04 -08:00
577736fcb2 Add deploy command 2026-01-10 22:46:39 -08:00
cf087b0e39 Add fry
All checks were successful
Check Flake / check-flake (push) Successful in 1h22m48s
2025-10-12 13:36:02 -07:00
cb1c4752ec Use latest kernel on Howl 2025-10-12 13:35:23 -07:00
b77fb54dc6 Disable annoying pls shell integration 2025-10-12 13:35:02 -07:00
3d6a759827 Update nixpkgs 2025-10-12 13:33:53 -07:00
0c455baebd Add languagetool
All checks were successful
Check Flake / check-flake (push) Successful in 5m13s
2025-08-16 19:04:10 -07:00
b58df0632a Add outline service
All checks were successful
Check Flake / check-flake (push) Successful in 15m2s
2025-08-10 20:49:50 -07:00
4956e41285 Add memos service 2025-08-10 19:03:35 -07:00
ead6653de1 Add services to tailscale auth 2025-08-10 19:02:47 -07:00
dd4a5729d4 Workaround for broken librespot spotify api integration
All checks were successful
Check Flake / check-flake (push) Successful in 4m49s
2025-08-10 15:18:29 -07:00
f248c129c8 Open port 8095 for music assistant too 2025-08-10 15:17:52 -07:00
c011faab18 Use flaresolverr with linkwarden 2025-08-10 15:17:27 -07:00
a5d0b3b748 Bring back APU2 router for more experimentation
All checks were successful
Check Flake / check-flake (push) Successful in 19m21s
2025-08-05 19:45:50 -07:00
ed3bee2e4e Improve minimal iso so it can boot on APU2 from sd card 2025-08-05 19:44:49 -07:00
dbde2a40f2 Add linkwarden 2025-08-05 19:42:29 -07:00
6c69d82156 Add support for Home Assistant voice (whisper + piper + cloud llm) and Music Assistant via Spotify by librespot
Music Assistant made custom modifications to librespot that they haven't tried to upstream, so they
require a custom librespot build. I tried repeatedly to just override the one already in nixpkgs,
copying the pattern nixpkgs itself uses for overriding the src of a cargo package (see mopidy),
but it just didn't work... Oh well. So I just patch nixpkgs with the new source instead. It works, I guess.

This is about where I gave up...

```nix
nixpkgs.overlays = [
  (final: prev: {
    # Cannot use librespot upstream because music-assistant requires custom changes
    # that they never bothered to even try to upstream
    librespot = prev.librespot.overrideAttrs (oldAttrs: rec {
      src = prev.fetchFromGitHub {
        owner = "music-assistant";
        repo = "librespot";
        rev = "786cc46199e583f304a84c786acb0a9b37bc3fbd";
        sha256 = "sha256-xaOrqC8yCjF23Tz31RD3CzqZ3xxrDM6ncW1yoovEaGQ=";
      };

      cargoDeps = oldAttrs.cargoDeps.overrideAttrs (oldAttrs': {
        vendorStaging = oldAttrs'.vendorStaging.overrideAttrs {
          outputHash = "sha256-SqvJSHkyd1IicT6c4pE96dBJNNodULhpyG14HRGVWCk=";
        };
      });
    });
  })
];
```
2025-08-05 19:37:50 -07:00
01b01f06b4 Stop using systemd-networkd; it has some flaws with NixOS' networking that I need to figure out later.
It is very elegant and easy to debug/understand, and I definitely want to use it, but the most significant
problem is that it doesn't work with NixOS containers' private networking. So I'll need to figure that out,
or maybe it will be fixed upstream soon.
2025-08-05 19:27:29 -07:00
cf560d4e53 Downgrade Howl's kernel because newer kernels are just horrible with Howl's network card 2025-08-05 19:24:46 -07:00
8cf4957e15 Add build iso helper command 2025-08-05 19:23:42 -07:00
dc02438a63 Finally a fix for DHCP+VLANs thanks to systemd-networkd
All checks were successful
Check Flake / check-flake (push) Successful in 3m31s
2025-07-22 21:20:12 -07:00
948984af2d Set ghostty preferences
All checks were successful
Check Flake / check-flake (push) Successful in 22m14s
2025-07-18 19:46:18 -07:00
be23526c2c Add KeepassXC keys, remove some very old user keys, and rekey
All checks were successful
Check Flake / check-flake (push) Successful in 1m50s
2025-07-16 22:01:33 -07:00
e234577268 Disable inactive cache push experiment 2025-07-16 22:00:11 -07:00
82b67ed566 Add Whiteboard app to Nextcloud
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
2025-07-16 20:49:39 -07:00
53c2e2222c Move shell aliases 2025-07-16 20:48:26 -07:00
846da159d0 Iodine stopped working again 2025-07-16 20:47:49 -07:00
a45125421e Add collabora online and move nextcloud domain 2025-07-16 20:46:51 -07:00
f4e40955c8 Use upstreamed pcie coral and vaapi frigate configuration
All checks were successful
Check Flake / check-flake (push) Successful in 12m12s
2025-07-13 18:04:36 -07:00
af9e462b27 Allow substituters to be offline
Some checks failed
Check Flake / check-flake (push) Has been cancelled
2025-07-13 17:54:32 -07:00
2faea9d380 Update nixpkgs and other flake inputs 2025-07-13 17:52:08 -07:00
8571922796 Add new helpful utilities 2025-07-12 11:42:40 -07:00
131d5e9313 Add rest command for home assistant 2025-07-12 10:50:37 -07:00
fe0ce3a245 Get recyclarr initially running 2025-07-12 10:48:13 -07:00
7b26cfb4eb update single input cmd 2025-07-12 10:27:09 -07:00
1c9fa418b3 Make s0 easier to unlock
All checks were successful
Check Flake / check-flake (push) Successful in 1m25s
2025-03-29 22:52:00 -07:00
8c4dc9cb74 Improve usage of roles. It should be much easier to read and use now. 2025-03-29 22:48:14 -07:00
1f9fbd87ac Use upstream pykms and Actual Budget. Move Actual to s0. Add automated backups for Actual.
All checks were successful
Check Flake / check-flake (push) Successful in 1m37s
2025-03-29 18:36:13 -07:00
23c8076e4d Pinning system nixpkgs is not needed anymore. nixpkgs already does this automatically for flakes.
All checks were successful
Check Flake / check-flake (push) Successful in 1m50s
2025-03-28 21:45:46 -07:00
75ae399b5a Update nixpkgs. Move to new dashy service 2025-03-28 21:05:37 -07:00
87ddad27a4 Add Home Manager 2025-03-28 20:27:14 -07:00
8dd2a00123 Tauri development extensions 2025-03-28 20:24:33 -07:00
944a783ff2 Add nix LSPs for development 2025-03-28 20:23:07 -07:00
c2cb43fd2c Enable iperf3 server on ponyo 2025-03-28 20:22:14 -07:00
02b2fb6309 Disable gc on howl so nix backed projects don't lose their cache 2025-03-28 20:19:15 -07:00
b43660aaef Clean up very old unused config 2025-03-28 20:17:54 -07:00
567d755850 If machine role is personal set de.enable = true; automatically 2025-03-28 20:16:26 -07:00
adc9b9f2b7 Add sandman.s0.neet.dev 2025-03-28 19:39:59 -07:00
9181e3bfa3 Update librechat to v0.7.7 2025-03-28 19:38:41 -07:00
9845270512 Fix gparted 2025-03-28 19:35:35 -07:00
b3b3044690 Downgrade dailybot to python 3.11
All checks were successful
Check Flake / check-flake (push) Successful in 1m22s
2025-02-18 22:43:47 -08:00
fb1970c316 Upgrade librechat
All checks were successful
Check Flake / check-flake (push) Successful in 6m43s
2025-02-17 12:12:46 -08:00
34f1edf3b3 Fix s0 setting the incorrect default route by using a static configuration 2025-02-17 12:11:52 -08:00
823f0a6ef2 Disable frigate detect for now. It is using excessive CPU 2025-02-17 12:10:59 -08:00
00d2ccc684 Fix sound in some games running in wine 2025-02-17 12:09:51 -08:00
b2acaff783 Fix pykms by downgrading to python 3.11 2025-02-17 12:09:20 -08:00
c51f4ad65b Unlock zoidberg using TPM2
All checks were successful
Check Flake / check-flake (push) Successful in 1m6s
2024-11-21 21:31:19 -08:00
eb6a50664c Upgrade NixOS. Use upstream libedgetpu, frigate, and gasket kernel module. Fix services broken by upgrade.
All checks were successful
Check Flake / check-flake (push) Successful in 17m43s
2024-11-19 21:28:56 -08:00
89ce0f7fc0 Change Howl's NVMe 2024-11-19 21:08:19 -08:00
8ff552818b Rollover digital ocean auth token
All checks were successful
Check Flake / check-flake (push) Successful in 1m13s
2024-10-27 16:41:02 -07:00
020689d987 Fix zigbee2mqtt auth 2024-10-27 16:40:47 -07:00
9109e356bd Backup vikunja
All checks were successful
Check Flake / check-flake (push) Successful in 2m6s
2024-10-27 16:26:32 -07:00
c7d9e84f73 Lock down access to mqtt
All checks were successful
Check Flake / check-flake (push) Successful in 1m6s
2024-10-27 16:15:23 -07:00
5b666a0565 Add nextcloud apps
Some checks failed
Check Flake / check-flake (push) Has been cancelled
2024-10-11 21:58:54 -07:00
6bc11767ca Update Actual Budget
All checks were successful
Check Flake / check-flake (push) Successful in 2m46s
2024-10-11 21:20:46 -07:00
bdd2d9bef9 Update nextcloud 2024-10-11 21:20:18 -07:00
5acc8b3fca Block email for ellen@runyan.org
All checks were successful
Check Flake / check-flake (push) Successful in 1m5s
2024-10-10 20:04:50 -07:00
1e25d8bb71 Add vikunja
Some checks failed
Check Flake / check-flake (push) Has been cancelled
2024-10-10 20:02:43 -07:00
ac1cf1c531 Open up mqtt for valetudo 2024-10-10 20:02:09 -07:00
02357198bc Change timezone 2024-10-10 20:01:41 -07:00
89b49aafc0 flake.lock: Update
All checks were successful
Check Flake / check-flake (push) Successful in 1h32m23s
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/c2fc0762bbe8feb06a2e59a364fa81b3a57671c9' (2024-05-24)
  → 'github:ryantm/agenix/f6291c5935fdc4e0bef208cfc0dcab7e3f7a1c41' (2024-08-10)
• Updated input 'deploy-rs':
    'github:serokell/deploy-rs/3867348fa92bc892eba5d9ddb2d7a97b9e127a8a' (2024-06-12)
  → 'github:serokell/deploy-rs/aa07eb05537d4cd025e2310397a6adcedfe72c76' (2024-09-27)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/b1d9ab70662946ef0850d488da1c9019f3a9752a' (2024-03-11)
  → 'github:numtide/flake-utils/c1dfcf08411b08f6b8615f7d8971a2bfa81d5e8a' (2024-09-17)
• Updated input 'nix-index-database':
    'github:Mic92/nix-index-database/ff80cb4a11bb87f3ce8459be6f16a25ac86eb2ac' (2024-05-27)
  → 'github:Mic92/nix-index-database/5fce10c871bab6d7d5ac9e5e7efbb3a2783f5259' (2024-10-07)
• Updated input 'nixos-hardware':
    'github:NixOS/nixos-hardware/7b49d3967613d9aacac5b340ef158d493906ba79' (2024-06-01)
  → 'github:NixOS/nixos-hardware/b7ca02c7565fbf6d27ff20dd6dbd49c5b82eef28' (2024-10-04)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/805a384895c696f802a9bf5bf4720f37385df547' (2024-05-31)
  → 'github:NixOS/nixpkgs/ecbc1ca8ffd6aea8372ad16be9ebbb39889e55b6' (2024-10-06)
• Updated input 'simple-nixos-mailserver':
    'gitlab:simple-nixos-mailserver/nixos-mailserver/29916981e7b3b5782dc5085ad18490113f8ff63b' (2024-06-11)
  → 'gitlab:simple-nixos-mailserver/nixos-mailserver/af7d3bf5daeba3fc28089b015c0dd43f06b176f2' (2024-08-05)
• Removed input 'simple-nixos-mailserver/utils'
2024-10-06 20:28:24 -06:00
e56271b2c3 Add reverse proxy for valetudo
All checks were successful
Check Flake / check-flake (push) Successful in 1m6s
2024-10-06 19:16:05 -06:00
f9ef5e4b89 Clean up 2024-10-06 17:15:25 -06:00
e516bd87b5 Fix VLANs 2024-10-06 17:11:58 -06:00
7c9c657bd0 Fix audio stuttering in wine/proton
See: https://old.reddit.com/r/linux_gaming/comments/11yp7ig/pipewire_audio_stuttering_when_playing_games_or/
2024-10-06 17:07:53 -06:00
dff7d65456 vscodium WGSL support 2024-10-06 17:06:28 -06:00
d269d2e5a0 Enable wayland support in chromium based apps 2024-07-17 21:42:43 -06:00
2527b614e9 vscodium rust dev support 2024-07-17 21:15:33 -06:00
528a53a606 Fix chromium acceleration and wayland support 2024-07-17 21:15:02 -06:00
66bfc62566 Refactor frigate config to add a bunch of features
All checks were successful
Check Flake / check-flake (push) Successful in 2h20m26s
- Enable vaapi GPU video encode/decode support
- Use go2rtc. This allows for watching high resolution camera feeds
- Split nix config into pieces that are easier to understand
- Add utilities for easily adding new cameras in the future
- misc changes
2024-06-30 12:49:26 -06:00
91874b9d53 Move frigate into its own config file 2024-06-30 07:42:23 -06:00
50fc0a53d2 Enable more hass integrations 2024-06-29 10:13:46 -06:00
0b3322afda First VLAN camera in frigate 2024-06-29 10:13:03 -06:00
b32f6fa315 Enable memtest86 2024-06-29 10:12:11 -06:00
fe41ffc788 Allow s0 to access VLANs 2024-06-29 10:11:34 -06:00
eac443f280 Fix home assistant
All checks were successful
Check Flake / check-flake (push) Successful in 1m7s
2024-06-21 23:26:30 -06:00
d557820d6c Lockdown intranet services behind tailscale 2024-06-21 21:04:49 -06:00
4d658e10d3 Make LibreChat's auth sessions last longer 2024-06-21 19:54:47 -06:00
9ac9613d67 Add gc cmd to makefile 2024-06-16 20:37:21 -06:00
e657ebb134 Clean up flake inputs 2024-06-16 12:47:29 -06:00
d1b07ec06b Add llsblk helper cmd alias 2024-06-16 12:10:39 -06:00
89621945f8 Fix zoidberg 2024-06-16 12:09:58 -06:00
e69fd5bf8f Use Firefox
All checks were successful
Check Flake / check-flake (push) Successful in 3m2s
2024-06-09 22:43:34 -06:00
c856b762e7 Goodbye Ray
All checks were successful
Check Flake / check-flake (push) Successful in 4m30s
2024-06-08 16:39:00 -06:00
b7f82f2d44 Consolidate common PC config
All checks were successful
Check Flake / check-flake (push) Successful in 1m14s
2024-06-03 21:07:53 -06:00
588e94dcf4 Update to NixOS 24.05
All checks were successful
Check Flake / check-flake (push) Successful in 1m11s
2024-06-02 21:12:07 -06:00
fd1ead0b62 Add nixos-hardware config for Howl 2024-06-01 19:57:24 -06:00
37bd7254b9 Add Howl
All checks were successful
Check Flake / check-flake (push) Successful in 1m54s
2024-05-31 23:29:39 -06:00
74e41de9d6 Enable unify v8 service
All checks were successful
Check Flake / check-flake (push) Successful in 56s
2024-05-26 17:24:46 -06:00
0bf0b8b88b Enable ollama service 2024-05-26 17:24:07 -06:00
702129d778 Enable CUDA support 2024-05-26 17:23:38 -06:00
88c67dde84 Open C&C ports 2024-05-26 17:21:58 -06:00
8e3a0761e8 Clean up 2024-05-26 17:21:34 -06:00
a785890990 Fix esphome so that it can build again 2024-05-26 17:20:05 -06:00
b482a8c106 Restore frigate functionality by reverting to an older tensorflow version for libedgetpu 2024-05-26 17:16:59 -06:00
efe50be604 Update nixpkgs
All checks were successful
Check Flake / check-flake (push) Successful in 53s
2024-03-17 09:39:54 -06:00
99904d0066 Update 'Actual' and 'Actual Server' to 'v24.3.0'
All checks were successful
Check Flake / check-flake (push) Successful in 14m33s
2024-03-03 14:57:23 -07:00
55e44bc3d0 Add 'tree' to system pkgs 2024-03-03 14:53:14 -07:00
da7ffa839b Blackhole spammed email address
All checks were successful
Check Flake / check-flake (push) Successful in 5m18s
2024-02-20 18:13:19 -07:00
01af25a57e Add Actual server
All checks were successful
Check Flake / check-flake (push) Successful in 6m3s
2024-02-19 19:44:07 -07:00
bfc1bb2da9 Use a makefile for utility snippets
All checks were successful
Check Flake / check-flake (push) Successful in 12m54s
2024-02-18 17:30:52 -07:00
0e59fa3518 Add easy boot configuration profile limit 2024-02-18 17:30:12 -07:00
7e812001f0 Add librechat
All checks were successful
Check Flake / check-flake (push) Successful in 6m12s
2024-02-09 19:57:09 -07:00
14c19b80ef Stop auto upgrade
All checks were successful
Check Flake / check-flake (push) Successful in 1m2s
2024-02-05 11:32:16 -07:00
e8dd0cb5ff Increase gitea session length
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
2024-02-04 15:48:06 -07:00
dc9f5e969a Update nextcloud
All checks were successful
Check Flake / check-flake (push) Successful in 2m48s
2024-02-04 14:34:42 -07:00
03150667b6 Enable gitea index and lfs. Fix warning.
All checks were successful
Check Flake / check-flake (push) Successful in 4m49s
2024-02-04 13:59:39 -07:00
1dfd7bc8a2 Increase seed ratio
All checks were successful
Check Flake / check-flake (push) Successful in 2m58s
2024-02-03 14:15:49 -07:00
fa649b1e2a Add missing locale settings so perl stops complaining
All checks were successful
Check Flake / check-flake (push) Successful in 12m4s
2024-02-03 14:11:26 -07:00
e34752c791 Fix transmission running in a container
https://github.com/NixOS/nixpkgs/issues/258793
2024-02-03 14:10:35 -07:00
75031567bd Two radio endpoints
All checks were successful
Check Flake / check-flake (push) Successful in 50s
2024-02-02 20:23:40 -07:00
800a95d431 Update to nixos 23.11
All checks were successful
Check Flake / check-flake (push) Successful in 1m24s
2024-02-01 21:42:33 -07:00
932b05a42e Basic oauth proxy for frigate
All checks were successful
Check Flake / check-flake (push) Successful in 1m13s
2024-01-30 22:12:18 -07:00
b5cc4d4609 Emulate ARM systems for building 2024-01-30 21:59:09 -07:00
ba3d15d82a PoC: Frigate + PCIe Coral + ESPCam, Home Assistant, ESPHome, MQTT, zigbee2mqtt
All checks were successful
Check Flake / check-flake (push) Successful in 3m24s
2023-12-17 21:29:45 -07:00
e80fb7b3db PoC: Frigate + PCIe Coral + ESPCam, Home Assistant, ESPHome, MQTT, zigbee2mqtt
Some checks failed
Check Flake / check-flake (push) Failing after 1m1s
2023-12-17 14:29:45 -07:00
84e1f6e573 wireless role was removed 2023-12-02 10:26:44 -07:00
c4847bd39b Use dashy for services homepage
All checks were successful
Check Flake / check-flake (push) Successful in 5m25s
2023-11-08 21:35:10 -07:00
c0c1ec5c67 Enable autologin for zoidberg 2023-11-08 21:34:13 -07:00
6739115cfb Fix sddm barrier service for current nixpkgs version 2023-11-08 21:33:38 -07:00
4606cc32ba Enable adb debugging 2023-11-08 21:32:26 -07:00
2d27bf7505 Allow other users to access public samba mount 2023-11-08 21:32:00 -07:00
d07af6d101 Should use tailscale eventually for remote luks unlocking 2023-11-08 21:31:14 -07:00
4890dc20e0 Add basic nix utilities
All checks were successful
Check Flake / check-flake (push) Successful in 2m21s
2023-10-20 20:13:08 -06:00
8b01a9b240 Use podman instead of docker 2023-10-20 20:12:14 -06:00
8dfba8646c Fix CI builder
All checks were successful
Check Flake / check-flake (push) Successful in 1m5s
2023-10-20 19:52:33 -06:00
63c0f52955 s0: use eth1
Some checks failed
Check Flake / check-flake (push) Failing after 9s
2023-10-16 20:21:00 -06:00
5413a8e7db Remove mounts that fail. These never worked 2023-10-16 20:20:32 -06:00
330c801e43 Fix issue where wg vpn starts slightly too early for internet access 2023-10-16 20:19:34 -06:00
8ba08ce982 Zoidberg move /boot device
Some checks failed
Check Flake / check-flake (push) Failing after 6m57s
2023-10-15 19:23:24 -06:00
2b50aeba93 Zoidberg auto login 2023-10-15 19:22:51 -06:00
c1aef574b1 Try to build only x86_64 for now
Some checks failed
Check Flake / check-flake (push) Failing after 8m22s
2023-10-15 19:09:40 -06:00
52ed25f1b9 Push derivations built during nix flake check to binary cache
Some checks failed
Check Flake / check-flake (push) Failing after 1m17s
2023-10-15 18:00:38 -06:00
0446d18712 Use official nixos module for gitea actions runner 2023-10-15 17:58:03 -06:00
d2bbbb827e Disable router 2023-10-15 17:55:44 -06:00
6fba594625 Target nixpkgs 23.05 2023-10-15 17:55:04 -06:00
fa6e092c06 Update zoidberg keyfile
Some checks failed
Check Flake / check-flake (push) Failing after 6m52s
2023-09-04 17:18:42 -06:00
3a6dae2b82 Enable barrier for use system wide
Some checks failed
Check Flake / check-flake (push) Failing after 7m29s
2023-09-03 21:59:31 -06:00
62bb740634 Enable ROCm 2023-09-03 21:58:52 -06:00
577e0d21bc Xbox wireless controller support 2023-09-03 21:58:08 -06:00
b481a518f5 Samba mount 2023-09-03 21:57:24 -06:00
f93b2c6908 Steam login option 2023-09-03 21:56:37 -06:00
890b24200e Retroarch
Some checks failed
Check Flake / check-flake (push) Failing after 8m51s
2023-08-13 18:03:45 -06:00
d3259457de Use latest kernel so amdgpu doesn't crash 2023-08-12 23:17:26 -06:00
8eb42ee68b Add common user for kodi 2023-08-12 23:16:52 -06:00
9d4c48badb Use Barrier 2023-08-12 23:16:26 -06:00
9cf2b82e92 Update nixpkgs and cleanup
Some checks failed
Check Flake / check-flake (push) Failing after 10m41s
2023-08-12 19:40:22 -06:00
61ca918cca flake.lock: Update
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/2994d002dcff5353ca1ac48ec584c7f6589fe447' (2023-04-21)
  → 'github:ryantm/agenix/d8c973fd228949736dedf61b7f8cc1ece3236792' (2023-07-24)
• Added input 'agenix/home-manager':
    'github:nix-community/home-manager/32d3e39c491e2f91152c84f8ad8b003420eab0a1' (2023-04-22)
• Added input 'agenix/home-manager/nixpkgs':
    follows 'agenix/nixpkgs'
• Updated input 'deploy-rs':
    'github:serokell/deploy-rs/c2ea4e642dc50fd44b537e9860ec95867af30d39' (2023-04-21)
  → 'github:serokell/deploy-rs/724463b5a94daa810abfc64a4f87faef4e00f984' (2023-06-14)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/cfacdce06f30d2b68473a46042957675eebb3401' (2023-04-11)
  → 'github:numtide/flake-utils/919d646de7be200f3bf08cb76ae1f09402b6f9b4' (2023-07-11)
• Updated input 'nix-index-database':
    'github:Mic92/nix-index-database/e3e320b19c192f40a5b98e8776e3870df62dee8a' (2023-04-25)
  → 'github:Mic92/nix-index-database/6c626d54d0414d34c771c0f6f9d771bc8aaaa3c4' (2023-08-06)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/297187b30a19f147ef260abb5abd93b0706af238' (2023-04-30)
  → 'github:NixOS/nixpkgs/a4d0fe7270cc03eeb1aba4e8b343fe47bfd7c4d5' (2023-08-13)
2023-08-12 19:00:16 -06:00
ef61792da4 Add maestral
Some checks failed
Check Flake / check-flake (push) Failing after 30s
2023-08-12 18:27:24 -06:00
3dc97f4960 Enable kde scaling 2023-08-12 18:27:01 -06:00
f4a26a8d15 Enable zfs scrubbing 2023-08-12 18:26:13 -06:00
37782a26d5 Add pavucontrol-qt 2023-08-12 18:25:46 -06:00
1434bd2df1 Share userspace packages
Some checks failed
Check Flake / check-flake (push) Failing after 19s
2023-08-11 20:48:27 -06:00
e49ea3a7c4 Share userspace packages
Some checks failed
Check Flake / check-flake (push) Failing after 8s
2023-08-11 20:45:34 -06:00
9a6cde1e89 Get zoidberg ready
Some checks failed
Check Flake / check-flake (push) Failing after 1m34s
2023-08-11 19:51:42 -06:00
35972b6d68 Xbox controller support
Some checks failed
Check Flake / check-flake (push) Failing after 18s
2023-08-10 20:39:41 -06:00
b8021c1756 Samba mount for zoidberg
Some checks failed
Check Flake / check-flake (push) Failing after 18s
2023-08-10 19:45:11 -06:00
4b21489141 Increase boot timeout for zoidberg
Some checks failed
Check Flake / check-flake (push) Failing after 19s
2023-08-10 19:44:44 -06:00
a256ab7728 Rekey secrets 2023-08-10 19:44:20 -06:00
da7ebe7baa Add Zoidberg
Some checks failed
Check Flake / check-flake (push) Failing after 2m43s
2023-08-10 19:40:01 -06:00
1922bbbcfd Local arduino development 2023-08-10 18:05:45 -06:00
b17be86927 Cleanup 2023-08-10 18:04:46 -06:00
ec73a63e09 Define vscodium extensions
All checks were successful
Check Flake / check-flake (push) Successful in 30m4s
2023-05-10 12:05:46 -06:00
af26a004e5 Forwards 2023-05-10 12:04:57 -06:00
d83782f315 Set up Nix build worker
All checks were successful
Check Flake / check-flake (push) Successful in 19m33s
2023-04-30 12:49:15 -06:00
162b544249 Set binary cache priority 2023-04-30 09:13:49 -06:00
0c58e62ed4 flake.lock: Update
All checks were successful
Check Flake / check-flake (push) Successful in 1m27s
Flake lock file updates:

• Updated input 'nix-index-database':
    'github:Mic92/nix-index-database/68ec961c51f48768f72d2bbdb396ce65a316677e' (2023-04-15)
  → 'github:Mic92/nix-index-database/e3e320b19c192f40a5b98e8776e3870df62dee8a' (2023-04-25)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/8dafae7c03d6aa8c2ae0a0612fbcb47e994e3fb8' (2023-04-22)
  → 'github:NixOS/nixpkgs/297187b30a19f147ef260abb5abd93b0706af238' (2023-04-30)
2023-04-29 20:34:11 -06:00
96de109d62 Basic binary cache
All checks were successful
Check Flake / check-flake (push) Successful in 7m55s
2023-04-29 20:33:10 -06:00
0efcf8f3fc Flake check gitea action
All checks were successful
Check Flake / check-flake (push) Successful in 1m28s
2023-04-29 19:20:48 -06:00
2009180827 Add mail user 2023-04-29 18:24:20 -06:00
306ce8bc3f Move s0 to systemd-boot 2023-04-25 23:41:08 -06:00
b5dd983ba3 Automatically set machine hostname 2023-04-24 20:52:17 -06:00
832894edfc Gitea runner 2023-04-23 10:29:18 -06:00
feb6270952 Update options for newer nixpkgs 2023-04-23 10:28:55 -06:00
b4dd2d4a92 update TODOs 2023-04-23 10:16:54 -06:00
38c2e5aece Fix properties.nix path loading 2023-04-21 23:24:05 -06:00
0ef689b750 flake.lock: Update
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/b7ffcfe77f817d9ee992640ba1f270718d197f28' (2023-01-31)
  → 'github:ryantm/agenix/2994d002dcff5353ca1ac48ec584c7f6589fe447' (2023-04-21)
• Updated input 'deploy-rs':
    'github:serokell/deploy-rs/8c9ea9605eed20528bf60fae35a2b613b901fd77' (2023-01-19)
  → 'github:serokell/deploy-rs/c2ea4e642dc50fd44b537e9860ec95867af30d39' (2023-04-21)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/5aed5285a952e0b949eb3ba02c12fa4fcfef535f' (2022-11-02)
  → 'github:numtide/flake-utils/cfacdce06f30d2b68473a46042957675eebb3401' (2023-04-11)
• Added input 'flake-utils/systems':
    'github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e' (2023-04-09)
• Updated input 'nix-index-database':
    'github:Mic92/nix-index-database/4306fa7c12e098360439faac1a2e6b8e509ec97c' (2023-02-26)
  → 'github:Mic92/nix-index-database/68ec961c51f48768f72d2bbdb396ce65a316677e' (2023-04-15)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/78c4d33c16092e535bc4ba1284ba49e3e138483a' (2023-03-03)
  → 'github:NixOS/nixpkgs/8dafae7c03d6aa8c2ae0a0612fbcb47e994e3fb8' (2023-04-22)
2023-04-21 21:22:00 -06:00
e72e19b7e8 Fix auto upgrade 2023-04-21 18:58:54 -06:00
03603119e5 Fix invalid import issue. 2023-04-21 18:57:06 -06:00
71baa09bd2 Refactor imports and secrets. Add per system properties and role based secret access.
Highlights
- No need to update flake for every machine anymore, just add a properties.nix file.
- Roles are automatically generated from all machine configurations.
- Roles and their secrets automatically are grouped and show up in agenix secrets.nix
- Machines and their service configs may now query the properties of all machines.
- Machine configuration and secrets are now completely isolated into each machine's directory.
- Safety checks ensure luks unlocking secrets and hosts are never mixed with primary ones.
- SSH pubkeys are no longer centrally stored; each lives with the machine where its private key lies, for easier cleanup.
2023-04-21 12:58:11 -06:00
a02775a234 Update install steps 2023-04-19 21:17:45 -06:00
5800359214 Update install steps 2023-04-19 21:17:03 -06:00
0bd42f1850 Update install steps 2023-04-19 21:15:58 -06:00
40f0e5d2ac Add Phil 2023-04-19 18:12:42 -06:00
f90b9f85fd try out appvm 2023-04-18 23:15:21 -06:00
5b084fffcc moonlander 2023-04-18 23:15:03 -06:00
4dd6401f8c update TODOs 2023-04-18 23:14:49 -06:00
260bbc1ffd Use doas instead of sudo 2023-04-10 22:03:57 -06:00
c8132a67d0 Use lf as terminal file explorer 2023-04-10 22:03:29 -06:00
3412d5caf9 Use hashed passwordfile just to be safe 2023-04-09 23:00:10 -06:00
1065cc4b59 Enable gitea email notifications 2023-04-09 22:05:23 -06:00
154b37879b Cross off finished TODOs 2023-04-09 22:04:51 -06:00
a34238b3a9 Easily run restic commands on a backup group 2023-04-09 13:06:15 -06:00
42e2ebd294 Allow marking folders as omitted from backup 2023-04-09 12:35:20 -06:00
378cf47683 restic backups 2023-04-08 21:25:55 -06:00
f68a4f4431 nixpkgs-fmt everything 2023-04-04 23:30:28 -06:00
3c683e7b9e NixOS router is now in active use :) 2023-04-04 20:53:38 -06:00
68bd70b525 Basic router working using the wip hostapd module from upstream 2023-04-04 12:57:16 -06:00
2189ab9a1b Improve cifs mounts. Newer protocol version, helpful commands, better network connection resiliency. 2023-03-31 11:43:12 -06:00
acbbb8a37a encrypted samba vault with gocryptfs 2023-03-25 15:49:07 -06:00
d1e6d21d66 iperf server 2023-03-25 15:48:39 -06:00
1a98e039fe Cleanup fio tests 2023-03-25 15:48:24 -06:00
3459ce5058 Add joplin 2023-03-18 22:04:31 -06:00
c48b1995f8 Remove zerotier 2023-03-18 20:41:09 -06:00
53c0e7ba1f Add Webmail 2023-03-14 23:28:07 -06:00
820cd392f1 Choose random PIA server in a specified region instead of hardcoded. And more TODOs addressed. 2023-03-12 22:55:46 -06:00
759fe04185 with lib; 2023-03-12 21:50:46 -06:00
db441fcf98 Add ability to refuse PIA ports 2023-03-12 21:46:36 -06:00
83e9280bb4 Use the NixOS firewall instead to block unwanted PIA VPN traffic 2023-03-12 20:49:39 -06:00
478235fe32 Enable firewall for PIA VPN wireguard interface 2023-03-12 20:29:20 -06:00
440401a391 Add ponyo to deploy-rs config 2023-03-12 19:50:55 -06:00
42c0dcae2d Port forwarding for transmission 2023-03-12 19:50:29 -06:00
7159868b57 update todo's 2023-03-12 19:46:51 -06:00
ab2cc0cc0a Cleanup services 2023-03-12 17:51:10 -06:00
aaa1800d0c Cleanup mail domains 2023-03-12 13:29:12 -06:00
a795c65c32 Cleanup mail domains 2023-03-12 13:25:34 -06:00
5ed02e924d Remove liza 2023-03-12 00:15:06 -07:00
1d620372b8 Remove leftovers of removed compute nodes 2023-03-12 00:14:49 -07:00
9684a975e2 Migrate nextcloud to ponyo 2023-03-12 00:10:14 -07:00
c3c3a9e77f disable searx for now 2023-03-12 00:09:40 -07:00
ecb6d1ef63 Migrate mailserver to ponyo 2023-03-11 23:40:36 -07:00
a5f7bb8a22 Fix vpn systemd service restart issues 2023-03-09 13:07:20 -07:00
cea9b9452b Initial prototype for Wireguard based PIA VPN - not quite 'ready' yet 2023-03-08 23:49:02 -07:00
8fb45a7ee5 Turn off howdy 2023-03-08 23:47:11 -07:00
b53f03bb7d Fix typo 2023-03-08 23:45:49 -07:00
dee0243268 Peer to peer connection keepalive task 2023-03-07 22:55:37 -07:00
8b6bc354bd Peer to peer connection keepalive task 2023-03-07 22:54:26 -07:00
aff5611cdb Update renamed nixos options 2023-03-07 22:52:31 -07:00
c5e7d8b2fe Allow easy patching of nixpkgs 2023-03-03 23:24:33 -07:00
90a3549237 use comma and pregenerated nix-index 2023-03-03 00:18:20 -07:00
63f2a82ad1 ignore lid close for NAS 2023-03-03 00:16:57 -07:00
0cc39bfbe0 deploy-rs initial PoC 2023-03-03 00:16:23 -07:00
ec54b27d67 fix router serial 2023-03-03 00:14:22 -07:00
bba4f27465 add picocom for serial 2023-03-03 00:12:35 -07:00
b5c77611d7 remove unused compute nodes 2023-03-03 00:12:16 -07:00
987919417d allow root login over ssh using trusted key 2023-02-11 23:07:48 -07:00
d8dbb12959 grow disk for ponyo 2023-02-11 19:01:42 -07:00
3e0cde40b8 Cleanup remote LUKS unlock 2023-02-11 18:40:08 -07:00
2c8576a295 Hardware accelerated encoding for jellyfin 2023-02-11 16:10:19 -07:00
8aecc04d01 config cleanup 2023-02-11 16:10:10 -07:00
9bcf7cc50d VPN using its own DNS resolver is unstable 2023-02-11 16:09:02 -07:00
cb2ac1c1ba Use x86 machine for NAS 2023-02-11 16:08:48 -07:00
7f1e304012 Remove stale secrets 2023-02-11 15:19:35 -07:00
9e3dae4b16 Rekey secrets 2023-02-11 15:07:08 -07:00
c649b04bdd Update ssh keys and allow easy ssh LUKS unlocking 2023-02-11 15:05:20 -07:00
6fce2e1116 Allow unlocking over tor 2023-02-11 13:38:54 -07:00
3e192b3321 Hardware config should be in hardware config 2023-02-11 13:35:46 -07:00
bc863de165 Hardware config should be in hardware config 2023-02-11 09:48:25 -07:00
cfa5c9428e Remove reg 2023-02-11 09:46:05 -07:00
abddc5a680 Razer keyboard 2023-02-11 00:32:36 -07:00
577dc4faaa Add initial configuration for APU2E4 router 2023-02-10 20:51:10 -07:00
a8b0385c6d more ephemeral options 2023-02-08 22:27:54 -07:00
fc85627bd6 use unstable for ephemeral os config 2023-02-08 22:26:04 -07:00
f9cadba3eb improve ephemeral os config 2023-02-08 22:25:09 -07:00
c192c2d52f enable spotify 2023-02-08 18:48:08 -07:00
04c7a9ea51 Update tz 2023-02-08 18:47:58 -07:00
189 changed files with 5991 additions and 3666 deletions

@@ -0,0 +1,160 @@
---
name: create-workspace
description: >
  Creates a new sandboxed workspace (isolated dev environment) by adding
  NixOS configuration for a VM, container, or Incus instance. Use when
  the user wants to create, set up, or add a new sandboxed workspace.
---
# Create Sandboxed Workspace
Creates an isolated development environment backed by a VM (microvm.nix), container (systemd-nspawn), or Incus instance. This produces:
1. A workspace config file at `machines/<machine>/workspaces/<name>.nix`
2. A registration entry in `machines/<machine>/default.nix`
## Step 1: Parse Arguments
Extract the workspace name and backend type from `$ARGUMENTS`. If either is missing, ask the user.
- **Name**: lowercase alphanumeric with hyphens (e.g., `my-project`)
- **Type**: one of `vm`, `container`, or `incus`
## Step 2: Detect Machine
Run `hostname` to get the current machine name. Verify that `machines/<hostname>/default.nix` exists.
If the machine directory doesn't exist, stop and tell the user this machine isn't managed by this flake.
## Step 3: Allocate IP Address
Read `machines/<hostname>/default.nix` to find existing `sandboxed-workspace.workspaces` entries and their IPs.
All IPs are in the `192.168.83.0/24` subnet. Use these ranges by convention:
| Type | IP Range |
|------|----------|
| vm | 192.168.83.10 - .49 |
| container | 192.168.83.50 - .89 |
| incus | 192.168.83.90 - .129 |
Pick the next available IP in the appropriate range. If no workspaces exist yet for that type, use the first IP in the range.
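The allocation rule is mechanical enough to sketch. A hypothetical Nix helper illustrating it (the name `nextFreeIp` and its shape are illustrative, not part of this repo):
```nix
# Pick the first address in [from, to] whose "192.168.83.<i>" form is unused.
nextFreeIp = from: to: used:
  let
    candidates = lib.filter
      (i: !(lib.elem "192.168.83.${toString i}" used))
      (lib.range from to);
  in
  "192.168.83.${toString (lib.head candidates)}";

# e.g. nextFreeIp 50 89 [ "192.168.83.50" ] == "192.168.83.51"
```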
## Step 4: Create Workspace Config File
Create `machines/<hostname>/workspaces/<name>.nix`. Use this template:
```nix
{ config, lib, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [
    # Add packages here
  ];
}
```
Ask the user if they want any packages pre-installed.
Create the `workspaces/` directory if it doesn't exist.
**Important:** After creating the file, run `git add` on it immediately. Nix flakes only see files tracked by git, so new files must be staged before `nix build` will work.
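For example:
```
git add machines/<hostname>/workspaces/<name>.nix
```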
## Step 5: Register Workspace
Edit `machines/<hostname>/default.nix` to add the workspace entry inside the `sandboxed-workspace` block.
The entry should look like:
```nix
workspaces.<name> = {
  type = "<type>";
  config = ./workspaces/<name>.nix;
  ip = "<allocated-ip>";
};
```
**If `sandboxed-workspace` block doesn't exist yet**, add the full block:
```nix
sandboxed-workspace = {
  enable = true;
  workspaces.<name> = {
    type = "<type>";
    config = ./workspaces/<name>.nix;
    ip = "<allocated-ip>";
  };
};
```
The machine also needs `networking.sandbox.upstreamInterface` set. Check if it exists; if not, ask the user for their primary network interface name (they can find it with `ip route show default`).
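If it is missing, the addition is a single line (the interface name here is a placeholder):
```nix
networking.sandbox.upstreamInterface = "eno1"; # placeholder; find yours with `ip route show default`
```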
Do **not** set `hostKey` — it gets auto-generated on first boot and can be added later.
## Step 6: Verify Build
Run a build to check for configuration errors:
```
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-link
```
If the build fails, fix the configuration and retry.
## Step 7: Deploy
Tell the user to deploy by running:
```
doas nixos-rebuild switch --flake .
```
**Never run this command yourself** — it requires privileges.
## Step 8: Post-Deploy Info
Tell the user to deploy and then start the workspace so the host key gets generated. Provide these instructions:
**Deploy:**
```
doas nixos-rebuild switch --flake .
```
**Starting the workspace:**
```
doas systemctl start <service>
```
Where `<service>` is:
- VM: `microvm@<name>`
- Container: `container@<name>`
- Incus: `incus-workspace-<name>`
Or use the auto-generated shell alias: `workspace_<name>_start`
**Connecting:**
```
ssh googlebot@workspace-<name>
```
Or use the alias: `workspace_<name>`
**Never run deploy or start commands yourself** — they require privileges.
## Step 9: Add Host Key
After the user has deployed and started the workspace, add the SSH host key to the workspace config. Do NOT skip this step — always wait for the user to confirm they've started the workspace, then proceed.
1. Read the host key from `~/sandboxed/<name>/ssh-host-keys/ssh_host_ed25519_key.pub`
2. Add `hostKey = "<contents>";` to the workspace entry in `machines/<hostname>/default.nix`
3. Run the build again to verify
4. Tell the user to redeploy with `doas nixos-rebuild switch --flake .`
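Once the host key is added, the finished entry might look like this (all values illustrative, key truncated):
```nix
workspaces.my-project = {
  type = "container";
  config = ./workspaces/my-project.nix;
  ip = "192.168.83.50";
  hostKey = "ssh-ed25519 AAAA... root@workspace-my-project";
};
```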
## Backend Reference
| | VM | Container | Incus |
|---|---|---|---|
| Isolation | Full kernel (cloud-hypervisor) | Shared kernel (systemd-nspawn) | Unprivileged container |
| Overhead | Higher (separate kernel) | Lower (bind mounts) | Medium |
| Filesystem | virtiofs shares | Bind mounts | Incus-managed |
| Use case | Untrusted code, kernel-level isolation | Fast dev environments | Better security than nspawn |

@@ -0,0 +1,46 @@
name: Check Flake

on: [push]

env:
  DEBIAN_FRONTEND: noninteractive
  PATH: /run/current-system/sw/bin/

jobs:
  check-flake:
    runs-on: nixos
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Check Flake
        run: nix flake check --all-systems --print-build-logs --log-format raw --show-trace
      - name: Build all systems
        run: |
          nix eval .#nixosConfigurations --apply 'cs: builtins.attrNames cs' --json \
            | jq -r '.[]' \
            | xargs -I{} nix build ".#nixosConfigurations.{}.config.system.build.toplevel" --no-link --print-build-logs --log-format raw
      - name: Push to cache
        env:
          XDG_CONFIG_HOME: ${{ runner.temp }}/.config
        run: |
          set -euo pipefail
          attic login local "${{ vars.ATTIC_ENDPOINT }}" "${{ secrets.ATTIC_TOKEN }}"
          # Get all system toplevel store paths
          toplevels=$(nix eval .#nixosConfigurations --apply 'cs: map (n: "${cs.${n}.config.system.build.toplevel}") (builtins.attrNames cs)' --json | jq -r '.[]')
          echo "Found $(echo "$toplevels" | wc -l) system toplevels"
          # Expand to full closures, deduplicate, and filter out paths that are:
          # - already signed by cache.nixos.org (available upstream)
          # - smaller than 0.5MB (insignificant build artifacts)
          paths=$(echo "$toplevels" \
            | xargs nix path-info -r --json \
            | jq -r '[to_entries[] | select(
                (.value.signatures | all(startswith("cache.nixos.org") | not))
                and .value.narSize >= 524288
              ) | .key] | unique[]')
          echo "Pushing $(echo "$paths" | wc -l) unique paths to cache"
          echo "$paths" | xargs attic push local:nixos

CLAUDE.md (new file)

@@ -0,0 +1,81 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## What This Is
A NixOS flake managing multiple machines. All machines import `/common` for shared config, and each machine has its own directory under `/machines/<hostname>/` with a `default.nix` (machine-specific config), `hardware-configuration.nix`, and `properties.nix` (metadata: hostnames, arch, roles, SSH keys).
## Common Commands
```bash
# Build a machine config (check for errors without deploying)
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-link
# Deploy to local machine (user must run this themselves - requires privileges)
doas nixos-rebuild switch --flake .
# Deploy to a remote machine (boot-only, no activate)
deploy --remote-build --boot --debug-logs --skip-checks .#<hostname>
# Deploy to a remote machine (activate immediately)
deploy --remote-build --debug-logs --skip-checks .#<hostname>
# Update flake lockfile
make update-lockfile
# Update a single flake input
make update-input <input-name>
# Edit an agenix secret
make edit-secret <secret-filename>
# Rekey all secrets (after adding/removing machine host keys)
make rekey-secrets
```
## Architecture
### Machine Discovery (Auto-Registration)
Machines are **not** listed in `flake.nix`. Instead, `common/machine-info/default.nix` recursively scans `/machines/` for any `properties.nix` file and auto-registers that directory as a machine. To add a machine, create `machines/<name>/properties.nix` and `machines/<name>/default.nix`.
`properties.nix` returns a plain attrset (no NixOS module args) with: `hostNames`, `arch`, `systemRoles`, `hostKey`, and optionally `userKeys`, `deployKeys`, `remoteUnlock`.
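A minimal `properties.nix` might look like the following sketch (values illustrative, field names from the list above, key truncated):
```nix
{
  hostNames = [ "example" ];
  arch = "x86_64-linux";
  systemRoles = [ "personal" ];
  hostKey = "ssh-ed25519 AAAA... root@example";
}
```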
### Role System
Each machine declares `systemRoles` in its `properties.nix` (e.g., `["personal" "dns-challenge"]`). Roles drive conditional config:
- `config.thisMachine.hasRole.<role>` - boolean, used to conditionally enable features (e.g., `de.enable` for `personal` role)
- `config.machines.withRole.<role>` - list of hostnames with that role
- Roles also determine which machines can decrypt which agenix secrets (see `secrets/secrets.nix`)
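For example, a sketch of gating a feature on a role, using the `de.enable`-for-`personal` pattern mentioned above:
```nix
{ config, lib, ... }:
{
  config = lib.mkIf config.thisMachine.hasRole.personal {
    de.enable = true;
  };
}
```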
### Secrets (agenix)
Secrets in `/secrets/` are encrypted `.age` files. `secrets.nix` maps each secret to the SSH host keys (by role) that can decrypt it. After changing which machines have access, run `make rekey-secrets`.
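An entry follows the usual agenix shape, mapping a secret file to the public keys that may decrypt it; sketched here with a hypothetical `keysForRole` helper and an illustrative secret name, since this repo derives the key lists from roles:
```nix
let
  # hypothetical helper; stands in for the role-derived key lists
  keysForRole = role: [ "ssh-ed25519 AAAA... root@example" ];
in
{
  "pia-login.age".publicKeys = keysForRole "vpn";
}
```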
### Sandboxed Workspaces
`common/sandboxed-workspace/` provides isolated dev environments. Three backends: `vm` (microvm/cloud-hypervisor), `container` (systemd-nspawn), `incus`. Workspaces are defined in machine `default.nix` files and their per-workspace config goes in `machines/<hostname>/workspaces/<name>.nix`. The base config (`base.nix`) handles networking, SSH, user setup, and Claude Code pre-configuration.
IP allocation convention: VMs `.10-.49`, containers `.50-.89`, incus `.90-.129` in `192.168.83.0/24`.
### Backups
`common/backups.nix` defines a `backup.group` option. Machines declare backup groups with paths; restic handles daily backups to Backblaze B2 with automatic ZFS/btrfs snapshot support. Each group gets a `restic_<group>` CLI wrapper for manual operations.
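Declaring a group is roughly one attrset entry; a sketch assuming the option shape implied by `common/backups.nix`:
```nix
backup.group.nextcloud = {
  paths = [ "/var/lib/nextcloud" ];
};
```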
### Nixpkgs Patching
`flake.nix` applies patches from `/patches/` to nixpkgs before building (workaround for nix#3920).
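The usual shape of this workaround, sketched with an illustrative patch name (`applyPatches` is a real nixpkgs function; the wiring around it is assumed):
```nix
# Sketch: patch the nixpkgs source tree, then import the patched tree.
let
  system = "x86_64-linux";
  patchedSrc = (import nixpkgs { inherit system; }).applyPatches {
    name = "nixpkgs-patched";
    src = nixpkgs;
    patches = [ ./patches/librespot.patch ]; # illustrative patch name
  };
  pkgs = import patchedSrc { inherit system; };
in
pkgs
```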
### Key Conventions
- Uses `doas` instead of `sudo` everywhere
- Fish shell is the default user shell
- Home Manager is used for user-level config (`home/googlebot.nix`)
- `lib/default.nix` extends nixpkgs lib with custom utility functions (extends via `nixpkgs.lib.extend`)
- Overlays are in `/overlays/` and applied globally via `flake.nix`
- The Nix formatter for this project is `nixpkgs-fmt`
- Do not add "Co-Authored-By" lines to commit messages
- Always use `--no-link` when running `nix build`
- Don't use `nix build --dry-run` unless you only need evaluation — it skips the actual build
- Avoid `2>&1` on nix commands — it can cause error output to be missed

Makefile (new file)

@@ -0,0 +1,52 @@
# Lockfile utils
.PHONY: update-lockfile
update-lockfile:
	nix flake update --commit-lock-file

.PHONY: update-lockfile-without-commit
update-lockfile-without-commit:
	nix flake update

# Agenix utils
.PHONY: edit-secret
edit-secret:
	cd secrets && agenix -e $(filter-out $@,$(MAKECMDGOALS))

.PHONY: rekey-secrets
rekey-secrets:
	cd secrets && agenix -r

# NixOS utils
.PHONY: clean-old-nixos-profiles
clean-old-nixos-profiles:
	doas nix-collect-garbage -d

# Garbage Collect
.PHONY: gc
gc:
	nix store gc

# Update a flake input by name (ex: 'nixpkgs')
.PHONY: update-input
update-input:
	nix flake update $(filter-out $@,$(MAKECMDGOALS))

# Build Custom Install ISO
.PHONY: iso
iso:
	nix build .#packages.x86_64-linux.iso

# Build Custom kexec image
.PHONY: kexec-img
kexec-img:
	nix build .#packages.x86_64-linux.kexec

# Deploy a host by name (ex: 's0') but don't activate
.PHONY: deploy
deploy:
	deploy --remote-build --boot --debug-logs --skip-checks .#$(filter-out $@,$(MAKECMDGOALS))

# Deploy a host by name (ex: 's0')
.PHONY: deploy-activate
deploy-activate:
	deploy --remote-build --debug-logs --skip-checks .#$(filter-out $@,$(MAKECMDGOALS))

README.md
@@ -1,12 +1,32 @@
# My NixOS configurations
# NixOS Configuration
### Source Layout
- `/common` - common configuration imported into all `/machines`
- `/boot` - config related to bootloaders, cpu microcode, and unlocking LUKS root disks over tor
- `/network` - config for tailscale, zerotier, and NixOS containers with automatic vpn tunneling via PIA
- `/pc` - config that a graphical desktop computer should have. Use `de.enable = true;` to enable everything.
- `/server` - config that creates new nixos services or extends existing ones to meet my needs
- `/ssh.nix` - all ssh public host and user keys for all `/machines`
- `/machines` - all my NixOS machines along with their machine unique configuration for hardware and services
- `/kexec` - a special machine for generating minimal kexec images. Does not import `/common`
- `/secrets` - encrypted shared secrets unlocked through `/machines` ssh host keys
A NixOS flake managing multiple machines with role-based configuration, agenix secrets, and sandboxed dev workspaces.
## Layout
- `/common` - shared configuration imported by all machines
- `/boot` - bootloaders, CPU microcode, remote LUKS unlock over Tor
- `/network` - Tailscale, VPN tunneling via PIA
- `/pc` - desktop/graphical config (enabled by the `personal` role)
- `/server` - service definitions and extensions
- `/sandboxed-workspace` - isolated dev environments (VM, container, or Incus)
- `/machines` - per-machine config (`default.nix`, `hardware-configuration.nix`, `properties.nix`)
- `/secrets` - agenix-encrypted secrets, decryptable by machines based on their roles
- `/home` - Home Manager user config
- `/lib` - custom library functions extending nixpkgs lib
- `/overlays` - nixpkgs overlays applied globally
- `/patches` - patches applied to nixpkgs at build time
## Notable Features
**Auto-discovery & roles** — Machines register themselves by placing a `properties.nix` under `/machines/`. No manual listing in `flake.nix`. Roles declared per-machine (`"personal"`, `"dns-challenge"`, etc.) drive feature enablement via `config.thisMachine.hasRole.<role>` and control which agenix secrets each machine can decrypt.
**Machine properties module system** — `properties.nix` files form a separate lightweight module system (`machine-info`) for recording machine metadata (hostnames, architecture, roles, SSH keys). Since every machine's properties are visible to every other machine, each system can reflect on the properties of the entire fleet — enabling automatic SSH trust, role-based secret access, and cross-machine coordination without duplicating information.
**Remote LUKS unlock over Tor** — Machines with encrypted root disks can be unlocked remotely via SSH. An embedded Tor hidden service starts in the initrd so the machine is reachable even without a known IP, using a separate SSH host key for the boot environment.
**VPN containers** — A `vpn-container` module spins up an ephemeral NixOS container with a PIA WireGuard tunnel. The host creates the WireGuard interface and authenticates with PIA, then hands it off to the container's network namespace. The container therefore **never** holds a direct route to the internet, so traffic cannot leak outside the tunnel.
**Sandboxed workspaces** — Isolated dev environments backed by microVMs (cloud-hypervisor), systemd-nspawn containers, or Incus. Each workspace gets a static IP on a NAT'd bridge, auto-generated SSH host keys, shell aliases for management, and comes pre-configured with Claude Code. The sandbox network blocks access to the local LAN while allowing internet.
**Snapshot-aware backups** — Restic backups to Backblaze B2 automatically create ZFS snapshots or btrfs read-only snapshots before backing up, using mount namespaces to bind-mount frozen data over the original paths so restic records correct paths. Each backup group gets a `restic_<group>` CLI wrapper. Supports `.nobackup` marker files.
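For manual operations, the wrapper forwards its arguments to restic with the group's repository and credentials preconfigured, so (assuming a group named `nextcloud`) listing snapshots is just:
```
restic_nextcloud snapshots
```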

TODO.md

@@ -10,24 +10,12 @@
- https://nixos.wiki/wiki/Comparison_of_NixOS_setups
### Housekeeping
- Format everything here using nixfmt
- Cleanup the line between hardware-configuration.nix and configuration.nix in machine config
- CI https://gvolpe.com/blog/nixos-binary-cache-ci/
- remove `options.currentSystem`
- allow `hostname` option for webservices to be null to disable configuring nginx
### NAS
- helios64 extra led lights
- safely turn off NAS on power disconnect
- hardware de/encoding for rk3399 helios64 https://forum.pine64.org/showthread.php?tid=14018
- tor unlock
### bcachefs
- bcachefs health alerts via email
- bcachefs periodic snapshotting
- use mount.bcachefs command for mounting
- bcachefs native encryption
- just need a kernel module? https://github.com/firestack/bcachefs-tools-flake/blob/kf/dev/mvp/nixos/module/bcachefs.nix#L40
### Shell Commands
- tailexitnode = `sudo tailscale up --exit-node=<exit-node-ip> --exit-node-allow-lan-access=true`
@@ -52,21 +40,7 @@
- https://ampache.org/
- replace nextcloud with seafile
### VPN container
- use wireguard for vpn
- https://github.com/triffid/pia-wg/blob/master/pia-wg.sh
- https://github.com/pia-foss/manual-connections
- port forwarding for vpn
- transmission using forwarded port
- https://www.wireguard.com/netns/
- one way firewall for vpn container
### Networking
- tailscale for p2p connections
- remove all use of zerotier
### Archive
- https://www.backblaze.com/b2/cloud-storage.html
- email
- https://github.com/Disassembler0/dovecot-archive/blob/main/src/dovecot_archive.py
- http://kb.unixservertech.com/software/dovecot/archiveserver
@@ -75,7 +49,32 @@
- https://christine.website/blog/paranoid-nixos-2021-07-18
- https://nixos.wiki/wiki/Impermanence
# Setup CI
- CI
- hydra
- https://docs.cachix.org/continuous-integration-setup/
- Binary Cache
- Maybe use cachix https://gvolpe.com/blog/nixos-binary-cache-ci/
- Self hosted binary cache? https://www.tweag.io/blog/2019-11-21-untrusted-ci/
- https://github.com/edolstra/nix-serve
- https://nixos.wiki/wiki/Binary_Cache
- https://discourse.nixos.org/t/introducing-attic-a-self-hostable-nix-binary-cache-server/24343
- Both
- https://garnix.io/
- https://nixbuild.net
# Secrets
- consider using headscale
- Replace luks over tor for remote unlock with luks over tailscale using ephemeral keys
- Rollover luks FDE passwords
- /secrets on personal computers should only be readable using a trusted ssh key, preferably requiring a yubikey
- Rollover shared yubikey secrets
- offsite backup yubikey, pw db, and ssh key with /secrets access
### Misc
- for automated kernel upgrades on luks systems, need to kexec with initrd that contains luks key
- https://github.com/flowztul/keyexec/blob/master/etc/default/kexec-cryptroot
- https://github.com/pop-os/system76-scheduler
- improve email a little bit https://helloinbox.email
- remap razer keys https://github.com/sezanzeb/input-remapper


@@ -4,11 +4,12 @@
let
cfg = config.system.autoUpgrade;
in {
in
{
config = lib.mkIf cfg.enable {
system.autoUpgrade = {
flake = "git+https://git.neet.dev/zuckerberg/nix-config.git";
flags = [ "--recreate-lock-file" ]; # ignore lock file, just pull the latest
flags = [ "--recreate-lock-file" "--no-write-lock-file" ]; # ignore lock file, just pull the latest
};
};
}

common/backups.nix

@@ -0,0 +1,179 @@
{ config, lib, pkgs, ... }:
let
cfg = config.backup;
mkRepository = group: "s3:s3.us-west-004.backblazeb2.com/D22TgIt0-main-backup/${group}";
findmnt = "${pkgs.util-linux}/bin/findmnt";
mount = "${pkgs.util-linux}/bin/mount";
umount = "${pkgs.util-linux}/bin/umount";
btrfs = "${pkgs.btrfs-progs}/bin/btrfs";
zfs = "/run/current-system/sw/bin/zfs";
# Creates snapshots and bind mounts them over original paths within the
# service's mount namespace, so restic sees correct paths but reads frozen data
snapshotHelperFn = ''
snapshot_for_path() {
local group="$1" path="$2" action="$3"
local pathhash fstype
pathhash=$(echo -n "$path" | sha256sum | cut -c1-8)
fstype=$(${findmnt} -n -o FSTYPE -T "$path" 2>/dev/null || echo "unknown")
case "$fstype" in
zfs)
local dataset mount subpath snapname snappath
dataset=$(${findmnt} -n -o SOURCE -T "$path")
mount=$(${findmnt} -n -o TARGET -T "$path")
subpath=''${path#"$mount"}
[[ "$subpath" != /* ]] && subpath="/$subpath"
snapname="''${dataset}@restic-''${group}-''${pathhash}"
snappath="''${mount}/.zfs/snapshot/restic-''${group}-''${pathhash}''${subpath}"
case "$action" in
create)
${zfs} destroy "$snapname" 2>/dev/null || true
${zfs} snapshot "$snapname"
${mount} --bind "$snappath" "$path"
echo "$path"
;;
destroy)
${umount} "$path" 2>/dev/null || true
${zfs} destroy "$snapname" 2>/dev/null || true
;;
esac
;;
btrfs)
local mount subpath snapdir snappath
mount=$(${findmnt} -n -o TARGET -T "$path")
subpath=''${path#"$mount"}
[[ "$subpath" != /* ]] && subpath="/$subpath"
snapdir="/.restic-snapshots/''${group}-''${pathhash}"
snappath="''${snapdir}''${subpath}"
case "$action" in
create)
${btrfs} subvolume delete "$snapdir" 2>/dev/null || true
mkdir -p /.restic-snapshots
${btrfs} subvolume snapshot -r "$mount" "$snapdir" >&2
${mount} --bind "$snappath" "$path"
echo "$path"
;;
destroy)
${umount} "$path" 2>/dev/null || true
${btrfs} subvolume delete "$snapdir" 2>/dev/null || true
;;
esac
;;
*)
echo "No snapshot support for $fstype ($path), using original" >&2
[ "$action" = "create" ] && echo "$path"
;;
esac
}
'';
mkBackup = group: paths: {
repository = mkRepository group;
dynamicFilesFrom = "cat /run/restic-backup-${group}/paths";
backupPrepareCommand = ''
mkdir -p /run/restic-backup-${group}
: > /run/restic-backup-${group}/paths
${snapshotHelperFn}
for path in ${lib.escapeShellArgs paths}; do
snapshot_for_path ${lib.escapeShellArg group} "$path" create >> /run/restic-backup-${group}/paths
done
'';
backupCleanupCommand = ''
${snapshotHelperFn}
for path in ${lib.escapeShellArgs paths}; do
snapshot_for_path ${lib.escapeShellArg group} "$path" destroy
done
rm -rf /run/restic-backup-${group}
'';
initialize = true;
timerConfig = {
OnCalendar = "daily";
RandomizedDelaySec = "1h";
};
extraBackupArgs = [
''--exclude-if-present ".nobackup"''
];
# Keeps backups from up to 6 months ago
pruneOpts = [
"--keep-daily 7" # one backup for each of the last n days
"--keep-weekly 5" # one backup for each of the last n weeks
"--keep-monthly 6" # one backup for each of the last n months
];
environmentFile = "/run/agenix/backblaze-s3-backups";
passwordFile = "/run/agenix/restic-password";
};
# example usage: "sudo restic_samba unlock" (removes lockfile)
mkResticGroupCmd = group: pkgs.writeShellScriptBin "restic_${group}" ''
if [ "$EUID" -ne 0 ]
then echo "Run as root"
exit
fi
. /run/agenix/backblaze-s3-backups
export AWS_SECRET_ACCESS_KEY
export AWS_ACCESS_KEY_ID
export RESTIC_PASSWORD_FILE=/run/agenix/restic-password
export RESTIC_REPOSITORY="${mkRepository group}"
exec ${pkgs.restic}/bin/restic "$@"
'';
in
{
options.backup = {
group = lib.mkOption {
default = null;
type = lib.types.nullOr (lib.types.attrsOf (lib.types.submodule {
options = {
paths = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = ''
Paths to backup
'';
};
};
}));
};
};
config = lib.mkIf (cfg.group != null) {
assertions = lib.mapAttrsToList (group: _: {
assertion = config.systemd.services."restic-backups-${group}".enable;
message = "Expected systemd service 'restic-backups-${group}' not found. The nixpkgs restic module may have changed its naming convention.";
}) cfg.group;
services.restic.backups = lib.concatMapAttrs
(group: groupCfg: {
${group} = mkBackup group groupCfg.paths;
})
cfg.group;
# Mount namespace lets us bind mount snapshots over original paths,
# so restic backs up from frozen snapshots while recording correct paths
systemd.services = lib.concatMapAttrs
(group: _: {
"restic-backups-${group}".serviceConfig.PrivateMounts = true;
})
cfg.group;
age.secrets.backblaze-s3-backups.file = ../secrets/backblaze-s3-backups.age;
age.secrets.restic-password.file = ../secrets/restic-password.age;
environment.systemPackages = map mkResticGroupCmd (builtins.attrNames cfg.group);
};
}

common/binary-cache.nix

@@ -0,0 +1,29 @@
{ config, ... }:
{
nix = {
settings = {
substituters = [
"https://cache.nixos.org/"
"https://nix-community.cachix.org"
"http://s0.koi-bebop.ts.net:28338/nixos"
];
trusted-public-keys = [
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
"nixos:SnTTQutdOJbAmxo6AQ3cbRt5w9f4byMXQODCieBH3PQ="
];
# Allow substituters to be offline.
# Not exactly ideal: it would be better if a derivation missing from every
# substituter were an error, with this flag used as intended, i.e. to decide
# whether missing derivations should be built locally.
# See https://github.com/NixOS/nix/issues/6901
fallback = true;
# Authenticate to private nixos cache
netrc-file = config.age.secrets.attic-netrc.path;
};
};
age.secrets.attic-netrc.file = ../secrets/attic-netrc.age;
}


@@ -3,24 +3,27 @@
with lib;
let
cfg = config.bios;
in {
in
{
options.bios = {
enable = mkEnableOption "enable bios boot";
device = mkOption {
type = types.str;
};
configurationLimit = mkOption {
default = 20;
type = types.int;
};
};
config = mkIf cfg.enable {
# Use GRUB 2 for BIOS
boot.loader = {
timeout = 2;
grub = {
enable = true;
device = cfg.device;
version = 2;
useOSProber = true;
configurationLimit = 20;
configurationLimit = cfg.configurationLimit;
theme = pkgs.nixos-grub2-theme;
};
};


@@ -1,10 +1,10 @@
{ lib, config, pkgs, ... }:
{ ... }:
{
imports = [
./firmware.nix
./efi.nix
./bios.nix
./luks.nix
./remote-luks-unlock.nix
];
}


@@ -3,24 +3,27 @@
with lib;
let
cfg = config.efi;
in {
in
{
options.efi = {
enable = mkEnableOption "enable efi boot";
configurationLimit = mkOption {
default = 20;
type = types.int;
};
};
config = mkIf cfg.enable {
# Use GRUB2 for EFI
boot.loader = {
efi.canTouchEfiVariables = true;
timeout = 2;
grub = {
enable = true;
device = "nodev";
version = 2;
efiSupport = true;
useOSProber = true;
# memtest86.enable = true;
configurationLimit = 20;
configurationLimit = cfg.configurationLimit;
theme = pkgs.nixos-grub2-theme;
};
};


@@ -1,9 +1,10 @@
{ lib, config, pkgs, ... }:
{ lib, config, ... }:
with lib;
let
cfg = config.firmware;
in {
in
{
options.firmware.x86_64 = {
enable = mkEnableOption "enable x86_64 firmware";
};


@@ -1,101 +0,0 @@
{ config, pkgs, lib, ... }:
let
cfg = config.luks;
in {
options.luks = {
enable = lib.mkEnableOption "enable luks root remote decrypt over ssh/tor";
device = {
name = lib.mkOption {
type = lib.types.str;
default = "enc-pv";
};
path = lib.mkOption {
type = lib.types.either lib.types.str lib.types.path;
};
allowDiscards = lib.mkOption {
type = lib.types.bool;
default = false;
};
};
sshHostKeys = lib.mkOption {
type = lib.types.listOf (lib.types.either lib.types.str lib.types.path);
default = [
"/secret/ssh_host_rsa_key"
"/secret/ssh_host_ed25519_key"
];
};
sshAuthorizedKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = config.users.users.googlebot.openssh.authorizedKeys.keys;
};
onionConfig = lib.mkOption {
type = lib.types.path;
default = /secret/onion;
};
kernelModules = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ "e1000" "e1000e" "virtio_pci" "r8169" ];
};
};
config = lib.mkIf cfg.enable {
boot.initrd.luks.devices.${cfg.device.name} = {
device = cfg.device.path;
allowDiscards = cfg.device.allowDiscards;
};
# Unlock LUKS disk over ssh
boot.initrd.network.enable = true;
boot.initrd.kernelModules = cfg.kernelModules;
boot.initrd.network.ssh = {
enable = true;
port = 22;
hostKeys = cfg.sshHostKeys;
authorizedKeys = cfg.sshAuthorizedKeys;
};
boot.initrd.postDeviceCommands = ''
echo 'waiting for root device to be opened...'
mkfifo /crypt-ramfs/passphrase
echo /crypt-ramfs/passphrase >> /dev/null
'';
# Make machine accessible over tor for boot unlock
boot.initrd.secrets = {
"/etc/tor/onion/bootup" = cfg.onionConfig;
};
boot.initrd.extraUtilsCommands = ''
copy_bin_and_libs ${pkgs.tor}/bin/tor
copy_bin_and_libs ${pkgs.haveged}/bin/haveged
'';
# start tor during boot process
boot.initrd.network.postCommands = let
torRc = (pkgs.writeText "tor.rc" ''
DataDirectory /etc/tor
SOCKSPort 127.0.0.1:9050 IsolateDestAddr
SOCKSPort 127.0.0.1:9063
HiddenServiceDir /etc/tor/onion/bootup
HiddenServicePort 22 127.0.0.1:22
'');
in ''
# Add nice prompt for giving LUKS passphrase over ssh
echo 'read -s -p "Unlock Passphrase: " passphrase && echo $passphrase > /crypt-ramfs/passphrase && exit' >> /root/.profile
echo "tor: preparing onion folder"
# have to do this otherwise tor does not want to start
chmod -R 700 /etc/tor
echo "make sure localhost is up"
ip a a 127.0.0.1/8 dev lo
ip link set lo up
echo "haveged: starting haveged"
haveged -F &
echo "tor: starting tor"
tor -f ${torRc} --verify-config
tor -f ${torRc} &
'';
};
}


@@ -0,0 +1,96 @@
{ config, pkgs, lib, ... }:
# TODO: use tailscale instead of tor https://gist.github.com/antifuchs/e30d58a64988907f282c82231dde2cbc
let
cfg = config.remoteLuksUnlock;
in
{
options.remoteLuksUnlock = {
enable = lib.mkEnableOption "enable luks root remote decrypt over ssh/tor";
enableTorUnlock = lib.mkOption {
type = lib.types.bool;
default = cfg.enable;
description = "Make machine accessable over tor for ssh boot unlock";
};
sshHostKeys = lib.mkOption {
type = lib.types.listOf (lib.types.either lib.types.str lib.types.path);
default = [
"/secret/ssh_host_rsa_key"
"/secret/ssh_host_ed25519_key"
];
};
sshAuthorizedKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = config.users.users.googlebot.openssh.authorizedKeys.keys;
};
onionConfig = lib.mkOption {
type = lib.types.path;
default = /secret/onion;
};
kernelModules = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ "e1000" "e1000e" "virtio_pci" "r8169" ];
};
};
config = lib.mkIf cfg.enable {
# Unlock LUKS disk over ssh
boot.initrd.network.enable = true;
boot.initrd.kernelModules = cfg.kernelModules;
boot.initrd.network.ssh = {
enable = true;
port = 22;
hostKeys = cfg.sshHostKeys;
authorizedKeys = cfg.sshAuthorizedKeys;
};
boot.initrd.postDeviceCommands = ''
echo 'waiting for root device to be opened...'
mkfifo /crypt-ramfs/passphrase
echo /crypt-ramfs/passphrase >> /dev/null
'';
boot.initrd.secrets = lib.mkIf cfg.enableTorUnlock {
"/etc/tor/onion/bootup" = cfg.onionConfig;
};
boot.initrd.extraUtilsCommands = lib.mkIf cfg.enableTorUnlock ''
copy_bin_and_libs ${pkgs.tor}/bin/tor
copy_bin_and_libs ${pkgs.haveged}/bin/haveged
'';
boot.initrd.network.postCommands = lib.mkMerge [
(
''
# Add nice prompt for giving LUKS passphrase over ssh
echo 'read -s -p "Unlock Passphrase: " passphrase && echo $passphrase > /crypt-ramfs/passphrase && exit' >> /root/.profile
''
)
(
let torRc = (pkgs.writeText "tor.rc" ''
DataDirectory /etc/tor
SOCKSPort 127.0.0.1:9050 IsolateDestAddr
SOCKSPort 127.0.0.1:9063
HiddenServiceDir /etc/tor/onion/bootup
HiddenServicePort 22 127.0.0.1:22
''); in
lib.mkIf cfg.enableTorUnlock ''
echo "tor: preparing onion folder"
# have to do this otherwise tor does not want to start
chmod -R 700 /etc/tor
echo "make sure localhost is up"
ip a a 127.0.0.1/8 dev lo
ip link set lo up
echo "haveged: starting haveged"
haveged -F &
echo "tor: starting tor"
tor -f ${torRc} --verify-config
tor -f ${torRc} &
''
)
];
};
}


@@ -1,7 +1,9 @@
{ config, pkgs, ... }:
{ config, pkgs, lib, ... }:
{
imports = [
./backups.nix
./binary-cache.nix
./flakes.nix
./auto-update.nix
./shell.nix
@@ -9,28 +11,44 @@
./boot
./server
./pc
./machine-info
./nix-builder.nix
./ssh.nix
./sandboxed-workspace
];
nix.flakes.enable = true;
system.stateVersion = "21.11";
system.stateVersion = "23.11";
networking.useDHCP = false;
networking.useDHCP = lib.mkDefault true;
networking.firewall.enable = true;
networking.firewall.allowPing = true;
time.timeZone = "America/New_York";
i18n.defaultLocale = "en_US.UTF-8";
time.timeZone = "America/Los_Angeles";
i18n = {
defaultLocale = "en_US.UTF-8";
extraLocaleSettings = {
LANGUAGE = "en_US.UTF-8";
LC_ALL = "en_US.UTF-8";
};
};
services.openssh.enable = true;
services.openssh = {
enable = true;
settings = {
PasswordAuthentication = false;
};
};
programs.mosh.enable = true;
environment.systemPackages = with pkgs; [
wget
kakoune
htop
git git-lfs
git
git-lfs
dnsutils
tmux
nethogs
@@ -38,10 +56,13 @@
pciutils
usbutils
killall
screen
micro
helix
lm_sensors
picocom
lf
gnumake
tree
];
nixpkgs.config.allowUnfree = true;
@@ -54,14 +75,30 @@
"dialout" # serial
];
shell = pkgs.fish;
openssh.authorizedKeys.keys = (import ./ssh.nix).users;
openssh.authorizedKeys.keys = config.machines.ssh.userKeys;
hashedPassword = "$6$TuDO46rILr$gkPUuLKZe3psexhs8WFZMpzgEBGksE.c3Tjh1f8sD0KMC4oV89K2pqAABfl.Lpxu2jVdr5bgvR5cWnZRnji/r/";
uid = 1000;
};
nix.trustedUsers = [ "root" "googlebot" ];
users.users.root = {
openssh.authorizedKeys.keys = config.machines.ssh.deployKeys;
};
nix.settings = {
trusted-users = [ "root" "googlebot" ];
};
# don't use sudo
security.doas.enable = true;
security.sudo.enable = false;
security.doas.extraRules = [
# don't ask for password every time
{ groups = [ "wheel" ]; persist = true; }
];
nix.gc.automatic = true;
security.acme.acceptTerms = true;
security.acme.defaults.email = "zuckerberg@neet.dev";
# Enable Desktop Environment if this is a PC (machine role is "personal")
de.enable = lib.mkDefault (config.thisMachine.hasRole."personal");
}


@@ -1,24 +1,18 @@
{ lib, pkgs, config, ... }:
{ lib, config, ... }:
with lib;
let
cfg = config.nix.flakes;
in {
in
{
options.nix.flakes = {
enable = mkEnableOption "use nix flakes";
};
config = mkIf cfg.enable {
nix = {
package = pkgs.nixFlakes;
extraOptions = ''
experimental-features = nix-command flakes
'';
# pin nixpkgs for system commands such as "nix shell"
registry.nixpkgs.flake = config.inputs.nixpkgs;
# pin system nixpkgs to the same version as the flake input
nixPath = [ "nixpkgs=${config.inputs.nixpkgs}" ];
};
};
}


@@ -0,0 +1,207 @@
# Gathers info about each machine to construct the overall configuration
# Ex: each machine automatically trusts every other machine's SSH fingerprint
{ config, lib, ... }:
let
machines = config.machines.hosts;
hostOptionsSubmoduleType = lib.types.submodule {
options = {
hostNames = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = ''
List of hostnames for this machine. The first one is the default so it is the target of deployments.
Used for automatically trusting hosts for ssh connections.
'';
};
arch = lib.mkOption {
type = lib.types.enum [ "x86_64-linux" "aarch64-linux" ];
description = ''
The architecture of this machine.
'';
};
systemRoles = lib.mkOption {
type = lib.types.listOf lib.types.str; # TODO: maybe use an enum?
description = ''
The set of roles this machine holds. Affects secrets available. (TODO add service config as well using this info)
'';
};
hostKey = lib.mkOption {
type = lib.types.str;
description = ''
The system ssh host key of this machine. Used for automatically trusting hosts for ssh connections
and for decrypting secrets with agenix.
'';
};
remoteUnlock = lib.mkOption {
default = null;
type = lib.types.nullOr (lib.types.submodule {
options = {
hostKey = lib.mkOption {
type = lib.types.str;
description = ''
The system ssh host key of this machine used for luks boot unlocking only.
'';
};
clearnetHost = lib.mkOption {
default = null;
type = lib.types.nullOr lib.types.str;
description = ''
The hostname resolvable over clearnet used to luks boot unlock this machine
'';
};
onionHost = lib.mkOption {
default = null;
type = lib.types.nullOr lib.types.str;
description = ''
The hostname resolvable over tor used to luks boot unlock this machine
'';
};
};
});
};
userKeys = lib.mkOption {
default = [ ];
type = lib.types.listOf lib.types.str;
description = ''
The list of user keys. Each key here can be used to log into all other systems as `googlebot`.
TODO: consider auto populating other programs that use ssh keys such as gitea
'';
};
deployKeys = lib.mkOption {
default = [ ];
type = lib.types.listOf lib.types.str;
description = ''
The list of deployment keys. Each key here can be used to log into all other systems as `root`.
'';
};
configurationPath = lib.mkOption {
type = lib.types.path;
description = ''
The path to this machine's configuration directory.
'';
};
};
};
in
{
imports = [
./ssh.nix
./roles.nix
];
options.machines = {
hosts = lib.mkOption {
type = lib.types.attrsOf hostOptionsSubmoduleType;
};
};
options.thisMachine.config = lib.mkOption {
# For ease of use, a direct copy of the host config from machines.hosts.${hostName}
type = hostOptionsSubmoduleType;
};
config = {
assertions = (lib.concatLists (lib.mapAttrsToList
(
name: cfg: [
{
assertion = builtins.length cfg.hostNames > 0;
message = ''
Error with config for ${name}
There must be at least one hostname.
'';
}
{
assertion = builtins.length cfg.systemRoles > 0;
message = ''
Error with config for ${name}
There must be at least one system role.
'';
}
{
assertion = cfg.remoteUnlock == null || cfg.remoteUnlock.hostKey != cfg.hostKey;
message = ''
Error with config for ${name}
Unlock hostkey and hostkey cannot be the same because unlock hostkey is in /boot, unencrypted.
'';
}
{
assertion = cfg.remoteUnlock == null || (cfg.remoteUnlock.clearnetHost != null || cfg.remoteUnlock.onionHost != null);
message = ''
Error with config for ${name}
At least one of clearnet host or onion host must be defined.
'';
}
{
assertion = cfg.remoteUnlock == null || cfg.remoteUnlock.clearnetHost == null || builtins.elem cfg.remoteUnlock.clearnetHost cfg.hostNames == false;
message = ''
Error with config for ${name}
Clearnet unlock hostname cannot be in the list of hostnames for security reasons.
'';
}
{
assertion = cfg.remoteUnlock == null || cfg.remoteUnlock.onionHost == null || lib.strings.hasSuffix ".onion" cfg.remoteUnlock.onionHost;
message = ''
Error with config for ${name}
Tor unlock hostname must be an onion address.
'';
}
{
assertion = builtins.elem "personal" cfg.systemRoles || builtins.length cfg.userKeys == 0;
message = ''
Error with config for ${name}
Only personal machines are allowed to have user keys.
'';
}
{
assertion = builtins.elem "deploy" cfg.systemRoles || builtins.length cfg.deployKeys == 0;
message = ''
Error with config for ${name}
Only deploy machines are allowed to have deploy keys for security reasons.
'';
}
]
)
machines));
# Set each machine's properties automatically from its `properties.nix` file
machines.hosts =
let
properties = dir: lib.concatMapAttrs
(name: path: {
${name} =
import path
//
{ configurationPath = builtins.dirOf path; };
})
(propertiesFiles dir);
propertiesFiles = dir:
lib.foldl (lib.mergeAttrs) { } (propertiesFiles' dir);
propertiesFiles' = dir:
let
propFiles = lib.filter (p: baseNameOf p == "properties.nix") (lib.filesystem.listFilesRecursive dir);
dirName = path: builtins.baseNameOf (builtins.dirOf path);
in
builtins.map (p: { "${dirName p}" = p; }) propFiles;
in
properties ../../machines;
# Don't try to evaluate "thisMachine" when reflecting using moduleless.nix.
# When evaluated by moduleless.nix this will fail because networking.hostName
# does not exist. moduleless.nix is not intended for reflection from the
# perspective of a particular machine; it is for reflecting on the properties
# of all machines as a whole system.
thisMachine.config = config.machines.hosts.${config.networking.hostName};
# Add ssh keys from KeePassXC
machines.ssh.userKeys = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILACiZO7QnB4bcmziVaUkUE0ZPMR0M/yJbbHYsHIZz9g" ];
machines.ssh.deployKeys = [ "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID58MvKGs3GDMMcN8Iyi9S59SciSrVM97wKtOvUAl3li" ];
};
}
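
Concretely, each machine then only declares its own facts in a `properties.nix`; a representative file might look like this (values illustrative; `configurationPath` is filled in automatically):

```nix
# machines/example/properties.nix
{
  hostNames = [ "example" "example.neet.dev" ];
  arch = "x86_64-linux";
  systemRoles = [ "personal" ];
  hostKey = "ssh-ed25519 AAAA...";
  userKeys = [ "ssh-ed25519 AAAA..." ];
}
```
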


@@ -0,0 +1,15 @@
# Allows getting machine-info outside the scope of nixos configuration
{ nixpkgs ? import <nixpkgs> { }
, assertionsModule ? <nixpkgs/nixos/modules/misc/assertions.nix>
}:
{
machines =
(nixpkgs.lib.evalModules {
modules = [
./default.nix
assertionsModule
];
}).config.machines;
}
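
This makes the machine metadata queryable from plain Nix, outside any particular NixOS evaluation. For example, a sketch that lists each machine's default deploy target (the first entry of `hostNames`):

```nix
builtins.mapAttrs
  (name: host: builtins.head host.hostNames)
  (import ./moduleless.nix { }).machines.hosts
```
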


@@ -0,0 +1,55 @@
{ config, lib, ... }:
# Maps roles to their hosts.
# machines.withRole = {
# personal = [
# "machine1" "machine3"
# ];
# cache = [
# "machine2"
# ];
# };
#
# A list of all possible roles
# machines.allRoles = [
# "personal"
# "cache"
# ];
#
# For each role, true or false depending on whether the current machine has that role
# thisMachine.hasRole = {
# personal = true;
# cache = false;
# };
{
options.machines.withRole = lib.mkOption {
type = lib.types.attrsOf (lib.types.listOf lib.types.str);
};
options.machines.allRoles = lib.mkOption {
type = lib.types.listOf lib.types.str;
};
options.thisMachine.hasRole = lib.mkOption {
type = lib.types.attrsOf lib.types.bool;
};
config = {
machines.withRole = lib.zipAttrs
(lib.mapAttrsToList
(host: cfg:
lib.foldl (lib.mergeAttrs) { }
(builtins.map (role: { ${role} = host; })
cfg.systemRoles))
config.machines.hosts);
machines.allRoles = lib.attrNames config.machines.withRole;
thisMachine.hasRole = lib.mapAttrs
(role: cfg:
builtins.elem config.networking.hostName config.machines.withRole.${role}
)
config.machines.withRole;
};
}


@@ -0,0 +1,44 @@
{ config, lib, ... }:
let
machines = config.machines;
sshkeys = keyType: lib.foldl (l: cfg: l ++ cfg.${keyType}) [ ] (builtins.attrValues machines.hosts);
in
{
options.machines.ssh = {
userKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = ''
List of user keys aggregated from all machines.
'';
};
deployKeys = lib.mkOption {
default = [ ];
type = lib.types.listOf lib.types.str;
description = ''
List of deploy keys aggregated from all machines.
'';
};
hostKeysByRole = lib.mkOption {
type = lib.types.attrsOf (lib.types.listOf lib.types.str);
description = ''
Machine host keys divided into their roles.
'';
};
};
config = {
machines.ssh.userKeys = sshkeys "userKeys";
machines.ssh.deployKeys = sshkeys "deployKeys";
machines.ssh.hostKeysByRole = lib.mapAttrs
(role: hosts:
builtins.map
(host: machines.hosts.${host}.hostKey)
hosts)
machines.withRole;
};
}


@@ -7,11 +7,11 @@ let
in
{
imports = [
./hosts.nix
./pia-openvpn.nix
./pia-wireguard.nix
./tailscale.nix
./vpn.nix
./zerotier.nix
./sandbox.nix
];
options.networking.ip_forward = mkEnableOption "Enable ip forwarding";


@@ -1,63 +0,0 @@
{ config, lib, ... }:
let
system = (import ../ssh.nix).system;
in {
networking.hosts = {
# some DNS providers filter local ip results from DNS request
"172.30.145.180" = [ "s0.zt.neet.dev" ];
"172.30.109.9" = [ "ponyo.zt.neet.dev" ];
"172.30.189.212" = [ "ray.zt.neet.dev" ];
};
programs.ssh.knownHosts = {
liza = {
hostNames = [ "liza" "liza.neet.dev" ];
publicKey = system.liza;
};
ponyo = {
hostNames = [ "ponyo" "ponyo.neet.dev" "ponyo.zt.neet.dev" "git.neet.dev" ];
publicKey = system.ponyo;
};
ponyo-unlock = {
hostNames = [ "unlock.ponyo.neet.dev" "cfamr6artx75qvt7ho3rrbsc7mkucmv5aawebwflsfuorusayacffryd.onion" ];
publicKey = system.ponyo-unlock;
};
ray = {
hostNames = [ "ray" "ray.zt.neet.dev" ];
publicKey = system.ray;
};
s0 = {
hostNames = [ "s0" "s0.zt.neet.dev" ];
publicKey = system.s0;
};
n1 = {
hostNames = [ "n1" ];
publicKey = system.n1;
};
n2 = {
hostNames = [ "n2" ];
publicKey = system.n2;
};
n3 = {
hostNames = [ "n3" ];
publicKey = system.n3;
};
n4 = {
hostNames = [ "n4" ];
publicKey = system.n4;
};
n5 = {
hostNames = [ "n5" ];
publicKey = system.n5;
};
n6 = {
hostNames = [ "n6" ];
publicKey = system.n6;
};
n7 = {
hostNames = [ "n7" ];
publicKey = system.n7;
};
};
}


@@ -1,7 +1,7 @@
{ config, pkgs, lib, ... }:
let
cfg = config.pia;
cfg = config.pia.openvpn;
vpnfailsafe = pkgs.stdenv.mkDerivation {
pname = "vpnfailsafe";
version = "0.0.1";
@@ -14,7 +14,7 @@ let
};
in
{
options.pia = {
options.pia.openvpn = {
enable = lib.mkEnableOption "Enable private internet access";
server = lib.mkOption {
type = lib.types.str;
@@ -108,6 +108,6 @@ in
};
};
};
age.secrets."pia-login.conf".file = ../../secrets/pia-login.conf;
age.secrets."pia-login.conf".file = ../../secrets/pia-login.age;
};
}


@@ -0,0 +1,363 @@
{ config, lib, pkgs, ... }:
# Server list:
# https://serverlist.piaservers.net/vpninfo/servers/v6
# Reference materials:
# https://github.com/pia-foss/manual-connections
# https://github.com/thrnz/docker-wireguard-pia/blob/master/extra/wg-gen.sh
# TODO handle potential errors (or at least print status, success, and failures to the console)
# TODO parameterize names of systemd services so that multiple wg VPNs can more easily coexist
# TODO implement this module such that the wireguard VPN doesn't have to live in a container
# TODO don't add forward rules if the PIA port is the same as cfg.forwardedPort
# TODO verify signatures of PIA responses
# TODO `RuntimeMaxSec = "30d";` for pia-vpn-wireguard-init isn't allowed per the systemd logs. Find alternative.
with builtins;
with lib;
let
cfg = config.pia.wireguard;
getPIAToken = ''
PIA_USER=`sed '1q;d' /run/agenix/pia-login.conf`
PIA_PASS=`sed '2q;d' /run/agenix/pia-login.conf`
# PIA_TOKEN only lasts 24hrs
PIA_TOKEN=`curl -s -u "$PIA_USER:$PIA_PASS" https://www.privateinternetaccess.com/gtoken/generateToken | jq -r '.token'`
'';
chooseWireguardServer = ''
servers=$(mktemp)
servers_json=$(mktemp)
curl -s "https://serverlist.piaservers.net/vpninfo/servers/v6" > "$servers"
# extract json part only
head -n 1 "$servers" | tr -d '\n' > "$servers_json"
echo "Available location ids:" && jq '.regions | .[] | {name, id, port_forward}' "$servers_json"
# Some locations have multiple servers available. Pick a random one.
totalservers=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | length' "$servers_json")
if ! [[ "$totalservers" =~ ^[0-9]+$ ]] || [ "$totalservers" -eq 0 ] 2>/dev/null; then
echo "Location \"${cfg.serverLocation}\" not found."
exit 1
fi
serverindex=$(( RANDOM % totalservers))
WG_HOSTNAME=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | .['$serverindex'].cn' "$servers_json")
WG_SERVER_IP=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | .['$serverindex'].ip' "$servers_json")
WG_SERVER_PORT=$(jq -r '.groups.wg | .[0] | .ports | .[0]' "$servers_json")
# write chosen server
rm -f /tmp/${cfg.interfaceName}-server.conf
touch /tmp/${cfg.interfaceName}-server.conf
chmod 700 /tmp/${cfg.interfaceName}-server.conf
echo "$WG_HOSTNAME" >> /tmp/${cfg.interfaceName}-server.conf
echo "$WG_SERVER_IP" >> /tmp/${cfg.interfaceName}-server.conf
echo "$WG_SERVER_PORT" >> /tmp/${cfg.interfaceName}-server.conf
rm $servers_json $servers
'';
getChosenWireguardServer = ''
WG_HOSTNAME=`sed '1q;d' /tmp/${cfg.interfaceName}-server.conf`
WG_SERVER_IP=`sed '2q;d' /tmp/${cfg.interfaceName}-server.conf`
WG_SERVER_PORT=`sed '3q;d' /tmp/${cfg.interfaceName}-server.conf`
'';
refreshPIAPort = ''
${getChosenWireguardServer}
signature=`sed '1q;d' /tmp/${cfg.interfaceName}-port-renewal`
payload=`sed '2q;d' /tmp/${cfg.interfaceName}-port-renewal`
bind_port_response=`curl -Gs -m 5 --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" --data-urlencode "payload=$payload" --data-urlencode "signature=$signature" "https://$WG_HOSTNAME:19999/bindPort"`
'';
portForwarding = cfg.forwardPortForTransmission || cfg.forwardedPort != null;
containerServiceName = "container@${config.vpn-container.containerName}.service";
in
{
options.pia.wireguard = {
enable = mkEnableOption "Enable private internet access";
badPortForwardPorts = mkOption {
type = types.listOf types.port;
description = ''
Ports that will not be accepted from PIA.
If PIA assigns a port from this list, the connection is aborted since we cannot ask for a different port.
This is used to guarantee we are not assigned a port that is used by a service we do not want exposed.
'';
};
wireguardListenPort = mkOption {
type = types.port;
description = "The port wireguard listens on for this VPN connection";
default = 51820;
};
serverLocation = mkOption {
type = types.str;
default = "swiss";
};
interfaceName = mkOption {
type = types.str;
default = "piaw";
};
forwardedPort = mkOption {
type = types.nullOr types.port;
description = "The port to redirect port forwarded TCP VPN traffic too";
default = null;
};
forwardPortForTransmission = mkEnableOption "PIA port forwarding for transmission should be performed.";
};
config = mkIf cfg.enable {
assertions = [
{
assertion = cfg.forwardPortForTransmission != (cfg.forwardedPort != null);
message = ''
The PIA forwarded port cannot simultaneously be used by transmission and redirected to another port.
'';
}
];
# mounts used to pass the connection parameters to the container
# the container doesn't have internet until it uses these parameters so it cannot fetch them itself
vpn-container.mounts = [
"/tmp/${cfg.interfaceName}.conf"
"/tmp/${cfg.interfaceName}-server.conf"
"/tmp/${cfg.interfaceName}-address.conf"
];
# The container takes ownership of the wireguard interface on its startup
containers.vpn.interfaces = [ cfg.interfaceName ];
# TODO: while this is much better than "loose" networking, it seems to have issues with firewall restarts
# allow traffic for wireguard interface to pass since wireguard trips up rpfilter
# networking.firewall = {
# extraCommands = ''
# ip46tables -t raw -I nixos-fw-rpfilter -p udp -m udp --sport ${toString cfg.wireguardListenPort} -j RETURN
# ip46tables -t raw -I nixos-fw-rpfilter -p udp -m udp --dport ${toString cfg.wireguardListenPort} -j RETURN
# '';
# extraStopCommands = ''
# ip46tables -t raw -D nixos-fw-rpfilter -p udp -m udp --sport ${toString cfg.wireguardListenPort} -j RETURN || true
# ip46tables -t raw -D nixos-fw-rpfilter -p udp -m udp --dport ${toString cfg.wireguardListenPort} -j RETURN || true
# '';
# };
networking.firewall.checkReversePath = "loose";
systemd.services.pia-vpn-wireguard-init = {
description = "Creates PIA VPN Wireguard Interface";
wants = [ "network-online.target" ];
after = [ "network.target" "network-online.target" ];
before = [ containerServiceName ];
requiredBy = [ containerServiceName ];
partOf = [ containerServiceName ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ wireguard-tools jq curl iproute2 iputils ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
# Restart once a month; the PIA forwarded port expires after two months.
# Because the container is "PartOf" this unit, it gets restarted too.
RuntimeMaxSec = "30d";
};
script = ''
echo Waiting for internet...
while ! ping -c 1 -W 1 1.1.1.1; do
sleep 1
done
# Prepare to connect by generating wg secrets and auth'ing with PIA here, since
# the container starts without internet and cannot do this itself. NAT'ing the
# host's internet would address this, but makes leaking traffic outside the VPN more likely.
${chooseWireguardServer}
${getPIAToken}
# generate wireguard keys
privKey=$(wg genkey)
pubKey=$(echo "$privKey" | wg pubkey)
# authorize our WG keys with the PIA server we are about to connect to
wireguard_json=`curl -s -G --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" --data-urlencode "pt=$PIA_TOKEN" --data-urlencode "pubkey=$pubKey" https://$WG_HOSTNAME:$WG_SERVER_PORT/addKey`
# create wg-quick config file
rm -f /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
touch /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
chmod 700 /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
echo "
[Interface]
# Address = $(echo "$wireguard_json" | jq -r '.peer_ip')
PrivateKey = $privKey
ListenPort = ${toString cfg.wireguardListenPort}
[Peer]
PersistentKeepalive = 25
PublicKey = $(echo "$wireguard_json" | jq -r '.server_key')
AllowedIPs = 0.0.0.0/0
Endpoint = $WG_SERVER_IP:$(echo "$wireguard_json" | jq -r '.server_port')
" >> /tmp/${cfg.interfaceName}.conf
# create file storing the VPN ip address PIA assigned to us
echo "$wireguard_json" | jq -r '.peer_ip' >> /tmp/${cfg.interfaceName}-address.conf
# Create wg interface now so it inherits from the namespace with internet access
# the container will handle actually connecting the interface since that info is
# not preserved upon moving into the container's networking namespace
# Roughly following this guide https://www.wireguard.com/netns/#ordinary-containerization
[[ -z $(ip link show dev ${cfg.interfaceName} 2>/dev/null) ]] || exit
ip link add ${cfg.interfaceName} type wireguard
'';
preStop = ''
# cleanup wireguard interface
ip link del ${cfg.interfaceName}
rm -f /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
'';
};
vpn-container.config.systemd.services.pia-vpn-wireguard = {
description = "Initializes the PIA VPN WireGuard Tunnel";
wants = [ "network-online.target" ];
after = [ "network.target" "network-online.target" ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ wireguard-tools iproute2 curl jq iptables ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
# Pseudo wg-quick: the near equivalent of "wg-quick up /tmp/${cfg.interfaceName}.conf"
# We cannot actually call wg-quick because the interface had to be created
# before the container took ownership of it.
# Thus, assumes the wg interface was already created:
# ip link add ${cfg.interfaceName} type wireguard
${getChosenWireguardServer}
myaddress=`cat /tmp/${cfg.interfaceName}-address.conf`
wg setconf ${cfg.interfaceName} /tmp/${cfg.interfaceName}.conf
ip -4 address add $myaddress dev ${cfg.interfaceName}
ip link set mtu 1420 up dev ${cfg.interfaceName}
wg set ${cfg.interfaceName} fwmark ${toString cfg.wireguardListenPort}
ip -4 route add 0.0.0.0/0 dev ${cfg.interfaceName} table ${toString cfg.wireguardListenPort}
# TODO is this needed?
ip -4 rule add not fwmark ${toString cfg.wireguardListenPort} table ${toString cfg.wireguardListenPort}
ip -4 rule add table main suppress_prefixlength 0
# The rest of the script is only for port forwarding; skip if not needed
if [ ${boolToString portForwarding} == false ]; then exit 0; fi
# Reserve port
${getPIAToken}
payload_and_signature=`curl -s -m 5 --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" -G --data-urlencode "token=$PIA_TOKEN" "https://$WG_HOSTNAME:19999/getSignature"`
signature=$(echo "$payload_and_signature" | jq -r '.signature')
payload=$(echo "$payload_and_signature" | jq -r '.payload')
port=$(echo "$payload" | base64 -d | jq -r '.port')
# Check if the port is acceptable
notallowed=(${concatStringsSep " " (map toString cfg.badPortForwardPorts)})
if [[ " ''${notallowed[*]} " =~ " $port " ]]; then
# the port PIA assigned is not allowed, kill the connection
wg-quick down /tmp/${cfg.interfaceName}.conf
exit 1
fi
# write reserved port to file readable for all users
echo $port > /tmp/${cfg.interfaceName}-port
chmod 644 /tmp/${cfg.interfaceName}-port
# write payload and signature info needed to allow refreshing allocated forwarded port
rm -f /tmp/${cfg.interfaceName}-port-renewal
touch /tmp/${cfg.interfaceName}-port-renewal
chmod 700 /tmp/${cfg.interfaceName}-port-renewal
echo $signature >> /tmp/${cfg.interfaceName}-port-renewal
echo $payload >> /tmp/${cfg.interfaceName}-port-renewal
# Block all traffic from VPN interface except for traffic that is from the forwarded port
iptables -I nixos-fw -p tcp --dport $port -j nixos-fw-accept -i ${cfg.interfaceName}
iptables -I nixos-fw -p udp --dport $port -j nixos-fw-accept -i ${cfg.interfaceName}
# The first port refresh triggers the port to be actually allocated
${refreshPIAPort}
${optionalString (cfg.forwardedPort != null) ''
# redirect the forwarded port
iptables -A INPUT -i ${cfg.interfaceName} -p tcp --dport $port -j ACCEPT
iptables -A INPUT -i ${cfg.interfaceName} -p udp --dport $port -j ACCEPT
iptables -A INPUT -i ${cfg.interfaceName} -p tcp --dport ${toString cfg.forwardedPort} -j ACCEPT
iptables -A INPUT -i ${cfg.interfaceName} -p udp --dport ${toString cfg.forwardedPort} -j ACCEPT
iptables -A PREROUTING -t nat -i ${cfg.interfaceName} -p tcp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
iptables -A PREROUTING -t nat -i ${cfg.interfaceName} -p udp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
''}
${optionalString cfg.forwardPortForTransmission ''
# assumes no auth needed for transmission
curlout=$(curl localhost:9091/transmission/rpc 2>/dev/null)
regex='X-Transmission-Session-Id\: (\w*)'
if [[ $curlout =~ $regex ]]; then
sessionId=''${BASH_REMATCH[1]}
else
exit 1
fi
# set the port in transmission
data='{"method": "session-set", "arguments": { "peer-port" :'$port' } }'
curl http://localhost:9091/transmission/rpc -d "$data" -H "X-Transmission-Session-Id: $sessionId"
''}
'';
preStop = ''
wg-quick down /tmp/${cfg.interfaceName}.conf
# The rest of the script is only for port forwarding; skip if not needed
if [ ${boolToString portForwarding} == false ]; then exit 0; fi
${optionalString (cfg.forwardedPort != null) ''
# stop redirecting the forwarded port
iptables -D INPUT -i ${cfg.interfaceName} -p tcp --dport $port -j ACCEPT
iptables -D INPUT -i ${cfg.interfaceName} -p udp --dport $port -j ACCEPT
iptables -D INPUT -i ${cfg.interfaceName} -p tcp --dport ${toString cfg.forwardedPort} -j ACCEPT
iptables -D INPUT -i ${cfg.interfaceName} -p udp --dport ${toString cfg.forwardedPort} -j ACCEPT
iptables -D PREROUTING -t nat -i ${cfg.interfaceName} -p tcp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
iptables -D PREROUTING -t nat -i ${cfg.interfaceName} -p udp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
''}
'';
};
vpn-container.config.systemd.services.pia-vpn-wireguard-forward-port = {
enable = portForwarding;
description = "PIA VPN WireGuard Tunnel Port Forwarding";
after = [ "pia-vpn-wireguard.service" ];
requires = [ "pia-vpn-wireguard.service" ];
path = with pkgs; [ curl ];
serviceConfig = {
Type = "oneshot";
};
script = refreshPIAPort;
};
vpn-container.config.systemd.timers.pia-vpn-wireguard-forward-port = {
enable = portForwarding;
partOf = [ "pia-vpn-wireguard-forward-port.service" ];
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/10"; # 10 minutes
RandomizedDelaySec = "1m"; # vary by 1 min to give PIA servers some relief
};
};
age.secrets."pia-login.conf".file = ../../secrets/pia-login.age;
};
}
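
For reference, the high-level knobs above come together on a host roughly like this (sketch; port numbers are illustrative, and in this config the tunnel normally lives inside the VPN container):

```nix
pia.wireguard = {
  enable = true;
  serverLocation = "swiss";
  # hand the PIA-assigned forwarded port to a local service...
  forwardedPort = 8080;
  # ...but abort the connection if PIA assigns a port we never want exposed
  badPortForwardPorts = [ 22 443 ];
};
```
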

common/network/sandbox.nix

@@ -0,0 +1,126 @@
{ config, lib, ... }:
# Network configuration for sandboxed workspaces (VMs and containers)
# Creates a bridge network with NAT for isolated environments
with lib;
let
cfg = config.networking.sandbox;
in
{
options.networking.sandbox = {
enable = mkEnableOption "sandboxed workspace network bridge";
bridgeName = mkOption {
type = types.str;
default = "sandbox-br";
description = "Name of the bridge interface for sandboxed workspaces";
};
subnet = mkOption {
type = types.str;
default = "192.168.83.0/24";
description = "Subnet for sandboxed workspace network";
};
hostAddress = mkOption {
type = types.str;
default = "192.168.83.1";
description = "Host address on the sandbox bridge";
};
upstreamInterface = mkOption {
type = types.str;
description = "Upstream network interface for NAT";
};
};
config = mkIf cfg.enable {
networking.ip_forward = true;
# Create the bridge interface
systemd.network.netdevs."10-${cfg.bridgeName}" = {
netdevConfig = {
Kind = "bridge";
Name = cfg.bridgeName;
};
};
systemd.network.networks."10-${cfg.bridgeName}" = {
matchConfig.Name = cfg.bridgeName;
networkConfig = {
Address = "${cfg.hostAddress}/24";
DHCPServer = false;
IPv4Forwarding = true;
IPv6Forwarding = false;
IPMasquerade = "ipv4";
};
linkConfig.RequiredForOnline = "no";
};
# Automatically attach VM tap interfaces to the bridge
systemd.network.networks."11-vm" = {
matchConfig.Name = "vm-*";
networkConfig.Bridge = cfg.bridgeName;
linkConfig.RequiredForOnline = "no";
};
# Automatically attach container veth interfaces to the bridge
systemd.network.networks."11-container" = {
matchConfig.Name = "ve-*";
networkConfig.Bridge = cfg.bridgeName;
linkConfig.RequiredForOnline = "no";
};
# NAT configuration for sandboxed workspaces
networking.nat = {
enable = true;
internalInterfaces = [ cfg.bridgeName ];
externalInterface = cfg.upstreamInterface;
};
# Enable systemd-networkd (required for bridge setup)
systemd.network.enable = true;
# When NetworkManager handles primary networking, disable systemd-networkd-wait-online.
# The bridge is the only interface managed by systemd-networkd and it never reaches
# "online" state without connected workspaces. NetworkManager-wait-online.service already
# gates network-online.target for the primary interface.
# On pure systemd-networkd systems (no NM), we just ignore the bridge.
systemd.network.wait-online.enable =
!config.networking.networkmanager.enable;
systemd.network.wait-online.ignoredInterfaces =
lib.mkIf (!config.networking.networkmanager.enable) [ cfg.bridgeName ];
# If NetworkManager is enabled, tell it to ignore sandbox interfaces
# This allows systemd-networkd and NetworkManager to coexist
networking.networkmanager.unmanaged = [
"interface-name:${cfg.bridgeName}"
"interface-name:vm-*"
"interface-name:ve-*"
"interface-name:veth*"
];
# Make systemd-resolved listen on the bridge for workspace DNS queries.
# By default resolved only listens on 127.0.0.53 (localhost).
# DNSStubListenerExtra adds the bridge address so workspaces can use the host as DNS.
services.resolved.settings.Resolve.DNSStubListenerExtra = cfg.hostAddress;
# Allow DNS traffic from workspaces to the host
networking.firewall.interfaces.${cfg.bridgeName} = {
allowedTCPPorts = [ 53 ];
allowedUDPPorts = [ 53 ];
};
# Block sandboxes from reaching the local network (private RFC1918 ranges)
# while still allowing public internet access via NAT.
# The sandbox subnet itself is allowed so workspaces can reach the host gateway.
networking.firewall.extraForwardRules = ''
iifname ${cfg.bridgeName} ip daddr ${cfg.hostAddress} accept
iifname ${cfg.bridgeName} ip daddr 10.0.0.0/8 drop
iifname ${cfg.bridgeName} ip daddr 172.16.0.0/12 drop
iifname ${cfg.bridgeName} ip daddr 192.168.0.0/16 drop
'';
};
}


@@ -8,7 +8,15 @@ in
{
options.services.tailscale.exitNode = mkEnableOption "Enable exit node support";
config.services.tailscale.enable = !config.boot.isContainer;
config.services.tailscale.enable = mkDefault (!config.boot.isContainer);
# Trust Tailscale interface - access control is handled by Tailscale ACLs.
# Required because nftables (used by Incus) breaks Tailscale's automatic iptables rules.
config.networking.firewall.trustedInterfaces = mkIf cfg.enable [ "tailscale0" ];
# MagicDNS
config.networking.nameservers = mkIf cfg.enable [ "1.1.1.1" "8.8.8.8" ];
config.networking.search = mkIf cfg.enable [ "koi-bebop.ts.net" ];
# exit node
config.networking.firewall.checkReversePath = mkIf cfg.exitNode "loose";


@@ -1,4 +1,4 @@
{ config, pkgs, lib, allModules, ... }:
{ config, lib, allModules, ... }:
with lib;
@@ -26,6 +26,8 @@ in
'';
};
useOpenVPN = mkEnableOption "Uses OpenVPN instead of wireguard for PIA VPN connection";
config = mkOption {
type = types.anything;
default = { };
@@ -41,6 +43,9 @@ in
};
config = mkIf cfg.enable {
pia.wireguard.enable = !cfg.useOpenVPN;
pia.wireguard.forwardPortForTransmission = !cfg.useOpenVPN;
containers.${cfg.containerName} = {
ephemeral = true;
autoStart = true;
@@ -59,7 +64,7 @@ in
}
)));
enableTun = true;
enableTun = cfg.useOpenVPN;
privateNetwork = true;
hostAddress = "172.16.100.1";
localAddress = "172.16.100.2";
@@ -67,28 +72,32 @@ in
config = {
imports = allModules ++ [ cfg.config ];
nixpkgs.pkgs = pkgs;
# networking.firewall.enable = mkForce false;
networking.firewall.trustedInterfaces = [
# completely trust internal interface to host
"eth0"
];
networking.firewall.enable = mkForce false;
pia.enable = true;
pia.server = "swiss.privacy.network"; # swiss vpn
pia.openvpn.enable = cfg.useOpenVPN;
pia.openvpn.server = "swiss.privacy.network"; # swiss vpn
# TODO fix so it runs its own resolver again
# run its own DNS resolver
networking.useHostResolvConf = false;
services.resolved.enable = true;
# services.resolved.enable = true;
networking.nameservers = [ "1.1.1.1" "8.8.8.8" ];
};
};
# load secrets the container needs
age.secrets = config.containers.${cfg.containerName}.config.age.secrets;
# forwarding for vpn container
networking.nat.enable = true;
networking.nat.internalInterfaces = [
# forwarding for vpn container (only for OpenVPN)
networking.nat.enable = mkIf cfg.useOpenVPN true;
networking.nat.internalInterfaces = mkIf cfg.useOpenVPN [
"ve-${cfg.containerName}"
];
networking.ip_forward = true;
networking.ip_forward = mkIf cfg.useOpenVPN true;
# assumes only one potential interface
networking.usePredictableInterfaceNames = false;


@@ -1,14 +0,0 @@
{ lib, config, ... }:
let
cfg = config.services.zerotierone;
in {
config = lib.mkIf cfg.enable {
services.zerotierone.joinNetworks = [
"565799d8f6d654c0"
];
networking.firewall.allowedUDPPorts = [
9993
];
};
}

common/nix-builder.nix

@@ -0,0 +1,56 @@
{ config, lib, ... }:
let
builderUserName = "nix-builder";
builderRole = "nix-builder";
builders = config.machines.withRole.${builderRole};
thisMachineIsABuilder = config.thisMachine.hasRole.${builderRole};
# builders don't include themselves as a remote builder
otherBuilders = lib.filter (hostname: hostname != config.networking.hostName) builders;
in
lib.mkMerge [
# configure builder
(lib.mkIf thisMachineIsABuilder {
users.users.${builderUserName} = {
description = "Distributed Nix Build User";
group = builderUserName;
isSystemUser = true;
createHome = true;
home = "/var/lib/nix-builder";
useDefaultShell = true;
openssh.authorizedKeys.keys = builtins.map
(builderCfg: builderCfg.hostKey)
(builtins.attrValues config.machines.hosts);
};
users.groups.${builderUserName} = { };
nix.settings.trusted-users = [
builderUserName
];
})
# use each builder
{
nix.distributedBuilds = true;
nix.buildMachines = builtins.map
(builderHostname: {
hostName = builderHostname;
system = config.machines.hosts.${builderHostname}.arch;
protocol = "ssh-ng";
sshUser = builderUserName;
sshKey = "/etc/ssh/ssh_host_ed25519_key";
maxJobs = 3;
speedFactor = 10;
supportedFeatures = [ "nixos-test" "benchmark" "big-parallel" "kvm" ];
})
otherBuilders;
# It is very likely that the builder's internet is faster or just as fast
nix.extraOptions = ''
builders-use-substitutes = true
'';
}
]
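
With the role plumbing above, making a machine a distributed builder is purely declarative: add the role to its `properties.nix` and every other machine starts using it (sketch):

```nix
# in the builder's properties.nix (illustrative excerpt)
{
  systemRoles = [ "server" "nix-builder" ];
  # ...hostNames, arch, hostKey as usual
}
```
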


@@ -1,8 +1,9 @@
{ lib, config, pkgs, ... }:
{ lib, config, ... }:
let
cfg = config.de;
in {
in
{
config = lib.mkIf cfg.enable {
# enable pulseaudio support for packages
nixpkgs.config.pulseaudio = true;
@@ -16,44 +17,14 @@ in {
alsa.support32Bit = true;
pulse.enable = true;
jack.enable = true;
};
# use the example session manager (no others are packaged yet so this is enabled by default,
# no need to redefine it in your config for now)
#media-session.enable = true;
config.pipewire = {
"context.objects" = [
{
# A default dummy driver. This handles nodes marked with the "node.always-driver"
# property when no other driver is currently active. JACK clients need this.
factory = "spa-node-factory";
args = {
"factory.name" = "support.node.driver";
"node.name" = "Dummy-Driver";
"priority.driver" = 8000;
};
}
{
factory = "adapter";
args = {
"factory.name" = "support.null-audio-sink";
"node.name" = "Microphone-Proxy";
"node.description" = "Microphone";
"media.class" = "Audio/Source/Virtual";
"audio.position" = "MONO";
};
}
{
factory = "adapter";
args = {
"factory.name" = "support.null-audio-sink";
"node.name" = "Main-Output-Proxy";
"node.description" = "Main Output";
"media.class" = "Audio/Sink";
"audio.position" = "FL,FR";
};
}
];
services.pipewire.extraConfig.pipewire."92-fix-wine-audio" = {
context.properties = {
default.clock.rate = 48000;
default.clock.quantum = 256;
default.clock.min-quantum = 256;
default.clock.max-quantum = 2048;
};
};


@@ -17,39 +17,8 @@ let
"PREFIX=$(out)"
];
};
nvidia-vaapi-driver = pkgs.stdenv.mkDerivation rec {
pname = "nvidia-vaapi-driver";
version = "0.0.5";
src = pkgs.fetchFromGitHub {
owner = "elFarto";
repo = pname;
rev = "v${version}";
sha256 = "2bycqKolVoaHK64XYcReteuaON9TjzrFhaG5kty28YY=";
};
patches = [
./use-meson-v57.patch
];
nativeBuildInputs = with pkgs; [
meson
cmake
ninja
pkg-config
];
buildInputs = with pkgs; [
nv-codec-headers-11-1-5-1
libva
gst_all_1.gstreamer
gst_all_1.gst-plugins-bad
libglvnd
];
};
in {
in
{
config = lib.mkIf cfg.enable {
# chromium with specific extensions + settings
programs.chromium = {
@@ -77,27 +46,23 @@ in {
# hardware accelerated video playback (on intel)
nixpkgs.config.packageOverrides = pkgs: {
vaapiIntel = pkgs.vaapiIntel.override { enableHybridCodec = true; };
chromium = pkgs.chromium.override {
enableWideVine = true;
# ungoogled = true;
# --enable-native-gpu-memory-buffers # fails on AMD APU
# --enable-webrtc-vp9-support
commandLineArgs = "--use-vulkan --use-gl=desktop --enable-zero-copy --enable-hardware-overlays --enable-features=VaapiVideoDecoder,CanvasOopRasterization --ignore-gpu-blocklist --enable-accelerated-mjpeg-decode --enable-accelerated-video --enable-gpu-rasterization";
commandLineArgs = "--use-vulkan";
};
};
# todo vulkan in chrome
# todo video encoding in chrome
hardware.opengl = {
hardware.graphics = {
enable = true;
extraPackages = with pkgs; [
intel-media-driver # LIBVA_DRIVER_NAME=iHD
vaapiIntel # LIBVA_DRIVER_NAME=i965 (older but works better for Firefox/Chromium)
# vaapiVdpau
libvdpau-va-gl
nvidia-vaapi-driver
];
extraPackages32 = with pkgs.pkgsi686Linux; [ vaapiIntel ];
};
};
}


@@ -2,22 +2,21 @@
let
cfg = config.de;
in {
in
{
imports = [
./kde.nix
./xfce.nix
./yubikey.nix
./chromium.nix
# ./firefox.nix
./firefox.nix
./audio.nix
# ./torbrowser.nix
./pithos.nix
./spotify.nix
./vscodium.nix
./discord.nix
./steam.nix
./touchpad.nix
./mount-samba.nix
./udev.nix
./virtualisation.nix
];
options.de = {
@@ -25,9 +24,10 @@ in {
};
config = lib.mkIf cfg.enable {
# vulkan
hardware.opengl.driSupport = true;
hardware.opengl.driSupport32Bit = true;
environment.systemPackages = with pkgs; [
# https://github.com/NixOS/nixpkgs/pull/328086#issuecomment-2235384618
gparted
];
# Applications
users.users.googlebot.packages = with pkgs; [
@@ -36,39 +36,81 @@ in {
mumble
tigervnc
bluez-tools
vscodium
element-desktop
mpv
nextcloud-client
signal-desktop
minecraft
gparted
libreoffice-fresh
thunderbird
spotifyd
spotify-qt
spotify
arduino
yt-dlp
jellyfin-media-player
joplin-desktop
config.inputs.deploy-rs.packages.${config.currentSystem}.deploy-rs
lxqt.pavucontrol-qt
deskflow
file-roller
android-tools
logseq
# For Nix IDE
nixpkgs-fmt
nixd
nil
godot-mono
];
# Networking
networking.networkmanager.enable = true;
users.users.googlebot.extraGroups = [ "networkmanager" ];
# Printing
services.printing.enable = true;
services.printing.drivers = with pkgs; [
gutenprint
];
# Printer discovery
services.avahi.enable = true;
services.avahi.nssmdns = true;
programs.file-roller.enable = true;
# Scanning
hardware.sane.enable = true;
hardware.sane.extraBackends = with pkgs; [
# Enable support for "driverless" scanners
# Check for support here: https://mfi.apple.com/account/airprint-search
sane-airscan
];
# Printer/Scanner discovery
services.avahi.enable = true;
services.avahi.nssmdns4 = true;
# Security
services.gnome.gnome-keyring.enable = true;
security.pam.services.googlebot.enableGnomeKeyring = true;
# Mount personal SMB stores
services.mount-samba.enable = true;
# allow building ARM derivations
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
# for luks unlock over tor
services.tor.enable = true;
services.tor.client.enable = true;
# Enable wayland support in various chromium based applications
environment.sessionVariables.NIXOS_OZONE_WL = "1";
fonts.packages = with pkgs; [ nerd-fonts.symbols-only ];
# SSH Ask pass
programs.ssh.enableAskPassword = true;
programs.ssh.askPassword = "${pkgs.kdePackages.ksshaskpass}/bin/ksshaskpass";
users.users.googlebot.extraGroups = [
# Networking
"networkmanager"
# Scanning
"scanner"
"lp"
];
};
}


@@ -2,7 +2,8 @@
let
cfg = config.de;
in {
in
{
config = lib.mkIf cfg.enable {
users.users.googlebot.packages = [
pkgs.discord


@@ -20,31 +20,6 @@ let
};
firefox = pkgs.wrapFirefox somewhatPrivateFF {
desktopName = "Sneed Browser";
nixExtensions = [
(pkgs.fetchFirefoxAddon {
name = "ublock-origin";
url = "https://addons.mozilla.org/firefox/downloads/file/3719054/ublock_origin-1.33.2-an+fx.xpi";
sha256 = "XDpe9vW1R1iVBTI4AmNgAg1nk7BVQdIAMuqd0cnK5FE=";
})
(pkgs.fetchFirefoxAddon {
name = "sponsorblock";
url = "https://addons.mozilla.org/firefox/downloads/file/3720594/sponsorblock_skip_sponsorships_on_youtube-2.0.12.3-an+fx.xpi";
sha256 = "HRtnmZWyXN3MKo4AvSYgNJGkBEsa2RaMamFbkz+YzQg=";
})
(pkgs.fetchFirefoxAddon {
name = "KeePassXC-Browser";
url = "https://addons.mozilla.org/firefox/downloads/file/3720664/keepassxc_browser-1.7.6-fx.xpi";
sha256 = "3K404/eq3amHhIT0WhzQtC892he5I0kp2SvbzE9dbZg=";
})
(pkgs.fetchFirefoxAddon {
name = "https-everywhere";
url = "https://addons.mozilla.org/firefox/downloads/file/3716461/https_everywhere-2021.1.27-an+fx.xpi";
sha256 = "2gSXSLunKCwPjAq4Wsj0lOeV551r3G+fcm1oeqjMKh8=";
})
];
extraPolicies = {
CaptivePortal = false;
DisableFirefoxStudies = true;
@@ -74,12 +49,6 @@ let
ExtensionRecommendations = false;
SkipOnboarding = true;
};
WebsiteFilter = {
Block = [
"http://paradigminteractive.io/"
"https://paradigminteractive.io/"
];
};
};
extraPrefs = ''


@@ -2,22 +2,21 @@
let
cfg = config.de;
in {
in
{
config = lib.mkIf cfg.enable {
# kde plasma
services.xserver = {
enable = true;
desktopManager.plasma5.enable = true;
displayManager.sddm.enable = true;
};
services.displayManager.sddm.enable = true;
services.displayManager.sddm.wayland.enable = true;
services.desktopManager.plasma6.enable = true;
# kde apps
nixpkgs.config.firefox.enablePlasmaBrowserIntegration = true;
users.users.googlebot.packages = with pkgs; [
# akonadi
# kmail
# plasma5Packages.kmail-account-wizard
kate
kdePackages.kate
kdePackages.kdeconnect-kde
kdePackages.skanpage
];
};
}


@@ -1,36 +1,50 @@
# mounts the samba share on s0 over zeroteir
# mounts the samba share on s0 over tailscale
{ config, lib, ... }:
{ config, lib, pkgs, ... }:
let
cfg = config.services.mount-samba;
# prevents hanging on network split
network_opts = "x-systemd.automount,noauto,x-systemd.idle-timeout=60,x-systemd.device-timeout=5s,x-systemd.mount-timeout=5s,nostrictsync,cache=loose,handlecache,handletimeout=30000,rwpidforward,mapposix,soft,resilienthandles,echo_interval=10,noblocksend";
# prevents hanging on network split, plus similar niceties to ensure a stable connection
network_opts = "nostrictsync,cache=strict,handlecache,handletimeout=30000,rwpidforward,mapposix,soft,resilienthandles,echo_interval=10,noblocksend,fsc";
systemd_opts = "x-systemd.automount,noauto,x-systemd.idle-timeout=60,x-systemd.device-timeout=5s,x-systemd.mount-timeout=5s";
user_opts = "uid=${toString config.users.users.googlebot.uid},file_mode=0660,dir_mode=0770,user";
auth_opts = "credentials=/run/agenix/smb-secrets";
version_opts = "vers=2.1";
auth_opts = "sec=ntlmv2i,credentials=/run/agenix/smb-secrets";
version_opts = "vers=3.1.1";
opts = "${network_opts},${user_opts},${version_opts},${auth_opts}";
in {
public_user_opts = "gid=${toString config.users.groups.users.gid}";
opts = "${systemd_opts},${network_opts},${user_opts},${version_opts},${auth_opts}";
in
{
options.services.mount-samba = {
enable = lib.mkEnableOption "enable mounting samba shares";
};
config = lib.mkIf (cfg.enable && config.services.zerotierone.enable) {
config = lib.mkIf (cfg.enable && config.services.tailscale.enable) {
fileSystems."/mnt/public" = {
device = "//s0.zt.neet.dev/public";
device = "//s0.koi-bebop.ts.net/public";
fsType = "cifs";
options = [ opts ];
options = [ "${opts},${public_user_opts}" ];
};
fileSystems."/mnt/private" = {
device = "//s0.zt.neet.dev/googlebot";
device = "//s0.koi-bebop.ts.net/googlebot";
fsType = "cifs";
options = [ opts ];
};
age.secrets.smb-secrets.file = ../../secrets/smb-secrets.age;
environment.shellAliases = {
# remount storage
remount_public = "sudo systemctl restart mnt-public.mount";
remount_private = "sudo systemctl restart mnt-private.mount";
# Encrypted Vault
vault_unlock = "${pkgs.gocryptfs}/bin/gocryptfs /mnt/private/.vault/ /mnt/vault/";
vault_lock = "umount /mnt/vault/";
};
};
}

View File

@@ -1,76 +0,0 @@
{ lib, config, pkgs, ... }:
with lib;
let
cfg = config.services.pia;
in {
imports = [
./pia.nix
];
options.services.pia = {
enable = lib.mkEnableOption "Enable PIA Client";
dataDir = lib.mkOption {
type = lib.types.str;
default = "/var/lib/pia";
description = ''
Path to the pia data directory
'';
};
user = lib.mkOption {
type = lib.types.str;
default = "root";
description = ''
The user pia should run as
'';
};
group = lib.mkOption {
type = lib.types.str;
default = "piagrp";
description = ''
The group pia should run as
'';
};
users = mkOption {
type = with types; listOf str;
default = [];
description = ''
Usernames to be added to the "spotifyd" group, so that they
can start and interact with the userspace daemon.
'';
};
};
config = mkIf cfg.enable {
# users.users.${cfg.user} =
# if cfg.user == "pia" then {
# isSystemUser = true;
# group = cfg.group;
# home = cfg.dataDir;
# createHome = true;
# }
# else {};
users.groups.${cfg.group}.members = cfg.users;
systemd.services.pia-daemon = {
enable = true;
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig.ExecStart = "${pkgs.pia-daemon}/bin/pia-daemon";
serviceConfig.PrivateTmp="yes";
serviceConfig.User = cfg.user;
serviceConfig.Group = cfg.group;
preStart = ''
mkdir -p ${cfg.dataDir}
chown ${cfg.user}:${cfg.group} ${cfg.dataDir}
'';
};
};
}

View File

@@ -1,147 +0,0 @@
diff --git a/Rakefile b/Rakefile
index fa6d771..bcd6fb1 100644
--- a/Rakefile
+++ b/Rakefile
@@ -151,41 +151,6 @@ end
# Install LICENSE.txt
stage.install('LICENSE.txt', :res)
-# Download server lists to ship preloaded copies with the app. These tasks
-# depend on version.txt so they're refreshed periodically (whenver a new commit
-# is made), but not for every build.
-#
-# SERVER_DATA_DIR can be set to use existing files instead of downloading them;
-# this is primarily intended for reproducing a build.
-#
-# Create a probe for SERVER_DATA_DIR so these are updated if it changes.
-serverDataProbe = Probe.new('serverdata')
-serverDataProbe.file('serverdata.txt', "#{ENV['SERVER_DATA_DIR']}")
-# JSON resource build directory
-jsonFetched = Build.new('json-fetched')
-# These are the assets we need to fetch and the URIs we get them from
-{
- 'modern_shadowsocks.json': 'https://serverlist.piaservers.net/shadow_socks',
- 'modern_servers.json': 'https://serverlist.piaservers.net/vpninfo/servers/v6',
- 'modern_region_meta.json': 'https://serverlist.piaservers.net/vpninfo/regions/v2'
-}.each do |k, v|
- fetchedFile = jsonFetched.artifact(k.to_s)
- serverDataDir = ENV['SERVER_DATA_DIR']
- file fetchedFile => [version.artifact('version.txt'),
- serverDataProbe.artifact('serverdata.txt'),
- jsonFetched.componentDir] do |t|
- if(serverDataDir)
- # Use the copy provided instead of fetching (for reproducing a build)
- File.copy(File.join(serverDataDir, k), fetchedFile)
- else
- # Fetch from the web API (write with "binary" mode so LF is not
- # converted to CRLF on Windows)
- File.binwrite(t.name, Net::HTTP.get(URI(v)))
- end
- end
- stage.install(fetchedFile, :res)
-end
-
# Install version/brand/arch info in case an upgrade needs to know what is
# currently installed
stage.install(version.artifact('version.txt'), :res)
diff --git a/common/src/posix/unixsignalhandler.cpp b/common/src/posix/unixsignalhandler.cpp
index f820a6d..e1b6c33 100644
--- a/common/src/posix/unixsignalhandler.cpp
+++ b/common/src/posix/unixsignalhandler.cpp
@@ -132,7 +132,7 @@ void UnixSignalHandler::_signalHandler(int, siginfo_t *info, void *)
// we checked it, we can't even log because the logger is not reentrant.
auto pThis = instance();
if(pThis)
- ::write(pThis->_sigFd[0], info, sizeof(siginfo_t));
+ auto _ = ::write(pThis->_sigFd[0], info, sizeof(siginfo_t));
}
template<int Signal>
void UnixSignalHandler::setAbortAction()
diff --git a/daemon/src/linux/linux_nl.cpp b/daemon/src/linux/linux_nl.cpp
index fd3aced..2367a5e 100644
--- a/daemon/src/linux/linux_nl.cpp
+++ b/daemon/src/linux/linux_nl.cpp
@@ -642,6 +642,6 @@ LinuxNl::~LinuxNl()
unsigned char term = 0;
PosixFd killSocket = _workerKillSocket.get();
if(killSocket)
- ::write(killSocket.get(), &term, sizeof(term));
+ auto _ = ::write(killSocket.get(), &term, sizeof(term));
_workerThread.join();
}
diff --git a/extras/support-tool/launcher/linux-launcher.cpp b/extras/support-tool/launcher/linux-launcher.cpp
index 3f63ac2..420d54d 100644
--- a/extras/support-tool/launcher/linux-launcher.cpp
+++ b/extras/support-tool/launcher/linux-launcher.cpp
@@ -48,7 +48,7 @@ int fork_execv(gid_t gid, char *filename, char *const argv[])
if(forkResult == 0)
{
// Apply gid as both real and effective
- setregid(gid, gid);
+ auto _ = setregid(gid, gid);
int execErr = execv(filename, argv);
std::cerr << "exec err: " << execErr << " / " << errno << " - "
diff --git a/rake/model/qt.rb b/rake/model/qt.rb
index c8cd362..a6abe59 100644
--- a/rake/model/qt.rb
+++ b/rake/model/qt.rb
@@ -171,12 +171,7 @@ class Qt
end
def getQtRoot(qtVersion, arch)
- qtToolchainPtns = getQtToolchainPatterns(arch)
- qtRoots = FileList[*Util.joinPaths([[qtVersion], qtToolchainPtns])]
- # Explicitly filter for existing paths - if the pattern has wildcards
- # we only get existing directories, but if the patterns are just
- # alternates with no wildcards, we can get directories that don't exist
- qtRoots.find_all { |r| File.exist?(r) }.max
+ ENV['QTROOT']
end
def getQtVersionScore(minor, patch)
@@ -192,12 +187,7 @@ class Qt
end
def getQtPathVersion(path)
- verMatch = path.match('^.*/Qt[^/]*/5\.(\d+)\.?(\d*)$')
- if(verMatch == nil)
- nil
- else
- [verMatch[1].to_i, verMatch[2].to_i]
- end
+ [ENV['QT_MAJOR'].to_i, ENV['QT_MINOR'].to_i]
end
# Build a component definition with the defaults. The "Core" component will
diff --git a/rake/product/linux.rb b/rake/product/linux.rb
index f43fb3e..83505af 100644
--- a/rake/product/linux.rb
+++ b/rake/product/linux.rb
@@ -18,8 +18,7 @@ module PiaLinux
QT_BINARIES = %w(pia-client pia-daemon piactl pia-support-tool)
# Version of libicu (needed to determine lib*.so.## file names in deployment)
- ICU_VERSION = FileList[File.join(Executable::Qt.targetQtRoot, 'lib', 'libicudata.so.*')]
- .first.match(/libicudata\.so\.(\d+)(\..*|)/)[1]
+ ICU_VERSION = ENV['ICU_MAJOR'].to_i;
# Copy a directory recursively, excluding *.debug files (debugging symbols)
def self.copyWithoutDebug(sourceDir, destDir)
@@ -220,16 +219,5 @@ module PiaLinux
# Since these are just development workflow tools, they can be skipped if
# specific dependencies are not available.
def self.defineTools(toolsStage)
- # Test if we have libthai-dev, for the Thai word breaking utility
- if(Executable::Tc.sysHeaderAvailable?('thai/thwbrk.h'))
- Executable.new('thaibreak')
- .source('tools/thaibreak')
- .lib('thai')
- .install(toolsStage, :bin)
- toolsStage.install('tools/thaibreak/thai_ts.sh', :bin)
- toolsStage.install('tools/onesky_import/import_translations.sh', :bin)
- else
- puts "skipping thaibreak utility, install libthai-dev to build thaibreak"
- end
end
end

View File

@@ -1,139 +0,0 @@
{ pkgs, lib, config, ... }:
{
nixpkgs.overlays = [
(self: super:
with self;
let
# arch = builtins.elemAt (lib.strings.splitString "-" builtins.currentSystem) 0;
arch = "x86_64";
pia-desktop = clangStdenv.mkDerivation rec {
pname = "pia-desktop";
version = "3.3.0";
src = fetchgit {
url = "https://github.com/pia-foss/desktop";
rev = version;
fetchLFS = true;
sha256 = "D9txL5MUWyRYTnsnhlQdYT4dGVpj8PFsVa5hkrb36cw=";
};
patches = [
./fix-pia.patch
];
nativeBuildInputs = [
cmake
rake
];
prePatch = ''
sed -i 's|/usr/include/libnl3|${libnl.dev}/include/libnl3|' Rakefile
'';
installPhase = ''
mkdir -p $out/bin $out/lib $out/share
cp -r ../out/pia_release_${arch}/stage/bin $out
cp -r ../out/pia_release_${arch}/stage/lib $out
cp -r ../out/pia_release_${arch}/stage/share $out
'';
cmakeFlags = [
"-DCMAKE_BUILD_TYPE=Release"
];
QTROOT = "${qt5.full}";
QT_MAJOR = lib.versions.minor (lib.strings.parseDrvName qt5.full.name).version;
QT_MINOR = lib.versions.patch (lib.strings.parseDrvName qt5.full.name).version;
ICU_MAJOR = lib.versions.major (lib.strings.parseDrvName icu.name).version;
buildInputs = [
mesa
libsForQt5.qt5.qtquickcontrols
libsForQt5.qt5.qtquickcontrols2
icu
libnl
];
dontWrapQtApps = true;
};
in rec {
openvpn-updown = buildFHSUserEnv {
name = "openvpn-updown";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "openvpn-updown.sh";
};
pia-client = buildFHSUserEnv {
name = "pia-client";
targetPkgs = pkgs: (with pkgs; [
pia-desktop
xorg.libXau
xorg.libXdmcp
]);
runScript = "pia-client";
};
piactl = buildFHSUserEnv {
name = "piactl";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "piactl";
};
pia-daemon = buildFHSUserEnv {
name = "pia-daemon";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "pia-daemon";
};
pia-hnsd = buildFHSUserEnv {
name = "pia-hnsd";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "pia-hnsd";
};
pia-openvpn = buildFHSUserEnv {
name = "pia-openvpn";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "pia-openvpn";
};
pia-ss-local = buildFHSUserEnv {
name = "pia-ss-local";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "pia-ss-local";
};
pia-support-tool = buildFHSUserEnv {
name = "pia-support-tool";
targetPkgs = pkgs: (with pkgs; [
pia-desktop
xorg.libXau
xorg.libXdmcp
]);
runScript = "pia-support-tool";
};
pia-unbound = buildFHSUserEnv {
name = "pia-unbound";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "pia-unbound";
};
pia-wireguard-go = buildFHSUserEnv {
name = "pia-wireguard-go";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "pia-wireguard-go";
};
support-tool-launcher = buildFHSUserEnv {
name = "support-tool-launcher";
targetPkgs = pkgs: (with pkgs; [ pia-desktop ]);
runScript = "support-tool-launcher";
};
})
];
}

View File

@@ -2,7 +2,8 @@
let
cfg = config.de;
in {
in
{
config = lib.mkIf cfg.enable {
nixpkgs.overlays = [
(self: super: {

View File

@@ -1,86 +0,0 @@
{ lib, config, pkgs, ... }:
with lib;
let
cfg = config.services.spotifyd;
toml = pkgs.formats.toml {};
spotifydConf = toml.generate "spotify.conf" cfg.settings;
in
{
disabledModules = [
"services/audio/spotifyd.nix"
];
options = {
services.spotifyd = {
enable = mkEnableOption "spotifyd, a Spotify playing daemon";
settings = mkOption {
default = {};
type = toml.type;
example = { global.bitrate = 320; };
description = ''
Configuration for Spotifyd. For syntax and directives, see
<link xlink:href="https://github.com/Spotifyd/spotifyd#Configuration"/>.
'';
};
users = mkOption {
type = with types; listOf str;
default = [];
description = ''
Usernames to be added to the "spotifyd" group, so that they
can start and interact with the userspace daemon.
'';
};
};
};
config = mkIf cfg.enable {
# username specific stuff because i'm lazy...
services.spotifyd.users = [ "googlebot" ];
users.users.googlebot.packages = with pkgs; [
spotify
spotify-tui
];
users.groups.spotifyd = {
members = cfg.users;
};
age.secrets.spotifyd = {
file = ../../secrets/spotifyd.age;
group = "spotifyd";
mode = "0440"; # group can read
};
# spotifyd to read secrets and run as user service
services.spotifyd = {
settings.global = {
username_cmd = "sed '1q;d' /run/agenix/spotifyd";
password_cmd = "sed '2q;d' /run/agenix/spotifyd";
bitrate = 320;
backend = "pulseaudio";
device_name = config.networking.hostName;
device_type = "computer";
# on_song_change_hook = "command_to_run_on_playback_events"
autoplay = true;
};
};
systemd.user.services.spotifyd-daemon = {
enable = true;
wantedBy = [ "graphical-session.target" ];
partOf = [ "graphical-session.target" ];
description = "spotifyd, a Spotify playing daemon";
environment.SHELL = "/bin/sh";
serviceConfig = {
ExecStart = "${pkgs.spotifyd}/bin/spotifyd --no-daemon --config-path ${spotifydConf}";
Restart = "always";
CacheDirectory = "spotifyd";
};
};
};
}

View File

@@ -2,7 +2,8 @@
let
cfg = config.de;
in {
in
{
config = lib.mkIf cfg.enable {
programs.steam.enable = true;
hardware.steam-hardware.enable = true; # steam controller

View File

@@ -1,24 +0,0 @@
{ lib, config, pkgs, ... }:
let
cfg = config.de;
in {
config = lib.mkIf cfg.enable {
nixpkgs.overlays = [
(self: super: {
tor-browser-bundle-bin = super.tor-browser-bundle-bin.overrideAttrs (old: rec {
version = "10.0.10";
lang = "en-US";
src = pkgs.fetchurl {
url = "https://dist.torproject.org/torbrowser/${version}/tor-browser-linux64-${version}_${lang}.tar.xz";
sha256 = "vYWZ+NsGN8YH5O61+zrUjlFv3rieaBqjBQ+a18sQcZg=";
};
});
})
];
users.users.googlebot.packages = with pkgs; [
tor-browser-bundle-bin
];
};
}

View File

@@ -1,14 +1,11 @@
{ lib, config, pkgs, ... }:
{ lib, config, ... }:
let
cfg = config.de.touchpad;
in {
options.de.touchpad = {
enable = lib.mkEnableOption "enable touchpad";
};
cfg = config.de;
in
{
config = lib.mkIf cfg.enable {
services.xserver.libinput.enable = true;
services.xserver.libinput.touchpad.naturalScrolling = true;
services.libinput.enable = true;
services.libinput.touchpad.naturalScrolling = true;
};
}

common/pc/udev.nix Normal file
View File

@@ -0,0 +1,25 @@
{ config, lib, pkgs, ... }:
let
cfg = config.de;
in
{
config = lib.mkIf cfg.enable {
services.udev.extraRules = ''
# depthai
SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"
# Moonlander
# Rules for Oryx web flashing and live training
KERNEL=="hidraw*", ATTRS{idVendor}=="16c0", MODE="0664", GROUP="plugdev"
KERNEL=="hidraw*", ATTRS{idVendor}=="3297", MODE="0664", GROUP="plugdev"
# Wally Flashing rules for the Moonlander and Planck EZ
SUBSYSTEMS=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="df11", MODE:="0666", SYMLINK+="stm32_dfu"
'';
services.udev.packages = [ pkgs.platformio ];
users.groups.plugdev = {
members = [ "googlebot" ];
};
};
}

View File

@@ -1,22 +0,0 @@
diff --git a/meson.build b/meson.build
index dace367..8c0e290 100644
--- a/meson.build
+++ b/meson.build
@@ -8,7 +8,7 @@ project(
'warning_level=0',
],
license: 'MIT',
- meson_version: '>= 0.58.0',
+ meson_version: '>= 0.57.0',
)
cc = meson.get_compiler('c')
@@ -47,8 +47,3 @@ shared_library(
gnu_symbol_visibility: 'hidden',
)
-meson.add_devenv(environment({
- 'NVD_LOG': '1',
- 'LIBVA_DRIVER_NAME': 'nvidia',
- 'LIBVA_DRIVERS_PATH': meson.project_build_root(),
-}))

View File

@@ -0,0 +1,23 @@
{ config, lib, pkgs, ... }:
let
cfg = config.de;
in
{
config = lib.mkIf cfg.enable {
# AppVMs
virtualisation.appvm.enable = true;
virtualisation.appvm.user = "googlebot";
# Use podman instead of docker
virtualisation.podman.enable = true;
virtualisation.podman.dockerCompat = true;
# virt-manager
virtualisation.libvirtd.enable = true;
programs.dconf.enable = true;
virtualisation.spiceUSBRedirection.enable = true;
environment.systemPackages = with pkgs; [ virt-manager ];
users.users.googlebot.extraGroups = [ "libvirtd" "adbusers" ];
};
}

View File

@@ -1,22 +0,0 @@
{ lib, config, pkgs, ... }:
let
cfg = config.de;
extensions = with pkgs.vscode-extensions; [
# bbenoist.Nix # nix syntax support
# arrterian.nix-env-selector # nix dev envs
];
vscodium-with-extensions = pkgs.vscode-with-extensions.override {
vscode = pkgs.vscodium;
vscodeExtensions = extensions;
};
in
{
config = lib.mkIf cfg.enable {
users.users.googlebot.packages = [
vscodium-with-extensions
];
};
}

View File

@@ -1,22 +0,0 @@
{ lib, config, pkgs, ... }:
let
cfg = config.de;
in {
config = lib.mkIf cfg.enable {
services.xserver = {
enable = true;
desktopManager = {
xterm.enable = false;
xfce.enable = true;
};
displayManager.sddm.enable = true;
};
# xfce apps
# TODO for some reason whiskermenu needs to be global for it to work
environment.systemPackages = with pkgs; [
xfce.xfce4-whiskermenu-plugin
];
};
}

View File

@@ -2,7 +2,8 @@
let
cfg = config.de;
in {
in
{
config = lib.mkIf cfg.enable {
# yubikey
services.pcscd.enable = true;

View File

@@ -0,0 +1,147 @@
{ hostConfig, workspaceName, ip, networkInterface }:
# Base configuration shared by all sandboxed workspaces (VMs and containers)
# This provides common settings for networking, SSH, users, and packages
#
# Parameters:
# hostConfig - The host's NixOS config (for inputs, ssh keys, etc.)
# workspaceName - Name of the workspace (used as hostname)
# ip - Static IP address for the workspace
# networkInterface - Match config for systemd-networkd (e.g., { Type = "ether"; } or { Name = "host0"; })
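#
# A minimal sketch of how a backend module instantiates this file (this
# mirrors what container.nix does; the concrete values are illustrative):
#
#   import ./base.nix {
#     hostConfig = config;               # the host's NixOS config
#     workspaceName = "demo";
#     ip = "192.168.83.10";
#     networkInterface = { Name = "eth0"; };
#   }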
{ config, lib, pkgs, ... }:
let
claudeConfigFile = pkgs.writeText "claude-config.json" (builtins.toJSON {
hasCompletedOnboarding = true;
theme = "dark";
projects = {
"/home/googlebot/workspace" = {
hasTrustDialogAccepted = true;
};
};
});
in
{
imports = [
../shell.nix
hostConfig.inputs.home-manager.nixosModules.home-manager
hostConfig.inputs.nix-index-database.nixosModules.default
];
nixpkgs.overlays = [
hostConfig.inputs.claude-code-nix.overlays.default
];
# Basic system configuration
system.stateVersion = "25.11";
# Set hostname to match the workspace name
networking.hostName = workspaceName;
# Networking with systemd-networkd
networking.useNetworkd = true;
systemd.network.enable = true;
# Enable resolved to populate /etc/resolv.conf from networkd's DNS settings
services.resolved.enable = true;
# Basic networking configuration
networking.useDHCP = false;
# Static IP configuration
# Uses the host as DNS server (host forwards to upstream DNS)
systemd.network.networks."20-workspace" = {
matchConfig = networkInterface;
networkConfig = {
Address = "${ip}/24";
Gateway = hostConfig.networking.sandbox.hostAddress;
DNS = [ hostConfig.networking.sandbox.hostAddress ];
};
};
# Disable firewall inside workspaces (we're behind NAT)
networking.firewall.enable = false;
# Enable SSH for access
services.openssh = {
enable = true;
settings = {
PasswordAuthentication = false;
PermitRootLogin = "prohibit-password";
};
};
# Use persistent SSH host keys from shared directory
services.openssh.hostKeys = lib.mkForce [
{
path = "/etc/ssh-host-keys/ssh_host_ed25519_key";
type = "ed25519";
}
];
# Basic system packages
environment.systemPackages = with pkgs; [
claude-code
kakoune
vim
git
htop
wget
curl
tmux
dnsutils
];
# User configuration
users.mutableUsers = false;
users.users.googlebot = {
isNormalUser = true;
extraGroups = [ "wheel" ];
shell = pkgs.fish;
openssh.authorizedKeys.keys = hostConfig.machines.ssh.userKeys;
};
security.doas.enable = true;
security.sudo.enable = false;
security.doas.extraRules = [
{ groups = [ "wheel" ]; noPass = true; }
];
# Minimal locale settings
i18n.defaultLocale = "en_US.UTF-8";
time.timeZone = "America/Los_Angeles";
# Enable flakes
nix.settings.experimental-features = [ "nix-command" "flakes" ];
nix.settings.trusted-users = [ "googlebot" ];
# Make nixpkgs available in NIX_PATH and registry (like the NixOS ISO)
# This allows `nix-shell -p`, `nix repl '<nixpkgs>'`, etc. to work
nix.nixPath = [ "nixpkgs=${hostConfig.inputs.nixpkgs}" ];
nix.registry.nixpkgs.flake = hostConfig.inputs.nixpkgs;
# Enable fish shell
programs.fish.enable = true;
# Initialize Claude Code config on first boot (skips onboarding, trusts workspace)
systemd.services.claude-config-init = {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
User = "googlebot";
Group = "users";
};
script = ''
if [ ! -f /home/googlebot/claude-config/.claude.json ]; then
cp ${claudeConfigFile} /home/googlebot/claude-config/.claude.json
fi
'';
};
# Home Manager configuration
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
home-manager.users.googlebot = import ./home.nix;
}

View File

@@ -0,0 +1,74 @@
{ config, lib, pkgs, ... }:
# Container-specific configuration for sandboxed workspaces using systemd-nspawn
# This module is imported by default.nix for workspaces with type = "container"
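#
# A hypothetical workspace declaration that this module would pick up
# (the options are defined in default.nix):
#
#   sandboxed-workspace.workspaces.demo = {
#     type = "container";
#     config = ./workspaces/demo.nix;
#     ip = "192.168.83.11";
#     autoStart = false;
#   };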
with lib;
let
cfg = config.sandboxed-workspace;
hostConfig = config;
# Filter for container-type workspaces only
containerWorkspaces = filterAttrs (n: ws: ws.type == "container") cfg.workspaces;
in
{
config = mkIf (cfg.enable && containerWorkspaces != { }) {
# NixOS container module only sets restartIfChanged when autoStart=true
# Work around this by setting it directly on the systemd service
systemd.services = mapAttrs'
(name: ws: nameValuePair "container@${name}" {
restartIfChanged = lib.mkForce true;
restartTriggers = [
config.containers.${name}.path
config.environment.etc."nixos-containers/${name}.conf".source
];
})
containerWorkspaces;
# Convert container workspace configs to NixOS containers format
containers = mapAttrs
(name: ws: {
autoStart = ws.autoStart;
privateNetwork = true;
ephemeral = true;
restartIfChanged = true;
# Attach container's veth to the sandbox bridge
# This creates the veth pair and attaches host side to the bridge
hostBridge = config.networking.sandbox.bridgeName;
bindMounts = {
"/home/googlebot/workspace" = {
hostPath = "/home/googlebot/sandboxed/${name}/workspace";
isReadOnly = false;
};
"/etc/ssh-host-keys" = {
hostPath = "/home/googlebot/sandboxed/${name}/ssh-host-keys";
isReadOnly = false;
};
"/home/googlebot/claude-config" = {
hostPath = "/home/googlebot/sandboxed/${name}/claude-config";
isReadOnly = false;
};
};
config = { config, lib, pkgs, ... }: {
imports = [
(import ./base.nix {
inherit hostConfig;
workspaceName = name;
ip = ws.ip;
networkInterface = { Name = "eth0"; };
})
(import ws.config)
];
networking.useHostResolvConf = false;
nixpkgs.config.allowUnfree = true;
};
})
containerWorkspaces;
};
}

View File

@@ -0,0 +1,164 @@
{ config, lib, pkgs, ... }:
# Unified sandboxed workspace module supporting both VMs and containers
# This module provides isolated development environments with shared configuration
with lib;
let
cfg = config.sandboxed-workspace;
in
{
imports = [
./vm.nix
./container.nix
./incus.nix
];
options.sandboxed-workspace = {
enable = mkEnableOption "sandboxed workspace management";
workspaces = mkOption {
type = types.attrsOf (types.submodule {
options = {
type = mkOption {
type = types.enum [ "vm" "container" "incus" ];
description = ''
Backend type for this workspace:
- "vm": microVM with cloud-hypervisor (more isolation, uses virtiofs)
- "container": systemd-nspawn via NixOS containers (less overhead, uses bind mounts)
- "incus": Incus/LXD container (unprivileged, better security than NixOS containers)
'';
};
config = mkOption {
type = types.path;
description = "Path to the workspace configuration file";
};
ip = mkOption {
type = types.str;
example = "192.168.83.10";
description = ''
Static IP address for this workspace on the microvm bridge network.
Configures the workspace's network interface and adds an entry to /etc/hosts
on the host so the workspace can be accessed by name (e.g., ssh workspace-example).
Must be in the 192.168.83.0/24 subnet (or whatever networking.sandbox.subnet is).
'';
};
hostKey = mkOption {
type = types.nullOr types.str;
default = null;
example = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...";
description = ''
SSH host public key for this workspace. If set, adds to programs.ssh.knownHosts
so the host automatically trusts the workspace without prompting.
Get the key from: ~/sandboxed/<name>/ssh-host-keys/ssh_host_ed25519_key.pub
'';
};
autoStart = mkOption {
type = types.bool;
default = false;
description = "Whether to automatically start this workspace on boot";
};
cid = mkOption {
type = types.nullOr types.int;
default = null;
description = ''
vsock Context Identifier for this workspace (VM-only, ignored for containers).
If null, auto-generated from workspace name.
Must be unique per host. Valid range: 3 to 4294967294.
See: https://man7.org/linux/man-pages/man7/vsock.7.html
'';
};
};
});
default = { };
description = "Sandboxed workspace configurations";
};
};
config = mkIf cfg.enable {
# Automatically enable sandbox networking when workspaces are defined
networking.sandbox.enable = mkIf (cfg.workspaces != { }) true;
# Add workspace hostnames to /etc/hosts so they can be accessed by name
networking.hosts = lib.mkMerge (lib.mapAttrsToList
(name: ws: {
${ws.ip} = [ "workspace-${name}" ];
})
cfg.workspaces);
# Add workspace SSH host keys to known_hosts so host trusts workspaces without prompting
programs.ssh.knownHosts = lib.mkMerge (lib.mapAttrsToList
(name: ws:
lib.optionalAttrs (ws.hostKey != null) {
"workspace-${name}" = {
publicKey = ws.hostKey;
extraHostNames = [ ws.ip ];
};
})
cfg.workspaces);
# Shell aliases for workspace management
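# For a workspace named "demo" this generates: workspace_demo (ssh in) and
# workspace_demo_start/stop/restart/status (service control).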
environment.shellAliases = lib.mkMerge (lib.mapAttrsToList
(name: ws:
let
serviceName =
if ws.type == "vm" then "microvm@${name}"
else if ws.type == "incus" then "incus-workspace-${name}"
else "container@${name}";
in
{
"workspace_${name}" = "ssh googlebot@workspace-${name}";
"workspace_${name}_start" = "doas systemctl start ${serviceName}";
"workspace_${name}_stop" = "doas systemctl stop ${serviceName}";
"workspace_${name}_restart" = "doas systemctl restart ${serviceName}";
"workspace_${name}_status" = "doas systemctl status ${serviceName}";
})
cfg.workspaces);
# Automatically generate SSH host keys and directories for all workspaces
systemd.services = lib.mapAttrs'
(name: ws:
let
serviceName =
if ws.type == "vm" then "microvm@${name}"
else if ws.type == "incus" then "incus-workspace-${name}"
else "container@${name}";
in
lib.nameValuePair "workspace-${name}-setup" {
description = "Setup directories and SSH keys for workspace ${name}";
wantedBy = [ "multi-user.target" ];
before = [ "${serviceName}.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
# Create directories if they don't exist
mkdir -p /home/googlebot/sandboxed/${name}/workspace
mkdir -p /home/googlebot/sandboxed/${name}/ssh-host-keys
mkdir -p /home/googlebot/sandboxed/${name}/claude-config
# Fix ownership
chown -R googlebot:users /home/googlebot/sandboxed/${name}
# Generate SSH host key if it doesn't exist
if [ ! -f /home/googlebot/sandboxed/${name}/ssh-host-keys/ssh_host_ed25519_key ]; then
${pkgs.openssh}/bin/ssh-keygen -t ed25519 -N "" \
-f /home/googlebot/sandboxed/${name}/ssh-host-keys/ssh_host_ed25519_key
chown googlebot:users /home/googlebot/sandboxed/${name}/ssh-host-keys/ssh_host_ed25519_key*
echo "Generated SSH host key for workspace ${name}"
fi
'';
}
)
cfg.workspaces;
};
}

View File

@@ -0,0 +1,50 @@
{ lib, pkgs, ... }:
# Home Manager configuration for sandboxed workspace user environment
# This sets up the shell and tools inside VMs and containers
{
home.username = "googlebot";
home.homeDirectory = "/home/googlebot";
home.stateVersion = "24.11";
programs.home-manager.enable = true;
# Shell configuration
programs.fish.enable = true;
programs.starship.enable = true;
programs.starship.enableFishIntegration = true;
programs.starship.settings.container.disabled = true;
# Basic command-line tools
programs.btop.enable = true;
programs.ripgrep.enable = true;
programs.eza.enable = true;
# Git configuration
programs.git = {
enable = true;
settings = {
user.name = lib.mkDefault "googlebot";
user.email = lib.mkDefault "zuckerberg@neet.dev";
};
};
# Shell aliases
home.shellAliases = {
ls = "eza";
la = "eza -la";
ll = "eza -l";
};
# Environment variables for Claude Code
home.sessionVariables = {
# Isolate Claude config to a specific directory on the host
CLAUDE_CONFIG_DIR = "/home/googlebot/claude-config";
};
# Additional packages for development
home.packages = with pkgs; [
# Add packages as needed per workspace
];
}

View File

@@ -0,0 +1,182 @@
{ config, lib, pkgs, ... }:
# Incus-specific configuration for sandboxed workspaces
# Creates fully declarative Incus containers from NixOS configurations
with lib;
let
cfg = config.sandboxed-workspace;
hostConfig = config;
incusWorkspaces = filterAttrs (n: ws: ws.type == "incus") cfg.workspaces;
# Build a NixOS LXC image for a workspace
mkContainerImage = name: ws:
let
nixpkgs = hostConfig.inputs.nixpkgs;
containerSystem = nixpkgs.lib.nixosSystem {
modules = [
(import ./base.nix {
inherit hostConfig;
workspaceName = name;
ip = ws.ip;
networkInterface = { Name = "eth0"; };
})
(import ws.config)
({ config, lib, pkgs, ... }: {
nixpkgs.hostPlatform = hostConfig.currentSystem;
boot.isContainer = true;
networking.useHostResolvConf = false;
nixpkgs.config.allowUnfree = true;
# Incus containers don't support the kernel features the Nix sandbox requires
nix.settings.sandbox = false;
environment.systemPackages = [
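# lib.hiPrio raises the wrapper's priority so it wins the PATH
# collision with the unwrapped claude binary from pkgs.claude-code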
(lib.hiPrio (pkgs.writeShellScriptBin "claude" ''
exec ${pkgs.claude-code}/bin/claude --dangerously-skip-permissions "$@"
''))
];
})
];
};
in
{
rootfs = containerSystem.config.system.build.images.lxc;
metadata = containerSystem.config.system.build.images.lxc-metadata;
toplevel = containerSystem.config.system.build.toplevel;
};
mkIncusService = name: ws:
let
images = mkContainerImage name ws;
hash = builtins.substring 0 12 (builtins.hashString "sha256" "${images.rootfs}");
imageName = "nixos-workspace-${name}-${hash}";
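# The hash pins the image alias to the exact rootfs derivation, so any
# config change yields a new alias and forces a re-import in the script below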
containerName = "workspace-${name}";
bridgeName = config.networking.sandbox.bridgeName;
mac = lib.mkMac "incus-${name}";
addDevices = ''
incus config device add ${containerName} eth0 nic nictype=bridged parent=${bridgeName} hwaddr=${mac}
incus config device add ${containerName} workspace disk source=/home/googlebot/sandboxed/${name}/workspace path=/home/googlebot/workspace shift=true
incus config device add ${containerName} ssh-keys disk source=/home/googlebot/sandboxed/${name}/ssh-host-keys path=/etc/ssh-host-keys shift=true
incus config device add ${containerName} claude-config disk source=/home/googlebot/sandboxed/${name}/claude-config path=/home/googlebot/claude-config shift=true
'';
in
{
description = "Incus workspace ${name}";
after = [ "incus.service" "incus-preseed.service" "workspace-${name}-setup.service" ];
requires = [ "incus.service" ];
wants = [ "workspace-${name}-setup.service" ];
wantedBy = optional ws.autoStart "multi-user.target";
path = [ config.virtualisation.incus.package pkgs.gnutar pkgs.xz pkgs.util-linux ];
restartTriggers = [ images.rootfs images.metadata ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
set -euo pipefail
# Serialize incus operations - concurrent container creation causes race conditions
exec 9>/run/incus-workspace.lock
flock -x 9
# Import image if not present
if ! incus image list --format csv | grep -q "${imageName}"; then
metadata_tarball=$(echo ${images.metadata}/tarball/*.tar.xz)
rootfs_tarball=$(echo ${images.rootfs}/tarball/*.tar.xz)
incus image import "$metadata_tarball" "$rootfs_tarball" --alias ${imageName}
# Clean up old images for this workspace
incus image list --format csv | grep "nixos-workspace-${name}-" | grep -v "${imageName}" | cut -d, -f2 | while read old_image; do
incus image delete "$old_image" || true
done || true
fi
# Always recreate container for ephemeral behavior
incus stop ${containerName} --force 2>/dev/null || true
incus delete ${containerName} --force 2>/dev/null || true
incus init ${imageName} ${containerName}
${addDevices}
incus start ${containerName}
# Wait for container to start
for i in $(seq 1 30); do
if incus list --format csv | grep -q "^${containerName},RUNNING"; then
exit 0
fi
sleep 1
done
exit 1
'';
preStop = ''
exec 9>/run/incus-workspace.lock
flock -x 9
incus stop ${containerName} --force 2>/dev/null || true
incus delete ${containerName} --force 2>/dev/null || true
# Clean up all images for this workspace
incus image list --format csv 2>/dev/null | grep "nixos-workspace-${name}-" | cut -d, -f2 | while read img; do
incus image delete "$img" 2>/dev/null || true
done
'';
};
in
{
config = mkIf (cfg.enable && incusWorkspaces != { }) {
virtualisation.incus.enable = true;
networking.nftables.enable = true;
virtualisation.incus.preseed = {
storage_pools = [{
name = "default";
driver = "dir";
config = {
source = "/var/lib/incus/storage-pools/default";
};
}];
profiles = [{
name = "default";
config = {
"security.privileged" = "false";
"security.idmap.isolated" = "true";
};
devices = {
root = {
path = "/";
pool = "default";
type = "disk";
};
};
}];
};
systemd.services = mapAttrs'
(name: ws: nameValuePair "incus-workspace-${name}" (mkIncusService name ws))
incusWorkspaces;
# Extra alias for incus shell access (ssh is also available via default.nix aliases)
environment.shellAliases = mkMerge (mapAttrsToList
(name: ws: {
"workspace_${name}_shell" = "doas incus exec workspace-${name} -- su -l googlebot";
})
incusWorkspaces);
};
}

View File

@@ -0,0 +1,140 @@
{ config, lib, pkgs, ... }:
# VM-specific configuration for sandboxed workspaces using microvm.nix
# This module is imported by default.nix for workspaces with type = "vm"
with lib;
let
cfg = config.sandboxed-workspace;
hostConfig = config;
# Generate a deterministic vsock CID from workspace name.
#
# vsock (virtual sockets) enables host-VM communication without networking.
# cloud-hypervisor uses vsock for systemd-notify integration: when a VM finishes
# booting, systemd sends READY=1 to the host via vsock, allowing the host's
# microvm@ service to accurately track VM boot status instead of guessing.
#
# Each VM needs a unique CID (Context Identifier). Reserved CIDs per vsock(7):
# - VMADDR_CID_HYPERVISOR (0): reserved for hypervisor
# - VMADDR_CID_LOCAL (1): loopback address
# - VMADDR_CID_HOST (2): host address
# See: https://man7.org/linux/man-pages/man7/vsock.7.html
# https://docs.kernel.org/virt/kvm/vsock.html
#
# We auto-generate from SHA256 hash to ensure uniqueness without manual assignment.
# Range: 100 - 16777315 (offset avoids reserved CIDs and leaves 3-99 for manual use)
nameToCid = name:
let
hash = builtins.hashString "sha256" name;
hexPart = builtins.substring 0 6 hash;
in
100 + (builtins.foldl'
(acc: c: acc * 16 + (
if c == "a" then 10
else if c == "b" then 11
else if c == "c" then 12
else if c == "d" then 13
else if c == "e" then 14
else if c == "f" then 15
else lib.strings.toInt c
)) 0
(lib.stringToCharacters hexPart));
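# Illustration: for name = "demo", the first 6 hex digits of sha256("demo")
# are folded into an integer, so the CID always lands in [100, 100 + 0xFFFFFF].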
# Filter for VM-type workspaces only
vmWorkspaces = filterAttrs (n: ws: ws.type == "vm") cfg.workspaces;
# Generate VM configuration for a workspace
mkVmConfig = name: ws: {
inherit pkgs; # Use host's pkgs (includes allowUnfree)
config = import ws.config;
specialArgs = { inputs = hostConfig.inputs; };
extraModules = [
(import ./base.nix {
inherit hostConfig;
workspaceName = name;
ip = ws.ip;
networkInterface = { Type = "ether"; };
})
{
environment.systemPackages = [
(lib.hiPrio (pkgs.writeShellScriptBin "claude" ''
exec ${pkgs.claude-code}/bin/claude --dangerously-skip-permissions "$@"
''))
];
# MicroVM specific configuration
microvm = {
# Use cloud-hypervisor for better performance
hypervisor = lib.mkDefault "cloud-hypervisor";
# Resource allocation
vcpu = 8;
mem = 4096; # 4GB RAM
# Disk for writable overlay
volumes = [{
image = "overlay.img";
mountPoint = "/nix/.rw-store";
size = 8192; # 8GB
}];
# Shared directories with host using virtiofs
shares = [
{
# Share the host's /nix/store for accessing packages
proto = "virtiofs";
tag = "ro-store";
source = "/nix/store";
mountPoint = "/nix/.ro-store";
}
{
proto = "virtiofs";
tag = "workspace";
source = "/home/googlebot/sandboxed/${name}/workspace";
mountPoint = "/home/googlebot/workspace";
}
{
proto = "virtiofs";
tag = "ssh-host-keys";
source = "/home/googlebot/sandboxed/${name}/ssh-host-keys";
mountPoint = "/etc/ssh-host-keys";
}
{
proto = "virtiofs";
tag = "claude-config";
source = "/home/googlebot/sandboxed/${name}/claude-config";
mountPoint = "/home/googlebot/claude-config";
}
];
# Writeable overlay for /nix/store
writableStoreOverlay = "/nix/.rw-store";
# TAP interface for bridged networking
# The interface name "vm-*" matches the pattern in common/network/microvm.nix
# which automatically attaches it to the microbr bridge
interfaces = [{
type = "tap";
id = "vm-${name}";
mac = lib.mkMac "vm-${name}";
}];
# Enable vsock for systemd-notify integration
vsock.cid =
if ws.cid != null
then ws.cid
else nameToCid name;
};
}
];
autostart = ws.autoStart;
};
in
{
config = mkIf (cfg.enable && vmWorkspaces != { }) {
# Convert VM workspace configs to microvm.nix format
microvm.vms = mapAttrs mkVmConfig vmWorkspaces;
};
}

View File

@@ -0,0 +1,16 @@
{ config, lib, ... }:
let
cfg = config.services.actual;
in
{
config = lib.mkIf cfg.enable {
services.actual.settings = {
port = 25448;
};
backup.group."actual-budget".paths = [
"/var/lib/actual"
];
};
}

common/server/atticd.nix Normal file
View File

@@ -0,0 +1,31 @@
{ config, lib, ... }:
{
config = lib.mkIf (config.thisMachine.hasRole."binary-cache") {
services.atticd = {
enable = true;
environmentFile = config.age.secrets.atticd-credentials.path;
settings = {
listen = "[::]:28338";
chunking = {
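# The minimum NAR size to trigger chunking, in bytes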
nar-size-threshold = 64 * 1024; # 64 KiB
# The preferred minimum size of a chunk, in bytes
min-size = 16 * 1024; # 16 KiB
# The preferred average size of a chunk, in bytes
avg-size = 64 * 1024; # 64 KiB
# The preferred maximum size of a chunk, in bytes
max-size = 256 * 1024; # 256 KiB
};
compression.type = "zstd";
garbage-collection.default-retention-period = "6 months";
};
};
age.secrets.atticd-credentials.file = ../../secrets/atticd-credentials.age;
};
}

View File

@@ -1,43 +0,0 @@
{ config, lib, ... }:
with lib;
let
cfg = config.ceph;
in {
options.ceph = {
};
config = mkIf cfg.enable {
# ceph.enable = true;
## S3 Object gateway
#ceph.rgw.enable = true;
#ceph.rgw.daemons = [
#];
# https://docs.ceph.com/en/latest/start/intro/
# meta object storage daemon
ceph.osd.enable = true;
ceph.osd.daemons = [
];
# monitor's ceph state
ceph.mon.enable = true;
ceph.mon.daemons = [
];
# manage ceph
ceph.mgr.enable = true;
ceph.mgr.daemons = [
];
# metadata server
ceph.mds.enable = true;
ceph.mds.daemons = [
];
ceph.global.fsid = "925773DC-D95F-476C-BBCD-08E01BF0865F";
};
}

View File

@@ -1,18 +1,20 @@
{ config, pkgs, ... }:
{ ... }:
{
imports = [
./nginx.nix
./thelounge.nix
./mumble.nix
./icecast.nix
./nginx-stream.nix
./matrix.nix
./zerobin.nix
./gitea.nix
./privatebin/privatebin.nix
./radio.nix
./samba.nix
./owncast.nix
./mailserver.nix
./nextcloud.nix
./gitea-actions-runner.nix
./atticd.nix
./librechat.nix
./actualbudget.nix
./unifi.nix
];
}

View File

@@ -0,0 +1,79 @@
{ config, lib, ... }:
# Gitea Actions Runner inside a NixOS container.
# The container shares the host's /nix/store (read-only) and nix-daemon socket,
# so builds go through the host daemon and outputs land in the host store.
# Warning: NixOS containers are not fully secure — do not run untrusted code.
# To enable, assign a machine the 'gitea-actions-runner' system role.
let
thisMachineIsARunner = config.thisMachine.hasRole."gitea-actions-runner";
containerName = "gitea-runner";
giteaRunnerUid = 991;
giteaRunnerGid = 989;
in
{
config = lib.mkIf (thisMachineIsARunner && !config.boot.isContainer) {
containers.${containerName} = {
autoStart = true;
ephemeral = true;
bindMounts = {
"/run/agenix/gitea-actions-runner-token" = {
hostPath = "/run/agenix/gitea-actions-runner-token";
isReadOnly = true;
};
"/var/lib/gitea-runner" = {
hostPath = "/var/lib/gitea-runner";
isReadOnly = false;
};
};
config = { config, lib, pkgs, ... }: {
system.stateVersion = "25.11";
services.gitea-actions-runner.instances.inst = {
enable = true;
name = containerName;
url = "https://git.neet.dev/";
tokenFile = "/run/agenix/gitea-actions-runner-token";
labels = [ "nixos:host" ];
};
# Disable dynamic user so runner state persists via bind mount
systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
users.users.gitea-runner = {
uid = giteaRunnerUid;
home = "/var/lib/gitea-runner";
group = "gitea-runner";
isSystemUser = true;
createHome = true;
};
users.groups.gitea-runner.gid = giteaRunnerGid;
nix.settings.experimental-features = [ "nix-command" "flakes" ];
environment.systemPackages = with pkgs; [
git
nodejs
jq
attic-client
];
};
};
# Matching user on host — the container's gitea-runner UID must be
# recognized by the host's nix-daemon as trusted (shared UID namespace)
users.users.gitea-runner = {
uid = giteaRunnerUid;
home = "/var/lib/gitea-runner";
group = "gitea-runner";
isSystemUser = true;
createHome = true;
};
users.groups.gitea-runner.gid = giteaRunnerGid;
age.secrets.gitea-actions-runner-token.file = ../../secrets/gitea-actions-runner-token.age;
};
}

View File

@@ -2,7 +2,8 @@
let
cfg = config.services.gitea;
in {
in
{
options.services.gitea = {
hostname = lib.mkOption {
type = lib.types.str;
@@ -11,29 +12,63 @@ in {
};
config = lib.mkIf cfg.enable {
services.gitea = {
domain = cfg.hostname;
rootUrl = "https://${cfg.hostname}/";
appName = cfg.hostname;
ssh.enable = true;
# lfs.enable = true;
dump.enable = true;
cookieSecure = true;
disableRegistration = true;
lfs.enable = true;
# dump.enable = true;
settings = {
server = {
ROOT_URL = "https://${cfg.hostname}/";
DOMAIN = cfg.hostname;
};
other = {
SHOW_FOOTER_VERSION = false;
};
ui = {
DEFAULT_THEME = "arc-green";
DEFAULT_THEME = "gitea-dark";
};
service = {
DISABLE_REGISTRATION = true;
};
session = {
COOKIE_SECURE = true;
PROVIDER = "db";
SESSION_LIFE_TIME = 259200; # 3 days
GC_INTERVAL_TIME = 259200; # 3 days
};
mailer = {
ENABLED = true;
MAILER_TYPE = "smtp";
SMTP_ADDR = "mail.neet.dev";
SMTP_PORT = "465";
IS_TLS_ENABLED = true;
USER = "robot@runyan.org";
FROM = "no-reply@neet.dev";
};
actions = {
ENABLED = true;
};
indexer = {
REPO_INDEXER_ENABLED = true;
};
};
mailerPasswordFile = "/run/agenix/robots-email-pw";
};
age.secrets.robots-email-pw = {
file = ../../secrets/robots-email-pw.age;
owner = config.services.gitea.user;
};
# backups
backup.group."gitea".paths = [
config.services.gitea.stateDir
];
services.nginx.enable = true;
services.nginx.virtualHosts.${cfg.hostname} = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString cfg.httpPort}";
proxyPass = "http://localhost:${toString cfg.settings.server.HTTP_PORT}";
};
};
};

View File

@@ -1,42 +0,0 @@
{ config, pkgs, ... }:
{
services.gitlab = {
enable = true;
databasePasswordFile = "/var/keys/gitlab/db_password";
initialRootPasswordFile = "/var/keys/gitlab/root_password";
https = true;
host = "git.neet.dev";
port = 443;
user = "git";
group = "git";
databaseUsername = "git";
smtp = {
enable = true;
address = "localhost";
port = 25;
};
secrets = {
dbFile = "/var/keys/gitlab/db";
secretFile = "/var/keys/gitlab/secret";
otpFile = "/var/keys/gitlab/otp";
jwsFile = "/var/keys/gitlab/jws";
};
extraConfig = {
gitlab = {
email_from = "gitlab-no-reply@neet.dev";
email_display_name = "neet.dev GitLab";
email_reply_to = "gitlab-no-reply@neet.dev";
};
};
pagesExtraArgs = [ "-listen-proxy" "127.0.0.1:8090" ];
};
services.nginx.virtualHosts = {
"git.neet.dev" = {
enableACME = true;
forceSSL = true;
locations."/".proxyPass = "http://unix:/run/gitlab/gitlab-workhorse.socket";
};
};
}

View File

@@ -1,25 +0,0 @@
{ config, pkgs, ... }:
let
domain = "hydra.neet.dev";
port = 3000;
notifyEmail = "hydra@neet.dev";
in
{
services.nginx.virtualHosts."${domain}" = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString port}";
};
};
services.hydra = {
enable = true;
inherit port;
hydraURL = "https://${domain}";
useSubstitutes = true;
notificationSender = notifyEmail;
buildMachinesFiles = [];
};
}

View File

@@ -1,64 +0,0 @@
{ lib, config, ... }:
# configures icecast to only accept source from localhost
# to a audio optimized stream on services.icecast.mount
# made available via nginx for http access on
# https://host/mount
let
cfg = config.services.icecast;
in {
options.services.icecast = {
mount = lib.mkOption {
type = lib.types.str;
example = "stream.mp3";
};
fallback = lib.mkOption {
type = lib.types.str;
example = "fallback.mp3";
};
nginx = lib.mkEnableOption "enable nginx";
};
config = lib.mkIf cfg.enable {
services.icecast = {
listen.address = "0.0.0.0";
listen.port = 8001;
admin.password = "hackme";
extraConf = ''
<authentication>
<source-password>hackme</source-password>
</authentication>
<http-headers>
<header type="cors" name="Access-Control-Allow-Origin" />
</http-headers>
<mount type="normal">
<mount-name>/${cfg.mount}</mount-name>
<max-listeners>30</max-listeners>
<bitrate>64000</bitrate>
<hidden>false</hidden>
<public>false</public>
<fallback-mount>/${cfg.fallback}</fallback-mount>
<fallback-override>1</fallback-override>
</mount>
<mount type="normal">
<mount-name>/${cfg.fallback}</mount-name>
<max-listeners>30</max-listeners>
<bitrate>64000</bitrate>
<hidden>false</hidden>
<public>false</public>
</mount>
'';
};
services.nginx.virtualHosts.${cfg.hostname} = lib.mkIf cfg.nginx {
enableACME = true;
forceSSL = true;
locations."/${cfg.mount}" = {
proxyPass = "http://localhost:${toString cfg.listen.port}/${cfg.mount}";
extraConfig = ''
add_header Access-Control-Allow-Origin *;
'';
};
};
};
}

View File

@@ -0,0 +1,69 @@
{ config, lib, ... }:
with lib;
let
cfg = config.services.librechat-container;
in
{
options.services.librechat-container = {
enable = mkEnableOption "librechat";
port = mkOption {
type = types.int;
default = 3080;
};
host = lib.mkOption {
type = lib.types.str;
example = "example.com";
};
};
config = mkIf cfg.enable {
virtualisation.oci-containers.containers = {
librechat = {
image = "ghcr.io/danny-avila/librechat:v0.8.1";
environment = {
HOST = "0.0.0.0";
MONGO_URI = "mongodb://host.containers.internal:27017/LibreChat";
ENDPOINTS = "openAI,google,bingAI,gptPlugins";
OPENAI_MODELS = lib.concatStringsSep "," [
"gpt-4o-mini"
"o3-mini"
"gpt-4o"
"o1"
];
REFRESH_TOKEN_EXPIRY = toString (1000 * 60 * 60 * 24 * 30); # 30 days
};
environmentFiles = [
"/run/agenix/librechat-env-file"
];
ports = [
"${toString cfg.port}:3080"
];
};
};
age.secrets.librechat-env-file.file = ../../secrets/librechat-env-file.age;
services.mongodb.enable = true;
services.mongodb.bind_ip = "0.0.0.0";
# easier podman maintenance
virtualisation.oci-containers.backend = "podman";
virtualisation.podman.dockerSocket.enable = true;
virtualisation.podman.dockerCompat = true;
# For mongodb access
networking.firewall.trustedInterfaces = [
"podman0" # for librechat
];
services.nginx.virtualHosts.${cfg.host} = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString cfg.port}";
proxyWebsockets = true;
};
};
};
}

View File

@@ -0,0 +1,121 @@
{ config, pkgs, lib, ... }:
with builtins;
let
cfg = config.mailserver;
domains = [
"neet.space"
"neet.dev"
"neet.cloud"
"runyan.org"
"runyan.rocks"
"thunderhex.com"
"tar.ninja"
"bsd.ninja"
"bsd.rocks"
];
in
{
config = lib.mkIf cfg.enable {
# kresd doesn't work with tailscale MagicDNS
mailserver.localDnsResolver = false;
services.resolved.enable = true;
mailserver = {
fqdn = "mail.neet.dev";
dkimKeyBits = 2048;
indexDir = "/var/lib/mailindex";
enableManageSieve = true;
fullTextSearch.enable = true;
fullTextSearch.memoryLimit = 500;
inherit domains;
loginAccounts = {
"jeremy@runyan.org" = {
hashedPasswordFile = "/run/agenix/hashed-email-pw";
# catchall for all domains
aliases = map (domain: "@${domain}") domains;
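# i.e. [ "@neet.space" "@neet.dev" ... "@bsd.rocks" ], one catchall per domain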
};
"cris@runyan.org" = {
hashedPasswordFile = "/run/agenix/cris-hashed-email-pw";
aliases = [ "chris@runyan.org" ];
};
"robot@runyan.org" = {
aliases = [
"no-reply@neet.dev"
"robot@neet.dev"
];
sendOnly = true;
hashedPasswordFile = "/run/agenix/hashed-robots-email-pw";
};
};
rejectRecipients = [
"george@runyan.org"
"joslyn@runyan.org"
"damon@runyan.org"
"jonas@runyan.org"
"simon@neet.dev"
"ellen@runyan.org"
];
forwards = {
"amazon@runyan.org" = [
"jeremy@runyan.org"
"cris@runyan.org"
];
};
x509.useACMEHost = config.mailserver.fqdn; # use let's encrypt for certs
stateVersion = 3;
};
age.secrets.hashed-email-pw.file = ../../secrets/hashed-email-pw.age;
age.secrets.cris-hashed-email-pw.file = ../../secrets/cris-hashed-email-pw.age;
age.secrets.hashed-robots-email-pw.file = ../../secrets/hashed-robots-email-pw.age;
# Get let's encrypt cert
services.nginx = {
enable = true;
virtualHosts."${config.mailserver.fqdn}" = {
forceSSL = true;
enableACME = true;
};
};
# Make sendmail use xxx@domain instead of xxx@mail.domain
services.postfix.settings.main.myorigin = "$mydomain";
# relay sent mail through mailgun
# https://www.howtoforge.com/community/threads/different-smtp-relays-for-different-domains-in-postfix.82711/#post-392620
services.postfix.settings.main = {
smtp_sasl_auth_enable = "yes";
smtp_sasl_security_options = "noanonymous";
smtp_sasl_password_maps = "hash:/var/lib/postfix/conf/sasl_relay_passwd";
smtp_use_tls = "yes";
sender_dependent_relayhost_maps = "hash:/var/lib/postfix/conf/sender_relay";
smtp_sender_dependent_authentication = "yes";
};
services.postfix.mapFiles.sender_relay =
let
relayHost = "[smtp.mailgun.org]:587";
in
pkgs.writeText "sender_relay"
(concatStringsSep "\n" (map (domain: "@${domain} ${relayHost}") domains));
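# The generated map has one line per domain, e.g.:
#   @neet.dev [smtp.mailgun.org]:587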
services.postfix.mapFiles.sasl_relay_passwd = "/run/agenix/sasl_relay_passwd";
age.secrets.sasl_relay_passwd.file = ../../secrets/sasl_relay_passwd.age;
# webmail
services.roundcube = {
enable = true;
hostName = config.mailserver.fqdn;
extraConfig = ''
# starttls is needed for authentication, so the fqdn must match the certificate
$config['smtp_server'] = "tls://${config.mailserver.fqdn}";
$config['smtp_user'] = "%u";
$config['smtp_pass'] = "%p";
'';
};
# backups
backup.group."email".paths = [
config.mailserver.mailDirectory
];
};
}

View File

@@ -3,7 +3,8 @@
let
cfg = config.services.matrix;
certs = config.security.acme.certs;
in {
in
{
options.services.matrix = {
enable = lib.mkEnableOption "enable matrix";
element-web = {
@@ -137,7 +138,8 @@ in {
];
locations."/".proxyPass = "http://localhost:${toString cfg.port}";
};
virtualHosts.${cfg.turn.host} = { # get TLS cert for TURN server
virtualHosts.${cfg.turn.host} = {
# get TLS cert for TURN server
enableACME = true;
forceSSL = true;
};

View File

@@ -3,7 +3,8 @@
let
cfg = config.services.murmur;
certs = config.security.acme.certs;
in {
in
{
options.services.murmur.domain = lib.mkOption {
type = lib.types.str;
};

common/server/nextcloud.nix Normal file
View File

@@ -0,0 +1,155 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.nextcloud;
nextcloudHostname = "runyan.org";
collaboraOnlineHostname = "collabora.runyan.org";
whiteboardHostname = "whiteboard.runyan.org";
whiteboardPort = 3002; # Seems impossible to change
# Hardcoded public ip of ponyo... I wish I didn't need this...
public_ip_address = "147.135.114.130";
in
{
config = lib.mkIf cfg.enable {
services.nextcloud = {
https = true;
package = pkgs.nextcloud32;
hostName = nextcloudHostname;
config.dbtype = "sqlite";
config.adminuser = "jeremy";
config.adminpassFile = "/run/agenix/nextcloud-pw";
# Apps
autoUpdateApps.enable = true;
extraAppsEnable = true;
extraApps = with config.services.nextcloud.package.packages.apps; {
# Want
inherit end_to_end_encryption mail spreed;
# For file and document editing (collabora online and excalidraw)
inherit richdocuments whiteboard;
# Might use
inherit calendar qownnotesapi;
# Try out
# inherit bookmarks cookbook deck memories maps music news notes phonetrack polls forms;
};
# Allows installing Apps from the UI (might remove later)
appstoreEnable = true;
};
age.secrets.nextcloud-pw = {
file = ../../secrets/nextcloud-pw.age;
owner = "nextcloud";
};
# backups
backup.group."nextcloud".paths = [
config.services.nextcloud.home
];
services.nginx.virtualHosts.${config.services.nextcloud.hostName} = {
enableACME = true;
forceSSL = true;
};
# collabora-online
# https://diogotc.com/blog/collabora-nextcloud-nixos/
services.collabora-online = {
enable = true;
port = 15972;
settings = {
# Rely on reverse proxy for SSL
ssl = {
enable = false;
termination = true;
};
# Listen on loopback interface only
net = {
listen = "loopback";
post_allow.host = [ "localhost" ];
};
# Restrict loading documents from WOPI Host
storage.wopi = {
"@allow" = true;
host = [ config.services.nextcloud.hostName ];
};
server_name = collaboraOnlineHostname;
};
};
services.nginx.virtualHosts.${config.services.collabora-online.settings.server_name} = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString config.services.collabora-online.port}";
proxyWebsockets = true;
};
};
systemd.services.nextcloud-config-collabora =
let
wopi_url = "http://localhost:${toString config.services.collabora-online.port}";
public_wopi_url = "https://${collaboraOnlineHostname}";
wopi_allowlist = lib.concatStringsSep "," [
"127.0.0.1"
"::1"
public_ip_address
];
in
{
wantedBy = [ "multi-user.target" ];
after = [ "nextcloud-setup.service" "coolwsd.service" ];
requires = [ "coolwsd.service" ];
path = [
config.services.nextcloud.occ
];
script = ''
nextcloud-occ config:app:set richdocuments wopi_url --value ${lib.escapeShellArg wopi_url}
nextcloud-occ config:app:set richdocuments public_wopi_url --value ${lib.escapeShellArg public_wopi_url}
nextcloud-occ config:app:set richdocuments wopi_allowlist --value ${lib.escapeShellArg wopi_allowlist}
nextcloud-occ richdocuments:setup
'';
serviceConfig = {
Type = "oneshot";
};
};
# Whiteboard
services.nextcloud-whiteboard-server = {
enable = true;
settings.NEXTCLOUD_URL = "https://${nextcloudHostname}";
secrets = [ "/run/agenix/whiteboard-server-jwt-secret" ];
};
systemd.services.nextcloud-config-whiteboard = {
wantedBy = [ "multi-user.target" ];
after = [ "nextcloud-setup.service" ];
requires = [ "coolwsd.service" ];
path = [
config.services.nextcloud.occ
];
script = ''
nextcloud-occ config:app:set whiteboard collabBackendUrl --value="https://${whiteboardHostname}"
nextcloud-occ config:app:set whiteboard jwt_secret_key --value="$JWT_SECRET_KEY"
'';
serviceConfig = {
Type = "oneshot";
EnvironmentFile = [ "/run/agenix/whiteboard-server-jwt-secret" ];
};
};
age.secrets.whiteboard-server-jwt-secret.file = ../../secrets/whiteboard-server-jwt-secret.age;
services.nginx.virtualHosts.${whiteboardHostname} = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString whiteboardPort}";
proxyWebsockets = true;
};
};
};
}

View File

@@ -1,75 +0,0 @@
{ lib, config, pkgs, ... }:
let
cfg = config.services.nginx.stream;
nginxWithRTMP = pkgs.nginx.override {
modules = [ pkgs.nginxModules.rtmp ];
};
in {
options.services.nginx.stream = {
enable = lib.mkEnableOption "enable nginx rtmp/hls/dash video streaming";
port = lib.mkOption {
type = lib.types.int;
default = 1935;
description = "rtmp injest/serve port";
};
rtmpName = lib.mkOption {
type = lib.types.str;
default = "live";
description = "the name of the rtmp application";
};
hostname = lib.mkOption {
type = lib.types.str;
description = "the http host to serve hls";
};
httpLocation = lib.mkOption {
type = lib.types.str;
default = "/tmp";
description = "the path of the tmp http files";
};
};
config = lib.mkIf cfg.enable {
services.nginx = {
enable = true;
package = nginxWithRTMP;
virtualHosts.${cfg.hostname} = {
enableACME = true;
forceSSL = true;
locations = {
"/stream/hls".root = "${cfg.httpLocation}/hls";
"/stream/dash".root = "${cfg.httpLocation}/dash";
};
extraConfig = ''
location /stat {
rtmp_stat all;
}
'';
};
appendConfig = ''
rtmp {
server {
listen ${toString cfg.port};
chunk_size 4096;
application ${cfg.rtmpName} {
allow publish all;
allow publish all;
live on;
record off;
hls on;
hls_path ${cfg.httpLocation}/hls;
dash on;
dash_path ${cfg.httpLocation}/dash;
}
}
}
'';
};
networking.firewall.allowedTCPPorts = [
cfg.port
];
};
}

View File

@@ -1,8 +1,13 @@
{ lib, config, pkgs, ... }:
{ lib, config, ... }:
let
cfg = config.services.nginx;
in {
in
{
options.services.nginx = {
openFirewall = lib.mkEnableOption "Open firewall ports 80 and 443";
};
config = lib.mkIf cfg.enable {
services.nginx = {
recommendedGzipSettings = true;
@@ -11,6 +16,8 @@ in {
recommendedTlsSettings = true;
};
networking.firewall.allowedTCPPorts = [ 80 443 ];
services.nginx.openFirewall = lib.mkDefault true;
networking.firewall.allowedTCPPorts = lib.mkIf cfg.openFirewall [ 80 443 ];
};
}

View File

@@ -4,7 +4,8 @@ with lib;
let
cfg = config.services.owncast;
in {
in
{
options.services.owncast = {
hostname = lib.mkOption {
type = types.str;

View File

@@ -1,42 +0,0 @@
;<?php http_response_code(403); /*
[main]
name = "Kode Paste"
discussion = false
opendiscussion = false
password = true
fileupload = false
burnafterreadingselected = false
defaultformatter = "plaintext"
sizelimit = 10485760
template = "bootstrap"
languageselection = false
[expire]
default = "1week"
[expire_options]
5min = 300
10min = 600
1hour = 3600
1day = 86400
1week = 604800
[formatter_options]
plaintext = "Plain Text"
syntaxhighlighting = "Source Code"
markdown = "Markdown"
[traffic]
limit = 10
dir = "/var/lib/privatebin"
[purge]
limit = 300
batchsize = 10
dir = "/var/lib/privatebin"
[model]
class = Filesystem
[model_options]
dir = "/var/lib/privatebin"


@@ -1,73 +0,0 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.privatebin;
privateBinSrc = pkgs.stdenv.mkDerivation {
name = "privatebin";
src = pkgs.fetchFromGitHub {
owner = "privatebin";
repo = "privatebin";
rev = "d65bf02d7819a530c3c2a88f6f9947651fe5258d";
sha256 = "7ttAvEDL1ab0cUZcqZzXFkXwB2rF2t4eNpPxt48ap94=";
};
installPhase = ''
cp -ar $src $out
'';
};
in {
options.services.privatebin = {
enable = lib.mkEnableOption "enable privatebin";
host = lib.mkOption {
type = lib.types.str;
example = "example.com";
};
};
config = lib.mkIf cfg.enable {
users.users.privatebin = {
description = "privatebin service user";
group = "privatebin";
isSystemUser = true;
};
users.groups.privatebin = {};
services.nginx.enable = true;
services.nginx.virtualHosts.${cfg.host} = {
enableACME = true;
forceSSL = true;
locations."/" = {
root = privateBinSrc;
index = "index.php";
};
locations."~ \.php$" = {
root = privateBinSrc;
extraConfig = ''
fastcgi_pass unix:${config.services.phpfpm.pools.privatebin.socket};
fastcgi_index index.php;
'';
};
};
systemd.tmpfiles.rules = [
"d '/var/lib/privatebin' 0750 privatebin privatebin - -"
];
services.phpfpm.pools.privatebin = {
user = "privatebin";
group = "privatebin";
phpEnv = {
CONFIG_PATH = "${./conf.php}";
};
settings = {
pm = "dynamic";
"listen.owner" = config.services.nginx.user;
"pm.max_children" = 5;
"pm.start_servers" = 2;
"pm.min_spare_servers" = 1;
"pm.max_spare_servers" = 3;
"pm.max_requests" = 500;
};
};
};
}
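Enabling it on a host is then a one-liner (hypothetical domain); the module wires up nginx, the PHP-FPM pool, and the /var/lib/privatebin state directory:

```nix
{
  services.privatebin = {
    enable = true;
    host = "paste.example.com"; # hypothetical
  };
}
```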


@@ -1,74 +0,0 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.radio;
radioPackage = config.inputs.radio.packages.${config.currentSystem}.radio;
in {
options.services.radio = {
enable = lib.mkEnableOption "enable radio";
user = lib.mkOption {
type = lib.types.str;
default = "radio";
description = ''
The user radio should run as
'';
};
group = lib.mkOption {
type = lib.types.str;
default = "radio";
description = ''
The group radio should run as
'';
};
dataDir = lib.mkOption {
type = lib.types.str;
default = "/var/lib/radio";
description = ''
Path to the radio data directory
'';
};
host = lib.mkOption {
type = lib.types.str;
description = ''
Domain radio is hosted on
'';
};
nginx = lib.mkEnableOption "serving the radio-web frontend with nginx";
};
config = lib.mkIf cfg.enable {
services.icecast = {
enable = true;
hostname = cfg.host;
mount = "stream.mp3";
fallback = "fallback.mp3";
};
services.nginx.virtualHosts.${cfg.host} = lib.mkIf cfg.nginx {
enableACME = true;
forceSSL = true;
locations."/".root = config.inputs.radio-web;
};
users.users.${cfg.user} = {
isSystemUser = true;
group = cfg.group;
home = cfg.dataDir;
createHome = true;
};
users.groups.${cfg.group} = {};
systemd.services.radio = {
enable = true;
after = ["network.target"];
wantedBy = ["multi-user.target"];
serviceConfig.ExecStart = "${radioPackage}/bin/radio ${config.services.icecast.listen.address}:${toString config.services.icecast.listen.port} ${config.services.icecast.mount} 5500";
serviceConfig.User = cfg.user;
serviceConfig.Group = cfg.group;
serviceConfig.WorkingDirectory = cfg.dataDir;
preStart = ''
mkdir -p ${cfg.dataDir}
chown ${cfg.user} ${cfg.dataDir}
'';
};
};
}
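A minimal usage sketch (hypothetical domain), relying on the radio flake input referenced above:

```nix
{
  services.radio = {
    enable = true;
    host = "radio.example.com"; # hypothetical
    nginx = true;               # also serve the radio-web frontend
  };
}
```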


@@ -5,32 +5,28 @@
services.samba = {
openFirewall = true;
package = pkgs.sambaFull; # printer sharing
securityType = "user";
# should this be on?
nsswins = true;
extraConfig = ''
workgroup = HOME
server string = smbnix
netbios name = smbnix
security = user
use sendfile = yes
min protocol = smb2
guest account = nobody
map to guest = bad user
settings = {
global = {
security = "user";
workgroup = "HOME";
"server string" = "smbnix";
"netbios name" = "smbnix";
"use sendfile" = "yes";
"min protocol" = "smb2";
"guest account" = "nobody";
"map to guest" = "bad user";
# printing
load printers = yes
printing = cups
printcap name = cups
"load printers" = "yes";
printing = "cups";
"printcap name" = "cups";
# horrible files
veto files = /._*/.DS_Store/ /._*/._.DS_Store/
delete veto files = yes
'';
shares = {
"hide files" = "/.nobackup/.DS_Store/._.DS_Store/";
};
public = {
path = "/data/samba/Public";
browseable = "yes";
@@ -77,6 +73,13 @@
};
};
# backups
backup.group."samba".paths = [
config.services.samba.settings.googlebot.path
config.services.samba.settings.cris.path
config.services.samba.settings.public.path
];
# Windows discovery of samba server
services.samba-wsdd = {
enable = true;
@@ -92,7 +95,7 @@
# Printer discovery
# (is this needed?)
services.avahi.enable = true;
services.avahi.nssmdns = true;
services.avahi.nssmdns4 = true;
# printer sharing
systemd.tmpfiles.rules = [


@@ -2,7 +2,8 @@
let
cfg = config.services.thelounge;
in {
in
{
options.services.thelounge = {
fileUploadBaseUrl = lib.mkOption {
type = lib.types.str;
@@ -42,6 +43,10 @@ in {
};
};
backup.group."thelounge".paths = [
"/var/lib/thelounge/"
];
# the lounge client
services.nginx.virtualHosts.${cfg.host} = {
enableACME = true;

common/server/unifi.nix

@@ -0,0 +1,26 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.unifi;
in
{
options.services.unifi = {
# Open select Unifi ports instead of using openFirewall to avoid opening access to unifi's control panel
openMinimalFirewall = lib.mkEnableOption "Open bare minimum firewall ports";
};
config = lib.mkIf cfg.enable {
services.unifi.unifiPackage = pkgs.unifi;
services.unifi.mongodbPackage = pkgs.mongodb-7_0;
networking.firewall = lib.mkIf cfg.openMinimalFirewall {
allowedUDPPorts = [
3478 # STUN
10001 # Used for device discovery.
];
allowedTCPPorts = [
8080 # Used for device and application communication.
];
};
};
}
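A usage sketch; the controller's web UI port stays firewalled because only the minimal set of ports is opened:

```nix
{
  services.unifi = {
    enable = true;
    openMinimalFirewall = true; # STUN, discovery, and device inform only
  };
}
```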


@@ -1,94 +0,0 @@
{ config, pkgs, ... }:
let
# external
rtp-port = 8083;
webrtc-peer-lower-port = 20000;
webrtc-peer-upper-port = 20100;
domain = "live.neet.space";
# internal
ingest-port = 8084;
web-port = 8085;
webrtc-port = 8086;
toStr = builtins.toString;
in
{
networking.firewall.allowedUDPPorts = [ rtp-port ];
networking.firewall.allowedTCPPortRanges = [ {
from = webrtc-peer-lower-port;
to = webrtc-peer-upper-port;
} ];
networking.firewall.allowedUDPPortRanges = [ {
from = webrtc-peer-lower-port;
to = webrtc-peer-upper-port;
} ];
virtualisation.docker.enable = true;
services.nginx.virtualHosts.${domain} = {
enableACME = true;
forceSSL = true;
locations = {
"/" = {
proxyPass = "http://localhost:${toStr web-port}";
};
"websocket" = {
proxyPass = "http://localhost:${toStr webrtc-port}/websocket";
proxyWebsockets = true;
};
};
};
virtualisation.oci-containers = {
backend = "docker";
containers = {
"lightspeed-ingest" = {
workdir = "/var/lib/lightspeed-ingest";
image = "projectlightspeed/ingest";
ports = [
"${toStr ingest-port}:8084"
];
# imageFile = pkgs.dockerTools.pullImage {
# imageName = "projectlightspeed/ingest";
# finalImageTag = "version-0.1.4";
# imageDigest = "sha256:9fc51833b7c27a76d26e40f092b9cec1ac1c4bfebe452e94ad3269f1f73ff2fc";
# sha256 = "19kxl02x0a3i6hlnsfcm49hl6qxnq2f3hfmyv1v8qdaz58f35kd5";
# };
};
"lightspeed-react" = {
workdir = "/var/lib/lightspeed-react";
image = "projectlightspeed/react";
ports = [
"${toStr web-port}:80"
];
# imageFile = pkgs.dockerTools.pullImage {
# imageName = "projectlightspeed/react";
# finalImageTag = "version-0.1.3";
# imageDigest = "sha256:b7c58425f1593f7b4304726b57aa399b6e216e55af9c0962c5c19333fae638b6";
# sha256 = "0d2jh7mr20h7dxgsp7ml7cw2qd4m8ja9rj75dpy59zyb6v0bn7js";
# };
};
"lightspeed-webrtc" = {
workdir = "/var/lib/lightspeed-webrtc";
image = "projectlightspeed/webrtc";
ports = [
"${toStr webrtc-port}:8080"
"${toStr rtp-port}:65535/udp"
"${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}:${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}/tcp"
"${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}:${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}/udp"
];
cmd = [
"lightspeed-webrtc" "--addr=0.0.0.0" "--ip=${domain}"
"--ports=${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}" "run"
];
# imageFile = pkgs.dockerTools.pullImage {
# imageName = "projectlightspeed/webrtc";
# finalImageTag = "version-0.1.2";
# imageDigest = "sha256:ddf8b3dd294485529ec11d1234a3fc38e365a53c4738998c6bc2c6930be45ecf";
# sha256 = "1bdy4ak99fjdphj5bsk8rp13xxmbqdhfyfab14drbyffivg9ad2i";
# };
};
};
};
}


@@ -1,7 +0,0 @@
Copyright 2020 Matthijs Steen
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


@@ -1,37 +0,0 @@
# Visual Studio Code Server support in NixOS
Experimental support for VS Code Server in NixOS. The NodeJS binary that VS Code ships with cannot be used on NixOS because it relies on hardcoded paths that do not exist there, so it is automatically replaced by a symlink to a compatible NodeJS build that works under NixOS.
## Installation
```nix
{
imports = [
(fetchTarball "https://github.com/msteen/nixos-vscode-server/tarball/master")
];
services.vscode-server.enable = true;
}
```
And then enable them for the relevant users:
```
systemctl --user enable auto-fix-vscode-server.service
```
### Home Manager
```nix
{
imports = [
"${fetchTarball "https://github.com/msteen/nixos-vscode-server/tarball/master"}/modules/vscode-server/home.nix"
];
services.vscode-server.enable = true;
}
```
## Usage
When the service is enabled and running, it should simply work; there is nothing for you to do.


@@ -1 +0,0 @@
import ./modules/vscode-server


@@ -1,8 +0,0 @@
import ./module.nix ({ name, description, serviceConfig }:
{
systemd.user.services.${name} = {
inherit description serviceConfig;
wantedBy = [ "default.target" ];
};
})


@@ -1,15 +0,0 @@
import ./module.nix ({ name, description, serviceConfig }:
{
systemd.user.services.${name} = {
Unit = {
Description = description;
};
Service = serviceConfig;
Install = {
WantedBy = [ "default.target" ];
};
};
})


@@ -1,42 +0,0 @@
moduleConfig:
{ lib, pkgs, ... }:
with lib;
{
options.services.vscode-server.enable = with types; mkEnableOption "VS Code Server";
config = moduleConfig rec {
name = "auto-fix-vscode-server";
description = "Automatically fix the VS Code server used by the remote SSH extension";
serviceConfig = {
# When a monitored directory is deleted, it will stop being monitored.
# Even if it is later recreated it will not restart monitoring it.
# Unfortunately the monitor does not kill itself when it stops monitoring,
# so rather than creating our own restart mechanism, we leverage systemd to do this for us.
Restart = "always";
RestartSec = 0;
ExecStart = pkgs.writeShellScript "${name}.sh" ''
set -euo pipefail
PATH=${makeBinPath (with pkgs; [ coreutils inotify-tools ])}
bin_dir=~/.vscode-server/bin
[[ -e $bin_dir ]] &&
find "$bin_dir" -mindepth 2 -maxdepth 2 -name node -type f -exec ln -sfT ${pkgs.nodejs-12_x}/bin/node {} \; ||
mkdir -p "$bin_dir"
while IFS=: read -r bin_dir event; do
# A new version of the VS Code Server is being created.
if [[ $event == 'CREATE,ISDIR' ]]; then
# Create a trigger to know when their node is being created and replace it for our symlink.
touch "$bin_dir/node"
inotifywait -qq -e DELETE_SELF "$bin_dir/node"
ln -sfT ${pkgs.nodejs-12_x}/bin/node "$bin_dir/node"
# The monitored directory is deleted, e.g. when "Uninstall VS Code Server from Host" has been run.
elif [[ $event == DELETE_SELF ]]; then
# See the comments above Restart in the service config.
exit 0
fi
done < <(inotifywait -q -m -e CREATE,ISDIR -e DELETE_SELF --format '%w%f:%e' "$bin_dir")
'';
};
};
}


@@ -1,33 +0,0 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.zerobin;
in {
options.services.zerobin = {
host = lib.mkOption {
type = lib.types.str;
example = "example.com";
};
port = lib.mkOption {
type = lib.types.int;
default = 33422;
};
};
config = lib.mkIf cfg.enable {
services.zerobin.listenPort = cfg.port;
services.zerobin.listenAddress = "localhost";
services.nginx.virtualHosts.${cfg.host} = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString cfg.port}";
proxyWebsockets = true;
};
};
# zerobin service is broken in nixpkgs currently
systemd.services.zerobin.serviceConfig.ExecStart = lib.mkForce
"${pkgs.zerobin}/bin/zerobin --host=${cfg.listenAddress} --port=${toString cfg.listenPort} --data-dir=${cfg.dataDir}";
};
}
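A usage sketch, assuming the upstream nixpkgs zerobin module supplies the `enable`, `listenAddress`, `listenPort`, and `dataDir` options referenced above:

```nix
{
  services.zerobin = {
    enable = true;
    host = "bin.example.com"; # hypothetical
  };
}
```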


@@ -1,34 +1,21 @@
{ config, pkgs, ... }:
{ pkgs, ... }:
# Improvements to the default shell
# - use nix-locate for command-not-found
# - use nix-index for command-not-found
# - disable fish's annoying greeting message
# - add some handy shell commands
let
nix-locate = config.inputs.nix-locate.packages.${config.currentSystem}.default;
in {
{
# nix-index
programs.nix-index.enable = true;
programs.nix-index.enableFishIntegration = true;
programs.command-not-found.enable = false;
environment.systemPackages = [
nix-locate
];
programs.nix-index-database.comma.enable = true;
programs.fish = {
enable = true;
shellInit = let
wrapper = pkgs.writeScript "command-not-found" ''
#!${pkgs.bash}/bin/bash
source ${nix-locate}/etc/profile.d/command-not-found.sh
command_not_found_handle "$@"
'';
in ''
# use nix-locate for command-not-found functionality
function __fish_command_not_found_handler --on-event fish_command_not_found
${wrapper} $argv
end
shellInit = ''
# disable annoying fish shell greeting
set fish_greeting
'';
@@ -38,9 +25,11 @@ in {
myip = "dig +short myip.opendns.com @resolver1.opendns.com";
# https://linuxreviews.org/HOWTO_Test_Disk_I/O_Performance
io_seq_read = "nix run nixpkgs#fio -- --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
io_seq_write = "nix run nixpkgs#fio -- --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
io_rand_read = "nix run nixpkgs#fio -- --name TEST --eta-newline=5s --filename=temp.file --rw=randread --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting; rm temp.file";
io_rand_write = "nix run nixpkgs#fio -- --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
io_seq_read = "${pkgs.fio}/bin/fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
io_seq_write = "${pkgs.fio}/bin/fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
io_rand_read = "${pkgs.fio}/bin/fio --name TEST --eta-newline=5s --filename=temp.file --rw=randread --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting; rm temp.file";
io_rand_write = "${pkgs.fio}/bin/fio --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
llsblk = "lsblk -o +uuid,fsType";
};
}


@@ -1,65 +1,36 @@
rec {
users = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMVR/R3ZOsv7TZbICGBCHdjh1NDT8SnswUyINeJOC7QG"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE0dcqL/FhHmv+a1iz3f9LJ48xubO7MZHy35rW9SZOYM"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO0VFnn3+Mh0nWeN92jov81qNE9fpzTAHYBphNoY7HUx" # reg
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHSkKiRUUmnErOKGx81nyge/9KqjkPh8BfDk0D3oP586" # nat
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFeTK1iARlNIKP/DS8/ObBm9yUM/3L1Ub4XI5A2r9OzP" # ray
];
system = {
liza = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDY/pNyWedEfU7Tq9ikGbriRuF1ZWkHhegGS17L0Vcdl";
ponyo = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMBBlTAIp38RhErU1wNNV5MBeb+WGH0mhF/dxh5RsAXN";
ponyo-unlock = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC9LQuuImgWlkjDhEEIbM1wOd+HqRv1RxvYZuLXPSdRi";
ray = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDQM8hwKRgl8cZj7UVYATSLYu4LhG7I0WFJ9m2iWowiB";
s0 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwiXcUFtAvZCayhu4+AIcF+Ktrdgv9ee/mXSIhJbp4q";
n1 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPWlhd1Oid5Xf2zdcBrcdrR0TlhObutwcJ8piobRTpRt";
n2 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ7bRiRutnI7Bmyt/I238E3Fp5DqiClIXiVibsccipOr";
n3 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+rJEaRrFDGirQC2UoWQkmpzLg4qgTjGJgVqiipWiU5";
n4 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYm2ROIfCeGz6QtDwqAmcj2DX9tq2CZn0eLhskdvB4Z";
n5 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE5Qhvwq3PiHEKf+2/4w5ZJkSMNzFLhIRrPOR98m7wW4";
n6 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID/P/pa9+qhKAPfvvd8xSO2komJqDW0M1nCK7ZrP6PO7";
n7 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtOlOvTlMX2mxPaXDJ6VlMe5rmroUXpKmJVNxgV32xL";
};
{ config, lib, ... }:
# groups
systems = with system; [
liza
ponyo
ray
s0
n1
n2
n3
n4
n5
n6
n7
];
personal = with system; [
ray
];
servers = with system; [
liza
ponyo
s0
n1
n2
n3
n4
n5
n6
n7
];
compute = with system; [
n1
n2
n3
n4
n5
n6
n7
];
storage = with system; [
s0
{
programs.ssh.knownHosts = lib.filterAttrs (n: v: v != null) (lib.concatMapAttrs
(host: cfg: {
${host} = {
hostNames = cfg.hostNames;
publicKey = cfg.hostKey;
};
"${host}-remote-unlock" =
if cfg.remoteUnlock != null then {
hostNames = builtins.filter (h: h != null) [ cfg.remoteUnlock.clearnetHost cfg.remoteUnlock.onionHost ];
publicKey = cfg.remoteUnlock.hostKey;
} else null;
})
config.machines.hosts);
# prebuilt cmds for easy ssh LUKS unlock
environment.shellAliases =
let
unlockHosts = unlockType: lib.concatMapAttrs
(host: cfg:
if cfg.remoteUnlock != null && cfg.remoteUnlock.${unlockType} != null then {
${host} = cfg.remoteUnlock.${unlockType};
} else { })
config.machines.hosts;
in
lib.concatMapAttrs (host: addr: { "unlock-over-tor_${host}" = "torsocks ssh root@${addr}"; }) (unlockHosts "onionHost")
//
lib.concatMapAttrs (host: addr: { "unlock_${host}" = "ssh root@${addr}"; }) (unlockHosts "clearnetHost");
# TODO: Old ssh keys I will remove some day...
machines.ssh.userKeys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHSkKiRUUmnErOKGx81nyge/9KqjkPh8BfDk0D3oP586" # nat
];
}
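For illustration, a hypothetical machines.hosts entry matching the fields this module reads (hostNames, hostKey, remoteUnlock), and the shell aliases it would generate:

```nix
{
  machines.hosts.ponyo = {
    hostNames = [ "ponyo.example.net" ];          # hypothetical
    hostKey = "ssh-ed25519 AAAA...";
    remoteUnlock = {
      clearnetHost = "unlock.ponyo.example.net";  # hypothetical
      onionHost = "abcdefexample.onion";          # hypothetical
      hostKey = "ssh-ed25519 AAAA...";
    };
  };
  # Resulting aliases:
  #   unlock_ponyo          = "ssh root@unlock.ponyo.example.net"
  #   unlock-over-tor_ponyo = "torsocks ssh root@abcdefexample.onion"
}
```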

flake.lock

@@ -3,45 +3,28 @@
"agenix": {
"inputs": {
"darwin": "darwin",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1675176355,
"narHash": "sha256-Qjxh5cmN56siY97mzmBLI1+cdjXSPqmfPVsKxBvHmwI=",
"owner": "ryantm",
"repo": "agenix",
"rev": "b7ffcfe77f817d9ee992640ba1f270718d197f28",
"type": "github"
},
"original": {
"owner": "ryantm",
"repo": "agenix",
"type": "github"
}
},
"archivebox": {
"inputs": {
"flake-utils": [
"flake-utils"
"home-manager": [
"home-manager"
],
"nixpkgs": [
"nixpkgs"
],
"systems": [
"systems"
]
},
"locked": {
"lastModified": 1648612759,
"narHash": "sha256-SJwlpD2Wz3zFoX2mIYCQfwIOYHaOdeiWGFeDXsLGM84=",
"ref": "refs/heads/master",
"rev": "39d338b9b24159d8ef3309eecc0d32a2a9f102b5",
"revCount": 2,
"type": "git",
"url": "https://git.neet.dev/zuckerberg/archivebox.git"
"lastModified": 1762618334,
"narHash": "sha256-wyT7Pl6tMFbFrs8Lk/TlEs81N6L+VSybPfiIgzU8lbQ=",
"owner": "ryantm",
"repo": "agenix",
"rev": "fcdea223397448d35d9b31f798479227e80183f6",
"type": "github"
},
"original": {
"type": "git",
"url": "https://git.neet.dev/zuckerberg/archivebox.git"
"owner": "ryantm",
"repo": "agenix",
"type": "github"
}
},
"blobs": {
@@ -60,7 +43,7 @@
"type": "gitlab"
}
},
"dailybuild_modules": {
"claude-code-nix": {
"inputs": {
"flake-utils": [
"flake-utils"
@@ -70,17 +53,40 @@
]
},
"locked": {
"lastModified": 1651719222,
"narHash": "sha256-p/GY5vOP+HUlxNL4OtEhmBNEVQsedOHXEmjfCGONVmE=",
"lastModified": 1770491193,
"narHash": "sha256-zdnWeXmPZT8BpBo52s4oansT1Rq0SNzksXKpEcMc5lE=",
"owner": "sadjow",
"repo": "claude-code-nix",
"rev": "f68a2683e812d1e4f9a022ff3e0206d46347d019",
"type": "github"
},
"original": {
"owner": "sadjow",
"repo": "claude-code-nix",
"type": "github"
}
},
"dailybot": {
"inputs": {
"flake-utils": [
"flake-utils"
],
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1739947126,
"narHash": "sha256-JoiddH5H9up8jC/VKU8M7wDlk/bstKoJ3rHj+TkW4Zo=",
"ref": "refs/heads/master",
"rev": "1290ddd9a2ff2bf2d0f702750768312b80efcd34",
"revCount": 19,
"rev": "ea1ad60f1c6662103ef4a3705d8e15aa01219529",
"revCount": 20,
"type": "git",
"url": "https://git.neet.dev/zuckerberg/dailybuild_modules.git"
"url": "https://git.neet.dev/zuckerberg/dailybot.git"
},
"original": {
"type": "git",
"url": "https://git.neet.dev/zuckerberg/dailybuild_modules.git"
"url": "https://git.neet.dev/zuckerberg/dailybot.git"
}
},
"darwin": {
@@ -91,11 +97,11 @@
]
},
"locked": {
"lastModified": 1673295039,
"narHash": "sha256-AsdYgE8/GPwcelGgrntlijMg4t3hLFJFCRF3tL5WVjA=",
"lastModified": 1744478979,
"narHash": "sha256-dyN+teG9G82G+m+PX/aSAagkC+vUv0SgUw3XkPhQodQ=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "87b9d090ad39b25b2400029c64825fc2a8868943",
"rev": "43975d782b418ebf4969e9ccba82466728c2851b",
"type": "github"
},
"original": {
@@ -105,14 +111,40 @@
"type": "github"
}
},
"deploy-rs": {
"inputs": {
"flake-compat": [
"flake-compat"
],
"nixpkgs": [
"nixpkgs"
],
"utils": [
"flake-utils"
]
},
"locked": {
"lastModified": 1766051518,
"narHash": "sha256-znKOwPXQnt3o7lDb3hdf19oDo0BLP4MfBOYiWkEHoik=",
"owner": "serokell",
"repo": "deploy-rs",
"rev": "d5eff7f948535b9c723d60cd8239f8f11ddc90fa",
"type": "github"
},
"original": {
"owner": "serokell",
"repo": "deploy-rs",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1668681692,
"narHash": "sha256-Ht91NGdewz8IQLtWZ9LCeNXMSXHUss+9COoqu6JLmXU=",
"lastModified": 1767039857,
"narHash": "sha256-vNpUSpF5Nuw8xvDLj2KCwwksIbjua2LZCqhV1LNRDns=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "009399224d5e398d03b22badca40a37ac85412a1",
"rev": "5edf11c44bc78a0d334f6334cdaf7d60d732daab",
"type": "github"
},
"original": {
@@ -122,12 +154,17 @@
}
},
"flake-utils": {
"inputs": {
"systems": [
"systems"
]
},
"locked": {
"lastModified": 1667395993,
"narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
@@ -136,65 +173,139 @@
"type": "github"
}
},
"nix-locate": {
"git-hooks": {
"inputs": {
"flake-compat": [
"simple-nixos-mailserver",
"flake-compat"
],
"gitignore": "gitignore",
"nixpkgs": [
"simple-nixos-mailserver",
"nixpkgs"
]
},
"locked": {
"lastModified": 1763988335,
"narHash": "sha256-QlcnByMc8KBjpU37rbq5iP7Cp97HvjRP0ucfdh+M4Qc=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "50b9238891e388c9fdc6a5c49e49c42533a1b5ce",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "git-hooks.nix",
"type": "github"
}
},
"gitignore": {
"inputs": {
"nixpkgs": [
"simple-nixos-mailserver",
"git-hooks",
"nixpkgs"
]
},
"locked": {
"lastModified": 1709087332,
"narHash": "sha256-HG2cCnktfHsKV0s4XW83gU3F57gaTljL9KNSuG6bnQs=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "637db329424fd7e46cf4185293b9cc8c88c95394",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"home-manager": {
"inputs": {
"flake-compat": "flake-compat",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1673969751,
"narHash": "sha256-U6aYz3lqZ4NVEGEWiti1i0FyqEo4bUjnTAnA73DPnNU=",
"owner": "bennofs",
"repo": "nix-index",
"rev": "5f98881b1ed27ab6656e6d71b534f88430f6823a",
"lastModified": 1768068402,
"narHash": "sha256-bAXnnJZKJiF7Xr6eNW6+PhBf1lg2P1aFUO9+xgWkXfA=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "8bc5473b6bc2b6e1529a9c4040411e1199c43b4c",
"type": "github"
},
"original": {
"owner": "bennofs",
"repo": "nix-index",
"owner": "nix-community",
"ref": "master",
"repo": "home-manager",
"type": "github"
}
},
"microvm": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"spectrum": "spectrum"
},
"locked": {
"lastModified": 1770310890,
"narHash": "sha256-lyWAs4XKg3kLYaf4gm5qc5WJrDkYy3/qeV5G733fJww=",
"owner": "astro",
"repo": "microvm.nix",
"rev": "68c9f9c6ca91841f04f726a298c385411b7bfcd5",
"type": "github"
},
"original": {
"owner": "astro",
"repo": "microvm.nix",
"type": "github"
}
},
"nix-index-database": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1765267181,
"narHash": "sha256-d3NBA9zEtBu2JFMnTBqWj7Tmi7R5OikoU2ycrdhQEws=",
"owner": "Mic92",
"repo": "nix-index-database",
"rev": "82befcf7dc77c909b0f2a09f5da910ec95c5b78f",
"type": "github"
},
"original": {
"owner": "Mic92",
"repo": "nix-index-database",
"type": "github"
}
},
"nixos-hardware": {
"locked": {
"lastModified": 1767185284,
"narHash": "sha256-ljDBUDpD1Cg5n3mJI81Hz5qeZAwCGxon4kQW3Ho3+6Q=",
"owner": "NixOS",
"repo": "nixos-hardware",
"rev": "40b1a28dce561bea34858287fbb23052c3ee63fe",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "master",
"repo": "nixos-hardware",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1672580127,
"narHash": "sha256-3lW3xZslREhJogoOkjeZtlBtvFMyxHku7I/9IVehhT8=",
"lastModified": 1768250893,
"narHash": "sha256-fWNJYFx0QvnlGlcw54EoOYs/wv2icINHUz0FVdh9RIo=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "0874168639713f547c05947c76124f78441ea46c",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-22.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-22_05": {
"locked": {
"lastModified": 1654936503,
"narHash": "sha256-soKzdhI4jTHv/rSbh89RdlcJmrPgH8oMb/PLqiqIYVQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "dab6df51387c3878cdea09f43589a15729cae9f4",
"type": "github"
},
"original": {
"id": "nixpkgs",
"ref": "nixos-22.05",
"type": "indirect"
}
},
"nixpkgs-unstable": {
"locked": {
"lastModified": 1675835843,
"narHash": "sha256-y1dSCQPcof4CWzRYRqDj4qZzbBl+raVPAko5Prdil28=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "32f914af34f126f54b45e482fb2da4ae78f3095f",
"rev": "3971af1a8fc3646b1d554cb1269b26c84539c22e",
"type": "github"
},
"original": {
@@ -204,97 +315,77 @@
"type": "github"
}
},
"radio": {
"inputs": {
"flake-utils": [
"flake-utils"
],
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1631585589,
"narHash": "sha256-q4o/4/2pEuJyaKZwNQC5KHnzG1obClzFB7zWk9XSDfY=",
"ref": "main",
"rev": "5bf607fed977d41a269942a7d1e92f3e6d4f2473",
"revCount": 38,
"type": "git",
"url": "https://git.neet.dev/zuckerberg/radio.git"
},
"original": {
"ref": "main",
"rev": "5bf607fed977d41a269942a7d1e92f3e6d4f2473",
"type": "git",
"url": "https://git.neet.dev/zuckerberg/radio.git"
}
},
"radio-web": {
"flake": false,
"locked": {
"lastModified": 1652121792,
"narHash": "sha256-j1Y9MAjUVNgyFSeGzPoqibAnEysJDjZSXukVfQ7+bsQ=",
"ref": "refs/heads/master",
"rev": "72e7a9e80b780c84ed8d4a6374bfbb242701f900",
"revCount": 5,
"type": "git",
"url": "https://git.neet.dev/zuckerberg/radio-web.git"
},
"original": {
"type": "git",
"url": "https://git.neet.dev/zuckerberg/radio-web.git"
}
},
"root": {
"inputs": {
"agenix": "agenix",
"archivebox": "archivebox",
"dailybuild_modules": "dailybuild_modules",
"claude-code-nix": "claude-code-nix",
"dailybot": "dailybot",
"deploy-rs": "deploy-rs",
"flake-compat": "flake-compat",
"flake-utils": "flake-utils",
"nix-locate": "nix-locate",
"home-manager": "home-manager",
"microvm": "microvm",
"nix-index-database": "nix-index-database",
"nixos-hardware": "nixos-hardware",
"nixpkgs": "nixpkgs",
"nixpkgs-unstable": "nixpkgs-unstable",
"radio": "radio",
"radio-web": "radio-web",
"simple-nixos-mailserver": "simple-nixos-mailserver"
"simple-nixos-mailserver": "simple-nixos-mailserver",
"systems": "systems"
}
},
"simple-nixos-mailserver": {
"inputs": {
"blobs": "blobs",
"flake-compat": [
"flake-compat"
],
"git-hooks": "git-hooks",
"nixpkgs": [
"nixpkgs"
],
"nixpkgs-22_05": "nixpkgs-22_05",
"utils": "utils"
]
},
"locked": {
"lastModified": 1655930346,
"narHash": "sha256-ht56HHOzEhjeIgAv5ZNFjSVX/in1YlUs0HG9c1EUXTM=",
"lastModified": 1766321686,
"narHash": "sha256-icOWbnD977HXhveirqA10zoqvErczVs3NKx8Bj+ikHY=",
"owner": "simple-nixos-mailserver",
"repo": "nixos-mailserver",
"rev": "f535d8123c4761b2ed8138f3d202ea710a334a1d",
"rev": "7d433bf89882f61621f95082e90a4ab91eb0bdd3",
"type": "gitlab"
},
"original": {
"owner": "simple-nixos-mailserver",
"ref": "nixos-22.05",
"ref": "master",
"repo": "nixos-mailserver",
"type": "gitlab"
}
},
"utils": {
"spectrum": {
"flake": false,
"locked": {
"lastModified": 1605370193,
"narHash": "sha256-YyMTf3URDL/otKdKgtoMChu4vfVL3vCMkRqpGifhUn0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5021eac20303a61fafe17224c087f5519baed54d",
"lastModified": 1759482047,
"narHash": "sha256-H1wiXRQHxxPyMMlP39ce3ROKCwI5/tUn36P8x6dFiiQ=",
"ref": "refs/heads/main",
"rev": "c5d5786d3dc938af0b279c542d1e43bce381b4b9",
"revCount": 996,
"type": "git",
"url": "https://spectrum-os.org/git/spectrum"
},
"original": {
"type": "git",
"url": "https://spectrum-os.org/git/spectrum"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}

flake.nix

@@ -1,104 +1,205 @@
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.05";
nixpkgs-unstable.url = "github:NixOS/nixpkgs/master";
# nixpkgs
nixpkgs.url = "github:NixOS/nixpkgs/master";
flake-utils.url = "github:numtide/flake-utils";
nix-locate.url = "github:bennofs/nix-index";
nix-locate.inputs.nixpkgs.follows = "nixpkgs";
# mail server
simple-nixos-mailserver.url = "gitlab:simple-nixos-mailserver/nixos-mailserver/nixos-22.05";
simple-nixos-mailserver.inputs.nixpkgs.follows = "nixpkgs";
# agenix
agenix.url = "github:ryantm/agenix";
agenix.inputs.nixpkgs.follows = "nixpkgs";
# radio
radio.url = "git+https://git.neet.dev/zuckerberg/radio.git?ref=main&rev=5bf607fed977d41a269942a7d1e92f3e6d4f2473";
radio.inputs.nixpkgs.follows = "nixpkgs";
radio.inputs.flake-utils.follows = "flake-utils";
radio-web.url = "git+https://git.neet.dev/zuckerberg/radio-web.git";
radio-web.flake = false;
# drastikbot
dailybuild_modules.url = "git+https://git.neet.dev/zuckerberg/dailybuild_modules.git";
dailybuild_modules.inputs.nixpkgs.follows = "nixpkgs";
dailybuild_modules.inputs.flake-utils.follows = "flake-utils";
# archivebox
archivebox.url = "git+https://git.neet.dev/zuckerberg/archivebox.git";
archivebox.inputs.nixpkgs.follows = "nixpkgs";
archivebox.inputs.flake-utils.follows = "flake-utils";
# Common Utils Among flake inputs
systems.url = "github:nix-systems/default";
flake-utils = {
url = "github:numtide/flake-utils";
inputs.systems.follows = "systems";
};
flake-compat = {
url = "github:edolstra/flake-compat";
flake = false;
};
outputs = { self, nixpkgs, nixpkgs-unstable, ... }@inputs: {
# NixOS hardware
nixos-hardware.url = "github:NixOS/nixos-hardware/master";
# Home Manager
home-manager = {
url = "github:nix-community/home-manager/master";
inputs.nixpkgs.follows = "nixpkgs";
};
# Mail Server
simple-nixos-mailserver = {
url = "gitlab:simple-nixos-mailserver/nixos-mailserver/master";
inputs = {
nixpkgs.follows = "nixpkgs";
flake-compat.follows = "flake-compat";
};
};
# Agenix
agenix = {
url = "github:ryantm/agenix";
inputs = {
nixpkgs.follows = "nixpkgs";
systems.follows = "systems";
home-manager.follows = "home-manager";
};
};
# Dailybot
dailybot = {
url = "git+https://git.neet.dev/zuckerberg/dailybot.git";
inputs = {
nixpkgs.follows = "nixpkgs";
flake-utils.follows = "flake-utils";
};
};
# NixOS deployment
deploy-rs = {
url = "github:serokell/deploy-rs";
inputs = {
nixpkgs.follows = "nixpkgs";
flake-compat.follows = "flake-compat";
utils.follows = "flake-utils";
};
};
# Prebuilt nix-index database
nix-index-database = {
url = "github:Mic92/nix-index-database";
inputs.nixpkgs.follows = "nixpkgs";
};
# MicroVM support
microvm = {
url = "github:astro/microvm.nix";
inputs.nixpkgs.follows = "nixpkgs";
};
# Up to date claude-code
claude-code-nix = {
url = "github:sadjow/claude-code-nix";
inputs = {
nixpkgs.follows = "nixpkgs";
flake-utils.follows = "flake-utils";
};
};
};
outputs = { self, nixpkgs, ... }@inputs:
let
machineHosts = (import ./common/machine-info/moduleless.nix
{
inherit nixpkgs;
assertionsModule = "${nixpkgs}/nixos/modules/misc/assertions.nix";
}).machines.hosts;
in
{
nixosConfigurations =
let
modules = system: [
modules = system: hostname: with inputs; [
./common
inputs.simple-nixos-mailserver.nixosModule
inputs.agenix.nixosModules.default
inputs.dailybuild_modules.nixosModule
inputs.archivebox.nixosModule
simple-nixos-mailserver.nixosModule
agenix.nixosModules.default
dailybot.nixosModule
nix-index-database.nixosModules.default
home-manager.nixosModules.home-manager
microvm.nixosModules.host
self.nixosModules.kernel-modules
({ lib, ... }: {
config.environment.systemPackages = [
inputs.agenix.packages.${system}.agenix
config = {
nixpkgs.overlays = [
self.overlays.default
inputs.claude-code-nix.overlays.default
];
environment.systemPackages = [
agenix.packages.${system}.agenix
];
networking.hostName = hostname;
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
home-manager.users.googlebot = import ./home/googlebot.nix;
};
# because nixos specialArgs doesn't work for containers... need to pass in inputs a different way
options.inputs = lib.mkOption { default = inputs; };
options.currentSystem = lib.mkOption { default = system; };
})
];
mkSystem = system: nixpkgs: path:
mkSystem = system: nixpkgs: path: hostname:
let
allModules = modules system;
in nixpkgs.lib.nixosSystem {
allModules = modules system hostname;
# allow patching nixpkgs, remove this hack once this is solved: https://github.com/NixOS/nix/issues/3920
patchedNixpkgsSrc = nixpkgs.legacyPackages.${system}.applyPatches {
name = "nixpkgs-patched";
src = nixpkgs;
patches = [
./patches/dont-break-nix-serve.patch
];
};
patchedNixpkgs = nixpkgs.lib.fix (self: (import "${patchedNixpkgsSrc}/flake.nix").outputs { self = nixpkgs; });
in
patchedNixpkgs.lib.nixosSystem {
inherit system;
modules = allModules ++ [ path ];
specialArgs = {
inherit allModules;
lib = self.lib;
nixos-hardware = inputs.nixos-hardware;
};
};
in
nixpkgs.lib.mapAttrs
(hostname: cfg:
mkSystem cfg.arch nixpkgs cfg.configurationPath hostname)
machineHosts;
# kexec produces a tarball; for a self-extracting bundle see:
# https://github.com/nix-community/nixos-generators/blob/master/formats/kexec.nix#L60
packages =
let
mkEphemeral = system: nixpkgs.lib.nixosSystem {
inherit system;
modules = [
./machines/ephemeral/minimal.nix
inputs.nix-index-database.nixosModules.default
];
};
in
{
"reg" = mkSystem "x86_64-linux" nixpkgs ./machines/reg/configuration.nix;
"ray" = mkSystem "x86_64-linux" nixpkgs-unstable ./machines/ray/configuration.nix;
"nat" = mkSystem "aarch64-linux" nixpkgs ./machines/nat/configuration.nix;
"liza" = mkSystem "x86_64-linux" nixpkgs ./machines/liza/configuration.nix;
"ponyo" = mkSystem "x86_64-linux" nixpkgs ./machines/ponyo/configuration.nix;
"s0" = mkSystem "aarch64-linux" nixpkgs-unstable ./machines/storage/s0/configuration.nix;
"n1" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n1/configuration.nix;
"n2" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n2/configuration.nix;
"n3" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n3/configuration.nix;
"n4" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n4/configuration.nix;
"n5" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n5/configuration.nix;
"n6" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n6/configuration.nix;
"n7" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n7/configuration.nix;
"x86_64-linux" = {
kexec = (mkEphemeral "x86_64-linux").config.system.build.images.kexec;
iso = (mkEphemeral "x86_64-linux").config.system.build.images.iso;
};
"aarch64-linux" = {
kexec = (mkEphemeral "aarch64-linux").config.system.build.images.kexec;
iso = (mkEphemeral "aarch64-linux").config.system.build.images.iso;
};
};
packages = let
mkKexec = system:
(nixpkgs.lib.nixosSystem {
inherit system;
modules = [ ./machines/ephemeral/kexec.nix ];
}).config.system.build.kexec_tarball;
mkIso = system:
(nixpkgs.lib.nixosSystem {
inherit system;
modules = [ ./machines/ephemeral/iso.nix ];
}).config.system.build.isoImage;
in {
"x86_64-linux"."kexec" = mkKexec "x86_64-linux";
"x86_64-linux"."iso" = mkIso "x86_64-linux";
"aarch64-linux"."kexec" = mkKexec "aarch64-linux";
"aarch64-linux"."iso" = mkIso "aarch64-linux";
overlays.default = import ./overlays { inherit inputs; };
nixosModules.kernel-modules = import ./overlays/kernel-modules;
deploy.nodes =
let
mkDeploy = configName: arch: hostname: {
inherit hostname;
magicRollback = false;
sshUser = "root";
profiles.system.path = inputs.deploy-rs.lib.${arch}.activate.nixos self.nixosConfigurations.${configName};
};
in
nixpkgs.lib.mapAttrs
(hostname: cfg:
mkDeploy hostname cfg.arch (builtins.head cfg.hostNames))
machineHosts;
checks = builtins.mapAttrs (system: deployLib: deployLib.deployChecks self.deploy) inputs.deploy-rs.lib;
lib = nixpkgs.lib.extend (final: prev: import ./lib { lib = nixpkgs.lib; });
};
}
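For context, a sketch of the shape mkSystem and mkDeploy above expect from ./common/machine-info (fields inferred from the accesses cfg.arch, cfg.configurationPath, and cfg.hostNames; values hypothetical):

```nix
{
  machines.hosts.ponyo = {
    arch = "x86_64-linux";
    configurationPath = ./machines/ponyo/configuration.nix;
    hostNames = [ "ponyo.example.net" ]; # head of the list is used as the deploy target
  };
}
```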

home/googlebot.nix

@@ -0,0 +1,119 @@
{ lib, pkgs, osConfig, ... }:
# https://home-manager-options.extranix.com/
# https://nix-community.github.io/home-manager/options.xhtml
let
# Check if the current machine has the role "personal"
thisMachineIsPersonal = osConfig.thisMachine.hasRole."personal";
in
{
home.username = "googlebot";
home.homeDirectory = "/home/googlebot";
home.stateVersion = "24.11";
programs.home-manager.enable = true;
services.ssh-agent.enable = true;
# System Monitoring
programs.btop.enable = true;
programs.bottom.enable = true;
# Modern "ls" replacement
programs.pls.enable = true;
programs.pls.enableFishIntegration = false;
programs.eza.enable = true;
# Graphical terminal
programs.ghostty.enable = thisMachineIsPersonal;
programs.ghostty.settings = {
theme = "Snazzy";
font-size = 10;
};
# Advanced terminal file explorer
programs.broot.enable = true;
# Shell prompt theming
programs.fish.enable = true;
programs.starship.enable = true;
programs.starship.enableFishIntegration = true;
programs.starship.enableInteractive = true;
# programs.oh-my-posh.enable = true;
# programs.oh-my-posh.enableFishIntegration = true;
# Advanced search
programs.ripgrep.enable = true;
# tldr: Simplified, example-based and community-driven man pages.
programs.tealdeer.enable = true;
home.shellAliases = {
sudo = "doas";
ls2 = "eza";
explorer = "broot";
};
programs.zed-editor = {
enable = thisMachineIsPersonal;
};
programs.vscode = {
enable = thisMachineIsPersonal;
# Must use fhs version for vscode-lldb
package = pkgs.vscodium-fhs;
profiles.default = {
userSettings = {
editor.formatOnSave = true;
nix = {
enableLanguageServer = true;
serverPath = "${pkgs.nil}/bin/nil";
serverSettings.nil = {
formatting.command = [ "${pkgs.nixpkgs-fmt}/bin/nixpkgs-fmt" ];
nix.flake.autoArchive = true;
};
};
dotnetAcquisitionExtension.sharedExistingDotnetPath = "${pkgs.dotnet-sdk_9}/bin";
godotTools = {
lsp.serverPort = 6005; # port needs to match Godot configuration
editorPath.godot4 = "godot-mono";
};
rust-analyzer = {
restartServerOnConfigChange = true;
testExplorer = true;
server.path = "rust-analyzer"; # Use the rust-analyzer from PATH (which is set by nixEnvSelector from the project's flake)
};
nixEnvSelector = {
useFlakes = true; # This hasn't ever worked for me and I have to use shell.nix... but maybe someday
suggestion = false; # Stop the really annoying nagging
};
};
extensions = with pkgs.vscode-extensions; [
bbenoist.nix # nix syntax support
arrterian.nix-env-selector # nix dev envs
dart-code.dart-code
dart-code.flutter
golang.go
jnoortheen.nix-ide
ms-vscode.cpptools
rust-lang.rust-analyzer
vadimcn.vscode-lldb
tauri-apps.tauri-vscode
platformio.platformio-vscode-ide
vue.volar
wgsl-analyzer.wgsl-analyzer
# Godot
geequlim.godot-tools # For Godot GDScript support
ms-dotnettools.csharp
ms-dotnettools.vscode-dotnet-runtime
];
};
};
home.packages = lib.mkIf thisMachineIsPersonal [
pkgs.claude-code
pkgs.dotnetCorePackages.dotnet_9.sdk # For Godot-Mono VSCode-Extension CSharp
];
}

lib/default.nix

@@ -0,0 +1,65 @@
{ lib, ... }:
with lib;
{
# Passthrough trace for debugging
pTrace = v: traceSeq v v;
# find the total sum of an int list
sum = foldr (x: y: x + y) 0;
# splits a list of length two into two params, then passes them to a func
splitPair = f: pair: f (head pair) (last pair);
# Finds the max value in a list
maxList = foldr max 0;
# Sorts an int list. Greatest value first
sortList = sort (x: y: x > y);
# Cuts a list in half and returns the two parts in a list
cutInHalf = l: [ (take (length l / 2) l) (drop (length l / 2) l) ];
# Splits a list into a list of lists with length cnt
chunksOf = cnt: l:
if length l > 0 then
[ (take cnt l) ] ++ chunksOf cnt (drop cnt l)
else [ ];
# same as intersectLists but takes a list of lists to intersect instead of just two
intersectManyLists = ll: foldr intersectLists (head ll) ll;
# converts a boolean to an int (C style)
boolToInt = b: if b then 1 else 0;
# drops the last element of a list
dropLast = l: take (length l - 1) l;
# transposes a matrix
transpose = ll:
let
outerSize = length ll;
innerSize = length (elemAt ll 0);
in
genList (i: genList (j: elemAt (elemAt ll j) i) outerSize) innerSize;
# attrset recursiveUpdate but for a list of attrsets
combineAttrs = foldl recursiveUpdate { };
# visits every attrset element of an attrset recursively
# and accumulates the result of every visit in a flat list
recurisveVisitAttrs = f: set:
let
visitor = n: v:
if isAttrs v then [ (f n v) ] ++ recurisveVisitAttrs f v
else [ (f n v) ];
in
concatLists (map (name: visitor name set.${name}) (attrNames set));
# merges two lists of the same size (similar to map but both lists are inputs per iteration)
mergeLists = f: a: imap0 (i: f (elemAt a i));
map2D = f: ll:
let
outerSize = length ll;
innerSize = length (elemAt ll 0);
getElem = x: y: elemAt (elemAt ll y) x;
in
genList (y: genList (x: f x y (getElem x y)) innerSize) outerSize;
# Generate a deterministic MAC address from a name
# Uses locally administered unicast range (02:xx:xx:xx:xx:xx)
mkMac = name:
let
hash = builtins.hashString "sha256" name;
octets = map (i: builtins.substring i 2 hash) [ 0 2 4 6 8 ];
in
"02:${builtins.concatStringsSep ":" octets}";
}
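A few example evaluations of the helpers above (a sketch to check in nix repl with this lib loaded):

```nix
# chunksOf 2 [ 1 2 3 4 5 ]       => [ [ 1 2 ] [ 3 4 ] [ 5 ] ]
# transpose [ [ 1 2 ] [ 3 4 ] ]  => [ [ 1 3 ] [ 2 4 ] ]
# sum [ 1 2 3 ]                  => 6
# dropLast [ 1 2 3 ]             => [ 1 2 ]
# mkMac "example-vm"             => a deterministic "02:xx:xx:xx:xx:xx" address
```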


@@ -1,4 +0,0 @@
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p bash
nix flake update --commit-lock-file


@@ -1,24 +0,0 @@
{ config, ... }:
{
# NixOS wants to enable GRUB by default
boot.loader.grub.enable = false;
# Enables the generation of /boot/extlinux/extlinux.conf
boot.loader.generic-extlinux-compatible.enable = true;
fileSystems = {
"/" = {
device = "/dev/disk/by-label/NIXOS_SD";
fsType = "ext4";
};
};
system.autoUpgrade.enable = true;
networking.interfaces.eth0.useDHCP = true;
hardware.deviceTree.enable = true;
hardware.deviceTree.overlays = [
./sopine-baseboard-ethernet.dtbo # fix pine64 clusterboard ethernet
];
}


@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n1";
}


@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n2";
}


@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n3";
}

Some files were not shown because too many files have changed in this diff.