171 Commits

Author SHA1 Message Date
c4bd6777c5 Update to NixOS 24.05
Some checks failed
Check Flake / check-flake (push) Failing after 2m31s
2024-06-01 20:03:09 -06:00
fd1ead0b62 Add nixos-hardware config for Howl 2024-06-01 19:57:24 -06:00
37bd7254b9 Add Howl
All checks were successful
Check Flake / check-flake (push) Successful in 1m54s
2024-05-31 23:29:39 -06:00
74e41de9d6 Enable unify v8 service
All checks were successful
Check Flake / check-flake (push) Successful in 56s
2024-05-26 17:24:46 -06:00
0bf0b8b88b Enable ollama service 2024-05-26 17:24:07 -06:00
702129d778 Enable CUDA support 2024-05-26 17:23:38 -06:00
88c67dde84 Open C&C ports 2024-05-26 17:21:58 -06:00
8e3a0761e8 Clean up 2024-05-26 17:21:34 -06:00
a785890990 Fix esphome so that it can build again 2024-05-26 17:20:05 -06:00
b482a8c106 Restore frigate functionality by reverting to an older tensorflow version for libedgetpu 2024-05-26 17:16:59 -06:00
efe50be604 Update nixpkgs
All checks were successful
Check Flake / check-flake (push) Successful in 53s
2024-03-17 09:39:54 -06:00
99904d0066 Update 'Actual' and 'Actual Server' to 'v24.3.0'
All checks were successful
Check Flake / check-flake (push) Successful in 14m33s
2024-03-03 14:57:23 -07:00
55e44bc3d0 Add 'tree' to system pkgs 2024-03-03 14:53:14 -07:00
da7ffa839b Blackhole spammed email address
All checks were successful
Check Flake / check-flake (push) Successful in 5m18s
2024-02-20 18:13:19 -07:00
01af25a57e Add Actual server
All checks were successful
Check Flake / check-flake (push) Successful in 6m3s
2024-02-19 19:44:07 -07:00
bfc1bb2da9 Use a makefile for utility snippets
All checks were successful
Check Flake / check-flake (push) Successful in 12m54s
2024-02-18 17:30:52 -07:00
0e59fa3518 Add easy boot configuration profile limit 2024-02-18 17:30:12 -07:00
7e812001f0 Add librechat
All checks were successful
Check Flake / check-flake (push) Successful in 6m12s
2024-02-09 19:57:09 -07:00
14c19b80ef Stop auto upgrade
All checks were successful
Check Flake / check-flake (push) Successful in 1m2s
2024-02-05 11:32:16 -07:00
e8dd0cb5ff Increase gitea session length
All checks were successful
Check Flake / check-flake (push) Successful in 2m17s
2024-02-04 15:48:06 -07:00
dc9f5e969a Update nextcloud
All checks were successful
Check Flake / check-flake (push) Successful in 2m48s
2024-02-04 14:34:42 -07:00
03150667b6 Enable gitea index and lfs. Fix warning.
All checks were successful
Check Flake / check-flake (push) Successful in 4m49s
2024-02-04 13:59:39 -07:00
1dfd7bc8a2 Increase seed ratio
All checks were successful
Check Flake / check-flake (push) Successful in 2m58s
2024-02-03 14:15:49 -07:00
fa649b1e2a Add missing locale settings so perl stops complaining
All checks were successful
Check Flake / check-flake (push) Successful in 12m4s
2024-02-03 14:11:26 -07:00
e34752c791 Fix transmission running in a container
https://github.com/NixOS/nixpkgs/issues/258793
2024-02-03 14:10:35 -07:00
75031567bd Two radio endpoints
All checks were successful
Check Flake / check-flake (push) Successful in 50s
2024-02-02 20:23:40 -07:00
800a95d431 Update to nixos 23.11
All checks were successful
Check Flake / check-flake (push) Successful in 1m24s
2024-02-01 21:42:33 -07:00
932b05a42e Basic oauth proxy for frigate
All checks were successful
Check Flake / check-flake (push) Successful in 1m13s
2024-01-30 22:12:18 -07:00
b5cc4d4609 Emulate ARM systems for building 2024-01-30 21:59:09 -07:00
ba3d15d82a PoC: Frigate + PCIe Coral + ESPCam, Home Assistant, ESPHome, MQTT, zigbee2mqtt
All checks were successful
Check Flake / check-flake (push) Successful in 3m24s
2023-12-17 21:29:45 -07:00
e80fb7b3db PoC: Frigate + PCIe Coral + ESPCam, Home Assistant, ESPHome, MQTT, zigbee2mqtt
Some checks failed
Check Flake / check-flake (push) Failing after 1m1s
2023-12-17 14:29:45 -07:00
84e1f6e573 wireless role was removed 2023-12-02 10:26:44 -07:00
c4847bd39b Use dashy for services homepage
All checks were successful
Check Flake / check-flake (push) Successful in 5m25s
2023-11-08 21:35:10 -07:00
c0c1ec5c67 Enable autologin for zoidberg 2023-11-08 21:34:13 -07:00
6739115cfb Fix sddm barrier service for current nixpkgs version 2023-11-08 21:33:38 -07:00
4606cc32ba Enable adb debugging 2023-11-08 21:32:26 -07:00
2d27bf7505 Allow other users to access public samba mount 2023-11-08 21:32:00 -07:00
d07af6d101 Should use tailscale eventually for remote luks unlocking 2023-11-08 21:31:14 -07:00
4890dc20e0 Add basic nix utilities
All checks were successful
Check Flake / check-flake (push) Successful in 2m21s
2023-10-20 20:13:08 -06:00
8b01a9b240 Use podman instead of docker 2023-10-20 20:12:14 -06:00
8dfba8646c Fix CI builder
All checks were successful
Check Flake / check-flake (push) Successful in 1m5s
2023-10-20 19:52:33 -06:00
63c0f52955 s0: use eth1
Some checks failed
Check Flake / check-flake (push) Failing after 9s
2023-10-16 20:21:00 -06:00
5413a8e7db Remove mounts that fail. These never worked 2023-10-16 20:20:32 -06:00
330c801e43 Fix issue where wg vpn starts slightly too early for internet access 2023-10-16 20:19:34 -06:00
8ba08ce982 Zoidberg move /boot device
Some checks failed
Check Flake / check-flake (push) Failing after 6m57s
2023-10-15 19:23:24 -06:00
2b50aeba93 Zoidberg auto login 2023-10-15 19:22:51 -06:00
c1aef574b1 Try to build only x86_64 for now
Some checks failed
Check Flake / check-flake (push) Failing after 8m22s
2023-10-15 19:09:40 -06:00
52ed25f1b9 Push derivations built during nix flake check to binary cache
Some checks failed
Check Flake / check-flake (push) Failing after 1m17s
2023-10-15 18:00:38 -06:00
0446d18712 Use official nixos module for gitea actions runner 2023-10-15 17:58:03 -06:00
d2bbbb827e Disable router 2023-10-15 17:55:44 -06:00
6fba594625 Target nixpkgs 23.05 2023-10-15 17:55:04 -06:00
fa6e092c06 Update zoidberg keyfile
Some checks failed
Check Flake / check-flake (push) Failing after 6m52s
2023-09-04 17:18:42 -06:00
3a6dae2b82 Enable barrier for system-wide use
Some checks failed
Check Flake / check-flake (push) Failing after 7m29s
2023-09-03 21:59:31 -06:00
62bb740634 Enable ROCm 2023-09-03 21:58:52 -06:00
577e0d21bc Xbox wireless controller support 2023-09-03 21:58:08 -06:00
b481a518f5 Samba mount 2023-09-03 21:57:24 -06:00
f93b2c6908 Steam login option 2023-09-03 21:56:37 -06:00
890b24200e Retroarch
Some checks failed
Check Flake / check-flake (push) Failing after 8m51s
2023-08-13 18:03:45 -06:00
d3259457de Use latest kernel so amdgpu doesn't crash 2023-08-12 23:17:26 -06:00
8eb42ee68b Add common user for kodi 2023-08-12 23:16:52 -06:00
9d4c48badb Use Barrier 2023-08-12 23:16:26 -06:00
9cf2b82e92 Update nixpkgs and cleanup
Some checks failed
Check Flake / check-flake (push) Failing after 10m41s
2023-08-12 19:40:22 -06:00
61ca918cca flake.lock: Update
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/2994d002dcff5353ca1ac48ec584c7f6589fe447' (2023-04-21)
  → 'github:ryantm/agenix/d8c973fd228949736dedf61b7f8cc1ece3236792' (2023-07-24)
• Added input 'agenix/home-manager':
    'github:nix-community/home-manager/32d3e39c491e2f91152c84f8ad8b003420eab0a1' (2023-04-22)
• Added input 'agenix/home-manager/nixpkgs':
    follows 'agenix/nixpkgs'
• Updated input 'deploy-rs':
    'github:serokell/deploy-rs/c2ea4e642dc50fd44b537e9860ec95867af30d39' (2023-04-21)
  → 'github:serokell/deploy-rs/724463b5a94daa810abfc64a4f87faef4e00f984' (2023-06-14)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/cfacdce06f30d2b68473a46042957675eebb3401' (2023-04-11)
  → 'github:numtide/flake-utils/919d646de7be200f3bf08cb76ae1f09402b6f9b4' (2023-07-11)
• Updated input 'nix-index-database':
    'github:Mic92/nix-index-database/e3e320b19c192f40a5b98e8776e3870df62dee8a' (2023-04-25)
  → 'github:Mic92/nix-index-database/6c626d54d0414d34c771c0f6f9d771bc8aaaa3c4' (2023-08-06)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/297187b30a19f147ef260abb5abd93b0706af238' (2023-04-30)
  → 'github:NixOS/nixpkgs/a4d0fe7270cc03eeb1aba4e8b343fe47bfd7c4d5' (2023-08-13)
2023-08-12 19:00:16 -06:00
ef61792da4 Add maestral
Some checks failed
Check Flake / check-flake (push) Failing after 30s
2023-08-12 18:27:24 -06:00
3dc97f4960 Enable kde scaling 2023-08-12 18:27:01 -06:00
f4a26a8d15 Enable zfs scrubbing 2023-08-12 18:26:13 -06:00
37782a26d5 Add pavucontrol-qt 2023-08-12 18:25:46 -06:00
1434bd2df1 Share userspace packages
Some checks failed
Check Flake / check-flake (push) Failing after 19s
2023-08-11 20:48:27 -06:00
e49ea3a7c4 Share userspace packages
Some checks failed
Check Flake / check-flake (push) Failing after 8s
2023-08-11 20:45:34 -06:00
9a6cde1e89 Get zoidberg ready
Some checks failed
Check Flake / check-flake (push) Failing after 1m34s
2023-08-11 19:51:42 -06:00
35972b6d68 Xbox controller support
Some checks failed
Check Flake / check-flake (push) Failing after 18s
2023-08-10 20:39:41 -06:00
b8021c1756 Samba mount for zoidberg
Some checks failed
Check Flake / check-flake (push) Failing after 18s
2023-08-10 19:45:11 -06:00
4b21489141 Increase boot timeout for zoidberg
Some checks failed
Check Flake / check-flake (push) Failing after 19s
2023-08-10 19:44:44 -06:00
a256ab7728 Rekey secrets 2023-08-10 19:44:20 -06:00
da7ebe7baa Add Zoidberg
Some checks failed
Check Flake / check-flake (push) Failing after 2m43s
2023-08-10 19:40:01 -06:00
1922bbbcfd Local arduino development 2023-08-10 18:05:45 -06:00
b17be86927 Cleanup 2023-08-10 18:04:46 -06:00
ec73a63e09 Define vscodium extensions
All checks were successful
Check Flake / check-flake (push) Successful in 30m4s
2023-05-10 12:05:46 -06:00
af26a004e5 Forwards 2023-05-10 12:04:57 -06:00
d83782f315 Set up Nix build worker
All checks were successful
Check Flake / check-flake (push) Successful in 19m33s
2023-04-30 12:49:15 -06:00
162b544249 Set binary cache priority 2023-04-30 09:13:49 -06:00
0c58e62ed4 flake.lock: Update
All checks were successful
Check Flake / check-flake (push) Successful in 1m27s
Flake lock file updates:

• Updated input 'nix-index-database':
    'github:Mic92/nix-index-database/68ec961c51f48768f72d2bbdb396ce65a316677e' (2023-04-15)
  → 'github:Mic92/nix-index-database/e3e320b19c192f40a5b98e8776e3870df62dee8a' (2023-04-25)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/8dafae7c03d6aa8c2ae0a0612fbcb47e994e3fb8' (2023-04-22)
  → 'github:NixOS/nixpkgs/297187b30a19f147ef260abb5abd93b0706af238' (2023-04-30)
2023-04-29 20:34:11 -06:00
96de109d62 Basic binary cache
All checks were successful
Check Flake / check-flake (push) Successful in 7m55s
2023-04-29 20:33:10 -06:00
0efcf8f3fc Flake check gitea action
All checks were successful
Check Flake / check-flake (push) Successful in 1m28s
2023-04-29 19:20:48 -06:00
2009180827 Add mail user 2023-04-29 18:24:20 -06:00
306ce8bc3f Move s0 to systemd-boot 2023-04-25 23:41:08 -06:00
b5dd983ba3 Automatically set machine hostname 2023-04-24 20:52:17 -06:00
832894edfc Gitea runner 2023-04-23 10:29:18 -06:00
feb6270952 Update options for newer nixpkgs 2023-04-23 10:28:55 -06:00
b4dd2d4a92 update TODOs 2023-04-23 10:16:54 -06:00
38c2e5aece Fix properties.nix path loading 2023-04-21 23:24:05 -06:00
0ef689b750 flake.lock: Update
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/b7ffcfe77f817d9ee992640ba1f270718d197f28' (2023-01-31)
  → 'github:ryantm/agenix/2994d002dcff5353ca1ac48ec584c7f6589fe447' (2023-04-21)
• Updated input 'deploy-rs':
    'github:serokell/deploy-rs/8c9ea9605eed20528bf60fae35a2b613b901fd77' (2023-01-19)
  → 'github:serokell/deploy-rs/c2ea4e642dc50fd44b537e9860ec95867af30d39' (2023-04-21)
• Updated input 'flake-utils':
    'github:numtide/flake-utils/5aed5285a952e0b949eb3ba02c12fa4fcfef535f' (2022-11-02)
  → 'github:numtide/flake-utils/cfacdce06f30d2b68473a46042957675eebb3401' (2023-04-11)
• Added input 'flake-utils/systems':
    'github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e' (2023-04-09)
• Updated input 'nix-index-database':
    'github:Mic92/nix-index-database/4306fa7c12e098360439faac1a2e6b8e509ec97c' (2023-02-26)
  → 'github:Mic92/nix-index-database/68ec961c51f48768f72d2bbdb396ce65a316677e' (2023-04-15)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/78c4d33c16092e535bc4ba1284ba49e3e138483a' (2023-03-03)
  → 'github:NixOS/nixpkgs/8dafae7c03d6aa8c2ae0a0612fbcb47e994e3fb8' (2023-04-22)
2023-04-21 21:22:00 -06:00
e72e19b7e8 Fix auto upgrade 2023-04-21 18:58:54 -06:00
03603119e5 Fix invalid import issue. 2023-04-21 18:57:06 -06:00
71baa09bd2 Refactor imports and secrets. Add per system properties and role based secret access.
Highlights
- No need to update the flake for every machine anymore; just add a properties.nix file.
- Roles are automatically generated from all machine configurations.
- Roles and their secrets are automatically grouped and show up in agenix secrets.nix
- Machines and their service configs may now query the properties of all machines.
- Machine configuration and secrets are now completely isolated into each machine's directory.
- Safety checks ensure no mixing of luks unlocking secrets and hosts with primary ones.
- SSH pubkeys are no longer centrally stored; instead each pubkey lives with the machine that holds the private key, for better cleanup.
2023-04-21 12:58:11 -06:00
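
The highlights above mention that adding a machine now only needs a properties.nix file. A minimal sketch of what such a file might contain, based on the machine-info options shown further down in this diff (the hostnames, role, and keys are placeholders, not real values):

{
  # machines/example/properties.nix (hypothetical machine)
  hostNames = [ "example" "example.neet.dev" ];
  arch = "x86_64-linux";
  systemRoles = [ "personal" ];
  hostKey = "ssh-ed25519 AAAA...placeholder";
  # user keys are only allowed on machines with the "personal" role
  userKeys = [ "ssh-ed25519 AAAA...placeholder" ];
}
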
a02775a234 Update install steps 2023-04-19 21:17:45 -06:00
5800359214 Update install steps 2023-04-19 21:17:03 -06:00
0bd42f1850 Update install steps 2023-04-19 21:15:58 -06:00
40f0e5d2ac Add Phil 2023-04-19 18:12:42 -06:00
f90b9f85fd try out appvm 2023-04-18 23:15:21 -06:00
5b084fffcc moonlander 2023-04-18 23:15:03 -06:00
4dd6401f8c update TODOs 2023-04-18 23:14:49 -06:00
260bbc1ffd Use doas instead of sudo 2023-04-10 22:03:57 -06:00
c8132a67d0 Use lf as terminal file explorer 2023-04-10 22:03:29 -06:00
3412d5caf9 Use hashed passwordfile just to be safe 2023-04-09 23:00:10 -06:00
1065cc4b59 Enable gitea email notifications 2023-04-09 22:05:23 -06:00
154b37879b Cross off finished TODOs 2023-04-09 22:04:51 -06:00
a34238b3a9 Easily run restic commands on a backup group 2023-04-09 13:06:15 -06:00
42e2ebd294 Allow marking folders as omitted from backup 2023-04-09 12:35:20 -06:00
378cf47683 restic backups 2023-04-08 21:25:55 -06:00
f68a4f4431 nixpkgs-fmt everything 2023-04-04 23:30:28 -06:00
3c683e7b9e NixOS router is now in active use :) 2023-04-04 20:53:38 -06:00
68bd70b525 Basic router working using the wip hostapd module from upstream 2023-04-04 12:57:16 -06:00
2189ab9a1b Improve cifs mounts. Newer protocol version, helpful commands, better network connection resiliency. 2023-03-31 11:43:12 -06:00
acbbb8a37a encrypted samba vault with gocryptfs 2023-03-25 15:49:07 -06:00
d1e6d21d66 iperf server 2023-03-25 15:48:39 -06:00
1a98e039fe Cleanup fio tests 2023-03-25 15:48:24 -06:00
3459ce5058 Add joplin 2023-03-18 22:04:31 -06:00
c48b1995f8 Remove zerotier 2023-03-18 20:41:09 -06:00
53c0e7ba1f Add Webmail 2023-03-14 23:28:07 -06:00
820cd392f1 Choose random PIA server in a specified region instead of hardcoded. And more TODOs addressed. 2023-03-12 22:55:46 -06:00
759fe04185 with lib; 2023-03-12 21:50:46 -06:00
db441fcf98 Add ability to refuse PIA ports 2023-03-12 21:46:36 -06:00
83e9280bb4 Use the NixOS firewall instead to block unwanted PIA VPN traffic 2023-03-12 20:49:39 -06:00
478235fe32 Enable firewall for PIA VPN wireguard interface 2023-03-12 20:29:20 -06:00
440401a391 Add ponyo to deploy-rs config 2023-03-12 19:50:55 -06:00
42c0dcae2d Port forwarding for transmission 2023-03-12 19:50:29 -06:00
7159868b57 update todo's 2023-03-12 19:46:51 -06:00
ab2cc0cc0a Cleanup services 2023-03-12 17:51:10 -06:00
aaa1800d0c Cleanup mail domains 2023-03-12 13:29:12 -06:00
a795c65c32 Cleanup mail domains 2023-03-12 13:25:34 -06:00
5ed02e924d Remove liza 2023-03-12 00:15:06 -07:00
1d620372b8 Remove leftovers of removed compute nodes 2023-03-12 00:14:49 -07:00
9684a975e2 Migrate nextcloud to ponyo 2023-03-12 00:10:14 -07:00
c3c3a9e77f disable searx for now 2023-03-12 00:09:40 -07:00
ecb6d1ef63 Migrate mailserver to ponyo 2023-03-11 23:40:36 -07:00
a5f7bb8a22 Fix vpn systemd service restart issues 2023-03-09 13:07:20 -07:00
cea9b9452b Initial prototype for Wireguard based PIA VPN - not quite 'ready' yet 2023-03-08 23:49:02 -07:00
8fb45a7ee5 Turn off howdy 2023-03-08 23:47:11 -07:00
b53f03bb7d Fix typo 2023-03-08 23:45:49 -07:00
dee0243268 Peer to peer connection keepalive task 2023-03-07 22:55:37 -07:00
8b6bc354bd Peer to peer connection keepalive task 2023-03-07 22:54:26 -07:00
aff5611cdb Update renamed nixos options 2023-03-07 22:52:31 -07:00
c5e7d8b2fe Allow easy patching of nixpkgs 2023-03-03 23:24:33 -07:00
90a3549237 use comma and pregenerated nix-index 2023-03-03 00:18:20 -07:00
63f2a82ad1 ignore lid close for NAS 2023-03-03 00:16:57 -07:00
0cc39bfbe0 deploy-rs initial PoC 2023-03-03 00:16:23 -07:00
ec54b27d67 fix router serial 2023-03-03 00:14:22 -07:00
bba4f27465 add picocom for serial 2023-03-03 00:12:35 -07:00
b5c77611d7 remove unused compute nodes 2023-03-03 00:12:16 -07:00
987919417d allow root login over ssh using trusted key 2023-02-11 23:07:48 -07:00
d8dbb12959 grow disk for ponyo 2023-02-11 19:01:42 -07:00
3e0cde40b8 Cleanup remote LUKS unlock 2023-02-11 18:40:08 -07:00
2c8576a295 Hardware accelerated encoding for jellyfin 2023-02-11 16:10:19 -07:00
8aecc04d01 config cleanup 2023-02-11 16:10:10 -07:00
9bcf7cc50d VPN using its own DNS resolver is unstable 2023-02-11 16:09:02 -07:00
cb2ac1c1ba Use x86 machine for NAS 2023-02-11 16:08:48 -07:00
7f1e304012 Remove stale secrets 2023-02-11 15:19:35 -07:00
9e3dae4b16 Rekey secrets 2023-02-11 15:07:08 -07:00
c649b04bdd Update ssh keys and allow easy ssh LUKS unlocking 2023-02-11 15:05:20 -07:00
6fce2e1116 Allow unlocking over tor 2023-02-11 13:38:54 -07:00
3e192b3321 Hardware config should be in hardware config 2023-02-11 13:35:46 -07:00
bc863de165 Hardware config should be in hardware config 2023-02-11 09:48:25 -07:00
cfa5c9428e Remove reg 2023-02-11 09:46:05 -07:00
abddc5a680 Razer keyboard 2023-02-11 00:32:36 -07:00
577dc4faaa Add initial configuration for APU2E4 router 2023-02-10 20:51:10 -07:00
a8b0385c6d more ephemeral options 2023-02-08 22:27:54 -07:00
fc85627bd6 use unstable for ephemeral os config 2023-02-08 22:26:04 -07:00
f9cadba3eb improve ephemeral os config 2023-02-08 22:25:09 -07:00
c192c2d52f enable spotify 2023-02-08 18:48:08 -07:00
04c7a9ea51 Update tz 2023-02-08 18:47:58 -07:00
176 changed files with 13581 additions and 2031 deletions

View File

@@ -0,0 +1,19 @@
name: Check Flake
on: [push]
env:
DEBIAN_FRONTEND: noninteractive
PATH: /run/current-system/sw/bin/
jobs:
check-flake:
runs-on: nixos
steps:
- name: Checkout the repository
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Check Flake
run: nix flake check --print-build-logs --log-format raw --show-trace

22
Makefile Normal file
View File

@@ -0,0 +1,22 @@
# Lockfile utils
.PHONY: update-lockfile
update-lockfile:
nix flake update --commit-lock-file
.PHONY: update-lockfile-without-commit
update-lockfile-without-commit:
nix flake update
# Agenix utils
.PHONY: edit-secret
edit-secret:
cd secrets && agenix -e $(filter-out $@,$(MAKECMDGOALS))
.PHONY: rekey-secrets
rekey-secrets:
cd secrets && agenix -r
# NixOS utils
.PHONY: clean-old-nixos-profiles
clean-old-nixos-profiles:
doas nix-collect-garbage -d

View File

@@ -3,10 +3,9 @@
 ### Source Layout
 - `/common` - common configuration imported into all `/machines`
   - `/boot` - config related to bootloaders, cpu microcode, and unlocking LUKS root disks over tor
-  - `/network` - config for tailscale, zeroteir, and NixOS container with automatic vpn tunneling via PIA
+  - `/network` - config for tailscale, and NixOS container with automatic vpn tunneling via PIA
   - `/pc` - config that a graphical desktop computer should have. Use `de.enable = true;` to enable everthing.
   - `/server` - config that creates new nixos services or extends existing ones to meet my needs
-  - `/ssh.nix` - all ssh public host and user keys for all `/machines`
 - `/machines` - all my NixOS machines along with their machine unique configuration for hardware and services
   - `/kexec` - a special machine for generating minimal kexec images. Does not import `/common`
 - `/secrets` - encrypted shared secrets unlocked through `/machines` ssh host keys

51
TODO.md
View File

@@ -10,24 +10,12 @@
 - https://nixos.wiki/wiki/Comparison_of_NixOS_setups
 ### Housekeeping
-- Format everything here using nixfmt
 - Cleanup the line between hardware-configuration.nix and configuration.nix in machine config
-- CI https://gvolpe.com/blog/nixos-binary-cache-ci/
 - remove `options.currentSystem`
 - allow `hostname` option for webservices to be null to disable configuring nginx
 ### NAS
-- helios64 extra led lights
 - safely turn off NAS on power disconnect
-- hardware de/encoding for rk3399 helios64 https://forum.pine64.org/showthread.php?tid=14018
-- tor unlock
-### bcachefs
-- bcachefs health alerts via email
-- bcachefs periodic snapshotting
-- use mount.bcachefs command for mounting
-- bcachefs native encryption
-  - just need a kernel module? https://github.com/firestack/bcachefs-tools-flake/blob/kf/dev/mvp/nixos/module/bcachefs.nix#L40
 ### Shell Comands
 - tailexitnode = `sudo tailscale up --exit-node=<exit-node-ip> --exit-node-allow-lan-access=true`
@@ -52,21 +40,7 @@
 - https://ampache.org/
 - replace nextcloud with seafile
-### VPN container
-- use wireguard for vpn
-  - https://github.com/triffid/pia-wg/blob/master/pia-wg.sh
-  - https://github.com/pia-foss/manual-connections
-- port forwarding for vpn
-  - transmission using forwarded port
-- https://www.wireguard.com/netns/
-- one way firewall for vpn container
-### Networking
-- tailscale for p2p connections
-- remove all use of zerotier
 ### Archive
-- https://www.backblaze.com/b2/cloud-storage.html
 - email
   - https://github.com/Disassembler0/dovecot-archive/blob/main/src/dovecot_archive.py
   - http://kb.unixservertech.com/software/dovecot/archiveserver
@@ -75,7 +49,32 @@
 - https://christine.website/blog/paranoid-nixos-2021-07-18
 - https://nixos.wiki/wiki/Impermanence
+# Setup CI
+- CI
+  - hydra
+  - https://docs.cachix.org/continuous-integration-setup/
+- Binary Cache
+  - Maybe use cachix https://gvolpe.com/blog/nixos-binary-cache-ci/
+  - Self hosted binary cache? https://www.tweag.io/blog/2019-11-21-untrusted-ci/
+  - https://github.com/edolstra/nix-serve
+  - https://nixos.wiki/wiki/Binary_Cache
+  - https://discourse.nixos.org/t/introducing-attic-a-self-hostable-nix-binary-cache-server/24343
+- Both
+  - https://garnix.io/
+  - https://nixbuild.net
+# Secrets
+- consider using headscale
+- Replace luks over tor for remote unlock with luks over tailscale using ephemeral keys
+- Rollover luks FDE passwords
+- /secrets on personal computers should only be readable using a trusted ssh key, preferably requiring a yubikey
+- Rollover shared yubikey secrets
+- offsite backup yubikey, pw db, and ssh key with /secrets access
 ### Misc
+- for automated kernel upgrades on luks systems, need to kexec with initrd that contains luks key
+  - https://github.com/flowztul/keyexec/blob/master/etc/default/kexec-cryptroot
 - https://github.com/pop-os/system76-scheduler
 - improve email a little bit https://helloinbox.email
 - remap razer keys https://github.com/sezanzeb/input-remapper

View File

@@ -4,11 +4,12 @@
 let
   cfg = config.system.autoUpgrade;
-in {
+in
+{
   config = lib.mkIf cfg.enable {
     system.autoUpgrade = {
       flake = "git+https://git.neet.dev/zuckerberg/nix-config.git";
-      flags = [ "--recreate-lock-file" ]; # ignore lock file, just pull the latest
+      flags = [ "--recreate-lock-file" "--no-write-lock-file" ]; # ignore lock file, just pull the latest
     };
   };
 }

78
common/backups.nix Normal file
View File

@@ -0,0 +1,78 @@
{ config, lib, pkgs, ... }:
let
cfg = config.backup;
hostname = config.networking.hostName;
mkRespository = group: "s3:s3.us-west-004.backblazeb2.com/D22TgIt0-main-backup/${group}";
mkBackup = group: paths: {
repository = mkRespository group;
inherit paths;
initialize = true;
timerConfig = {
OnCalendar = "daily";
RandomizedDelaySec = "1h";
};
extraBackupArgs = [
''--exclude-if-present ".nobackup"''
];
pruneOpts = [
"--keep-daily 7" # one backup for each of the last n days
"--keep-weekly 5" # one backup for each of the last n weeks
"--keep-monthly 12" # one backup for each of the last n months
"--keep-yearly 75" # one backup for each of the last n years
];
environmentFile = "/run/agenix/backblaze-s3-backups";
passwordFile = "/run/agenix/restic-password";
};
# example usage: "sudo restic_samba unlock" (removes lockfile)
mkResticGroupCmd = group: pkgs.writeShellScriptBin "restic_${group}" ''
if [ "$EUID" -ne 0 ]
then echo "Run as root"
exit
fi
. /run/agenix/backblaze-s3-backups
export AWS_SECRET_ACCESS_KEY
export AWS_ACCESS_KEY_ID
export RESTIC_PASSWORD_FILE=/run/agenix/restic-password
export RESTIC_REPOSITORY="${mkRespository group}"
exec ${pkgs.restic}/bin/restic "$@"
'';
in
{
options.backup = {
group = lib.mkOption {
default = null;
type = lib.types.nullOr (lib.types.attrsOf (lib.types.submodule {
options = {
paths = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = ''
Paths to backup
'';
};
};
}));
};
};
config = lib.mkIf (cfg.group != null) {
services.restic.backups = lib.concatMapAttrs
(group: groupCfg: {
${group} = mkBackup group groupCfg.paths;
})
cfg.group;
age.secrets.backblaze-s3-backups.file = ../secrets/backblaze-s3-backups.age;
age.secrets.restic-password.file = ../secrets/restic-password.age;
environment.systemPackages = map mkResticGroupCmd (builtins.attrNames cfg.group);
};
}
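
A sketch of how a machine might opt in to the backup module above (the group name and path are made up for illustration); setting backup.group defines a daily restic job per group plus a matching restic_<group> helper command:

{
  backup.group."nextcloud".paths = [
    "/var/lib/nextcloud"
  ];
}
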

17
common/binary-cache.nix Normal file
View File

@@ -0,0 +1,17 @@
{ config, lib, ... }:
{
nix = {
settings = {
substituters = [
"https://cache.nixos.org/"
"https://nix-community.cachix.org"
"http://s0.koi-bebop.ts.net:5000"
];
trusted-public-keys = [
"nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
"s0.koi-bebop.ts.net:OjbzD86YjyJZpCp9RWaQKANaflcpKhtzBMNP8I2aPUU="
];
};
};
}

View File

@@ -3,24 +3,27 @@
 with lib;
 let
   cfg = config.bios;
-in {
+in
+{
   options.bios = {
     enable = mkEnableOption "enable bios boot";
     device = mkOption {
       type = types.str;
     };
+    configurationLimit = mkOption {
+      default = 20;
+      type = types.int;
+    };
   };
   config = mkIf cfg.enable {
-    # Use GRUB 2 for BIOS
     boot.loader = {
       timeout = 2;
       grub = {
         enable = true;
         device = cfg.device;
-        version = 2;
         useOSProber = true;
-        configurationLimit = 20;
+        configurationLimit = cfg.configurationLimit;
         theme = pkgs.nixos-grub2-theme;
       };
     };

View File

@@ -5,6 +5,6 @@
     ./firmware.nix
     ./efi.nix
     ./bios.nix
-    ./luks.nix
+    ./remote-luks-unlock.nix
   ];
 }

View File

@@ -3,24 +3,27 @@
 with lib;
 let
   cfg = config.efi;
-in {
+in
+{
   options.efi = {
     enable = mkEnableOption "enable efi boot";
+    configurationLimit = mkOption {
+      default = 20;
+      type = types.int;
+    };
   };
   config = mkIf cfg.enable {
-    # Use GRUB2 for EFI
     boot.loader = {
       efi.canTouchEfiVariables = true;
       timeout = 2;
       grub = {
         enable = true;
         device = "nodev";
-        version = 2;
         efiSupport = true;
         useOSProber = true;
         # memtest86.enable = true;
-        configurationLimit = 20;
+        configurationLimit = cfg.configurationLimit;
         theme = pkgs.nixos-grub2-theme;
       };
     };
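
The configurationLimit option added to the bios and efi modules above (matching the "Add easy boot configuration profile limit" commit) can be overridden per machine; a brief sketch with an illustrative value:

{
  efi.enable = true;
  efi.configurationLimit = 10; # keep only the 10 most recent boot entries (default is 20)
}
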

View File

@@ -3,7 +3,8 @@
 with lib;
 let
   cfg = config.firmware;
-in {
+in
+{
   options.firmware.x86_64 = {
     enable = mkEnableOption "enable x86_64 firmware";
   };

View File

@@ -1,101 +0,0 @@
{ config, pkgs, lib, ... }:
let
cfg = config.luks;
in {
options.luks = {
enable = lib.mkEnableOption "enable luks root remote decrypt over ssh/tor";
device = {
name = lib.mkOption {
type = lib.types.str;
default = "enc-pv";
};
path = lib.mkOption {
type = lib.types.either lib.types.str lib.types.path;
};
allowDiscards = lib.mkOption {
type = lib.types.bool;
default = false;
};
};
sshHostKeys = lib.mkOption {
type = lib.types.listOf (lib.types.either lib.types.str lib.types.path);
default = [
"/secret/ssh_host_rsa_key"
"/secret/ssh_host_ed25519_key"
];
};
sshAuthorizedKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = config.users.users.googlebot.openssh.authorizedKeys.keys;
};
onionConfig = lib.mkOption {
type = lib.types.path;
default = /secret/onion;
};
kernelModules = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ "e1000" "e1000e" "virtio_pci" "r8169" ];
};
};
config = lib.mkIf cfg.enable {
boot.initrd.luks.devices.${cfg.device.name} = {
device = cfg.device.path;
allowDiscards = cfg.device.allowDiscards;
};
# Unlock LUKS disk over ssh
boot.initrd.network.enable = true;
boot.initrd.kernelModules = cfg.kernelModules;
boot.initrd.network.ssh = {
enable = true;
port = 22;
hostKeys = cfg.sshHostKeys;
authorizedKeys = cfg.sshAuthorizedKeys;
};
boot.initrd.postDeviceCommands = ''
echo 'waiting for root device to be opened...'
mkfifo /crypt-ramfs/passphrase
echo /crypt-ramfs/passphrase >> /dev/null
'';
# Make machine accessable over tor for boot unlock
boot.initrd.secrets = {
"/etc/tor/onion/bootup" = cfg.onionConfig;
};
boot.initrd.extraUtilsCommands = ''
copy_bin_and_libs ${pkgs.tor}/bin/tor
copy_bin_and_libs ${pkgs.haveged}/bin/haveged
'';
# start tor during boot process
boot.initrd.network.postCommands = let
torRc = (pkgs.writeText "tor.rc" ''
DataDirectory /etc/tor
SOCKSPort 127.0.0.1:9050 IsolateDestAddr
SOCKSPort 127.0.0.1:9063
HiddenServiceDir /etc/tor/onion/bootup
HiddenServicePort 22 127.0.0.1:22
'');
in ''
# Add nice prompt for giving LUKS passphrase over ssh
echo 'read -s -p "Unlock Passphrase: " passphrase && echo $passphrase > /crypt-ramfs/passphrase && exit' >> /root/.profile
echo "tor: preparing onion folder"
# have to do this otherwise tor does not want to start
chmod -R 700 /etc/tor
echo "make sure localhost is up"
ip a a 127.0.0.1/8 dev lo
ip link set lo up
echo "haveged: starting haveged"
haveged -F &
echo "tor: starting tor"
tor -f ${torRc} --verify-config
tor -f ${torRc} &
'';
};
}

View File

@@ -0,0 +1,96 @@
{ config, pkgs, lib, ... }:
# TODO: use tailscale instead of tor https://gist.github.com/antifuchs/e30d58a64988907f282c82231dde2cbc
let
cfg = config.remoteLuksUnlock;
in
{
options.remoteLuksUnlock = {
enable = lib.mkEnableOption "enable luks root remote decrypt over ssh/tor";
enableTorUnlock = lib.mkOption {
type = lib.types.bool;
default = cfg.enable;
description = "Make machine accessable over tor for ssh boot unlock";
};
sshHostKeys = lib.mkOption {
type = lib.types.listOf (lib.types.either lib.types.str lib.types.path);
default = [
"/secret/ssh_host_rsa_key"
"/secret/ssh_host_ed25519_key"
];
};
sshAuthorizedKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = config.users.users.googlebot.openssh.authorizedKeys.keys;
};
onionConfig = lib.mkOption {
type = lib.types.path;
default = /secret/onion;
};
kernelModules = lib.mkOption {
type = lib.types.listOf lib.types.str;
default = [ "e1000" "e1000e" "virtio_pci" "r8169" ];
};
};
config = lib.mkIf cfg.enable {
# Unlock LUKS disk over ssh
boot.initrd.network.enable = true;
boot.initrd.kernelModules = cfg.kernelModules;
boot.initrd.network.ssh = {
enable = true;
port = 22;
hostKeys = cfg.sshHostKeys;
authorizedKeys = cfg.sshAuthorizedKeys;
};
boot.initrd.postDeviceCommands = ''
echo 'waiting for root device to be opened...'
mkfifo /crypt-ramfs/passphrase
echo /crypt-ramfs/passphrase >> /dev/null
'';
boot.initrd.secrets = lib.mkIf cfg.enableTorUnlock {
"/etc/tor/onion/bootup" = cfg.onionConfig;
};
boot.initrd.extraUtilsCommands = lib.mkIf cfg.enableTorUnlock ''
copy_bin_and_libs ${pkgs.tor}/bin/tor
copy_bin_and_libs ${pkgs.haveged}/bin/haveged
'';
boot.initrd.network.postCommands = lib.mkMerge [
(
''
# Add nice prompt for giving LUKS passphrase over ssh
echo 'read -s -p "Unlock Passphrase: " passphrase && echo $passphrase > /crypt-ramfs/passphrase && exit' >> /root/.profile
''
)
(
let torRc = (pkgs.writeText "tor.rc" ''
DataDirectory /etc/tor
SOCKSPort 127.0.0.1:9050 IsolateDestAddr
SOCKSPort 127.0.0.1:9063
HiddenServiceDir /etc/tor/onion/bootup
HiddenServicePort 22 127.0.0.1:22
''); in
lib.mkIf cfg.enableTorUnlock ''
echo "tor: preparing onion folder"
# have to do this otherwise tor does not want to start
chmod -R 700 /etc/tor
echo "make sure localhost is up"
ip a a 127.0.0.1/8 dev lo
ip link set lo up
echo "haveged: starting haveged"
haveged -F &
echo "tor: starting tor"
tor -f ${torRc} --verify-config
tor -f ${torRc} &
''
)
];
};
}
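
A sketch of enabling the rewritten remote unlock module on a host (unlike the old luks.nix, the LUKS device itself is no longer declared here, so it is assumed to be configured elsewhere in the machine's hardware config):

{
  remoteLuksUnlock.enable = true;
  # tor-based unlock follows cfg.enable by default; it can be disabled per machine
  remoteLuksUnlock.enableTorUnlock = false;
}
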

View File

@@ -2,6 +2,8 @@
 {
   imports = [
+    ./backups.nix
+    ./binary-cache.nix
     ./flakes.nix
     ./auto-update.nix
     ./shell.nix
@@ -9,28 +11,43 @@
     ./boot
     ./server
     ./pc
+    ./machine-info
+    ./nix-builder.nix
+    ./ssh.nix
   ];
   nix.flakes.enable = true;
-  system.stateVersion = "21.11";
+  system.stateVersion = "23.11";
   networking.useDHCP = false;
   networking.firewall.enable = true;
   networking.firewall.allowPing = true;
-  time.timeZone = "America/New_York";
+  time.timeZone = "America/Denver";
-  i18n.defaultLocale = "en_US.UTF-8";
+  i18n = {
+    defaultLocale = "en_US.UTF-8";
+    extraLocaleSettings = {
+      LANGUAGE = "en_US.UTF-8";
+      LC_ALL = "en_US.UTF-8";
+    };
+  };
-  services.openssh.enable = true;
+  services.openssh = {
+    enable = true;
+    settings = {
+      PasswordAuthentication = false;
+    };
+  };
   programs.mosh.enable = true;
   environment.systemPackages = with pkgs; [
     wget
     kakoune
     htop
-    git git-lfs
+    git
+    git-lfs
     dnsutils
     tmux
     nethogs
@@ -42,6 +59,10 @@
     micro
     helix
     lm_sensors
+    picocom
+    lf
+    gnumake
+    tree
   ];
   nixpkgs.config.allowUnfree = true;
@@ -54,11 +75,24 @@
       "dialout" # serial
     ];
     shell = pkgs.fish;
-    openssh.authorizedKeys.keys = (import ./ssh.nix).users;
+    openssh.authorizedKeys.keys = config.machines.ssh.userKeys;
     hashedPassword = "$6$TuDO46rILr$gkPUuLKZe3psexhs8WFZMpzgEBGksE.c3Tjh1f8sD0KMC4oV89K2pqAABfl.Lpxu2jVdr5bgvR5cWnZRnji/r/";
     uid = 1000;
   };
-  nix.trustedUsers = [ "root" "googlebot" ];
+  users.users.root = {
+    openssh.authorizedKeys.keys = config.machines.ssh.deployKeys;
+  };
+  nix.settings = {
+    trusted-users = [ "root" "googlebot" ];
+  };
+  # don't use sudo
+  security.doas.enable = true;
+  security.sudo.enable = false;
+  security.doas.extraRules = [
+    # don't ask for password every time
+    { groups = [ "wheel" ]; persist = true; }
+  ];
   nix.gc.automatic = true;

View File

@@ -2,14 +2,14 @@
 with lib;
 let
   cfg = config.nix.flakes;
-in {
+in
+{
   options.nix.flakes = {
     enable = mkEnableOption "use nix flakes";
   };
   config = mkIf cfg.enable {
     nix = {
-      package = pkgs.nixFlakes;
       extraOptions = ''
         experimental-features = nix-command flakes
       '';

View File

@@ -0,0 +1,200 @@
# Gathers info about each machine to construct the overall configuration
# Ex: Each machine already trusts every other machine's SSH fingerprint
{ config, lib, pkgs, ... }:
let
machines = config.machines.hosts;
in
{
imports = [
./ssh.nix
./roles.nix
];
options.machines = {
hosts = lib.mkOption {
type = lib.types.attrsOf
(lib.types.submodule {
options = {
hostNames = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = ''
List of hostnames for this machine. The first one is the default so it is the target of deployments.
Used for automatically trusting hosts for ssh connections.
'';
};
arch = lib.mkOption {
type = lib.types.enum [ "x86_64-linux" "aarch64-linux" ];
description = ''
The architecture of this machine.
'';
};
systemRoles = lib.mkOption {
type = lib.types.listOf lib.types.str; # TODO: maybe use an enum?
description = ''
The set of roles this machine holds. Affects secrets available. (TODO add service config as well using this info)
'';
};
hostKey = lib.mkOption {
type = lib.types.str;
description = ''
The system ssh host key of this machine. Used for automatically trusting hosts for ssh connections
and for decrypting secrets with agenix.
'';
};
remoteUnlock = lib.mkOption {
default = null;
type = lib.types.nullOr (lib.types.submodule {
options = {
hostKey = lib.mkOption {
type = lib.types.str;
description = ''
The system ssh host key of this machine used for luks boot unlocking only.
'';
};
clearnetHost = lib.mkOption {
default = null;
type = lib.types.nullOr lib.types.str;
description = ''
The hostname resolvable over clearnet used to luks boot unlock this machine
'';
};
onionHost = lib.mkOption {
default = null;
type = lib.types.nullOr lib.types.str;
description = ''
The hostname resolvable over tor used to luks boot unlock this machine
'';
};
};
});
};
userKeys = lib.mkOption {
default = [ ];
type = lib.types.listOf lib.types.str;
description = ''
The list of user keys. Each key here can be used to log into all other systems as `googlebot`.
TODO: consider auto populating other programs that use ssh keys such as gitea
'';
};
deployKeys = lib.mkOption {
default = [ ];
type = lib.types.listOf lib.types.str;
description = ''
The list of deployment keys. Each key here can be used to log into all other systems as `root`.
'';
};
configurationPath = lib.mkOption {
type = lib.types.path;
description = ''
The path to this machine's configuration directory.
'';
};
};
});
};
};
config = {
assertions = (lib.concatLists (lib.mapAttrsToList
(
name: cfg: [
{
assertion = builtins.length cfg.hostNames > 0;
message = ''
Error with config for ${name}
There must be at least one hostname.
'';
}
{
assertion = builtins.length cfg.systemRoles > 0;
message = ''
Error with config for ${name}
There must be at least one system role.
'';
}
{
assertion = cfg.remoteUnlock == null || cfg.remoteUnlock.hostKey != cfg.hostKey;
message = ''
Error with config for ${name}
Unlock hostkey and hostkey cannot be the same because unlock hostkey is in /boot, unencrypted.
'';
}
{
assertion = cfg.remoteUnlock == null || (cfg.remoteUnlock.clearnetHost != null || cfg.remoteUnlock.onionHost != null);
message = ''
Error with config for ${name}
At least one of clearnet host or onion host must be defined.
'';
}
{
assertion = cfg.remoteUnlock == null || cfg.remoteUnlock.clearnetHost == null || builtins.elem cfg.remoteUnlock.clearnetHost cfg.hostNames == false;
message = ''
Error with config for ${name}
Clearnet unlock hostname cannot be in the list of hostnames for security reasons.
'';
}
{
assertion = cfg.remoteUnlock == null || cfg.remoteUnlock.onionHost == null || lib.strings.hasSuffix ".onion" cfg.remoteUnlock.onionHost;
message = ''
Error with config for ${name}
Tor unlock hostname must be an onion address.
'';
}
{
assertion = builtins.elem "personal" cfg.systemRoles || builtins.length cfg.userKeys == 0;
message = ''
Error with config for ${name}
Only personal machines are allowed to have user keys.
'';
}
{
assertion = builtins.elem "deploy" cfg.systemRoles || builtins.length cfg.deployKeys == 0;
message = ''
Error with config for ${name}
Only deploy machines are allowed to have deploy keys for security reasons.
'';
}
]
)
machines));
# Set per machine properties automatically using each of their `properties.nix` files respectively
machines.hosts =
let
properties = dir: lib.concatMapAttrs
(name: path: {
${name} =
import path
//
{ configurationPath = builtins.dirOf path; };
})
(propertiesFiles dir);
propertiesFiles = dir:
lib.foldl (lib.mergeAttrs) { } (propertiesFiles' dir);
propertiesFiles' = dir:
let
propFiles = lib.filter (p: baseNameOf p == "properties.nix") (lib.filesystem.listFilesRecursive dir);
dirName = path: builtins.baseNameOf (builtins.dirOf path);
in
builtins.map (p: { "${dirName p}" = p; }) propFiles;
in
properties ../../machines;
};
}
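
As the refactor commit highlights note, other modules can query these per-machine properties. A minimal sketch of such a consumer (illustrative only; roughly what a knownHosts-style module could do with this data):

{ config, lib, ... }:
{
  # trust every machine's ssh host key automatically
  programs.ssh.knownHosts = lib.mapAttrs
    (name: machine: {
      hostNames = machine.hostNames;
      publicKey = machine.hostKey;
    })
    config.machines.hosts;
}
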

View File

@@ -0,0 +1,15 @@
# Allows getting machine-info outside the scope of nixos configuration
{ nixpkgs ? import <nixpkgs> { }
, assertionsModule ? <nixpkgs/nixos/modules/misc/assertions.nix>
}:
{
machines =
(nixpkgs.lib.evalModules {
modules = [
./default.nix
assertionsModule
];
}).config.machines;
}
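
A sketch of evaluating this outside of a NixOS configuration, for example from a standalone Nix expression (the relative path is an assumption):

let
  info = import ./common/machine-info/external.nix { };
in
  builtins.attrNames info.machines.hosts
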

View File

@@ -0,0 +1,19 @@
{ config, lib, ... }:
# Maps roles to their hosts
{
options.machines.roles = lib.mkOption {
type = lib.types.attrsOf (lib.types.listOf lib.types.str);
};
config = {
machines.roles = lib.zipAttrs
(lib.mapAttrsToList
(host: cfg:
lib.foldl (lib.mergeAttrs) { }
(builtins.map (role: { ${role} = host; })
cfg.systemRoles))
config.machines.hosts);
};
}
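
For illustration, with two hypothetical hosts the mapping defined above would produce roughly:

# machines.hosts."alpha".systemRoles = [ "personal" "pc" ];
# machines.hosts."beta".systemRoles = [ "server" "pc" ];
# => machines.roles = {
#      personal = [ "alpha" ];
#      pc = [ "alpha" "beta" ];
#      server = [ "beta" ];
#    }
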

View File

@@ -0,0 +1,44 @@
{ config, lib, ... }:
let
machines = config.machines;
sshkeys = keyType: lib.foldl (l: cfg: l ++ cfg.${keyType}) [ ] (builtins.attrValues machines.hosts);
in
{
options.machines.ssh = {
userKeys = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = ''
List of user keys aggregated from all machines.
'';
};
deployKeys = lib.mkOption {
default = [ ];
type = lib.types.listOf lib.types.str;
description = ''
List of deploy keys aggregated from all machines.
'';
};
hostKeysByRole = lib.mkOption {
type = lib.types.attrsOf (lib.types.listOf lib.types.str);
description = ''
Machine host keys divided into their roles.
'';
};
};
config = {
machines.ssh.userKeys = sshkeys "userKeys";
machines.ssh.deployKeys = sshkeys "deployKeys";
machines.ssh.hostKeysByRole = lib.mapAttrs
(role: hosts:
builtins.map
(host: machines.hosts.${host}.hostKey)
hosts)
machines.roles;
};
}

View File

@@ -7,11 +7,11 @@ let
 in
 {
   imports = [
-    ./hosts.nix
     ./pia-openvpn.nix
+    ./pia-wireguard.nix
+    ./ping.nix
     ./tailscale.nix
     ./vpn.nix
-    ./zerotier.nix
   ];
   options.networking.ip_forward = mkEnableOption "Enable ip forwarding";

View File

@@ -1,63 +0,0 @@
{ config, lib, ... }:
let
system = (import ../ssh.nix).system;
in {
networking.hosts = {
# some DNS providers filter local ip results from DNS request
"172.30.145.180" = [ "s0.zt.neet.dev" ];
"172.30.109.9" = [ "ponyo.zt.neet.dev" ];
"172.30.189.212" = [ "ray.zt.neet.dev" ];
};
programs.ssh.knownHosts = {
liza = {
hostNames = [ "liza" "liza.neet.dev" ];
publicKey = system.liza;
};
ponyo = {
hostNames = [ "ponyo" "ponyo.neet.dev" "ponyo.zt.neet.dev" "git.neet.dev" ];
publicKey = system.ponyo;
};
ponyo-unlock = {
hostNames = [ "unlock.ponyo.neet.dev" "cfamr6artx75qvt7ho3rrbsc7mkucmv5aawebwflsfuorusayacffryd.onion" ];
publicKey = system.ponyo-unlock;
};
ray = {
hostNames = [ "ray" "ray.zt.neet.dev" ];
publicKey = system.ray;
};
s0 = {
hostNames = [ "s0" "s0.zt.neet.dev" ];
publicKey = system.s0;
};
n1 = {
hostNames = [ "n1" ];
publicKey = system.n1;
};
n2 = {
hostNames = [ "n2" ];
publicKey = system.n2;
};
n3 = {
hostNames = [ "n3" ];
publicKey = system.n3;
};
n4 = {
hostNames = [ "n4" ];
publicKey = system.n4;
};
n5 = {
hostNames = [ "n5" ];
publicKey = system.n5;
};
n6 = {
hostNames = [ "n6" ];
publicKey = system.n6;
};
n7 = {
hostNames = [ "n7" ];
publicKey = system.n7;
};
};
}

View File

@@ -1,7 +1,7 @@
 { config, pkgs, lib, ... }:
 let
-  cfg = config.pia;
+  cfg = config.pia.openvpn;
   vpnfailsafe = pkgs.stdenv.mkDerivation {
     pname = "vpnfailsafe";
     version = "0.0.1";
@@ -14,7 +14,7 @@ let
   };
 in
 {
-  options.pia = {
+  options.pia.openvpn = {
     enable = lib.mkEnableOption "Enable private internet access";
     server = lib.mkOption {
       type = lib.types.str;
@@ -108,6 +108,6 @@ in
       };
     };
   };
-  age.secrets."pia-login.conf".file = ../../secrets/pia-login.conf;
+  age.secrets."pia-login.conf".file = ../../secrets/pia-login.age;
   };
 }

View File

@@ -0,0 +1,363 @@
{ config, lib, pkgs, ... }:
# Server list:
# https://serverlist.piaservers.net/vpninfo/servers/v6
# Reference materials:
# https://github.com/pia-foss/manual-connections
# https://github.com/thrnz/docker-wireguard-pia/blob/master/extra/wg-gen.sh
# TODO handle potential errors (or at least print status, success, and failures to the console)
# TODO parameterize names of systemd services so that multiple wg VPNs could coexist in theory easier
# TODO implement this module such that the wireguard VPN doesn't have to live in a container
# TODO don't add forward rules if the PIA port is the same as cfg.forwardedPort
# TODO verify signatures of PIA responses
# TODO `RuntimeMaxSec = "30d";` for pia-vpn-wireguard-init isn't allowed per the systemd logs. Find alternative.
with builtins;
with lib;
let
cfg = config.pia.wireguard;
getPIAToken = ''
PIA_USER=`sed '1q;d' /run/agenix/pia-login.conf`
PIA_PASS=`sed '2q;d' /run/agenix/pia-login.conf`
# PIA_TOKEN only lasts 24hrs
PIA_TOKEN=`curl -s -u "$PIA_USER:$PIA_PASS" https://www.privateinternetaccess.com/gtoken/generateToken | jq -r '.token'`
'';
chooseWireguardServer = ''
servers=$(mktemp)
servers_json=$(mktemp)
curl -s "https://serverlist.piaservers.net/vpninfo/servers/v6" > "$servers"
# extract json part only
head -n 1 "$servers" | tr -d '\n' > "$servers_json"
echo "Available location ids:" && jq '.regions | .[] | {name, id, port_forward}' "$servers_json"
# Some locations have multiple servers available. Pick a random one.
totalservers=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | length' "$servers_json")
if ! [[ "$totalservers" =~ ^[0-9]+$ ]] || [ "$totalservers" -eq 0 ] 2>/dev/null; then
echo "Location \"${cfg.serverLocation}\" not found."
exit 1
fi
serverindex=$(( RANDOM % totalservers))
WG_HOSTNAME=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | .['$serverindex'].cn' "$servers_json")
WG_SERVER_IP=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | .['$serverindex'].ip' "$servers_json")
WG_SERVER_PORT=$(jq -r '.groups.wg | .[0] | .ports | .[0]' "$servers_json")
# write chosen server
rm -f /tmp/${cfg.interfaceName}-server.conf
touch /tmp/${cfg.interfaceName}-server.conf
chmod 700 /tmp/${cfg.interfaceName}-server.conf
echo "$WG_HOSTNAME" >> /tmp/${cfg.interfaceName}-server.conf
echo "$WG_SERVER_IP" >> /tmp/${cfg.interfaceName}-server.conf
echo "$WG_SERVER_PORT" >> /tmp/${cfg.interfaceName}-server.conf
rm $servers_json $servers
'';
getChosenWireguardServer = ''
WG_HOSTNAME=`sed '1q;d' /tmp/${cfg.interfaceName}-server.conf`
WG_SERVER_IP=`sed '2q;d' /tmp/${cfg.interfaceName}-server.conf`
WG_SERVER_PORT=`sed '3q;d' /tmp/${cfg.interfaceName}-server.conf`
'';
refreshPIAPort = ''
${getChosenWireguardServer}
signature=`sed '1q;d' /tmp/${cfg.interfaceName}-port-renewal`
payload=`sed '2q;d' /tmp/${cfg.interfaceName}-port-renewal`
bind_port_response=`curl -Gs -m 5 --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" --data-urlencode "payload=$payload" --data-urlencode "signature=$signature" "https://$WG_HOSTNAME:19999/bindPort"`
'';
portForwarding = cfg.forwardPortForTransmission || cfg.forwardedPort != null;
containerServiceName = "container@${config.vpn-container.containerName}.service";
in
{
options.pia.wireguard = {
enable = mkEnableOption "Enable private internet access";
badPortForwardPorts = mkOption {
type = types.listOf types.port;
description = ''
Ports that will not be accepted from PIA.
If PIA assigns a port from this list, the connection is aborted since we cannot ask for a different port.
This is used to guarantee we are not assigned a port that is used by a service we do not want exposed.
'';
};
wireguardListenPort = mkOption {
type = types.port;
description = "The port wireguard listens on for this VPN connection";
default = 51820;
};
serverLocation = mkOption {
type = types.str;
default = "swiss";
};
interfaceName = mkOption {
type = types.str;
default = "piaw";
};
forwardedPort = mkOption {
type = types.nullOr types.port;
description = "The port to redirect port forwarded TCP VPN traffic too";
default = null;
};
forwardPortForTransmission = mkEnableOption "PIA port forwarding for transmission should be performed.";
};
config = mkIf cfg.enable {
assertions = [
{
assertion = cfg.forwardPortForTransmission != (cfg.forwardedPort != null);
message = ''
The PIA forwarded port cannot simultaneously be used by transmission and redirected to another port.
'';
}
];
# mounts used to pass the connection parameters to the container
# the container doesn't have internet until it uses these parameters so it cannot fetch them itself
vpn-container.mounts = [
"/tmp/${cfg.interfaceName}.conf"
"/tmp/${cfg.interfaceName}-server.conf"
"/tmp/${cfg.interfaceName}-address.conf"
];
# The container takes ownership of the wireguard interface on its startup
containers.vpn.interfaces = [ cfg.interfaceName ];
# TODO: while this is much better than "loose" networking, it seems to have issues with firewall restarts
# allow traffic for wireguard interface to pass since wireguard trips up rpfilter
# networking.firewall = {
# extraCommands = ''
# ip46tables -t raw -I nixos-fw-rpfilter -p udp -m udp --sport ${toString cfg.wireguardListenPort} -j RETURN
# ip46tables -t raw -I nixos-fw-rpfilter -p udp -m udp --dport ${toString cfg.wireguardListenPort} -j RETURN
# '';
# extraStopCommands = ''
# ip46tables -t raw -D nixos-fw-rpfilter -p udp -m udp --sport ${toString cfg.wireguardListenPort} -j RETURN || true
# ip46tables -t raw -D nixos-fw-rpfilter -p udp -m udp --dport ${toString cfg.wireguardListenPort} -j RETURN || true
# '';
# };
networking.firewall.checkReversePath = "loose";
systemd.services.pia-vpn-wireguard-init = {
description = "Creates PIA VPN Wireguard Interface";
wants = [ "network-online.target" ];
after = [ "network.target" "network-online.target" ];
before = [ containerServiceName ];
requiredBy = [ containerServiceName ];
partOf = [ containerServiceName ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ wireguard-tools jq curl iproute iputils ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
# restart once a month; PIA forwarded port expires after two months
# because the container is "PartOf" this unit, it gets restarted too
RuntimeMaxSec = "30d";
};
script = ''
echo Waiting for internet...
while ! ping -c 1 -W 1 1.1.1.1; do
sleep 1
done
# Prepare to connect by generating wg secrets and auth'ing with PIA since the container
# cannot do so without internet to start with. NAT'ing the host's internet would address this
# issue but is not ideal because it makes leaking traffic outside of the VPN more likely.
${chooseWireguardServer}
${getPIAToken}
# generate wireguard keys
privKey=$(wg genkey)
pubKey=$(echo "$privKey" | wg pubkey)
# authorize our WG keys with the PIA server we are about to connect to
wireguard_json=`curl -s -G --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" --data-urlencode "pt=$PIA_TOKEN" --data-urlencode "pubkey=$pubKey" https://$WG_HOSTNAME:$WG_SERVER_PORT/addKey`
# create wg-quick config file
rm -f /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
touch /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
chmod 700 /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
echo "
[Interface]
# Address = $(echo "$wireguard_json" | jq -r '.peer_ip')
PrivateKey = $privKey
ListenPort = ${toString cfg.wireguardListenPort}
[Peer]
PersistentKeepalive = 25
PublicKey = $(echo "$wireguard_json" | jq -r '.server_key')
AllowedIPs = 0.0.0.0/0
Endpoint = $WG_SERVER_IP:$(echo "$wireguard_json" | jq -r '.server_port')
" >> /tmp/${cfg.interfaceName}.conf
# create file storing the VPN ip address PIA assigned to us
echo "$wireguard_json" | jq -r '.peer_ip' >> /tmp/${cfg.interfaceName}-address.conf
# Create wg interface now so it inherits from the namespace with internet access
# the container will handle actually connecting the interface since that info is
# not preserved upon moving into the container's networking namespace
# Roughly following this guide https://www.wireguard.com/netns/#ordinary-containerization
[[ -z $(ip link show dev ${cfg.interfaceName} 2>/dev/null) ]] || exit
ip link add ${cfg.interfaceName} type wireguard
'';
preStop = ''
# cleanup wireguard interface
ip link del ${cfg.interfaceName}
rm -f /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
'';
};
vpn-container.config.systemd.services.pia-vpn-wireguard = {
description = "Initializes the PIA VPN WireGuard Tunnel";
wants = [ "network-online.target" ];
after = [ "network.target" "network-online.target" ];
wantedBy = [ "multi-user.target" ];
path = with pkgs; [ wireguard-tools iproute curl jq iptables ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
# pseudo calls wg-quick
# Near equivalent of "wg-quick up /tmp/${cfg.interfaceName}.conf"
# cannot actually call wg-quick because the interface has to be already
# created before the container takes ownership of the interface
# Thus, assumes wg interface was already created:
# ip link add ${cfg.interfaceName} type wireguard
${getChosenWireguardServer}
myaddress=`cat /tmp/${cfg.interfaceName}-address.conf`
wg setconf ${cfg.interfaceName} /tmp/${cfg.interfaceName}.conf
ip -4 address add $myaddress dev ${cfg.interfaceName}
ip link set mtu 1420 up dev ${cfg.interfaceName}
wg set ${cfg.interfaceName} fwmark ${toString cfg.wireguardListenPort}
ip -4 route add 0.0.0.0/0 dev ${cfg.interfaceName} table ${toString cfg.wireguardListenPort}
# TODO is this needed?
ip -4 rule add not fwmark ${toString cfg.wireguardListenPort} table ${toString cfg.wireguardListenPort}
ip -4 rule add table main suppress_prefixlength 0
# The rest of the script is only for port forwarding; skip if not needed
if [ ${boolToString portForwarding} == false ]; then exit 0; fi
# Reserve port
${getPIAToken}
payload_and_signature=`curl -s -m 5 --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" -G --data-urlencode "token=$PIA_TOKEN" "https://$WG_HOSTNAME:19999/getSignature"`
signature=$(echo "$payload_and_signature" | jq -r '.signature')
payload=$(echo "$payload_and_signature" | jq -r '.payload')
port=$(echo "$payload" | base64 -d | jq -r '.port')
# Check if the port is acceptable
notallowed=(${concatStringsSep " " (map toString cfg.badPortForwardPorts)})
if [[ " ''${notallowed[*]} " =~ " $port " ]]; then
# the port PIA assigned is not allowed, kill the connection
wg-quick down /tmp/${cfg.interfaceName}.conf
exit 1
fi
# write reserved port to file readable for all users
echo $port > /tmp/${cfg.interfaceName}-port
chmod 644 /tmp/${cfg.interfaceName}-port
# write the payload and signature needed to refresh the allocated forwarded port later
rm -f /tmp/${cfg.interfaceName}-port-renewal
touch /tmp/${cfg.interfaceName}-port-renewal
chmod 700 /tmp/${cfg.interfaceName}-port-renewal
echo $signature >> /tmp/${cfg.interfaceName}-port-renewal
echo $payload >> /tmp/${cfg.interfaceName}-port-renewal
# Block all traffic coming in on the VPN interface except traffic destined for the forwarded port
iptables -I nixos-fw -p tcp --dport $port -j nixos-fw-accept -i ${cfg.interfaceName}
iptables -I nixos-fw -p udp --dport $port -j nixos-fw-accept -i ${cfg.interfaceName}
# The first port refresh triggers the port to be actually allocated
${refreshPIAPort}
${optionalString (cfg.forwardedPort != null) ''
# redirect the PIA-assigned port to the configured forwarded port
iptables -A INPUT -i ${cfg.interfaceName} -p tcp --dport $port -j ACCEPT
iptables -A INPUT -i ${cfg.interfaceName} -p udp --dport $port -j ACCEPT
iptables -A INPUT -i ${cfg.interfaceName} -p tcp --dport ${toString cfg.forwardedPort} -j ACCEPT
iptables -A INPUT -i ${cfg.interfaceName} -p udp --dport ${toString cfg.forwardedPort} -j ACCEPT
iptables -A PREROUTING -t nat -i ${cfg.interfaceName} -p tcp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
iptables -A PREROUTING -t nat -i ${cfg.interfaceName} -p udp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
''}
${optionalString cfg.forwardPortForTransmission ''
# assumes no authentication is needed for the Transmission RPC
curlout=$(curl localhost:9091/transmission/rpc 2>/dev/null)
regex='X-Transmission-Session-Id\: (\w*)'
if [[ $curlout =~ $regex ]]; then
sessionId=''${BASH_REMATCH[1]}
else
exit 1
fi
# set the port in transmission
data='{"method": "session-set", "arguments": { "peer-port" :'$port' } }'
curl http://localhost:9091/transmission/rpc -d "$data" -H "X-Transmission-Session-Id: $sessionId"
''}
'';
preStop = ''
wg-quick down /tmp/${cfg.interfaceName}.conf
# The rest of the script is only for port forwarding; skip if not needed
if [ ${boolToString portForwarding} == false ]; then exit 0; fi
${optionalString (cfg.forwardedPort != null) ''
# stop redirecting the forwarded port
# (re-read the assigned port from the state file; it is not carried over from the start script)
port=$(cat /tmp/${cfg.interfaceName}-port)
iptables -D INPUT -i ${cfg.interfaceName} -p tcp --dport $port -j ACCEPT
iptables -D INPUT -i ${cfg.interfaceName} -p udp --dport $port -j ACCEPT
iptables -D INPUT -i ${cfg.interfaceName} -p tcp --dport ${toString cfg.forwardedPort} -j ACCEPT
iptables -D INPUT -i ${cfg.interfaceName} -p udp --dport ${toString cfg.forwardedPort} -j ACCEPT
iptables -D PREROUTING -t nat -i ${cfg.interfaceName} -p tcp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
iptables -D PREROUTING -t nat -i ${cfg.interfaceName} -p udp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
''}
'';
};
vpn-container.config.systemd.services.pia-vpn-wireguard-forward-port = {
enable = portForwarding;
description = "PIA VPN WireGuard Tunnel Port Forwarding";
after = [ "pia-vpn-wireguard.service" ];
requires = [ "pia-vpn-wireguard.service" ];
path = with pkgs; [ curl ];
serviceConfig = {
Type = "oneshot";
};
script = refreshPIAPort;
};
vpn-container.config.systemd.timers.pia-vpn-wireguard-forward-port = {
enable = portForwarding;
partOf = [ "pia-vpn-wireguard-forward-port.service" ];
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*:0/10"; # 10 minutes
RandomizedDelaySec = "1m"; # vary by 1 min to give PIA servers some relief
};
};
age.secrets."pia-login.conf".file = ../../secrets/pia-login.age;
};
}
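For context, the behavior above is driven by the options referenced throughout the script (pia.wireguard.enable, forwardPortForTransmission, badPortForwardPorts, and friends; the later vpn-container diff shows the pia.wireguard prefix). A minimal, hypothetical sketch of how a host might opt in, assuming those option names (the port values are illustrative only):

  pia.wireguard.enable = true;
  pia.wireguard.forwardPortForTransmission = true;
  # reject PIA-assigned ports that would collide with local services (illustrative values)
  pia.wireguard.badPortForwardPorts = [ 22 80 443 ];

Other services inside the container can then read the PIA-assigned port from /tmp/<interfaceName>-port, which the script above writes world-readable.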

common/network/ping.nix Normal file (59 lines)
View File

@@ -0,0 +1,59 @@
{ config, pkgs, lib, ... }:
# Keeps peer-to-peer connections alive with a periodic ping
with lib;
with builtins;
# todo auto restart
let
cfg = config.keepalive-ping;
serviceTemplate = host:
{
"keepalive-ping@${host}" = {
description = "Periodic ping keep alive for ${host} connection";
requires = [ "network-online.target" ];
after = [ "network.target" "network-online.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig.Restart = "always";
path = with pkgs; [ iputils ];
script = ''
ping -i ${cfg.delay} ${host} &>/dev/null
'';
};
};
combineAttrs = foldl recursiveUpdate { };
serviceList = map serviceTemplate cfg.hosts;
services = combineAttrs serviceList;
in
{
options.keepalive-ping = {
enable = mkEnableOption "Enable keep alive ping task";
hosts = mkOption {
type = types.listOf types.str;
default = [ ];
description = ''
Hosts to ping periodically
'';
};
delay = mkOption {
type = types.str;
default = "60";
description = ''
Ping interval in seconds for each host being pinged
'';
};
};
config = mkIf cfg.enable {
systemd.services = services;
};
}
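Using the module from a host config is then just a matter of setting the options declared above; a minimal sketch (the hostname and interval are examples):

  keepalive-ping.enable = true;
  keepalive-ping.hosts = [ "s0.koi-bebop.ts.net" ];
  keepalive-ping.delay = "30";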

View File

@@ -8,7 +8,11 @@ in
{
  options.services.tailscale.exitNode = mkEnableOption "Enable exit node support";

-  config.services.tailscale.enable = !config.boot.isContainer;
+  config.services.tailscale.enable = mkDefault (!config.boot.isContainer);
+
+  # MagicDNS
+  config.networking.nameservers = mkIf cfg.enable [ "1.1.1.1" "8.8.8.8" ];
+  config.networking.search = mkIf cfg.enable [ "koi-bebop.ts.net" ];

  # exit node
  config.networking.firewall.checkReversePath = mkIf cfg.exitNode "loose";

View File

@@ -26,9 +26,11 @@ in
    '';
  };

+  useOpenVPN = mkEnableOption "Uses OpenVPN instead of wireguard for PIA VPN connection";
+
  config = mkOption {
    type = types.anything;
-    default = {};
+    default = { };
    example = ''
      {
        services.nginx.enable = true;
@@ -41,6 +43,9 @@ in
  };

  config = mkIf cfg.enable {
+    pia.wireguard.enable = !cfg.useOpenVPN;
+    pia.wireguard.forwardPortForTransmission = !cfg.useOpenVPN;
+
    containers.${cfg.containerName} = {
      ephemeral = true;
      autoStart = true;
@@ -59,36 +64,40 @@
        }
      )));

-      enableTun = true;
+      enableTun = cfg.useOpenVPN;
      privateNetwork = true;
      hostAddress = "172.16.100.1";
      localAddress = "172.16.100.2";

      config = {
-        imports = allModules ++ [cfg.config];
-        nixpkgs.pkgs = pkgs;
+        imports = allModules ++ [ cfg.config ];

-        networking.firewall.enable = mkForce false;
-        pia.enable = true;
-        pia.server = "swiss.privacy.network"; # swiss vpn
+        # networking.firewall.enable = mkForce false;
+        networking.firewall.trustedInterfaces = [
+          # completely trust internal interface to host
+          "eth0"
+        ];
+
+        pia.openvpn.enable = cfg.useOpenVPN;
+        pia.openvpn.server = "swiss.privacy.network"; # swiss vpn

+        # TODO fix so it does run it's own resolver again
        # run it's own DNS resolver
        networking.useHostResolvConf = false;
-        services.resolved.enable = true;
+        # services.resolved.enable = true;
+        networking.nameservers = [ "1.1.1.1" "8.8.8.8" ];
      };
    };

    # load secrets the container needs
    age.secrets = config.containers.${cfg.containerName}.config.age.secrets;

-    # forwarding for vpn container
-    networking.nat.enable = true;
-    networking.nat.internalInterfaces = [
+    # forwarding for vpn container (only for OpenVPN)
+    networking.nat.enable = mkIf cfg.useOpenVPN true;
+    networking.nat.internalInterfaces = mkIf cfg.useOpenVPN [
      "ve-${cfg.containerName}"
    ];
-    networking.ip_forward = true;
+    networking.ip_forward = mkIf cfg.useOpenVPN true;

    # assumes only one potential interface
    networking.usePredictableInterfaceNames = false;

View File

@@ -1,14 +0,0 @@
{ lib, config, ... }:
let
cfg = config.services.zerotierone;
in {
config = lib.mkIf cfg.enable {
services.zerotierone.joinNetworks = [
"565799d8f6d654c0"
];
networking.firewall.allowedUDPPorts = [
9993
];
};
}

common/nix-builder.nix Normal file (60 lines)
View File

@@ -0,0 +1,60 @@
{ config, lib, ... }:
let
builderRole = "nix-builder";
builderUserName = "nix-builder";
machinesByRole = role: lib.filterAttrs (hostname: cfg: builtins.elem role cfg.systemRoles) config.machines.hosts;
otherMachinesByRole = role: lib.filterAttrs (hostname: cfg: hostname != config.networking.hostName) (machinesByRole role);
thisMachineHasRole = role: builtins.hasAttr config.networking.hostName (machinesByRole role);
builders = machinesByRole builderRole;
thisMachineIsABuilder = thisMachineHasRole builderRole;
# builders don't include themselves as a remote builder
otherBuilders = lib.filterAttrs (hostname: cfg: hostname != config.networking.hostName) builders;
in
lib.mkMerge [
# configure builder
(lib.mkIf thisMachineIsABuilder {
users.users.${builderUserName} = {
description = "Distributed Nix Build User";
group = builderUserName;
isSystemUser = true;
createHome = true;
home = "/var/lib/nix-builder";
useDefaultShell = true;
openssh.authorizedKeys.keys = builtins.map
(builderCfg: builderCfg.hostKey)
(builtins.attrValues config.machines.hosts);
};
users.groups.${builderUserName} = { };
nix.settings.trusted-users = [
builderUserName
];
})
# use each builder
{
nix.distributedBuilds = true;
nix.buildMachines = builtins.map
(builderCfg: {
hostName = builtins.elemAt builderCfg.hostNames 0;
system = builderCfg.arch;
protocol = "ssh-ng";
sshUser = builderUserName;
sshKey = "/etc/ssh/ssh_host_ed25519_key";
maxJobs = 3;
speedFactor = 10;
supportedFeatures = [ "nixos-test" "benchmark" "big-parallel" "kvm" ];
})
(builtins.attrValues otherBuilders);
# It is very likely that the builder's internet is faster or just as fast
nix.extraOptions = ''
builders-use-substitutes = true
'';
}
]
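The role lookup above keys off config.machines.hosts.<name>.systemRoles, hostNames, arch, and hostKey. A hypothetical host entry that opts into the builder role might look like the following; the exact machines.hosts schema is defined elsewhere in this repo, so treat this as a sketch with placeholder values:

  machines.hosts.builder1 = {
    arch = "x86_64-linux";
    systemRoles = [ "nix-builder" ];
    hostNames = [ "builder1.example.com" ];
    hostKey = "ssh-ed25519 AAAA...";
  };

Every other machine then picks this host up automatically via nix.buildMachines, authenticating with its own /etc/ssh/ssh_host_ed25519_key.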

View File

@@ -2,7 +2,8 @@
let let
cfg = config.de; cfg = config.de;
in { in
{
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
# enable pulseaudio support for packages # enable pulseaudio support for packages
nixpkgs.config.pulseaudio = true; nixpkgs.config.pulseaudio = true;
@@ -16,45 +17,6 @@ in {
alsa.support32Bit = true; alsa.support32Bit = true;
pulse.enable = true; pulse.enable = true;
jack.enable = true; jack.enable = true;
# use the example session manager (no others are packaged yet so this is enabled by default,
# no need to redefine it in your config for now)
#media-session.enable = true;
config.pipewire = {
"context.objects" = [
{
# A default dummy driver. This handles nodes marked with the "node.always-driver"
# properyty when no other driver is currently active. JACK clients need this.
factory = "spa-node-factory";
args = {
"factory.name" = "support.node.driver";
"node.name" = "Dummy-Driver";
"priority.driver" = 8000;
};
}
{
factory = "adapter";
args = {
"factory.name" = "support.null-audio-sink";
"node.name" = "Microphone-Proxy";
"node.description" = "Microphone";
"media.class" = "Audio/Source/Virtual";
"audio.position" = "MONO";
};
}
{
factory = "adapter";
args = {
"factory.name" = "support.null-audio-sink";
"node.name" = "Main-Output-Proxy";
"node.description" = "Main Output";
"media.class" = "Audio/Sink";
"audio.position" = "FL,FR";
};
}
];
};
}; };
users.users.googlebot.extraGroups = [ "audio" ]; users.users.googlebot.extraGroups = [ "audio" ];

View File

@@ -17,39 +17,8 @@ let
"PREFIX=$(out)" "PREFIX=$(out)"
]; ];
}; };
in
nvidia-vaapi-driver = pkgs.stdenv.mkDerivation rec { {
pname = "nvidia-vaapi-driver";
version = "0.0.5";
src = pkgs.fetchFromGitHub {
owner = "elFarto";
repo = pname;
rev = "v${version}";
sha256 = "2bycqKolVoaHK64XYcReteuaON9TjzrFhaG5kty28YY=";
};
patches = [
./use-meson-v57.patch
];
nativeBuildInputs = with pkgs; [
meson
cmake
ninja
pkg-config
];
buildInputs = with pkgs; [
nv-codec-headers-11-1-5-1
libva
gst_all_1.gstreamer
gst_all_1.gst-plugins-bad
libglvnd
];
};
in {
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
# chromium with specific extensions + settings # chromium with specific extensions + settings
programs.chromium = { programs.chromium = {
@@ -92,7 +61,7 @@ in {
enable = true; enable = true;
extraPackages = with pkgs; [ extraPackages = with pkgs; [
intel-media-driver # LIBVA_DRIVER_NAME=iHD intel-media-driver # LIBVA_DRIVER_NAME=iHD
vaapiIntel # LIBVA_DRIVER_NAME=i965 (older but works better for Firefox/Chromium) vaapiIntel # LIBVA_DRIVER_NAME=i965 (older but works better for Firefox/Chromium)
# vaapiVdpau # vaapiVdpau
libvdpau-va-gl libvdpau-va-gl
nvidia-vaapi-driver nvidia-vaapi-driver

View File

@@ -2,15 +2,16 @@
let
  cfg = config.de;
-in {
+in
+{
  imports = [
    ./kde.nix
-    ./xfce.nix
+    # ./xfce.nix
    ./yubikey.nix
    ./chromium.nix
    # ./firefox.nix
    ./audio.nix
    # ./torbrowser.nix
    ./pithos.nix
    ./spotify.nix
    ./vscodium.nix
@@ -36,12 +37,10 @@ in {
    mumble
    tigervnc
    bluez-tools
-    vscodium
    element-desktop
    mpv
    nextcloud-client
    signal-desktop
-    minecraft
    gparted
    libreoffice-fresh
    thunderbird
@@ -50,6 +49,13 @@ in {
    arduino
    yt-dlp
    jellyfin-media-player
+    joplin-desktop
+    config.inputs.deploy-rs.packages.${config.currentSystem}.deploy-rs
+    lxqt.pavucontrol-qt
+    barrier
+
+    # For Nix IDE
+    nixpkgs-fmt
  ];

  # Networking
@@ -63,7 +69,7 @@ in {
  ];

  # Printer discovery
  services.avahi.enable = true;
-  services.avahi.nssmdns = true;
+  services.avahi.nssmdns4 = true;

  programs.file-roller.enable = true;

View File

@@ -2,7 +2,8 @@
let let
cfg = config.de; cfg = config.de;
in { in
{
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
users.users.googlebot.packages = [ users.users.googlebot.packages = [
pkgs.discord pkgs.discord

View File

@@ -20,7 +20,7 @@ let
}; };
firefox = pkgs.wrapFirefox somewhatPrivateFF { firefox = pkgs.wrapFirefox somewhatPrivateFF {
desktopName = "Sneed Browser"; desktopName = "Sneed Browser";
nixExtensions = [ nixExtensions = [
(pkgs.fetchFirefoxAddon { (pkgs.fetchFirefoxAddon {
@@ -71,8 +71,8 @@ let
TopSites = false; TopSites = false;
}; };
UserMessaging = { UserMessaging = {
ExtensionRecommendations = false; ExtensionRecommendations = false;
SkipOnboarding = true; SkipOnboarding = true;
}; };
WebsiteFilter = { WebsiteFilter = {
Block = [ Block = [

View File

@@ -2,14 +2,12 @@
let
  cfg = config.de;
-in {
+in
+{
  config = lib.mkIf cfg.enable {
-    # kde plasma
-    services.xserver = {
-      enable = true;
-      desktopManager.plasma5.enable = true;
-      displayManager.sddm.enable = true;
-    };
+    services.displayManager.sddm.enable = true;
+    services.displayManager.sddm.wayland.enable = true;
+    services.desktopManager.plasma6.enable = true;

    # kde apps
    nixpkgs.config.firefox.enablePlasmaBrowserIntegration = true;

View File

@@ -1,36 +1,50 @@
-# mounts the samba share on s0 over zeroteir
-{ config, lib, ... }:
+# mounts the samba share on s0 over tailscale
+{ config, lib, pkgs, ... }:
let
  cfg = config.services.mount-samba;

-  # prevents hanging on network split
-  network_opts = "x-systemd.automount,noauto,x-systemd.idle-timeout=60,x-systemd.device-timeout=5s,x-systemd.mount-timeout=5s,nostrictsync,cache=loose,handlecache,handletimeout=30000,rwpidforward,mapposix,soft,resilienthandles,echo_interval=10,noblocksend";
+  # prevents hanging on network split and other similar niceties to ensure a stable connection
+  network_opts = "nostrictsync,cache=strict,handlecache,handletimeout=30000,rwpidforward,mapposix,soft,resilienthandles,echo_interval=10,noblocksend,fsc";
+  systemd_opts = "x-systemd.automount,noauto,x-systemd.idle-timeout=60,x-systemd.device-timeout=5s,x-systemd.mount-timeout=5s";
  user_opts = "uid=${toString config.users.users.googlebot.uid},file_mode=0660,dir_mode=0770,user";
-  auth_opts = "credentials=/run/agenix/smb-secrets";
-  version_opts = "vers=2.1";
-  opts = "${network_opts},${user_opts},${version_opts},${auth_opts}";
-in {
+  auth_opts = "sec=ntlmv2i,credentials=/run/agenix/smb-secrets";
+  version_opts = "vers=3.1.1";
+  public_user_opts = "gid=${toString config.users.groups.users.gid}";
+
+  opts = "${systemd_opts},${network_opts},${user_opts},${version_opts},${auth_opts}";
+in
+{
  options.services.mount-samba = {
    enable = lib.mkEnableOption "enable mounting samba shares";
  };

-  config = lib.mkIf (cfg.enable && config.services.zerotierone.enable) {
+  config = lib.mkIf (cfg.enable && config.services.tailscale.enable) {
    fileSystems."/mnt/public" = {
-      device = "//s0.zt.neet.dev/public";
+      device = "//s0.koi-bebop.ts.net/public";
      fsType = "cifs";
-      options = [ opts ];
+      options = [ "${opts},${public_user_opts}" ];
    };

    fileSystems."/mnt/private" = {
-      device = "//s0.zt.neet.dev/googlebot";
+      device = "//s0.koi-bebop.ts.net/googlebot";
      fsType = "cifs";
      options = [ opts ];
    };

    age.secrets.smb-secrets.file = ../../secrets/smb-secrets.age;
+
+    environment.shellAliases = {
+      # remount storage
+      remount_public = "sudo systemctl restart mnt-public.mount";
+      remount_private = "sudo systemctl restart mnt-private.mount";
+
+      # Encrypted Vault
+      vault_unlock = "${pkgs.gocryptfs}/bin/gocryptfs /mnt/private/.vault/ /mnt/vault/";
+      vault_lock = "umount /mnt/vault/";
+    };
  };
}

View File

@@ -2,7 +2,8 @@
let let
cfg = config.de; cfg = config.de;
in { in
{
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
nixpkgs.overlays = [ nixpkgs.overlays = [
(self: super: { (self: super: {
@@ -11,7 +12,7 @@ in {
version = "1.5.1"; version = "1.5.1";
src = super.fetchFromGitHub { src = super.fetchFromGitHub {
owner = pname; owner = pname;
repo = pname; repo = pname;
rev = version; rev = version;
sha256 = "il7OAALpHFZ6wjco9Asp04zWHCD8Ni+iBdiJWcMiQA4="; sha256 = "il7OAALpHFZ6wjco9Asp04zWHCD8Ni+iBdiJWcMiQA4=";
}; };

View File

@@ -4,7 +4,7 @@ with lib;
let let
cfg = config.services.spotifyd; cfg = config.services.spotifyd;
toml = pkgs.formats.toml {}; toml = pkgs.formats.toml { };
spotifydConf = toml.generate "spotify.conf" cfg.settings; spotifydConf = toml.generate "spotify.conf" cfg.settings;
in in
{ {
@@ -17,7 +17,7 @@ in
enable = mkEnableOption "spotifyd, a Spotify playing daemon"; enable = mkEnableOption "spotifyd, a Spotify playing daemon";
settings = mkOption { settings = mkOption {
default = {}; default = { };
type = toml.type; type = toml.type;
example = { global.bitrate = 320; }; example = { global.bitrate = 320; };
description = '' description = ''
@@ -28,7 +28,7 @@ in
users = mkOption { users = mkOption {
type = with types; listOf str; type = with types; listOf str;
default = []; default = [ ];
description = '' description = ''
Usernames to be added to the "spotifyd" group, so that they Usernames to be added to the "spotifyd" group, so that they
can start and interact with the userspace daemon. can start and interact with the userspace daemon.
@@ -43,7 +43,6 @@ in
services.spotifyd.users = [ "googlebot" ]; services.spotifyd.users = [ "googlebot" ];
users.users.googlebot.packages = with pkgs; [ users.users.googlebot.packages = with pkgs; [
spotify spotify
spotify-tui
]; ];
users.groups.spotifyd = { users.groups.spotifyd = {

View File

@@ -2,7 +2,8 @@
let let
cfg = config.de; cfg = config.de;
in { in
{
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
programs.steam.enable = true; programs.steam.enable = true;
hardware.steam-hardware.enable = true; # steam controller hardware.steam-hardware.enable = true; # steam controller

View File

@@ -2,7 +2,8 @@
let let
cfg = config.de; cfg = config.de;
in { in
{
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
nixpkgs.overlays = [ nixpkgs.overlays = [
(self: super: { (self: super: {

View File

@@ -2,13 +2,14 @@
let
  cfg = config.de.touchpad;
-in {
+in
+{
  options.de.touchpad = {
    enable = lib.mkEnableOption "enable touchpad";
  };

  config = lib.mkIf cfg.enable {
-    services.xserver.libinput.enable = true;
-    services.xserver.libinput.touchpad.naturalScrolling = true;
+    services.libinput.enable = true;
+    services.libinput.touchpad.naturalScrolling = true;
  };
}

View File

@@ -4,8 +4,20 @@ let
  cfg = config.de;

  extensions = with pkgs.vscode-extensions; [
-    # bbenoist.Nix # nix syntax support
-    # arrterian.nix-env-selector # nix dev envs
+    bbenoist.nix # nix syntax support
+    arrterian.nix-env-selector # nix dev envs
+    dart-code.dart-code
+    dart-code.flutter
+    golang.go
+    jnoortheen.nix-ide
+    ms-vscode.cpptools
+  ] ++ pkgs.vscode-utils.extensionsFromVscodeMarketplace [
+    {
+      name = "platformio-ide";
+      publisher = "platformio";
+      version = "3.1.1";
+      sha256 = "g9yTG3DjVUS2w9eHGAai5LoIfEGus+FPhqDnCi4e90Q=";
+    }
  ];

  vscodium-with-extensions = pkgs.vscode-with-extensions.override {

View File

@@ -2,7 +2,8 @@
let let
cfg = config.de; cfg = config.de;
in { in
{
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
services.xserver = { services.xserver = {
enable = true; enable = true;

View File

@@ -2,7 +2,8 @@
let let
cfg = config.de; cfg = config.de;
in { in
{
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
# yubikey # yubikey
services.pcscd.enable = true; services.pcscd.enable = true;

View File

@@ -0,0 +1,87 @@
# Starting point:
# https://github.com/aldoborrero/mynixpkgs/commit/c501c1e32dba8f4462dcecb57eee4b9e52038e27
{ config, pkgs, lib, ... }:
let
cfg = config.services.actual-server;
stateDir = "/var/lib/${cfg.stateDirName}";
in
{
options.services.actual-server = {
enable = lib.mkEnableOption "Actual Server";
hostname = lib.mkOption {
type = lib.types.str;
default = "localhost";
description = "Hostname for the Actual Server.";
};
port = lib.mkOption {
type = lib.types.int;
default = 25448;
description = "Port on which the Actual Server should listen.";
};
stateDirName = lib.mkOption {
type = lib.types.str;
default = "actual-server";
description = "Name of the directory under /var/lib holding the server's data.";
};
upload = {
fileSizeSyncLimitMB = lib.mkOption {
type = lib.types.nullOr lib.types.int;
default = null;
description = "File size limit in MB for synchronized files.";
};
syncEncryptedFileSizeLimitMB = lib.mkOption {
type = lib.types.nullOr lib.types.int;
default = null;
description = "File size limit in MB for synchronized encrypted files.";
};
fileSizeLimitMB = lib.mkOption {
type = lib.types.nullOr lib.types.int;
default = null;
description = "File size limit in MB for file uploads.";
};
};
};
config = lib.mkIf cfg.enable {
systemd.services.actual-server = {
description = "Actual Server";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${pkgs.actual-server}/bin/actual-server";
Restart = "always";
StateDirectory = cfg.stateDirName;
WorkingDirectory = stateDir;
DynamicUser = true;
UMask = "0007";
};
environment = {
NODE_ENV = "production";
ACTUAL_PORT = toString cfg.port;
# Actual is actually very bad at configuring its own paths despite that information being readily available
ACTUAL_USER_FILES = "${stateDir}/user-files";
ACTUAL_SERVER_FILES = "${stateDir}/server-files";
ACTUAL_DATA_DIR = stateDir;
ACTUAL_UPLOAD_FILE_SYNC_SIZE_LIMIT_MB = toString (cfg.upload.fileSizeSyncLimitMB or "");
ACTUAL_UPLOAD_SYNC_ENCRYPTED_FILE_SIZE_LIMIT_MB = toString (cfg.upload.syncEncryptedFileSizeLimitMB or "");
ACTUAL_UPLOAD_FILE_SIZE_LIMIT_MB = toString (cfg.upload.fileSizeLimitMB or "");
};
};
services.nginx.virtualHosts.${cfg.hostname} = {
enableACME = true;
forceSSL = true;
locations."/".proxyPass = "http://localhost:${toString cfg.port}";
};
};
}
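Enabling the service then only requires the options declared above; a minimal sketch (the hostname and size limit are placeholders):

  services.actual-server = {
    enable = true;
    hostname = "budget.example.com";
    upload.fileSizeLimitMB = 20;
  };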

View File

@@ -3,9 +3,9 @@
with lib;
let
  cfg = config.ceph;
-in {
-  options.ceph = {
-  };
+in
+{
+  options.ceph = { };

  config = mkIf cfg.enable {
    # ceph.enable = true;

common/server/dashy.nix Normal file (53 lines)
View File

@@ -0,0 +1,53 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.dashy;
in
{
options.services.dashy = {
enable = mkEnableOption "dashy";
imageTag = mkOption {
type = types.str;
default = "latest";
};
port = mkOption {
type = types.int;
default = 56815;
};
configFile = lib.mkOption {
type = lib.types.path;
description = "Path to the YAML configuration file";
};
};
config = mkIf cfg.enable {
virtualisation.oci-containers.containers = {
dashy = {
image = "lissy93/dashy:${cfg.imageTag}";
environment = {
TZ = "${config.time.timeZone}";
};
ports = [
"127.0.0.1:${toString cfg.port}:80"
];
volumes = [
"${cfg.configFile}:/app/public/conf.yml"
];
};
};
services.nginx.enable = true;
services.nginx.virtualHosts."s0.koi-bebop.ts.net" = {
default = true;
addSSL = true;
serverAliases = [ "s0" ];
sslCertificate = "/secret/ssl/s0.koi-bebop.ts.net.crt";
sslCertificateKey = "/secret/ssl/s0.koi-bebop.ts.net.key";
locations."/" = {
proxyPass = "http://localhost:${toString cfg.port}";
};
};
};
}
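Usage is a matter of pointing the module at a Dashy YAML config file; a minimal sketch (the config path is a placeholder):

  services.dashy = {
    enable = true;
    configFile = ./dashy-conf.yml;
  };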

View File

@@ -14,5 +14,13 @@
    ./radio.nix
    ./samba.nix
    ./owncast.nix
./mailserver.nix
./nextcloud.nix
./iodine.nix
./searx.nix
./gitea-actions-runner.nix
./dashy.nix
./librechat.nix
./actualbudget.nix
  ];
}

View File

@@ -0,0 +1,136 @@
{ config, pkgs, lib, allModules, ... }:
# Gitea Actions Runner. Starts a 'host' runner that currently runs directly on the host
# (originally intended to run inside of a NixOS container).
# This is useful for providing a real Nix/OS builder to Gitea.
# Warning: NixOS containers are not secure. For example, the container shares the /nix/store.
# Therefore, this should not be used to run untrusted code.
# To enable, assign a machine the 'gitea-actions-runner' system role.
# TODO: skipping running inside of a NixOS container for now because of issues getting docker/podman running
let
runnerRole = "gitea-actions-runner";
runners = config.machines.roles.${runnerRole};
thisMachineIsARunner = builtins.elem config.networking.hostName runners;
containerName = "gitea-runner";
in
{
config = lib.mkIf (thisMachineIsARunner && !config.boot.isContainer) {
# containers.${containerName} = {
# ephemeral = true;
# autoStart = true;
# # for podman
# enableTun = true;
# # privateNetwork = true;
# # hostAddress = "172.16.101.1";
# # localAddress = "172.16.101.2";
# bindMounts =
# {
# "/run/agenix/gitea-actions-runner-token" = {
# hostPath = "/run/agenix/gitea-actions-runner-token";
# isReadOnly = true;
# };
# "/var/lib/gitea-runner" = {
# hostPath = "/var/lib/gitea-runner";
# isReadOnly = false;
# };
# };
# extraFlags = [
# # Allow podman
# ''--system-call-filter=thisystemcalldoesnotexistforsure''
# ];
# additionalCapabilities = [
# "CAP_SYS_ADMIN"
# ];
# config = {
# imports = allModules;
# # speeds up evaluation
# nixpkgs.pkgs = pkgs;
# networking.hostName = lib.mkForce containerName;
# # don't use remote builders
# nix.distributedBuilds = lib.mkForce false;
# environment.systemPackages = with pkgs; [
# git
# # Gitea Actions rely heavily on node. Include it because it would be installed anyway.
# nodejs
# ];
# services.gitea-actions-runner.instances.inst = {
# enable = true;
# name = config.networking.hostName;
# url = "https://git.neet.dev/";
# tokenFile = "/run/agenix/gitea-actions-runner-token";
# labels = [
# "ubuntu-latest:docker://node:18-bullseye"
# "nixos:host"
# ];
# };
# # To allow building on the host, must override the the service's config so it doesn't use a dynamic user
# systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
# users.users.gitea-runner = {
# home = "/var/lib/gitea-runner";
# group = "gitea-runner";
# isSystemUser = true;
# createHome = true;
# };
# users.groups.gitea-runner = { };
# virtualisation.podman.enable = true;
# boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
# };
# };
# networking.nat.enable = true;
# networking.nat.internalInterfaces = [
# "ve-${containerName}"
# ];
# networking.ip_forward = true;
# don't use remote builders
nix.distributedBuilds = lib.mkForce false;
services.gitea-actions-runner.instances.inst = {
enable = true;
name = config.networking.hostName;
url = "https://git.neet.dev/";
tokenFile = "/run/agenix/gitea-actions-runner-token";
labels = [
"ubuntu-latest:docker://node:18-bullseye"
"nixos:host"
];
};
environment.systemPackages = with pkgs; [
git
# Gitea Actions rely heavily on node. Include it because it would be installed anyway.
nodejs
];
# To allow building on the host, must override the service's config so it doesn't use a dynamic user
systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
users.users.gitea-runner = {
home = "/var/lib/gitea-runner";
group = "gitea-runner";
isSystemUser = true;
createHome = true;
};
users.groups.gitea-runner = { };
virtualisation.podman.enable = true;
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
age.secrets.gitea-actions-runner-token.file = ../../secrets/gitea-actions-runner-token.age;
};
}
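Since the module keys off the 'gitea-actions-runner' system role (via config.machines.roles), turning it on for a machine is presumably just a matter of adding that role to the host's entry, assuming roles are derived from each host's systemRoles list as in the nix-builder module above; a hypothetical sketch:

  machines.hosts.builder1.systemRoles = [ "gitea-actions-runner" ];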

View File

@@ -1,8 +1,9 @@
-{ lib, config, ... }:
+{ lib, pkgs, config, ... }:
let
  cfg = config.services.gitea;
-in {
+in
+{
  options.services.gitea = {
    hostname = lib.mkOption {
      type = lib.types.str;
@@ -11,29 +12,63 @@ in {
  };

  config = lib.mkIf cfg.enable {
    services.gitea = {
-      domain = cfg.hostname;
-      rootUrl = "https://${cfg.hostname}/";
      appName = cfg.hostname;
-      ssh.enable = true;
-      # lfs.enable = true;
-      dump.enable = true;
-      cookieSecure = true;
-      disableRegistration = true;
+      lfs.enable = true;
+      # dump.enable = true;

      settings = {
+        server = {
+          ROOT_URL = "https://${cfg.hostname}/";
+          DOMAIN = cfg.hostname;
+        };
        other = {
          SHOW_FOOTER_VERSION = false;
        };
        ui = {
          DEFAULT_THEME = "arc-green";
        };
+        service = {
+          DISABLE_REGISTRATION = true;
+        };
+        session = {
+          COOKIE_SECURE = true;
+          PROVIDER = "db";
+          SESSION_LIFE_TIME = 259200; # 3 days
+          GC_INTERVAL_TIME = 259200; # 3 days
+        };
+        mailer = {
+          ENABLED = true;
+          MAILER_TYPE = "smtp";
+          SMTP_ADDR = "mail.neet.dev";
+          SMTP_PORT = "465";
+          IS_TLS_ENABLED = true;
+          USER = "robot@runyan.org";
+          FROM = "no-reply@neet.dev";
+        };
+        actions = {
+          ENABLED = true;
+        };
+        indexer = {
+          REPO_INDEXER_ENABLED = true;
+        };
      };
+      mailerPasswordFile = "/run/agenix/robots-email-pw";
    };

+    age.secrets.robots-email-pw = {
+      file = ../../secrets/robots-email-pw.age;
+      owner = config.services.gitea.user;
+    };
+
+    # backups
+    backup.group."gitea".paths = [
+      config.services.gitea.stateDir
+    ];
+
    services.nginx.enable = true;
    services.nginx.virtualHosts.${cfg.hostname} = {
      enableACME = true;
      forceSSL = true;
      locations."/" = {
-        proxyPass = "http://localhost:${toString cfg.httpPort}";
+        proxyPass = "http://localhost:${toString cfg.settings.server.HTTP_PORT}";
      };
    };
  };

View File

@@ -20,6 +20,6 @@ in
hydraURL = "https://${domain}"; hydraURL = "https://${domain}";
useSubstitutes = true; useSubstitutes = true;
notificationSender = notifyEmail; notificationSender = notifyEmail;
buildMachinesFiles = []; buildMachinesFiles = [ ];
}; };
} }

View File

@@ -7,7 +7,8 @@
let let
cfg = config.services.icecast; cfg = config.services.icecast;
in { in
{
options.services.icecast = { options.services.icecast = {
mount = lib.mkOption { mount = lib.mkOption {
type = lib.types.str; type = lib.types.str;

common/server/iodine.nix Normal file (21 lines)
View File

@@ -0,0 +1,21 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.iodine.server;
in
{
config = lib.mkIf cfg.enable {
# iodine DNS-based vpn
services.iodine.server = {
ip = "192.168.99.1";
domain = "tun.neet.dev";
passwordFile = "/run/agenix/iodine";
};
age.secrets.iodine.file = ../../secrets/iodine.age;
networking.firewall.allowedUDPPorts = [ 53 ];
networking.nat.internalInterfaces = [
"dns0" # iodine
];
};
}
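The module only activates when the importing host turns the upstream option on; a minimal sketch:

  services.iodine.server.enable = true;

Clients would then connect with the standard iodine client against the tun.neet.dev domain, using the shared password from the agenix secret.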

View File

@@ -0,0 +1,62 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.librechat;
in
{
options.services.librechat = {
enable = mkEnableOption "librechat";
port = mkOption {
type = types.int;
default = 3080;
};
host = lib.mkOption {
type = lib.types.str;
example = "example.com";
};
};
config = mkIf cfg.enable {
virtualisation.oci-containers.containers = {
librechat = {
image = "ghcr.io/danny-avila/librechat:v0.6.6";
environment = {
HOST = "0.0.0.0";
MONGO_URI = "mongodb://host.containers.internal:27017/LibreChat";
ENDPOINTS = "openAI,google,bingAI,gptPlugins";
};
environmentFiles = [
"/run/agenix/librechat-env-file"
];
ports = [
"${toString cfg.port}:3080"
];
};
};
age.secrets.librechat-env-file.file = ../../secrets/librechat-env-file.age;
services.mongodb.enable = true;
services.mongodb.bind_ip = "0.0.0.0";
# easier podman maintenance
virtualisation.oci-containers.backend = "podman";
virtualisation.podman.dockerSocket.enable = true;
virtualisation.podman.dockerCompat = true;
# For mongodb access
networking.firewall.trustedInterfaces = [
"podman0" # for librechat
];
services.nginx.virtualHosts.${cfg.host} = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString cfg.port}";
proxyWebsockets = true;
};
};
};
}
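A minimal sketch of enabling it from a host config (the host value is a placeholder, like the option's own example):

  services.librechat = {
    enable = true;
    host = "chat.example.com";
  };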

View File

@@ -0,0 +1,112 @@
{ config, pkgs, lib, ... }:
with builtins;
let
cfg = config.mailserver;
domains = [
"neet.space"
"neet.dev"
"neet.cloud"
"runyan.org"
"runyan.rocks"
"thunderhex.com"
"tar.ninja"
"bsd.ninja"
"bsd.rocks"
];
in
{
config = lib.mkIf cfg.enable {
# kresd doesn't work with tailscale MagicDNS
mailserver.localDnsResolver = false;
services.resolved.enable = true;
mailserver = {
fqdn = "mail.neet.dev";
dkimKeyBits = 2048;
indexDir = "/var/lib/mailindex";
enableManageSieve = true;
fullTextSearch.enable = true;
fullTextSearch.indexAttachments = true;
fullTextSearch.memoryLimit = 500;
inherit domains;
loginAccounts = {
"jeremy@runyan.org" = {
hashedPasswordFile = "/run/agenix/hashed-email-pw";
# catchall for all domains
aliases = map (domain: "@${domain}") domains;
};
"cris@runyan.org" = {
hashedPasswordFile = "/run/agenix/cris-hashed-email-pw";
aliases = [ "chris@runyan.org" ];
};
"robot@runyan.org" = {
aliases = [
"no-reply@neet.dev"
"robot@neet.dev"
];
sendOnly = true;
hashedPasswordFile = "/run/agenix/hashed-robots-email-pw";
};
};
rejectRecipients = [
"george@runyan.org"
"joslyn@runyan.org"
"damon@runyan.org"
"jonas@runyan.org"
"simon@neet.dev"
];
forwards = {
"amazon@runyan.org" = [
"jeremy@runyan.org"
"cris@runyan.org"
];
};
certificateScheme = 3; # use let's encrypt for certs
};
age.secrets.hashed-email-pw.file = ../../secrets/hashed-email-pw.age;
age.secrets.cris-hashed-email-pw.file = ../../secrets/cris-hashed-email-pw.age;
age.secrets.hashed-robots-email-pw.file = ../../secrets/hashed-robots-email-pw.age;
# sendmail to use xxx@domain instead of xxx@mail.domain
services.postfix.origin = "$mydomain";
# relay sent mail through mailgun
# https://www.howtoforge.com/community/threads/different-smtp-relays-for-different-domains-in-postfix.82711/#post-392620
services.postfix.config = {
smtp_sasl_auth_enable = "yes";
smtp_sasl_security_options = "noanonymous";
smtp_sasl_password_maps = "hash:/var/lib/postfix/conf/sasl_relay_passwd";
smtp_use_tls = "yes";
sender_dependent_relayhost_maps = "hash:/var/lib/postfix/conf/sender_relay";
smtp_sender_dependent_authentication = "yes";
};
services.postfix.mapFiles.sender_relay =
let
relayHost = "[smtp.mailgun.org]:587";
in
pkgs.writeText "sender_relay"
(concatStringsSep "\n" (map (domain: "@${domain} ${relayHost}") domains));
services.postfix.mapFiles.sasl_relay_passwd = "/run/agenix/sasl_relay_passwd";
age.secrets.sasl_relay_passwd.file = ../../secrets/sasl_relay_passwd.age;
# webmail
services.nginx.enable = true;
services.roundcube = {
enable = true;
hostName = config.mailserver.fqdn;
extraConfig = ''
# STARTTLS is needed for authentication, so the FQDN is required to match the certificate
$config['smtp_server'] = "tls://${config.mailserver.fqdn}";
$config['smtp_user'] = "%u";
$config['smtp_pass'] = "%p";
'';
};
# backups
backup.group."email".paths = [
config.mailserver.mailDirectory
];
};
}
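For reference, the sender_relay map generated by the writeText call above expands to one entry per domain in the list, roughly:

  @neet.space [smtp.mailgun.org]:587
  @neet.dev [smtp.mailgun.org]:587
  @runyan.org [smtp.mailgun.org]:587
  (and so on for the remaining domains)

so postfix selects the Mailgun relay, plus the matching SASL credentials from sasl_relay_passwd, based on the sender's domain.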

View File

@@ -3,7 +3,8 @@
let let
cfg = config.services.matrix; cfg = config.services.matrix;
certs = config.security.acme.certs; certs = config.security.acme.certs;
in { in
{
options.services.matrix = { options.services.matrix = {
enable = lib.mkEnableOption "enable matrix"; enable = lib.mkEnableOption "enable matrix";
element-web = { element-web = {
@@ -62,15 +63,15 @@ in {
settings = { settings = {
server_name = cfg.host; server_name = cfg.host;
enable_registration = cfg.enable_registration; enable_registration = cfg.enable_registration;
listeners = [ { listeners = [{
bind_addresses = ["127.0.0.1"]; bind_addresses = [ "127.0.0.1" ];
port = cfg.port; port = cfg.port;
tls = false; tls = false;
resources = [ { resources = [{
compress = true; compress = true;
names = [ "client" "federation" ]; names = [ "client" "federation" ];
} ]; }];
} ]; }];
turn_uris = [ turn_uris = [
"turn:${cfg.turn.host}:${toString cfg.turn.port}?transport=udp" "turn:${cfg.turn.host}:${toString cfg.turn.port}?transport=udp"
"turn:${cfg.turn.host}:${toString cfg.turn.port}?transport=tcp" "turn:${cfg.turn.host}:${toString cfg.turn.port}?transport=tcp"
@@ -120,7 +121,7 @@ in {
services.nginx = { services.nginx = {
enable = true; enable = true;
virtualHosts.${cfg.host} = { virtualHosts.${cfg.host} = {
enableACME = true; enableACME = true;
forceSSL = true; forceSSL = true;
listen = [ listen = [
@@ -137,7 +138,8 @@ in {
]; ];
locations."/".proxyPass = "http://localhost:${toString cfg.port}"; locations."/".proxyPass = "http://localhost:${toString cfg.port}";
}; };
virtualHosts.${cfg.turn.host} = { # get TLS cert for TURN server virtualHosts.${cfg.turn.host} = {
# get TLS cert for TURN server
enableACME = true; enableACME = true;
forceSSL = true; forceSSL = true;
}; };

View File

@@ -3,7 +3,8 @@
let let
cfg = config.services.murmur; cfg = config.services.murmur;
certs = config.security.acme.certs; certs = config.security.acme.certs;
in { in
{
options.services.murmur.domain = lib.mkOption { options.services.murmur.domain = lib.mkOption {
type = lib.types.str; type = lib.types.str;
}; };

View File

@@ -0,0 +1,33 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.nextcloud;
in
{
config = lib.mkIf cfg.enable {
services.nextcloud = {
https = true;
package = pkgs.nextcloud28;
hostName = "neet.cloud";
config.dbtype = "sqlite";
config.adminuser = "jeremy";
config.adminpassFile = "/run/agenix/nextcloud-pw";
autoUpdateApps.enable = true;
};
age.secrets.nextcloud-pw = {
file = ../../secrets/nextcloud-pw.age;
owner = "nextcloud";
};
# backups
backup.group."nextcloud".paths = [
config.services.nextcloud.home
];
services.nginx.virtualHosts.${config.services.nextcloud.hostName} = {
enableACME = true;
forceSSL = true;
};
};
}

View File

@@ -5,7 +5,8 @@ let
nginxWithRTMP = pkgs.nginx.override { nginxWithRTMP = pkgs.nginx.override {
modules = [ pkgs.nginxModules.rtmp ]; modules = [ pkgs.nginxModules.rtmp ];
}; };
in { in
{
options.services.nginx.stream = { options.services.nginx.stream = {
enable = lib.mkEnableOption "enable nginx rtmp/hls/dash video streaming"; enable = lib.mkEnableOption "enable nginx rtmp/hls/dash video streaming";
port = lib.mkOption { port = lib.mkOption {

View File

@@ -2,7 +2,8 @@
let let
cfg = config.services.nginx; cfg = config.services.nginx;
in { in
{
config = lib.mkIf cfg.enable { config = lib.mkIf cfg.enable {
services.nginx = { services.nginx = {
recommendedGzipSettings = true; recommendedGzipSettings = true;

View File

@@ -4,7 +4,8 @@ with lib;
let let
cfg = config.services.owncast; cfg = config.services.owncast;
in { in
{
options.services.owncast = { options.services.owncast = {
hostname = lib.mkOption { hostname = lib.mkOption {
type = types.str; type = types.str;

View File

@@ -14,7 +14,8 @@ let
cp -ar $src $out cp -ar $src $out
''; '';
}; };
in { in
{
options.services.privatebin = { options.services.privatebin = {
enable = lib.mkEnableOption "enable privatebin"; enable = lib.mkEnableOption "enable privatebin";
host = lib.mkOption { host = lib.mkOption {
@@ -30,7 +31,7 @@ in {
group = "privatebin"; group = "privatebin";
isSystemUser = true; isSystemUser = true;
}; };
users.groups.privatebin = {}; users.groups.privatebin = { };
services.nginx.enable = true; services.nginx.enable = true;
services.nginx.virtualHosts.${cfg.host} = { services.nginx.virtualHosts.${cfg.host} = {

View File

@@ -3,7 +3,8 @@
let let
cfg = config.services.radio; cfg = config.services.radio;
radioPackage = config.inputs.radio.packages.${config.currentSystem}.radio; radioPackage = config.inputs.radio.packages.${config.currentSystem}.radio;
in { in
{
options.services.radio = { options.services.radio = {
enable = lib.mkEnableOption "enable radio"; enable = lib.mkEnableOption "enable radio";
user = lib.mkOption { user = lib.mkOption {
@@ -56,11 +57,11 @@ in {
home = cfg.dataDir; home = cfg.dataDir;
createHome = true; createHome = true;
}; };
users.groups.${cfg.group} = {}; users.groups.${cfg.group} = { };
systemd.services.radio = { systemd.services.radio = {
enable = true; enable = true;
after = ["network.target"]; after = [ "network.target" ];
wantedBy = ["multi-user.target"]; wantedBy = [ "multi-user.target" ];
serviceConfig.ExecStart = "${radioPackage}/bin/radio ${config.services.icecast.listen.address}:${toString config.services.icecast.listen.port} ${config.services.icecast.mount} 5500"; serviceConfig.ExecStart = "${radioPackage}/bin/radio ${config.services.icecast.listen.address}:${toString config.services.icecast.listen.port} ${config.services.icecast.mount} 5500";
serviceConfig.User = cfg.user; serviceConfig.User = cfg.user;
serviceConfig.Group = cfg.group; serviceConfig.Group = cfg.group;

View File

@@ -25,9 +25,7 @@
      printing = cups
      printcap name = cups

-      # horrible files
-      veto files = /._*/.DS_Store/ /._*/._.DS_Store/
-      delete veto files = yes
+      hide files = /.nobackup/.DS_Store/._.DS_Store/
    '';

    shares = {
@@ -77,6 +75,13 @@
      };
    };

+    # backups
+    backup.group."samba".paths = [
+      config.services.samba.shares.googlebot.path
+      config.services.samba.shares.cris.path
+      config.services.samba.shares.public.path
+    ];
+
    # Windows discovery of samba server
    services.samba-wsdd = {
      enable = true;
@@ -92,7 +97,7 @@
    # Printer discovery
    # (is this needed?)
    services.avahi.enable = true;
-    services.avahi.nssmdns = true;
+    services.avahi.nssmdns4 = true;

    # printer sharing
    systemd.tmpfiles.rules = [
@@ -110,6 +115,6 @@
    # samba user for share
    users.users.cris.isSystemUser = true;
    users.users.cris.group = "cris";
-    users.groups.cris = {};
+    users.groups.cris = { };
  };
}

common/server/searx.nix Normal file (30 lines)
View File

@@ -0,0 +1,30 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.searx;
in
{
config = lib.mkIf cfg.enable {
services.searx = {
environmentFile = "/run/agenix/searx";
settings = {
server.port = 43254;
server.secret_key = "@SEARX_SECRET_KEY@";
engines = [{
name = "wolframalpha";
shortcut = "wa";
api_key = "@WOLFRAM_API_KEY@";
engine = "wolframalpha_api";
}];
};
};
services.nginx.virtualHosts."search.neet.space" = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString config.services.searx.settings.server.port}";
};
};
age.secrets.searx.file = ../../secrets/searx.age;
};
}
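The @SEARX_SECRET_KEY@ and @WOLFRAM_API_KEY@ placeholders above are substituted from environmentFile at runtime, so the agenix-managed /run/agenix/searx file is expected to contain something like the following (values are placeholders):

  SEARX_SECRET_KEY=some-long-random-string
  WOLFRAM_API_KEY=XXXXXX-XXXXXXXXXX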

View File

@@ -2,7 +2,8 @@
let let
cfg = config.services.thelounge; cfg = config.services.thelounge;
in { in
{
options.services.thelounge = { options.services.thelounge = {
fileUploadBaseUrl = lib.mkOption { fileUploadBaseUrl = lib.mkOption {
type = lib.types.str; type = lib.types.str;
@@ -28,7 +29,7 @@ in {
reverseProxy = true; reverseProxy = true;
maxHistory = -1; maxHistory = -1;
https.enable = false; https.enable = false;
# theme = "thelounge-theme-solarized"; # theme = "thelounge-theme-solarized";
prefetch = false; prefetch = false;
prefetchStorage = false; prefetchStorage = false;
fileUpload = { fileUpload = {
@@ -42,6 +43,10 @@ in {
}; };
}; };
backup.group."thelounge".paths = [
"/var/lib/thelounge/"
];
# the lounge client # the lounge client
services.nginx.virtualHosts.${cfg.host} = { services.nginx.virtualHosts.${cfg.host} = {
enableACME = true; enableACME = true;

View File

@@ -15,14 +15,14 @@ let
in in
{ {
networking.firewall.allowedUDPPorts = [ rtp-port ]; networking.firewall.allowedUDPPorts = [ rtp-port ];
networking.firewall.allowedTCPPortRanges = [ { networking.firewall.allowedTCPPortRanges = [{
from = webrtc-peer-lower-port; from = webrtc-peer-lower-port;
to = webrtc-peer-upper-port; to = webrtc-peer-upper-port;
} ]; }];
networking.firewall.allowedUDPPortRanges = [ { networking.firewall.allowedUDPPortRanges = [{
from = webrtc-peer-lower-port; from = webrtc-peer-lower-port;
to = webrtc-peer-upper-port; to = webrtc-peer-upper-port;
} ]; }];
virtualisation.docker.enable = true; virtualisation.docker.enable = true;
@@ -49,12 +49,12 @@ in
ports = [ ports = [
"${toStr ingest-port}:8084" "${toStr ingest-port}:8084"
]; ];
# imageFile = pkgs.dockerTools.pullImage { # imageFile = pkgs.dockerTools.pullImage {
# imageName = "projectlightspeed/ingest"; # imageName = "projectlightspeed/ingest";
# finalImageTag = "version-0.1.4"; # finalImageTag = "version-0.1.4";
# imageDigest = "sha256:9fc51833b7c27a76d26e40f092b9cec1ac1c4bfebe452e94ad3269f1f73ff2fc"; # imageDigest = "sha256:9fc51833b7c27a76d26e40f092b9cec1ac1c4bfebe452e94ad3269f1f73ff2fc";
# sha256 = "19kxl02x0a3i6hlnsfcm49hl6qxnq2f3hfmyv1v8qdaz58f35kd5"; # sha256 = "19kxl02x0a3i6hlnsfcm49hl6qxnq2f3hfmyv1v8qdaz58f35kd5";
# }; # };
}; };
"lightspeed-react" = { "lightspeed-react" = {
workdir = "/var/lib/lightspeed-react"; workdir = "/var/lib/lightspeed-react";
@@ -62,12 +62,12 @@ in
ports = [ ports = [
"${toStr web-port}:80" "${toStr web-port}:80"
]; ];
# imageFile = pkgs.dockerTools.pullImage { # imageFile = pkgs.dockerTools.pullImage {
# imageName = "projectlightspeed/react"; # imageName = "projectlightspeed/react";
# finalImageTag = "version-0.1.3"; # finalImageTag = "version-0.1.3";
# imageDigest = "sha256:b7c58425f1593f7b4304726b57aa399b6e216e55af9c0962c5c19333fae638b6"; # imageDigest = "sha256:b7c58425f1593f7b4304726b57aa399b6e216e55af9c0962c5c19333fae638b6";
# sha256 = "0d2jh7mr20h7dxgsp7ml7cw2qd4m8ja9rj75dpy59zyb6v0bn7js"; # sha256 = "0d2jh7mr20h7dxgsp7ml7cw2qd4m8ja9rj75dpy59zyb6v0bn7js";
# }; # };
}; };
"lightspeed-webrtc" = { "lightspeed-webrtc" = {
workdir = "/var/lib/lightspeed-webrtc"; workdir = "/var/lib/lightspeed-webrtc";
@@ -79,15 +79,18 @@ in
"${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}:${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}/udp" "${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}:${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}/udp"
]; ];
cmd = [ cmd = [
"lightspeed-webrtc" "--addr=0.0.0.0" "--ip=${domain}" "lightspeed-webrtc"
"--ports=${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}" "run" "--addr=0.0.0.0"
"--ip=${domain}"
"--ports=${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}"
"run"
]; ];
# imageFile = pkgs.dockerTools.pullImage { # imageFile = pkgs.dockerTools.pullImage {
# imageName = "projectlightspeed/webrtc"; # imageName = "projectlightspeed/webrtc";
# finalImageTag = "version-0.1.2"; # finalImageTag = "version-0.1.2";
# imageDigest = "sha256:ddf8b3dd294485529ec11d1234a3fc38e365a53c4738998c6bc2c6930be45ecf"; # imageDigest = "sha256:ddf8b3dd294485529ec11d1234a3fc38e365a53c4738998c6bc2c6930be45ecf";
# sha256 = "1bdy4ak99fjdphj5bsk8rp13xxmbqdhfyfab14drbyffivg9ad2i"; # sha256 = "1bdy4ak99fjdphj5bsk8rp13xxmbqdhfyfab14drbyffivg9ad2i";
# }; # };
}; };
}; };
}; };

View File

@@ -1,8 +1,8 @@
import ./module.nix ({ name, description, serviceConfig }: import ./module.nix ({ name, description, serviceConfig }:
{ {
systemd.user.services.${name} = { systemd.user.services.${name} = {
inherit description serviceConfig; inherit description serviceConfig;
wantedBy = [ "default.target" ]; wantedBy = [ "default.target" ];
}; };
}) })

View File

@@ -1,15 +1,15 @@
import ./module.nix ({ name, description, serviceConfig }: import ./module.nix ({ name, description, serviceConfig }:
{ {
systemd.user.services.${name} = { systemd.user.services.${name} = {
Unit = { Unit = {
Description = description; Description = description;
}; };
Service = serviceConfig; Service = serviceConfig;
Install = { Install = {
WantedBy = [ "default.target" ]; WantedBy = [ "default.target" ];
};
}; };
}; })
})

View File

@@ -2,7 +2,8 @@
let let
cfg = config.services.zerobin; cfg = config.services.zerobin;
in { in
{
options.services.zerobin = { options.services.zerobin = {
host = lib.mkOption { host = lib.mkOption {
type = lib.types.str; type = lib.types.str;

View File

@@ -1,36 +1,28 @@
-{ config, pkgs, ... }:
+{ config, lib, pkgs, ... }:

# Improvements to the default shell
-# - use nix-locate for command-not-found
+# - use nix-index for command-not-found
# - disable fish's annoying greeting message
# - add some handy shell commands

-let
-  nix-locate = config.inputs.nix-locate.packages.${config.currentSystem}.default;
-in {
-  programs.command-not-found.enable = false;
-  environment.systemPackages = [
-    nix-locate
+{
+  environment.systemPackages = with pkgs; [
+    comma
  ];

+  # nix-index
+  programs.nix-index.enable = true;
+  programs.nix-index.enableFishIntegration = true;
+  programs.command-not-found.enable = false;
+
  programs.fish = {
    enable = true;
-    shellInit = let
-      wrapper = pkgs.writeScript "command-not-found" ''
-        #!${pkgs.bash}/bin/bash
-        source ${nix-locate}/etc/profile.d/command-not-found.sh
-        command_not_found_handle "$@"
-      '';
-    in ''
-      # use nix-locate for command-not-found functionality
-      function __fish_command_not_found_handler --on-event fish_command_not_found
-        ${wrapper} $argv
-      end
-
+    shellInit = ''
      # disable annoying fish shell greeting
      set fish_greeting
+
+      alias sudo="doas"
    '';
  };
@@ -38,9 +30,23 @@ in {
    myip = "dig +short myip.opendns.com @resolver1.opendns.com";

    # https://linuxreviews.org/HOWTO_Test_Disk_I/O_Performance
-    io_seq_read = "nix run nixpkgs#fio -- --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
-    io_seq_write = "nix run nixpkgs#fio -- --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
-    io_rand_read = "nix run nixpkgs#fio -- --name TEST --eta-newline=5s --filename=temp.file --rw=randread --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting; rm temp.file";
-    io_rand_write = "nix run nixpkgs#fio -- --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
+    io_seq_read = "${pkgs.fio}/bin/fio --name TEST --eta-newline=5s --filename=temp.file --rw=read --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
+    io_seq_write = "${pkgs.fio}/bin/fio --name TEST --eta-newline=5s --filename=temp.file --rw=write --size=2g --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
+    io_rand_read = "${pkgs.fio}/bin/fio --name TEST --eta-newline=5s --filename=temp.file --rw=randread --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=32 --runtime=60 --group_reporting; rm temp.file";
+    io_rand_write = "${pkgs.fio}/bin/fio --name TEST --eta-newline=5s --filename=temp.file --rw=randrw --size=2g --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting; rm temp.file";
  };

+  nixpkgs.overlays = [
+    (final: prev: {
+      # comma uses the "nix-index" package built into nixpkgs by default.
+      # That package doesn't use the prebuilt nix-index database so it needs to be changed.
+      comma = prev.comma.overrideAttrs (old: {
+        postInstall = ''
+          wrapProgram $out/bin/comma \
+            --prefix PATH : ${lib.makeBinPath [ prev.fzy config.programs.nix-index.package ]}
+          ln -s $out/bin/comma $out/bin/,
+        '';
+      });
+    })
+  ];
}

View File

@@ -1,65 +1,38 @@
rec { { config, lib, pkgs, ... }:
users = [
{
programs.ssh.knownHosts = lib.filterAttrs (n: v: v != null) (lib.concatMapAttrs
(host: cfg: {
${host} = {
hostNames = cfg.hostNames;
publicKey = cfg.hostKey;
};
"${host}-remote-unlock" =
if cfg.remoteUnlock != null then {
hostNames = builtins.filter (h: h != null) [ cfg.remoteUnlock.clearnetHost cfg.remoteUnlock.onionHost ];
publicKey = cfg.remoteUnlock.hostKey;
} else null;
})
config.machines.hosts);
# prebuilt cmds for easy ssh LUKS unlock
environment.shellAliases =
let
unlockHosts = unlockType: lib.concatMapAttrs
(host: cfg:
if cfg.remoteUnlock != null && cfg.remoteUnlock.${unlockType} != null then {
${host} = cfg.remoteUnlock.${unlockType};
} else { })
config.machines.hosts;
in
lib.concatMapAttrs (host: addr: { "unlock-over-tor_${host}" = "torsocks ssh root@${addr}"; }) (unlockHosts "onionHost")
//
lib.concatMapAttrs (host: addr: { "unlock_${host}" = "ssh root@${addr}"; }) (unlockHosts "clearnetHost");
# TODO: Old ssh keys I will remove some day...
machines.ssh.userKeys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMVR/R3ZOsv7TZbICGBCHdjh1NDT8SnswUyINeJOC7QG" "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMVR/R3ZOsv7TZbICGBCHdjh1NDT8SnswUyINeJOC7QG"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE0dcqL/FhHmv+a1iz3f9LJ48xubO7MZHy35rW9SZOYM" "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE0dcqL/FhHmv+a1iz3f9LJ48xubO7MZHy35rW9SZOYM"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO0VFnn3+Mh0nWeN92jov81qNE9fpzTAHYBphNoY7HUx" # reg
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHSkKiRUUmnErOKGx81nyge/9KqjkPh8BfDk0D3oP586" # nat "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHSkKiRUUmnErOKGx81nyge/9KqjkPh8BfDk0D3oP586" # nat
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFeTK1iARlNIKP/DS8/ObBm9yUM/3L1Ub4XI5A2r9OzP" # ray
];
system = {
liza = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDY/pNyWedEfU7Tq9ikGbriRuF1ZWkHhegGS17L0Vcdl";
ponyo = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMBBlTAIp38RhErU1wNNV5MBeb+WGH0mhF/dxh5RsAXN";
ponyo-unlock = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC9LQuuImgWlkjDhEEIbM1wOd+HqRv1RxvYZuLXPSdRi";
ray = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDQM8hwKRgl8cZj7UVYATSLYu4LhG7I0WFJ9m2iWowiB";
s0 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwiXcUFtAvZCayhu4+AIcF+Ktrdgv9ee/mXSIhJbp4q";
n1 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPWlhd1Oid5Xf2zdcBrcdrR0TlhObutwcJ8piobRTpRt";
n2 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ7bRiRutnI7Bmyt/I238E3Fp5DqiClIXiVibsccipOr";
n3 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB+rJEaRrFDGirQC2UoWQkmpzLg4qgTjGJgVqiipWiU5";
n4 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINYm2ROIfCeGz6QtDwqAmcj2DX9tq2CZn0eLhskdvB4Z";
n5 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE5Qhvwq3PiHEKf+2/4w5ZJkSMNzFLhIRrPOR98m7wW4";
n6 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID/P/pa9+qhKAPfvvd8xSO2komJqDW0M1nCK7ZrP6PO7";
n7 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPtOlOvTlMX2mxPaXDJ6VlMe5rmroUXpKmJVNxgV32xL";
};
# groups
systems = with system; [
liza
ponyo
ray
s0
n1
n2
n3
n4
n5
n6
n7
];
personal = with system; [
ray
];
servers = with system; [
liza
ponyo
s0
n1
n2
n3
n4
n5
n6
n7
];
compute = with system; [
n1
n2
n3
n4
n5
n6
n7
];
storage = with system; [
s0
]; ];
} }

flake.lock generated (198 lines)
View File

@@ -3,45 +3,24 @@
"agenix": {
"inputs": {
"darwin": "darwin",
"home-manager": "home-manager",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1675176355,
"narHash": "sha256-Qjxh5cmN56siY97mzmBLI1+cdjXSPqmfPVsKxBvHmwI=",
"owner": "ryantm",
"repo": "agenix",
"rev": "b7ffcfe77f817d9ee992640ba1f270718d197f28",
"type": "github"
},
"original": {
"owner": "ryantm",
"repo": "agenix",
"type": "github"
}
},
"archivebox": {
"inputs": {
"flake-utils": [
"flake-utils"
],
"nixpkgs": [
"systems": "systems"
"nixpkgs"
]
},
"locked": {
"lastModified": 1648612759,
"lastModified": 1716561646,
"narHash": "sha256-SJwlpD2Wz3zFoX2mIYCQfwIOYHaOdeiWGFeDXsLGM84=",
"narHash": "sha256-UIGtLO89RxKt7RF2iEgPikSdU53r6v/6WYB0RW3k89I=",
"ref": "refs/heads/master",
"owner": "ryantm",
"rev": "39d338b9b24159d8ef3309eecc0d32a2a9f102b5",
"repo": "agenix",
"revCount": 2,
"rev": "c2fc0762bbe8feb06a2e59a364fa81b3a57671c9",
"type": "git",
"type": "github"
"url": "https://git.neet.dev/zuckerberg/archivebox.git"
},
"original": {
"type": "git",
"owner": "ryantm",
"url": "https://git.neet.dev/zuckerberg/archivebox.git"
"repo": "agenix",
"type": "github"
}
},
"blobs": {
@@ -91,11 +70,11 @@
]
},
"locked": {
"lastModified": 1673295039,
"lastModified": 1700795494,
"narHash": "sha256-AsdYgE8/GPwcelGgrntlijMg4t3hLFJFCRF3tL5WVjA=",
"narHash": "sha256-gzGLZSiOhf155FW7262kdHo2YDeugp3VuIFb4/GGng0=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "87b9d090ad39b25b2400029c64825fc2a8868943",
"rev": "4b9b83d5a92e8c1fbfd8eb27eda375908c11ec4d",
"type": "github"
},
"original": {
@@ -105,14 +84,39 @@
"type": "github"
}
},
"deploy-rs": {
"inputs": {
"flake-compat": "flake-compat",
"nixpkgs": [
"nixpkgs"
],
"utils": [
"simple-nixos-mailserver",
"utils"
]
},
"locked": {
"lastModified": 1715699772,
"narHash": "sha256-sKhqIgucN5sI/7UQgBwsonzR4fONjfMr9OcHK/vPits=",
"owner": "serokell",
"repo": "deploy-rs",
"rev": "b3ea6f333f9057b77efd9091119ba67089399ced",
"type": "github"
},
"original": {
"owner": "serokell",
"repo": "deploy-rs",
"type": "github"
}
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1668681692,
"lastModified": 1696426674,
"narHash": "sha256-Ht91NGdewz8IQLtWZ9LCeNXMSXHUss+9COoqu6JLmXU=",
"narHash": "sha256-kvjfFW7WAETZlt09AgDn1MrtKzP7t90Vf7vypd3OL1U=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "009399224d5e398d03b22badca40a37ac85412a1",
"rev": "0f9255e01c2351cc7d116c072cb317785dd33b33",
"type": "github"
},
"original": {
@@ -122,12 +126,15 @@
}
},
"flake-utils": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1667395993,
"lastModified": 1710146030,
"narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
"narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
"rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a",
"type": "github"
},
"original": {
@@ -136,39 +143,75 @@
"type": "github"
}
},
"nix-locate": {
"home-manager": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1703113217,
"narHash": "sha256-7ulcXOk63TIT2lVDSExj7XzFx09LpdSAPtvgtM7yQPE=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "3bfaacf46133c037bb356193bd2f1765d9dc82c1",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "home-manager",
"type": "github"
}
},
"nix-index-database": {
"inputs": {
"flake-compat": "flake-compat",
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1673969751,
"lastModified": 1716772633,
"narHash": "sha256-U6aYz3lqZ4NVEGEWiti1i0FyqEo4bUjnTAnA73DPnNU=",
"narHash": "sha256-Idcye44UW+EgjbjCoklf2IDF+XrehV6CVYvxR1omst4=",
"owner": "bennofs",
"owner": "Mic92",
"repo": "nix-index",
"repo": "nix-index-database",
"rev": "5f98881b1ed27ab6656e6d71b534f88430f6823a",
"rev": "ff80cb4a11bb87f3ce8459be6f16a25ac86eb2ac",
"type": "github"
},
"original": {
"owner": "bennofs",
"owner": "Mic92",
"repo": "nix-index",
"repo": "nix-index-database",
"type": "github"
}
},
"nixos-hardware": {
"locked": {
"lastModified": 1717248095,
"narHash": "sha256-e8X2eWjAHJQT82AAN+mCI0B68cIDBJpqJ156+VRrFO0=",
"owner": "NixOS",
"repo": "nixos-hardware",
"rev": "7b49d3967613d9aacac5b340ef158d493906ba79",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "master",
"repo": "nixos-hardware",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1672580127,
"lastModified": 1717144377,
"narHash": "sha256-3lW3xZslREhJogoOkjeZtlBtvFMyxHku7I/9IVehhT8=",
"narHash": "sha256-F/TKWETwB5RaR8owkPPi+SPJh83AQsm6KrQAlJ8v/uA=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "0874168639713f547c05947c76124f78441ea46c",
"rev": "805a384895c696f802a9bf5bf4720f37385df547",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-22.05",
"ref": "nixos-24.05",
"repo": "nixpkgs",
"type": "github"
}
@@ -188,19 +231,19 @@
"type": "indirect"
}
},
"nixpkgs-unstable": {
"nixpkgs-frigate": {
"locked": {
"lastModified": 1675835843,
"lastModified": 1695825837,
"narHash": "sha256-y1dSCQPcof4CWzRYRqDj4qZzbBl+raVPAko5Prdil28=",
"narHash": "sha256-4Ne11kNRnQsmSJCRSSNkFRSnHC4Y5gPDBIQGjjPfJiU=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "32f914af34f126f54b45e482fb2da4ae78f3095f",
"rev": "5cfafa12d57374f48bcc36fda3274ada276cf69e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "master",
"repo": "nixpkgs",
"rev": "5cfafa12d57374f48bcc36fda3274ada276cf69e",
"type": "github"
}
},
@@ -248,12 +291,13 @@
"root": {
"inputs": {
"agenix": "agenix",
"archivebox": "archivebox",
"dailybuild_modules": "dailybuild_modules",
"deploy-rs": "deploy-rs",
"flake-utils": "flake-utils",
"nix-locate": "nix-locate",
"nix-index-database": "nix-index-database",
"nixos-hardware": "nixos-hardware",
"nixpkgs": "nixpkgs",
"nixpkgs-unstable": "nixpkgs-unstable",
"nixpkgs-frigate": "nixpkgs-frigate",
"radio": "radio",
"radio-web": "radio-web",
"simple-nixos-mailserver": "simple-nixos-mailserver"
@@ -283,6 +327,36 @@
"type": "gitlab"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"utils": {
"locked": {
"lastModified": 1605370193,

183
flake.nix
View File

@@ -1,12 +1,11 @@
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-22.05";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
nixpkgs-unstable.url = "github:NixOS/nixpkgs/master";
nixpkgs-frigate.url = "github:NixOS/nixpkgs/5cfafa12d57374f48bcc36fda3274ada276cf69e";
flake-utils.url = "github:numtide/flake-utils";
nix-locate.url = "github:bennofs/nix-index";
nixos-hardware.url = "github:NixOS/nixos-hardware/master";
nix-locate.inputs.nixpkgs.follows = "nixpkgs";
# mail server
simple-nixos-mailserver.url = "gitlab:simple-nixos-mailserver/nixos-mailserver/nixos-22.05";
@@ -28,77 +27,121 @@
dailybuild_modules.inputs.nixpkgs.follows = "nixpkgs";
dailybuild_modules.inputs.flake-utils.follows = "flake-utils";
# archivebox
# nixos config deployment
archivebox.url = "git+https://git.neet.dev/zuckerberg/archivebox.git";
deploy-rs.url = "github:serokell/deploy-rs";
archivebox.inputs.nixpkgs.follows = "nixpkgs";
deploy-rs.inputs.nixpkgs.follows = "nixpkgs";
archivebox.inputs.flake-utils.follows = "flake-utils";
deploy-rs.inputs.utils.follows = "simple-nixos-mailserver/utils";
# prebuilt nix-index database
nix-index-database.url = "github:Mic92/nix-index-database";
nix-index-database.inputs.nixpkgs.follows = "nixpkgs";
};
outputs = { self, nixpkgs, nixpkgs-unstable, ... }@inputs: {
outputs = { self, nixpkgs, ... }@inputs:
nixosConfigurations =
let
modules = system: [
machines = (import ./common/machine-info/moduleless.nix
./common
{
inputs.simple-nixos-mailserver.nixosModule
inherit nixpkgs;
inputs.agenix.nixosModules.default
assertionsModule = "${nixpkgs}/nixos/modules/misc/assertions.nix";
inputs.dailybuild_modules.nixosModule
}).machines.hosts;
inputs.archivebox.nixosModule
({ lib, ... }: {
config.environment.systemPackages = [
inputs.agenix.packages.${system}.agenix
];
# because nixos specialArgs doesn't work for containers... need to pass in inputs a different way
options.inputs = lib.mkOption { default = inputs; };
options.currentSystem = lib.mkOption { default = system; };
})
];
mkSystem = system: nixpkgs: path:
let
allModules = modules system;
in nixpkgs.lib.nixosSystem {
inherit system;
modules = allModules ++ [path];
specialArgs = {
inherit allModules;
};
};
in
{
"reg" = mkSystem "x86_64-linux" nixpkgs ./machines/reg/configuration.nix;
nixosConfigurations =
"ray" = mkSystem "x86_64-linux" nixpkgs-unstable ./machines/ray/configuration.nix;
let
"nat" = mkSystem "aarch64-linux" nixpkgs ./machines/nat/configuration.nix;
modules = system: hostname: with inputs; [
"liza" = mkSystem "x86_64-linux" nixpkgs ./machines/liza/configuration.nix;
./common
"ponyo" = mkSystem "x86_64-linux" nixpkgs ./machines/ponyo/configuration.nix;
simple-nixos-mailserver.nixosModule
"s0" = mkSystem "aarch64-linux" nixpkgs-unstable ./machines/storage/s0/configuration.nix;
agenix.nixosModules.default
"n1" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n1/configuration.nix;
dailybuild_modules.nixosModule
"n2" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n2/configuration.nix;
nix-index-database.nixosModules.nix-index
"n3" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n3/configuration.nix;
self.nixosModules.kernel-modules
"n4" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n4/configuration.nix;
({ lib, ... }: {
"n5" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n5/configuration.nix;
config = {
"n6" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n6/configuration.nix;
nixpkgs.overlays = [ self.overlays.default ];
"n7" = mkSystem "aarch64-linux" nixpkgs ./machines/compute/n7/configuration.nix;
};
packages = let
environment.systemPackages = [
mkKexec = system:
agenix.packages.${system}.agenix
(nixpkgs.lib.nixosSystem {
];
inherit system;
modules = [ ./machines/ephemeral/kexec.nix ];
networking.hostName = hostname;
}).config.system.build.kexec_tarball;
};
mkIso = system:
(nixpkgs.lib.nixosSystem {
# because nixos specialArgs doesn't work for containers... need to pass in inputs a different way
inherit system;
options.inputs = lib.mkOption { default = inputs; };
modules = [ ./machines/ephemeral/iso.nix ];
options.currentSystem = lib.mkOption { default = system; };
}).config.system.build.isoImage;
})
in {
];
"x86_64-linux"."kexec" = mkKexec "x86_64-linux";
"x86_64-linux"."iso" = mkIso "x86_64-linux";
mkSystem = system: nixpkgs: path: hostname:
"aarch64-linux"."kexec" = mkKexec "aarch64-linux";
let
"aarch64-linux"."iso" = mkIso "aarch64-linux";
allModules = modules system hostname;
# allow patching nixpkgs, remove this hack once this is solved: https://github.com/NixOS/nix/issues/3920
patchedNixpkgsSrc = nixpkgs.legacyPackages.${system}.applyPatches {
name = "nixpkgs-patched";
src = nixpkgs;
patches = [
./patches/gamepadui.patch
];
};
patchedNixpkgs = nixpkgs.lib.fix (self: (import "${patchedNixpkgsSrc}/flake.nix").outputs { self = nixpkgs; });
in
patchedNixpkgs.lib.nixosSystem {
inherit system;
modules = allModules ++ [ path ];
specialArgs = {
inherit allModules;
lib = self.lib;
nixos-hardware = inputs.nixos-hardware;
};
};
in
nixpkgs.lib.mapAttrs
(hostname: cfg:
mkSystem cfg.arch nixpkgs cfg.configurationPath hostname)
machines;
packages =
let
mkKexec = system:
(nixpkgs.lib.nixosSystem {
inherit system;
modules = [ ./machines/ephemeral/kexec.nix ];
}).config.system.build.kexec_tarball;
mkIso = system:
(nixpkgs.lib.nixosSystem {
inherit system;
modules = [ ./machines/ephemeral/iso.nix ];
}).config.system.build.isoImage;
in
{
"x86_64-linux"."kexec" = mkKexec "x86_64-linux";
"x86_64-linux"."iso" = mkIso "x86_64-linux";
"aarch64-linux"."kexec" = mkKexec "aarch64-linux";
"aarch64-linux"."iso" = mkIso "aarch64-linux";
};
overlays.default = import ./overlays { inherit inputs; };
nixosModules.kernel-modules = import ./overlays/kernel-modules;
deploy.nodes =
let
mkDeploy = configName: arch: hostname: {
inherit hostname;
magicRollback = false;
sshUser = "root";
profiles.system.path = inputs.deploy-rs.lib.${arch}.activate.nixos self.nixosConfigurations.${configName};
};
in
nixpkgs.lib.mapAttrs
(hostname: cfg:
mkDeploy hostname cfg.arch (builtins.head cfg.hostNames))
machines;
checks = builtins.mapAttrs (system: deployLib: deployLib.deployChecks self.deploy) inputs.deploy-rs.lib;
lib = nixpkgs.lib.extend (final: prev: import ./lib { lib = nixpkgs.lib; });
};
};
}
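For context, mkSystem and mkDeploy above iterate over the attrset returned by common/machine-info/moduleless.nix; one entry has roughly this shape (a sketch only: field names are taken from the machines/*/properties.nix files in this diff, while configurationPath is assumed to be filled in by the machine-info module, which is not shown here):

  {
    howl = {
      arch = "x86_64-linux";
      hostNames = [ "howl" ];
      systemRoles = [ "personal" ];
      configurationPath = ./machines/howl;
    };
  }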

56
lib/default.nix Normal file
View File

@@ -0,0 +1,56 @@
{ lib, ... }:
with lib;
{
# Passthrough trace for debugging
pTrace = v: traceSeq v v;
# find the total sum of an int list
sum = foldr (x: y: x + y) 0;
# splits a two-element list into two arguments and passes them to a function
splitPair = f: pair: f (head pair) (last pair);
# Finds the max value in a list
maxList = foldr max 0;
# Sorts an int list, greatest value first
sortList = sort (x: y: x > y);
# Cuts a list in half and returns the two parts in a list
cutInHalf = l: [ (take (length l / 2) l) (drop (length l / 2) l) ];
# Splits a list into a list of lists with length cnt
chunksOf = cnt: l:
if length l > 0 then
[ (take cnt l) ] ++ chunksOf cnt (drop cnt l)
else [ ];
# same as intersectLists but takes a list of lists to intersect instead of just two
intersectManyLists = ll: foldr intersectLists (head ll) ll;
# converts a boolean to an int (C style)
boolToInt = b: if b then 1 else 0;
# drops the last element of a list
dropLast = l: take (length l - 1) l;
# transposes a matrix
transpose = ll:
let
outerSize = length ll;
innerSize = length (elemAt ll 0);
in
genList (i: genList (j: elemAt (elemAt ll j) i) outerSize) innerSize;
# attrset recursiveUpdate but for a list of attrsets
combineAttrs = foldl recursiveUpdate { };
# visits every attrset element of an attrset recursively
# and accumulates the result of every visit in a flat list
recurisveVisitAttrs = f: set:
let
visitor = n: v:
if isAttrs v then [ (f n v) ] ++ recurisveVisitAttrs f v
else [ (f n v) ];
in
concatLists (map (name: visitor name set.${name}) (attrNames set));
# merges two lists of the same size (similar to map but both lists are inputs per iteration)
mergeLists = f: a: imap0 (i: f (elemAt a i));
map2D = f: ll:
let
outerSize = length ll;
innerSize = length (elemAt ll 0);
getElem = x: y: elemAt (elemAt ll y) x;
in
genList (y: genList (x: f x y (getElem x y)) innerSize) outerSize;
}
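A few example evaluations of these helpers (a sketch; results shown in comments, assuming the file is imported with nixpkgs' lib as below):

  let
    lib = (import <nixpkgs> { }).lib;
    mylib = import ./lib/default.nix { inherit lib; };
  in
  [
    (mylib.sum [ 1 2 3 ])                          # => 6
    (mylib.cutInHalf [ 1 2 3 4 5 ])                # => [ [ 1 2 ] [ 3 4 5 ] ]
    (mylib.chunksOf 2 [ 1 2 3 4 5 ])               # => [ [ 1 2 ] [ 3 4 ] [ 5 ] ]
    (mylib.transpose [ [ 1 2 ] [ 3 4 ] [ 5 6 ] ])  # => [ [ 1 3 5 ] [ 2 4 6 ] ]
  ]

In the flake these functions are exposed by extending nixpkgs.lib (see flake.nix above), so ordinary modules reach them through the regular lib argument.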

View File

@@ -1,4 +0,0 @@
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p bash
nix flake update --commit-lock-file

View File

@@ -1,24 +0,0 @@
{ config, ... }:
{
# NixOS wants to enable GRUB by default
boot.loader.grub.enable = false;
# Enables the generation of /boot/extlinux/extlinux.conf
boot.loader.generic-extlinux-compatible.enable = true;
fileSystems = {
"/" = {
device = "/dev/disk/by-label/NIXOS_SD";
fsType = "ext4";
};
};
system.autoUpgrade.enable = true;
networking.interfaces.eth0.useDHCP = true;
hardware.deviceTree.enable = true;
hardware.deviceTree.overlays = [
./sopine-baseboard-ethernet.dtbo # fix pine64 clusterboard ethernet
];
}

View File

@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n1";
}

View File

@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n2";
}

View File

@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n3";
}

View File

@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n4";
}

View File

@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n5";
}

View File

@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n6";
}

View File

@@ -1,9 +0,0 @@
{ config, ... }:
{
imports = [
../common.nix
];
networking.hostName = "n7";
}

View File

@@ -1,15 +0,0 @@
/dts-v1/;
/ {
model = "SoPine with baseboard";
compatible = "pine64,sopine-baseboard\0pine64,sopine\0allwinner,sun50i-a64";
fragment@0 {
/* target = <&ethernet@1c30000>; */
target-path = "/soc/ethernet@1c30000";
__overlay__ {
allwinner,tx-delay-ps = <500>;
};
};
};

View File

@@ -1,28 +1,53 @@
{ pkgs, ... }:
{ config, pkgs, modulesPath, ... }:
{
imports = [
(modulesPath + "/installer/cd-dvd/channel.nix")
../../common/machine-info
../../common/ssh.nix
];
boot.initrd.availableKernelModules = [ "ata_piix" "uhci_hcd" "e1000" "e1000e" "virtio_pci" "r8169" ];
boot.kernelParams = [
"panic=30" "boot.panic_on_fail" # reboot the machine upon fatal boot issues
"panic=30"
"console=ttyS0" # enable serial console
"boot.panic_on_fail" # reboot the machine upon fatal boot issues
"console=ttyS0,115200" # enable serial console
"console=tty1"
];
boot.kernel.sysctl."vm.overcommit_memory" = "1";
boot.kernelPackages = pkgs.linuxPackages_latest;
system.stateVersion = "21.11";
# hardware.enableAllFirmware = true;
# nixpkgs.config.allowUnfree = true;
environment.systemPackages = with pkgs; [
cryptsetup
btrfs-progs
git
git-lfs
wget
htop
dnsutils
pciutils
usbutils
lm_sensors
]; ];
environment.variables.GC_INITIAL_HEAP_SIZE = "1M"; environment.variables.GC_INITIAL_HEAP_SIZE = "1M";
networking.useDHCP = true; networking.useDHCP = true;
services.openssh = { services.openssh = {
enable = true; enable = true;
challengeResponseAuthentication = false; settings = {
passwordAuthentication = false; KbdInteractiveAuthentication = false;
PasswordAuthentication = false;
};
}; };
services.getty.autologinUser = "root"; services.getty.autologinUser = "root";
users.users.root.openssh.authorizedKeys.keys = (import ../common/ssh.nix).users; users.users.root.openssh.authorizedKeys.keys = config.machines.ssh.userKeys;
} }

View File

@@ -0,0 +1,57 @@
{ config, modulesPath, pkgs, lib, ... }:
let
pinecube-uboot = pkgs.buildUBoot {
defconfig = "pinecube_defconfig";
extraMeta.platforms = [ "armv7l-linux" ];
filesToInstall = [ "u-boot-sunxi-with-spl.bin" ];
};
in
{
imports = [
(modulesPath + "/installer/sd-card/sd-image.nix")
./minimal.nix
];
sdImage.populateFirmwareCommands = "";
sdImage.populateRootCommands = ''
mkdir -p ./files/boot
${config.boot.loader.generic-extlinux-compatible.populateCmd} -c ${config.system.build.toplevel} -d ./files/boot
'';
sdImage.postBuildCommands = ''
dd if=${pinecube-uboot}/u-boot-sunxi-with-spl.bin of=$img bs=1024 seek=8 conv=notrunc
'';
###
networking.hostName = "pinecube";
boot.loader.grub.enable = false;
boot.loader.generic-extlinux-compatible.enable = true;
boot.consoleLogLevel = 7;
# cma is 64M by default, which is way too much; we can't even unpack the initrd
boot.kernelParams = [ "console=ttyS0,115200n8" "cma=32M" ];
boot.kernelModules = [ "spi-nor" ]; # Not sure why this doesn't autoload. Provides SPI NOR at /dev/mtd0
boot.extraModulePackages = [ config.boot.kernelPackages.rtl8189es ];
zramSwap.enable = true; # 128MB is not much to work with
sound.enable = true;
environment.systemPackages = with pkgs; [
ffmpeg
(v4l_utils.override { withGUI = false; })
usbutils
];
services.getty.autologinUser = lib.mkForce "googlebot";
users.users.googlebot = {
isNormalUser = true;
extraGroups = [ "wheel" "networkmanager" "video" ];
openssh.authorizedKeys.keys = config.machines.ssh.userKeys;
};
networking.wireless.enable = true;
}

68
machines/howl/default.nix Normal file
View File

@@ -0,0 +1,68 @@
{ config, pkgs, lib, nixos-hardware, ... }:
{
imports = [
./hardware-configuration.nix
nixos-hardware.nixosModules.framework-13-7040-amd
];
# for luks unlock over tor
services.tor.enable = true;
services.tor.client.enable = true;
# don't use remote builders
nix.distributedBuilds = lib.mkForce false;
services.udev.extraRules = ''
# depthai
SUBSYSTEM=="usb", ATTRS{idVendor}=="03e7", MODE="0666"
# Moonlander
# Rules for Oryx web flashing and live training
KERNEL=="hidraw*", ATTRS{idVendor}=="16c0", MODE="0664", GROUP="plugdev"
KERNEL=="hidraw*", ATTRS{idVendor}=="3297", MODE="0664", GROUP="plugdev"
# Wally Flashing rules for the Moonlander and Planck EZ
SUBSYSTEMS=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="df11", MODE:="0666", SYMLINK+="stm32_dfu"
'';
services.udev.packages = [ pkgs.platformio ];
users.groups.plugdev = {
members = [ "googlebot" ];
};
# virt-manager
virtualisation.libvirtd.enable = true;
programs.dconf.enable = true;
virtualisation.spiceUSBRedirection.enable = true;
environment.systemPackages = with pkgs; [ virt-manager ];
users.users.googlebot.extraGroups = [ "libvirtd" "adbusers" ];
# allow building ARM derivations
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
services.spotifyd.enable = true;
virtualisation.podman.enable = true;
virtualisation.podman.dockerCompat = true;
virtualisation.appvm.enable = true;
virtualisation.appvm.user = "googlebot";
services.mount-samba.enable = true;
de.enable = true;
de.touchpad.enable = true;
networking.firewall.allowedTCPPorts = [
# barrier
24800
];
programs.adb.enable = true;
services.fwupd.enable = true;
# the fingerprint reader has so far been more of a nuisance than a help:
# it makes sddm logins fail most of the time and take several minutes to finish
services.fprintd.enable = false;
}

View File

@@ -0,0 +1,50 @@
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[ (modulesPath + "/installer/scan/not-detected.nix")
];
boot.kernelPackages = pkgs.linuxPackages_latest;
# boot
boot.loader.systemd-boot.enable = true;
boot.initrd.availableKernelModules = [ "nvme" "xhci_pci" "thunderbolt" "usb_storage" "sd_mod" ];
boot.initrd.kernelModules = [ "dm-snapshot" ];
boot.kernelModules = [ "kvm-amd" ];
boot.extraModulePackages = [ ];
# thunderbolt
services.hardware.bolt.enable = true;
# firmware
firmware.x86_64.enable = true;
# disks
remoteLuksUnlock.enable = true;
boot.initrd.luks.devices."enc-pv" = {
device = "/dev/disk/by-uuid/c801586b-f0a2-465c-8dae-532e61b83fee";
allowDiscards = true;
};
fileSystems."/" =
{ device = "/dev/disk/by-uuid/95db6950-a7bc-46cf-9765-3ea675ccf014";
fsType = "btrfs";
};
fileSystems."/boot" =
{ device = "/dev/disk/by-uuid/B087-2C20";
fsType = "vfat";
options = [ "fmask=0022" "dmask=0022" ];
};
swapDevices =
[ { device = "/dev/disk/by-uuid/49fbdf62-eef4-421b-aac3-c93494afd23c"; }
];
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
# networking.interfaces.wlp1s0.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}

View File

@@ -0,0 +1,26 @@
{
hostNames = [
"howl"
];
arch = "x86_64-linux";
systemRoles = [
"personal"
];
hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQi3q8jU6vRruExAL60J7GFO1gS8HsmXVJuKRT4ljrG";
userKeys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKPnLt84bKhUgFxjQf10+Htro9Lo1Pabqm8mGalBUniv"
];
deployKeys = [
# TODO
];
remoteUnlock = {
hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0N80r0Sl2WlJaUqfxZPkOtYyGumFazkIqq7eq3Gd2o";
onionHost = "ll6yjnkh4psmfwmtkmqoutl4gq4elqzbmjxv4s6gpgoavyi3kwhjvnqd.onion";
};
}
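Given this file, the common/ssh.nix module shown earlier derives roughly the following known-host entries for howl (a sketch; since no clearnetHost is set, only the onion address survives the null filter):

  {
    "howl" = {
      hostNames = [ "howl" ];
      publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQi3q8jU6vRruExAL60J7GFO1gS8HsmXVJuKRT4ljrG";
    };
    "howl-remote-unlock" = {
      hostNames = [ "ll6yjnkh4psmfwmtkmqoutl4gq4elqzbmjxv4s6gpgoavyi3kwhjvnqd.onion" ];
      publicKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0N80r0Sl2WlJaUqfxZPkOtYyGumFazkIqq7eq3Gd2o";
    };
  }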

View File

@@ -1,110 +0,0 @@
{ config, pkgs, lib, ... }:
{
imports =[
./hardware-configuration.nix
];
# 5synsrjgvfzywruomjsfvfwhhlgxqhyofkzeqt2eisyijvjvebnu2xyd.onion
firmware.x86_64.enable = true;
bios = {
enable = true;
device = "/dev/sda";
};
luks = {
enable = true;
device.path = "/dev/disk/by-uuid/2f736fba-8a0c-4fb5-8041-c849fb5e1297";
};
system.autoUpgrade.enable = true;
networking.hostName = "liza";
networking.interfaces.enp1s0.useDHCP = true;
mailserver = {
enable = true;
fqdn = "mail.neet.dev";
dkimKeyBits = 2048;
indexDir = "/var/lib/mailindex";
enableManageSieve = true;
fullTextSearch.enable = true;
fullTextSearch.indexAttachments = true;
fullTextSearch.memoryLimit = 500;
domains = [
"neet.space" "neet.dev" "neet.cloud"
"runyan.org" "runyan.rocks"
"thunderhex.com" "tar.ninja"
"bsd.ninja" "bsd.rocks"
];
loginAccounts = {
"jeremy@runyan.org" = {
hashedPasswordFile = "/run/agenix/email-pw";
aliases = [
"@neet.space" "@neet.cloud" "@neet.dev"
"@runyan.org" "@runyan.rocks"
"@thunderhex.com" "@tar.ninja"
"@bsd.ninja" "@bsd.rocks"
];
};
};
rejectRecipients = [
"george@runyan.org"
"joslyn@runyan.org"
"damon@runyan.org"
"jonas@runyan.org"
];
certificateScheme = 3; # use let's encrypt for certs
};
age.secrets.email-pw.file = ../../secrets/email-pw.age;
# sendmail to use xxx@domain instead of xxx@mail.domain
services.postfix.origin = "$mydomain";
# relay sent mail through mailgun
# https://www.howtoforge.com/community/threads/different-smtp-relays-for-different-domains-in-postfix.82711/#post-392620
services.postfix.config = {
smtp_sasl_auth_enable = "yes";
smtp_sasl_security_options = "noanonymous";
smtp_sasl_password_maps = "hash:/var/lib/postfix/conf/sasl_relay_passwd";
smtp_use_tls = "yes";
sender_dependent_relayhost_maps = "hash:/var/lib/postfix/conf/sender_relay";
smtp_sender_dependent_authentication = "yes";
};
services.postfix.mapFiles.sender_relay = let
relayHost = "[smtp.mailgun.org]:587";
in pkgs.writeText "sender_relay" ''
@neet.space ${relayHost}
@neet.cloud ${relayHost}
@neet.dev ${relayHost}
@runyan.org ${relayHost}
@runyan.rocks ${relayHost}
@thunderhex.com ${relayHost}
@tar.ninja ${relayHost}
@bsd.ninja ${relayHost}
@bsd.rocks ${relayHost}
'';
services.postfix.mapFiles.sasl_relay_passwd = "/run/agenix/sasl_relay_passwd";
age.secrets.sasl_relay_passwd.file = ../../secrets/sasl_relay_passwd.age;
services.nextcloud = {
enable = true;
https = true;
package = pkgs.nextcloud22;
hostName = "neet.cloud";
config.dbtype = "sqlite";
config.adminuser = "jeremy";
config.adminpassFile = "/run/agenix/nextcloud-pw";
autoUpdateApps.enable = true;
};
age.secrets.nextcloud-pw = {
file = ../../secrets/nextcloud-pw.age;
owner = "nextcloud";
};
services.nginx.virtualHosts.${config.services.nextcloud.hostName} = {
enableACME = true;
forceSSL = true;
};
}

View File

@@ -1,36 +0,0 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[ (modulesPath + "/profiles/qemu-guest.nix")
];
boot.initrd.availableKernelModules = [ "ata_piix" "virtio_pci" "floppy" "sr_mod" "virtio_blk" ];
boot.initrd.kernelModules = [ "dm-snapshot" ];
boot.kernelModules = [ ];
boot.extraModulePackages = [ ];
fileSystems."/" =
{ device = "/dev/disk/by-uuid/b90eaf3c-2f91-499a-a066-861e0f4478df";
fsType = "btrfs";
};
fileSystems."/home" =
{ device = "/dev/disk/by-uuid/b90eaf3c-2f91-499a-a066-861e0f4478df";
fsType = "btrfs";
options = [ "subvol=home" ];
};
fileSystems."/boot" =
{ device = "/dev/disk/by-uuid/2b8f6f6d-9358-4d30-8341-7426574e0819";
fsType = "ext3";
};
swapDevices =
[ { device = "/dev/disk/by-uuid/ef7a83db-4b33-41d1-85fc-cff69e480352"; }
];
}

View File

@@ -10,8 +10,6 @@
networking.hostName = "nat";
networking.interfaces.ens160.useDHCP = true;
services.zerotierone.enable = true;
de.enable = true;
de.touchpad.enable = true;
}

View File

@@ -12,12 +12,14 @@
boot.extraModulePackages = [ ];
fileSystems."/" =
{ device = "/dev/disk/by-uuid/02a8c0c7-fd4e-4443-a83c-2d0b63848779";
{
device = "/dev/disk/by-uuid/02a8c0c7-fd4e-4443-a83c-2d0b63848779";
fsType = "btrfs";
};
fileSystems."/boot" =
{ device = "/dev/disk/by-uuid/0C95-1290";
{
device = "/dev/disk/by-uuid/0C95-1290";
fsType = "vfat";
};

View File

@@ -0,0 +1,9 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
];
networking.hostName = "phil";
}

View File

@@ -0,0 +1,46 @@
# Do not modify this file! It was generated by nixos-generate-config
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:
{
imports =
[
(modulesPath + "/profiles/qemu-guest.nix")
];
# because grub just doesn't work for some reason
boot.loader.systemd-boot.enable = true;
remoteLuksUnlock.enable = true;
remoteLuksUnlock.enableTorUnlock = false;
boot.initrd.availableKernelModules = [ "xhci_pci" ];
boot.initrd.kernelModules = [ "dm-snapshot" ];
boot.kernelModules = [ ];
boot.extraModulePackages = [ ];
boot.initrd.luks.devices."enc-pv" = {
device = "/dev/disk/by-uuid/d26c1820-4c39-4615-98c2-51442504e194";
allowDiscards = true;
};
fileSystems."/" =
{
device = "/dev/disk/by-uuid/851bfde6-93cd-439e-9380-de28aa87eda9";
fsType = "btrfs";
};
fileSystems."/boot" =
{
device = "/dev/disk/by-uuid/F185-C4E5";
fsType = "vfat";
};
swapDevices =
[{ device = "/dev/disk/by-uuid/d809e3a1-3915-405a-a200-4429c5efdf87"; }];
networking.interfaces.enp0s6.useDHCP = lib.mkDefault true;
nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
}

View File

@@ -0,0 +1,21 @@
{
hostNames = [
"phil"
"phil.neet.dev"
];
arch = "aarch64-linux";
systemRoles = [
"server"
"nix-builder"
"gitea-actions-runner"
];
hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBlgRPpuUkZqe8/lHugRPm/m2vcN9psYhh5tENHZt9I2";
remoteUnlock = {
hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK0RodotOXLMy/w70aa096gaNqPBnfgiXR5ZAH4+wGzd";
clearnetHost = "unlock.phil.neet.dev";
};
}

View File

@@ -1,33 +1,28 @@
{ config, pkgs, lib, ... }:
{
imports = [
./hardware-configuration.nix
];
networking.hostName = "ponyo";
# system.autoUpgrade.enable = true;
firmware.x86_64.enable = true;
# p2p mesh network
bios = {
services.tailscale.exitNode = true;
enable = true;
device = "/dev/sda";
};
luks = {
# email server
enable = true;
mailserver.enable = true;
device.path = "/dev/disk/by-uuid/4cc36be4-dbff-4afe-927d-69bf4637bae2";
};
system.autoUpgrade.enable = true;
# nextcloud
services.nextcloud.enable = true;
services.zerotierone.enable = true;
# git
services.gitea = {
enable = true;
hostname = "git.neet.dev";
disableRegistration = true;
};
# IRC
services.thelounge = {
enable = true;
port = 9000;
@@ -39,12 +34,14 @@
};
};
# mumble
services.murmur = {
enable = true;
port = 23563;
domain = "voice.neet.space";
};
# IRC bot
services.drastikbot = {
enable = true;
wolframAppIdFile = "/run/agenix/wolframalpha";
@@ -53,8 +50,11 @@
file = ../../secrets/wolframalpha.age;
owner = config.services.drastikbot.user;
};
backup.group."dailybot".paths = [
config.services.drastikbot.dataDir
];
# wrap radio in a VPN
# music radio
vpn-container.enable = true;
vpn-container.config = {
services.radio = {
@@ -62,25 +62,37 @@
host = "radio.runyan.org";
};
};
pia.wireguard.badPortForwardPorts = [ ];
# tailscale
services.nginx.virtualHosts = {
services.tailscale.exitNode = true;
"radio.runyan.org" = {
enableACME = true;
# icecast endpoint + website
forceSSL = true;
services.nginx.virtualHosts."radio.runyan.org" = {
locations = {
enableACME = true;
"/stream.mp3" = {
forceSSL = true;
proxyPass = "http://vpn.containers:8001/stream.mp3";
locations = {
extraConfig = ''
"/stream.mp3" = {
add_header Access-Control-Allow-Origin *;
proxyPass = "http://vpn.containers:8001/stream.mp3";
'';
extraConfig = ''
};
add_header Access-Control-Allow-Origin *;
"/".root = config.inputs.radio-web;
'';
};
};
"radio.neet.space" = {
enableACME = true;
forceSSL = true;
locations = {
"/stream.mp3" = {
proxyPass = "http://vpn.containers:8001/stream.mp3";
extraConfig = ''
add_header Access-Control-Allow-Origin *;
'';
};
"/".root = config.inputs.radio-web;
};
"/".root = config.inputs.radio-web;
};
};
# matrix home server
services.matrix = {
enable = true;
host = "neet.space";
@@ -98,67 +110,36 @@
secret = "a8369a0e96922abf72494bb888c85831b";
};
};
services.postgresql.package = pkgs.postgresql_11;
# pin postgresql for matrix (will need to migrate eventually)
services.postgresql.package = pkgs.postgresql_15;
services.searx = {
enable = true;
environmentFile = "/run/agenix/searx";
settings = {
server.port = 43254;
server.secret_key = "@SEARX_SECRET_KEY@";
engines = [ {
name = "wolframalpha";
shortcut = "wa";
api_key = "@WOLFRAM_API_KEY@";
engine = "wolframalpha_api";
} ];
};
};
services.nginx.virtualHosts."search.neet.space" = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://localhost:${toString config.services.searx.settings.server.port}";
};
};
age.secrets.searx.file = ../../secrets/searx.age;
# iodine DNS-based vpn
services.iodine.server = {
services.iodine.server.enable = true;
enable = true;
ip = "192.168.99.1";
domain = "tun.neet.dev";
passwordFile = "/run/agenix/iodine";
};
age.secrets.iodine.file = ../../secrets/iodine.age;
networking.firewall.allowedUDPPorts = [ 53 ];
networking.nat.internalInterfaces = [
"dns0" # iodine
];
# proxied web services
services.nginx.enable = true;
services.nginx.virtualHosts."jellyfin.neet.cloud" = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://s0.zt.neet.dev";
proxyPass = "http://s0.koi-bebop.ts.net";
proxyWebsockets = true;
};
};
services.nginx.virtualHosts."navidrome.neet.cloud" = {
enableACME = true;
forceSSL = true;
locations."/".proxyPass = "http://s0.zt.neet.dev:4533";
locations."/".proxyPass = "http://s0.koi-bebop.ts.net:4533";
};
# TODO replace with a proper file hosting service
services.nginx.virtualHosts."tmp.neet.dev" = {
enableACME = true;
forceSSL = true;
root = "/var/www/tmp";
};
# redirect to github
# redirect runyan.org to github
services.nginx.virtualHosts."runyan.org" = {
enableACME = true;
forceSSL = true;
@@ -167,6 +148,14 @@
'';
};
# owncast live streaming
services.owncast.enable = true;
services.owncast.hostname = "live.neet.dev";
# librechat
services.librechat.enable = true;
services.librechat.host = "chat.neet.dev";
services.actual-server.enable = true;
services.actual-server.hostname = "actual.runyan.org";
}

View File

@@ -2,7 +2,8 @@
{
imports =
[ (modulesPath + "/profiles/qemu-guest.nix")
[
(modulesPath + "/profiles/qemu-guest.nix")
];
boot.initrd.availableKernelModules = [ "ata_piix" "uhci_hcd" "virtio_pci" "virtio_scsi" "sd_mod" ];
@@ -10,13 +11,27 @@
boot.kernelModules = [ "kvm-intel" "nvme" ];
boot.extraModulePackages = [ ];
firmware.x86_64.enable = true;
bios = {
enable = true;
device = "/dev/sda";
configurationLimit = 3; # Save room in /nix/store
};
remoteLuksUnlock.enable = true;
boot.initrd.luks.devices."enc-pv".device = "/dev/disk/by-uuid/4cc36be4-dbff-4afe-927d-69bf4637bae2";
boot.initrd.luks.devices."enc-pv2".device = "/dev/disk/by-uuid/e52b01b3-81c8-4bb2-ae7e-a3d9c793cb00"; # expanded disk
fileSystems."/" =
{ device = "/dev/mapper/enc-pv";
{
device = "/dev/mapper/enc-pv";
fsType = "btrfs";
};
fileSystems."/boot" =
{ device = "/dev/disk/by-uuid/d3a3777d-1e70-47fa-a274-804dc70ee7fd";
{
device = "/dev/disk/by-uuid/d3a3777d-1e70-47fa-a274-804dc70ee7fd";
fsType = "ext4";
};
@@ -27,11 +42,5 @@
}
];
# The global useDHCP flag is deprecated, therefore explicitly set to false here.
networking.interfaces.eth0.useDHCP = true;
# Per-interface useDHCP will be mandatory in the future, so this generated config
# replicates the default behaviour.
networking.useDHCP = lib.mkDefault false;
networking.interfaces.eth0.useDHCP = lib.mkDefault true;
hardware.cpu.intel.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
}
