Compare commits
31 commits: d6a0e8ec49...master

Commit SHA1s:
412dd12b5a, 684851d641, 4cf50b5fb1, 288a2841aa, 0589ca5748, a4c5cb589a,
a697ea10ad, 200d5a5d22, 339eac52c6, bab4b3ff8e, 54ab576914, c84c0716ce,
a921f40644, a6c17164fa, 9df8390f1f, 156f0183bd, 8b92e51ef7, 7798872bbf,
cf41285cb8, 5a0a525f64, 9154595910, 1b92363b08, 136f024cf0, 3d08a3e9bc,
99ef62d31a, 298f473ceb, 546bd08f83, 10f3e3a7bf, d44bd12e17, 60e89dfc90,
869b6af7f7
.claude/skills/create-workspace/SKILL.md (new file, 160 lines)
@@ -0,0 +1,160 @@
---
name: create-workspace
description: >
  Creates a new sandboxed workspace (isolated dev environment) by adding
  NixOS configuration for a VM, container, or Incus instance. Use when
  the user wants to create, set up, or add a new sandboxed workspace.
---

# Create Sandboxed Workspace

Creates an isolated development environment backed by a VM (microvm.nix), container (systemd-nspawn), or Incus instance. This produces:

1. A workspace config file at `machines/<machine>/workspaces/<name>.nix`
2. A registration entry in `machines/<machine>/default.nix`

## Step 1: Parse Arguments

Extract the workspace name and backend type from `$ARGUMENTS`. If either is missing, ask the user.

- **Name**: lowercase alphanumeric with hyphens (e.g., `my-project`)
- **Type**: one of `vm`, `container`, or `incus`

## Step 2: Detect Machine

Run `hostname` to get the current machine name. Verify that `machines/<hostname>/default.nix` exists.

If the machine directory doesn't exist, stop and tell the user this machine isn't managed by this flake.

## Step 3: Allocate IP Address

Read `machines/<hostname>/default.nix` to find existing `sandboxed-workspace.workspaces` entries and their IPs.

All IPs are in the `192.168.83.0/24` subnet. Use these ranges by convention:

| Type | IP Range |
|------|----------|
| vm | 192.168.83.10 - .49 |
| container | 192.168.83.50 - .89 |
| incus | 192.168.83.90 - .129 |

Pick the next available IP in the appropriate range. If no workspaces exist yet for that type, use the first IP in the range.

## Step 4: Create Workspace Config File

Create `machines/<hostname>/workspaces/<name>.nix`. Use this template:

```nix
{ config, lib, pkgs, ... }:

{
  environment.systemPackages = with pkgs; [
    # Add packages here
  ];
}
```

Ask the user if they want any packages pre-installed.

Create the `workspaces/` directory if it doesn't exist.

**Important:** After creating the file, run `git add` on it immediately. Nix flakes only see files tracked by git, so new files must be staged before `nix build` will work.

## Step 5: Register Workspace

Edit `machines/<hostname>/default.nix` to add the workspace entry inside the `sandboxed-workspace` block.

The entry should look like:

```nix
workspaces.<name> = {
  type = "<type>";
  config = ./workspaces/<name>.nix;
  ip = "<allocated-ip>";
};
```

**If the `sandboxed-workspace` block doesn't exist yet**, add the full block:

```nix
sandboxed-workspace = {
  enable = true;
  workspaces.<name> = {
    type = "<type>";
    config = ./workspaces/<name>.nix;
    ip = "<allocated-ip>";
  };
};
```

The machine also needs `networking.sandbox.upstreamInterface` set. Check if it exists; if not, ask the user for their primary network interface name (they can find it with `ip route show default`).

Do **not** set `hostKey` — it gets auto-generated on first boot and can be added later.

## Step 6: Verify Build

Run a build to check for configuration errors:

```
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-link
```

If the build fails, fix the configuration and retry.

## Step 7: Deploy

Tell the user to deploy by running:

```
doas nixos-rebuild switch --flake .
```

**Never run this command yourself** — it requires privileges.

## Step 8: Post-Deploy Info

Tell the user to deploy and then start the workspace so the host key gets generated. Provide these instructions:

**Deploy:**
```
doas nixos-rebuild switch --flake .
```

**Starting the workspace:**
```
doas systemctl start <service>
```

Where `<service>` is:
- VM: `microvm@<name>`
- Container: `container@<name>`
- Incus: `incus-workspace-<name>`

Or use the auto-generated shell alias: `workspace_<name>_start`

**Connecting:**
```
ssh googlebot@workspace-<name>
```

Or use the alias: `workspace_<name>`

**Never run deploy or start commands yourself** — they require privileges.

## Step 9: Add Host Key

After the user has deployed and started the workspace, add the SSH host key to the workspace config. Do NOT skip this step — always wait for the user to confirm they've started the workspace, then proceed.

1. Read the host key from `~/sandboxed/<name>/ssh-host-keys/ssh_host_ed25519_key.pub`
2. Add `hostKey = "<contents>";` to the workspace entry in `machines/<hostname>/default.nix`
3. Run the build again to verify
4. Tell the user to redeploy with `doas nixos-rebuild switch --flake .`

## Backend Reference

| | VM | Container | Incus |
|---|---|---|---|
| Isolation | Full kernel (cloud-hypervisor) | Shared kernel (systemd-nspawn) | Unprivileged container |
| Overhead | Higher (separate kernel) | Lower (bind mounts) | Medium |
| Filesystem | virtiofs shares | Bind mounts | Incus-managed |
| Use case | Untrusted code, kernel-level isolation | Fast dev environments | Better security than nspawn |
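The next-free-IP selection in Step 3 is simple enough to sketch. The helper below is hypothetical (not part of the repo); it just applies the range convention from the table above to a set of already-used IPs:

```python
# Conventional last-octet ranges per backend type (from the table above)
RANGES = {
    "vm": (10, 49),
    "container": (50, 89),
    "incus": (90, 129),
}

def next_free_ip(workspace_type: str, used_ips: set[str]) -> str:
    """Return the first unused IP in the conventional range for this type."""
    lo, hi = RANGES[workspace_type]
    for host in range(lo, hi + 1):
        candidate = f"192.168.83.{host}"
        if candidate not in used_ips:
            return candidate
    raise RuntimeError(f"no free IPs left in the {workspace_type} range")

print(next_free_ip("vm", {"192.168.83.10", "192.168.83.11"}))  # → 192.168.83.12
print(next_free_ip("container", set()))  # → 192.168.83.50
```

When no workspace of that type exists yet, the loop naturally returns the first IP in the range, matching the rule stated in Step 3.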
.gitea/scripts/build-and-cache.sh (new executable file, 29 lines)
@@ -0,0 +1,29 @@
#!/usr/bin/env bash
set -euo pipefail

# Configure Attic cache
attic login local "$ATTIC_ENDPOINT" "$ATTIC_TOKEN"
attic use local:nixos

# Check flake
nix flake check --all-systems --print-build-logs --log-format raw --show-trace

# Build all systems
nix eval .#nixosConfigurations --apply 'cs: builtins.attrNames cs' --json \
  | jq -r '.[]' \
  | xargs -I{} nix build ".#nixosConfigurations.{}.config.system.build.toplevel" \
      --no-link --print-build-logs --log-format raw

# Push to cache (only locally-built paths >= 0.5MB)
toplevels=$(nix eval .#nixosConfigurations \
  --apply 'cs: map (n: "${cs.${n}.config.system.build.toplevel}") (builtins.attrNames cs)' \
  --json | jq -r '.[]')
echo "Found $(echo "$toplevels" | wc -l) system toplevels"
paths=$(echo "$toplevels" \
  | xargs nix path-info -r --json \
  | jq -r '[to_entries[] | select(
      (.value.signatures | all(startswith("cache.nixos.org") | not))
      and .value.narSize >= 524288
    ) | .key] | unique[]')
echo "Pushing $(echo "$paths" | wc -l) unique paths to cache"
echo "$paths" | xargs attic push local:nixos
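The push step's jq filter selects locally built paths (nothing signed by cache.nixos.org) of at least 512 KiB. The same predicate can be mirrored in Python for clarity; this is a sketch, assuming the object shape that `nix path-info -r --json` emits, with made-up store paths:

```python
def should_push(path_info: dict) -> bool:
    """Mirror the jq filter: no cache.nixos.org signature, and NAR size >= 512 KiB.
    jq's all() on an empty list is true, so unsigned paths also qualify."""
    sigs = path_info.get("signatures") or []
    signed_upstream = any(s.startswith("cache.nixos.org") for s in sigs)
    return not signed_upstream and path_info.get("narSize", 0) >= 524288

# Example entries in the shape `nix path-info -r --json` produces (paths invented)
info = {
    "/nix/store/aaa-foo": {"signatures": ["cache.nixos.org-1:sig"], "narSize": 10_000_000},
    "/nix/store/bbb-bar": {"signatures": ["nixos:sig"], "narSize": 1_000_000},
    "/nix/store/ccc-tiny": {"signatures": [], "narSize": 1024},
}
to_push = sorted(p for p, meta in info.items() if should_push(meta))
print(to_push)  # → ['/nix/store/bbb-bar']
```

The 524288 cutoff skips the many tiny derivations (wrappers, text files) whose upload overhead outweighs any cache benefit.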
.gitea/workflows/auto-update.yaml (new file, 60 lines)
@@ -0,0 +1,60 @@
name: Auto Update Flake
on:
  schedule:
    - cron: '0 6 * * *'
  workflow_dispatch: {}

env:
  DEBIAN_FRONTEND: noninteractive
  PATH: /run/current-system/sw/bin/
  XDG_CONFIG_HOME: ${{ runner.temp }}/.config
  ATTIC_ENDPOINT: ${{ vars.ATTIC_ENDPOINT }}
  ATTIC_TOKEN: ${{ secrets.ATTIC_TOKEN }}

jobs:
  auto-update:
    runs-on: nixos
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
          ref: master
          token: ${{ secrets.PUSH_TOKEN }}

      - name: Configure git identity
        run: |
          git config user.name "gitea-runner"
          git config user.email "gitea-runner@neet.dev"

      - name: Update flake inputs
        id: update
        run: |
          nix flake update
          if git diff --quiet flake.lock; then
            echo "No changes to flake.lock, nothing to do"
            echo "changed=false" >> "$GITHUB_OUTPUT"
          else
            git add flake.lock
            git commit -m "flake.lock: update inputs"
            echo "changed=true" >> "$GITHUB_OUTPUT"
          fi

      - name: Build and cache
        if: steps.update.outputs.changed == 'true'
        run: bash .gitea/scripts/build-and-cache.sh

      - name: Push updated lockfile
        if: steps.update.outputs.changed == 'true'
        run: git push

      - name: Notify on failure
        if: failure() && steps.update.outputs.changed == 'true'
        run: |
          curl -s \
            -H "Authorization: Bearer ${{ secrets.NTFY_TOKEN }}" \
            -H "Title: Flake auto-update failed" \
            -H "Priority: high" \
            -H "Tags: warning" \
            -d "Auto-update workflow failed. Check: ${{ gitea.server_url }}/${{ gitea.repository }}/actions/runs/${{ gitea.run_number }}" \
            https://ntfy.neet.dev/nix-flake-updates
@@ -5,6 +5,9 @@ on: [push]
env:
  DEBIAN_FRONTEND: noninteractive
  PATH: /run/current-system/sw/bin/
  XDG_CONFIG_HOME: ${{ runner.temp }}/.config
  ATTIC_ENDPOINT: ${{ vars.ATTIC_ENDPOINT }}
  ATTIC_TOKEN: ${{ secrets.ATTIC_TOKEN }}

jobs:
  check-flake:
@@ -15,5 +18,16 @@ jobs:
        with:
          fetch-depth: 0

      - name: Check Flake
        run: nix flake check --all-systems --print-build-logs --log-format raw --show-trace
      - name: Build and cache
        run: bash .gitea/scripts/build-and-cache.sh

      - name: Notify on failure
        if: failure()
        run: |
          curl -s \
            -H "Authorization: Bearer ${{ secrets.NTFY_TOKEN }}" \
            -H "Title: Flake check failed" \
            -H "Priority: high" \
            -H "Tags: warning" \
            -d "Check failed for ${{ gitea.ref_name }}. Check: ${{ gitea.server_url }}/${{ gitea.repository }}/actions/runs/${{ gitea.run_number }}" \
            https://ntfy.neet.dev/nix-flake-updates
.gitignore (vendored, 1 line changed)
@@ -1 +1,2 @@
result
.claude/worktrees
CLAUDE.md (115 lines changed)
@@ -1,78 +1,93 @@
# NixOS Configuration
# CLAUDE.md

This is a NixOS flake configuration managing multiple machines.
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Adding Packages
## What This Is

**User packages** go in `home/googlebot.nix`:
- Development tools, editors, language-specific tools
- Use `home.packages` for CLI tools
- Use `programs.<name>` for configurable programs (preferred when available)
- Gate dev tools with `thisMachineIsPersonal` so they only install on workstations
A NixOS flake managing multiple machines. All machines import `/common` for shared config, and each machine has its own directory under `/machines/<hostname>/` with a `default.nix` (machine-specific config), `hardware-configuration.nix`, and `properties.nix` (metadata: hostnames, arch, roles, SSH keys).

**System packages** go in `common/default.nix`:
- Basic utilities needed on every machine (servers and workstations)
- Examples: git, htop, tmux, wget, dnsutils
- Keep this minimal - most packages belong in home/googlebot.nix
## Common Commands

**Personal machine system packages** go in `common/pc/default.nix`:
- Packages that must be system-level (not per-user) due to technical limitations
- But only needed on personal/development machines, not servers
- Examples: packages requiring udev rules, system services, or setuid
```bash
# Build a machine config (check for errors without deploying)
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-link

## Machine Roles
# Deploy to local machine (user must run this themselves - requires privileges)
doas nixos-rebuild switch --flake .

Machines have roles defined in their configuration:
# Deploy to a remote machine (boot-only, no activate)
deploy --remote-build --boot --debug-logs --skip-checks .#<hostname>

- **personal**: Development workstations (desktops, laptops). Install dev tools, GUI apps, editors here.
- **Non-personal**: Servers and production machines. Keep minimal.
# Deploy to a remote machine (activate immediately)
deploy --remote-build --debug-logs --skip-checks .#<hostname>

Use `thisMachineIsPersonal` (or `osConfig.thisMachine.hasRole."personal"`) to conditionally include packages:
# Update flake lockfile
make update-lockfile

```nix
home.packages = lib.mkIf thisMachineIsPersonal [
  pkgs.some-dev-tool
];
# Update a single flake input
make update-input <input-name>

# Edit an agenix secret
make edit-secret <secret-filename>

# Rekey all secrets (after adding/removing machine host keys)
make rekey-secrets
```

## Sandboxed Workspaces
## Architecture

Isolated development environments using VMs or containers. See `skills/create-workspace/SKILL.md`.
### Machine Discovery (Auto-Registration)

- VMs: Full kernel isolation via microvm.nix
- Containers: Lighter weight via systemd-nspawn
Machines are **not** listed in `flake.nix`. Instead, `common/machine-info/default.nix` recursively scans `/machines/` for any `properties.nix` file and auto-registers that directory as a machine. To add a machine, create `machines/<name>/properties.nix` and `machines/<name>/default.nix`.

Configuration: `common/sandboxed-workspace/`
`properties.nix` returns a plain attrset (no NixOS module args) with: `hostNames`, `arch`, `systemRoles`, `hostKey`, and optionally `userKeys`, `deployKeys`, `remoteUnlock`.

## Key Directories
### Role System

- `common/` - Shared NixOS modules for all machines
- `home/` - Home Manager configurations
- `lib/` - Custom lib functions (extends nixpkgs lib, accessible as `lib.*` in modules)
- `machines/` - Per-machine configurations
- `skills/` - Claude Code skills for common tasks
Each machine declares `systemRoles` in its `properties.nix` (e.g., `["personal" "dns-challenge"]`). Roles drive conditional config:
- `config.thisMachine.hasRole.<role>` - boolean, used to conditionally enable features (e.g., `de.enable` for `personal` role)
- `config.machines.withRole.<role>` - list of hostnames with that role
- Roles also determine which machines can decrypt which agenix secrets (see `secrets/secrets.nix`)

## Shared Library
### Secrets (agenix)

Custom utility functions go in `lib/default.nix`. The flake extends `nixpkgs.lib` with these functions, so they're accessible as `lib.functionName` in all modules. Add reusable functions here when used in multiple places.
Secrets in `/secrets/` are encrypted `.age` files. `secrets.nix` maps each secret to the SSH host keys (by role) that can decrypt it. After changing which machines have access, run `make rekey-secrets`.

## Code Comments
### Sandboxed Workspaces

Only add comments that provide value beyond what the code already shows:
- Explain *why* something is done, not *what* is being done
- Document non-obvious constraints or gotchas
- Never add filler comments that repeat the code (e.g. `# Start the service` before a start command)
`common/sandboxed-workspace/` provides isolated dev environments. Three backends: `vm` (microvm/cloud-hypervisor), `container` (systemd-nspawn), `incus`. Workspaces are defined in machine `default.nix` files and their per-workspace config goes in `machines/<hostname>/workspaces/<name>.nix`. The base config (`base.nix`) handles networking, SSH, user setup, and Claude Code pre-configuration.

## Bash Commands
IP allocation convention: VMs `.10-.49`, containers `.50-.89`, incus `.90-.129` in `192.168.83.0/24`.

Do not redirect stderr to stdout (no `2>&1`). This can hide important output and errors.
### Backups

Do not use `doas` or `sudo` - they will fail. Ask the user to run privileged commands themselves.
`common/backups.nix` defines a `backup.group` option. Machines declare backup groups with paths; restic handles daily backups to Backblaze B2 with automatic ZFS/btrfs snapshot support. Each group gets a `restic_<group>` CLI wrapper for manual operations.

## Nix Commands
### Nixpkgs Patching

Use `--no-link` with `nix build` to avoid creating `result` symlinks in the working directory.
`flake.nix` applies patches from `/patches/` to nixpkgs before building (workaround for nix#3920).

## Git Commits
### Service Dashboard & Monitoring

Do not add "Co-Authored-By" lines to commit messages.
When adding or removing a web-facing service, update both:
- **Gatus** (`common/server/gatus.nix`) — add/remove the endpoint monitor
- **Dashy** — add/remove the service entry from the dashboard config

### Key Conventions

- Uses `doas` instead of `sudo` everywhere
- Fish shell is the default user shell
- Home Manager is used for user-level config (`home/googlebot.nix`)
- `lib/default.nix` extends nixpkgs lib with custom utility functions (extends via `nixpkgs.lib.extend`)
- Overlays are in `/overlays/` and applied globally via `flake.nix`
- The Nix formatter for this project is `nixpkgs-fmt`
- Do not add "Co-Authored-By" lines to commit messages
- Always use `--no-link` when running `nix build`
- Don't use `nix build --dry-run` unless you only need evaluation — it skips the actual build
- Avoid `2>&1` on nix commands — it can cause error output to be missed

## Git Worktree Requirement

When instructed to work in a git worktree (e.g., via `isolation: "worktree"` or told to use a worktree), you **MUST** do so. If you are unable to create or use a git worktree, you **MUST** stop work immediately and report the failure to the user. Do not fall back to working in the main working tree.

When applying work from a git worktree back to the main branch, commit in the worktree first, then use `git cherry-pick` from the main working tree to bring the commit over. Do not use `git checkout` or `git apply` to copy files directly. Do **not** automatically apply worktree work to the main branch — always ask the user for approval first.
README.md (41 lines changed)
@@ -1,11 +1,32 @@
# My NixOS configurations
# NixOS Configuration

### Source Layout
- `/common` - common configuration imported into all `/machines`
- `/boot` - config related to bootloaders, cpu microcode, and unlocking LUKS root disks over Tor
- `/network` - config for tailscale, and NixOS container with automatic vpn tunneling via PIA
- `/pc` - config that a graphical PC should have. Have the `personal` role set in the machine's `properties.nix` to enable everything.
- `/server` - config that creates new nixos services or extends existing ones to meet my needs
- `/machines` - all my NixOS machines along with their machine unique configuration for hardware and services
- `/kexec` - a special machine for generating minimal kexec images. Does not import `/common`
- `/secrets` - encrypted shared secrets unlocked through `/machines` ssh host keys
A NixOS flake managing multiple machines with role-based configuration, agenix secrets, and sandboxed dev workspaces.

## Layout

- `/common` - shared configuration imported by all machines
- `/boot` - bootloaders, CPU microcode, remote LUKS unlock over Tor
- `/network` - Tailscale, VPN tunneling via PIA
- `/pc` - desktop/graphical config (enabled by the `personal` role)
- `/server` - service definitions and extensions
- `/sandboxed-workspace` - isolated dev environments (VM, container, or Incus)
- `/machines` - per-machine config (`default.nix`, `hardware-configuration.nix`, `properties.nix`)
- `/secrets` - agenix-encrypted secrets, decryptable by machines based on their roles
- `/home` - Home Manager user config
- `/lib` - custom library functions extending nixpkgs lib
- `/overlays` - nixpkgs overlays applied globally
- `/patches` - patches applied to nixpkgs at build time

## Notable Features

**Auto-discovery & roles** — Machines register themselves by placing a `properties.nix` under `/machines/`. No manual listing in `flake.nix`. Roles declared per-machine (`"personal"`, `"dns-challenge"`, etc.) drive feature enablement via `config.thisMachine.hasRole.<role>` and control which agenix secrets each machine can decrypt.

**Machine properties module system** — `properties.nix` files form a separate lightweight module system (`machine-info`) for recording machine metadata (hostnames, architecture, roles, SSH keys). Since every machine's properties are visible to every other machine, each system can reflect on the properties of the entire fleet — enabling automatic SSH trust, role-based secret access, and cross-machine coordination without duplicating information.

**Remote LUKS unlock over Tor** — Machines with encrypted root disks can be unlocked remotely via SSH. An embedded Tor hidden service starts in the initrd so the machine is reachable even without a known IP, using a separate SSH host key for the boot environment.

**VPN containers** — A `vpn-container` module spins up an ephemeral NixOS container with a PIA WireGuard tunnel. The host creates the WireGuard interface and authenticates with PIA, then hands it off to the container's network namespace. This ensures that the container can **never** have direct internet access. Leakage is impossible.

**Sandboxed workspaces** — Isolated dev environments backed by microVMs (cloud-hypervisor), systemd-nspawn containers, or Incus. Each workspace gets a static IP on a NAT'd bridge, auto-generated SSH host keys, shell aliases for management, and comes pre-configured with Claude Code. The sandbox network blocks access to the local LAN while allowing internet.

**Snapshot-aware backups** — Restic backups to Backblaze B2 automatically create ZFS snapshots or btrfs read-only snapshots before backing up, using mount namespaces to bind-mount frozen data over the original paths so restic records correct paths. Each backup group gets a `restic_<group>` CLI wrapper. Supports `.nobackup` marker files.
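The auto-discovery described above can be sketched outside of Nix. This is a hypothetical Python equivalent of the scan that `common/machine-info/default.nix` performs, shown against a throwaway directory tree:

```python
from pathlib import Path
import tempfile

def discover_machines(machines_dir: Path) -> list[str]:
    """Every directory containing a properties.nix is a registered machine."""
    return sorted(p.parent.name for p in machines_dir.rglob("properties.nix"))

# Demo with a temporary tree standing in for /machines
root = Path(tempfile.mkdtemp())
for name in ("s0", "laptop"):
    d = root / name
    d.mkdir(parents=True)
    (d / "properties.nix").write_text("{ }")
print(discover_machines(root))  # → ['laptop', 's0']
```

The recursive scan (`rglob`) mirrors the fact that `properties.nix` files are found anywhere under `/machines/`, so nested machine directories register too.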
@@ -1,4 +1,4 @@
{ config, lib, ... }:
{ config, ... }:

{
  nix = {
@@ -6,11 +6,11 @@
    substituters = [
      "https://cache.nixos.org/"
      "https://nix-community.cachix.org"
      "http://s0.koi-bebop.ts.net:5000"
      "http://s0.neet.dev:28338/nixos"
    ];
    trusted-public-keys = [
      "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
      "s0.koi-bebop.ts.net:OjbzD86YjyJZpCp9RWaQKANaflcpKhtzBMNP8I2aPUU="
      "nixos:e5AMCUWWEX9MESWAAMjBkZdGUpl588NhgsUO3HsdhFw="
    ];

    # Allow substituters to be offline
@@ -19,6 +19,11 @@
    # and use this flag as intended for deciding if it should build missing
    # derivations locally. See https://github.com/NixOS/nix/issues/6901
    fallback = true;

    # Authenticate to private nixos cache
    netrc-file = config.age.secrets.attic-netrc.path;
    };
  };

  age.secrets.attic-netrc.file = ../secrets/attic-netrc.age;
}

@@ -1,4 +1,4 @@
{ lib, config, pkgs, ... }:
{ ... }:

{
  imports = [

@@ -1,4 +1,4 @@
{ lib, config, pkgs, ... }:
{ lib, config, ... }:

with lib;
let

@@ -6,6 +6,8 @@
    ./binary-cache.nix
    ./flakes.nix
    ./auto-update.nix
    ./ntfy-alerts.nix
    ./zfs-alerts.nix
    ./shell.nix
    ./network
    ./boot
@@ -56,7 +58,6 @@
    pciutils
    usbutils
    killall
    screen
    micro
    helix
    lm_sensors
@@ -101,5 +102,5 @@
  security.acme.defaults.email = "zuckerberg@neet.dev";

  # Enable Desktop Environment if this is a PC (machine role is "personal")
  de.enable = lib.mkDefault (config.thisMachine.hasRole."personal");
  de.enable = lib.mkDefault (config.thisMachine.hasRole."personal" && !config.boot.isContainer);
}

@@ -1,4 +1,4 @@
{ lib, pkgs, config, ... }:
{ lib, config, ... }:
with lib;
let
  cfg = config.nix.flakes;
@@ -1,7 +1,7 @@
# Gathers info about each machine to construct the overall configuration
# Ex: each machine already trusts every other machine's SSH fingerprint

{ config, lib, pkgs, ... }:
{ config, lib, ... }:

let
  machines = config.machines.hosts;
@@ -9,7 +9,6 @@ in
  imports = [
    ./pia-openvpn.nix
    ./pia-wireguard.nix
    ./ping.nix
    ./tailscale.nix
    ./vpn.nix
    ./sandbox.nix

@@ -1,59 +0,0 @@
{ config, pkgs, lib, ... }:

# keeps peer to peer connections alive with a periodic ping

with lib;
with builtins;

# todo auto restart

let
  cfg = config.keepalive-ping;

  serviceTemplate = host:
    {
      "keepalive-ping@${host}" = {
        description = "Periodic ping keep alive for ${host} connection";

        requires = [ "network-online.target" ];
        after = [ "network.target" "network-online.target" ];
        wantedBy = [ "multi-user.target" ];
        serviceConfig.Restart = "always";

        path = with pkgs; [ iputils ];

        script = ''
          ping -i ${cfg.delay} ${host} &>/dev/null
        '';
      };
    };

  combineAttrs = foldl recursiveUpdate { };

  serviceList = map serviceTemplate cfg.hosts;

  services = combineAttrs serviceList;
in
{
  options.keepalive-ping = {
    enable = mkEnableOption "Enable keep alive ping task";
    hosts = mkOption {
      type = types.listOf types.str;
      default = [ ];
      description = ''
        Hosts to ping periodically
      '';
    };
    delay = mkOption {
      type = types.str;
      default = "60";
      description = ''
        Ping interval in seconds of periodic ping per host being pinged
      '';
    };
  };

  config = mkIf cfg.enable {
    systemd.services = services;
  };
}

@@ -112,5 +112,15 @@ in
    allowedTCPPorts = [ 53 ];
    allowedUDPPorts = [ 53 ];
  };

  # Block sandboxes from reaching the local network (private RFC1918 ranges)
  # while still allowing public internet access via NAT.
  # The sandbox subnet itself is allowed so workspaces can reach the host gateway.
  networking.firewall.extraForwardRules = ''
    iifname ${cfg.bridgeName} ip daddr ${cfg.hostAddress} accept
    iifname ${cfg.bridgeName} ip daddr 10.0.0.0/8 drop
    iifname ${cfg.bridgeName} ip daddr 172.16.0.0/12 drop
    iifname ${cfg.bridgeName} ip daddr 192.168.0.0/16 drop
  '';
  };
}

@@ -10,6 +10,10 @@ in

config.services.tailscale.enable = mkDefault (!config.boot.isContainer);

# Trust Tailscale interface - access control is handled by Tailscale ACLs.
# Required because nftables (used by Incus) breaks Tailscale's automatic iptables rules.
config.networking.firewall.trustedInterfaces = mkIf cfg.enable [ "tailscale0" ];

# MagicDNS
config.networking.nameservers = mkIf cfg.enable [ "1.1.1.1" "8.8.8.8" ];
config.networking.search = mkIf cfg.enable [ "koi-bebop.ts.net" ];

@@ -1,4 +1,4 @@
{ config, pkgs, lib, allModules, ... }:
{ config, lib, allModules, ... }:

with lib;

@@ -4,15 +4,15 @@ let
  builderUserName = "nix-builder";

  builderRole = "nix-builder";
  builders = config.machines.withRole.${builderRole};
  thisMachineIsABuilder = config.thisMachine.hasRole.${builderRole};
  builders = config.machines.withRole.${builderRole} or [];
  thisMachineIsABuilder = config.thisMachine.hasRole.${builderRole} or false;

  # builders don't include themselves as a remote builder
  otherBuilders = lib.filter (hostname: hostname != config.networking.hostName) builders;
in
lib.mkMerge [
  # configure builder
  (lib.mkIf thisMachineIsABuilder {
  (lib.mkIf (thisMachineIsABuilder && !config.boot.isContainer) {
    users.users.${builderUserName} = {
      description = "Distributed Nix Build User";
      group = builderUserName;
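The forward rules added in the sandbox hunk above apply a simple ordered policy: accept traffic to the host gateway, drop anything else bound for RFC1918 space, and let public traffic through NAT. A Python sketch of that decision order (illustrative only; `192.168.83.1` stands in for `cfg.hostAddress`, which the sketch does not know):

```python
import ipaddress

# The three private ranges the nftables rules drop
PRIVATE = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def forward_decision(daddr: str, host_address: str = "192.168.83.1") -> str:
    """Mirror the rule order: host gateway accepted first, then RFC1918 dropped,
    everything else forwarded (NAT'd to the internet)."""
    ip = ipaddress.ip_address(daddr)
    if daddr == host_address:
        return "accept"
    if any(ip in net for net in PRIVATE):
        return "drop"
    return "forward"

print(forward_decision("192.168.83.1"))  # → accept
print(forward_decision("192.168.1.50"))  # → drop
print(forward_decision("1.1.1.1"))       # → forward
```

Rule order matters: the gateway accept must precede the `192.168.0.0/16` drop, since the gateway address itself falls inside that range.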
common/ntfy-alerts.nix (new file, 57 lines)
@@ -0,0 +1,57 @@
|
||||
{ config, lib, pkgs, ... }:
|
||||
|
||||
let
|
||||
cfg = config.ntfy-alerts;
|
||||
in
|
||||
{
|
||||
options.ntfy-alerts = {
|
||||
serverUrl = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "https://ntfy.neet.dev";
|
||||
description = "Base URL of the ntfy server.";
|
||||
};
|
||||
|
||||
topic = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "service-failures";
|
||||
description = "ntfy topic to publish alerts to.";
|
||||
};
|
||||
};
|
||||
|
||||
config = lib.mkIf config.thisMachine.hasRole."ntfy" {
|
||||
age.secrets.ntfy-token.file = ../secrets/ntfy-token.age;
|
||||
|
||||
systemd.services."ntfy-failure@" = {
|
||||
description = "Send ntfy alert for failed unit %i";
|
||||
wants = [ "network-online.target" ];
|
||||
after = [ "network-online.target" ];
|
||||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
EnvironmentFile = "/run/agenix/ntfy-token";
|
||||
ExecStart = "${pkgs.writeShellScript "ntfy-failure-notify" ''
|
||||
unit="$1"
|
||||
${lib.getExe pkgs.curl} \
|
||||
--fail --silent --show-error \
|
||||
--max-time 30 --retry 3 \
|
||||
-H "Authorization: Bearer $NTFY_TOKEN" \
|
||||
-H "Title: Service failure on ${config.networking.hostName}" \
|
||||
-H "Priority: high" \
|
||||
-H "Tags: rotating_light" \
|
||||
-d "Unit $unit failed at $(date +%c)" \
|
||||
"${cfg.serverUrl}/${cfg.topic}"
|
||||
''} %i";
|
||||
};
|
||||
};
|
||||
|
||||
# Apply OnFailure to all services via a systemd drop-in
|
||||
systemd.packages = [
|
||||
(pkgs.runCommand "ntfy-on-failure-dropin" { } ''
|
||||
mkdir -p $out/lib/systemd/system/service.d
|
||||
cat > $out/lib/systemd/system/service.d/ntfy-on-failure.conf <<'EOF'
|
||||
[Unit]
|
||||
OnFailure=ntfy-failure@%p.service
|
||||
EOF
|
||||
'')
|
||||
];
|
||||
};
|
||||
}
|
||||
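The module above exposes `serverUrl` and `topic` options with defaults pointing at ntfy.neet.dev. A hedged sketch of how a machine could override them, assuming the module is imported; the URL and topic here are made-up placeholders, not values from this repo:

```nix
# Hypothetical per-machine override of the ntfy-alerts options defined above.
{ ... }:
{
  ntfy-alerts = {
    serverUrl = "https://ntfy.example.com"; # placeholder server
    topic = "homelab-alerts";               # placeholder topic
  };
}
```

Because the alert script interpolates `cfg.serverUrl` and `cfg.topic`, the override changes where every `ntfy-failure@<unit>` notification is published without touching the template unit itself.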
@@ -1,4 +1,4 @@
-{ lib, config, pkgs, ... }:
+{ lib, config, ... }:

 let
   cfg = config.de;

@@ -1,4 +1,4 @@
-{ lib, config, pkgs, ... }:
+{ lib, config, ... }:

 let
   cfg = config.de;
@@ -27,6 +27,7 @@ in
     ../shell.nix
     hostConfig.inputs.home-manager.nixosModules.home-manager
     hostConfig.inputs.nix-index-database.nixosModules.default
+    hostConfig.inputs.agenix.nixosModules.default
   ];

   nixpkgs.overlays = [

@@ -114,6 +115,14 @@ in

   # Enable flakes
   nix.settings.experimental-features = [ "nix-command" "flakes" ];
   nix.settings.trusted-users = [ "googlebot" ];

+  # Binary cache configuration (inherited from host's common/binary-cache.nix)
+  nix.settings.substituters = hostConfig.nix.settings.substituters;
+  nix.settings.trusted-public-keys = hostConfig.nix.settings.trusted-public-keys;
+  nix.settings.fallback = true;
+  nix.settings.netrc-file = config.age.secrets.attic-netrc.path;
+  age.secrets.attic-netrc.file = ../../secrets/attic-netrc.age;
+
   # Make nixpkgs available in NIX_PATH and registry (like the NixOS ISO)
   # This allows `nix-shell -p`, `nix repl '<nixpkgs>'`, etc. to work
@@ -1,4 +1,4 @@
-{ config, lib, pkgs, ... }:
+{ lib, pkgs, ... }:

 # Home Manager configuration for sandboxed workspace user environment
 # This sets up the shell and tools inside VMs and containers

@@ -32,6 +32,9 @@ let
   networking.useHostResolvConf = false;
   nixpkgs.config.allowUnfree = true;

+  # Incus containers don't support the kernel features nix sandbox requires
+  nix.settings.sandbox = false;
+
   environment.systemPackages = [
     (lib.hiPrio (pkgs.writeShellScriptBin "claude" ''
       exec ${pkgs.claude-code}/bin/claude --dangerously-skip-permissions "$@"

@@ -1,4 +1,4 @@
-{ config, pkgs, lib, ... }:
+{ config, lib, ... }:

 let
   cfg = config.services.actual;
62 common/server/atticd.nix Normal file
@@ -0,0 +1,62 @@
{ config, lib, ... }:

{
  config = lib.mkIf (config.thisMachine.hasRole."binary-cache" && !config.boot.isContainer) {
    services.atticd = {
      enable = true;
      environmentFile = config.age.secrets.atticd-credentials.path;
      settings = {
        listen = "[::]:28338";
        database.url = "postgresql:///atticd?host=/run/postgresql";
        require-proof-of-possession = false;

        # Disable chunking — the dedup savings don't justify the CPU/IO
        # overhead for local storage, especially on ZFS which already
        # does block-level compression.
        chunking = {
          nar-size-threshold = 0;
          min-size = 16 * 1024;
          avg-size = 64 * 1024;
          max-size = 256 * 1024;
        };

        # Let ZFS handle compression instead of double-compressing.
        compression.type = "none";

        garbage-collection.default-retention-period = "6 months";
      };
    };

    # PostgreSQL for atticd
    services.postgresql = {
      enable = true;
      ensureDatabases = [ "atticd" ];
      ensureUsers = [{
        name = "atticd";
        ensureDBOwnership = true;
      }];
    };

    # Use a static user so the ZFS mountpoint at /var/lib/atticd works
    # (DynamicUser conflicts with ZFS mountpoints)
    users.users.atticd = {
      isSystemUser = true;
      group = "atticd";
      home = "/var/lib/atticd";
    };
    users.groups.atticd = { };

    systemd.services.atticd = {
      after = [ "postgresql.service" ];
      requires = [ "postgresql.service" ];
      partOf = [ "postgresql.service" ];
      serviceConfig = {
        DynamicUser = lib.mkForce false;
        User = "atticd";
        Group = "atticd";
      };
    };

    age.secrets.atticd-credentials.file = ../../secrets/atticd-credentials.age;
  };
}
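For a client machine to actually pull from a cache served like this, it needs matching substituter settings. A minimal sketch under stated assumptions: the cache URL, cache name, and public key below are placeholders, not values from this repo (the workspace config earlier in this diff inherits the real values from the host instead):

```nix
# Hypothetical client-side settings for consuming an attic cache like the one above.
{
  nix.settings = {
    substituters = [ "https://cache.example.com/main" ];        # placeholder endpoint
    trusted-public-keys = [ "main:AAAAexamplepublickey=" ];     # placeholder key
    # attic serves private caches over HTTP auth; a netrc file supplies credentials
    netrc-file = "/run/agenix/attic-netrc";
  };
}
```

Keeping `fallback = true` alongside this, as the workspace hunk above does, lets builds proceed from source when the cache is unreachable.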
@@ -1,43 +0,0 @@
{ config, lib, ... }:

with lib;
let
  cfg = config.ceph;
in
{
  options.ceph = { };

  config = mkIf cfg.enable {
    # ceph.enable = true;

    ## S3 Object gateway
    #ceph.rgw.enable = true;
    #ceph.rgw.daemons = [
    #];

    # https://docs.ceph.com/en/latest/start/intro/

    # meta object storage daemon
    ceph.osd.enable = true;
    ceph.osd.daemons = [
    ];
    # monitor's ceph state
    ceph.mon.enable = true;
    ceph.mon.daemons = [
    ];
    # manage ceph
    ceph.mgr.enable = true;
    ceph.mgr.daemons = [
    ];
    # metadata server
    ceph.mds.enable = true;
    ceph.mds.daemons = [
    ];
    ceph.global.fsid = "925773DC-D95F-476C-BBCD-08E01BF0865F";
  };
}
@@ -1,24 +1,22 @@
-{ config, pkgs, ... }:
+{ ... }:

 {
   imports = [
     ./nginx.nix
     ./thelounge.nix
     ./mumble.nix
     ./icecast.nix
     ./nginx-stream.nix
     ./matrix.nix
     ./zerobin.nix
     ./gitea.nix
     ./samba.nix
     ./owncast.nix
     ./mailserver.nix
     ./nextcloud.nix
     ./iodine.nix
     ./searx.nix
     ./gitea-actions-runner.nix
     ./atticd.nix
     ./librechat.nix
     ./actualbudget.nix
     ./unifi.nix
+    ./ntfy.nix
+    ./gatus.nix
   ];
 }
146 common/server/gatus.nix Normal file
@@ -0,0 +1,146 @@
{ lib, config, ... }:

let
  cfg = config.services.gatus;
  port = 31103;
in
{
  options.services.gatus = {
    hostname = lib.mkOption {
      type = lib.types.str;
      example = "status.example.com";
    };
  };

  config = lib.mkIf cfg.enable {
    services.gatus = {
      environmentFile = "/run/agenix/ntfy-token";
      settings = {
        storage = {
          type = "sqlite";
          path = "/var/lib/gatus/data.db";
        };

        web = {
          address = "127.0.0.1";
          port = port;
        };

        alerting.ntfy = {
          url = "https://ntfy.neet.dev";
          topic = "service-failures";
          priority = 4;
          default-alert = {
            enabled = true;
            failure-threshold = 3;
            success-threshold = 2;
            send-on-resolved = true;
          };
          token = "$NTFY_TOKEN";
        };

        endpoints = [
          {
            name = "Gitea";
            group = "services";
            url = "https://git.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "The Lounge";
            group = "services";
            url = "https://irc.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "ntfy";
            group = "services";
            url = "https://ntfy.neet.dev/v1/health";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Librechat";
            group = "services";
            url = "https://chat.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Owncast";
            group = "services";
            url = "https://live.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Nextcloud";
            group = "services";
            url = "https://neet.cloud";
            interval = "5m";
            conditions = [
              "[STATUS] == any(200, 302)"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Element Web";
            group = "services";
            url = "https://chat.neet.space";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Mumble";
            group = "services";
            url = "tcp://voice.neet.space:23563";
            interval = "5m";
            conditions = [
              "[CONNECTED] == true"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Navidrome";
            group = "services";
            url = "https://navidrome.neet.cloud";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
        ];
      };
    };
    services.nginx.enable = true;
    services.nginx.virtualHosts.${cfg.hostname} = {
      enableACME = true;
      forceSSL = true;
      locations."/" = {
        proxyPass = "http://127.0.0.1:${toString port}";
        proxyWebsockets = true;
      };
    };
  };
}
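Each entry in the `endpoints` list above follows the same shape, so extending the monitor is a matter of appending one more attrset. A sketch of one additional entry; the name and URL are placeholders, not services from this repo:

```nix
# Hypothetical extra Gatus endpoint in the same shape as the entries above.
{
  name = "Example service";            # placeholder name
  group = "services";
  url = "https://example.neet.dev";    # placeholder URL
  interval = "5m";
  conditions = [
    "[STATUS] == 200"
  ];
  alerts = [{ type = "ntfy"; }];       # reuses the ntfy alerting provider defined above
}
```

Non-HTTP checks work the same way: the Mumble entry above swaps the URL scheme to `tcp://` and the condition to `[CONNECTED] == true`.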
@@ -1,132 +1,85 @@
-{ config, pkgs, lib, allModules, ... }:
+{ config, lib, ... }:

-# Gitea Actions Runner. Starts 'host' runner that runs directly on the host inside of a nixos container
-# This is useful for providing a real Nix/OS builder to gitea.
-# Warning, NixOS containers are not secure. For example, the container shares the /nix/store
-# Therefore, this should not be used to run untrusted code.
-# To enable, assign a machine the 'gitea-actions-runner' system role
-
-# TODO: skipping running inside of nixos container for now because of issues getting docker/podman running
+# Gitea Actions Runner inside a NixOS container.
+# The container shares the host's /nix/store (read-only) and nix-daemon socket,
+# so builds go through the host daemon and outputs land in the host store.
+# Warning: NixOS containers are not fully secure — do not run untrusted code.
+# To enable, assign a machine the 'gitea-actions-runner' system role.

 let
   thisMachineIsARunner = config.thisMachine.hasRole."gitea-actions-runner";
   containerName = "gitea-runner";
+  giteaRunnerUid = 991;
+  giteaRunnerGid = 989;
 in
 {
   config = lib.mkIf (thisMachineIsARunner && !config.boot.isContainer) {
-    # containers.${containerName} = {
-    #   ephemeral = true;
-    #   autoStart = true;
-
-    #   # for podman
-    #   enableTun = true;
+    containers.${containerName} = {
+      autoStart = true;
+      ephemeral = true;

-    #   # privateNetwork = true;
-    #   # hostAddress = "172.16.101.1";
-    #   # localAddress = "172.16.101.2";
+      bindMounts = {
+        "/run/agenix/gitea-actions-runner-token" = {
+          hostPath = "/run/agenix/gitea-actions-runner-token";
+          isReadOnly = true;
+        };
+        "/var/lib/gitea-runner" = {
+          hostPath = "/var/lib/gitea-runner";
+          isReadOnly = false;
+        };
+      };

-    #   bindMounts = {
-    #     "/run/agenix/gitea-actions-runner-token" = {
-    #       hostPath = "/run/agenix/gitea-actions-runner-token";
-    #       isReadOnly = true;
-    #     };
-    #     "/var/lib/gitea-runner" = {
-    #       hostPath = "/var/lib/gitea-runner";
-    #       isReadOnly = false;
-    #     };
-    #   };
+      config = { config, lib, pkgs, ... }: {
+        system.stateVersion = "25.11";

-    #   extraFlags = [
-    #     # Allow podman
-    #     ''--system-call-filter=thisystemcalldoesnotexistforsure''
-    #   ];
+        services.gitea-actions-runner.instances.inst = {
+          enable = true;
+          name = containerName;
+          url = "https://git.neet.dev/";
+          tokenFile = "/run/agenix/gitea-actions-runner-token";
+          labels = [ "nixos:host" ];
+        };

-    #   additionalCapabilities = [
-    #     "CAP_SYS_ADMIN"
-    #   ];
+        # Disable dynamic user so runner state persists via bind mount
+        assertions = [{
+          assertion = config.systemd.services.gitea-runner-inst.enable;
+          message = "Expected systemd service 'gitea-runner-inst' is not enabled — the gitea-actions-runner module may have changed its naming scheme.";
+        }];
+        systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
+        users.users.gitea-runner = {
+          uid = giteaRunnerUid;
+          home = "/var/lib/gitea-runner";
+          group = "gitea-runner";
+          isSystemUser = true;
+          createHome = true;
+        };
+        users.groups.gitea-runner.gid = giteaRunnerGid;

-    #   config = {
-    #     imports = allModules;
+        nix.settings.experimental-features = [ "nix-command" "flakes" ];

-    #     # speeds up evaluation
-    #     nixpkgs.pkgs = pkgs;
-
-    #     networking.hostName = lib.mkForce containerName;
-
-    #     # don't use remote builders
-    #     nix.distributedBuilds = lib.mkForce false;
-
-    #     environment.systemPackages = with pkgs; [
-    #       git
-    #       # Gitea Actions rely heavily on node. Include it because it would be installed anyway.
-    #       nodejs
-    #     ];
-
-    #     services.gitea-actions-runner.instances.inst = {
-    #       enable = true;
-    #       name = config.networking.hostName;
-    #       url = "https://git.neet.dev/";
-    #       tokenFile = "/run/agenix/gitea-actions-runner-token";
-    #       labels = [
-    #         "ubuntu-latest:docker://node:18-bullseye"
-    #         "nixos:host"
-    #       ];
-    #     };
-
-    #     # To allow building on the host, must override the the service's config so it doesn't use a dynamic user
-    #     systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
-    #     users.users.gitea-runner = {
-    #       home = "/var/lib/gitea-runner";
-    #       group = "gitea-runner";
-    #       isSystemUser = true;
-    #       createHome = true;
-    #     };
-    #     users.groups.gitea-runner = { };
-
-    #     virtualisation.podman.enable = true;
-    #     boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
-    #   };
-    # };
-
-    # networking.nat.enable = true;
-    # networking.nat.internalInterfaces = [
-    #   "ve-${containerName}"
-    # ];
-    # networking.ip_forward = true;
+        # don't use remote builders
+        nix.distributedBuilds = lib.mkForce false;

-    services.gitea-actions-runner.instances.inst = {
-      enable = true;
-      name = config.networking.hostName;
-      url = "https://git.neet.dev/";
-      tokenFile = "/run/agenix/gitea-actions-runner-token";
-      labels = [
-        "ubuntu-latest:docker://node:18-bullseye"
-        "nixos:host"
-      ];
-    };
+        environment.systemPackages = with pkgs; [
+          git
+          nodejs
+          jq
+          attic-client
+        ];
+      };
+    };

-    environment.systemPackages = with pkgs; [
-      git
-      # Gitea Actions rely heavily on node. Include it because it would be installed anyway.
-      nodejs
-    ];
+    # Needs to be outside of the container because the container uses the host's nix-daemon
+    nix.settings.trusted-users = [ "gitea-runner" ];

-    # To allow building on the host, must override the the service's config so it doesn't use a dynamic user
-    systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
+    # Matching user on host — the container's gitea-runner UID must be
+    # recognized by the host's nix-daemon as trusted (shared UID namespace)
+    users.users.gitea-runner = {
+      uid = giteaRunnerUid;
+      home = "/var/lib/gitea-runner";
+      group = "gitea-runner";
+      isSystemUser = true;
+      createHome = true;
+    };
-    users.groups.gitea-runner = { };
-
-    virtualisation.podman.enable = true;
-    boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
+    users.groups.gitea-runner.gid = giteaRunnerGid;

     age.secrets.gitea-actions-runner-token.file = ../../secrets/gitea-actions-runner-token.age;
   };
 }
@@ -1,4 +1,4 @@
-{ lib, pkgs, config, ... }:
+{ lib, config, ... }:

 let
   cfg = config.services.gitea;
@@ -1,42 +0,0 @@
{ config, pkgs, ... }:

{
  services.gitlab = {
    enable = true;
    databasePasswordFile = "/var/keys/gitlab/db_password";
    initialRootPasswordFile = "/var/keys/gitlab/root_password";
    https = true;
    host = "git.neet.dev";
    port = 443;
    user = "git";
    group = "git";
    databaseUsername = "git";
    smtp = {
      enable = true;
      address = "localhost";
      port = 25;
    };
    secrets = {
      dbFile = "/var/keys/gitlab/db";
      secretFile = "/var/keys/gitlab/secret";
      otpFile = "/var/keys/gitlab/otp";
      jwsFile = "/var/keys/gitlab/jws";
    };
    extraConfig = {
      gitlab = {
        email_from = "gitlab-no-reply@neet.dev";
        email_display_name = "neet.dev GitLab";
        email_reply_to = "gitlab-no-reply@neet.dev";
      };
    };
    pagesExtraArgs = [ "-listen-proxy" "127.0.0.1:8090" ];
  };

  services.nginx.virtualHosts = {
    "git.neet.dev" = {
      enableACME = true;
      forceSSL = true;
      locations."/".proxyPass = "http://unix:/run/gitlab/gitlab-workhorse.socket";
    };
  };
}
@@ -1,25 +0,0 @@
{ config, pkgs, ... }:

let
  domain = "hydra.neet.dev";
  port = 3000;
  notifyEmail = "hydra@neet.dev";
in
{
  services.nginx.virtualHosts."${domain}" = {
    enableACME = true;
    forceSSL = true;
    locations."/" = {
      proxyPass = "http://localhost:${toString port}";
    };
  };

  services.hydra = {
    enable = true;
    inherit port;
    hydraURL = "https://${domain}";
    useSubstitutes = true;
    notificationSender = notifyEmail;
    buildMachinesFiles = [ ];
  };
}
@@ -1,65 +0,0 @@
{ lib, config, ... }:

# configures icecast to only accept source from localhost
# to a audio optimized stream on services.icecast.mount
# made available via nginx for http access on
# https://host/mount

let
  cfg = config.services.icecast;
in
{
  options.services.icecast = {
    mount = lib.mkOption {
      type = lib.types.str;
      example = "stream.mp3";
    };
    fallback = lib.mkOption {
      type = lib.types.str;
      example = "fallback.mp3";
    };
    nginx = lib.mkEnableOption "enable nginx";
  };

  config = lib.mkIf cfg.enable {
    services.icecast = {
      listen.address = "0.0.0.0";
      listen.port = 8001;
      admin.password = "hackme";
      extraConf = ''
        <authentication>
          <source-password>hackme</source-password>
        </authentication>
        <http-headers>
          <header type="cors" name="Access-Control-Allow-Origin" />
        </http-headers>
        <mount type="normal">
          <mount-name>/${cfg.mount}</mount-name>
          <max-listeners>30</max-listeners>
          <bitrate>64000</bitrate>
          <hidden>false</hidden>
          <public>false</public>
          <fallback-mount>/${cfg.fallback}</fallback-mount>
          <fallback-override>1</fallback-override>
        </mount>
        <mount type="normal">
          <mount-name>/${cfg.fallback}</mount-name>
          <max-listeners>30</max-listeners>
          <bitrate>64000</bitrate>
          <hidden>false</hidden>
          <public>false</public>
        </mount>
      '';
    };
    services.nginx.virtualHosts.${cfg.hostname} = lib.mkIf cfg.nginx {
      enableACME = true;
      forceSSL = true;
      locations."/${cfg.mount}" = {
        proxyPass = "http://localhost:${toString cfg.listen.port}/${cfg.mount}";
        extraConfig = ''
          add_header Access-Control-Allow-Origin *;
        '';
      };
    };
  };
}
@@ -1,21 +0,0 @@
{ config, pkgs, lib, ... }:

let
  cfg = config.services.iodine.server;
in
{
  config = lib.mkIf cfg.enable {
    # iodine DNS-based vpn
    services.iodine.server = {
      ip = "192.168.99.1";
      domain = "tun.neet.dev";
      passwordFile = "/run/agenix/iodine";
    };
    age.secrets.iodine.file = ../../secrets/iodine.age;
    networking.firewall.allowedUDPPorts = [ 53 ];

    networking.nat.internalInterfaces = [
      "dns0" # iodine
    ];
  };
}
@@ -1,4 +1,4 @@
-{ config, lib, pkgs, ... }:
+{ config, lib, ... }:

 with lib;
@@ -1,76 +0,0 @@
{ lib, config, pkgs, ... }:

let
  cfg = config.services.nginx.stream;
  nginxWithRTMP = pkgs.nginx.override {
    modules = [ pkgs.nginxModules.rtmp ];
  };
in
{
  options.services.nginx.stream = {
    enable = lib.mkEnableOption "enable nginx rtmp/hls/dash video streaming";
    port = lib.mkOption {
      type = lib.types.int;
      default = 1935;
      description = "rtmp injest/serve port";
    };
    rtmpName = lib.mkOption {
      type = lib.types.str;
      default = "live";
      description = "the name of the rtmp application";
    };
    hostname = lib.mkOption {
      type = lib.types.str;
      description = "the http host to serve hls";
    };
    httpLocation = lib.mkOption {
      type = lib.types.str;
      default = "/tmp";
      description = "the path of the tmp http files";
    };
  };
  config = lib.mkIf cfg.enable {
    services.nginx = {
      enable = true;

      package = nginxWithRTMP;

      virtualHosts.${cfg.hostname} = {
        enableACME = true;
        forceSSL = true;
        locations = {
          "/stream/hls".root = "${cfg.httpLocation}/hls";
          "/stream/dash".root = "${cfg.httpLocation}/dash";
        };
        extraConfig = ''
          location /stat {
            rtmp_stat all;
          }
        '';
      };

      appendConfig = ''
        rtmp {
          server {
            listen ${toString cfg.port};
            chunk_size 4096;
            application ${cfg.rtmpName} {
              allow publish all;
              allow publish all;
              live on;
              record off;
              hls on;
              hls_path ${cfg.httpLocation}/hls;
              dash on;
              dash_path ${cfg.httpLocation}/dash;
            }
          }
        }
      '';
    };

    networking.firewall.allowedTCPPorts = [
      cfg.port
    ];
  };
}
@@ -1,4 +1,4 @@
-{ lib, config, pkgs, ... }:
+{ lib, config, ... }:

 let
   cfg = config.services.nginx;
38 common/server/ntfy.nix Normal file
@@ -0,0 +1,38 @@
{ lib, config, ... }:

let
  cfg = config.services.ntfy-sh;
in
{
  options.services.ntfy-sh = {
    hostname = lib.mkOption {
      type = lib.types.str;
      example = "ntfy.example.com";
    };
  };

  config = lib.mkIf cfg.enable {
    services.ntfy-sh.settings = {
      base-url = "https://${cfg.hostname}";
      listen-http = "127.0.0.1:2586";
      auth-default-access = "deny-all";
      behind-proxy = true;
      enable-login = true;
    };

    # backups
    backup.group."ntfy".paths = [
      "/var/lib/ntfy-sh"
    ];

    services.nginx.enable = true;
    services.nginx.virtualHosts.${cfg.hostname} = {
      enableACME = true;
      forceSSL = true;
      locations."/" = {
        proxyPass = "http://127.0.0.1:2586";
        proxyWebsockets = true;
      };
    };
  };
}
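The ntfy module above only takes effect when the service is enabled and given a hostname. A minimal sketch of how a machine might turn it on, assuming the module is imported; the hostname is a placeholder, not a value from this repo:

```nix
# Hypothetical machine config enabling the ntfy-sh module defined above.
{
  services.ntfy-sh = {
    enable = true;
    hostname = "ntfy.example.com"; # placeholder; drives base-url and the nginx vhost
  };
}
```

Setting `hostname` once is enough because the module derives `base-url`, the ACME-enabled nginx virtual host, and the proxy target from it.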
@@ -26,6 +26,16 @@
       "printcap name" = "cups";

       "hide files" = "/.nobackup/.DS_Store/._.DS_Store/";

+      # Samba 4.22+ enables SMB3 directory leases by default, allowing clients
+      # to cache directory listings locally. When files are created locally on
+      # the server (bypassing Samba), these cached listings go stale because
+      # kernel oplocks — the mechanism that would break leases on local
+      # changes — is incompatible with smb2 leases. Enabling kernel oplocks
+      # would fix this but forces Samba to disable smb2 leases, durable
+      # handles, and level2 oplocks, losing handle caching performance.
+      # https://wiki.samba.org/index.php/Editing_files_locally_on_server:_interoperability
+      "smb3 directory leases" = "no";
     };
     public = {
       path = "/data/samba/Public";
@@ -1,30 +0,0 @@
{ config, pkgs, lib, ... }:

let
  cfg = config.services.searx;
in
{
  config = lib.mkIf cfg.enable {
    services.searx = {
      environmentFile = "/run/agenix/searx";
      settings = {
        server.port = 43254;
        server.secret_key = "@SEARX_SECRET_KEY@";
        engines = [{
          name = "wolframalpha";
          shortcut = "wa";
          api_key = "@WOLFRAM_API_KEY@";
          engine = "wolframalpha_api";
        }];
      };
    };
    services.nginx.virtualHosts."search.neet.space" = {
      enableACME = true;
      forceSSL = true;
      locations."/" = {
        proxyPass = "http://localhost:${toString config.services.searx.settings.server.port}";
      };
    };
    age.secrets.searx.file = ../../secrets/searx.age;
  };
}
@@ -1,97 +0,0 @@
{ config, pkgs, ... }:

let
  # external
  rtp-port = 8083;
  webrtc-peer-lower-port = 20000;
  webrtc-peer-upper-port = 20100;
  domain = "live.neet.space";

  # internal
  ingest-port = 8084;
  web-port = 8085;
  webrtc-port = 8086;
  toStr = builtins.toString;
in
{
  networking.firewall.allowedUDPPorts = [ rtp-port ];
  networking.firewall.allowedTCPPortRanges = [{
    from = webrtc-peer-lower-port;
    to = webrtc-peer-upper-port;
  }];
  networking.firewall.allowedUDPPortRanges = [{
    from = webrtc-peer-lower-port;
    to = webrtc-peer-upper-port;
  }];

  virtualisation.docker.enable = true;

  services.nginx.virtualHosts.${domain} = {
    enableACME = true;
    forceSSL = true;
    locations = {
      "/" = {
        proxyPass = "http://localhost:${toStr web-port}";
      };
      "websocket" = {
        proxyPass = "http://localhost:${toStr webrtc-port}/websocket";
        proxyWebsockets = true;
      };
    };
  };

  virtualisation.oci-containers = {
    backend = "docker";
    containers = {
      "lightspeed-ingest" = {
        workdir = "/var/lib/lightspeed-ingest";
        image = "projectlightspeed/ingest";
        ports = [
          "${toStr ingest-port}:8084"
        ];
        # imageFile = pkgs.dockerTools.pullImage {
        #   imageName = "projectlightspeed/ingest";
        #   finalImageTag = "version-0.1.4";
        #   imageDigest = "sha256:9fc51833b7c27a76d26e40f092b9cec1ac1c4bfebe452e94ad3269f1f73ff2fc";
        #   sha256 = "19kxl02x0a3i6hlnsfcm49hl6qxnq2f3hfmyv1v8qdaz58f35kd5";
        # };
      };
      "lightspeed-react" = {
        workdir = "/var/lib/lightspeed-react";
        image = "projectlightspeed/react";
        ports = [
          "${toStr web-port}:80"
        ];
        # imageFile = pkgs.dockerTools.pullImage {
        #   imageName = "projectlightspeed/react";
        #   finalImageTag = "version-0.1.3";
        #   imageDigest = "sha256:b7c58425f1593f7b4304726b57aa399b6e216e55af9c0962c5c19333fae638b6";
        #   sha256 = "0d2jh7mr20h7dxgsp7ml7cw2qd4m8ja9rj75dpy59zyb6v0bn7js";
        # };
      };
      "lightspeed-webrtc" = {
        workdir = "/var/lib/lightspeed-webrtc";
        image = "projectlightspeed/webrtc";
        ports = [
          "${toStr webrtc-port}:8080"
          "${toStr rtp-port}:65535/udp"
          "${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}:${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}/tcp"
          "${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}:${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}/udp"
        ];
        cmd = [
          "lightspeed-webrtc"
          "--addr=0.0.0.0"
          "--ip=${domain}"
          "--ports=${toStr webrtc-peer-lower-port}-${toStr webrtc-peer-upper-port}"
          "run"
        ];
        # imageFile = pkgs.dockerTools.pullImage {
        #   imageName = "projectlightspeed/webrtc";
        #   finalImageTag = "version-0.1.2";
        #   imageDigest = "sha256:ddf8b3dd294485529ec11d1234a3fc38e365a53c4738998c6bc2c6930be45ecf";
        #   sha256 = "1bdy4ak99fjdphj5bsk8rp13xxmbqdhfyfab14drbyffivg9ad2i";
        # };
      };
    };
  };
}
@@ -1,7 +0,0 @@
Copyright 2020 Matthijs Steen

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -1,37 +0,0 @@
# Visual Studio Code Server support in NixOS

Experimental support for VS Code Server in NixOS. The NodeJS by default supplied by VS Code cannot be used within NixOS due to missing hardcoded paths, so it is automatically replaced by a symlink to a compatible version of NodeJS that does work under NixOS.

## Installation

```nix
{
  imports = [
    (fetchTarball "https://github.com/msteen/nixos-vscode-server/tarball/master")
  ];

  services.vscode-server.enable = true;
}
```

And then enable them for the relevant users:

```
systemctl --user enable auto-fix-vscode-server.service
```

### Home Manager

```nix
{
  imports = [
    "${fetchTarball "https://github.com/msteen/nixos-vscode-server/tarball/master"}/modules/vscode-server/home.nix"
  ];

  services.vscode-server.enable = true;
}
```

## Usage

When the service is enabled and running it should simply work, there is nothing for you to do.
@@ -1 +0,0 @@
import ./modules/vscode-server
@@ -1,8 +0,0 @@
import ./module.nix ({ name, description, serviceConfig }:

{
  systemd.user.services.${name} = {
    inherit description serviceConfig;
    wantedBy = [ "default.target" ];
  };
})
@@ -1,15 +0,0 @@
import ./module.nix ({ name, description, serviceConfig }:

{
  systemd.user.services.${name} = {
    Unit = {
      Description = description;
    };

    Service = serviceConfig;

    Install = {
      WantedBy = [ "default.target" ];
    };
  };
})
@@ -1,42 +0,0 @@
moduleConfig:
{ lib, pkgs, ... }:

with lib;

{
  options.services.vscode-server.enable = with types; mkEnableOption "VS Code Server";

  config = moduleConfig rec {
    name = "auto-fix-vscode-server";
    description = "Automatically fix the VS Code server used by the remote SSH extension";
    serviceConfig = {
      # When a monitored directory is deleted, it will stop being monitored.
      # Even if it is later recreated it will not restart monitoring it.
      # Unfortunately the monitor does not kill itself when it stops monitoring,
      # so rather than creating our own restart mechanism, we leverage systemd to do this for us.
      Restart = "always";
      RestartSec = 0;
      ExecStart = pkgs.writeShellScript "${name}.sh" ''
        set -euo pipefail
        PATH=${makeBinPath (with pkgs; [ coreutils inotify-tools ])}
        bin_dir=~/.vscode-server/bin
        [[ -e $bin_dir ]] &&
          find "$bin_dir" -mindepth 2 -maxdepth 2 -name node -type f -exec ln -sfT ${pkgs.nodejs-12_x}/bin/node {} \; ||
          mkdir -p "$bin_dir"
        while IFS=: read -r bin_dir event; do
          # A new version of the VS Code Server is being created.
          if [[ $event == 'CREATE,ISDIR' ]]; then
            # Create a trigger to know when their node is being created and replace it for our symlink.
            touch "$bin_dir/node"
            inotifywait -qq -e DELETE_SELF "$bin_dir/node"
            ln -sfT ${pkgs.nodejs-12_x}/bin/node "$bin_dir/node"
          # The monitored directory is deleted, e.g. when "Uninstall VS Code Server from Host" has been run.
          elif [[ $event == DELETE_SELF ]]; then
            # See the comments above Restart in the service config.
            exit 0
          fi
        done < <(inotifywait -q -m -e CREATE,ISDIR -e DELETE_SELF --format '%w%f:%e' "$bin_dir")
      '';
    };
  };
}
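The deleted module above works by watching `~/.vscode-server/bin` with inotify and swapping each downloaded `node` binary for a symlink to a Nix-provided Node.js. The one-shot replacement step (the `find ... -exec ln -sfT` part, not the inotify loop) can be sketched in Python; this is a hypothetical illustration, and the function name and the `nix_node` path are mine, not part of the module:

```python
import os
import tempfile

def fix_node_binaries(bin_dir: str, nix_node: str) -> list[str]:
    """Replace every <bin_dir>/<version>/node regular file with a symlink to
    nix_node, mirroring the module's `ln -sfT` pass. Returns the fixed paths."""
    fixed = []
    if not os.path.isdir(bin_dir):
        return fixed
    for version in os.listdir(bin_dir):
        node = os.path.join(bin_dir, version, "node")
        # Only touch real files; an existing symlink was already fixed.
        if os.path.lexists(node) and not os.path.islink(node):
            os.remove(node)
            os.symlink(nix_node, node)  # equivalent of `ln -sfT $nix_node $node`
            fixed.append(node)
    return fixed

# Usage: simulate one freshly downloaded server version.
with tempfile.TemporaryDirectory() as tmp:
    bin_dir = os.path.join(tmp, "bin")
    os.makedirs(os.path.join(bin_dir, "abc123"))
    open(os.path.join(bin_dir, "abc123", "node"), "w").close()  # fake bundled node
    fixed = fix_node_binaries(bin_dir, "/run/current-system/sw/bin/node")
    assert os.path.islink(fixed[0])
```

The second pass over an already-fixed directory is a no-op, which is why the real service can safely re-run on every restart.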
@@ -1,34 +0,0 @@
{ config, pkgs, lib, ... }:

let
  cfg = config.services.zerobin;
in
{
  options.services.zerobin = {
    host = lib.mkOption {
      type = lib.types.str;
      example = "example.com";
    };
    port = lib.mkOption {
      type = lib.types.int;
      default = 33422;
    };
  };
  config = lib.mkIf cfg.enable {
    services.zerobin.listenPort = cfg.port;
    services.zerobin.listenAddress = "localhost";

    services.nginx.virtualHosts.${cfg.host} = {
      enableACME = true;
      forceSSL = true;
      locations."/" = {
        proxyPass = "http://localhost:${toString cfg.port}";
        proxyWebsockets = true;
      };
    };

    # zerobin service is broken in nixpkgs currently
    systemd.services.zerobin.serviceConfig.ExecStart = lib.mkForce
      "${pkgs.zerobin}/bin/zerobin --host=${cfg.listenAddress} --port=${toString cfg.listenPort} --data-dir=${cfg.dataDir}";
  };
}
@@ -1,4 +1,4 @@
-{ config, lib, pkgs, ... }:
+{ pkgs, ... }:

 # Improvements to the default shell
 # - use nix-index for command-not-found
@@ -1,4 +1,4 @@
-{ config, lib, pkgs, ... }:
+{ config, lib, ... }:

 {
   programs.ssh.knownHosts = lib.filterAttrs (n: v: v != null) (lib.concatMapAttrs
87 common/zfs-alerts.nix Normal file
@@ -0,0 +1,87 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.ntfy-alerts;
  hasZfs = config.boot.supportedFilesystems.zfs or false;
  hasNtfy = config.thisMachine.hasRole."ntfy";

  checkScript = pkgs.writeShellScript "zfs-health-check" ''
    PATH="${lib.makeBinPath [ pkgs.zfs pkgs.coreutils pkgs.gawk pkgs.curl ]}"

    unhealthy=""

    # Check pool health status
    while IFS=$'\t' read -r pool state; do
      if [ "$state" != "ONLINE" ]; then
        unhealthy="$unhealthy"$'\n'"Pool '$pool' is $state"
      fi
    done < <(zpool list -H -o name,health)

    # Check for errors (read, write, checksum) on any vdev
    while IFS=$'\t' read -r pool errors; do
      if [ "$errors" != "No known data errors" ] && [ -n "$errors" ]; then
        unhealthy="$unhealthy"$'\n'"Pool '$pool' has errors: $errors"
      fi
    done < <(zpool status -x 2>/dev/null | awk '
      /pool:/ { pool=$2 }
      /errors:/ { sub(/^[[:space:]]*errors: /, ""); print pool "\t" $0 }
    ')

    # Check for any drives with non-zero error counts
    drive_errors=$(zpool status 2>/dev/null | awk '
      /DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED/ && !/pool:/ && !/state:/ {
        print " " $0
      }
      /[0-9]+[[:space:]]+[0-9]+[[:space:]]+[0-9]+/ {
        if ($3 > 0 || $4 > 0 || $5 > 0) {
          print " " $1 " (read:" $3 " write:" $4 " cksum:" $5 ")"
        }
      }
    ')
    if [ -n "$drive_errors" ]; then
      unhealthy="$unhealthy"$'\n'"Device errors:"$'\n'"$drive_errors"
    fi

    if [ -n "$unhealthy" ]; then
      message="ZFS health check failed on ${config.networking.hostName}:$unhealthy"

      curl \
        --fail --silent --show-error \
        --max-time 30 --retry 3 \
        -H "Authorization: Bearer $NTFY_TOKEN" \
        -H "Title: ZFS issue on ${config.networking.hostName}" \
        -H "Priority: urgent" \
        -H "Tags: warning" \
        -d "$message" \
        "${cfg.serverUrl}/${cfg.topic}"

      echo "$message" >&2
    fi

    echo "All ZFS pools healthy"
  '';
in
{
  config = lib.mkIf (hasZfs && hasNtfy) {
    systemd.services.zfs-health-check = {
      description = "Check ZFS pool health and alert on issues";
      wants = [ "network-online.target" ];
      after = [ "network-online.target" "zfs.target" ];
      serviceConfig = {
        Type = "oneshot";
        EnvironmentFile = "/run/agenix/ntfy-token";
        ExecStart = checkScript;
      };
    };

    systemd.timers.zfs-health-check = {
      description = "Periodic ZFS health check";
      wantedBy = [ "timers.target" ];
      timerConfig = {
        OnCalendar = "daily";
        Persistent = true;
        RandomizedDelaySec = "1h";
      };
    };
  };
}
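The first shell loop in the check script above parses `zpool list -H -o name,health`, whose `-H` output is one tab-separated `name<TAB>state` pair per line. The parsing step can be checked in isolation with a small Python sketch (the function name and sample output are mine, for illustration only):

```python
def unhealthy_pools(zpool_list_output: str) -> list[str]:
    """Parse `zpool list -H -o name,health` output (tab-separated name/state
    pairs, one pool per line) and report every pool not in the ONLINE state."""
    problems = []
    for line in zpool_list_output.strip().splitlines():
        pool, state = line.split("\t")
        if state != "ONLINE":
            problems.append(f"Pool '{pool}' is {state}")
    return problems

# Hypothetical sample output for a healthy pool and a degraded one.
sample = "tank\tONLINE\nbackup\tDEGRADED\n"
print(unhealthy_pools(sample))  # ["Pool 'backup' is DEGRADED"]
```

Only non-ONLINE states are reported, matching the shell loop's `[ "$state" != "ONLINE" ]` test; a fully healthy system yields an empty list and no ntfy push.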
62 flake.lock generated
@@ -14,11 +14,11 @@
     ]
   },
   "locked": {
-    "lastModified": 1762618334,
-    "narHash": "sha256-wyT7Pl6tMFbFrs8Lk/TlEs81N6L+VSybPfiIgzU8lbQ=",
+    "lastModified": 1770165109,
+    "narHash": "sha256-9VnK6Oqai65puVJ4WYtCTvlJeXxMzAp/69HhQuTdl/I=",
     "owner": "ryantm",
     "repo": "agenix",
-    "rev": "fcdea223397448d35d9b31f798479227e80183f6",
+    "rev": "b027ee29d959fda4b60b57566d64c98a202e0feb",
     "type": "github"
   },
   "original": {
@@ -53,11 +53,11 @@
     ]
   },
   "locked": {
-    "lastModified": 1770491193,
-    "narHash": "sha256-zdnWeXmPZT8BpBo52s4oansT1Rq0SNzksXKpEcMc5lE=",
+    "lastModified": 1771632347,
+    "narHash": "sha256-kNm0YX9RUwf7GZaWQu2F71ccm4OUMz0xFkXn6mGPfps=",
     "owner": "sadjow",
     "repo": "claude-code-nix",
-    "rev": "f68a2683e812d1e4f9a022ff3e0206d46347d019",
+    "rev": "ec90f84b2ea21f6d2272e00d1becbc13030d1895",
     "type": "github"
   },
   "original": {
@@ -124,11 +124,11 @@
     ]
   },
   "locked": {
-    "lastModified": 1766051518,
-    "narHash": "sha256-znKOwPXQnt3o7lDb3hdf19oDo0BLP4MfBOYiWkEHoik=",
+    "lastModified": 1770019181,
+    "narHash": "sha256-hwsYgDnby50JNVpTRYlF3UR/Rrpt01OrxVuryF40CFY=",
     "owner": "serokell",
     "repo": "deploy-rs",
-    "rev": "d5eff7f948535b9c723d60cd8239f8f11ddc90fa",
+    "rev": "77c906c0ba56aabdbc72041bf9111b565cdd6171",
     "type": "github"
   },
   "original": {
@@ -186,11 +186,11 @@
     ]
   },
   "locked": {
-    "lastModified": 1763988335,
-    "narHash": "sha256-QlcnByMc8KBjpU37rbq5iP7Cp97HvjRP0ucfdh+M4Qc=",
+    "lastModified": 1769939035,
+    "narHash": "sha256-Fok2AmefgVA0+eprw2NDwqKkPGEI5wvR+twiZagBvrg=",
     "owner": "cachix",
     "repo": "git-hooks.nix",
-    "rev": "50b9238891e388c9fdc6a5c49e49c42533a1b5ce",
+    "rev": "a8ca480175326551d6c4121498316261cbb5b260",
     "type": "github"
   },
   "original": {
@@ -228,11 +228,11 @@
     ]
   },
   "locked": {
-    "lastModified": 1768068402,
-    "narHash": "sha256-bAXnnJZKJiF7Xr6eNW6+PhBf1lg2P1aFUO9+xgWkXfA=",
+    "lastModified": 1771756436,
+    "narHash": "sha256-Tl2I0YXdhSTufGqAaD1ySh8x+cvVsEI1mJyJg12lxhI=",
     "owner": "nix-community",
     "repo": "home-manager",
-    "rev": "8bc5473b6bc2b6e1529a9c4040411e1199c43b4c",
+    "rev": "5bd3589390b431a63072868a90c0f24771ff4cbb",
     "type": "github"
   },
   "original": {
@@ -250,11 +250,11 @@
     "spectrum": "spectrum"
   },
   "locked": {
-    "lastModified": 1770310890,
-    "narHash": "sha256-lyWAs4XKg3kLYaf4gm5qc5WJrDkYy3/qeV5G733fJww=",
+    "lastModified": 1771802632,
+    "narHash": "sha256-UAH8YfrHRvXAMeFxUzJ4h4B1loz1K1wiNUNI8KiPqOg=",
     "owner": "astro",
     "repo": "microvm.nix",
-    "rev": "68c9f9c6ca91841f04f726a298c385411b7bfcd5",
+    "rev": "b67e3d80df3ec35bdfd3a00ad64ee437ef4fcded",
     "type": "github"
   },
   "original": {
@@ -270,11 +270,11 @@
     ]
   },
   "locked": {
-    "lastModified": 1765267181,
-    "narHash": "sha256-d3NBA9zEtBu2JFMnTBqWj7Tmi7R5OikoU2ycrdhQEws=",
+    "lastModified": 1771734689,
+    "narHash": "sha256-/phvMgr1yutyAMjKnZlxkVplzxHiz60i4rc+gKzpwhg=",
     "owner": "Mic92",
     "repo": "nix-index-database",
-    "rev": "82befcf7dc77c909b0f2a09f5da910ec95c5b78f",
+    "rev": "8f590b832326ab9699444f3a48240595954a4b10",
     "type": "github"
   },
   "original": {
@@ -285,11 +285,11 @@
   },
   "nixos-hardware": {
     "locked": {
-      "lastModified": 1767185284,
-      "narHash": "sha256-ljDBUDpD1Cg5n3mJI81Hz5qeZAwCGxon4kQW3Ho3+6Q=",
+      "lastModified": 1771423359,
+      "narHash": "sha256-yRKJ7gpVmXbX2ZcA8nFi6CMPkJXZGjie2unsiMzj3Ig=",
       "owner": "NixOS",
       "repo": "nixos-hardware",
-      "rev": "40b1a28dce561bea34858287fbb23052c3ee63fe",
+      "rev": "740a22363033e9f1bb6270fbfb5a9574067af15b",
       "type": "github"
     },
     "original": {
@@ -301,16 +301,16 @@
   },
   "nixpkgs": {
     "locked": {
-      "lastModified": 1768250893,
-      "narHash": "sha256-fWNJYFx0QvnlGlcw54EoOYs/wv2icINHUz0FVdh9RIo=",
+      "lastModified": 1771369470,
+      "narHash": "sha256-0NBlEBKkN3lufyvFegY4TYv5mCNHbi5OmBDrzihbBMQ=",
       "owner": "NixOS",
       "repo": "nixpkgs",
-      "rev": "3971af1a8fc3646b1d554cb1269b26c84539c22e",
+      "rev": "0182a361324364ae3f436a63005877674cf45efb",
       "type": "github"
     },
     "original": {
       "owner": "NixOS",
-      "ref": "master",
+      "ref": "nixos-unstable",
       "repo": "nixpkgs",
       "type": "github"
     }
@@ -344,11 +344,11 @@
     ]
   },
   "locked": {
-    "lastModified": 1766321686,
-    "narHash": "sha256-icOWbnD977HXhveirqA10zoqvErczVs3NKx8Bj+ikHY=",
+    "lastModified": 1770659507,
+    "narHash": "sha256-RVZno9CypFN3eHxfULKN1K7mb/Cq0HkznnWqnshxpWY=",
     "owner": "simple-nixos-mailserver",
     "repo": "nixos-mailserver",
-    "rev": "7d433bf89882f61621f95082e90a4ab91eb0bdd3",
+    "rev": "781e833633ebc0873d251772a74e4400a73f5d78",
     "type": "gitlab"
   },
   "original": {
48 flake.nix
@@ -1,7 +1,7 @@
 {
   inputs = {
     # nixpkgs
-    nixpkgs.url = "github:NixOS/nixpkgs/master";
+    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

     # Common Utils Among flake inputs
     systems.url = "github:nix-systems/default";
@@ -156,36 +156,30 @@
       nixpkgs.lib.mapAttrs
         (hostname: cfg:
           mkSystem cfg.arch nixpkgs cfg.configurationPath hostname)
-        machineHosts
-      //
-      (
-        let
-          mkEphemeral = system: nixpkgs.lib.nixosSystem {
-            inherit system;
-            modules = [
-              ./machines/ephemeral/minimal.nix
-              inputs.nix-index-database.nixosModules.default
-            ];
-          };
-        in
-        {
-          ephemeral-x86_64 = mkEphemeral "x86_64-linux";
-          ephemeral-aarch64 = mkEphemeral "aarch64-linux";
-        }
-      );
+        machineHosts;

     # kexec produces a tarball; for a self-extracting bundle see:
     # https://github.com/nix-community/nixos-generators/blob/master/formats/kexec.nix#L60
-    packages = {
-      "x86_64-linux" = {
-        kexec = self.nixosConfigurations.ephemeral-x86_64.config.system.build.images.kexec;
-        iso = self.nixosConfigurations.ephemeral-x86_64.config.system.build.images.iso;
-      };
-      "aarch64-linux" = {
-        kexec = self.nixosConfigurations.ephemeral-aarch64.config.system.build.images.kexec;
-        iso = self.nixosConfigurations.ephemeral-aarch64.config.system.build.images.iso;
-      };
-    };
+    packages =
+      let
+        mkEphemeral = system: nixpkgs.lib.nixosSystem {
+          inherit system;
+          modules = [
+            ./machines/ephemeral/minimal.nix
+            inputs.nix-index-database.nixosModules.default
+          ];
+        };
+      in
+      {
+        "x86_64-linux" = {
+          kexec = (mkEphemeral "x86_64-linux").config.system.build.images.kexec;
+          iso = (mkEphemeral "x86_64-linux").config.system.build.images.iso;
+        };
+        # "aarch64-linux" = {
+        #   kexec = (mkEphemeral "aarch64-linux").config.system.build.images.kexec;
+        #   iso = (mkEphemeral "aarch64-linux").config.system.build.images.iso;
+        # };
+      };

     overlays.default = import ./overlays { inherit inputs; };
     nixosModules.kernel-modules = import ./overlays/kernel-modules;
@@ -1,4 +1,4 @@
-{ config, lib, pkgs, osConfig, ... }:
+{ lib, pkgs, osConfig, ... }:

 # https://home-manager-options.extranix.com/
 # https://nix-community.github.io/home-manager/options.xhtml
@@ -61,7 +61,8 @@ in
   programs.vscode = {
     enable = thisMachineIsPersonal;
-    package = pkgs.vscodium;
+    # Must use fhs version for vscode-lldb
+    package = pkgs.vscodium-fhs;
     profiles.default = {
       userSettings = {
         editor.formatOnSave = true;
@@ -78,6 +79,15 @@ in
         lsp.serverPort = 6005; # port needs to match Godot configuration
         editorPath.godot4 = "godot-mono";
       };
+      rust-analyzer = {
+        restartServerOnConfigChange = true;
+        testExplorer = true;
+        server.path = "rust-analyzer"; # Use the rust-analyzer from PATH (which is set by nixEnvSelector from the project's flake)
+      };
+      nixEnvSelector = {
+        useFlakes = true; # This hasn't ever worked for me and I have to use shell.nix... but maybe someday
+        suggestion = false; # Stop the really annoying nagging
+      };
     };
     extensions = with pkgs.vscode-extensions; [
       bbenoist.nix # nix syntax support
@@ -13,6 +13,18 @@
   # Upstream interface for sandbox networking (NAT)
   networking.sandbox.upstreamInterface = lib.mkDefault "enp191s0";

+  # Enable sandboxed workspace
+  sandboxed-workspace = {
+    enable = true;
+    workspaces.test-incus = {
+      type = "incus";
+      autoStart = true;
+      config = ./workspaces/test-container.nix;
+      ip = "192.168.83.90";
+      hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0SNSy/MdW38NqKzLr1SG8WKrs8XkrqibacaJtJPzgW";
+    };
+  };
+
   environment.systemPackages = with pkgs; [
     system76-keyboard-configurator
   ];
@@ -8,6 +8,7 @@
   systemRoles = [
     "personal"
     "dns-challenge"
+    "ntfy"
   ];

   hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID/Df5lG07Il7fizEgZR/T9bMlR0joESRJ7cqM9BkOyP";
@@ -1,4 +1,4 @@
-{ config, lib, pkgs, ... }:
+{ pkgs, ... }:

 # Test container workspace configuration
 #
@@ -1,23 +0,0 @@
{ config, lib, pkgs, ... }:

# Example VM workspace configuration
#
# Add to sandboxed-workspace.workspaces in machines/fry/default.nix:
#   sandboxed-workspace.workspaces.example = {
#     type = "vm";
#     config = ./workspaces/example.nix;
#     ip = "192.168.83.10";
#   };
#
# The workspace name ("example") becomes the hostname automatically.
# The IP is configured in default.nix, not here.

{
  # Install packages as needed
  environment.systemPackages = with pkgs; [
    # Add packages here
  ];

  # Additional shares beyond the standard ones (workspace, ssh-host-keys, claude-config):
  # microvm.shares = [ ... ];
}
@@ -1,4 +1,4 @@
-{ config, pkgs, lib, ... }:
+{ lib, ... }:

 {
   imports = [
@@ -7,6 +7,7 @@

   systemRoles = [
     "personal"
+    "ntfy"
   ];

   hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQi3q8jU6vRruExAL60J7GFO1gS8HsmXVJuKRT4ljrG";
@@ -1,4 +1,4 @@
-{ config, pkgs, fetchurl, lib, ... }:
+{ ... }:

 {
   imports = [
@@ -1,7 +1,7 @@
 # Do not modify this file! It was generated by ‘nixos-generate-config’
 # and may be overwritten by future invocations. Please make changes
 # to /etc/nixos/configuration.nix instead.
-{ config, lib, pkgs, modulesPath, ... }:
+{ ... }:

 {
   imports = [ ];
@@ -1,9 +0,0 @@
{ config, pkgs, lib, ... }:

{
  imports = [
    ./hardware-configuration.nix
  ];

  networking.hostName = "phil";
}
@@ -1,46 +0,0 @@
# Do not modify this file! It was generated by ‘nixos-generate-config’
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ config, lib, pkgs, modulesPath, ... }:

{
  imports =
    [
      (modulesPath + "/profiles/qemu-guest.nix")
    ];

  # because grub just doesn't work for some reason
  boot.loader.systemd-boot.enable = true;

  remoteLuksUnlock.enable = true;
  remoteLuksUnlock.enableTorUnlock = false;

  boot.initrd.availableKernelModules = [ "xhci_pci" ];
  boot.initrd.kernelModules = [ "dm-snapshot" ];
  boot.kernelModules = [ ];
  boot.extraModulePackages = [ ];

  boot.initrd.luks.devices."enc-pv" = {
    device = "/dev/disk/by-uuid/d26c1820-4c39-4615-98c2-51442504e194";
    allowDiscards = true;
  };

  fileSystems."/" =
    {
      device = "/dev/disk/by-uuid/851bfde6-93cd-439e-9380-de28aa87eda9";
      fsType = "btrfs";
    };

  fileSystems."/boot" =
    {
      device = "/dev/disk/by-uuid/F185-C4E5";
      fsType = "vfat";
    };

  swapDevices =
    [{ device = "/dev/disk/by-uuid/d809e3a1-3915-405a-a200-4429c5efdf87"; }];

  networking.interfaces.enp0s6.useDHCP = lib.mkDefault true;

  nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
}
@@ -1,20 +0,0 @@
{
  hostNames = [
    "phil"
    "phil.neet.dev"
  ];

  arch = "aarch64-linux";

  systemRoles = [
    "server"
    "nix-builder"
  ];

  hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBlgRPpuUkZqe8/lHugRPm/m2vcN9psYhh5tENHZt9I2";

  remoteUnlock = {
    hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK0RodotOXLMy/w70aa096gaNqPBnfgiXR5ZAH4+wGzd";
    clearnetHost = "unlock.phil.neet.dev";
  };
}
@@ -77,9 +77,6 @@
   # pin postgresql for matrix (will need to migrate eventually)
   services.postgresql.package = pkgs.postgresql_15;

-  # iodine DNS-based vpn
-  # services.iodine.server.enable = true;
-
   # proxied web services
   services.nginx.enable = true;
   services.nginx.virtualHosts."navidrome.neet.cloud" = {
@@ -111,4 +108,12 @@
   # librechat
   services.librechat-container.enable = true;
   services.librechat-container.host = "chat.neet.dev";
+
+  # push notifications
+  services.ntfy-sh.enable = true;
+  services.ntfy-sh.hostname = "ntfy.neet.dev";
+
+  # uptime monitoring
+  services.gatus.enable = true;
+  services.gatus.hostname = "status.neet.dev";
 }
@@ -1,4 +1,4 @@
-{ config, lib, pkgs, modulesPath, ... }:
+{ lib, modulesPath, ... }:

 {
   imports =
@@ -10,12 +10,12 @@
   systemRoles = [
     "server"
     "email-server"
-    "iodine"
     "pia"
     "nextcloud"
     "dailybot"
     "gitea"
     "librechat"
+    "ntfy"
   ];

   hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMBBlTAIp38RhErU1wNNV5MBeb+WGH0mhF/dxh5RsAXN";
@@ -1,37 +0,0 @@
{ config, lib, pkgs, ... }:

{
  imports = [
    ./hardware-configuration.nix
    ./router.nix
  ];

  # https://dataswamp.org/~solene/2022-08-03-nixos-with-live-usb-router.html
  # https://github.com/mdlayher/homelab/blob/391cfc0de06434e4dee0abe2bec7a2f0637345ac/nixos/routnerr-2/configuration.nix
  # https://github.com/skogsbrus/os/blob/master/sys/router.nix
  # http://trac.gateworks.com/wiki/wireless/wifi

  system.autoUpgrade.enable = true;

  services.tailscale.exitNode = true;

  router.enable = true;
  router.privateSubnet = "192.168.3";

  services.iperf3.enable = true;

  # networking.useDHCP = lib.mkForce true;

  networking.usePredictableInterfaceNames = false;

  powerManagement.cpuFreqGovernor = "ondemand";

  services.irqbalance.enable = true;

  # services.miniupnpd = {
  #   enable = true;
  #   externalInterface = "eth0";
  #   internalIPs = [ "br0" ];
  # };
}
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@@ -1,223 +0,0 @@
#!/bin/sh

# TODO allow adding custom parameters to ht_capab, vht_capab
# TODO detect bad channel numbers (preferably not at runtime)
# TODO error if 160mhz is not supported
# TODO 'b' only goes up to 40mhz

# gets the phy number using the input interface
# Ex: get_phy_number("wlan0") -> "1"
get_phy_number() {
  local interface=$1
  phy=$(iw dev "$interface" info | awk '/phy/ {gsub(/#/,"");print $2}')
  if [[ -z "$phy" ]]; then
    echo "Error: interface not found" >&2
    exit 1
  fi
  phy=phy$phy
}

get_ht_cap_mask() {
  ht_cap_mask=0

  for cap in $(iw phy "$phy" info | grep 'Capabilities:' | cut -d: -f2); do
    ht_cap_mask="$(($ht_cap_mask | $cap))"
  done

  local cap_rx_stbc
  cap_rx_stbc=$((($ht_cap_mask >> 8) & 3))
  ht_cap_mask="$(( ($ht_cap_mask & ~(0x300)) | ($cap_rx_stbc << 8) ))"
}

get_vht_cap_mask() {
  vht_cap_mask=0
  for cap in $(iw phy "$phy" info | awk -F "[()]" '/VHT Capabilities/ { print $2 }'); do
    vht_cap_mask="$(($vht_cap_mask | $cap))"
  done

  local cap_rx_stbc
  cap_rx_stbc=$((($vht_cap_mask >> 8) & 7))
  vht_cap_mask="$(( ($vht_cap_mask & ~(0x700)) | ($cap_rx_stbc << 8) ))"
}

mac80211_add_capabilities() {
  local __var="$1"; shift
  local __mask="$1"; shift
  local __out= oifs

  oifs="$IFS"
  IFS=:
  for capab in "$@"; do
    set -- $capab
    [ "$(($4))" -gt 0 ] || continue
    [ "$(($__mask & $2))" -eq "$((${3:-$2}))" ] || continue
    __out="$__out[$1]"
  done
  IFS="$oifs"

  export -n -- "$__var=$__out"
}

add_special_ht_capabilities() {
  case "$hwmode" in
    a)
      case "$(( ($channel / 4) % 2 ))" in
        1) ht_capab="$ht_capab[HT40+]";;
        0) ht_capab="$ht_capab[HT40-]";;
      esac
      ;;
    *)
      if [ "$channel" -lt 7 ]; then
        ht_capab="$ht_capab[HT40+]"
      else
        ht_capab="$ht_capab[HT40-]"
      fi
      ;;
  esac
}

add_special_vht_capabilities() {
  local cap_ant
  [ "$(($vht_cap_mask & 0x800))" -gt 0 ] && {
    cap_ant="$(( ( ($vht_cap_mask >> 16) & 3 ) + 1 ))"
    [ "$cap_ant" -gt 1 ] && vht_capab="$vht_capab[SOUNDING-DIMENSION-$cap_ant]"
  }

  [ "$(($vht_cap_mask & 0x1000))" -gt 0 ] && {
    cap_ant="$(( ( ($vht_cap_mask >> 13) & 3 ) + 1 ))"
    [ "$cap_ant" -gt 1 ] && vht_capab="$vht_capab[BF-ANTENNA-$cap_ant]"
  }

  if [ "$(($vht_cap_mask & 12))" -eq 4 ]; then
    vht_capab="$vht_capab[VHT160]"
  fi

  local vht_max_mpdu_hw=3895
  [ "$(($vht_cap_mask & 3))" -ge 1 ] && \
    vht_max_mpdu_hw=7991
  [ "$(($vht_cap_mask & 3))" -ge 2 ] && \
    vht_max_mpdu_hw=11454
  [ "$vht_max_mpdu_hw" != 3895 ] && \
    vht_capab="$vht_capab[MAX-MPDU-$vht_max_mpdu_hw]"

  # maximum A-MPDU length exponent
  local vht_max_a_mpdu_len_exp_hw=0
  [ "$(($vht_cap_mask & 58720256))" -ge 8388608 ] && \
    vht_max_a_mpdu_len_exp_hw=1
  [ "$(($vht_cap_mask & 58720256))" -ge 16777216 ] && \
    vht_max_a_mpdu_len_exp_hw=2
  [ "$(($vht_cap_mask & 58720256))" -ge 25165824 ] && \
    vht_max_a_mpdu_len_exp_hw=3
  [ "$(($vht_cap_mask & 58720256))" -ge 33554432 ] && \
    vht_max_a_mpdu_len_exp_hw=4
  [ "$(($vht_cap_mask & 58720256))" -ge 41943040 ] && \
    vht_max_a_mpdu_len_exp_hw=5
  [ "$(($vht_cap_mask & 58720256))" -ge 50331648 ] && \
    vht_max_a_mpdu_len_exp_hw=6
  [ "$(($vht_cap_mask & 58720256))" -ge 58720256 ] && \
    vht_max_a_mpdu_len_exp_hw=7
  vht_capab="$vht_capab[MAX-A-MPDU-LEN-EXP$vht_max_a_mpdu_len_exp_hw]"

  local vht_link_adapt_hw=0
  [ "$(($vht_cap_mask & 201326592))" -ge 134217728 ] && \
    vht_link_adapt_hw=2
  [ "$(($vht_cap_mask & 201326592))" -ge 201326592 ] && \
    vht_link_adapt_hw=3
  [ "$vht_link_adapt_hw" != 0 ] && \
    vht_capab="$vht_capab[VHT-LINK-ADAPT-$vht_link_adapt_hw]"
}

calculate_channel_offsets() {
  vht_oper_chwidth=0
  vht_oper_centr_freq_seg0_idx=

  local idx="$channel"
  case "$channelWidth" in
    40)
      case "$(( ($channel / 4) % 2 ))" in
        1) idx=$(($channel + 2));;
        0) idx=$(($channel - 2));;
      esac
      vht_oper_centr_freq_seg0_idx=$idx
      ;;
    80)
      case "$(( ($channel / 4) % 4 ))" in
        1) idx=$(($channel + 6));;
        2) idx=$(($channel + 2));;
        3) idx=$(($channel - 2));;
        0) idx=$(($channel - 6));;
      esac
      vht_oper_chwidth=1
      vht_oper_centr_freq_seg0_idx=$idx
      ;;
    160)
      case "$channel" in
        36|40|44|48|52|56|60|64) idx=50;;
        100|104|108|112|116|120|124|128) idx=114;;
      esac
      vht_oper_chwidth=2
      vht_oper_centr_freq_seg0_idx=$idx
      ;;
  esac

  he_oper_chwidth=$vht_oper_chwidth
  he_oper_centr_freq_seg0_idx=$vht_oper_centr_freq_seg0_idx
}

interface=$1
channel=$2
hwmode=$3
channelWidth=$4

get_phy_number $interface
get_ht_cap_mask
get_vht_cap_mask

mac80211_add_capabilities vht_capab $vht_cap_mask \
  RXLDPC:0x10::1 \
  SHORT-GI-80:0x20::1 \
  SHORT-GI-160:0x40::1 \
  TX-STBC-2BY1:0x80::1 \
  SU-BEAMFORMER:0x800::1 \
  SU-BEAMFORMEE:0x1000::1 \
  MU-BEAMFORMER:0x80000::1 \
  MU-BEAMFORMEE:0x100000::1 \
  VHT-TXOP-PS:0x200000::1 \
  HTC-VHT:0x400000::1 \
  RX-ANTENNA-PATTERN:0x10000000::1 \
  TX-ANTENNA-PATTERN:0x20000000::1 \
  RX-STBC-1:0x700:0x100:1 \
  RX-STBC-12:0x700:0x200:1 \
  RX-STBC-123:0x700:0x300:1 \
  RX-STBC-1234:0x700:0x400:1 \

mac80211_add_capabilities ht_capab $ht_cap_mask \
  LDPC:0x1::1 \
  GF:0x10::1 \
  SHORT-GI-20:0x20::1 \
  SHORT-GI-40:0x40::1 \
  TX-STBC:0x80::1 \
  RX-STBC1:0x300::1 \
  MAX-AMSDU-7935:0x800::1 \

# TODO this is active when the driver doesn't support it?
#  DSSS_CCK-40:0x1000::1 \

# TODO these are active when the driver doesn't support them?
#  RX-STBC1:0x300:0x100:1 \
#  RX-STBC12:0x300:0x200:1 \
#  RX-STBC123:0x300:0x300:1 \

add_special_ht_capabilities
add_special_vht_capabilities

echo ht_capab=$ht_capab
echo vht_capab=$vht_capab

if [ "$channelWidth" != "20" ]; then
  calculate_channel_offsets
  echo he_oper_chwidth=$he_oper_chwidth
  echo vht_oper_chwidth=$vht_oper_chwidth
  echo he_oper_centr_freq_seg0_idx=$he_oper_centr_freq_seg0_idx
  echo vht_oper_centr_freq_seg0_idx=$vht_oper_centr_freq_seg0_idx
fi
@@ -1,48 +0,0 @@
{ config, pkgs, ... }:

{
  # kernel
  boot.kernelPackages = pkgs.linuxPackages_latest;
  boot.initrd.availableKernelModules = [ "igb" "mt7915e" "xhci_pci" "ahci" "ehci_pci" "usb_storage" "sd_mod" "sdhci_pci" ];
  boot.initrd.kernelModules = [ "dm-snapshot" ];
  boot.kernelModules = [ "kvm-amd" ];
  boot.extraModulePackages = [ ];

  # Enable serial output
  boot.kernelParams = [
    "console=ttyS0,115200n8" # enable serial console
  ];
  boot.loader.grub.extraConfig = "
    serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1
    terminal_input serial
    terminal_output serial
  ";

  # firmware
  firmware.x86_64.enable = true;
  nixpkgs.config.allowUnfree = true;
  hardware.enableRedistributableFirmware = true;
  hardware.enableAllFirmware = true;

  # boot
  bios = {
    enable = true;
    device = "/dev/sda";
  };

  # disks
  fileSystems."/" =
    {
      device = "/dev/disk/by-uuid/6aa7f79e-bef8-4b0f-b22c-9d1b3e8ac94b";
      fsType = "ext4";
    };
  fileSystems."/boot" =
    {
      device = "/dev/disk/by-uuid/14dfc562-0333-4ddd-b10c-4eeefe1cd05f";
      fsType = "ext3";
    };
  swapDevices =
    [{ device = "/dev/disk/by-uuid/adf37c64-3b54-480c-a9a7-099d61c6eac7"; }];

  nixpkgs.hostPlatform = "x86_64-linux";
}
@@ -1,17 +0,0 @@
{
  hostNames = [
    "router"
    "192.168.6.159"
    "192.168.3.1"
  ];

  arch = "x86_64-linux";

  systemRoles = [
    "server"
    "wireless"
    "router"
  ];

  hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKDCMhEvWJxFBNyvpyuljv5Uun8AdXCxBK9HvPBRe5x6";
}
@@ -1,238 +0,0 @@
{ config, pkgs, lib, ... }:

let
  cfg = config.router;
  inherit (lib) mapAttrs' genAttrs nameValuePair mkOption types mkIf mkEnableOption;
in
{
  options.router = {
    enable = mkEnableOption "router";

    privateSubnet = mkOption {
      type = types.str;
      default = "192.168.1";
      description = "IP block (/24) to use for the private subnet";
    };
  };

  config = mkIf cfg.enable {
    networking.ip_forward = true;

    networking.interfaces.enp1s0.useDHCP = true;

    networking.nat = {
      enable = true;
      internalInterfaces = [
        "br0"
      ];
      externalInterface = "enp1s0";
    };

    networking.bridges = {
      br0 = {
        interfaces = [
          "eth2"
          # "wlp4s0"
          # "wlan1"
          "wlan0"
          "wlan1"
        ];
      };
    };

    networking.interfaces = {
      br0 = {
        useDHCP = false;
        ipv4.addresses = [
          {
            address = "${cfg.privateSubnet}.1";
            prefixLength = 24;
          }
        ];
      };
    };

    networking.firewall = {
      enable = true;
      trustedInterfaces = [ "br0" "tailscale0" ];

      interfaces = {
        enp1s0 = {
          allowedTCPPorts = [ ];
          allowedUDPPorts = [ ];
        };
      };
    };

    services.dnsmasq = {
      enable = true;
      settings = {
        # sensible behaviours
        domain-needed = true;
        bogus-priv = true;
        no-resolv = true;

        # upstream name servers
        server = [
          "1.1.1.1"
          "8.8.8.8"
        ];

        # local domains
        expand-hosts = true;
        domain = "home";
        local = "/home/";

        # Interfaces to use DNS on
        interface = "br0";

        # subnet IP blocks to use DHCP on
        dhcp-range = "${cfg.privateSubnet}.10,${cfg.privateSubnet}.254,24h";
      };
    };

    services.hostapd = {
      enable = true;
      radios = {
        # Simple 2.4GHz AP
        wlan0 = {
          countryCode = "US";
          networks.wlan0 = {
            ssid = "CXNK00BF9176-1";
            authentication.saePasswords = [{ passwordFile = "/run/agenix/hostapd-pw-CXNK00BF9176"; }];
          };
        };

        # WiFi 5 (5GHz) with two advertised networks
        wlan1 = {
          band = "5g";
          channel = 0;
          countryCode = "US";
          networks.wlan1 = {
            ssid = "CXNK00BF9176-1";
            authentication.saePasswords = [{ passwordFile = "/run/agenix/hostapd-pw-CXNK00BF9176"; }];
          };
        };
      };
    };
    age.secrets.hostapd-pw-CXNK00BF9176.file = ../../secrets/hostapd-pw-CXNK00BF9176.age;

    # wlan0 5Ghz 00:0a:52:08:38:32
    # wlp4s0 2.4Ghz 00:0a:52:08:38:33

    # services.hostapd = {
    #   enable = true;
    #   radios = {
    #     # 2.4GHz
    #     wlp4s0 = {
    #       band = "2g";
    #       noScan = true;
    #       channel = 6;
    #       countryCode = "US";
    #       wifi4 = {
    #         capabilities = [ "LDPC" "GF" "SHORT-GI-20" "SHORT-GI-40" "TX-STBC" "RX-STBC1" "MAX-AMSDU-7935" "HT40+" ];
    #       };
    #       wifi5 = {
    #         operatingChannelWidth = "20or40";
    #         capabilities = [ "MAX-A-MPDU-LEN-EXP0" ];
    #       };
    #       wifi6 = {
    #         enable = true;
    #         singleUserBeamformer = true;
    #         singleUserBeamformee = true;
    #         multiUserBeamformer = true;
    #         operatingChannelWidth = "20or40";
    #       };
    #       networks = {
    #         wlp4s0 = {
    #           ssid = "CXNK00BF9176";
    #           authentication.saePasswordsFile = "/run/agenix/hostapd-pw-CXNK00BF9176";
    #         };
    #         # wlp4s0-1 = {
    #         #   ssid = "- Experimental 5G Tower by AT&T";
    #         #   authentication.saePasswordsFile = "/run/agenix/hostapd-pw-experimental-tower";
    #         # };
    #         # wlp4s0-2 = {
    #         #   ssid = "FBI Surveillance Van 2";
    #         #   authentication.saePasswordsFile = "/run/agenix/hostapd-pw-experimental-tower";
    #         # };
    #       };
    #       settings = {
    #         he_oper_centr_freq_seg0_idx = 8;
    #         vht_oper_centr_freq_seg0_idx = 8;
    #       };
    #     };

    #     # 5GHz
    #     wlan1 = {
    #       band = "5g";
    #       noScan = true;
    #       channel = 128;
    #       countryCode = "US";
    #       wifi4 = {
    #         capabilities = [ "LDPC" "GF" "SHORT-GI-20" "SHORT-GI-40" "TX-STBC" "RX-STBC1" "MAX-AMSDU-7935" "HT40-" ];
    #       };
    #       wifi5 = {
    #         operatingChannelWidth = "160";
    #         capabilities = [ "RXLDPC" "SHORT-GI-80" "SHORT-GI-160" "TX-STBC-2BY1" "SU-BEAMFORMER" "SU-BEAMFORMEE" "MU-BEAMFORMER" "MU-BEAMFORMEE" "RX-ANTENNA-PATTERN" "TX-ANTENNA-PATTERN" "RX-STBC-1" "SOUNDING-DIMENSION-3" "BF-ANTENNA-3" "VHT160" "MAX-MPDU-11454" "MAX-A-MPDU-LEN-EXP7" ];
    #       };
    #       wifi6 = {
    #         enable = true;
    #         singleUserBeamformer = true;
    #         singleUserBeamformee = true;
    #         multiUserBeamformer = true;
    #         operatingChannelWidth = "160";
    #       };
    #       networks = {
    #         wlan1 = {
    #           ssid = "CXNK00BF9176";
    #           authentication.saePasswordsFile = "/run/agenix/hostapd-pw-CXNK00BF9176";
    #         };
    #         # wlan1-1 = {
    #         #   ssid = "- Experimental 5G Tower by AT&T";
    #         #   authentication.saePasswordsFile = "/run/agenix/hostapd-pw-experimental-tower";
    #         # };
    #         # wlan1-2 = {
    #         #   ssid = "FBI Surveillance Van 5";
    #         #   authentication.saePasswordsFile = "/run/agenix/hostapd-pw-experimental-tower";
    #         # };
    #       };
    #       settings = {
    #         vht_oper_centr_freq_seg0_idx = 114;
    #         he_oper_centr_freq_seg0_idx = 114;
    #       };
    #     };
    #   };
    # };
    # age.secrets.hostapd-pw-experimental-tower.file = ../../secrets/hostapd-pw-experimental-tower.age;
    # age.secrets.hostapd-pw-CXNK00BF9176.file = ../../secrets/hostapd-pw-CXNK00BF9176.age;

    # hardware.firmware = [
    #   pkgs.mt7916-firmware
    # ];

    # nixpkgs.overlays = [
    #   (self: super: {
    #     mt7916-firmware = pkgs.stdenvNoCC.mkDerivation {
    #       pname = "mt7916-firmware";
    #       version = "custom-feb-02-23";
    #       src = ./firmware/mediatek; # from here https://github.com/openwrt/mt76/issues/720#issuecomment-1413537674
    #       dontBuild = true;
    #       installPhase = ''
    #         for i in \
    #           mt7916_eeprom.bin \
    #           mt7916_rom_patch.bin \
    #           mt7916_wa.bin \
    #           mt7916_wm.bin;
    #         do
    #           install -D -pm644 $i $out/lib/firmware/mediatek/$i
    #         done
    #       '';
    #       meta = with lib; {
    #         license = licenses.unfreeRedistributableFirmware;
    #       };
    #     };
    #   })
    # ];
  };
}
@@ -1,4 +1,4 @@
-{ config, pkgs, lib, ... }:
+{ config, lib, ... }:

 let
   frigateHostname = "frigate.s0.neet.dev";

@@ -1,4 +1,4 @@
-{ config, lib, pkgs, modulesPath, ... }:
+{ lib, pkgs, modulesPath, ... }:

 {
   imports =
@@ -72,5 +72,5 @@
     };
   };

-  powerManagement.cpuFreqGovernor = "powersave";
+  powerManagement.cpuFreqGovernor = "schedutil";
 }

@@ -1,4 +1,4 @@
-{ config, lib, pkgs, ... }:
+{ config, ... }:

 {
   services.esphome.enable = true;

@@ -18,6 +18,7 @@
     "linkwarden"
     "outline"
     "dns-challenge"
+    "ntfy"
   ];

   hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwiXcUFtAvZCayhu4+AIcF+Ktrdgv9ee/mXSIhJbp4q";

@@ -1,4 +1,4 @@
-{ config, pkgs, lib, ... }:
+{ config, pkgs, ... }:

 {
   imports = [

@@ -8,6 +8,7 @@
   systemRoles = [
     "personal"
     "media-center"
+    "ntfy"
   ];

   hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHvdC1EiLqSNVmk5L1p7cWRIrrlelbK+NMj6tEBrwqIq";

@@ -1,4 +1,4 @@
-{ config, lib, ... }:
+{ config, ... }:

 # Adds additional kernel modules to the nixos system
 # Not actually an overlay but a module. Has to be this way because kernel
BIN  secrets/attic-netrc.age  Normal file
Binary file not shown.

BIN  secrets/atticd-credentials.age  Normal file
Binary file not shown.

Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.

@@ -1,11 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 6AT2/g 3s+reqcb4Hu/3Z7rICFZBOkW02ibISthFAT1sveyLBo
Eh5ynxeqqXhNbv/ASWZxzKXAzKX41uI5iJI4KqluHRI
-> ssh-ed25519 ZDy34A cHcA2p0VrGr6jP/CUTOSU4Gef04ujh6wmJjmEWmWNE0
wwaQnj7RABFzTbU74awlIJeHHePtO7jihNd2EUkNZPU
-> ssh-ed25519 w3nu8g hN/fWUHspXoJmpibR4NAL3EXkKExe2tRjUzmLGK6VnE
F1KQnGe3M8eD9hjnHLc7hqFTw9iXh7ICz0u421DuFOs
-> ssh-ed25519 evqvfg r3AoIJ3KWCYIsV8+RTgYY+Eg+1EcBVNrX+ZRunKaug8
KSXd4uq1/0ErZzSTPrCmY/66v4TT5PmFqv9LRSHNi9A
--- 3bGqZANqdfEgdiUzu38n4dzPOShgGUzQGtO7l2S+hwU
(binary payload not shown)

19  secrets/ntfy-token.age  Normal file
@@ -0,0 +1,19 @@
age-encryption.org/v1
-> ssh-ed25519 qEbiMg 5JtpNApPNiFqAB/gQcAsE1gz0Fg/uHW92f6Kx1J2ggQ
RzC1MQxyDYW1IuMo+OtSgcsND4v7XIRn0rCSkKCFA3A
-> ssh-ed25519 N7drjg mn6LWo+2zWEtUavbFQar966+j+g5su+lcBfWYz1aZDQ
EmKpdfkCSQao1+O/HJdOiam7UvBnDYcEEkgH6KrudQI
-> ssh-ed25519 jQaHAA an3Ukqz3BVoz0FEAA6/Lw1XKOkQWHwmTut+XD4E4vS8
9N2ePtXG2FPJSmOwcAO9p92MJKJJpTlEhKSmgMiinB0
-> ssh-ed25519 ZDy34A v98HzmBgwgOpUk2WrRuFsCdNR+nF2veVLzyT2pU2ZXY
o2pO5JbVEeaOFQ3beBvej6qgDdT9mgPCVHxmw2umhA0
-> ssh-ed25519 w3nu8g Uba2LWQueJ50Ds1/RjvkXI+VH7calMiM7dbL02sRJ3U
mFj5skmDXhJV9lK5iwUpebqxqVPAexdUntrbWJEix+Q
-> ssh-ed25519 evqvfg wLEEdTdmDRiFGYDrYFQjvRzc3zmGXgvztIy3UuFXnWg
CtCV/PpaxBmtDV+6InmcbKxNDmbPUTzyCm8tCf1Qw/4
-> ssh-ed25519 6AT2/g NaA1AhOdb+FGOCoWFEX0QN4cXS1CxlpFwpsH1L6vBA0
Z9aMYAofQ4Ath0zsg0YdZG3GTnCN2uQW7EG02bMZRsc
-> ssh-ed25519 hPp1nw +shAZrydcbjXfYxm1UW1YosKg5ZwBBKO6cct4HdotBo
GxnopKlmZQ/I6kMZPNurLgqwwkFHpUradaNYTPnlMFU
--- W6DrhCmU08IEIyPpHDiRV21xVeALNk1bDHrLYc2YcC4
(binary payload not shown)

@@ -1,21 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 WBT1Hw QyirfN0ibrERO2bNZrb/8xqT5thl8LQmjn+xAFVMryc
bLND1Cb4eO2VAGtM+ehm4YW8jN5Tcki+jc3JxLHSZuo
-> ssh-ed25519 6AT2/g DqNkPFZ/b96oYl8RiUkVxi9vmv8RG0Pbs2y0cqKRGX4
5FLcVYepU/bNRq2Cr9zdHDN/vM9OFO6Q7QlWX+PPa4Q
-> ssh-ed25519 r848+g iSF57inO0hafZ0N6hIWGML1kRE48fN3WooeeHXXIRSs
RdYVTCEwMc31x9yl2VBmRCEJXUGCVeJjBBdO1rAL3A8
-> ssh-ed25519 hPp1nw mhanVdWbVK7OAinjTmEqx1jawd8pTlPe6YTIa/sEckQ
MVBgbEa8uNYIoCCmEBmFzMQR5cO033C57lMze5z+n54
-> ssh-ed25519 ZDy34A su3VVvWZhGKTR11mNKoOLzYjvnBCOG+U4qIeHUY6VXE
DRscTOjNk5BpejadPMVABLeLC+0mB6uAYxsSm5HqUgw
-> ssh-ed25519 w3nu8g kZXxRHeMvnzk96IhW73XUkXo6lM0CfUjgFFcio5e4TA
1vWdp3DVAH74cBd2hUujCz4J4ztQzFseP9SKYk2juAM
-> ssh-ed25519 evqvfg xRV4zs+y8jaqkLH7qMbRsThjptxuokIn1h1S2eIUmXg
6+a1IS7X2qucszKXa1XOeEgVDeNf3PF2HgQMixGPR7s
--- 6gSqjzHmrwlNUz8bmuoeB/2zUIOvQ82RDu77vaCtnvs
(binary payload not shown)

Binary file not shown.
@@ -7,6 +7,9 @@ let

   # nobody is using this secret but I still need to be able to r/w it
   nobody = sshKeys.userKeys;

+  # For secrets that all machines need to know
+  everyone = lib.unique (roles.personal ++ roles.server);
 in

 with roles;
@@ -22,31 +25,29 @@ with roles;
   # nix binary cache
   # public key: s0.koi-bebop.ts.net:OjbzD86YjyJZpCp9RWaQKANaflcpKhtzBMNP8I2aPUU=
   "binary-cache-private-key.age".publicKeys = binary-cache;
   # public key: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINpUZFFL9BpBVqeeU63sFPhR9ewuhEZerTCDIGW1NPSB
   "binary-cache-push-sshkey.age".publicKeys = nobody; # this value is directly given to gitea

   # attic binary cache
   "atticd-credentials.age".publicKeys = binary-cache;
   "attic-netrc.age".publicKeys = everyone;

   # vpn
   "iodine.age".publicKeys = iodine;
   "pia-login.age".publicKeys = pia;

   # cloud
   "nextcloud-pw.age".publicKeys = nextcloud;
   "whiteboard-server-jwt-secret.age".publicKeys = nextcloud;
   "smb-secrets.age".publicKeys = personal ++ media-center;
   "oauth2-proxy-env.age".publicKeys = server;

   # services
   "searx.age".publicKeys = nobody;
   "wolframalpha.age".publicKeys = dailybot;
   "linkwarden-environment.age".publicKeys = linkwarden;

   # hostapd
   "hostapd-pw-experimental-tower.age".publicKeys = nobody;
   "hostapd-pw-CXNK00BF9176.age".publicKeys = nobody;

   # backups
-  "backblaze-s3-backups.age".publicKeys = personal ++ server;
-  "restic-password.age".publicKeys = personal ++ server;
+  "backblaze-s3-backups.age".publicKeys = everyone;
+  "restic-password.age".publicKeys = everyone;

+  # ntfy alerts
+  "ntfy-token.age".publicKeys = everyone;

   # gitea actions runner
   "gitea-actions-runner-token.age".publicKeys = gitea-actions-runner;
@@ -1,235 +0,0 @@
# Create Workspace Skill

This skill enables you to create new ephemeral sandboxed workspaces for isolated development environments. Workspaces can be either VMs (using microvm.nix) or containers (using systemd-nspawn).

## When to use this skill

Use this skill when:
- Creating a new isolated development environment
- Setting up a workspace for a specific project
- Running AI coding agents safely in a clean environment
- Testing something without affecting the host system

## Choosing between VM and Container

| Feature | VM (`type = "vm"`) | Container (`type = "container"`) |
|---------|-------------------|----------------------------------|
| Isolation | Full kernel isolation | Shared kernel with namespaces |
| Overhead | Higher (separate kernel) | Lower (process-level) |
| Startup time | Slower | Faster |
| Storage | virtiofs shares | bind mounts |
| Use case | Untrusted code, kernel testing | General development |

**Recommendation**: Use containers for most development work. Use VMs when you need stronger isolation or are testing potentially dangerous code.

## How to create a workspace

Follow these steps to create a new workspace:

### 1. Choose workspace name, type, and IP address

- The workspace name should be descriptive (e.g., "myproject", "testing", "nixpkgs-contrib")
- The type should be "vm" or "container"
- The IP address should be in the 192.168.83.x range (192.168.83.10-254)
- Check existing workspaces in `machines/fry/default.nix` to avoid IP conflicts

### 2. Create workspace configuration file

Create `machines/fry/workspaces/<name>.nix`:

```nix
{ config, lib, pkgs, ... }:

# The workspace name becomes the hostname automatically.
# The IP is configured in default.nix, not here.

{
  # Install packages as needed
  environment.systemPackages = with pkgs; [
    # Add packages here
  ];

  # Additional configuration as needed
}
```

The module automatically configures:
- **Hostname**: Set to the workspace name from `sandboxed-workspace.workspaces.<name>`
- **Static IP**: From the `ip` option
- **DNS**: Uses the host as the DNS server
- **Network**: TAP interface (VM) or veth pair (container) on the bridge
- **Standard shares**: workspace, ssh-host-keys, claude-config

### 3. Register workspace in machines/fry/default.nix

Add the workspace to the `sandboxed-workspace.workspaces` attribute set:

```nix
sandboxed-workspace = {
  enable = true;
  workspaces.<name> = {
    type = "vm"; # or "container"
    config = ./workspaces/<name>.nix;
    ip = "192.168.83.XX"; # Choose a unique IP
    autoStart = false; # optional, defaults to false
  };
};
```

### 4. Optional: Pre-create workspace with project

If you want to clone a repository before deployment:

```bash
mkdir -p ~/sandboxed/<name>/workspace
cd ~/sandboxed/<name>/workspace
git clone <repository-url>
```

Note: Directories and SSH keys are auto-created on first deployment if they don't exist.

### 5. Verify the configuration builds

```bash
nix build .#nixosConfigurations.fry.config.system.build.toplevel --dry-run
```

### 6. Deploy the configuration

```bash
doas nixos-rebuild switch --flake .#fry
```

### 7. Start the workspace

```bash
# Using the shell alias:
workspace_<name>_start

# Or manually:
doas systemctl start microvm@<name>    # for VMs
doas systemctl start container@<name>  # for containers
```

### 8. Access the workspace

SSH into the workspace by name (added to /etc/hosts automatically):

```bash
# Using the shell alias:
workspace_<name>

# Or manually:
ssh googlebot@workspace-<name>
```

Or by IP:

```bash
ssh googlebot@192.168.83.XX
```

## Managing workspaces

### Shell aliases

For each workspace, these aliases are automatically created:

- `workspace_<name>` - SSH into the workspace
- `workspace_<name>_start` - Start the workspace
- `workspace_<name>_stop` - Stop the workspace
- `workspace_<name>_restart` - Restart the workspace
- `workspace_<name>_status` - Show workspace status

### Check workspace status
```bash
workspace_<name>_status
```

### Stop workspace
```bash
workspace_<name>_stop
```

### View workspace logs
```bash
doas journalctl -u microvm@<name>    # for VMs
doas journalctl -u container@<name>  # for containers
```

### List running workspaces
```bash
doas systemctl list-units 'microvm@*' 'container@*'
```

## Example workflow

Creating a VM workspace named "nixpkgs-dev":

```bash
# 1. Create machines/fry/workspaces/nixpkgs-dev.nix (minimal, just packages if needed)

# 2. Update machines/fry/default.nix:
#    sandboxed-workspace.workspaces.nixpkgs-dev = {
#      type = "vm";
#      config = ./workspaces/nixpkgs-dev.nix;
#      ip = "192.168.83.20";
#    };

# 3. Build and deploy (auto-creates directories and SSH keys)
doas nixos-rebuild switch --flake .#fry

# 4. Optional: Clone repository into workspace
mkdir -p ~/sandboxed/nixpkgs-dev/workspace
cd ~/sandboxed/nixpkgs-dev/workspace
git clone https://github.com/NixOS/nixpkgs.git

# 5. Start the workspace
workspace_nixpkgs-dev_start

# 6. SSH into the workspace
workspace_nixpkgs-dev
```

Creating a container workspace named "quick-test":

```bash
# 1. Create machines/fry/workspaces/quick-test.nix

# 2. Update machines/fry/default.nix:
#    sandboxed-workspace.workspaces.quick-test = {
#      type = "container";
#      config = ./workspaces/quick-test.nix;
#      ip = "192.168.83.30";
#    };

# 3. Build and deploy
doas nixos-rebuild switch --flake .#fry

# 4. Start and access
workspace_quick-test_start
workspace_quick-test
```

## Directory structure

Workspaces store persistent data in `~/sandboxed/<name>/`:

```
~/sandboxed/<name>/
├── workspace/      # Shared workspace directory
├── ssh-host-keys/  # Persistent SSH host keys
└── claude-config/  # Claude Code configuration
```

## Notes

- Workspaces are ephemeral - only data in shared directories persists
- VMs have an isolated nix store via an overlay
- Containers share the host's nix store (read-only)
- SSH host keys persist across workspace rebuilds
- The Claude config directory is isolated per workspace
- Workspaces can access the internet via NAT through the host
- DNS queries go through the host (uses the host's DNS)
- Default VM resources: 8 vCPUs, 4GB RAM, 8GB disk overlay
- Containers have no resource limits by default