Clean up CLAUDE.md and make the claude skill correctly this time
Some checks failed
Check Flake / check-flake (push) Failing after 6s

2026-02-10 21:08:13 -08:00
parent 869b6af7f7
commit 60e89dfc90
3 changed files with 213 additions and 285 deletions


@@ -0,0 +1,160 @@
---
name: create-workspace
description: >
Creates a new sandboxed workspace (isolated dev environment) by adding
NixOS configuration for a VM, container, or Incus instance. Use when
the user wants to create, set up, or add a new sandboxed workspace.
---
# Create Sandboxed Workspace
Creates an isolated development environment backed by a VM (microvm.nix), container (systemd-nspawn), or Incus instance. This produces:
1. A workspace config file at `machines/<machine>/workspaces/<name>.nix`
2. A registration entry in `machines/<machine>/default.nix`
## Step 1: Parse Arguments
Extract the workspace name and backend type from `$ARGUMENTS`. If either is missing, ask the user.
- **Name**: lowercase alphanumeric with hyphens (e.g., `my-project`)
- **Type**: one of `vm`, `container`, or `incus`
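A quick sanity check on the parsed arguments can be sketched in shell; the `name` and `type` values here are illustrative stand-ins for what would come out of `$ARGUMENTS`, and the regex encodes the lowercase-alphanumeric-with-hyphens rule above.

```bash
# Illustrative values; in practice these come from $ARGUMENTS
name="my-project"
type="vm"

# Type must be one of the three supported backends
case "$type" in
  vm|container|incus) ;;
  *) echo "unknown type: $type" >&2; exit 1 ;;
esac

# Name: lowercase alphanumeric segments separated by single hyphens
if ! printf '%s' "$name" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$'; then
  echo "invalid name: $name" >&2; exit 1
fi
```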
## Step 2: Detect Machine
Run `hostname` to get the current machine name. Verify that `machines/<hostname>/default.nix` exists.
If the machine directory doesn't exist, stop and tell the user this machine isn't managed by this flake.
## Step 3: Allocate IP Address
Read `machines/<hostname>/default.nix` to find existing `sandboxed-workspace.workspaces` entries and their IPs.
All IPs are in the `192.168.83.0/24` subnet. Use these ranges by convention:
| Type | IP Range |
|------|----------|
| vm | 192.168.83.10 - .49 |
| container | 192.168.83.50 - .89 |
| incus | 192.168.83.90 - .129 |
Pick the next available IP in the appropriate range. If no workspaces exist yet for that type, use the first IP in the range.
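The allocation rule can be sketched as follows. The `used` list is illustrative (in practice it comes from reading `default.nix`), and the `10`/`49` bounds would change per the table above for the other backend types.

```bash
# IPs already taken by existing workspaces (illustrative list)
used="192.168.83.10 192.168.83.11"

# Scan the vm range (.10-.49); containers and incus use their own ranges
for last in $(seq 10 49); do
  candidate="192.168.83.$last"
  case " $used " in
    *" $candidate "*) continue ;;   # already allocated, keep scanning
  esac
  break                             # first free IP found
done
echo "$candidate"
```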
## Step 4: Create Workspace Config File
Create `machines/<hostname>/workspaces/<name>.nix`. Use this template:
```nix
{ config, lib, pkgs, ... }:
{
environment.systemPackages = with pkgs; [
# Add packages here
];
}
```
Ask the user if they want any packages pre-installed.
Create the `workspaces/` directory if it doesn't exist.
**Important:** After creating the file, run `git add` on it immediately. Nix flakes only see files tracked by git, so new files must be staged before `nix build` will work.
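The create-then-stage flow can be demonstrated in a throwaway repo (the `fry` hostname and workspace name are placeholders); `git ls-files` confirms the new file is tracked and therefore visible to `nix build`.

```bash
# Throwaway repo to demonstrate why staging matters for flakes
repo=$(mktemp -d)
cd "$repo"
git init -q

mkdir -p machines/fry/workspaces
cat > machines/fry/workspaces/my-project.nix <<'EOF'
{ config, lib, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [
    # Add packages here
  ];
}
EOF

# Untracked files are invisible to nix flakes until staged
git add machines/fry/workspaces/my-project.nix
git ls-files
```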
## Step 5: Register Workspace
Edit `machines/<hostname>/default.nix` to add the workspace entry inside the `sandboxed-workspace` block.
The entry should look like:
```nix
workspaces.<name> = {
type = "<type>";
config = ./workspaces/<name>.nix;
ip = "<allocated-ip>";
};
```
**If the `sandboxed-workspace` block doesn't exist yet**, add the full block:
```nix
sandboxed-workspace = {
enable = true;
workspaces.<name> = {
type = "<type>";
config = ./workspaces/<name>.nix;
ip = "<allocated-ip>";
};
};
```
The machine also needs `networking.sandbox.upstreamInterface` set. Check if it exists; if not, ask the user for their primary network interface name (they can find it with `ip route show default`).
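One way to pull the interface name out of that command's output; the sample route line below is illustrative, and `awk` simply prints the token following `dev`.

```bash
# Sample output of `ip route show default` (illustrative)
route="default via 192.168.1.1 dev enp3s0 proto dhcp metric 100"

# Print the token following "dev", i.e. the upstream interface name
iface=$(echo "$route" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}')
echo "$iface"
```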
Do **not** set `hostKey` — it gets auto-generated on first boot and can be added later.
## Step 6: Verify Build
Run a build to check for configuration errors:
```
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-link
```
If the build fails, fix the configuration and retry.
## Step 7: Deploy
Tell the user to deploy by running:
```
doas nixos-rebuild switch --flake .
```
**Never run this command yourself** — it requires privileges.
## Step 8: Post-Deploy Info
Tell the user to deploy and then start the workspace so the host key gets generated. Provide these instructions:
**Deploy:**
```
doas nixos-rebuild switch --flake .
```
**Starting the workspace:**
```
doas systemctl start <service>
```
Where `<service>` is:
- VM: `microvm@<name>`
- Container: `container@<name>`
- Incus: `incus-workspace-<name>`
Or use the auto-generated shell alias: `workspace_<name>_start`
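The name-to-unit mapping above can be expressed as a small helper; this is a sketch for reference, not something the module itself generates.

```bash
# Map a workspace name and backend type to its systemd service name
service_for() {
  case "$2" in
    vm)        echo "microvm@$1" ;;
    container) echo "container@$1" ;;
    incus)     echo "incus-workspace-$1" ;;
  esac
}

service_for my-project vm    # → microvm@my-project
```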
**Connecting:**
```
ssh googlebot@workspace-<name>
```
Or use the alias: `workspace_<name>`
**Never run deploy or start commands yourself** — they require privileges.
## Step 9: Add Host Key
After the user has deployed and started the workspace, add the SSH host key to the workspace config. Do NOT skip this step — always wait for the user to confirm they've started the workspace, then proceed.
1. Read the host key from `~/sandboxed/<name>/ssh-host-keys/ssh_host_ed25519_key.pub`
2. Add `hostKey = "<contents>";` to the workspace entry in `machines/<hostname>/default.nix`
3. Run the build again to verify
4. Tell the user to redeploy with `doas nixos-rebuild switch --flake .`
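Reading the key and producing the resulting config line can be sketched as below; the key material is a made-up placeholder, and the real file lives at the `~/sandboxed/<name>/ssh-host-keys/` path from step 1.

```bash
# Simulate the generated host key file (placeholder key material)
keydir=$(mktemp -d)
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5PLACEHOLDERPLACEHOLDERPLACEHOLDER workspace" \
  > "$keydir/ssh_host_ed25519_key.pub"

# The value that goes into the workspace entry as hostKey = "...";
hostkey=$(cat "$keydir/ssh_host_ed25519_key.pub")
echo "hostKey = \"$hostkey\";"
```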
## Backend Reference
| | VM | Container | Incus |
|---|---|---|---|
| Isolation | Full kernel (cloud-hypervisor) | Shared kernel (systemd-nspawn) | Unprivileged container |
| Overhead | Higher (separate kernel) | Lower (bind mounts) | Medium |
| Filesystem | virtiofs shares | Bind mounts | Incus-managed |
| Use case | Untrusted code, kernel-level isolation | Fast dev environments | Better security than nspawn |

CLAUDE.md

@@ -1,78 +1,81 @@
-# NixOS Configuration
-This is a NixOS flake configuration managing multiple machines.
-## Adding Packages
-**User packages** go in `home/googlebot.nix`:
-- Development tools, editors, language-specific tools
-- Use `home.packages` for CLI tools
-- Use `programs.<name>` for configurable programs (preferred when available)
-- Gate dev tools with `thisMachineIsPersonal` so they only install on workstations
-**System packages** go in `common/default.nix`:
-- Basic utilities needed on every machine (servers and workstations)
-- Examples: git, htop, tmux, wget, dnsutils
-- Keep this minimal - most packages belong in home/googlebot.nix
-**Personal machine system packages** go in `common/pc/default.nix`:
-- Packages that must be system-level (not per-user) due to technical limitations
-- But only needed on personal/development machines, not servers
-- Examples: packages requiring udev rules, system services, or setuid
-## Machine Roles
-Machines have roles defined in their configuration:
-- **personal**: Development workstations (desktops, laptops). Install dev tools, GUI apps, editors here.
-- **Non-personal**: Servers and production machines. Keep minimal.
-Use `thisMachineIsPersonal` (or `osConfig.thisMachine.hasRole."personal"`) to conditionally include packages:
-```nix
-home.packages = lib.mkIf thisMachineIsPersonal [
-  pkgs.some-dev-tool
-];
-```
-## Sandboxed Workspaces
-Isolated development environments using VMs or containers. See `skills/create-workspace/SKILL.md`.
-- VMs: Full kernel isolation via microvm.nix
-- Containers: Lighter weight via systemd-nspawn
-Configuration: `common/sandboxed-workspace/`
-## Key Directories
-- `common/` - Shared NixOS modules for all machines
-- `home/` - Home Manager configurations
-- `lib/` - Custom lib functions (extends nixpkgs lib, accessible as `lib.*` in modules)
-- `machines/` - Per-machine configurations
-- `skills/` - Claude Code skills for common tasks
-## Shared Library
-Custom utility functions go in `lib/default.nix`. The flake extends `nixpkgs.lib` with these functions, so they're accessible as `lib.functionName` in all modules. Add reusable functions here when used in multiple places.
-## Code Comments
-Only add comments that provide value beyond what the code already shows:
-- Explain *why* something is done, not *what* is being done
-- Document non-obvious constraints or gotchas
-- Never add filler comments that repeat the code (e.g. `# Start the service` before a start command)
-## Bash Commands
-Do not redirect stderr to stdout (no `2>&1`). This can hide important output and errors.
-Do not use `doas` or `sudo` - they will fail. Ask the user to run privileged commands themselves.
-## Nix Commands
-Use `--no-link` with `nix build` to avoid creating `result` symlinks in the working directory.
-## Git Commits
-Do not add "Co-Authored-By" lines to commit messages.
+# CLAUDE.md
+This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+## What This Is
+A NixOS flake managing multiple machines. All machines import `/common` for shared config, and each machine has its own directory under `/machines/<hostname>/` with a `default.nix` (machine-specific config), `hardware-configuration.nix`, and `properties.nix` (metadata: hostnames, arch, roles, SSH keys).
+## Common Commands
+```bash
+# Build a machine config (check for errors without deploying)
+nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-link
+# Deploy to local machine (user must run this themselves - requires privileges)
+doas nixos-rebuild switch --flake .
+# Deploy to a remote machine (boot-only, no activate)
+deploy --remote-build --boot --debug-logs --skip-checks .#<hostname>
+# Deploy to a remote machine (activate immediately)
+deploy --remote-build --debug-logs --skip-checks .#<hostname>
+# Update flake lockfile
+make update-lockfile
+# Update a single flake input
+make update-input <input-name>
+# Edit an agenix secret
+make edit-secret <secret-filename>
+# Rekey all secrets (after adding/removing machine host keys)
+make rekey-secrets
+```
+## Architecture
+### Machine Discovery (Auto-Registration)
+Machines are **not** listed in `flake.nix`. Instead, `common/machine-info/default.nix` recursively scans `/machines/` for any `properties.nix` file and auto-registers that directory as a machine. To add a machine, create `machines/<name>/properties.nix` and `machines/<name>/default.nix`.
+`properties.nix` returns a plain attrset (no NixOS module args) with: `hostNames`, `arch`, `systemRoles`, `hostKey`, and optionally `userKeys`, `deployKeys`, `remoteUnlock`.
+### Role System
+Each machine declares `systemRoles` in its `properties.nix` (e.g., `["personal" "dns-challenge"]`). Roles drive conditional config:
+- `config.thisMachine.hasRole.<role>` - boolean, used to conditionally enable features (e.g., `de.enable` for `personal` role)
+- `config.machines.withRole.<role>` - list of hostnames with that role
+- Roles also determine which machines can decrypt which agenix secrets (see `secrets/secrets.nix`)
+### Secrets (agenix)
+Secrets in `/secrets/` are encrypted `.age` files. `secrets.nix` maps each secret to the SSH host keys (by role) that can decrypt it. After changing which machines have access, run `make rekey-secrets`.
+### Sandboxed Workspaces
+`common/sandboxed-workspace/` provides isolated dev environments. Three backends: `vm` (microvm/cloud-hypervisor), `container` (systemd-nspawn), `incus`. Workspaces are defined in machine `default.nix` files and their per-workspace config goes in `machines/<hostname>/workspaces/<name>.nix`. The base config (`base.nix`) handles networking, SSH, user setup, and Claude Code pre-configuration.
+IP allocation convention: VMs `.10-.49`, containers `.50-.89`, incus `.90-.129` in `192.168.83.0/24`.
+### Backups
+`common/backups.nix` defines a `backup.group` option. Machines declare backup groups with paths; restic handles daily backups to Backblaze B2 with automatic ZFS/btrfs snapshot support. Each group gets a `restic_<group>` CLI wrapper for manual operations.
+### Nixpkgs Patching
+`flake.nix` applies patches from `/patches/` to nixpkgs before building (workaround for nix#3920).
+### Key Conventions
+- Uses `doas` instead of `sudo` everywhere
+- Fish shell is the default user shell
+- Home Manager is used for user-level config (`home/googlebot.nix`)
+- `lib/default.nix` extends nixpkgs lib with custom utility functions (via `nixpkgs.lib.extend`)
+- Overlays are in `/overlays/` and applied globally via `flake.nix`
+- The Nix formatter for this project is `nixpkgs-fmt`
+- Do not add "Co-Authored-By" lines to commit messages
+- Always use `--no-link` when running `nix build`
+- Don't use `nix build --dry-run` unless you only need evaluation, since it skips the actual build
+- Avoid `2>&1` on nix commands, as it can cause error output to be missed


@@ -1,235 +0,0 @@
# Create Workspace Skill
This skill enables you to create new ephemeral sandboxed workspaces for isolated development environments. Workspaces can be either VMs (using microvm.nix) or containers (using systemd-nspawn).
## When to use this skill
Use this skill when:
- Creating a new isolated development environment
- Setting up a workspace for a specific project
- Need a clean environment to run AI coding agents safely
- Want to test something without affecting the host system
## Choosing between VM and Container
| Feature | VM (`type = "vm"`) | Container (`type = "container"`) |
|---------|-------------------|----------------------------------|
| Isolation | Full kernel isolation | Shared kernel with namespaces |
| Overhead | Higher (separate kernel) | Lower (process-level) |
| Startup time | Slower | Faster |
| Storage | virtiofs shares | bind mounts |
| Use case | Untrusted code, kernel testing | General development |
**Recommendation**: Use containers for most development work. Use VMs when you need stronger isolation or are testing potentially dangerous code.
## How to create a workspace
Follow these steps to create a new workspace:
### 1. Choose workspace name, type, and IP address
- Workspace name should be descriptive (e.g., "myproject", "testing", "nixpkgs-contrib")
- Type should be "vm" or "container"
- IP address should be in the 192.168.83.x range (192.168.83.10-254)
- Check existing workspaces in `machines/fry/default.nix` to avoid IP conflicts
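A grep over the machine config surfaces the IPs already claimed; the sample config excerpt below is an illustrative stand-in for the real `machines/fry/default.nix`.

```bash
# Illustrative excerpt of machines/fry/default.nix
sample=$(mktemp)
cat > "$sample" <<'EOF'
workspaces.dev = { type = "vm"; config = ./workspaces/dev.nix; ip = "192.168.83.20"; };
workspaces.scratch = { type = "container"; config = ./workspaces/scratch.nix; ip = "192.168.83.30"; };
EOF

# List every allocated IP so a new workspace can avoid collisions
used_ips=$(grep -oE '192\.168\.83\.[0-9]+' "$sample")
echo "$used_ips"
```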
### 2. Create workspace configuration file
Create `machines/fry/workspaces/<name>.nix`:
```nix
{ config, lib, pkgs, ... }:
# The workspace name becomes the hostname automatically.
# The IP is configured in default.nix, not here.
{
# Install packages as needed
environment.systemPackages = with pkgs; [
# Add packages here
];
# Additional configuration as needed
}
```
The module automatically configures:
- **Hostname**: Set to the workspace name from `sandboxed-workspace.workspaces.<name>`
- **Static IP**: From the `ip` option
- **DNS**: Uses the host as DNS server
- **Network**: TAP interface (VM) or veth pair (container) on the bridge
- **Standard shares**: workspace, ssh-host-keys, claude-config
### 3. Register workspace in machines/fry/default.nix
Add the workspace to the `sandboxed-workspace.workspaces` attribute set:
```nix
sandboxed-workspace = {
enable = true;
workspaces.<name> = {
type = "vm"; # or "container"
config = ./workspaces/<name>.nix;
ip = "192.168.83.XX"; # Choose unique IP
autoStart = false; # optional, defaults to false
};
};
```
### 4. Optional: Pre-create workspace with project
If you want to clone a repository before deployment:
```bash
mkdir -p ~/sandboxed/<name>/workspace
cd ~/sandboxed/<name>/workspace
git clone <repository-url>
```
Note: Directories and SSH keys are auto-created on first deployment if they don't exist.
### 5. Verify configuration builds
```bash
nix build .#nixosConfigurations.fry.config.system.build.toplevel --dry-run
```
### 6. Deploy the configuration
```bash
doas nixos-rebuild switch --flake .#fry
```
### 7. Start the workspace
```bash
# Using the shell alias:
workspace_<name>_start
# Or manually:
doas systemctl start microvm@<name> # for VMs
doas systemctl start container@<name> # for containers
```
### 8. Access the workspace
SSH into the workspace by name (added to /etc/hosts automatically):
```bash
# Using the shell alias:
workspace_<name>
# Or manually:
ssh googlebot@workspace-<name>
```
Or by IP:
```bash
ssh googlebot@192.168.83.XX
```
## Managing workspaces
### Shell aliases
For each workspace, these aliases are automatically created:
- `workspace_<name>` - SSH into the workspace
- `workspace_<name>_start` - Start the workspace
- `workspace_<name>_stop` - Stop the workspace
- `workspace_<name>_restart` - Restart the workspace
- `workspace_<name>_status` - Show workspace status
### Check workspace status
```bash
workspace_<name>_status
```
### Stop workspace
```bash
workspace_<name>_stop
```
### View workspace logs
```bash
doas journalctl -u microvm@<name> # for VMs
doas journalctl -u container@<name> # for containers
```
### List running workspaces
```bash
doas systemctl list-units 'microvm@*' 'container@*'
```
## Example workflow
Creating a VM workspace named "nixpkgs-dev":
```bash
# 1. Create machines/fry/workspaces/nixpkgs-dev.nix (minimal, just packages if needed)
# 2. Update machines/fry/default.nix:
# sandboxed-workspace.workspaces.nixpkgs-dev = {
# type = "vm";
# config = ./workspaces/nixpkgs-dev.nix;
# ip = "192.168.83.20";
# };
# 3. Build and deploy (auto-creates directories and SSH keys)
doas nixos-rebuild switch --flake .#fry
# 4. Optional: Clone repository into workspace
mkdir -p ~/sandboxed/nixpkgs-dev/workspace
cd ~/sandboxed/nixpkgs-dev/workspace
git clone https://github.com/NixOS/nixpkgs.git
# 5. Start the workspace
workspace_nixpkgs-dev_start
# 6. SSH into the workspace
workspace_nixpkgs-dev
```
Creating a container workspace named "quick-test":
```bash
# 1. Create machines/fry/workspaces/quick-test.nix
# 2. Update machines/fry/default.nix:
# sandboxed-workspace.workspaces.quick-test = {
# type = "container";
# config = ./workspaces/quick-test.nix;
# ip = "192.168.83.30";
# };
# 3. Build and deploy
doas nixos-rebuild switch --flake .#fry
# 4. Start and access
workspace_quick-test_start
workspace_quick-test
```
## Directory structure
Workspaces store persistent data in `~/sandboxed/<name>/`:
```
~/sandboxed/<name>/
├── workspace/ # Shared workspace directory
├── ssh-host-keys/ # Persistent SSH host keys
└── claude-config/ # Claude Code configuration
```
## Notes
- Workspaces are ephemeral - only data in shared directories persists
- VMs have isolated nix store via overlay
- Containers share the host's nix store (read-only)
- SSH host keys persist across workspace rebuilds
- Claude config directory is isolated per workspace
- Workspaces can access the internet via NAT through the host
- DNS queries go through the host (uses host's DNS)
- Default VM resources: 8 vCPUs, 4GB RAM, 8GB disk overlay
- Containers have no resource limits by default