Compare commits

...

10 Commits

Author SHA1 Message Date
d6a0e8ec49 Disable tailscaleAuth for now because it doesn't work with tailscale's ACL tagged group
Some checks failed
Check Flake / check-flake (push) Failing after 35s
2026-02-09 19:57:20 -08:00
8293a7dc2a Rework Claude Code config in sandboxed workspaces
Remove credential passing to sandboxes (didn't work well enough).
Move onboarding config init from host-side setup into base.nix so
each workspace initializes its own Claude config on first boot.
Wrap claude command in VM and Incus workspaces to always skip
permission prompts.
2026-02-09 19:56:11 -08:00
cbf2aedcad Add use flake for fresh claude code 2026-02-09 18:04:09 -08:00
69fc3ad837 Add ZFS/btrfs snapshot support to backups
Creates filesystem snapshots before backup for point-in-time consistency.
Uses mount namespaces to bind mount snapshots over original paths, so
restic records correct paths while reading from frozen snapshot data.

- Auto-detects filesystem type via findmnt
- Deterministic snapshot names using path hash
- Graceful fallback for unsupported filesystems
2026-02-08 20:16:37 -08:00
6041d4d09f Replace nixos-generators with upstream nixpkgs image support 2026-02-08 17:57:16 -08:00
cf71b74d6f Add Incus container support to sandboxed workspaces
- Add incus.nix module for fully declarative Incus/LXC containers
- Build NixOS LXC images using nixpkgs.lib.nixosSystem
- Ephemeral containers: recreated on each start, cleaned up on stop
- Use flock to serialize concurrent container operations
- Deterministic MAC addresses via lib.mkMac to prevent ARP cache issues
- Add veth* to NetworkManager unmanaged interfaces
- Update CLAUDE.md with coding conventions and shared lib docs
2026-02-08 15:16:40 -08:00
5178ea6835 Configure Claude Code for sandboxed workspaces
- Add credentials bind mount in container.nix
- Create claude-credentials-dir service to copy credentials for VMs
- Generate .claude.json with onboarding skipped and workspace trusted
- Add allowUnfree to container config
2026-02-08 14:53:31 -08:00
87db330e5b Add sandboxed-workspace module for isolated dev environments
Provides isolated development environments using either VMs (microvm.nix)
or containers (systemd-nspawn) with a unified configuration interface.

Features:
- Unified options with required type field ("vm" or "container")
- Shared base configuration for networking, SSH, users, packages
- Automatic SSH host key generation and persistence
- Shell aliases for workspace management (start/stop/status/ssh)
- Automatic /etc/hosts entries for workspace hostnames
- restartIfChanged support for both VMs and containers
- Passwordless doas in workspaces

Container backend:
- Uses hostBridge for proper bridge networking with /24 subnet
- systemd-networkd for IP configuration
- systemd-resolved for DNS

VM backend:
- TAP interface with deterministic MAC addresses
- virtiofs shares for workspace directories
- vsock CID generation
2026-02-07 22:43:08 -08:00
70f0064d7b Add claude-code to personal machines 2026-02-07 22:37:35 -08:00
cef8456332 Add CLAUDE.md with project conventions 2026-02-07 22:36:11 -08:00
20 changed files with 1460 additions and 70 deletions

CLAUDE.md (new file, 78 lines)

@@ -0,0 +1,78 @@
# NixOS Configuration
This is a NixOS flake configuration managing multiple machines.
## Adding Packages
**User packages** go in `home/googlebot.nix`:
- Development tools, editors, language-specific tools
- Use `home.packages` for CLI tools
- Use `programs.<name>` for configurable programs (preferred when available)
- Gate dev tools with `thisMachineIsPersonal` so they only install on workstations
**System packages** go in `common/default.nix`:
- Basic utilities needed on every machine (servers and workstations)
- Examples: git, htop, tmux, wget, dnsutils
- Keep this minimal - most packages belong in home/googlebot.nix
**Personal machine system packages** go in `common/pc/default.nix`:
- Packages that must be system-level (not per-user) due to technical limitations
- But only needed on personal/development machines, not servers
- Examples: packages requiring udev rules, system services, or setuid
## Machine Roles
Machines have roles defined in their configuration:
- **personal**: Development workstations (desktops, laptops). Install dev tools, GUI apps, editors here.
- **Non-personal**: Servers and production machines. Keep minimal.
Use `thisMachineIsPersonal` (or `osConfig.thisMachine.hasRole."personal"`) to conditionally include packages:
```nix
home.packages = lib.mkIf thisMachineIsPersonal [
  pkgs.some-dev-tool
];
```
## Sandboxed Workspaces
Isolated development environments using VMs or containers. See `skills/create-workspace/SKILL.md`.
- VMs: Full kernel isolation via microvm.nix
- Containers: Lighter weight via systemd-nspawn
Configuration: `common/sandboxed-workspace/`
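As a sketch of what a host declares (option names come from the `sandboxed-workspace` module in this change; the workspace name, IP, and config path here are illustrative, not from the repo):

```nix
# Hypothetical host configuration enabling one container workspace.
# "example", its IP, and ./workspaces/example.nix are made-up placeholders.
sandboxed-workspace = {
  enable = true;
  workspaces.example = {
    type = "container";    # or "vm" / "incus"
    ip = "192.168.83.10";  # must be inside networking.sandbox.subnet
    config = ./workspaces/example.nix;
  };
};
```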
## Key Directories
- `common/` - Shared NixOS modules for all machines
- `home/` - Home Manager configurations
- `lib/` - Custom lib functions (extends nixpkgs lib, accessible as `lib.*` in modules)
- `machines/` - Per-machine configurations
- `skills/` - Claude Code skills for common tasks
## Shared Library
Custom utility functions go in `lib/default.nix`. The flake extends `nixpkgs.lib` with these functions, so they're accessible as `lib.functionName` in all modules. Add reusable functions here when used in multiple places.
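A minimal sketch of one common way a flake wires this up (the repo's actual `flake.nix` is not shown in this diff, so the exact mechanism and the `myhost` name are assumptions):

```nix
# flake.nix (sketch): extend nixpkgs.lib with ./lib and pass the extended
# lib to all modules, so custom functions appear as `lib.functionName`.
{
  outputs = { nixpkgs, ... }:
    let
      lib = nixpkgs.lib.extend (final: prev: import ./lib { lib = final; });
    in
    {
      nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
        specialArgs = { inherit lib; };
        modules = [ ./machines/myhost/configuration.nix ];
      };
    };
}
```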
## Code Comments
Only add comments that provide value beyond what the code already shows:
- Explain *why* something is done, not *what* is being done
- Document non-obvious constraints or gotchas
- Never add filler comments that repeat the code (e.g. `# Start the service` before a start command)
## Bash Commands
Do not redirect stderr to stdout (no `2>&1`). This can hide important output and errors.
Do not use `doas` or `sudo` - they will fail. Ask the user to run privileged commands themselves.
## Nix Commands
Use `--no-link` with `nix build` to avoid creating `result` symlinks in the working directory.
## Git Commits
Do not add "Co-Authored-By" lines to commit messages.


@@ -2,13 +2,101 @@
let
cfg = config.backup;
hostname = config.networking.hostName;
mkRespository = group: "s3:s3.us-west-004.backblazeb2.com/D22TgIt0-main-backup/${group}";
findmnt = "${pkgs.util-linux}/bin/findmnt";
mount = "${pkgs.util-linux}/bin/mount";
umount = "${pkgs.util-linux}/bin/umount";
btrfs = "${pkgs.btrfs-progs}/bin/btrfs";
zfs = "/run/current-system/sw/bin/zfs";
# Creates snapshots and bind mounts them over original paths within the
# service's mount namespace, so restic sees correct paths but reads frozen data
snapshotHelperFn = ''
snapshot_for_path() {
local group="$1" path="$2" action="$3"
local pathhash fstype
pathhash=$(echo -n "$path" | sha256sum | cut -c1-8)
fstype=$(${findmnt} -n -o FSTYPE -T "$path" 2>/dev/null || echo "unknown")
case "$fstype" in
zfs)
local dataset mount subpath snapname snappath
dataset=$(${findmnt} -n -o SOURCE -T "$path")
mount=$(${findmnt} -n -o TARGET -T "$path")
subpath=''${path#"$mount"}
[[ "$subpath" != /* ]] && subpath="/$subpath"
snapname="''${dataset}@restic-''${group}-''${pathhash}"
snappath="''${mount}/.zfs/snapshot/restic-''${group}-''${pathhash}''${subpath}"
case "$action" in
create)
${zfs} destroy "$snapname" 2>/dev/null || true
${zfs} snapshot "$snapname"
${mount} --bind "$snappath" "$path"
echo "$path"
;;
destroy)
${umount} "$path" 2>/dev/null || true
${zfs} destroy "$snapname" 2>/dev/null || true
;;
esac
;;
btrfs)
local mount subpath snapdir snappath
mount=$(${findmnt} -n -o TARGET -T "$path")
subpath=''${path#"$mount"}
[[ "$subpath" != /* ]] && subpath="/$subpath"
snapdir="/.restic-snapshots/''${group}-''${pathhash}"
snappath="''${snapdir}''${subpath}"
case "$action" in
create)
${btrfs} subvolume delete "$snapdir" 2>/dev/null || true
mkdir -p /.restic-snapshots
${btrfs} subvolume snapshot -r "$mount" "$snapdir" >&2
${mount} --bind "$snappath" "$path"
echo "$path"
;;
destroy)
${umount} "$path" 2>/dev/null || true
${btrfs} subvolume delete "$snapdir" 2>/dev/null || true
;;
esac
;;
*)
echo "No snapshot support for $fstype ($path), using original" >&2
[ "$action" = "create" ] && echo "$path"
;;
esac
}
'';
mkBackup = group: paths: {
repository = mkRespository group;
inherit paths;
dynamicFilesFrom = "cat /run/restic-backup-${group}/paths";
backupPrepareCommand = ''
mkdir -p /run/restic-backup-${group}
: > /run/restic-backup-${group}/paths
${snapshotHelperFn}
for path in ${lib.escapeShellArgs paths}; do
snapshot_for_path ${lib.escapeShellArg group} "$path" create >> /run/restic-backup-${group}/paths
done
'';
backupCleanupCommand = ''
${snapshotHelperFn}
for path in ${lib.escapeShellArgs paths}; do
snapshot_for_path ${lib.escapeShellArg group} "$path" destroy
done
rm -rf /run/restic-backup-${group}
'';
initialize = true;
@@ -21,11 +109,11 @@ let
''--exclude-if-present ".nobackup"''
];
# Keeps backups from up to 6 months ago
pruneOpts = [
"--keep-daily 7" # one backup for each of the last n days
"--keep-weekly 5" # one backup for each of the last n weeks
"--keep-monthly 6" # one backup for each of the last n months
];
environmentFile = "/run/agenix/backblaze-s3-backups";
@@ -64,12 +152,25 @@ in
};
config = lib.mkIf (cfg.group != null) {
assertions = lib.mapAttrsToList (group: _: {
assertion = config.systemd.services."restic-backups-${group}".enable;
message = "Expected systemd service 'restic-backups-${group}' not found. The nixpkgs restic module may have changed its naming convention.";
}) cfg.group;
services.restic.backups = lib.concatMapAttrs
(group: groupCfg: {
${group} = mkBackup group groupCfg.paths;
})
cfg.group;
# Mount namespace lets us bind mount snapshots over original paths,
# so restic backs up from frozen snapshots while recording correct paths
systemd.services = lib.concatMapAttrs
(group: _: {
"restic-backups-${group}".serviceConfig.PrivateMounts = true;
})
cfg.group;
age.secrets.backblaze-s3-backups.file = ../secrets/backblaze-s3-backups.age;
age.secrets.restic-password.file = ../secrets/restic-password.age;


@@ -14,6 +14,7 @@
./machine-info
./nix-builder.nix
./ssh.nix
./sandboxed-workspace
];
nix.flakes.enable = true;


@@ -12,6 +12,7 @@ in
./ping.nix
./tailscale.nix
./vpn.nix
./sandbox.nix
];
options.networking.ip_forward = mkEnableOption "Enable ip forwarding";

common/network/sandbox.nix (new file, 116 lines)

@@ -0,0 +1,116 @@
{ config, lib, ... }:
# Network configuration for sandboxed workspaces (VMs and containers)
# Creates a bridge network with NAT for isolated environments
with lib;
let
cfg = config.networking.sandbox;
in
{
options.networking.sandbox = {
enable = mkEnableOption "sandboxed workspace network bridge";
bridgeName = mkOption {
type = types.str;
default = "sandbox-br";
description = "Name of the bridge interface for sandboxed workspaces";
};
subnet = mkOption {
type = types.str;
default = "192.168.83.0/24";
description = "Subnet for sandboxed workspace network";
};
hostAddress = mkOption {
type = types.str;
default = "192.168.83.1";
description = "Host address on the sandbox bridge";
};
upstreamInterface = mkOption {
type = types.str;
description = "Upstream network interface for NAT";
};
};
config = mkIf cfg.enable {
networking.ip_forward = true;
# Create the bridge interface
systemd.network.netdevs."10-${cfg.bridgeName}" = {
netdevConfig = {
Kind = "bridge";
Name = cfg.bridgeName;
};
};
systemd.network.networks."10-${cfg.bridgeName}" = {
matchConfig.Name = cfg.bridgeName;
networkConfig = {
Address = "${cfg.hostAddress}/24";
DHCPServer = false;
IPv4Forwarding = true;
IPv6Forwarding = false;
IPMasquerade = "ipv4";
};
linkConfig.RequiredForOnline = "no";
};
# Automatically attach VM tap interfaces to the bridge
systemd.network.networks."11-vm" = {
matchConfig.Name = "vm-*";
networkConfig.Bridge = cfg.bridgeName;
linkConfig.RequiredForOnline = "no";
};
# Automatically attach container veth interfaces to the bridge
systemd.network.networks."11-container" = {
matchConfig.Name = "ve-*";
networkConfig.Bridge = cfg.bridgeName;
linkConfig.RequiredForOnline = "no";
};
# NAT configuration for sandboxed workspaces
networking.nat = {
enable = true;
internalInterfaces = [ cfg.bridgeName ];
externalInterface = cfg.upstreamInterface;
};
# Enable systemd-networkd (required for bridge setup)
systemd.network.enable = true;
# When NetworkManager handles primary networking, disable systemd-networkd-wait-online.
# The bridge is the only interface managed by systemd-networkd and it never reaches
# "online" state without connected workspaces. NetworkManager-wait-online.service already
# gates network-online.target for the primary interface.
# On pure systemd-networkd systems (no NM), we just ignore the bridge.
systemd.network.wait-online.enable =
!config.networking.networkmanager.enable;
systemd.network.wait-online.ignoredInterfaces =
lib.mkIf (!config.networking.networkmanager.enable) [ cfg.bridgeName ];
# If NetworkManager is enabled, tell it to ignore sandbox interfaces
# This allows systemd-networkd and NetworkManager to coexist
networking.networkmanager.unmanaged = [
"interface-name:${cfg.bridgeName}"
"interface-name:vm-*"
"interface-name:ve-*"
"interface-name:veth*"
];
# Make systemd-resolved listen on the bridge for workspace DNS queries.
# By default resolved only listens on 127.0.0.53 (localhost).
# DNSStubListenerExtra adds the bridge address so workspaces can use the host as DNS.
services.resolved.settings.Resolve.DNSStubListenerExtra = cfg.hostAddress;
# Allow DNS traffic from workspaces to the host
networking.firewall.interfaces.${cfg.bridgeName} = {
allowedTCPPorts = [ 53 ];
allowedUDPPorts = [ 53 ];
};
};
}


@@ -0,0 +1,146 @@
{ hostConfig, workspaceName, ip, networkInterface }:
# Base configuration shared by all sandboxed workspaces (VMs and containers)
# This provides common settings for networking, SSH, users, and packages
#
# Parameters:
# hostConfig - The host's NixOS config (for inputs, ssh keys, etc.)
# workspaceName - Name of the workspace (used as hostname)
# ip - Static IP address for the workspace
# networkInterface - Match config for systemd-networkd (e.g., { Type = "ether"; } or { Name = "host0"; })
{ config, lib, pkgs, ... }:
let
claudeConfigFile = pkgs.writeText "claude-config.json" (builtins.toJSON {
hasCompletedOnboarding = true;
theme = "dark";
projects = {
"/home/googlebot/workspace" = {
hasTrustDialogAccepted = true;
};
};
});
in
{
imports = [
../shell.nix
hostConfig.inputs.home-manager.nixosModules.home-manager
hostConfig.inputs.nix-index-database.nixosModules.default
];
nixpkgs.overlays = [
hostConfig.inputs.claude-code-nix.overlays.default
];
# Basic system configuration
system.stateVersion = "25.11";
# Set hostname to match the workspace name
networking.hostName = workspaceName;
# Networking with systemd-networkd
networking.useNetworkd = true;
systemd.network.enable = true;
# Enable resolved to populate /etc/resolv.conf from networkd's DNS settings
services.resolved.enable = true;
# Basic networking configuration
networking.useDHCP = false;
# Static IP configuration
# Uses the host as DNS server (host forwards to upstream DNS)
systemd.network.networks."20-workspace" = {
matchConfig = networkInterface;
networkConfig = {
Address = "${ip}/24";
Gateway = hostConfig.networking.sandbox.hostAddress;
DNS = [ hostConfig.networking.sandbox.hostAddress ];
};
};
# Disable firewall inside workspaces (we're behind NAT)
networking.firewall.enable = false;
# Enable SSH for access
services.openssh = {
enable = true;
settings = {
PasswordAuthentication = false;
PermitRootLogin = "prohibit-password";
};
};
# Use persistent SSH host keys from shared directory
services.openssh.hostKeys = lib.mkForce [
{
path = "/etc/ssh-host-keys/ssh_host_ed25519_key";
type = "ed25519";
}
];
# Basic system packages
environment.systemPackages = with pkgs; [
claude-code
kakoune
vim
git
htop
wget
curl
tmux
dnsutils
];
# User configuration
users.mutableUsers = false;
users.users.googlebot = {
isNormalUser = true;
extraGroups = [ "wheel" ];
shell = pkgs.fish;
openssh.authorizedKeys.keys = hostConfig.machines.ssh.userKeys;
};
security.doas.enable = true;
security.sudo.enable = false;
security.doas.extraRules = [
{ groups = [ "wheel" ]; noPass = true; }
];
# Minimal locale settings
i18n.defaultLocale = "en_US.UTF-8";
time.timeZone = "America/Los_Angeles";
# Enable flakes
nix.settings.experimental-features = [ "nix-command" "flakes" ];
# Make nixpkgs available in NIX_PATH and registry (like the NixOS ISO)
# This allows `nix-shell -p`, `nix repl '<nixpkgs>'`, etc. to work
nix.nixPath = [ "nixpkgs=${hostConfig.inputs.nixpkgs}" ];
nix.registry.nixpkgs.flake = hostConfig.inputs.nixpkgs;
# Enable fish shell
programs.fish.enable = true;
# Initialize Claude Code config on first boot (skips onboarding, trusts workspace)
systemd.services.claude-config-init = {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
User = "googlebot";
Group = "users";
};
script = ''
if [ ! -f /home/googlebot/claude-config/.claude.json ]; then
cp ${claudeConfigFile} /home/googlebot/claude-config/.claude.json
fi
'';
};
# Home Manager configuration
home-manager.useGlobalPkgs = true;
home-manager.useUserPackages = true;
home-manager.users.googlebot = import ./home.nix;
}


@@ -0,0 +1,74 @@
{ config, lib, pkgs, ... }:
# Container-specific configuration for sandboxed workspaces using systemd-nspawn
# This module is imported by default.nix for workspaces with type = "container"
with lib;
let
cfg = config.sandboxed-workspace;
hostConfig = config;
# Filter for container-type workspaces only
containerWorkspaces = filterAttrs (n: ws: ws.type == "container") cfg.workspaces;
in
{
config = mkIf (cfg.enable && containerWorkspaces != { }) {
# NixOS container module only sets restartIfChanged when autoStart=true
# Work around this by setting it directly on the systemd service
systemd.services = mapAttrs'
(name: ws: nameValuePair "container@${name}" {
restartIfChanged = lib.mkForce true;
restartTriggers = [
config.containers.${name}.path
config.environment.etc."nixos-containers/${name}.conf".source
];
})
containerWorkspaces;
# Convert container workspace configs to NixOS containers format
containers = mapAttrs
(name: ws: {
autoStart = ws.autoStart;
privateNetwork = true;
ephemeral = true;
restartIfChanged = true;
# Attach container's veth to the sandbox bridge
# This creates the veth pair and attaches host side to the bridge
hostBridge = config.networking.sandbox.bridgeName;
bindMounts = {
"/home/googlebot/workspace" = {
hostPath = "/home/googlebot/sandboxed/${name}/workspace";
isReadOnly = false;
};
"/etc/ssh-host-keys" = {
hostPath = "/home/googlebot/sandboxed/${name}/ssh-host-keys";
isReadOnly = false;
};
"/home/googlebot/claude-config" = {
hostPath = "/home/googlebot/sandboxed/${name}/claude-config";
isReadOnly = false;
};
};
config = { config, lib, pkgs, ... }: {
imports = [
(import ./base.nix {
inherit hostConfig;
workspaceName = name;
ip = ws.ip;
networkInterface = { Name = "eth0"; };
})
(import ws.config)
];
networking.useHostResolvConf = false;
nixpkgs.config.allowUnfree = true;
};
})
containerWorkspaces;
};
}


@@ -0,0 +1,164 @@
{ config, lib, pkgs, ... }:
# Unified sandboxed workspace module supporting both VMs and containers
# This module provides isolated development environments with shared configuration
with lib;
let
cfg = config.sandboxed-workspace;
in
{
imports = [
./vm.nix
./container.nix
./incus.nix
];
options.sandboxed-workspace = {
enable = mkEnableOption "sandboxed workspace management";
workspaces = mkOption {
type = types.attrsOf (types.submodule {
options = {
type = mkOption {
type = types.enum [ "vm" "container" "incus" ];
description = ''
Backend type for this workspace:
- "vm": microVM with cloud-hypervisor (more isolation, uses virtiofs)
- "container": systemd-nspawn via NixOS containers (less overhead, uses bind mounts)
- "incus": Incus/LXD container (unprivileged, better security than NixOS containers)
'';
};
config = mkOption {
type = types.path;
description = "Path to the workspace configuration file";
};
ip = mkOption {
type = types.str;
example = "192.168.83.10";
description = ''
Static IP address for this workspace on the microvm bridge network.
Configures the workspace's network interface and adds an entry to /etc/hosts
on the host so the workspace can be accessed by name (e.g., ssh workspace-example).
Must be in the 192.168.83.0/24 subnet (or whatever networking.sandbox.subnet is).
'';
};
hostKey = mkOption {
type = types.nullOr types.str;
default = null;
example = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...";
description = ''
SSH host public key for this workspace. If set, adds to programs.ssh.knownHosts
so the host automatically trusts the workspace without prompting.
Get the key from: ~/sandboxed/<name>/ssh-host-keys/ssh_host_ed25519_key.pub
'';
};
autoStart = mkOption {
type = types.bool;
default = false;
description = "Whether to automatically start this workspace on boot";
};
cid = mkOption {
type = types.nullOr types.int;
default = null;
description = ''
vsock Context Identifier for this workspace (VM-only, ignored for containers).
If null, auto-generated from workspace name.
Must be unique per host. Valid range: 3 to 4294967294.
See: https://man7.org/linux/man-pages/man7/vsock.7.html
'';
};
};
});
default = { };
description = "Sandboxed workspace configurations";
};
};
config = mkIf cfg.enable {
# Automatically enable sandbox networking when workspaces are defined
networking.sandbox.enable = mkIf (cfg.workspaces != { }) true;
# Add workspace hostnames to /etc/hosts so they can be accessed by name
networking.hosts = lib.mkMerge (lib.mapAttrsToList
(name: ws: {
${ws.ip} = [ "workspace-${name}" ];
})
cfg.workspaces);
# Add workspace SSH host keys to known_hosts so host trusts workspaces without prompting
programs.ssh.knownHosts = lib.mkMerge (lib.mapAttrsToList
(name: ws:
lib.optionalAttrs (ws.hostKey != null) {
"workspace-${name}" = {
publicKey = ws.hostKey;
extraHostNames = [ ws.ip ];
};
})
cfg.workspaces);
# Shell aliases for workspace management
environment.shellAliases = lib.mkMerge (lib.mapAttrsToList
(name: ws:
let
serviceName =
if ws.type == "vm" then "microvm@${name}"
else if ws.type == "incus" then "incus-workspace-${name}"
else "container@${name}";
in
{
"workspace_${name}" = "ssh googlebot@workspace-${name}";
"workspace_${name}_start" = "doas systemctl start ${serviceName}";
"workspace_${name}_stop" = "doas systemctl stop ${serviceName}";
"workspace_${name}_restart" = "doas systemctl restart ${serviceName}";
"workspace_${name}_status" = "doas systemctl status ${serviceName}";
})
cfg.workspaces);
# Automatically generate SSH host keys and directories for all workspaces
systemd.services = lib.mapAttrs'
(name: ws:
let
serviceName =
if ws.type == "vm" then "microvm@${name}"
else if ws.type == "incus" then "incus-workspace-${name}"
else "container@${name}";
in
lib.nameValuePair "workspace-${name}-setup" {
description = "Setup directories and SSH keys for workspace ${name}";
wantedBy = [ "multi-user.target" ];
before = [ "${serviceName}.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
# Create directories if they don't exist
mkdir -p /home/googlebot/sandboxed/${name}/workspace
mkdir -p /home/googlebot/sandboxed/${name}/ssh-host-keys
mkdir -p /home/googlebot/sandboxed/${name}/claude-config
# Fix ownership
chown -R googlebot:users /home/googlebot/sandboxed/${name}
# Generate SSH host key if it doesn't exist
if [ ! -f /home/googlebot/sandboxed/${name}/ssh-host-keys/ssh_host_ed25519_key ]; then
${pkgs.openssh}/bin/ssh-keygen -t ed25519 -N "" \
-f /home/googlebot/sandboxed/${name}/ssh-host-keys/ssh_host_ed25519_key
chown googlebot:users /home/googlebot/sandboxed/${name}/ssh-host-keys/ssh_host_ed25519_key*
echo "Generated SSH host key for workspace ${name}"
fi
'';
}
)
cfg.workspaces;
};
}


@@ -0,0 +1,50 @@
{ config, lib, pkgs, ... }:
# Home Manager configuration for sandboxed workspace user environment
# This sets up the shell and tools inside VMs and containers
{
home.username = "googlebot";
home.homeDirectory = "/home/googlebot";
home.stateVersion = "24.11";
programs.home-manager.enable = true;
# Shell configuration
programs.fish.enable = true;
programs.starship.enable = true;
programs.starship.enableFishIntegration = true;
programs.starship.settings.container.disabled = true;
# Basic command-line tools
programs.btop.enable = true;
programs.ripgrep.enable = true;
programs.eza.enable = true;
# Git configuration
programs.git = {
enable = true;
settings = {
user.name = lib.mkDefault "googlebot";
user.email = lib.mkDefault "zuckerberg@neet.dev";
};
};
# Shell aliases
home.shellAliases = {
ls = "eza";
la = "eza -la";
ll = "eza -l";
};
# Environment variables for Claude Code
home.sessionVariables = {
# Isolate Claude config to a specific directory on the host
CLAUDE_CONFIG_DIR = "/home/googlebot/claude-config";
};
# Additional packages for development
home.packages = with pkgs; [
# Add packages as needed per workspace
];
}


@@ -0,0 +1,179 @@
{ config, lib, pkgs, ... }:
# Incus-specific configuration for sandboxed workspaces
# Creates fully declarative Incus containers from NixOS configurations
with lib;
let
cfg = config.sandboxed-workspace;
hostConfig = config;
incusWorkspaces = filterAttrs (n: ws: ws.type == "incus") cfg.workspaces;
# Build a NixOS LXC image for a workspace
mkContainerImage = name: ws:
let
nixpkgs = hostConfig.inputs.nixpkgs;
containerSystem = nixpkgs.lib.nixosSystem {
modules = [
(import ./base.nix {
inherit hostConfig;
workspaceName = name;
ip = ws.ip;
networkInterface = { Name = "eth0"; };
})
(import ws.config)
({ config, lib, pkgs, ... }: {
nixpkgs.hostPlatform = hostConfig.currentSystem;
boot.isContainer = true;
networking.useHostResolvConf = false;
nixpkgs.config.allowUnfree = true;
environment.systemPackages = [
(lib.hiPrio (pkgs.writeShellScriptBin "claude" ''
exec ${pkgs.claude-code}/bin/claude --dangerously-skip-permissions "$@"
''))
];
})
];
};
in
{
rootfs = containerSystem.config.system.build.images.lxc;
metadata = containerSystem.config.system.build.images.lxc-metadata;
toplevel = containerSystem.config.system.build.toplevel;
};
mkIncusService = name: ws:
let
images = mkContainerImage name ws;
hash = builtins.substring 0 12 (builtins.hashString "sha256" "${images.rootfs}");
imageName = "nixos-workspace-${name}-${hash}";
containerName = "workspace-${name}";
bridgeName = config.networking.sandbox.bridgeName;
mac = lib.mkMac "incus-${name}";
addDevices = ''
incus config device add ${containerName} eth0 nic nictype=bridged parent=${bridgeName} hwaddr=${mac}
incus config device add ${containerName} workspace disk source=/home/googlebot/sandboxed/${name}/workspace path=/home/googlebot/workspace shift=true
incus config device add ${containerName} ssh-keys disk source=/home/googlebot/sandboxed/${name}/ssh-host-keys path=/etc/ssh-host-keys shift=true
incus config device add ${containerName} claude-config disk source=/home/googlebot/sandboxed/${name}/claude-config path=/home/googlebot/claude-config shift=true
'';
in
{
description = "Incus workspace ${name}";
after = [ "incus.service" "incus-preseed.service" "workspace-${name}-setup.service" ];
requires = [ "incus.service" ];
wants = [ "workspace-${name}-setup.service" ];
wantedBy = optional ws.autoStart "multi-user.target";
path = [ config.virtualisation.incus.package pkgs.gnutar pkgs.xz pkgs.util-linux ];
restartTriggers = [ images.rootfs images.metadata ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
};
script = ''
set -euo pipefail
# Serialize incus operations - concurrent container creation causes race conditions
exec 9>/run/incus-workspace.lock
flock -x 9
# Import image if not present
if ! incus image list --format csv | grep -q "${imageName}"; then
metadata_tarball=$(echo ${images.metadata}/tarball/*.tar.xz)
rootfs_tarball=$(echo ${images.rootfs}/tarball/*.tar.xz)
incus image import "$metadata_tarball" "$rootfs_tarball" --alias ${imageName}
# Clean up old images for this workspace
incus image list --format csv | grep "nixos-workspace-${name}-" | grep -v "${imageName}" | cut -d, -f2 | while read old_image; do
incus image delete "$old_image" || true
done || true
fi
# Always recreate container for ephemeral behavior
incus stop ${containerName} --force 2>/dev/null || true
incus delete ${containerName} --force 2>/dev/null || true
incus init ${imageName} ${containerName}
${addDevices}
incus start ${containerName}
# Wait for container to start
for i in $(seq 1 30); do
if incus list --format csv | grep -q "^${containerName},RUNNING"; then
exit 0
fi
sleep 1
done
exit 1
'';
preStop = ''
exec 9>/run/incus-workspace.lock
flock -x 9
incus stop ${containerName} --force 2>/dev/null || true
incus delete ${containerName} --force 2>/dev/null || true
# Clean up all images for this workspace
incus image list --format csv 2>/dev/null | grep "nixos-workspace-${name}-" | cut -d, -f2 | while read img; do
incus image delete "$img" 2>/dev/null || true
done
'';
};
in
{
config = mkIf (cfg.enable && incusWorkspaces != { }) {
virtualisation.incus.enable = true;
networking.nftables.enable = true;
virtualisation.incus.preseed = {
storage_pools = [{
name = "default";
driver = "dir";
config = {
source = "/var/lib/incus/storage-pools/default";
};
}];
profiles = [{
name = "default";
config = {
"security.privileged" = "false";
"security.idmap.isolated" = "true";
};
devices = {
root = {
path = "/";
pool = "default";
type = "disk";
};
};
}];
};
systemd.services = mapAttrs'
(name: ws: nameValuePair "incus-workspace-${name}" (mkIncusService name ws))
incusWorkspaces;
# Extra alias for incus shell access (ssh is also available via default.nix aliases)
environment.shellAliases = mkMerge (mapAttrsToList
(name: ws: {
"workspace_${name}_shell" = "doas incus exec workspace-${name} -- su -l googlebot";
})
incusWorkspaces);
};
}


@@ -0,0 +1,140 @@
{ config, lib, pkgs, ... }:
# VM-specific configuration for sandboxed workspaces using microvm.nix
# This module is imported by default.nix for workspaces with type = "vm"
with lib;
let
cfg = config.sandboxed-workspace;
hostConfig = config;
# Generate a deterministic vsock CID from workspace name.
#
# vsock (virtual sockets) enables host-VM communication without networking.
# cloud-hypervisor uses vsock for systemd-notify integration: when a VM finishes
# booting, systemd sends READY=1 to the host via vsock, allowing the host's
# microvm@ service to accurately track VM boot status instead of guessing.
#
# Each VM needs a unique CID (Context Identifier). Reserved CIDs per vsock(7):
# - VMADDR_CID_HYPERVISOR (0): reserved for hypervisor
# - VMADDR_CID_LOCAL (1): loopback address
# - VMADDR_CID_HOST (2): host address
# See: https://man7.org/linux/man-pages/man7/vsock.7.html
# https://docs.kernel.org/virt/kvm/vsock.html
#
# We auto-generate from SHA256 hash to ensure uniqueness without manual assignment.
# Range: 100 - 16777315 (offset avoids reserved CIDs and leaves 3-99 for manual use)
nameToCid = name:
let
hash = builtins.hashString "sha256" name;
hexPart = builtins.substring 0 6 hash;
in
100 + (builtins.foldl'
(acc: c: acc * 16 + (
if c == "a" then 10
else if c == "b" then 11
else if c == "c" then 12
else if c == "d" then 13
else if c == "e" then 14
else if c == "f" then 15
else lib.strings.toInt c
)) 0
(lib.stringToCharacters hexPart));
# Filter for VM-type workspaces only
vmWorkspaces = filterAttrs (n: ws: ws.type == "vm") cfg.workspaces;
# Generate VM configuration for a workspace
mkVmConfig = name: ws: {
inherit pkgs; # Use host's pkgs (includes allowUnfree)
config = import ws.config;
specialArgs = { inputs = hostConfig.inputs; };
extraModules = [
(import ./base.nix {
inherit hostConfig;
workspaceName = name;
ip = ws.ip;
networkInterface = { Type = "ether"; };
})
{
environment.systemPackages = [
(lib.hiPrio (pkgs.writeShellScriptBin "claude" ''
exec ${pkgs.claude-code}/bin/claude --dangerously-skip-permissions "$@"
''))
];
# MicroVM specific configuration
microvm = {
# Use cloud-hypervisor for better performance
hypervisor = lib.mkDefault "cloud-hypervisor";
# Resource allocation
vcpu = 8;
mem = 4096; # 4GB RAM
# Disk for writable overlay
volumes = [{
image = "overlay.img";
mountPoint = "/nix/.rw-store";
size = 8192; # 8GB
}];
# Shared directories with host using virtiofs
shares = [
{
# Share the host's /nix/store for accessing packages
proto = "virtiofs";
tag = "ro-store";
source = "/nix/store";
mountPoint = "/nix/.ro-store";
}
{
proto = "virtiofs";
tag = "workspace";
source = "/home/googlebot/sandboxed/${name}/workspace";
mountPoint = "/home/googlebot/workspace";
}
{
proto = "virtiofs";
tag = "ssh-host-keys";
source = "/home/googlebot/sandboxed/${name}/ssh-host-keys";
mountPoint = "/etc/ssh-host-keys";
}
{
proto = "virtiofs";
tag = "claude-config";
source = "/home/googlebot/sandboxed/${name}/claude-config";
mountPoint = "/home/googlebot/claude-config";
}
];
# Writeable overlay for /nix/store
writableStoreOverlay = "/nix/.rw-store";
# TAP interface for bridged networking
# The interface name "vm-*" matches the pattern in common/network/microvm.nix
# which automatically attaches it to the microbr bridge
interfaces = [{
type = "tap";
id = "vm-${name}";
mac = lib.mkMac "vm-${name}";
}];
# Enable vsock for systemd-notify integration
vsock.cid =
if ws.cid != null
then ws.cid
else nameToCid name;
};
}
];
autostart = ws.autoStart;
};
in
{
config = mkIf (cfg.enable && vmWorkspaces != { }) {
# Convert VM workspace configs to microvm.nix format
microvm.vms = mapAttrs mkVmConfig vmWorkspaces;
};
}
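The `nameToCid` helper above folds the first 6 hex characters of a SHA-256 hash into an integer and offsets it by 100. As a sanity check of that scheme (a hypothetical Python sketch, not part of the repo), the same derivation can be written as:

```python
import hashlib

def name_to_cid(name: str) -> int:
    """Mirror of the Nix nameToCid: sha256 of the name, first 6 hex chars, offset by 100."""
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
    return 100 + int(digest[:6], 16)

cid = name_to_cid("example")
assert cid == name_to_cid("example")   # deterministic: same name, same CID
assert 100 <= cid <= 100 + 16**6 - 1   # always inside the documented 100..16777315 range
```

Because 6 hex digits cover 0..16777215, every generated CID lands in 100..16777315, clear of the reserved CIDs 0-2 and the manual range 3-99.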

flake.lock generated
View File

@@ -43,7 +43,30 @@
       "type": "gitlab"
     }
   },
-  "dailybuild_modules": {
+  "claude-code-nix": {
+    "inputs": {
+      "flake-utils": [
+        "flake-utils"
+      ],
+      "nixpkgs": [
+        "nixpkgs"
+      ]
+    },
+    "locked": {
+      "lastModified": 1770491193,
+      "narHash": "sha256-zdnWeXmPZT8BpBo52s4oansT1Rq0SNzksXKpEcMc5lE=",
+      "owner": "sadjow",
+      "repo": "claude-code-nix",
+      "rev": "f68a2683e812d1e4f9a022ff3e0206d46347d019",
+      "type": "github"
+    },
+    "original": {
+      "owner": "sadjow",
+      "repo": "claude-code-nix",
+      "type": "github"
+    }
+  },
+  "dailybot": {
     "inputs": {
       "flake-utils": [
         "flake-utils"
@@ -219,6 +242,27 @@
       "type": "github"
     }
   },
+  "microvm": {
+    "inputs": {
+      "nixpkgs": [
+        "nixpkgs"
+      ],
+      "spectrum": "spectrum"
+    },
+    "locked": {
+      "lastModified": 1770310890,
+      "narHash": "sha256-lyWAs4XKg3kLYaf4gm5qc5WJrDkYy3/qeV5G733fJww=",
+      "owner": "astro",
+      "repo": "microvm.nix",
+      "rev": "68c9f9c6ca91841f04f726a298c385411b7bfcd5",
+      "type": "github"
+    },
+    "original": {
+      "owner": "astro",
+      "repo": "microvm.nix",
+      "type": "github"
+    }
+  },
   "nix-index-database": {
     "inputs": {
       "nixpkgs": [
@@ -239,42 +283,6 @@
       "type": "github"
     }
   },
-  "nixlib": {
-    "locked": {
-      "lastModified": 1736643958,
-      "narHash": "sha256-tmpqTSWVRJVhpvfSN9KXBvKEXplrwKnSZNAoNPf/S/s=",
-      "owner": "nix-community",
-      "repo": "nixpkgs.lib",
-      "rev": "1418bc28a52126761c02dd3d89b2d8ca0f521181",
-      "type": "github"
-    },
-    "original": {
-      "owner": "nix-community",
-      "repo": "nixpkgs.lib",
-      "type": "github"
-    }
-  },
-  "nixos-generators": {
-    "inputs": {
-      "nixlib": "nixlib",
-      "nixpkgs": [
-        "nixpkgs"
-      ]
-    },
-    "locked": {
-      "lastModified": 1764234087,
-      "narHash": "sha256-NHF7QWa0ZPT8hsJrvijREW3+nifmF2rTXgS2v0tpcEA=",
-      "owner": "nix-community",
-      "repo": "nixos-generators",
-      "rev": "032a1878682fafe829edfcf5fdfad635a2efe748",
-      "type": "github"
-    },
-    "original": {
-      "owner": "nix-community",
-      "repo": "nixos-generators",
-      "type": "github"
-    }
-  },
   "nixos-hardware": {
     "locked": {
       "lastModified": 1767185284,
@@ -310,13 +318,14 @@
   "root": {
     "inputs": {
       "agenix": "agenix",
-      "dailybuild_modules": "dailybuild_modules",
+      "claude-code-nix": "claude-code-nix",
+      "dailybot": "dailybot",
       "deploy-rs": "deploy-rs",
       "flake-compat": "flake-compat",
       "flake-utils": "flake-utils",
       "home-manager": "home-manager",
+      "microvm": "microvm",
       "nix-index-database": "nix-index-database",
-      "nixos-generators": "nixos-generators",
       "nixos-hardware": "nixos-hardware",
       "nixpkgs": "nixpkgs",
       "simple-nixos-mailserver": "simple-nixos-mailserver",
@@ -349,6 +358,22 @@
       "type": "gitlab"
     }
   },
+  "spectrum": {
+    "flake": false,
+    "locked": {
+      "lastModified": 1759482047,
+      "narHash": "sha256-H1wiXRQHxxPyMMlP39ce3ROKCwI5/tUn36P8x6dFiiQ=",
+      "ref": "refs/heads/main",
+      "rev": "c5d5786d3dc938af0b279c542d1e43bce381b4b9",
+      "revCount": 996,
+      "type": "git",
+      "url": "https://spectrum-os.org/git/spectrum"
+    },
+    "original": {
+      "type": "git",
+      "url": "https://spectrum-os.org/git/spectrum"
+    }
+  },
   "systems": {
     "locked": {
       "lastModified": 1681028828,

View File

@@ -3,11 +3,6 @@
     # nixpkgs
     nixpkgs.url = "github:NixOS/nixpkgs/master";
-    nixos-generators = {
-      url = "github:nix-community/nixos-generators";
-      inputs.nixpkgs.follows = "nixpkgs";
-    };
-
     # Common Utils Among flake inputs
     systems.url = "github:nix-systems/default";
     flake-utils = {
@@ -48,7 +43,7 @@
     };

     # Dailybot
-    dailybuild_modules = {
+    dailybot = {
       url = "git+https://git.neet.dev/zuckerberg/dailybot.git";
       inputs = {
         nixpkgs.follows = "nixpkgs";
@@ -71,6 +66,21 @@
       url = "github:Mic92/nix-index-database";
       inputs.nixpkgs.follows = "nixpkgs";
     };
+
+    # MicroVM support
+    microvm = {
+      url = "github:astro/microvm.nix";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
+
+    # Up to date claude-code
+    claude-code-nix = {
+      url = "github:sadjow/claude-code-nix";
+      inputs = {
+        nixpkgs.follows = "nixpkgs";
+        flake-utils.follows = "flake-utils";
+      };
+    };
   };

   outputs = { self, nixpkgs, ... }@inputs:
@@ -88,13 +98,17 @@
           ./common
           simple-nixos-mailserver.nixosModule
           agenix.nixosModules.default
-          dailybuild_modules.nixosModule
+          dailybot.nixosModule
           nix-index-database.nixosModules.default
           home-manager.nixosModules.home-manager
+          microvm.nixosModules.host
           self.nixosModules.kernel-modules
           ({ lib, ... }: {
             config = {
-              nixpkgs.overlays = [ self.overlays.default ];
+              nixpkgs.overlays = [
+                self.overlays.default
+                inputs.claude-code-nix.overlays.default
+              ];

               environment.systemPackages = [
                 agenix.packages.${system}.agenix
@@ -142,25 +156,35 @@
       nixpkgs.lib.mapAttrs
         (hostname: cfg:
           mkSystem cfg.arch nixpkgs cfg.configurationPath hostname)
-        machineHosts;
-
-      packages =
-        with inputs;
+        machineHosts
+      //
+      (
         let
-          mkEphemeral = system: format: nixos-generators.nixosGenerate {
+          mkEphemeral = system: nixpkgs.lib.nixosSystem {
             inherit system;
-            inherit format;
             modules = [
               ./machines/ephemeral/minimal.nix
-              nix-index-database.nixosModules.default
+              inputs.nix-index-database.nixosModules.default
             ];
           };
         in
         {
-          "x86_64-linux".kexec = mkEphemeral "x86_64-linux" "kexec-bundle";
-          "x86_64-linux".iso = mkEphemeral "x86_64-linux" "iso";
-          "aarch64-linux".kexec = mkEphemeral "aarch64-linux" "kexec-bundle";
-          "aarch64-linux".iso = mkEphemeral "aarch64-linux" "iso";
-        };
+          ephemeral-x86_64 = mkEphemeral "x86_64-linux";
+          ephemeral-aarch64 = mkEphemeral "aarch64-linux";
+        }
+      );
+
+      # kexec produces a tarball; for a self-extracting bundle see:
+      # https://github.com/nix-community/nixos-generators/blob/master/formats/kexec.nix#L60
+      packages = {
+        "x86_64-linux" = {
+          kexec = self.nixosConfigurations.ephemeral-x86_64.config.system.build.images.kexec;
+          iso = self.nixosConfigurations.ephemeral-x86_64.config.system.build.images.iso;
+        };
+        "aarch64-linux" = {
+          kexec = self.nixosConfigurations.ephemeral-aarch64.config.system.build.images.kexec;
+          iso = self.nixosConfigurations.ephemeral-aarch64.config.system.build.images.iso;
+        };
+      };

       overlays.default = import ./overlays { inherit inputs; };

View File

@@ -103,6 +103,7 @@ in
   };

   home.packages = lib.mkIf thisMachineIsPersonal [
+    pkgs.claude-code
     pkgs.dotnetCorePackages.dotnet_9.sdk # For Godot-Mono VSCode-Extension CSharp
   ];
 }
} }

View File

@@ -53,4 +53,13 @@ with lib;
       getElem = x: y: elemAt (elemAt ll y) x;
     in
     genList (y: genList (x: f x y (getElem x y)) innerSize) outerSize;
+
+  # Generate a deterministic MAC address from a name
+  # Uses locally administered unicast range (02:xx:xx:xx:xx:xx)
+  mkMac = name:
+    let
+      hash = builtins.hashString "sha256" name;
+      octets = map (i: builtins.substring i 2 hash) [ 0 2 4 6 8 ];
+    in
+    "02:${builtins.concatStringsSep ":" octets}";
 }
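The `mkMac` helper above derives five octets from a SHA-256 hash and prefixes them with `02`, which sets the locally administered bit while keeping the address unicast. The same scheme, mirrored as a hypothetical Python sketch (not part of the repo, for illustration only):

```python
import hashlib
import re

def mk_mac(name: str) -> str:
    """Mirror of the Nix mkMac: '02:' plus five octets taken from sha256(name)."""
    digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
    octets = [digest[i:i + 2] for i in range(0, 10, 2)]  # hex chars 0..9, pairwise
    return "02:" + ":".join(octets)

mac = mk_mac("vm-example")
assert mac == mk_mac("vm-example")                # deterministic per interface name
assert re.fullmatch(r"02(:[0-9a-f]{2}){5}", mac)  # locally administered unicast format
```

Deriving the MAC from the interface name means a recreated VM or container reuses the same address, which is what prevents stale ARP cache entries on the bridge.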

View File

@@ -10,6 +10,9 @@
   nix.gc.automatic = lib.mkForce false;

+  # Upstream interface for sandbox networking (NAT)
+  networking.sandbox.upstreamInterface = lib.mkDefault "enp191s0";
+
   environment.systemPackages = with pkgs; [
     system76-keyboard-configurator
   ];

View File

@@ -0,0 +1,20 @@
{ config, lib, pkgs, ... }:
# Test container workspace configuration
#
# Add to sandboxed-workspace.workspaces in machines/fry/default.nix:
# sandboxed-workspace.workspaces.test-container = {
# type = "container" OR "incus";
# config = ./workspaces/test-container.nix;
# ip = "192.168.83.50";
# };
#
# The workspace name ("test-container") becomes the hostname automatically.
# The IP is configured in default.nix, not here.
{
# Install packages as needed
environment.systemPackages = with pkgs; [
# Add packages here
];
}

View File

@@ -0,0 +1,23 @@
{ config, lib, pkgs, ... }:
# Example VM workspace configuration
#
# Add to sandboxed-workspace.workspaces in machines/fry/default.nix:
# sandboxed-workspace.workspaces.example = {
# type = "vm";
# config = ./workspaces/example.nix;
# ip = "192.168.83.10";
# };
#
# The workspace name ("example") becomes the hostname automatically.
# The IP is configured in default.nix, not here.
{
# Install packages as needed
environment.systemPackages = with pkgs; [
# Add packages here
];
# Additional shares beyond the standard ones (workspace, ssh-host-keys, claude-config):
# microvm.shares = [ ... ];
}

View File

@@ -254,7 +254,7 @@
   ];
   tailscaleAuth = {
-    enable = true;
+    enable = false; # Disabled for now because it doesn't work with tailscale's ACL tagged groups
     virtualHosts = [
       "bazarr.s0.neet.dev"
       "radarr.s0.neet.dev"

View File

@@ -0,0 +1,235 @@
# Create Workspace Skill
This skill enables you to create new ephemeral sandboxed workspaces for isolated development environments. Workspaces can be either VMs (using microvm.nix) or containers (using systemd-nspawn).
## When to use this skill
Use this skill when:
- Creating a new isolated development environment
- Setting up a workspace for a specific project
- Running AI coding agents safely in a clean environment
- Testing something without affecting the host system
## Choosing between VM and Container
| Feature | VM (`type = "vm"`) | Container (`type = "container"`) |
|---------|-------------------|----------------------------------|
| Isolation | Full kernel isolation | Shared kernel with namespaces |
| Overhead | Higher (separate kernel) | Lower (process-level) |
| Startup time | Slower | Faster |
| Storage | virtiofs shares | bind mounts |
| Use case | Untrusted code, kernel testing | General development |
**Recommendation**: Use containers for most development work. Use VMs when you need stronger isolation or are testing potentially dangerous code.
## How to create a workspace
Follow these steps to create a new workspace:
### 1. Choose workspace name, type, and IP address
- Workspace name should be descriptive (e.g., "myproject", "testing", "nixpkgs-contrib")
- Type should be "vm" or "container"
- IP address should be in the 192.168.83.x range (192.168.83.10-254)
- Check existing workspaces in `machines/fry/default.nix` to avoid IP conflicts
### 2. Create workspace configuration file
Create `machines/fry/workspaces/<name>.nix`:
```nix
{ config, lib, pkgs, ... }:
# The workspace name becomes the hostname automatically.
# The IP is configured in default.nix, not here.
{
# Install packages as needed
environment.systemPackages = with pkgs; [
# Add packages here
];
# Additional configuration as needed
}
```
The module automatically configures:
- **Hostname**: Set to the workspace name from `sandboxed-workspace.workspaces.<name>`
- **Static IP**: From the `ip` option
- **DNS**: Uses the host as DNS server
- **Network**: TAP interface (VM) or veth pair (container) on the bridge
- **Standard shares**: workspace, ssh-host-keys, claude-config
### 3. Register workspace in machines/fry/default.nix
Add the workspace to the `sandboxed-workspace.workspaces` attribute set:
```nix
sandboxed-workspace = {
enable = true;
workspaces.<name> = {
type = "vm"; # or "container"
config = ./workspaces/<name>.nix;
ip = "192.168.83.XX"; # Choose unique IP
autoStart = false; # optional, defaults to false
};
};
```
### 4. Optional: Pre-create workspace with project
If you want to clone a repository before deployment:
```bash
mkdir -p ~/sandboxed/<name>/workspace
cd ~/sandboxed/<name>/workspace
git clone <repository-url>
```
Note: Directories and SSH keys are auto-created on first deployment if they don't exist.
### 5. Verify configuration builds
```bash
nix build .#nixosConfigurations.fry.config.system.build.toplevel --dry-run
```
### 6. Deploy the configuration
```bash
doas nixos-rebuild switch --flake .#fry
```
### 7. Start the workspace
```bash
# Using the shell alias:
workspace_<name>_start
# Or manually:
doas systemctl start microvm@<name> # for VMs
doas systemctl start container@<name> # for containers
```
### 8. Access the workspace
SSH into the workspace by name (added to /etc/hosts automatically):
```bash
# Using the shell alias:
workspace_<name>
# Or manually:
ssh googlebot@workspace-<name>
```
Or by IP:
```bash
ssh googlebot@192.168.83.XX
```
## Managing workspaces
### Shell aliases
For each workspace, these aliases are automatically created:
- `workspace_<name>` - SSH into the workspace
- `workspace_<name>_start` - Start the workspace
- `workspace_<name>_stop` - Stop the workspace
- `workspace_<name>_restart` - Restart the workspace
- `workspace_<name>_status` - Show workspace status
### Check workspace status
```bash
workspace_<name>_status
```
### Stop workspace
```bash
workspace_<name>_stop
```
### View workspace logs
```bash
doas journalctl -u microvm@<name> # for VMs
doas journalctl -u container@<name> # for containers
```
### List running workspaces
```bash
doas systemctl list-units 'microvm@*' 'container@*'
```
## Example workflow
Creating a VM workspace named "nixpkgs-dev":
```bash
# 1. Create machines/fry/workspaces/nixpkgs-dev.nix (minimal, just packages if needed)
# 2. Update machines/fry/default.nix:
# sandboxed-workspace.workspaces.nixpkgs-dev = {
# type = "vm";
# config = ./workspaces/nixpkgs-dev.nix;
# ip = "192.168.83.20";
# };
# 3. Build and deploy (auto-creates directories and SSH keys)
doas nixos-rebuild switch --flake .#fry
# 4. Optional: Clone repository into workspace
mkdir -p ~/sandboxed/nixpkgs-dev/workspace
cd ~/sandboxed/nixpkgs-dev/workspace
git clone https://github.com/NixOS/nixpkgs.git
# 5. Start the workspace
workspace_nixpkgs-dev_start
# 6. SSH into the workspace
workspace_nixpkgs-dev
```
Creating a container workspace named "quick-test":
```bash
# 1. Create machines/fry/workspaces/quick-test.nix
# 2. Update machines/fry/default.nix:
# sandboxed-workspace.workspaces.quick-test = {
# type = "container";
# config = ./workspaces/quick-test.nix;
# ip = "192.168.83.30";
# };
# 3. Build and deploy
doas nixos-rebuild switch --flake .#fry
# 4. Start and access
workspace_quick-test_start
workspace_quick-test
```
## Directory structure
Workspaces store persistent data in `~/sandboxed/<name>/`:
```
~/sandboxed/<name>/
├── workspace/ # Shared workspace directory
├── ssh-host-keys/ # Persistent SSH host keys
└── claude-config/ # Claude Code configuration
```
## Notes
- Workspaces are ephemeral - only data in shared directories persists
- VMs have isolated nix store via overlay
- Containers share the host's nix store (read-only)
- SSH host keys persist across workspace rebuilds
- Claude config directory is isolated per workspace
- Workspaces can access the internet via NAT through the host
- DNS queries go through the host (uses host's DNS)
- Default VM resources: 8 vCPUs, 4GB RAM, 8GB disk overlay
- Containers have no resource limits by default