Compare commits

31 commits: `3d08a3e9bc...stage`

| SHA1 |
|---|
| facaa261bc |
| 1d915f9524 |
| 73633eaddc |
| 6a0540dddd |
| ce9bda8a0b |
| 1dd1b420d5 |
| 59623c8a3b |
| f2f5761c83 |
| 412dd12b5a |
| 684851d641 |
| 4cf50b5fb1 |
| 288a2841aa |
| 0589ca5748 |
| a4c5cb589a |
| a697ea10ad |
| 200d5a5d22 |
| 339eac52c6 |
| bab4b3ff8e |
| 54ab576914 |
| c84c0716ce |
| a921f40644 |
| a6c17164fa |
| 9df8390f1f |
| 156f0183bd |
| 8b92e51ef7 |
| 7798872bbf |
| cf41285cb8 |
| 5a0a525f64 |
| 9154595910 |
| 1b92363b08 |
| 136f024cf0 |
`.claude/skills/skill-creator/LICENSE.txt` (new file, 202 lines)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
`.claude/skills/skill-creator/SKILL.md` (new file, 357 lines)

@@ -0,0 +1,357 @@
---
name: skill-creator
description: Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations.
license: Complete terms in LICENSE.txt
---

# Skill Creator

This skill provides guidance for creating effective skills.

## About Skills

Skills are modular, self-contained packages that extend Claude's capabilities by providing specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific domains or tasks—they transform Claude from a general-purpose agent into a specialized agent equipped with procedural knowledge that no model can fully possess.

### What Skills Provide

1. Specialized workflows - Multi-step procedures for specific domains
2. Tool integrations - Instructions for working with specific file formats or APIs
3. Domain expertise - Company-specific knowledge, schemas, business logic
4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks

## Core Principles

### Concise is Key

The context window is a public good. Skills share the context window with everything else Claude needs: system prompt, conversation history, other Skills' metadata, and the actual user request.

**Default assumption: Claude is already very smart.** Only add context Claude doesn't already have. Challenge each piece of information: "Does Claude really need this explanation?" and "Does this paragraph justify its token cost?"

Prefer concise examples over verbose explanations.

### Set Appropriate Degrees of Freedom

Match the level of specificity to the task's fragility and variability:

**High freedom (text-based instructions)**: Use when multiple approaches are valid, decisions depend on context, or heuristics guide the approach.

**Medium freedom (pseudocode or scripts with parameters)**: Use when a preferred pattern exists, some variation is acceptable, or configuration affects behavior.

**Low freedom (specific scripts, few parameters)**: Use when operations are fragile and error-prone, consistency is critical, or a specific sequence must be followed.

Think of Claude as exploring a path: a narrow bridge with cliffs needs specific guardrails (low freedom), while an open field allows many routes (high freedom).

### Anatomy of a Skill

Every skill consists of a required SKILL.md file and optional bundled resources:

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter metadata (required)
│   │   ├── name: (required)
│   │   ├── description: (required)
│   │   └── compatibility: (optional, rarely needed)
│   └── Markdown instructions (required)
└── Bundled Resources (optional)
    ├── scripts/ - Executable code (Python/Bash/etc.)
    ├── references/ - Documentation intended to be loaded into context as needed
    └── assets/ - Files used in output (templates, icons, fonts, etc.)
```

#### SKILL.md (required)

Every SKILL.md consists of:

- **Frontmatter** (YAML): Contains `name` and `description` fields (required), plus optional fields like `license`, `metadata`, and `compatibility`. Only `name` and `description` are read by Claude to determine when the skill triggers, so be clear and comprehensive about what the skill is and when it should be used. The `compatibility` field is for noting environment requirements (target product, system packages, etc.), but most skills don't need it.
- **Body** (Markdown): Instructions and guidance for using the skill. Only loaded AFTER the skill triggers (if at all).
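For example, a minimal frontmatter might look like this (the skill name and description here are hypothetical, for illustration only):

```yaml
---
name: pdf-editor
description: Rotate, merge, and split PDF files. Use when the user asks to modify a PDF document, e.g. "rotate this PDF" or "merge these two PDFs".
---
```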
#### Bundled Resources (optional)

##### Scripts (`scripts/`)

Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.

- **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Claude for patching or environment-specific adjustments
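To illustrate the kind of deterministic helper that belongs in `scripts/`, here is a minimal sketch of a bundled script. The filename and behavior are hypothetical, not part of this repository:

```python
#!/usr/bin/env python3
"""Hypothetical bundled script: list the headings in a Markdown file.

Stored in a skill's scripts/ directory so the same logic is not
rewritten on every invocation.
"""
import re
import sys


def extract_headings(text: str) -> list[str]:
    """Return ATX headings ("# ..." lines), skipping fenced code blocks."""
    headings = []
    in_fence = False
    for line in text.splitlines():
        if line.startswith("```"):
            in_fence = not in_fence  # toggle on fence open/close
            continue
        if not in_fence and re.match(r"#{1,6}\s", line):
            headings.append(line.strip())
    return headings


if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], encoding="utf-8") as f:
        print("\n".join(extract_headings(f.read())))
```

A skill would run this as `python scripts/extract_headings.py SKILL.md`, getting the same output every time without loading the script itself into context.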
##### References (`references/`)

Documentation and reference material intended to be loaded as needed into context to inform Claude's process and thinking.

- **When to include**: For documentation that Claude should reference while working
- **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for a company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
- **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
- **Benefits**: Keeps SKILL.md lean, loaded only when Claude determines it's needed
- **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
- **Avoid duplication**: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill—this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.

##### Assets (`assets/`)

Files not intended to be loaded into context, but rather used within the output Claude produces.

- **When to include**: When the skill needs files that will be used in the final output
- **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Claude to use files without loading them into context

#### What Not to Include in a Skill

A skill should only contain essential files that directly support its functionality. Do NOT create extraneous documentation or auxiliary files, such as:

- README.md
- INSTALLATION_GUIDE.md
- QUICK_REFERENCE.md
- CHANGELOG.md

The skill should contain only the information an AI agent needs to do the job at hand. It should not contain auxiliary context about the process that went into creating it, setup and testing procedures, user-facing documentation, etc. Creating additional documentation files just adds clutter and confusion.

### Progressive Disclosure Design Principle

Skills use a three-level loading system to manage context efficiently:

1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - Loaded when the skill triggers (<5k words)
3. **Bundled resources** - Loaded as needed by Claude (unlimited, because scripts can be executed without being read into the context window)

#### Progressive Disclosure Patterns

Keep the SKILL.md body to the essentials and under 500 lines to minimize context bloat. Split content into separate files when approaching this limit. When splitting content out into other files, reference them from SKILL.md and describe clearly when to read them, so the reader of the skill knows they exist and when to use them.
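One way to keep split-out files honest, sketched here purely as an illustration (not an existing tool in this repository), is a small check that every bundled reference file is actually mentioned somewhere in SKILL.md:

```python
from pathlib import Path


def unreferenced_files(skill_dir: str) -> list[str]:
    """Return names of bundled reference files that SKILL.md never mentions."""
    root = Path(skill_dir)
    body = (root / "SKILL.md").read_text(encoding="utf-8")
    missing = []
    for path in (root / "references").glob("*.md"):
        if path.name not in body:  # e.g. "forms.md" should appear in the body
            missing.append(path.name)
    return sorted(missing)
```

Running this before packaging surfaces reference files that Claude would never discover because SKILL.md does not point at them.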
**Key principle:** When a skill supports multiple variations, frameworks, or options, keep only the core workflow and selection guidance in SKILL.md. Move variant-specific details (patterns, examples, configuration) into separate reference files.

**Pattern 1: High-level guide with references**

```markdown
# PDF Processing

## Quick start

Extract text with pdfplumber:
[code example]

## Advanced features

- **Form filling**: See [FORMS.md](FORMS.md) for complete guide
- **API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
- **Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
```

Claude loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.

**Pattern 2: Domain-specific organization**

For Skills with multiple domains, organize content by domain to avoid loading irrelevant context:

```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
    ├── finance.md (revenue, billing metrics)
    ├── sales.md (opportunities, pipeline)
    ├── product.md (API usage, features)
    └── marketing.md (campaigns, attribution)
```

When a user asks about sales metrics, Claude only reads sales.md.

Similarly, for skills supporting multiple frameworks or variants, organize by variant:

```
cloud-deploy/
├── SKILL.md (workflow + provider selection)
└── references/
    ├── aws.md (AWS deployment patterns)
    ├── gcp.md (GCP deployment patterns)
    └── azure.md (Azure deployment patterns)
```

When the user chooses AWS, Claude only reads aws.md.

**Pattern 3: Conditional details**

Show basic content, link to advanced content:

```markdown
# DOCX Processing

## Creating documents

Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).

## Editing documents

For simple edits, modify the XML directly.

**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
```

Claude reads REDLINING.md or OOXML.md only when the user needs those features.

**Important guidelines:**

- **Avoid deeply nested references** - Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md.
- **Structure longer reference files** - For files longer than 100 lines, include a table of contents at the top so Claude can see the full scope when previewing.

## Skill Creation Process

Skill creation involves these steps:

1. Understand the skill with concrete examples
2. Plan reusable skill contents (scripts, references, assets)
3. Initialize the skill (run init_skill.py)
4. Edit the skill (implement resources and write SKILL.md)
5. Package the skill (run package_skill.py)
6. Iterate based on real usage

Follow these steps in order, skipping a step only when there is a clear reason it does not apply.

### Step 1: Understanding the Skill with Concrete Examples

Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.

To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.

For example, when building an image-editor skill, relevant questions include:

- "What functionality should the image-editor skill support? Editing, rotating, anything else?"
- "Can you give some examples of how this skill would be used?"
- "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
- "What would a user say that should trigger this skill?"

To avoid overwhelming users, do not ask too many questions in a single message. Start with the most important questions and follow up as needed.

Conclude this step when there is a clear sense of the functionality the skill should support.

### Step 2: Planning the Reusable Skill Contents

To turn concrete examples into an effective skill, analyze each example by:

1. Considering how to execute on the example from scratch
2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly

Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:

1. Rotating a PDF requires re-writing the same code each time
2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill

Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:

1. Writing a frontend webapp requires the same boilerplate HTML/React each time
2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill

Example: When building a `big-query` skill to handle queries like "How many users have logged in today?", the analysis shows:

1. Querying BigQuery requires re-discovering the table schemas and relationships each time
2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill

To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.

### Step 3: Initializing the Skill

At this point, it is time to actually create the skill.

Skip this step only if the skill being developed already exists and only iteration or packaging is needed. In that case, continue to the next step.

When creating a new skill from scratch, always run the `init_skill.py` script. The script generates a template skill directory that includes everything a skill requires, making the creation process more efficient and reliable.

Usage:

```bash
scripts/init_skill.py <skill-name> --path <output-directory>
```

The script:

- Creates the skill directory at the specified path
- Generates a SKILL.md template with proper frontmatter and TODO placeholders
- Creates example resource directories: `scripts/`, `references/`, and `assets/`
- Adds example files in each directory that can be customized or deleted

After initialization, customize or remove the generated SKILL.md and example files as needed.
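The core of such an initializer can be sketched in a few lines. This is an illustration of the pattern only, not the actual `init_skill.py`, and the template text is an assumption:

```python
from pathlib import Path

# Hypothetical SKILL.md template with TODO placeholders.
SKILL_TEMPLATE = """\
---
name: {name}
description: TODO - what the skill does and when Claude should use it.
---

# {name}

TODO: instructions for using this skill.
"""


def init_skill(name: str, path: str) -> Path:
    """Create a template skill directory and return its path."""
    skill_dir = Path(path) / name
    # Example resource directories mirroring the skill anatomy.
    for sub in ("scripts", "references", "assets"):
        (skill_dir / sub).mkdir(parents=True, exist_ok=True)
    (skill_dir / "SKILL.md").write_text(
        SKILL_TEMPLATE.format(name=name), encoding="utf-8"
    )
    return skill_dir
```

Starting from a generated template like this, rather than an empty directory, keeps the required frontmatter and directory layout consistent across skills.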
### Step 4: Edit the Skill

When editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of Claude to use. Include information that would be beneficial and non-obvious to Claude. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Claude instance execute these tasks more effectively.

#### Learn Proven Design Patterns

Consult these guides based on the skill's needs:

- **Multi-step processes**: See references/workflows.md for sequential workflows and conditional logic
- **Specific output formats or quality standards**: See references/output-patterns.md for template and example patterns

These files contain established best practices for effective skill design.

#### Start with Reusable Skill Contents

To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.

Test added scripts by actually running them to ensure there are no bugs and the output matches what is expected. If there are many similar scripts, testing a representative sample is enough to build confidence that they all work while balancing time to completion.

Delete any example files and directories not needed for the skill. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.

#### Update SKILL.md

**Writing guidelines:** Always use imperative/infinitive form.

##### Frontmatter

Write the YAML frontmatter with `name` and `description`:

- `name`: The skill name
- `description`: The primary triggering mechanism for the skill; it helps Claude understand when to use the skill.
  - Include both what the skill does and specific triggers/contexts for when to use it.
  - Include all "when to use" information here, not in the body. The body is only loaded after triggering, so "When to Use This Skill" sections in the body are not helpful to Claude.
  - Example description for a `docx` skill: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when Claude needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"

Do not include any other fields in the YAML frontmatter.

##### Body

Write instructions for using the skill and its bundled resources.

### Step 5: Packaging a Skill

Once development of the skill is complete, package it into a distributable .skill file to share with the user. The packaging process automatically validates the skill first to ensure it meets all requirements:

```bash
scripts/package_skill.py <path/to/skill-folder>
```

Optionally, specify an output directory:

```bash
scripts/package_skill.py <path/to/skill-folder> ./dist
```

The packaging script will:

1. **Validate** the skill automatically, checking:
   - YAML frontmatter format and required fields
   - Skill naming conventions and directory structure
   - Description completeness and quality
   - File organization and resource references

2. **Package** the skill if validation passes, creating a .skill file named after the skill (e.g., `my-skill.skill`) that includes all files and maintains the proper directory structure for distribution. The .skill file is a zip file with a .skill extension.

If validation fails, the script reports the errors and exits without creating a package. Fix any validation errors and run the packaging command again.
|
||||
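Because a .skill file is an ordinary zip archive, its contents can be inspected with standard zip tools. A minimal sketch using Python's `zipfile` (the `my-skill` name and file layout here are illustrative, not taken from a real package):

```python
import tempfile
import zipfile
from pathlib import Path

# A .skill file is a zip archive with a .skill extension: build a tiny
# one, then list its contents (equivalent to `unzip -l my-skill.skill`).
with tempfile.TemporaryDirectory() as tmp:
    skill_file = Path(tmp) / "my-skill.skill"
    with zipfile.ZipFile(skill_file, "w", zipfile.ZIP_DEFLATED) as zf:
        # Entries are stored relative to the skill's parent directory,
        # so the top-level folder name is preserved on extraction.
        zf.writestr("my-skill/SKILL.md",
                    "---\nname: my-skill\ndescription: demo\n---\n")
    with zipfile.ZipFile(skill_file) as zf:
        names = zf.namelist()

print(names)  # ['my-skill/SKILL.md']
```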

### Step 6: Iterate

After testing the skill, users may request improvements. This often happens right after using the skill, while the context of how it performed is still fresh.

**Iteration workflow:**

1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated
4. Implement changes and test again

.claude/skills/skill-creator/references/output-patterns.md (new file, 82 lines)

# Output Patterns

Use these patterns when skills need to produce consistent, high-quality output.

## Template Pattern

Provide templates for output format. Match the level of strictness to your needs.

**For strict requirements (like API responses or data formats):**

```markdown
## Report structure

ALWAYS use this exact template structure:

# [Analysis Title]

## Executive summary
[One-paragraph overview of key findings]

## Key findings
- Finding 1 with supporting data
- Finding 2 with supporting data
- Finding 3 with supporting data

## Recommendations
1. Specific actionable recommendation
2. Specific actionable recommendation
```

**For flexible guidance (when adaptation is useful):**

```markdown
## Report structure

Here is a sensible default format, but use your best judgment:

# [Analysis Title]

## Executive summary
[Overview]

## Key findings
[Adapt sections based on what you discover]

## Recommendations
[Tailor to the specific context]

Adjust sections as needed for the specific analysis type.
```

## Examples Pattern

For skills where output quality depends on seeing examples, provide input/output pairs:

```markdown
## Commit message format

Generate commit messages following these examples:

**Example 1:**
Input: Added user authentication with JWT tokens
Output:
```
feat(auth): implement JWT-based authentication

Add login endpoint and token validation middleware
```

**Example 2:**
Input: Fixed bug where dates displayed incorrectly in reports
Output:
```
fix(reports): correct date formatting in timezone conversion

Use UTC timestamps consistently across report generation
```

Follow this style: type(scope): brief description, then detailed explanation.
```

Examples help Claude understand the desired style and level of detail more clearly than descriptions alone.

.claude/skills/skill-creator/references/workflows.md (new file, 28 lines)

# Workflow Patterns

## Sequential Workflows

For complex tasks, break operations into clear, sequential steps. It is often helpful to give Claude an overview of the process towards the beginning of SKILL.md:

```markdown
Filling a PDF form involves these steps:

1. Analyze the form (run analyze_form.py)
2. Create field mapping (edit fields.json)
3. Validate mapping (run validate_fields.py)
4. Fill the form (run fill_form.py)
5. Verify output (run verify_output.py)
```

## Conditional Workflows

For tasks with branching logic, guide Claude through decision points:

```markdown
1. Determine the modification type:
   **Creating new content?** → Follow "Creation workflow" below
   **Editing existing content?** → Follow "Editing workflow" below

2. Creation workflow: [steps]
3. Editing workflow: [steps]
```

Binary file not shown.

.claude/skills/skill-creator/scripts/init_skill.py (new executable file, 303 lines)

#!/usr/bin/env python3
"""
Skill Initializer - Creates a new skill from template

Usage:
    init_skill.py <skill-name> --path <path>

Examples:
    init_skill.py my-new-skill --path skills/public
    init_skill.py my-api-helper --path skills/private
    init_skill.py custom-skill --path /custom/location
"""

import sys
from pathlib import Path


SKILL_TEMPLATE = """---
name: {skill_name}
description: [TODO: Complete and informative explanation of what the skill does and when to use it. Include WHEN to use this skill - specific scenarios, file types, or tasks that trigger it.]
---

# {skill_title}

## Overview

[TODO: 1-2 sentences explaining what this skill enables]

## Structuring This Skill

[TODO: Choose the structure that best fits this skill's purpose. Common patterns:

**1. Workflow-Based** (best for sequential processes)
- Works well when there are clear step-by-step procedures
- Example: DOCX skill with "Workflow Decision Tree" → "Reading" → "Creating" → "Editing"
- Structure: ## Overview → ## Workflow Decision Tree → ## Step 1 → ## Step 2...

**2. Task-Based** (best for tool collections)
- Works well when the skill offers different operations/capabilities
- Example: PDF skill with "Quick Start" → "Merge PDFs" → "Split PDFs" → "Extract Text"
- Structure: ## Overview → ## Quick Start → ## Task Category 1 → ## Task Category 2...

**3. Reference/Guidelines** (best for standards or specifications)
- Works well for brand guidelines, coding standards, or requirements
- Example: Brand styling with "Brand Guidelines" → "Colors" → "Typography" → "Features"
- Structure: ## Overview → ## Guidelines → ## Specifications → ## Usage...

**4. Capabilities-Based** (best for integrated systems)
- Works well when the skill provides multiple interrelated features
- Example: Product Management with "Core Capabilities" → numbered capability list
- Structure: ## Overview → ## Core Capabilities → ### 1. Feature → ### 2. Feature...

Patterns can be mixed and matched as needed. Most skills combine patterns (e.g., start with task-based, add workflow for complex operations).

Delete this entire "Structuring This Skill" section when done - it's just guidance.]

## [TODO: Replace with the first main section based on chosen structure]

[TODO: Add content here. See examples in existing skills:
- Code samples for technical skills
- Decision trees for complex workflows
- Concrete examples with realistic user requests
- References to scripts/templates/references as needed]

## Resources

This skill includes example resource directories that demonstrate how to organize different types of bundled resources:

### scripts/
Executable code (Python/Bash/etc.) that can be run directly to perform specific operations.

**Examples from other skills:**
- PDF skill: `fill_fillable_fields.py`, `extract_form_field_info.py` - utilities for PDF manipulation
- DOCX skill: `document.py`, `utilities.py` - Python modules for document processing

**Appropriate for:** Python scripts, shell scripts, or any executable code that performs automation, data processing, or specific operations.

**Note:** Scripts may be executed without loading into context, but can still be read by Claude for patching or environment adjustments.

### references/
Documentation and reference material intended to be loaded into context to inform Claude's process and thinking.

**Examples from other skills:**
- Product management: `communication.md`, `context_building.md` - detailed workflow guides
- BigQuery: API reference documentation and query examples
- Finance: Schema documentation, company policies

**Appropriate for:** In-depth documentation, API references, database schemas, comprehensive guides, or any detailed information that Claude should reference while working.

### assets/
Files not intended to be loaded into context, but rather used within the output Claude produces.

**Examples from other skills:**
- Brand styling: PowerPoint template files (.pptx), logo files
- Frontend builder: HTML/React boilerplate project directories
- Typography: Font files (.ttf, .woff2)

**Appropriate for:** Templates, boilerplate code, document templates, images, icons, fonts, or any files meant to be copied or used in the final output.

---

**Any unneeded directories can be deleted.** Not every skill requires all three types of resources.
"""


EXAMPLE_SCRIPT = '''#!/usr/bin/env python3
"""
Example helper script for {skill_name}

This is a placeholder script that can be executed directly.
Replace with actual implementation or delete if not needed.

Example real scripts from other skills:
- pdf/scripts/fill_fillable_fields.py - Fills PDF form fields
- pdf/scripts/convert_pdf_to_images.py - Converts PDF pages to images
"""

def main():
    print("This is an example script for {skill_name}")
    # TODO: Add actual script logic here
    # This could be data processing, file conversion, API calls, etc.

if __name__ == "__main__":
    main()
'''


EXAMPLE_REFERENCE = """# Reference Documentation for {skill_title}

This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.

Example real reference docs from other skills:
- product-management/references/communication.md - Comprehensive guide for status updates
- product-management/references/context_building.md - Deep-dive on gathering context
- bigquery/references/ - API references and query examples

## When Reference Docs Are Useful

Reference docs are ideal for:
- Comprehensive API documentation
- Detailed workflow guides
- Complex multi-step processes
- Information too lengthy for main SKILL.md
- Content that's only needed for specific use cases

## Structure Suggestions

### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes
- Rate limits

### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Common patterns
- Troubleshooting
- Best practices
"""


EXAMPLE_ASSET = """# Example Asset File

This placeholder represents where asset files would be stored.
Replace with actual asset files (templates, images, fonts, etc.) or delete if not needed.

Asset files are NOT intended to be loaded into context, but rather used within
the output Claude produces.

Example asset files from other skills:
- Brand guidelines: logo.png, slides_template.pptx
- Frontend builder: hello-world/ directory with HTML/React boilerplate
- Typography: custom-font.ttf, font-family.woff2
- Data: sample_data.csv, test_dataset.json

## Common Asset Types

- Templates: .pptx, .docx, boilerplate directories
- Images: .png, .jpg, .svg, .gif
- Fonts: .ttf, .otf, .woff, .woff2
- Boilerplate code: Project directories, starter files
- Icons: .ico, .svg
- Data files: .csv, .json, .xml, .yaml

Note: This is a text placeholder. Actual assets can be any file type.
"""


def title_case_skill_name(skill_name):
    """Convert hyphenated skill name to Title Case for display."""
    return ' '.join(word.capitalize() for word in skill_name.split('-'))


def init_skill(skill_name, path):
    """
    Initialize a new skill directory with template SKILL.md.

    Args:
        skill_name: Name of the skill
        path: Path where the skill directory should be created

    Returns:
        Path to created skill directory, or None if error
    """
    # Determine skill directory path
    skill_dir = Path(path).resolve() / skill_name

    # Check if directory already exists
    if skill_dir.exists():
        print(f"❌ Error: Skill directory already exists: {skill_dir}")
        return None

    # Create skill directory
    try:
        skill_dir.mkdir(parents=True, exist_ok=False)
        print(f"✅ Created skill directory: {skill_dir}")
    except Exception as e:
        print(f"❌ Error creating directory: {e}")
        return None

    # Create SKILL.md from template
    skill_title = title_case_skill_name(skill_name)
    skill_content = SKILL_TEMPLATE.format(
        skill_name=skill_name,
        skill_title=skill_title
    )

    skill_md_path = skill_dir / 'SKILL.md'
    try:
        skill_md_path.write_text(skill_content)
        print("✅ Created SKILL.md")
    except Exception as e:
        print(f"❌ Error creating SKILL.md: {e}")
        return None

    # Create resource directories with example files
    try:
        # Create scripts/ directory with example script
        scripts_dir = skill_dir / 'scripts'
        scripts_dir.mkdir(exist_ok=True)
        example_script = scripts_dir / 'example.py'
        example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))
        example_script.chmod(0o755)
        print("✅ Created scripts/example.py")

        # Create references/ directory with example reference doc
        references_dir = skill_dir / 'references'
        references_dir.mkdir(exist_ok=True)
        example_reference = references_dir / 'api_reference.md'
        example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))
        print("✅ Created references/api_reference.md")

        # Create assets/ directory with example asset placeholder
        assets_dir = skill_dir / 'assets'
        assets_dir.mkdir(exist_ok=True)
        example_asset = assets_dir / 'example_asset.txt'
        example_asset.write_text(EXAMPLE_ASSET)
        print("✅ Created assets/example_asset.txt")
    except Exception as e:
        print(f"❌ Error creating resource directories: {e}")
        return None

    # Print next steps
    print(f"\n✅ Skill '{skill_name}' initialized successfully at {skill_dir}")
    print("\nNext steps:")
    print("1. Edit SKILL.md to complete the TODO items and update the description")
    print("2. Customize or delete the example files in scripts/, references/, and assets/")
    print("3. Run the validator when ready to check the skill structure")

    return skill_dir


def main():
    if len(sys.argv) < 4 or sys.argv[2] != '--path':
        print("Usage: init_skill.py <skill-name> --path <path>")
        print("\nSkill name requirements:")
        print("  - Kebab-case identifier (e.g., 'my-data-analyzer')")
        print("  - Lowercase letters, digits, and hyphens only")
        print("  - Max 64 characters")
        print("  - Must match directory name exactly")
        print("\nExamples:")
        print("  init_skill.py my-new-skill --path skills/public")
        print("  init_skill.py my-api-helper --path skills/private")
        print("  init_skill.py custom-skill --path /custom/location")
        sys.exit(1)

    skill_name = sys.argv[1]
    path = sys.argv[3]

    print(f"🚀 Initializing skill: {skill_name}")
    print(f"   Location: {path}")
    print()

    result = init_skill(skill_name, path)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()

.claude/skills/skill-creator/scripts/package_skill.py (new executable file, 110 lines)

#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder

Usage:
    python scripts/package_skill.py <path/to/skill-folder> [output-directory]

Example:
    python scripts/package_skill.py skills/public/my-skill
    python scripts/package_skill.py skills/public/my-skill ./dist
"""

import sys
import zipfile
from pathlib import Path
from quick_validate import validate_skill


def package_skill(skill_path, output_dir=None):
    """
    Package a skill folder into a .skill file.

    Args:
        skill_path: Path to the skill folder
        output_dir: Optional output directory for the .skill file (defaults to current directory)

    Returns:
        Path to the created .skill file, or None if error
    """
    skill_path = Path(skill_path).resolve()

    # Validate skill folder exists
    if not skill_path.exists():
        print(f"❌ Error: Skill folder not found: {skill_path}")
        return None

    if not skill_path.is_dir():
        print(f"❌ Error: Path is not a directory: {skill_path}")
        return None

    # Validate SKILL.md exists
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        print(f"❌ Error: SKILL.md not found in {skill_path}")
        return None

    # Run validation before packaging
    print("🔍 Validating skill...")
    valid, message = validate_skill(skill_path)
    if not valid:
        print(f"❌ Validation failed: {message}")
        print("   Please fix the validation errors before packaging.")
        return None
    print(f"✅ {message}\n")

    # Determine output location
    skill_name = skill_path.name
    if output_dir:
        output_path = Path(output_dir).resolve()
        output_path.mkdir(parents=True, exist_ok=True)
    else:
        output_path = Path.cwd()

    skill_filename = output_path / f"{skill_name}.skill"

    # Create the .skill file (zip format)
    try:
        with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
            # Walk through the skill directory
            for file_path in skill_path.rglob('*'):
                if file_path.is_file():
                    # Calculate the relative path within the zip
                    arcname = file_path.relative_to(skill_path.parent)
                    zipf.write(file_path, arcname)
                    print(f"  Added: {arcname}")

        print(f"\n✅ Successfully packaged skill to: {skill_filename}")
        return skill_filename

    except Exception as e:
        print(f"❌ Error creating .skill file: {e}")
        return None


def main():
    if len(sys.argv) < 2:
        print("Usage: python scripts/package_skill.py <path/to/skill-folder> [output-directory]")
        print("\nExample:")
        print("  python scripts/package_skill.py skills/public/my-skill")
        print("  python scripts/package_skill.py skills/public/my-skill ./dist")
        sys.exit(1)

    skill_path = sys.argv[1]
    output_dir = sys.argv[2] if len(sys.argv) > 2 else None

    print(f"📦 Packaging skill: {skill_path}")
    if output_dir:
        print(f"   Output directory: {output_dir}")
    print()

    result = package_skill(skill_path, output_dir)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()

.claude/skills/skill-creator/scripts/quick_validate.py (new executable file, 103 lines)

#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""

import sys
import os
import re
import yaml
from pathlib import Path


def validate_skill(skill_path):
    """Basic validation of a skill"""
    skill_path = Path(skill_path)

    # Check SKILL.md exists
    skill_md = skill_path / 'SKILL.md'
    if not skill_md.exists():
        return False, "SKILL.md not found"

    # Read and validate frontmatter
    content = skill_md.read_text()
    if not content.startswith('---'):
        return False, "No YAML frontmatter found"

    # Extract frontmatter
    match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
    if not match:
        return False, "Invalid frontmatter format"

    frontmatter_text = match.group(1)

    # Parse YAML frontmatter
    try:
        frontmatter = yaml.safe_load(frontmatter_text)
        if not isinstance(frontmatter, dict):
            return False, "Frontmatter must be a YAML dictionary"
    except yaml.YAMLError as e:
        return False, f"Invalid YAML in frontmatter: {e}"

    # Define allowed properties
    ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata', 'compatibility'}

    # Check for unexpected properties (excluding nested keys under metadata)
    unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
    if unexpected_keys:
        return False, (
            f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
            f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
        )

    # Check required fields
    if 'name' not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if 'description' not in frontmatter:
        return False, "Missing 'description' in frontmatter"

    # Extract name for validation
    name = frontmatter.get('name', '')
    if not isinstance(name, str):
        return False, f"Name must be a string, got {type(name).__name__}"
    name = name.strip()
    if name:
        # Check naming convention (kebab-case: lowercase with hyphens)
        if not re.match(r'^[a-z0-9-]+$', name):
            return False, f"Name '{name}' should be kebab-case (lowercase letters, digits, and hyphens only)"
        if name.startswith('-') or name.endswith('-') or '--' in name:
            return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
        # Check name length (max 64 characters per spec)
        if len(name) > 64:
            return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."

    # Extract and validate description
    description = frontmatter.get('description', '')
    if not isinstance(description, str):
        return False, f"Description must be a string, got {type(description).__name__}"
    description = description.strip()
    if description:
        # Check for angle brackets
        if '<' in description or '>' in description:
            return False, "Description cannot contain angle brackets (< or >)"
        # Check description length (max 1024 characters per spec)
        if len(description) > 1024:
            return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."

    # Validate compatibility field if present (optional)
    compatibility = frontmatter.get('compatibility', '')
    if compatibility:
        if not isinstance(compatibility, str):
            return False, f"Compatibility must be a string, got {type(compatibility).__name__}"
        if len(compatibility) > 500:
            return False, f"Compatibility is too long ({len(compatibility)} characters). Maximum is 500 characters."

    return True, "Skill is valid!"


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python quick_validate.py <skill_directory>")
        sys.exit(1)

    valid, message = validate_skill(sys.argv[1])
    print(message)
    sys.exit(0 if valid else 1)

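The validator's frontmatter extraction hinges on the non-greedy `^---\n(.*?)\n---` match: with `re.DOTALL`, `(.*?)` stops at the first closing `---`, so only the frontmatter block is captured, not the rest of the document. A standalone sketch of that step (the sample content is illustrative):

```python
import re

# A minimal SKILL.md-style document: frontmatter between --- fences,
# followed by the body.
content = """---
name: my-skill
description: Demo skill for testing.
---

# My Skill
"""

# DOTALL lets . match newlines; the non-greedy .*? stops at the
# first '\n---', so the body after the frontmatter is excluded.
match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
frontmatter_text = match.group(1)
print(frontmatter_text)
# name: my-skill
# description: Demo skill for testing.
```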
.claude/skills/update-flake/SKILL.md (new file, 46 lines)

---
name: update-flake
description: Update nix flake inputs to latest versions, fix build breakage from upstream changes, build all NixOS machines, and run garbage collection. Use when the user wants to update nixpkgs, update flake inputs, upgrade packages, or refresh the flake lockfile.
---

# Update Flake

## Workflow

### 1. Update All Inputs

The flake tracks `nixos-unstable`. Update everything at once:

```bash
nix flake update
```

### 2. Check for Breakage and Fix

Build errors typically fall into these categories:

- **Patches failing to apply**: Check the `patches/` directory. Rebase or remove patches if the upstream issue was fixed.
- **Nextcloud version bump**: Check `common/server/nextcloud.nix` for the pinned version (e.g. `pkgs.nextcloud32`). If nixpkgs dropped the current version, upgrade by exactly ONE major version. Notify the user.
- **Removed/renamed packages or options**: Search nixpkgs history for migration guidance and apply fixes.
- **stateVersion changes**: If ANY change requires incrementing `system.stateVersion`, `home.stateVersion`, or data migration — STOP and ask the user. Do not proceed.

### 3. Build All Machines

See [references/machines.md](references/machines.md). Get the machine list and build each:

```bash
nix eval .#nixosConfigurations --apply 'x: builtins.attrNames x' --json
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-link
```

Fix any build failures before continuing.

### 4. Garbage Collection

```bash
nix store gc
```

### 5. Summary

Report: inputs updated, fixes applied, Nextcloud changes, and anything needing user attention.

.claude/skills/update-flake/references/machines.md (new file, 19 lines)

# Machine Build Reference

## Listing Machines

Get the current list dynamically:
```bash
nix eval .#nixosConfigurations --apply 'x: builtins.attrNames x' --json
```

## Building a Machine

```bash
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel --no-link
```

## Important Constraints

- **stateVersion**: If any update requires incrementing `system.stateVersion` or `home.stateVersion`, or any data migration, STOP and ask the user. Do not proceed on your own.
- **nextcloud**: Pinned to a specific version (e.g. `pkgs.nextcloud32`). Only upgrade one major version at a time. Notify the user when upgrading.

.gitea/scripts/build-and-cache.sh (new executable file, 29 lines)

#!/usr/bin/env bash
set -euo pipefail

# Configure Attic cache
attic login local "$ATTIC_ENDPOINT" "$ATTIC_TOKEN"
attic use local:nixos

# Check flake
nix flake check --all-systems --print-build-logs --log-format raw --show-trace

# Build all systems
nix eval .#nixosConfigurations --apply 'cs: builtins.attrNames cs' --json \
  | jq -r '.[]' \
  | xargs -I{} nix build ".#nixosConfigurations.{}.config.system.build.toplevel" \
      --no-link --print-build-logs --log-format raw

# Push to cache (only locally-built paths >= 0.5MB)
toplevels=$(nix eval .#nixosConfigurations \
  --apply 'cs: map (n: "${cs.${n}.config.system.build.toplevel}") (builtins.attrNames cs)' \
  --json | jq -r '.[]')
echo "Found $(echo "$toplevels" | wc -l) system toplevels"
paths=$(echo "$toplevels" \
  | xargs nix path-info -r --json \
  | jq -r '[to_entries[] | select(
      (.value.signatures | all(startswith("cache.nixos.org") | not))
      and .value.narSize >= 524288
    ) | .key] | unique[]')
echo "Pushing $(echo "$paths" | wc -l) unique paths to cache"
echo "$paths" | xargs attic push local:nixos

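The `jq` filter in the script keeps only store paths that have no signature from cache.nixos.org (i.e. were built locally) and whose NAR size is at least 0.5 MB (524288 bytes). The same selection can be sketched in Python; the sample `path-info` entries below are made up for illustration, not real store paths:

```python
# Mimic the jq filter over `nix path-info -r --json` output:
# keep paths with no cache.nixos.org signature and narSize >= 524288.
path_info = {
    "/nix/store/aaa-local-pkg": {
        "signatures": ["mycache:abc123"], "narSize": 1_000_000},
    "/nix/store/bbb-upstream-pkg": {
        "signatures": ["cache.nixos.org-1:def456"], "narSize": 2_000_000},
    "/nix/store/ccc-tiny-pkg": {
        "signatures": [], "narSize": 1024},
}

selected = sorted(
    path for path, info in path_info.items()
    # jq: .value.signatures | all(startswith("cache.nixos.org") | not)
    if all(not sig.startswith("cache.nixos.org") for sig in info["signatures"])
    # jq: .value.narSize >= 524288
    and info["narSize"] >= 524288
)
print(selected)  # ['/nix/store/aaa-local-pkg']
```

Note that `all(...)` is vacuously true for an unsigned path, so locally-built paths with no signatures at all are pushed too, provided they meet the size cutoff.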
.gitea/workflows/auto-update.yaml (new file, 60 lines)

name: Auto Update Flake
on:
  schedule:
    - cron: '0 6 * * *'
  workflow_dispatch: {}

env:
  DEBIAN_FRONTEND: noninteractive
  PATH: /run/current-system/sw/bin/
  XDG_CONFIG_HOME: ${{ runner.temp }}/.config
  ATTIC_ENDPOINT: ${{ vars.ATTIC_ENDPOINT }}
  ATTIC_TOKEN: ${{ secrets.ATTIC_TOKEN }}

jobs:
  auto-update:
    runs-on: nixos
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
          ref: master
          token: ${{ secrets.PUSH_TOKEN }}

      - name: Configure git identity
        run: |
          git config user.name "gitea-runner"
          git config user.email "gitea-runner@neet.dev"

      - name: Update flake inputs
        id: update
        run: |
          nix flake update
          if git diff --quiet flake.lock; then
            echo "No changes to flake.lock, nothing to do"
            echo "changed=false" >> "$GITHUB_OUTPUT"
          else
            git add flake.lock
            git commit -m "flake.lock: update inputs"
            echo "changed=true" >> "$GITHUB_OUTPUT"
          fi

      - name: Build and cache
        if: steps.update.outputs.changed == 'true'
        run: bash .gitea/scripts/build-and-cache.sh

      - name: Push updated lockfile
        if: steps.update.outputs.changed == 'true'
        run: git push

      - name: Notify on failure
        if: failure() && steps.update.outputs.changed == 'true'
        run: |
          curl -s \
            -H "Authorization: Bearer ${{ secrets.NTFY_TOKEN }}" \
            -H "Title: Flake auto-update failed" \
            -H "Priority: high" \
            -H "Tags: warning" \
            -d "Auto-update workflow failed. Check: ${{ gitea.server_url }}/${{ gitea.repository }}/actions/runs/${{ gitea.run_number }}" \
            https://ntfy.neet.dev/nix-flake-updates

@@ -5,6 +5,9 @@ on: [push]
env:
  DEBIAN_FRONTEND: noninteractive
  PATH: /run/current-system/sw/bin/
  XDG_CONFIG_HOME: ${{ runner.temp }}/.config
  ATTIC_ENDPOINT: ${{ vars.ATTIC_ENDPOINT }}
  ATTIC_TOKEN: ${{ secrets.ATTIC_TOKEN }}

jobs:
  check-flake:
@@ -15,5 +18,16 @@ jobs:
        with:
          fetch-depth: 0

      - name: Check Flake
        run: nix flake check --all-systems --print-build-logs --log-format raw --show-trace
      - name: Build and cache
        run: bash .gitea/scripts/build-and-cache.sh

      - name: Notify on failure
        if: failure()
        run: |
          curl -s \
            -H "Authorization: Bearer ${{ secrets.NTFY_TOKEN }}" \
            -H "Title: Flake check failed" \
            -H "Priority: high" \
            -H "Tags: warning" \
            -d "Check failed for ${{ gitea.ref_name }}. Check: ${{ gitea.server_url }}/${{ gitea.repository }}/actions/runs/${{ gitea.run_number }}" \
            https://ntfy.neet.dev/nix-flake-updates

1 .gitignore vendored
@@ -1 +1,2 @@
result
.claude/worktrees
20 CLAUDE.md
@@ -67,6 +67,12 @@ IP allocation convention: VMs `.10-.49`, containers `.50-.89`, incus `.90-.129`

`flake.nix` applies patches from `/patches/` to nixpkgs before building (workaround for nix#3920).

### Service Dashboard & Monitoring

When adding or removing a web-facing service, update both:
- **Gatus** (`common/server/gatus.nix`) — add/remove the endpoint monitor
- **Dashy** — add/remove the service entry from the dashboard config

### Key Conventions

- Uses `doas` instead of `sudo` everywhere
@@ -79,3 +85,17 @@ IP allocation convention: VMs `.10-.49`, containers `.50-.89`, incus `.90-.129`
- Always use `--no-link` when running `nix build`
- Don't use `nix build --dry-run` unless you only need evaluation — it skips the actual build
- Avoid `2>&1` on nix commands — it can cause error output to be missed

## Git Worktrees

When the user asks you to "start a worktree" or work in a worktree, **do not create one manually** with `git worktree add`. Instead, tell the user to start a new session with:

```bash
claude --worktree <name>
```

This is the built-in Claude Code worktree workflow. It creates the worktree at `.claude/worktrees/<name>/` with a branch `worktree-<name>` and starts a new Claude session inside it. Cleanup is handled automatically on exit.

When instructed to work in a git worktree (e.g., via `isolation: "worktree"` on a subagent), you **MUST** do so. If you are unable to create or use a git worktree, you **MUST** stop work immediately and report the failure to the user. Do not fall back to working in the main working tree.

When applying work from a git worktree back to the main branch, commit in the worktree first, then use `git cherry-pick` from the main working tree to bring the commit over. Do not use `git checkout` or `git apply` to copy files directly. Do **not** automatically apply worktree work to the main branch — always ask the user for approval first.
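The commit-then-cherry-pick flow described above can be sketched as follows. This is a self-contained demonstration in a throwaway repository; the repo, branch, and file names are illustrative, not from the project:

```shell
set -e
# Throwaway repo to demonstrate the flow (all names here are illustrative).
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email you@example.com
git config user.name you
echo base > base.txt; git add base.txt; git commit -qm "base"

# Worktree layout mirroring what `claude --worktree demo` would create.
git worktree add -q -b worktree-demo .claude/worktrees/demo

# 1) Commit in the worktree first.
cd .claude/worktrees/demo
echo change > change.txt; git add change.txt; git commit -qm "worktree change"
sha=$(git rev-parse HEAD)

# 2) Bring the commit over from the main working tree with cherry-pick
#    (never by checking out or applying files directly).
cd "$repo"
git cherry-pick "$sha" >/dev/null
```

Cherry-picking keeps authorship and the commit message intact, and leaves the worktree branch untouched for further iteration.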
@@ -1,4 +1,4 @@
{ ... }:
{ config, ... }:

{
  nix = {
@@ -6,11 +6,11 @@
    substituters = [
      "https://cache.nixos.org/"
      "https://nix-community.cachix.org"
      "http://s0.koi-bebop.ts.net:5000"
      "http://s0.neet.dev:28338/nixos"
    ];
    trusted-public-keys = [
      "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
      "s0.koi-bebop.ts.net:OjbzD86YjyJZpCp9RWaQKANaflcpKhtzBMNP8I2aPUU="
      "nixos:e5AMCUWWEX9MESWAAMjBkZdGUpl588NhgsUO3HsdhFw="
    ];

    # Allow substituters to be offline
@@ -19,6 +19,11 @@
    # and use this flag as intended for deciding if it should build missing
    # derivations locally. See https://github.com/NixOS/nix/issues/6901
    fallback = true;

    # Authenticate to private nixos cache
    netrc-file = config.age.secrets.attic-netrc.path;
  };
};

age.secrets.attic-netrc.file = ../secrets/attic-netrc.age;
}
@@ -6,6 +6,7 @@
    ./binary-cache.nix
    ./flakes.nix
    ./auto-update.nix
    ./ntfy
    ./shell.nix
    ./network
    ./boot
@@ -94,11 +95,11 @@
    { groups = [ "wheel" ]; persist = true; }
  ];

  nix.gc.automatic = true;
  nix.gc.automatic = !config.boot.isContainer;

  security.acme.acceptTerms = true;
  security.acme.defaults.email = "zuckerberg@neet.dev";

  # Enable Desktop Environment if this is a PC (machine role is "personal")
  de.enable = lib.mkDefault (config.thisMachine.hasRole."personal");
  de.enable = lib.mkDefault (config.thisMachine.hasRole."personal" && !config.boot.isContainer);
}
@@ -7,10 +7,8 @@ let
in
{
  imports = [
    ./pia-openvpn.nix
    ./pia-wireguard.nix
    ./pia-vpn
    ./tailscale.nix
    ./vpn.nix
    ./sandbox.nix
  ];
@@ -1,113 +0,0 @@
{ config, pkgs, lib, ... }:

let
  cfg = config.pia.openvpn;
  vpnfailsafe = pkgs.stdenv.mkDerivation {
    pname = "vpnfailsafe";
    version = "0.0.1";
    src = ./.;
    installPhase = ''
      mkdir -p $out
      cp vpnfailsafe.sh $out/vpnfailsafe.sh
      sed -i 's|getent|${pkgs.getent}/bin/getent|' $out/vpnfailsafe.sh
    '';
  };
in
{
  options.pia.openvpn = {
    enable = lib.mkEnableOption "Enable private internet access";
    server = lib.mkOption {
      type = lib.types.str;
      default = "us-washingtondc.privacy.network";
      example = "swiss.privacy.network";
    };
  };

  config = lib.mkIf cfg.enable {
    services.openvpn = {
      servers = {
        pia = {
          config = ''
            client
            dev tun
            proto udp
            remote ${cfg.server} 1198
            resolv-retry infinite
            nobind
            persist-key
            persist-tun
            cipher aes-128-cbc
            auth sha1
            tls-client
            remote-cert-tls server

            auth-user-pass
            compress
            verb 1
            reneg-sec 0
            <crl-verify>
            -----BEGIN X509 CRL-----
            MIICWDCCAUAwDQYJKoZIhvcNAQENBQAwgegxCzAJBgNVBAYTAlVTMQswCQYDVQQI
            EwJDQTETMBEGA1UEBxMKTG9zQW5nZWxlczEgMB4GA1UEChMXUHJpdmF0ZSBJbnRl
            cm5ldCBBY2Nlc3MxIDAeBgNVBAsTF1ByaXZhdGUgSW50ZXJuZXQgQWNjZXNzMSAw
            HgYDVQQDExdQcml2YXRlIEludGVybmV0IEFjY2VzczEgMB4GA1UEKRMXUHJpdmF0
            ZSBJbnRlcm5ldCBBY2Nlc3MxLzAtBgkqhkiG9w0BCQEWIHNlY3VyZUBwcml2YXRl
            aW50ZXJuZXRhY2Nlc3MuY29tFw0xNjA3MDgxOTAwNDZaFw0zNjA3MDMxOTAwNDZa
            MCYwEQIBARcMMTYwNzA4MTkwMDQ2MBECAQYXDDE2MDcwODE5MDA0NjANBgkqhkiG
            9w0BAQ0FAAOCAQEAQZo9X97ci8EcPYu/uK2HB152OZbeZCINmYyluLDOdcSvg6B5
            jI+ffKN3laDvczsG6CxmY3jNyc79XVpEYUnq4rT3FfveW1+Ralf+Vf38HdpwB8EW
            B4hZlQ205+21CALLvZvR8HcPxC9KEnev1mU46wkTiov0EKc+EdRxkj5yMgv0V2Re
            ze7AP+NQ9ykvDScH4eYCsmufNpIjBLhpLE2cuZZXBLcPhuRzVoU3l7A9lvzG9mjA
            5YijHJGHNjlWFqyrn1CfYS6koa4TGEPngBoAziWRbDGdhEgJABHrpoaFYaL61zqy
            MR6jC0K2ps9qyZAN74LEBedEfK7tBOzWMwr58A==
            -----END X509 CRL-----
            </crl-verify>

            <ca>
            -----BEGIN CERTIFICATE-----
            MIIFqzCCBJOgAwIBAgIJAKZ7D5Yv87qDMA0GCSqGSIb3DQEBDQUAMIHoMQswCQYD
            VQQGEwJVUzELMAkGA1UECBMCQ0ExEzARBgNVBAcTCkxvc0FuZ2VsZXMxIDAeBgNV
            BAoTF1ByaXZhdGUgSW50ZXJuZXQgQWNjZXNzMSAwHgYDVQQLExdQcml2YXRlIElu
            dGVybmV0IEFjY2VzczEgMB4GA1UEAxMXUHJpdmF0ZSBJbnRlcm5ldCBBY2Nlc3Mx
            IDAeBgNVBCkTF1ByaXZhdGUgSW50ZXJuZXQgQWNjZXNzMS8wLQYJKoZIhvcNAQkB
            FiBzZWN1cmVAcHJpdmF0ZWludGVybmV0YWNjZXNzLmNvbTAeFw0xNDA0MTcxNzM1
            MThaFw0zNDA0MTIxNzM1MThaMIHoMQswCQYDVQQGEwJVUzELMAkGA1UECBMCQ0Ex
            EzARBgNVBAcTCkxvc0FuZ2VsZXMxIDAeBgNVBAoTF1ByaXZhdGUgSW50ZXJuZXQg
            QWNjZXNzMSAwHgYDVQQLExdQcml2YXRlIEludGVybmV0IEFjY2VzczEgMB4GA1UE
            AxMXUHJpdmF0ZSBJbnRlcm5ldCBBY2Nlc3MxIDAeBgNVBCkTF1ByaXZhdGUgSW50
            ZXJuZXQgQWNjZXNzMS8wLQYJKoZIhvcNAQkBFiBzZWN1cmVAcHJpdmF0ZWludGVy
            bmV0YWNjZXNzLmNvbTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAPXD
            L1L9tX6DGf36liA7UBTy5I869z0UVo3lImfOs/GSiFKPtInlesP65577nd7UNzzX
            lH/P/CnFPdBWlLp5ze3HRBCc/Avgr5CdMRkEsySL5GHBZsx6w2cayQ2EcRhVTwWp
            cdldeNO+pPr9rIgPrtXqT4SWViTQRBeGM8CDxAyTopTsobjSiYZCF9Ta1gunl0G/
            8Vfp+SXfYCC+ZzWvP+L1pFhPRqzQQ8k+wMZIovObK1s+nlwPaLyayzw9a8sUnvWB
            /5rGPdIYnQWPgoNlLN9HpSmsAcw2z8DXI9pIxbr74cb3/HSfuYGOLkRqrOk6h4RC
            OfuWoTrZup1uEOn+fw8CAwEAAaOCAVQwggFQMB0GA1UdDgQWBBQv63nQ/pJAt5tL
            y8VJcbHe22ZOsjCCAR8GA1UdIwSCARYwggESgBQv63nQ/pJAt5tLy8VJcbHe22ZO
            sqGB7qSB6zCB6DELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAkNBMRMwEQYDVQQHEwpM
            b3NBbmdlbGVzMSAwHgYDVQQKExdQcml2YXRlIEludGVybmV0IEFjY2VzczEgMB4G
            A1UECxMXUHJpdmF0ZSBJbnRlcm5ldCBBY2Nlc3MxIDAeBgNVBAMTF1ByaXZhdGUg
            SW50ZXJuZXQgQWNjZXNzMSAwHgYDVQQpExdQcml2YXRlIEludGVybmV0IEFjY2Vz
            czEvMC0GCSqGSIb3DQEJARYgc2VjdXJlQHByaXZhdGVpbnRlcm5ldGFjY2Vzcy5j
            b22CCQCmew+WL/O6gzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBDQUAA4IBAQAn
            a5PgrtxfwTumD4+3/SYvwoD66cB8IcK//h1mCzAduU8KgUXocLx7QgJWo9lnZ8xU
            ryXvWab2usg4fqk7FPi00bED4f4qVQFVfGfPZIH9QQ7/48bPM9RyfzImZWUCenK3
            7pdw4Bvgoys2rHLHbGen7f28knT2j/cbMxd78tQc20TIObGjo8+ISTRclSTRBtyC
            GohseKYpTS9himFERpUgNtefvYHbn70mIOzfOJFTVqfrptf9jXa9N8Mpy3ayfodz
            1wiqdteqFXkTYoSDctgKMiZ6GdocK9nMroQipIQtpnwd4yBDWIyC6Bvlkrq5TQUt
            YDQ8z9v+DMO6iwyIDRiU
            -----END CERTIFICATE-----
            </ca>

            disable-occ
            auth-user-pass /run/agenix/pia-login.conf
          '';
          autoStart = true;
          up = "${vpnfailsafe}/vpnfailsafe.sh";
          down = "${vpnfailsafe}/vpnfailsafe.sh";
        };
      };
    };
    age.secrets."pia-login.conf".file = ../../secrets/pia-login.age;
  };
}
89 common/network/pia-vpn/README.md Normal file
@@ -0,0 +1,89 @@
# PIA VPN Multi-Container Module

Routes service containers through a PIA WireGuard VPN using a shared bridge network.

## Architecture

```
              internet
                  │
           ┌──────┴──────┐
           │    Host     │
           │  tinyproxy  │ ← PIA API bootstrap proxy
           │ 10.100.0.1  │
           └──────┬──────┘
                  │ br-vpn (no IPMasquerade)
       ┌──────────┼──────────────┐
       │          │              │
┌──────┴──────┐ ┌─┴──────┐ ┌─────┴──────┐
│   VPN ctr   │ │ servarr│ │transmission│
│ 10.100.0.2  │ │  .11   │ │    .10     │
│  piaw (WG)  │ │        │ │            │
│ gateway+NAT │ └────────┘ └────────────┘
└─────────────┘
```

- **Host** creates the WG interface (encrypted UDP stays in host netns) and runs tinyproxy on the bridge so the VPN container can bootstrap PIA auth before WG is up.
- **VPN container** authenticates with PIA via the proxy, configures WG, sets up NAT (masquerade bridge→WG) and optional port forwarding DNAT.
- **Service containers** default-route through the VPN container. No WG interface = no internet if VPN is down = leak-proof by topology.
- **Host** reaches containers directly on the bridge for nginx reverse proxying.

## Key design decisions

- **Bridge, not veth pairs**: All containers share one bridge (`br-vpn`), so the VPN container can act as a single gateway. The host does NOT masquerade bridge traffic — only the VPN container does (through WG).
- **Port forwarding is implicit**: If any container sets `receiveForwardedPort`, the VPN container automatically handles PIA port forwarding and DNAT. No separate toggle needed.
- **DNS through WG**: Service containers use the VPN container as their DNS server. The VPN container runs `systemd-resolved` listening on its bridge IP, forwarding queries through the WG tunnel.
- **Monthly renewal**: `pia-vpn-setup` uses `Type=simple` + `Restart=always` + `RuntimeMaxSec=30d` to periodically re-authenticate with PIA and get a fresh port forwarding signature (signatures expire after ~2 months). Service containers are unaffected during renewal.

## Files

| File | Purpose |
|---|---|
| `default.nix` | Options, bridge, tinyproxy, host firewall, WG interface creation, assertions |
| `vpn-container.nix` | VPN container: PIA auth, WG config, NAT, DNAT, port refresh timer |
| `service-container.nix` | Generates service containers with static IP and gateway→VPN |
| `scripts.nix` | Bash function library for PIA API calls and WG configuration |
| `ca.rsa.4096.crt` | PIA CA certificate for API TLS verification |

## Usage

```nix
pia-vpn = {
  enable = true;
  serverLocation = "swiss";

  containers.my-service = {
    ip = "10.100.0.10";
    mounts."/data".hostPath = "/data";
    config = { services.my-app.enable = true; };

    # Optional: receive PIA's forwarded port (at most one container)
    receiveForwardedPort = { port = 8080; protocol = "both"; };
    onPortForwarded = ''
      echo "PIA assigned port $PORT, forwarding to $TARGET_IP:8080"
    '';
  };
};
```

## Debugging

```bash
# Check VPN container status
machinectl shell pia-vpn
systemctl status pia-vpn-setup
journalctl -u pia-vpn-setup

# Verify WG tunnel
wg show

# Check NAT/DNAT rules
iptables -t nat -L -v
iptables -L FORWARD -v

# From a service container — verify VPN routing
curl ifconfig.me

# Port refresh logs
journalctl -u pia-vpn-port-refresh
```
279 common/network/pia-vpn/default.nix Normal file
@@ -0,0 +1,279 @@
{ config, lib, pkgs, ... }:

# PIA VPN multi-container module.
#
# Architecture:
#   Host creates WG interface, runs tinyproxy on bridge for PIA API bootstrap.
#   VPN container does all PIA logic via proxy, configures WG, masquerades bridge→piaw.
#   Service containers default route → VPN container (leak-proof by topology).
#
# Reference: https://www.wireguard.com/netns/#ordinary-containerization

with lib;

let
  cfg = config.pia-vpn;

  # Derive prefix length from subnet CIDR (e.g. "10.100.0.0/24" → "24")
  subnetPrefixLen = last (splitString "/" cfg.subnet);

  containerSubmodule = types.submodule ({ name, ... }: {
    options = {
      ip = mkOption {
        type = types.str;
        description = "Static IP address for this container on the VPN bridge";
      };

      config = mkOption {
        type = types.anything;
        default = { };
        description = "NixOS configuration for this container";
      };

      mounts = mkOption {
        type = types.attrsOf (types.submodule {
          options = {
            hostPath = mkOption {
              type = types.str;
              description = "Path on the host to bind mount";
            };
            isReadOnly = mkOption {
              type = types.bool;
              default = false;
              description = "Whether the mount is read-only";
            };
          };
        });
        default = { };
        description = "Bind mounts for the container";
      };

      receiveForwardedPort = mkOption {
        type = types.nullOr (types.submodule {
          options = {
            port = mkOption {
              type = types.nullOr types.port;
              default = null;
              description = ''
                Target port to forward to. If null, forwards to the same PIA-assigned port.
                PIA-assigned ports below 10000 are rejected to avoid accidentally
                forwarding traffic to other services.
              '';
            };
            protocol = mkOption {
              type = types.enum [ "tcp" "udp" "both" ];
              default = "both";
              description = "Protocol(s) to forward";
            };
          };
        });
        default = null;
        description = "Port forwarding configuration. At most one container may set this.";
      };

      onPortForwarded = mkOption {
        type = types.nullOr types.lines;
        default = null;
        description = ''
          Optional script run in the VPN container after port forwarding is established.
          Available environment variables: $PORT (PIA-assigned port), $TARGET_IP (this container's IP).
        '';
      };
    };
  });

  # NOTE: All derivations of cfg.containers are kept INSIDE config = mkIf ... { }
  # to avoid infinite recursion. The module system's pushDownProperties eagerly
  # evaluates let bindings and mkMerge contents, so any top-level let binding
  # that touches cfg.containers would force config evaluation during structure
  # discovery, creating a cycle.
in
{
  imports = [
    ./vpn-container.nix
    ./service-container.nix
  ];

  options.pia-vpn = {
    enable = mkEnableOption "PIA VPN multi-container setup";

    serverLocation = mkOption {
      type = types.str;
      default = "swiss";
      description = "PIA server region ID";
    };

    interfaceName = mkOption {
      type = types.str;
      default = "piaw";
      description = "WireGuard interface name";
    };

    wireguardListenPort = mkOption {
      type = types.port;
      default = 51820;
      description = "WireGuard listen port";
    };

    bridgeName = mkOption {
      type = types.str;
      default = "br-vpn";
      description = "Bridge interface name for VPN containers";
    };

    subnet = mkOption {
      type = types.str;
      default = "10.100.0.0/24";
      description = "Subnet CIDR for VPN bridge network";
    };

    hostAddress = mkOption {
      type = types.str;
      default = "10.100.0.1";
      description = "Host IP on the VPN bridge";
    };

    vpnAddress = mkOption {
      type = types.str;
      default = "10.100.0.2";
      description = "VPN container IP on the bridge";
    };

    proxyPort = mkOption {
      type = types.port;
      default = 8888;
      description = "Tinyproxy port for PIA API bootstrap";
    };

    containers = mkOption {
      type = types.attrsOf containerSubmodule;
      default = { };
      description = "Service containers that route through the VPN";
    };

    # Subnet prefix length derived from cfg.subnet (exposed for other submodules)
    subnetPrefixLen = mkOption {
      type = types.str;
      default = subnetPrefixLen;
      description = "Prefix length derived from subnet CIDR";
      readOnly = true;
    };

    # Shared host entries for all containers (host + VPN + service containers)
    containerHosts = mkOption {
      type = types.attrsOf (types.listOf types.str);
      internal = true;
      readOnly = true;
    };
  };

  config = mkIf cfg.enable {
    assertions =
      let
        forwardingContainers = filterAttrs (_: c: c.receiveForwardedPort != null) cfg.containers;
        containerIPs = mapAttrsToList (_: c: c.ip) cfg.containers;
      in
      [
        {
          assertion = length (attrNames forwardingContainers) <= 1;
          message = "At most one pia-vpn container may set receiveForwardedPort";
        }
        {
          assertion = length containerIPs == length (unique containerIPs);
          message = "pia-vpn container IPs must be unique";
        }
      ];

    # Enable systemd-networkd for bridge management
    systemd.network.enable = true;

    # TODO: re-enable once primary networking uses networkd
    systemd.network.wait-online.enable = false;

    # Tell NetworkManager to ignore VPN bridge and container interfaces
    networking.networkmanager.unmanaged = mkIf config.networking.networkmanager.enable [
      "interface-name:${cfg.bridgeName}"
      "interface-name:ve-*"
    ];

    # Bridge network device
    systemd.network.netdevs."20-${cfg.bridgeName}".netdevConfig = {
      Kind = "bridge";
      Name = cfg.bridgeName;
    };

    # Bridge network configuration — NO IPMasquerade (host must NOT be gateway)
    systemd.network.networks."20-${cfg.bridgeName}" = {
      matchConfig.Name = cfg.bridgeName;
      networkConfig = {
        Address = "${cfg.hostAddress}/${cfg.subnetPrefixLen}";
        DHCPServer = false;
      };
      linkConfig.RequiredForOnline = "no";
    };

    # Allow wireguard traffic through rpfilter
    networking.firewall.checkReversePath = "loose";

    # Block bridge → outside forwarding (prevents host from being a gateway for containers)
    networking.firewall.extraForwardRules = ''
      iifname "${cfg.bridgeName}" oifname != "${cfg.bridgeName}" drop
    '';

    # Allow tinyproxy from bridge (tinyproxy itself restricts to VPN container IP)
    networking.firewall.interfaces.${cfg.bridgeName}.allowedTCPPorts = [ cfg.proxyPort ];

    # Tinyproxy — runs on bridge IP so VPN container can bootstrap PIA auth
    services.tinyproxy = {
      enable = true;
      settings = {
        Listen = cfg.hostAddress;
        Port = cfg.proxyPort;
      };
    };
    systemd.services.tinyproxy.before = [ "container@pia-vpn.service" ];

    # WireGuard interface creation (host-side oneshot)
    # Creates the interface in the host namespace so encrypted UDP stays in host netns.
    # The container takes ownership of the interface on startup via `interfaces = [ ... ]`.
    systemd.services.pia-vpn-wg-create = {
      description = "Create PIA VPN WireGuard interface";

      before = [ "container@pia-vpn.service" ];
      requiredBy = [ "container@pia-vpn.service" ];
      partOf = [ "container@pia-vpn.service" ];
      wantedBy = [ "multi-user.target" ];

      path = with pkgs; [ iproute2 ];

      serviceConfig = {
        Type = "oneshot";
        RemainAfterExit = true;
      };

      script = ''
        [[ -z $(ip link show dev ${cfg.interfaceName} 2>/dev/null) ]] || exit 0
        ip link add ${cfg.interfaceName} type wireguard
      '';

      preStop = ''
        ip link del ${cfg.interfaceName} 2>/dev/null || true
      '';
    };

    # Host entries for container hostnames — NixOS only auto-creates these for
    # hostAddress/localAddress containers, not hostBridge. Use the standard
    # {name}.containers convention.
    pia-vpn.containerHosts =
      { ${cfg.vpnAddress} = [ "pia-vpn.containers" ]; }
      // mapAttrs' (name: ctr: nameValuePair ctr.ip [ "${name}.containers" ]) cfg.containers;

    networking.hosts = cfg.containerHosts;

    # PIA login secret
    age.secrets."pia-login.conf".file = ../../../secrets/pia-login.age;

    # IP forwarding needed for bridge traffic between containers
    networking.ip_forward = true;
  };
}
9 common/network/pia-vpn/pubkey.pem Normal file
@@ -0,0 +1,9 @@
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzLYHwX5Ug/oUObZ5eH5P
rEwmfj4E/YEfSKLgFSsyRGGsVmmjiXBmSbX2s3xbj/ofuvYtkMkP/VPFHy9E/8ox
Y+cRjPzydxz46LPY7jpEw1NHZjOyTeUero5e1nkLhiQqO/cMVYmUnuVcuFfZyZvc
8Apx5fBrIp2oWpF/G9tpUZfUUJaaHiXDtuYP8o8VhYtyjuUu3h7rkQFoMxvuoOFH
6nkc0VQmBsHvCfq4T9v8gyiBtQRy543leapTBMT34mxVIQ4ReGLPVit/6sNLoGLb
gSnGe9Bk/a5V/5vlqeemWF0hgoRtUxMtU1hFbe7e8tSq1j+mu0SHMyKHiHd+OsmU
IQIDAQAB
-----END PUBLIC KEY-----
227 common/network/pia-vpn/scripts.nix Normal file
@@ -0,0 +1,227 @@
let
  caPath = ./ca.rsa.4096.crt;
  pubKeyPath = ./pubkey.pem;
in

# Bash function library for PIA VPN WireGuard operations.
# All PIA API calls accept an optional $proxy variable:
#   proxy="http://10.100.0.1:8888" fetchPIAToken
# When $proxy is set, curl uses --proxy "$proxy"; otherwise direct connection.

# Reference materials:
# https://serverlist.piaservers.net/vpninfo/servers/v6
# https://github.com/pia-foss/manual-connections
# https://github.com/thrnz/docker-wireguard-pia/blob/master/extra/wg-gen.sh
# https://www.wireguard.com/netns/#ordinary-containerization

{
  scriptCommon = ''
    proxy_args() {
      if [[ -n "''${proxy:-}" ]]; then
        echo "--proxy $proxy"
      fi
    }

    # Verify the server list RSA-SHA256 signature (line 1 = JSON, lines 3+ = base64 signature).
    verifyServerList() {
      local raw=$1
      local sig_file
      sig_file=$(mktemp)
      echo "$raw" | tail -n +3 | base64 -d > "$sig_file"
      if ! echo -n "$(echo "$raw" | head -n 1 | tr -d '\n')" | \
          openssl dgst -sha256 -verify "${pubKeyPath}" \
            -signature "$sig_file"; then
        echo "ERROR: Server list signature verification failed" >&2
        rm -f "$sig_file"
        return 1
      fi
      echo "Server list signature verified"
      rm -f "$sig_file"
    }

    fetchPIAToken() {
      local PIA_USER PIA_PASS resp
      echo "Reading PIA credentials..."
      PIA_USER=$(sed '1q;d' /run/agenix/pia-login.conf)
      PIA_PASS=$(sed '2q;d' /run/agenix/pia-login.conf)
      echo "Requesting PIA authentication token..."
      resp=$(curl -s $(proxy_args) -u "$PIA_USER:$PIA_PASS" \
        "https://www.privateinternetaccess.com/gtoken/generateToken")
      PIA_TOKEN=$(echo "$resp" | jq -r '.token')
      if [[ -z "$PIA_TOKEN" || "$PIA_TOKEN" == "null" ]]; then
        echo "ERROR: Failed to fetch PIA token: $resp" >&2
        return 1
      fi
      echo "PIA token acquired"
    }

    choosePIAServer() {
      local serverLocation=$1
      local raw servers_json totalservers serverindex
      servers_json=$(mktemp)
      echo "Fetching PIA server list..."
      raw=$(curl -s $(proxy_args) \
        "https://serverlist.piaservers.net/vpninfo/servers/v6")
      verifyServerList "$raw"
      echo "$raw" | head -n 1 | tr -d '\n' > "$servers_json"

      totalservers=$(jq -r \
        '.regions | .[] | select(.id=="'"$serverLocation"'") | .servers.wg | length' \
        "$servers_json")
      if ! [[ "$totalservers" =~ ^[0-9]+$ ]] || [ "$totalservers" -eq 0 ] 2>/dev/null; then
        echo "ERROR: Location \"$serverLocation\" not found." >&2
        rm -f "$servers_json"
        return 1
      fi
      echo "Found $totalservers WireGuard servers in region '$serverLocation'"
      serverindex=$(( RANDOM % totalservers ))

      WG_HOSTNAME=$(jq -r \
        '.regions | .[] | select(.id=="'"$serverLocation"'") | .servers.wg | .['"$serverindex"'].cn' \
        "$servers_json")
      WG_SERVER_IP=$(jq -r \
        '.regions | .[] | select(.id=="'"$serverLocation"'") | .servers.wg | .['"$serverindex"'].ip' \
        "$servers_json")
      WG_SERVER_PORT=$(jq -r '.groups.wg | .[0] | .ports | .[0]' "$servers_json")

      rm -f "$servers_json"
      echo "Selected server $serverindex/$totalservers: $WG_HOSTNAME ($WG_SERVER_IP:$WG_SERVER_PORT)"
    }

    generateWireguardKey() {
      PRIVATE_KEY=$(wg genkey)
      PUBLIC_KEY=$(echo "$PRIVATE_KEY" | wg pubkey)
      echo "Generated WireGuard keypair"
    }

    authorizeKeyWithPIAServer() {
      local addKeyResponse
      echo "Sending addKey request to $WG_HOSTNAME ($WG_SERVER_IP:$WG_SERVER_PORT)..."
      addKeyResponse=$(curl -s -G $(proxy_args) \
        --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" \
        --cacert "${caPath}" \
        --data-urlencode "pt=$PIA_TOKEN" \
        --data-urlencode "pubkey=$PUBLIC_KEY" \
        "https://$WG_HOSTNAME:$WG_SERVER_PORT/addKey")
      local status
      status=$(echo "$addKeyResponse" | jq -r '.status')
      if [[ "$status" != "OK" ]]; then
        echo "ERROR: addKey failed: $addKeyResponse" >&2
        return 1
      fi
      MY_IP=$(echo "$addKeyResponse" | jq -r '.peer_ip')
      WG_SERVER_PUBLIC_KEY=$(echo "$addKeyResponse" | jq -r '.server_key')
      WG_SERVER_PORT=$(echo "$addKeyResponse" | jq -r '.server_port')
      echo "Key authorized — assigned VPN IP: $MY_IP, server port: $WG_SERVER_PORT"
    }

    writeWireguardQuickFile() {
      local wgFile=$1
      local listenPort=$2
      rm -f "$wgFile"
      touch "$wgFile"
      chmod 700 "$wgFile"
      cat > "$wgFile" <<WGEOF
    [Interface]
    PrivateKey = $PRIVATE_KEY
    ListenPort = $listenPort
    [Peer]
    PersistentKeepalive = 25
    PublicKey = $WG_SERVER_PUBLIC_KEY
    AllowedIPs = 0.0.0.0/0
    Endpoint = $WG_SERVER_IP:$WG_SERVER_PORT
    WGEOF
      echo "Wrote WireGuard config to $wgFile (listen=$listenPort)"
    }

    writeChosenServerToFile() {
      local serverFile=$1
      jq -n \
        --arg hostname "$WG_HOSTNAME" \
        --arg ip "$WG_SERVER_IP" \
        --arg port "$WG_SERVER_PORT" \
        '{hostname: $hostname, ip: $ip, port: $port}' > "$serverFile"
      chmod 700 "$serverFile"
      echo "Wrote server info to $serverFile"
    }

    loadChosenServerFromFile() {
      local serverFile=$1
      WG_HOSTNAME=$(jq -r '.hostname' "$serverFile")
      WG_SERVER_IP=$(jq -r '.ip' "$serverFile")
      WG_SERVER_PORT=$(jq -r '.port' "$serverFile")
      echo "Loaded server info from $serverFile: $WG_HOSTNAME ($WG_SERVER_IP:$WG_SERVER_PORT)"
    }

    # Reset WG interface and tear down NAT/forwarding rules.
    # Called on startup (clear stale state) and on exit via trap.
    cleanupVpn() {
      local interfaceName=$1
      wg set "$interfaceName" listen-port 0 2>/dev/null || true
      ip -4 address flush dev "$interfaceName" 2>/dev/null || true
      ip route del default dev "$interfaceName" 2>/dev/null || true
      iptables -t nat -F 2>/dev/null || true
      iptables -F FORWARD 2>/dev/null || true
    }

    connectToServer() {
      local wgFile=$1
      local interfaceName=$2

      echo "Applying WireGuard config to $interfaceName..."
      wg setconf "$interfaceName" "$wgFile"
      ip -4 address add "$MY_IP" dev "$interfaceName"
      ip link set mtu 1420 up dev "$interfaceName"
      echo "WireGuard interface $interfaceName is up with IP $MY_IP"
    }

    reservePortForward() {
      local payload_and_signature
      echo "Requesting port forward signature from $WG_HOSTNAME..."
      payload_and_signature=$(curl -s -m 5 \
        --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" \
        --cacert "${caPath}" \
        -G --data-urlencode "token=$PIA_TOKEN" \
        "https://$WG_HOSTNAME:19999/getSignature")
      local status
      status=$(echo "$payload_and_signature" | jq -r '.status')
      if [[ "$status" != "OK" ]]; then
        echo "ERROR: getSignature failed: $payload_and_signature" >&2
        return 1
      fi
      PORT_SIGNATURE=$(echo "$payload_and_signature" | jq -r '.signature')
      PORT_PAYLOAD=$(echo "$payload_and_signature" | jq -r '.payload')
      PORT=$(echo "$PORT_PAYLOAD" | base64 -d | jq -r '.port')
      echo "Port forward reserved: port $PORT"
    }

    writePortRenewalFile() {
      local portRenewalFile=$1
      jq -n \
        --arg signature "$PORT_SIGNATURE" \
        --arg payload "$PORT_PAYLOAD" \
        '{signature: $signature, payload: $payload}' > "$portRenewalFile"
      chmod 700 "$portRenewalFile"
      echo "Wrote port renewal data to $portRenewalFile"
    }

    readPortRenewalFile() {
      local portRenewalFile=$1
      PORT_SIGNATURE=$(jq -r '.signature' "$portRenewalFile")
      PORT_PAYLOAD=$(jq -r '.payload' "$portRenewalFile")
      echo "Loaded port renewal data from $portRenewalFile"
    }

    refreshPIAPort() {
      local bindPortResponse
      echo "Refreshing port forward binding with $WG_HOSTNAME..."
      bindPortResponse=$(curl -Gs -m 5 \
        --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" \
        --cacert "${caPath}" \
        --data-urlencode "payload=$PORT_PAYLOAD" \
        --data-urlencode "signature=$PORT_SIGNATURE" \
        "https://$WG_HOSTNAME:19999/bindPort")
      echo "bindPort response: $bindPortResponse"
    }
  '';
}
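The server-list wire format that `verifyServerList` and `choosePIAServer` rely on (line 1 = JSON document, line 2 = blank, lines 3+ = base64-encoded signature) can be sanity-checked in isolation. This sketch uses a hypothetical payload, not real PIA data:

```shell
# Hypothetical payload in the layout the functions above assume:
# line 1 = JSON, line 2 = blank, lines 3+ = base64 signature.
payload=$(mktemp)
printf '%s\n' '{"regions":[]}' '' "$(printf 'raw-signature-bytes' | base64)" > "$payload"

# What the library extracts from it:
json=$(head -n 1 "$payload" | tr -d '\n')   # document fed to `openssl dgst -verify`
sig=$(tail -n +3 "$payload" | base64 -d)    # raw signature bytes
echo "$json"   # → {"regions":[]}
echo "$sig"    # → raw-signature-bytes
```

Splitting on line positions rather than parsing markers is what makes the signature check cheap: the JSON is verified byte-for-byte exactly as served.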
87
common/network/pia-vpn/service-container.nix
Normal file
87
common/network/pia-vpn/service-container.nix
Normal file
@@ -0,0 +1,87 @@
{ config, lib, allModules, ... }:

# Generates service containers that route all traffic through the VPN container.
# Each container gets a static IP on the VPN bridge with default route → VPN container.
#
# Uses lazy mapAttrs inside fixed config keys to avoid infinite recursion.
# (mkMerge + mapAttrsToList at the top level forces eager evaluation of cfg.containers
# during module structure discovery, which creates a cycle with config evaluation.)

with lib;

let
  cfg = config.pia-vpn;

  mkContainer = name: ctr: {
    autoStart = true;
    ephemeral = true;
    privateNetwork = true;
    hostBridge = cfg.bridgeName;

    bindMounts = mapAttrs
      (_: mount: {
        hostPath = mount.hostPath;
        isReadOnly = mount.isReadOnly;
      })
      ctr.mounts;

    config = { config, pkgs, lib, ... }: {
      imports = allModules ++ [ ctr.config ];

      # Static IP with gateway pointing to the VPN container
      networking.useNetworkd = true;
      systemd.network.enable = true;
      networking.useDHCP = false;

      systemd.network.networks."20-eth0" = {
        matchConfig.Name = "eth0";
        networkConfig = {
          Address = "${ctr.ip}/${cfg.subnetPrefixLen}";
          Gateway = cfg.vpnAddress;
          DNS = [ cfg.vpnAddress ];
        };
      };

      networking.hosts = cfg.containerHosts;

      # DNS through the VPN container (queries go through the WG tunnel = no DNS leak)
      networking.nameservers = [ cfg.vpnAddress ];

      # Wait for actual VPN connectivity before network-online.target.
      # Without this, services start before the VPN tunnel is ready and failures
      # can't be reported to ntfy (no outbound connectivity yet).
      systemd.services.wait-for-vpn = {
        description = "Wait for VPN connectivity";
        before = [ "network-online.target" ];
        wantedBy = [ "network-online.target" ];
        after = [ "systemd-networkd-wait-online.service" ];
        serviceConfig.Type = "oneshot";
        path = [ pkgs.iputils ];
        script = ''
          until ping -c1 -W2 1.1.1.1 >/dev/null 2>&1; do
            echo "Waiting for VPN connectivity..."
            sleep 1
          done
        '';
      };
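The wait-for-vpn loop above polls forever and relies on systemd's job timeout to bound it. The polling idea can be sketched as a bounded helper, with the probe command parameterized so the loop itself is exercisable without a network (the `wait_for` name is mine, not part of the module; in the unit, the probe would be `ping -c1 -W2 1.1.1.1`):

```shell
# wait_for N CMD...: run CMD up to N times, returning 0 on the first success.
wait_for() {
  local tries=$1; shift
  local i
  for ((i = 0; i < tries; i++)); do
    "$@" && return 0
    sleep 0.1
  done
  return 1
}

wait_for 3 true && echo "up"            # probe succeeds immediately
wait_for 2 false || echo "timed out"    # probe never succeeds
```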

      # Trust the bridge interface (host reaches us directly for nginx)
      networking.firewall.trustedInterfaces = [ "eth0" ];

      # Disable host resolv.conf — we use our own networkd DNS config
      networking.useHostResolvConf = false;
    };
  };

  mkContainerOrdering = name: _ctr: nameValuePair "container@${name}" {
    after = [ "container@pia-vpn.service" ];
    requires = [ "container@pia-vpn.service" ];
    partOf = [ "container@pia-vpn.service" ];
  };
in
{
  config = mkIf cfg.enable {
    containers = mapAttrs mkContainer cfg.containers;
    systemd.services = mapAttrs' mkContainerOrdering cfg.containers;
  };
}
232	common/network/pia-vpn/vpn-container.nix	Normal file
@@ -0,0 +1,232 @@
{ config, lib, allModules, ... }:

# VPN container: runs all PIA logic, acts as WireGuard gateway + NAT for service containers.

with lib;

let
  cfg = config.pia-vpn;
  scripts = import ./scripts.nix;

  # Port forwarding derived state
  forwardingContainers = filterAttrs (_: c: c.receiveForwardedPort != null) cfg.containers;
  portForwarding = forwardingContainers != { };
  forwardingContainerName = if portForwarding then head (attrNames forwardingContainers) else null;
  forwardingContainer = if portForwarding then forwardingContainers.${forwardingContainerName} else null;

  serverFile = "/var/lib/pia-vpn/server.json";
  wgFile = "/var/lib/pia-vpn/wg.conf";
  portRenewalFile = "/var/lib/pia-vpn/port-renewal.json";
  proxy = "http://${cfg.hostAddress}:${toString cfg.proxyPort}";

  # DNAT/forwarding rules for port forwarding
  dnatSetupScript = optionalString portForwarding (
    let
      fwd = forwardingContainer.receiveForwardedPort;
      targetIp = forwardingContainer.ip;
      dnatTarget = if fwd.port != null then "${targetIp}:${toString fwd.port}" else targetIp;
      targetPort = if fwd.port != null then toString fwd.port else "$PORT";
      tcpRules = optionalString (fwd.protocol == "tcp" || fwd.protocol == "both") ''
        echo "Setting up TCP DNAT: port $PORT → ${targetIp}:${targetPort}"
        iptables -t nat -A PREROUTING -i ${cfg.interfaceName} -p tcp --dport $PORT -j DNAT --to ${dnatTarget}
        iptables -A FORWARD -i ${cfg.interfaceName} -d ${targetIp} -p tcp --dport ${targetPort} -j ACCEPT
      '';
      udpRules = optionalString (fwd.protocol == "udp" || fwd.protocol == "both") ''
        echo "Setting up UDP DNAT: port $PORT → ${targetIp}:${targetPort}"
        iptables -t nat -A PREROUTING -i ${cfg.interfaceName} -p udp --dport $PORT -j DNAT --to ${dnatTarget}
        iptables -A FORWARD -i ${cfg.interfaceName} -d ${targetIp} -p udp --dport ${targetPort} -j ACCEPT
      '';
      onPortForwarded = optionalString (forwardingContainer.onPortForwarded != null) ''
        TARGET_IP="${targetIp}"
        export PORT TARGET_IP
        echo "Running onPortForwarded hook for ${forwardingContainerName} (port=$PORT, target=$TARGET_IP)"
        ${forwardingContainer.onPortForwarded}
      '';
    in
    ''
      if [ "$PORT" -lt 10000 ]; then
        echo "ERROR: PIA assigned low port $PORT (< 10000), refusing to set up DNAT" >&2
      else
        ${tcpRules}
        ${udpRules}
        ${onPortForwarded}
      fi
    ''
  );
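The `dnatTarget`/`targetPort` pair above leans on iptables' convention that `--to ip` (no port) preserves the packet's original destination port, while `--to ip:port` rewrites it. The same conditional, sketched in shell with made-up values:

```shell
target_ip="10.100.0.2"    # hypothetical service-container IP
fwd_port=""               # empty plays the role of fwd.port == null in the Nix code
if [ -n "$fwd_port" ]; then
  dnat_target="$target_ip:$fwd_port"   # rewrite the destination port
else
  dnat_target="$target_ip"             # keep PIA's assigned port
fi
echo "$dnat_target"                    # → 10.100.0.2
```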
in
{
  config = mkIf cfg.enable {
    # Give the container more time to boot (pia-vpn-setup retries can delay readiness)
    systemd.services."container@pia-vpn".serviceConfig.TimeoutStartSec = mkForce "180s";

    containers.pia-vpn = {
      autoStart = true;
      ephemeral = true;
      privateNetwork = true;
      hostBridge = cfg.bridgeName;
      interfaces = [ cfg.interfaceName ];

      bindMounts."/run/agenix" = {
        hostPath = "/run/agenix";
        isReadOnly = true;
      };

      config = { config, pkgs, lib, ... }:
        let
          scriptPkgs = with pkgs; [ wireguard-tools iproute2 curl jq iptables coreutils openssl ];
        in
        {
          imports = allModules;

          networking.hosts = cfg.containerHosts;

          # Static IP on the bridge — no gateway (VPN container routes via WG only)
          networking.useNetworkd = true;
          systemd.network.enable = true;
          networking.useDHCP = false;

          systemd.network.networks."20-eth0" = {
            matchConfig.Name = "eth0";
            networkConfig = {
              Address = "${cfg.vpnAddress}/${cfg.subnetPrefixLen}";
              DHCPServer = false;
            };
          };

          # Ignore the WG interface for wait-online (it's configured manually, not by networkd)
          systemd.network.wait-online.ignoredInterfaces = [ cfg.interfaceName ];

          # Route ntfy alerts through the host proxy (VPN container has no gateway on eth0)
          ntfy-alerts.curlExtraArgs = "--proxy http://${cfg.hostAddress}:${toString cfg.proxyPort}";

          # Enable forwarding so bridge traffic can go through WG
          boot.kernel.sysctl."net.ipv4.ip_forward" = 1;

          # Trust the bridge interface
          networking.firewall.trustedInterfaces = [ "eth0" ];

          # DNS: use systemd-resolved listening on the bridge IP so service containers
          # can use the VPN container as their DNS server (queries go through the WG tunnel = no DNS leak)
          services.resolved = {
            enable = true;
            settings.Resolve.DNSStubListenerExtra = cfg.vpnAddress;
          };

          # Don't use host resolv.conf — resolved manages DNS
          networking.useHostResolvConf = false;

          # State directory for PIA config files
          systemd.tmpfiles.rules = [
            "d /var/lib/pia-vpn 0700 root root -"
          ];

          # PIA VPN setup service — does all the PIA auth, WG config, and NAT setup
          systemd.services.pia-vpn-setup = {
            description = "PIA VPN WireGuard Setup";

            wants = [ "network-online.target" ];
            after = [ "network.target" "network-online.target" "systemd-networkd.service" ];
            wantedBy = [ "multi-user.target" ];

            path = scriptPkgs;

            serviceConfig = {
              Type = "simple";
              Restart = "always";
              RestartSec = "5m";
              RuntimeMaxSec = "30d";
            };

            script = ''
              set -euo pipefail
              ${scripts.scriptCommon}

              trap 'cleanupVpn ${cfg.interfaceName}' EXIT
              cleanupVpn ${cfg.interfaceName}

              proxy="${proxy}"

              # 1. Authenticate with PIA via the proxy (VPN container has no internet yet)
              echo "Choosing PIA server in region '${cfg.serverLocation}'..."
              choosePIAServer '${cfg.serverLocation}'

              echo "Fetching PIA authentication token..."
              fetchPIAToken

              # 2. Generate WG keys and authorize with the PIA server
              echo "Generating WireGuard keypair..."
              generateWireguardKey

              echo "Authorizing key with PIA server $WG_HOSTNAME..."
              authorizeKeyWithPIAServer

              # 3. Configure the WG interface (already created by the host and moved into our namespace)
              echo "Configuring WireGuard interface ${cfg.interfaceName}..."
              writeWireguardQuickFile '${wgFile}' ${toString cfg.wireguardListenPort}
              writeChosenServerToFile '${serverFile}'
              connectToServer '${wgFile}' '${cfg.interfaceName}'

              # 4. Default route through WG
              ip route replace default dev ${cfg.interfaceName}
              echo "Default route set through ${cfg.interfaceName}"

              # 5. NAT: masquerade bridge → WG (so service containers' traffic appears to come from the VPN IP)
              echo "Setting up NAT masquerade..."
              iptables -t nat -A POSTROUTING -o ${cfg.interfaceName} -j MASQUERADE
              iptables -A FORWARD -i eth0 -o ${cfg.interfaceName} -j ACCEPT
              iptables -A FORWARD -i ${cfg.interfaceName} -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

              ${optionalString portForwarding ''
                # 6. Port forwarding setup
                echo "Reserving port forward..."
                reservePortForward
                writePortRenewalFile '${portRenewalFile}'

                # The first bindPort triggers the actual port allocation
                echo "Binding port $PORT..."
                refreshPIAPort

                echo "PIA assigned port: $PORT"

                # DNAT rules to forward the PIA port to the target container
                ${dnatSetupScript}
              ''}

              echo "PIA VPN setup complete"
              exec sleep infinity
            '';
          };

          # Port refresh timer (every 10 min) — keeps PIA port forwarding alive
          systemd.services.pia-vpn-port-refresh = mkIf portForwarding {
            description = "PIA VPN Port Forward Refresh";
            after = [ "pia-vpn-setup.service" ];
            requires = [ "pia-vpn-setup.service" ];

            path = scriptPkgs;

            serviceConfig.Type = "oneshot";

            script = ''
              set -euo pipefail
              ${scripts.scriptCommon}
              loadChosenServerFromFile '${serverFile}'
              readPortRenewalFile '${portRenewalFile}'
              echo "Refreshing PIA port forward..."
              refreshPIAPort
            '';
          };

          systemd.timers.pia-vpn-port-refresh = mkIf portForwarding {
            partOf = [ "pia-vpn-port-refresh.service" ];
            wantedBy = [ "timers.target" ];
            timerConfig = {
              OnCalendar = "*:0/10";
              RandomizedDelaySec = "1m";
            };
          };
        };
    };
  };
}
@@ -1,363 +0,0 @@
{ config, lib, pkgs, ... }:

# Server list:
# https://serverlist.piaservers.net/vpninfo/servers/v6
# Reference materials:
# https://github.com/pia-foss/manual-connections
# https://github.com/thrnz/docker-wireguard-pia/blob/master/extra/wg-gen.sh

# TODO handle potential errors (or at least print status, success, and failures to the console)
# TODO parameterize names of systemd services so that multiple wg VPNs could coexist more easily
# TODO implement this module such that the wireguard VPN doesn't have to live in a container
# TODO don't add forward rules if the PIA port is the same as cfg.forwardedPort
# TODO verify signatures of PIA responses
# TODO `RuntimeMaxSec = "30d";` for pia-vpn-wireguard-init isn't allowed per the systemd logs. Find an alternative.

with builtins;
with lib;

let
  cfg = config.pia.wireguard;

  getPIAToken = ''
    PIA_USER=`sed '1q;d' /run/agenix/pia-login.conf`
    PIA_PASS=`sed '2q;d' /run/agenix/pia-login.conf`
    # PIA_TOKEN only lasts 24hrs
    PIA_TOKEN=`curl -s -u "$PIA_USER:$PIA_PASS" https://www.privateinternetaccess.com/gtoken/generateToken | jq -r '.token'`
  '';

  chooseWireguardServer = ''
    servers=$(mktemp)
    servers_json=$(mktemp)
    curl -s "https://serverlist.piaservers.net/vpninfo/servers/v6" > "$servers"
    # extract the json part only
    head -n 1 "$servers" | tr -d '\n' > "$servers_json"

    echo "Available location ids:" && jq '.regions | .[] | {name, id, port_forward}' "$servers_json"

    # Some locations have multiple servers available. Pick a random one.
    totalservers=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | length' "$servers_json")
    if ! [[ "$totalservers" =~ ^[0-9]+$ ]] || [ "$totalservers" -eq 0 ] 2>/dev/null; then
      echo "Location \"${cfg.serverLocation}\" not found."
      exit 1
    fi
    serverindex=$(( RANDOM % totalservers ))
    WG_HOSTNAME=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | .['$serverindex'].cn' "$servers_json")
    WG_SERVER_IP=$(jq -r '.regions | .[] | select(.id=="'${cfg.serverLocation}'") | .servers.wg | .['$serverindex'].ip' "$servers_json")
    WG_SERVER_PORT=$(jq -r '.groups.wg | .[0] | .ports | .[0]' "$servers_json")

    # write the chosen server
    rm -f /tmp/${cfg.interfaceName}-server.conf
    touch /tmp/${cfg.interfaceName}-server.conf
    chmod 700 /tmp/${cfg.interfaceName}-server.conf
    echo "$WG_HOSTNAME" >> /tmp/${cfg.interfaceName}-server.conf
    echo "$WG_SERVER_IP" >> /tmp/${cfg.interfaceName}-server.conf
    echo "$WG_SERVER_PORT" >> /tmp/${cfg.interfaceName}-server.conf

    rm "$servers_json" "$servers"
  '';
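Two details of chooseWireguardServer are worth seeing in isolation: the serverlist response is JSON on its first line followed by signature material (hence the `head -n 1 | tr -d '\n'`), and `RANDOM % total` always yields an index in `[0, total)`. A self-contained sketch with a fake serverlist body (no jq, no network):

```shell
# Fake two-line serverlist: JSON first, signature material after.
f=$(mktemp)
printf '%s\n%s\n' '{"regions":[]}' 'base64-signature-lines-follow' > "$f"
json=$(head -n 1 "$f" | tr -d '\n')   # keep only the JSON line
echo "$json"                          # → {"regions":[]}

# Random in-range server index, as in the module:
totalservers=4
serverindex=$(( RANDOM % totalservers ))
[ "$serverindex" -ge 0 ] && [ "$serverindex" -lt "$totalservers" ] && echo "in range"
rm -f "$f"
```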

  getChosenWireguardServer = ''
    WG_HOSTNAME=`sed '1q;d' /tmp/${cfg.interfaceName}-server.conf`
    WG_SERVER_IP=`sed '2q;d' /tmp/${cfg.interfaceName}-server.conf`
    WG_SERVER_PORT=`sed '3q;d' /tmp/${cfg.interfaceName}-server.conf`
  '';

  refreshPIAPort = ''
    ${getChosenWireguardServer}
    signature=`sed '1q;d' /tmp/${cfg.interfaceName}-port-renewal`
    payload=`sed '2q;d' /tmp/${cfg.interfaceName}-port-renewal`
    bind_port_response=`curl -Gs -m 5 --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" --data-urlencode "payload=$payload" --data-urlencode "signature=$signature" "https://$WG_HOSTNAME:19999/bindPort"`
  '';

  portForwarding = cfg.forwardPortForTransmission || cfg.forwardedPort != null;

  containerServiceName = "container@${config.vpn-container.containerName}.service";
in
{
  options.pia.wireguard = {
    enable = mkEnableOption "Enable private internet access";
    badPortForwardPorts = mkOption {
      type = types.listOf types.port;
      description = ''
        Ports that will not be accepted from PIA.
        If PIA assigns a port from this list, the connection is aborted, since we cannot ask for a different port.
        This is used to guarantee we are not assigned a port that is used by a service we do not want exposed.
      '';
    };
    wireguardListenPort = mkOption {
      type = types.port;
      description = "The port wireguard listens on for this VPN connection";
      default = 51820;
    };
    serverLocation = mkOption {
      type = types.str;
      default = "swiss";
    };
    interfaceName = mkOption {
      type = types.str;
      default = "piaw";
    };
    forwardedPort = mkOption {
      type = types.nullOr types.port;
      description = "The port to redirect port-forwarded TCP VPN traffic to";
      default = null;
    };
    forwardPortForTransmission = mkEnableOption "PIA port forwarding for transmission should be performed.";
  };

  config = mkIf cfg.enable {
    assertions = [
      {
        assertion = cfg.forwardPortForTransmission != (cfg.forwardedPort != null);
        message = ''
          The PIA forwarded port cannot simultaneously be used by transmission and redirected to another port.
        '';
      }
    ];

    # mounts used to pass the connection parameters to the container;
    # the container doesn't have internet until it uses these parameters, so it cannot fetch them itself
    vpn-container.mounts = [
      "/tmp/${cfg.interfaceName}.conf"
      "/tmp/${cfg.interfaceName}-server.conf"
      "/tmp/${cfg.interfaceName}-address.conf"
    ];

    # The container takes ownership of the wireguard interface on its startup
    containers.vpn.interfaces = [ cfg.interfaceName ];

    # TODO: while this is much better than "loose" networking, it seems to have issues with firewall restarts
    # allow traffic for the wireguard interface to pass, since wireguard trips up rpfilter
    # networking.firewall = {
    #   extraCommands = ''
    #     ip46tables -t raw -I nixos-fw-rpfilter -p udp -m udp --sport ${toString cfg.wireguardListenPort} -j RETURN
    #     ip46tables -t raw -I nixos-fw-rpfilter -p udp -m udp --dport ${toString cfg.wireguardListenPort} -j RETURN
    #   '';
    #   extraStopCommands = ''
    #     ip46tables -t raw -D nixos-fw-rpfilter -p udp -m udp --sport ${toString cfg.wireguardListenPort} -j RETURN || true
    #     ip46tables -t raw -D nixos-fw-rpfilter -p udp -m udp --dport ${toString cfg.wireguardListenPort} -j RETURN || true
    #   '';
    # };
    networking.firewall.checkReversePath = "loose";

    systemd.services.pia-vpn-wireguard-init = {
      description = "Creates PIA VPN Wireguard Interface";

      wants = [ "network-online.target" ];
      after = [ "network.target" "network-online.target" ];
      before = [ containerServiceName ];
      requiredBy = [ containerServiceName ];
      partOf = [ containerServiceName ];
      wantedBy = [ "multi-user.target" ];

      path = with pkgs; [ wireguard-tools jq curl iproute2 iputils ];

      serviceConfig = {
        Type = "oneshot";
        RemainAfterExit = true;

        # restart once a month; the PIA forwarded port expires after two months
        # because the container is "PartOf" this unit, it gets restarted too
        RuntimeMaxSec = "30d";
      };

      script = ''
        echo Waiting for internet...
        while ! ping -c 1 -W 1 1.1.1.1; do
          sleep 1
        done

        # Prepare to connect by generating wg secrets and auth'ing with PIA, since the container
        # cannot do so without internet to start with. NAT'ing the host's internet would address this
        # issue but is not ideal, because leaking traffic outside of the VPN then becomes more likely.

        ${chooseWireguardServer}

        ${getPIAToken}

        # generate wireguard keys
        privKey=$(wg genkey)
        pubKey=$(echo "$privKey" | wg pubkey)

        # authorize our WG keys with the PIA server we are about to connect to
        wireguard_json=`curl -s -G --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" --data-urlencode "pt=$PIA_TOKEN" --data-urlencode "pubkey=$pubKey" https://$WG_HOSTNAME:$WG_SERVER_PORT/addKey`

        # create the wg-quick config file
        rm -f /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
        touch /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
        chmod 700 /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
        echo "
        [Interface]
        # Address = $(echo "$wireguard_json" | jq -r '.peer_ip')
        PrivateKey = $privKey
        ListenPort = ${toString cfg.wireguardListenPort}
        [Peer]
        PersistentKeepalive = 25
        PublicKey = $(echo "$wireguard_json" | jq -r '.server_key')
        AllowedIPs = 0.0.0.0/0
        Endpoint = $WG_SERVER_IP:$(echo "$wireguard_json" | jq -r '.server_port')
        " >> /tmp/${cfg.interfaceName}.conf

        # create a file storing the VPN ip address PIA assigned to us
        echo "$wireguard_json" | jq -r '.peer_ip' >> /tmp/${cfg.interfaceName}-address.conf

        # Create the wg interface now so it inherits from the namespace with internet access;
        # the container will handle actually connecting the interface, since that info is
        # not preserved upon moving into the container's networking namespace.
        # Roughly following this guide: https://www.wireguard.com/netns/#ordinary-containerization
        [[ -z $(ip link show dev ${cfg.interfaceName} 2>/dev/null) ]] || exit
        ip link add ${cfg.interfaceName} type wireguard
      '';

      preStop = ''
        # cleanup wireguard interface
        ip link del ${cfg.interfaceName}
        rm -f /tmp/${cfg.interfaceName}.conf /tmp/${cfg.interfaceName}-address.conf
      '';
    };

    vpn-container.config.systemd.services.pia-vpn-wireguard = {
      description = "Initializes the PIA VPN WireGuard Tunnel";

      wants = [ "network-online.target" ];
      after = [ "network.target" "network-online.target" ];
      wantedBy = [ "multi-user.target" ];

      path = with pkgs; [ wireguard-tools iproute2 curl jq iptables ];

      serviceConfig = {
        Type = "oneshot";
        RemainAfterExit = true;
      };

      script = ''
        # Near equivalent of "wg-quick up /tmp/${cfg.interfaceName}.conf".
        # We cannot actually call wg-quick, because the interface has to be
        # created before the container takes ownership of it.
        # Thus, this assumes the wg interface was already created:
        #   ip link add ${cfg.interfaceName} type wireguard

        ${getChosenWireguardServer}

        myaddress=`cat /tmp/${cfg.interfaceName}-address.conf`

        wg setconf ${cfg.interfaceName} /tmp/${cfg.interfaceName}.conf
        ip -4 address add $myaddress dev ${cfg.interfaceName}
        ip link set mtu 1420 up dev ${cfg.interfaceName}
        wg set ${cfg.interfaceName} fwmark ${toString cfg.wireguardListenPort}
        ip -4 route add 0.0.0.0/0 dev ${cfg.interfaceName} table ${toString cfg.wireguardListenPort}

        # TODO is this needed?
        ip -4 rule add not fwmark ${toString cfg.wireguardListenPort} table ${toString cfg.wireguardListenPort}
        ip -4 rule add table main suppress_prefixlength 0

        # The rest of the script is only for port forwarding; skip if not needed
        if [ ${boolToString portForwarding} == false ]; then exit 0; fi

        # Reserve port
        ${getPIAToken}
        payload_and_signature=`curl -s -m 5 --connect-to "$WG_HOSTNAME::$WG_SERVER_IP:" --cacert "${./ca.rsa.4096.crt}" -G --data-urlencode "token=$PIA_TOKEN" "https://$WG_HOSTNAME:19999/getSignature"`
        signature=$(echo "$payload_and_signature" | jq -r '.signature')
        payload=$(echo "$payload_and_signature" | jq -r '.payload')
        port=$(echo "$payload" | base64 -d | jq -r '.port')

        # Check if the port is acceptable
        notallowed=(${concatStringsSep " " (map toString cfg.badPortForwardPorts)})
        if [[ " ''${notallowed[*]} " =~ " $port " ]]; then
          # the port PIA assigned is not allowed, kill the connection
          wg-quick down /tmp/${cfg.interfaceName}.conf
          exit 1
        fi

        # write the reserved port to a file readable by all users
        echo $port > /tmp/${cfg.interfaceName}-port
        chmod 644 /tmp/${cfg.interfaceName}-port

        # write the payload and signature info needed to refresh the allocated forwarded port
        rm -f /tmp/${cfg.interfaceName}-port-renewal
        touch /tmp/${cfg.interfaceName}-port-renewal
        chmod 700 /tmp/${cfg.interfaceName}-port-renewal
        echo $signature >> /tmp/${cfg.interfaceName}-port-renewal
        echo $payload >> /tmp/${cfg.interfaceName}-port-renewal

        # Block all traffic from the VPN interface except for traffic from the forwarded port
        iptables -I nixos-fw -p tcp --dport $port -j nixos-fw-accept -i ${cfg.interfaceName}
        iptables -I nixos-fw -p udp --dport $port -j nixos-fw-accept -i ${cfg.interfaceName}

        # The first port refresh triggers the port to be actually allocated
        ${refreshPIAPort}

        ${optionalString (cfg.forwardedPort != null) ''
          # redirect the forwarded port
          iptables -A INPUT -i ${cfg.interfaceName} -p tcp --dport $port -j ACCEPT
          iptables -A INPUT -i ${cfg.interfaceName} -p udp --dport $port -j ACCEPT
          iptables -A INPUT -i ${cfg.interfaceName} -p tcp --dport ${toString cfg.forwardedPort} -j ACCEPT
          iptables -A INPUT -i ${cfg.interfaceName} -p udp --dport ${toString cfg.forwardedPort} -j ACCEPT
          iptables -A PREROUTING -t nat -i ${cfg.interfaceName} -p tcp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
          iptables -A PREROUTING -t nat -i ${cfg.interfaceName} -p udp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
        ''}

        ${optionalString cfg.forwardPortForTransmission ''
          # assumes no auth needed for transmission
          curlout=$(curl localhost:9091/transmission/rpc 2>/dev/null)
          regex='X-Transmission-Session-Id\: (\w*)'
          if [[ $curlout =~ $regex ]]; then
            sessionId=''${BASH_REMATCH[1]}
          else
            exit 1
          fi

          # set the port in transmission
          data='{"method": "session-set", "arguments": { "peer-port" :'$port' } }'
          curl http://localhost:9091/transmission/rpc -d "$data" -H "X-Transmission-Session-Id: $sessionId"
        ''}
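The transmission hook above scrapes the `X-Transmission-Session-Id` header (returned by transmission's RPC endpoint on a 409) with bash's `=~`. The same extraction can be done with sed, shown here against a canned response body (the session-id value is invented):

```shell
# Hypothetical 409 response body from transmission's RPC endpoint.
response='HTTP/1.1 409 Conflict
X-Transmission-Session-Id: AbC123xyz
Content-Length: 0'

# Print only the captured session id from the matching line:
sessionId=$(printf '%s\n' "$response" \
  | sed -n 's/.*X-Transmission-Session-Id: \([A-Za-z0-9]*\).*/\1/p')
echo "$sessionId"   # → AbC123xyz
```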
      '';

      preStop = ''
        wg-quick down /tmp/${cfg.interfaceName}.conf

        # The rest of the script is only for port forwarding; skip if not needed
        if [ ${boolToString portForwarding} == false ]; then exit 0; fi

        ${optionalString (cfg.forwardedPort != null) ''
          # stop redirecting the forwarded port
          iptables -D INPUT -i ${cfg.interfaceName} -p tcp --dport $port -j ACCEPT
          iptables -D INPUT -i ${cfg.interfaceName} -p udp --dport $port -j ACCEPT
          iptables -D INPUT -i ${cfg.interfaceName} -p tcp --dport ${toString cfg.forwardedPort} -j ACCEPT
          iptables -D INPUT -i ${cfg.interfaceName} -p udp --dport ${toString cfg.forwardedPort} -j ACCEPT
          iptables -D PREROUTING -t nat -i ${cfg.interfaceName} -p tcp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
          iptables -D PREROUTING -t nat -i ${cfg.interfaceName} -p udp --dport $port -j REDIRECT --to-port ${toString cfg.forwardedPort}
        ''}
      '';
    };

    vpn-container.config.systemd.services.pia-vpn-wireguard-forward-port = {
      enable = portForwarding;
      description = "PIA VPN WireGuard Tunnel Port Forwarding";
      after = [ "pia-vpn-wireguard.service" ];
      requires = [ "pia-vpn-wireguard.service" ];

      path = with pkgs; [ curl ];

      serviceConfig = {
        Type = "oneshot";
      };

      script = refreshPIAPort;
    };

    vpn-container.config.systemd.timers.pia-vpn-wireguard-forward-port = {
      enable = portForwarding;
      partOf = [ "pia-vpn-wireguard-forward-port.service" ];
      wantedBy = [ "timers.target" ];
      timerConfig = {
        OnCalendar = "*:0/10"; # every 10 minutes
        RandomizedDelaySec = "1m"; # vary by 1 min to give PIA servers some relief
      };
    };

    age.secrets."pia-login.conf".file = ../../secrets/pia-login.age;
  };
}
@@ -10,6 +10,10 @@ in

config.services.tailscale.enable = mkDefault (!config.boot.isContainer);

# Trust the Tailscale interface - access control is handled by Tailscale ACLs.
# Required because nftables (used by Incus) breaks Tailscale's automatic iptables rules.
config.networking.firewall.trustedInterfaces = mkIf cfg.enable [ "tailscale0" ];

# MagicDNS
config.networking.nameservers = mkIf cfg.enable [ "1.1.1.1" "8.8.8.8" ];
config.networking.search = mkIf cfg.enable [ "koi-bebop.ts.net" ];
@@ -1,106 +0,0 @@
{ config, lib, allModules, ... }:

with lib;

let
  cfg = config.vpn-container;
in
{
  options.vpn-container = {
    enable = mkEnableOption "Enable VPN container";

    containerName = mkOption {
      type = types.str;
      default = "vpn";
      description = ''
        Name of the VPN container.
      '';
    };

    mounts = mkOption {
      type = types.listOf types.str;
      default = [ "/var/lib" ];
      example = "/home/example";
      description = ''
        List of mounts on the host to bind to the vpn container.
      '';
    };

    useOpenVPN = mkEnableOption "Uses OpenVPN instead of wireguard for the PIA VPN connection";

    config = mkOption {
      type = types.anything;
      default = { };
      example = ''
        {
          services.nginx.enable = true;
        }
      '';
      description = ''
        NixOS config for the vpn container.
      '';
    };
  };

  config = mkIf cfg.enable {
    pia.wireguard.enable = !cfg.useOpenVPN;
    pia.wireguard.forwardPortForTransmission = !cfg.useOpenVPN;

    containers.${cfg.containerName} = {
      ephemeral = true;
      autoStart = true;

      bindMounts = mkMerge ([{
        "/run/agenix" = {
          hostPath = "/run/agenix";
          isReadOnly = true;
        };
      }] ++ (lists.forEach cfg.mounts (mount:
        {
          "${mount}" = {
            hostPath = mount;
            isReadOnly = false;
          };
        }
      )));

      enableTun = cfg.useOpenVPN;
      privateNetwork = true;
      hostAddress = "172.16.100.1";
      localAddress = "172.16.100.2";

      config = {
        imports = allModules ++ [ cfg.config ];

        # networking.firewall.enable = mkForce false;
        networking.firewall.trustedInterfaces = [
          # completely trust the internal interface to the host
          "eth0"
        ];

        pia.openvpn.enable = cfg.useOpenVPN;
        pia.openvpn.server = "swiss.privacy.network"; # swiss vpn

        # TODO fix so it runs its own resolver again
        # run its own DNS resolver
        networking.useHostResolvConf = false;
        # services.resolved.enable = true;
        networking.nameservers = [ "1.1.1.1" "8.8.8.8" ];
      };
    };

    # load secrets the container needs
    age.secrets = config.containers.${cfg.containerName}.config.age.secrets;

    # forwarding for the vpn container (only for OpenVPN)
    networking.nat.enable = mkIf cfg.useOpenVPN true;
    networking.nat.internalInterfaces = mkIf cfg.useOpenVPN [
      "ve-${cfg.containerName}"
    ];
    networking.ip_forward = mkIf cfg.useOpenVPN true;

    # assumes only one potential interface
    networking.usePredictableInterfaceNames = false;
    networking.nat.externalInterface = "eth0";
  };
}
|
||||
@@ -1,187 +0,0 @@
#!/usr/bin/env bash

set -eEo pipefail

# $@ := ""
set_route_vars() {
    local network_var
    local -a network_vars; read -ra network_vars <<<"${!route_network_*}"
    for network_var in "${network_vars[@]}"; do
        local -i i="${network_var#route_network_}"
        local -a vars=("route_network_$i" "route_netmask_$i" "route_gateway_$i" "route_metric_$i")
        route_networks[i]="${!vars[0]}"
        route_netmasks[i]="${!vars[1]:-255.255.255.255}"
        route_gateways[i]="${!vars[2]:-$route_vpn_gateway}"
        route_metrics[i]="${!vars[3]:-0}"
    done
}
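`set_route_vars` relies on two layers of bash indirection: `${!route_network_*}` expands to the *names* of OpenVPN's numbered environment variables, and `${!name}` then dereferences each name to its value. A minimal standalone sketch of that pattern (the variable values below are made up for illustration, not taken from OpenVPN):

```shell
#!/usr/bin/env bash
# Sketch of the indirect-expansion pattern used by set_route_vars.
# OpenVPN would export these; here they are set by hand.
route_network_1=10.0.0.0
route_netmask_1=255.0.0.0
route_network_2=192.168.50.0

declare -a nets masks
for name in ${!route_network_*}; do         # expands to the matching variable *names*
    i="${name#route_network_}"              # numeric suffix -> array index
    nets[i]="${!name}"                      # indirect: value of the named variable
    mask_var="route_netmask_$i"
    masks[i]="${!mask_var:-255.255.255.255}" # host mask when no netmask var is set
done
echo "${nets[1]}/${masks[1]}"
echo "${nets[2]}/${masks[2]}"
```

Running this prints `10.0.0.0/255.0.0.0` and `192.168.50.0/255.255.255.255`, mirroring how the script fills `route_networks`/`route_netmasks` with defaults for missing vars.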

# Configuration.
readonly prog="$(basename "$0")"
readonly private_nets="127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
declare -a remotes cnf_remote_domains cnf_remote_ips route_networks route_netmasks route_gateways route_metrics
read -ra remotes <<<"$(env|grep -oP '^remote_[0-9]+=.*'|sort -n|cut -d= -f2|tr '\n' '\t')"
read -ra cnf_remote_domains <<<"$(printf '%s\n' "${remotes[@]%%*[0-9]}"|sort -u|tr '\n' '\t')"
read -ra cnf_remote_ips <<<"$(printf '%s\n' "${remotes[@]##*[!0-9.]*}"|sort -u|tr '\n' '\t')"
set_route_vars
read -ra numbered_vars <<<"${!foreign_option_*} ${!proto_*} ${!remote_*} ${!remote_port_*} \
    ${!route_network_*} ${!route_netmask_*} ${!route_gateway_*} ${!route_metric_*}"
readonly numbered_vars "${numbered_vars[@]}" dev ifconfig_local ifconfig_netmask ifconfig_remote \
    route_net_gateway route_vpn_gateway script_type trusted_ip trusted_port untrusted_ip untrusted_port \
    remotes cnf_remote_domains cnf_remote_ips route_networks route_netmasks route_gateways route_metrics
readonly cur_remote_ip="${trusted_ip:-$untrusted_ip}"
readonly cur_port="${trusted_port:-$untrusted_port}"

# $@ := ""
update_hosts() {
    if remote_entries="$(getent -s dns hosts "${cnf_remote_domains[@]}"|grep -v :)"; then
        local -r beg="# VPNFAILSAFE BEGIN" end="# VPNFAILSAFE END"
        {
            sed -e "/^$beg/,/^$end/d" /etc/hosts
            echo -e "$beg\\n$remote_entries\\n$end"
        } >/etc/hosts.vpnfailsafe
        chmod --reference=/etc/hosts /etc/hosts.vpnfailsafe
        mv /etc/hosts.vpnfailsafe /etc/hosts
    fi
}
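`update_hosts` keeps `/etc/hosts` idempotent by deleting any previous block between its BEGIN/END markers before appending a fresh one. A standalone sketch of that marker-splice, using a temp file in place of `/etc/hosts` so it is safe to run:

```shell
#!/usr/bin/env bash
# Sketch of the marker-splice used by update_hosts: drop any previous
# VPNFAILSAFE block, then append a fresh one. Example entries are made up.
beg="# VPNFAILSAFE BEGIN" end="# VPNFAILSAFE END"
hosts=$(mktemp)
printf '%s\n' "127.0.0.1 localhost" "$beg" "0.0.0.0 stale.example" "$end" > "$hosts"
{
    sed -e "/^$beg/,/^$end/d" "$hosts"            # everything except the old block
    printf '%s\n' "$beg" "203.0.113.7 vpn.example" "$end"
} > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

After the splice, `localhost` and the new `vpn.example` entry survive while the stale block is gone; running it twice leaves exactly one block.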

# $@ := "up" | "down"
update_routes() {
    local -a resolved_ips
    read -ra resolved_ips <<<"$(getent -s files hosts "${cnf_remote_domains[@]:-ENOENT}"|cut -d' ' -f1|tr '\n' '\t' || true)"
    local -ar remote_ips=("$cur_remote_ip" "${resolved_ips[@]}" "${cnf_remote_ips[@]}")
    if [[ "$*" == up ]]; then
        for remote_ip in "${remote_ips[@]}"; do
            if [[ -n "$remote_ip" && -z "$(ip route show "$remote_ip")" ]]; then
                ip route add "$remote_ip" via "$route_net_gateway"
            fi
        done
        for net in 0.0.0.0/1 128.0.0.0/1; do
            if [[ -z "$(ip route show "$net")" ]]; then
                ip route add "$net" via "$route_vpn_gateway"
            fi
        done
        for i in $(seq 1 "${#route_networks[@]}"); do
            if [[ -z "$(ip route show "${route_networks[i]}/${route_netmasks[i]}")" ]]; then
                ip route add "${route_networks[i]}/${route_netmasks[i]}" \
                    via "${route_gateways[i]}" metric "${route_metrics[i]}" dev "$dev"
            fi
        done
    elif [[ "$*" == down ]]; then
        for route in "${remote_ips[@]}" 0.0.0.0/1 128.0.0.0/1; do
            if [[ -n "$route" && -n "$(ip route show "$route")" ]]; then
                ip route del "$route"
            fi
        done
        for i in $(seq 1 "${#route_networks[@]}"); do
            if [[ -n "$(ip route show "${route_networks[i]}/${route_netmasks[i]}")" ]]; then
                ip route del "${route_networks[i]}/${route_netmasks[i]}"
            fi
        done
    fi
}

# $@ := ""
update_firewall() {
    # $@ := "INPUT" | "OUTPUT" | "FORWARD"
    insert_chain() {
        if iptables -C "$*" -j "VPNFAILSAFE_$*" 2>/dev/null; then
            iptables -D "$*" -j "VPNFAILSAFE_$*"
            for opt in F X; do
                iptables -"$opt" "VPNFAILSAFE_$*"
            done
        fi
        iptables -N "VPNFAILSAFE_$*"
        iptables -I "$*" -j "VPNFAILSAFE_$*"
    }

    # $@ := "INPUT" | "OUTPUT"
    accept_remotes() {
        case "$@" in
            INPUT) local -r icmp_type=reply io=i sd=s states="";;
            OUTPUT) local -r icmp_type=request io=o sd=d states=NEW,;;
        esac
        local -r public_nic="$(ip route show "$cur_remote_ip"|cut -d' ' -f5)"
        local -ar suf=(-m conntrack --ctstate "$states"RELATED,ESTABLISHED -"$io" "${public_nic:?}" -j ACCEPT)
        icmp_rule() {
            iptables "$1" "$2" -p icmp --icmp-type "echo-$icmp_type" -"$sd" "$3" "${suf[@]/%ACCEPT/RETURN}"
        }
        for ((i=1; i <= ${#remotes[*]}; ++i)); do
            local port="remote_port_$i"
            local proto="proto_$i"
            iptables -A "VPNFAILSAFE_$*" -p "${!proto%-client}" -"$sd" "${remotes[i-1]}" --"$sd"port "${!port}" "${suf[@]}"
            if ! icmp_rule -C "VPNFAILSAFE_$*" "${remotes[i-1]}" 2>/dev/null; then
                icmp_rule -A "VPNFAILSAFE_$*" "${remotes[i-1]}"
            fi
        done
        if ! iptables -S|grep -q "^-A VPNFAILSAFE_$* .*-$sd $cur_remote_ip/32 .*-j ACCEPT$"; then
            for p in tcp udp; do
                iptables -A "VPNFAILSAFE_$*" -p "$p" -"$sd" "$cur_remote_ip" --"$sd"port "${cur_port}" "${suf[@]}"
            done
            icmp_rule -A "VPNFAILSAFE_$*" "$cur_remote_ip"
        fi
    }

    # $@ := "OUTPUT" | "FORWARD"
    reject_dns() {
        for proto in udp tcp; do
            iptables -A "VPNFAILSAFE_$*" -p "$proto" --dport 53 ! -o "$dev" -j REJECT
        done
    }

    # $@ := "INPUT" | "OUTPUT" | "FORWARD"
    pass_private_nets() {
        case "$@" in
            INPUT) local -r io=i sd=s;;&
            OUTPUT|FORWARD) local -r io=o sd=d;;&
            INPUT) local -r vpn="${ifconfig_remote:-$ifconfig_local}/${ifconfig_netmask:-32}"
                iptables -A "VPNFAILSAFE_$*" -"$sd" "$vpn" -"$io" "$dev" -j RETURN
                for i in $(seq 1 "${#route_networks[@]}"); do
                    iptables -A "VPNFAILSAFE_$*" -"$sd" "${route_networks[i]}/${route_netmasks[i]}" -"$io" "$dev" -j RETURN
                done;;&
            *) iptables -A "VPNFAILSAFE_$*" -"$sd" "$private_nets" ! -"$io" "$dev" -j RETURN;;&
            INPUT) iptables -A "VPNFAILSAFE_$*" -s "$private_nets" -i "$dev" -j DROP;;&
            *) for iface in "$dev" lo+; do
                    iptables -A "VPNFAILSAFE_$*" -"$io" "$iface" -j RETURN
                done;;
        esac
    }

    # $@ := "INPUT" | "OUTPUT" | "FORWARD"
    drop_other() {
        iptables -A "VPNFAILSAFE_$*" -j DROP
    }

    for chain in INPUT OUTPUT FORWARD; do
        insert_chain "$chain"
        [[ $chain == FORWARD ]] || accept_remotes "$chain"
        [[ $chain == INPUT ]] || reject_dns "$chain"
        pass_private_nets "$chain"
        drop_other "$chain"
    done
}

# $@ := ""
cleanup() {
    update_resolv down
    update_routes down
}
trap cleanup INT TERM

# $@ := line_number exit_code
err_msg() {
    echo "$0:$1: \`$(sed -n "$1,+0{s/^\\s*//;p}" "$0")' returned $2" >&2
    cleanup
}
trap 'err_msg "$LINENO" "$?"' ERR

# $@ := ""
main() {
    case "${script_type:-down}" in
        up) for f in hosts routes firewall; do "update_$f" up; done;;
        down) update_routes down
            update_resolv down;;
    esac
}

main
@@ -4,15 +4,15 @@ let
  builderUserName = "nix-builder";

  builderRole = "nix-builder";
  builders = config.machines.withRole.${builderRole};
  thisMachineIsABuilder = config.thisMachine.hasRole.${builderRole};
  builders = config.machines.withRole.${builderRole} or [];
  thisMachineIsABuilder = config.thisMachine.hasRole.${builderRole} or false;

  # builders don't include themselves as a remote builder
  otherBuilders = lib.filter (hostname: hostname != config.networking.hostName) builders;
in
lib.mkMerge [
  # configure builder
  (lib.mkIf thisMachineIsABuilder {
  (lib.mkIf (thisMachineIsABuilder && !config.boot.isContainer) {
    users.users.${builderUserName} = {
      description = "Distributed Nix Build User";
      group = builderUserName;
27	common/ntfy/default.nix	Normal file
@@ -0,0 +1,27 @@
{ config, lib, ... }:

{
  imports = [
    ./service-failure.nix
    ./ssh-login.nix
    ./zfs.nix
  ];

  options.ntfy-alerts = {
    serverUrl = lib.mkOption {
      type = lib.types.str;
      default = "https://ntfy.neet.dev";
      description = "Base URL of the ntfy server.";
    };

    curlExtraArgs = lib.mkOption {
      type = lib.types.str;
      default = "";
      description = "Extra arguments to pass to curl (e.g. --proxy http://host:port).";
    };
  };

  config = lib.mkIf config.thisMachine.hasRole."ntfy" {
    age.secrets.ntfy-token.file = ../../secrets/ntfy-token.age;
  };
}
48	common/ntfy/service-failure.nix	Normal file
@@ -0,0 +1,48 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.ntfy-alerts;
in
{
  config = lib.mkIf config.thisMachine.hasRole."ntfy" {
    systemd.services."ntfy-failure@" = {
      description = "Send ntfy alert for failed unit %i";
      wants = [ "network-online.target" ];
      after = [ "network-online.target" ];
      serviceConfig = {
        Type = "oneshot";
        EnvironmentFile = "/run/agenix/ntfy-token";
        ExecStart = "${pkgs.writeShellScript "ntfy-failure-notify" ''
          unit="$1"
          logfile=$(mktemp)
          trap 'rm -f "$logfile"' EXIT
          ${pkgs.systemd}/bin/journalctl -u "$unit" -n 50 --no-pager -o short > "$logfile" 2>/dev/null \
            || echo "(no logs available)" > "$logfile"
          ${lib.getExe pkgs.curl} \
            -T "$logfile" \
            --fail --silent --show-error \
            --max-time 30 --retry 3 \
            ${cfg.curlExtraArgs} \
            -H "Authorization: Bearer $NTFY_TOKEN" \
            -H "Title: Service failure on ${config.networking.hostName}" \
            -H "Priority: high" \
            -H "Tags: rotating_light" \
            -H "Message: Unit $unit failed at $(date +%c)" \
            -H "Filename: $unit.log" \
            "${cfg.serverUrl}/service-failures"
        ''} %i";
      };
    };

    # Apply OnFailure to all services via a systemd drop-in
    systemd.packages = [
      (pkgs.runCommand "ntfy-on-failure-dropin" { } ''
        mkdir -p $out/lib/systemd/system/service.d
        cat > $out/lib/systemd/system/service.d/ntfy-on-failure.conf <<'EOF'
        [Unit]
        OnFailure=ntfy-failure@%p.service
        EOF
      '')
    ];
  };
}
36	common/ntfy/ssh-login.nix	Normal file
@@ -0,0 +1,36 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.ntfy-alerts;

  notifyScript = pkgs.writeShellScript "ssh-login-notify" ''
    # Only notify on session open, not close
    [ "$PAM_TYPE" = "open_session" ] || exit 0

    . /run/agenix/ntfy-token

    # Send notification in background so login isn't delayed
    ${lib.getExe pkgs.curl} \
      --fail --silent --show-error \
      --max-time 10 --retry 1 \
      ${cfg.curlExtraArgs} \
      -H "Authorization: Bearer $NTFY_TOKEN" \
      -H "Title: SSH login on ${config.networking.hostName}" \
      -H "Tags: key" \
      -d "$PAM_USER from $PAM_RHOST at $(date +%c)" \
      "${cfg.serverUrl}/ssh-logins" &
  '';
in
{
  config = lib.mkIf config.thisMachine.hasRole."ntfy" {
    security.pam.services.sshd.rules.session.ntfy-login = {
      order = 99999;
      control = "optional";
      modulePath = "${pkgs.pam}/lib/security/pam_exec.so";
      args = [
        "quiet"
        (toString notifyScript)
      ];
    };
  };
}
87	common/ntfy/zfs.nix	Normal file
@@ -0,0 +1,87 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.ntfy-alerts;
  hasZfs = config.boot.supportedFilesystems.zfs or false;
  hasNtfy = config.thisMachine.hasRole."ntfy";

  checkScript = pkgs.writeShellScript "zfs-health-check" ''
    PATH="${lib.makeBinPath [ pkgs.zfs pkgs.coreutils pkgs.gawk pkgs.curl ]}"

    unhealthy=""

    # Check pool health status
    while IFS=$'\t' read -r pool state; do
      if [ "$state" != "ONLINE" ]; then
        unhealthy="$unhealthy"$'\n'"Pool '$pool' is $state"
      fi
    done < <(zpool list -H -o name,health)

    # Check for errors (read, write, checksum) on any vdev
    while IFS=$'\t' read -r pool errors; do
      if [ "$errors" != "No known data errors" ] && [ -n "$errors" ]; then
        unhealthy="$unhealthy"$'\n'"Pool '$pool' has errors: $errors"
      fi
    done < <(zpool status -x 2>/dev/null | awk '
      /pool:/ { pool=$2 }
      /errors:/ { sub(/^[[:space:]]*errors: /, ""); print pool "\t" $0 }
    ')

    # Check for any drives with non-zero error counts
    drive_errors=$(zpool status 2>/dev/null | awk '
      /DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED/ && !/pool:/ && !/state:/ {
        print "  " $0
      }
      /[0-9]+[[:space:]]+[0-9]+[[:space:]]+[0-9]+/ {
        if ($3 > 0 || $4 > 0 || $5 > 0) {
          print "  " $1 " (read:" $3 " write:" $4 " cksum:" $5 ")"
        }
      }
    ')
    if [ -n "$drive_errors" ]; then
      unhealthy="$unhealthy"$'\n'"Device errors:"$'\n'"$drive_errors"
    fi

    if [ -n "$unhealthy" ]; then
      message="ZFS health check failed on ${config.networking.hostName}:$unhealthy"

      curl \
        --fail --silent --show-error \
        --max-time 30 --retry 3 \
        -H "Authorization: Bearer $NTFY_TOKEN" \
        -H "Title: ZFS issue on ${config.networking.hostName}" \
        -H "Priority: urgent" \
        -H "Tags: warning" \
        -d "$message" \
        "${cfg.serverUrl}/service-failures"

      echo "$message" >&2
    fi

    echo "All ZFS pools healthy"
  '';
in
{
  config = lib.mkIf (hasZfs && hasNtfy) {
    systemd.services.zfs-health-check = {
      description = "Check ZFS pool health and alert on issues";
      wants = [ "network-online.target" ];
      after = [ "network-online.target" "zfs.target" ];
      serviceConfig = {
        Type = "oneshot";
        EnvironmentFile = "/run/agenix/ntfy-token";
        ExecStart = checkScript;
      };
    };

    systemd.timers.zfs-health-check = {
      description = "Periodic ZFS health check";
      wantedBy = [ "timers.target" ];
      timerConfig = {
        OnCalendar = "daily";
        Persistent = true;
        RandomizedDelaySec = "1h";
      };
    };
  };
}
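The drive-error pass above scrapes per-device read/write/checksum counters out of `zpool status` columns with awk. A standalone sketch of that awk pass, fed a hand-written sample that stands in for real `zpool status` output:

```shell
#!/usr/bin/env bash
# Sketch of the awk pass the ZFS health check runs over `zpool status`.
# The sample text below is made up; a real system would pipe zpool status in.
sample='  pool: tank
 state: ONLINE
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     2     0
        sda         ONLINE       0     0     0'
drive_errors=$(printf '%s\n' "$sample" | awk '
  /[0-9]+[[:space:]]+[0-9]+[[:space:]]+[0-9]+/ {
    if ($3 > 0 || $4 > 0 || $5 > 0)
      print "  " $1 " (read:" $3 " write:" $4 " cksum:" $5 ")"
  }')
echo "$drive_errors"
```

Only the `tank` row trips the filter (two write errors), so the output is `  tank (read:0 write:2 cksum:0)`; header rows never match because they contain no digit triple.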
@@ -27,6 +27,7 @@ in
    ../shell.nix
    hostConfig.inputs.home-manager.nixosModules.home-manager
    hostConfig.inputs.nix-index-database.nixosModules.default
    hostConfig.inputs.agenix.nixosModules.default
  ];

  nixpkgs.overlays = [
@@ -116,6 +117,13 @@ in
  nix.settings.experimental-features = [ "nix-command" "flakes" ];
  nix.settings.trusted-users = [ "googlebot" ];

  # Binary cache configuration (inherited from host's common/binary-cache.nix)
  nix.settings.substituters = hostConfig.nix.settings.substituters;
  nix.settings.trusted-public-keys = hostConfig.nix.settings.trusted-public-keys;
  nix.settings.fallback = true;
  nix.settings.netrc-file = config.age.secrets.attic-netrc.path;
  age.secrets.attic-netrc.file = ../../secrets/attic-netrc.age;

  # Make nixpkgs available in NIX_PATH and registry (like the NixOS ISO)
  # This allows `nix-shell -p`, `nix repl '<nixpkgs>'`, etc. to work
  nix.nixPath = [ "nixpkgs=${hostConfig.inputs.nixpkgs}" ];
@@ -133,8 +133,15 @@ let
  };
in
{
  config = mkIf (cfg.enable && vmWorkspaces != { }) {
    # Convert VM workspace configs to microvm.nix format
    microvm.vms = mapAttrs mkVmConfig vmWorkspaces;
  };
  config = mkMerge [
    (mkIf (cfg.enable && vmWorkspaces != { }) {
      # Convert VM workspace configs to microvm.nix format
      microvm.vms = mapAttrs mkVmConfig vmWorkspaces;
    })

    # microvm.nixosModules.host enables KSM, but /sys is read-only in containers
    (mkIf config.boot.isContainer {
      hardware.ksm.enable = false;
    })
  ];
}
62	common/server/atticd.nix	Normal file
@@ -0,0 +1,62 @@
{ config, lib, ... }:

{
  config = lib.mkIf (config.thisMachine.hasRole."binary-cache" && !config.boot.isContainer) {
    services.atticd = {
      enable = true;
      environmentFile = config.age.secrets.atticd-credentials.path;
      settings = {
        listen = "[::]:28338";
        database.url = "postgresql:///atticd?host=/run/postgresql";
        require-proof-of-possession = false;

        # Disable chunking: the dedup savings don't justify the CPU/IO
        # overhead for local storage, especially on ZFS which already
        # does block-level compression.
        chunking = {
          nar-size-threshold = 0;
          min-size = 16 * 1024;
          avg-size = 64 * 1024;
          max-size = 256 * 1024;
        };

        # Let ZFS handle compression instead of double-compressing.
        compression.type = "none";

        garbage-collection.default-retention-period = "6 months";
      };
    };

    # PostgreSQL for atticd
    services.postgresql = {
      enable = true;
      ensureDatabases = [ "atticd" ];
      ensureUsers = [{
        name = "atticd";
        ensureDBOwnership = true;
      }];
    };

    # Use a static user so the ZFS mountpoint at /var/lib/atticd works
    # (DynamicUser conflicts with ZFS mountpoints)
    users.users.atticd = {
      isSystemUser = true;
      group = "atticd";
      home = "/var/lib/atticd";
    };
    users.groups.atticd = { };

    systemd.services.atticd = {
      after = [ "postgresql.service" ];
      requires = [ "postgresql.service" ];
      partOf = [ "postgresql.service" ];
      serviceConfig = {
        DynamicUser = lib.mkForce false;
        User = "atticd";
        Group = "atticd";
      };
    };

    age.secrets.atticd-credentials.file = ../../secrets/atticd-credentials.age;
  };
}
@@ -12,8 +12,11 @@
    ./mailserver.nix
    ./nextcloud.nix
    ./gitea-actions-runner.nix
    ./atticd.nix
    ./librechat.nix
    ./actualbudget.nix
    ./unifi.nix
    ./ntfy.nix
    ./gatus.nix
  ];
}
366	common/server/gatus.nix	Normal file
@@ -0,0 +1,366 @@
{ lib, config, ... }:

let
  cfg = config.services.gatus;
  port = 31103;
in
{
  options.services.gatus = {
    hostname = lib.mkOption {
      type = lib.types.str;
      example = "status.example.com";
    };
  };

  config = lib.mkIf cfg.enable {
    services.gatus = {
      environmentFile = "/run/agenix/ntfy-token";
      settings = {
        storage = {
          type = "sqlite";
          path = "/var/lib/gatus/data.db";
        };

        web = {
          address = "127.0.0.1";
          port = port;
        };

        alerting.ntfy = {
          url = "https://ntfy.neet.dev";
          topic = "service-failures";
          priority = 4;
          default-alert = {
            enabled = true;
            failure-threshold = 3;
            success-threshold = 2;
            send-on-resolved = true;
          };
          token = "$NTFY_TOKEN";
        };

        endpoints = [
          {
            name = "Gitea";
            group = "services";
            url = "https://git.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "The Lounge";
            group = "services";
            url = "https://irc.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "ntfy";
            group = "services";
            url = "https://ntfy.neet.dev/v1/health";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Librechat";
            group = "services";
            url = "https://chat.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Owncast";
            group = "services";
            url = "https://live.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Nextcloud";
            group = "services";
            url = "https://neet.cloud";
            interval = "5m";
            conditions = [
              "[STATUS] == any(200, 302)"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Element Web";
            group = "services";
            url = "https://chat.neet.space";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Mumble";
            group = "services";
            url = "tcp://voice.neet.space:23563";
            interval = "5m";
            conditions = [
              "[CONNECTED] == true"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Navidrome";
            group = "services";
            url = "https://navidrome.neet.cloud";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Roundcube";
            group = "services";
            url = "https://mail.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Collabora Online";
            group = "services";
            url = "https://collabora.runyan.org";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Jellyfin";
            group = "s0";
            url = "https://jellyfin.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Sonarr";
            group = "s0";
            url = "https://sonarr.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Radarr";
            group = "s0";
            url = "https://radarr.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Lidarr";
            group = "s0";
            url = "https://lidarr.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Prowlarr";
            group = "s0";
            url = "https://prowlarr.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Bazarr";
            group = "s0";
            url = "https://bazarr.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Transmission";
            group = "s0";
            url = "https://transmission.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Home Assistant";
            group = "s0";
            url = "https://ha.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "ESPHome";
            group = "s0";
            url = "https://esphome.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Zigbee2MQTT";
            group = "s0";
            url = "https://zigbee.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Frigate";
            group = "s0";
            url = "https://frigate.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Valetudo";
            group = "s0";
            url = "https://vacuum.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Sandman";
            group = "s0";
            url = "https://sandman.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Vikunja";
            group = "s0";
            url = "https://todo.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Actual Budget";
            group = "s0";
            url = "https://budget.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Linkwarden";
            group = "s0";
            url = "https://linkwarden.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Memos";
            group = "s0";
            url = "https://memos.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Outline";
            group = "s0";
            url = "https://outline.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "LanguageTool";
            group = "s0";
            url = "https://languagetool.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
          {
            name = "Unifi";
            group = "s0";
            url = "https://unifi.s0.neet.dev";
            interval = "5m";
            conditions = [
              "[STATUS] == 200"
            ];
            alerts = [{ type = "ntfy"; }];
          }
        ];
      };
    };
    services.nginx.enable = true;
    services.nginx.virtualHosts.${cfg.hostname} = {
      enableACME = true;
      forceSSL = true;
      locations."/" = {
        proxyPass = "http://127.0.0.1:${toString port}";
        proxyWebsockets = true;
      };
    };
  };
}
@@ -1,132 +1,85 @@
{ config, pkgs, lib, ... }:
{ config, lib, ... }:

# Gitea Actions Runner. Starts a 'host' runner that runs directly on the host inside of a NixOS container.
# This is useful for providing a real Nix/OS builder to Gitea.
# Warning: NixOS containers are not secure. For example, the container shares the /nix/store.
# Therefore, this should not be used to run untrusted code.
# To enable, assign a machine the 'gitea-actions-runner' system role.

# TODO: skipping running inside of a NixOS container for now because of issues getting docker/podman running
# Gitea Actions Runner inside a NixOS container.
# The container shares the host's /nix/store (read-only) and nix-daemon socket,
# so builds go through the host daemon and outputs land in the host store.
# Warning: NixOS containers are not fully secure — do not run untrusted code.
# To enable, assign a machine the 'gitea-actions-runner' system role.

let
  thisMachineIsARunner = config.thisMachine.hasRole."gitea-actions-runner";
  containerName = "gitea-runner";
  giteaRunnerUid = 991;
  giteaRunnerGid = 989;
in
{
  config = lib.mkIf (thisMachineIsARunner && !config.boot.isContainer) {
    # containers.${containerName} = {
    #   ephemeral = true;
    #   autoStart = true;

    #   # for podman
    #   enableTun = true;
    containers.${containerName} = {
      autoStart = true;
      ephemeral = true;

      # # privateNetwork = true;
      # # hostAddress = "172.16.101.1";
      # # localAddress = "172.16.101.2";
      bindMounts = {
        "/run/agenix/gitea-actions-runner-token" = {
          hostPath = "/run/agenix/gitea-actions-runner-token";
          isReadOnly = true;
        };
        "/var/lib/gitea-runner" = {
          hostPath = "/var/lib/gitea-runner";
          isReadOnly = false;
        };
      };

      # bindMounts =
      #   {
      #     "/run/agenix/gitea-actions-runner-token" = {
      #       hostPath = "/run/agenix/gitea-actions-runner-token";
      #       isReadOnly = true;
      #     };
      #     "/var/lib/gitea-runner" = {
      #       hostPath = "/var/lib/gitea-runner";
      #       isReadOnly = false;
      #     };
      #   };
      config = { config, lib, pkgs, ... }: {
        system.stateVersion = "25.11";

        # extraFlags = [
        #   # Allow podman
        #   ''--system-call-filter=thisystemcalldoesnotexistforsure''
        # ];
        services.gitea-actions-runner.instances.inst = {
          enable = true;
          name = containerName;
          url = "https://git.neet.dev/";
          tokenFile = "/run/agenix/gitea-actions-runner-token";
          labels = [ "nixos:host" ];
        };

        # additionalCapabilities = [
        #   "CAP_SYS_ADMIN"
        # ];
        # Disable dynamic user so runner state persists via bind mount
        assertions = [{
          assertion = config.systemd.services.gitea-runner-inst.enable;
          message = "Expected systemd service 'gitea-runner-inst' is not enabled — the gitea-actions-runner module may have changed its naming scheme.";
        }];
        systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
        users.users.gitea-runner = {
          uid = giteaRunnerUid;
          home = "/var/lib/gitea-runner";
          group = "gitea-runner";
          isSystemUser = true;
          createHome = true;
        };
        users.groups.gitea-runner.gid = giteaRunnerGid;

        # config = {
        #   imports = allModules;
        nix.settings.experimental-features = [ "nix-command" "flakes" ];

        #   # speeds up evaluation
        #   nixpkgs.pkgs = pkgs;

        #   networking.hostName = lib.mkForce containerName;

        #   # don't use remote builders
        #   nix.distributedBuilds = lib.mkForce false;

        #   environment.systemPackages = with pkgs; [
        #     git
        #     # Gitea Actions rely heavily on node. Include it because it would be installed anyway.
        #     nodejs
        #   ];

        #   services.gitea-actions-runner.instances.inst = {
        #     enable = true;
        #     name = config.networking.hostName;
        #     url = "https://git.neet.dev/";
        #     tokenFile = "/run/agenix/gitea-actions-runner-token";
        #     labels = [
        #       "ubuntu-latest:docker://node:18-bullseye"
        #       "nixos:host"
        #     ];
        #   };

        #   # To allow building on the host, must override the service's config so it doesn't use a dynamic user
        #   systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
        #   users.users.gitea-runner = {
        #     home = "/var/lib/gitea-runner";
        #     group = "gitea-runner";
        #     isSystemUser = true;
        #     createHome = true;
        #   };
        #   users.groups.gitea-runner = { };

        #   virtualisation.podman.enable = true;
        #   boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
        # };
        # };

        # networking.nat.enable = true;
        # networking.nat.internalInterfaces = [
        #   "ve-${containerName}"
        # ];
        # networking.ip_forward = true;

        # don't use remote builders
        nix.distributedBuilds = lib.mkForce false;

        services.gitea-actions-runner.instances.inst = {
          enable = true;
          name = config.networking.hostName;
          url = "https://git.neet.dev/";
          tokenFile = "/run/agenix/gitea-actions-runner-token";
          labels = [
            "ubuntu-latest:docker://node:18-bullseye"
            "nixos:host"
          ];
          environment.systemPackages = with pkgs; [
            git
            nodejs
            jq
            attic-client
          ];
        };
      };

      environment.systemPackages = with pkgs; [
        git
        # Gitea Actions rely heavily on node. Include it because it would be installed anyway.
        nodejs
      ];
      # Needs to be outside of the container because the container uses the host's nix-daemon
      nix.settings.trusted-users = [ "gitea-runner" ];

      # To allow building on the host, must override the service's config so it doesn't use a dynamic user
      systemd.services.gitea-runner-inst.serviceConfig.DynamicUser = lib.mkForce false;
      # Matching user on host — the container's gitea-runner UID must be
|
||||
# recognized by the host's nix-daemon as trusted (shared UID namespace)
|
||||
users.users.gitea-runner = {
|
||||
uid = giteaRunnerUid;
|
||||
home = "/var/lib/gitea-runner";
|
||||
group = "gitea-runner";
|
||||
isSystemUser = true;
|
||||
createHome = true;
|
||||
};
|
||||
users.groups.gitea-runner = { };
|
||||
|
||||
virtualisation.podman.enable = true;
|
||||
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
|
||||
users.groups.gitea-runner.gid = giteaRunnerGid;
|
||||
|
||||
age.secrets.gitea-actions-runner-token.file = ../../secrets/gitea-actions-runner-token.age;
|
||||
};
|
||||
|
||||
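The trusted-user wiring above depends on the container and the host agreeing on the runner's numeric uid, since both sides share one nix-daemon. A minimal sketch of that pattern; the uid value here is illustrative (the real value is bound to `giteaRunnerUid` elsewhere in this config):

```nix
let
  giteaRunnerUid = 989; # illustrative; must be the same value on host and container
in
{
  # Pin the same fixed uid on both sides of the bind mount / shared nix-daemon,
  # instead of letting systemd allocate a DynamicUser.
  users.users.gitea-runner = {
    uid = giteaRunnerUid;
    group = "gitea-runner";
    isSystemUser = true;
    home = "/var/lib/gitea-runner";
    createHome = true;
  };
  users.groups.gitea-runner = { };

  # Host side only: the host's nix-daemon must trust that uid.
  nix.settings.trusted-users = [ "gitea-runner" ];
}
```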
@@ -11,6 +11,7 @@ let

# Hardcoded public ip of ponyo... I wish I didn't need this...
public_ip_address = "147.135.114.130";

in
{
config = lib.mkIf cfg.enable {
@@ -23,7 +24,6 @@ in
config.adminpassFile = "/run/agenix/nextcloud-pw";

# Apps
autoUpdateApps.enable = true;
extraAppsEnable = true;
extraApps = with config.services.nextcloud.package.packages.apps; {
# Want
@@ -39,8 +39,6 @@ in
# inherit bookmarks cookbook deck memories maps music news notes phonetrack polls forms;
};

# Allows installing Apps from the UI (might remove later)
appstoreEnable = true;
};
age.secrets.nextcloud-pw = {
file = ../../secrets/nextcloud-pw.age;

38
common/server/ntfy.nix
Normal file
@@ -0,0 +1,38 @@
{ lib, config, ... }:

let
cfg = config.services.ntfy-sh;
in
{
options.services.ntfy-sh = {
hostname = lib.mkOption {
type = lib.types.str;
example = "ntfy.example.com";
};
};

config = lib.mkIf cfg.enable {
services.ntfy-sh.settings = {
base-url = "https://${cfg.hostname}";
listen-http = "127.0.0.1:2586";
auth-default-access = "deny-all";
behind-proxy = true;
enable-login = true;
};

# backups
backup.group."ntfy".paths = [
"/var/lib/ntfy-sh"
];

services.nginx.enable = true;
services.nginx.virtualHosts.${cfg.hostname} = {
enableACME = true;
forceSSL = true;
locations."/" = {
proxyPass = "http://127.0.0.1:2586";
proxyWebsockets = true;
};
};
};
}
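With the module above in place, a host opts in with just two settings, as the host configs later in this diff do; the hostname below is an example:

```nix
{
  # Enables ntfy-sh plus the nginx/ACME reverse proxy from the module above.
  services.ntfy-sh.enable = true;
  services.ntfy-sh.hostname = "ntfy.example.com";
}
```

Note that with `auth-default-access = "deny-all"`, clients can neither publish nor subscribe until users and topic grants are provisioned (typically via the `ntfy` CLI on the server).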
@@ -26,6 +26,16 @@
"printcap name" = "cups";

"hide files" = "/.nobackup/.DS_Store/._.DS_Store/";

# Samba 4.22+ enables SMB3 directory leases by default, allowing clients
# to cache directory listings locally. When files are created locally on
# the server (bypassing Samba), these cached listings go stale because
# kernel oplocks — the mechanism that would break leases on local
# changes — is incompatible with smb2 leases. Enabling kernel oplocks
# would fix this but forces Samba to disable smb2 leases, durable
# handles, and level2 oplocks, losing handle caching performance.
# https://wiki.samba.org/index.php/Editing_files_locally_on_server:_interoperability
"smb3 directory leases" = "no";
};
public = {
path = "/data/samba/Public";
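For reference, the alternative rejected in the comment above would be the inverse knob; a sketch only, assuming the same `services.samba.settings` option layout used in this hunk:

```nix
{
  # Breaks leases when files change locally on the server, but forces Samba
  # to disable smb2 leases, durable handles, and level2 oplocks.
  services.samba.settings.global."kernel oplocks" = "yes";
}
```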
62
flake.lock
generated
@@ -14,11 +14,11 @@
]
},
"locked": {
"lastModified": 1762618334,
"narHash": "sha256-wyT7Pl6tMFbFrs8Lk/TlEs81N6L+VSybPfiIgzU8lbQ=",
"lastModified": 1770165109,
"narHash": "sha256-9VnK6Oqai65puVJ4WYtCTvlJeXxMzAp/69HhQuTdl/I=",
"owner": "ryantm",
"repo": "agenix",
"rev": "fcdea223397448d35d9b31f798479227e80183f6",
"rev": "b027ee29d959fda4b60b57566d64c98a202e0feb",
"type": "github"
},
"original": {
@@ -53,11 +53,11 @@
]
},
"locked": {
"lastModified": 1770491193,
"narHash": "sha256-zdnWeXmPZT8BpBo52s4oansT1Rq0SNzksXKpEcMc5lE=",
"lastModified": 1771632347,
"narHash": "sha256-kNm0YX9RUwf7GZaWQu2F71ccm4OUMz0xFkXn6mGPfps=",
"owner": "sadjow",
"repo": "claude-code-nix",
"rev": "f68a2683e812d1e4f9a022ff3e0206d46347d019",
"rev": "ec90f84b2ea21f6d2272e00d1becbc13030d1895",
"type": "github"
},
"original": {
@@ -124,11 +124,11 @@
]
},
"locked": {
"lastModified": 1766051518,
"narHash": "sha256-znKOwPXQnt3o7lDb3hdf19oDo0BLP4MfBOYiWkEHoik=",
"lastModified": 1770019181,
"narHash": "sha256-hwsYgDnby50JNVpTRYlF3UR/Rrpt01OrxVuryF40CFY=",
"owner": "serokell",
"repo": "deploy-rs",
"rev": "d5eff7f948535b9c723d60cd8239f8f11ddc90fa",
"rev": "77c906c0ba56aabdbc72041bf9111b565cdd6171",
"type": "github"
},
"original": {
@@ -186,11 +186,11 @@
]
},
"locked": {
"lastModified": 1763988335,
"narHash": "sha256-QlcnByMc8KBjpU37rbq5iP7Cp97HvjRP0ucfdh+M4Qc=",
"lastModified": 1769939035,
"narHash": "sha256-Fok2AmefgVA0+eprw2NDwqKkPGEI5wvR+twiZagBvrg=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "50b9238891e388c9fdc6a5c49e49c42533a1b5ce",
"rev": "a8ca480175326551d6c4121498316261cbb5b260",
"type": "github"
},
"original": {
@@ -228,11 +228,11 @@
]
},
"locked": {
"lastModified": 1768068402,
"narHash": "sha256-bAXnnJZKJiF7Xr6eNW6+PhBf1lg2P1aFUO9+xgWkXfA=",
"lastModified": 1771756436,
"narHash": "sha256-Tl2I0YXdhSTufGqAaD1ySh8x+cvVsEI1mJyJg12lxhI=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "8bc5473b6bc2b6e1529a9c4040411e1199c43b4c",
"rev": "5bd3589390b431a63072868a90c0f24771ff4cbb",
"type": "github"
},
"original": {
@@ -250,11 +250,11 @@
"spectrum": "spectrum"
},
"locked": {
"lastModified": 1770310890,
"narHash": "sha256-lyWAs4XKg3kLYaf4gm5qc5WJrDkYy3/qeV5G733fJww=",
"lastModified": 1771802632,
"narHash": "sha256-UAH8YfrHRvXAMeFxUzJ4h4B1loz1K1wiNUNI8KiPqOg=",
"owner": "astro",
"repo": "microvm.nix",
"rev": "68c9f9c6ca91841f04f726a298c385411b7bfcd5",
"rev": "b67e3d80df3ec35bdfd3a00ad64ee437ef4fcded",
"type": "github"
},
"original": {
@@ -270,11 +270,11 @@
]
},
"locked": {
"lastModified": 1765267181,
"narHash": "sha256-d3NBA9zEtBu2JFMnTBqWj7Tmi7R5OikoU2ycrdhQEws=",
"lastModified": 1771734689,
"narHash": "sha256-/phvMgr1yutyAMjKnZlxkVplzxHiz60i4rc+gKzpwhg=",
"owner": "Mic92",
"repo": "nix-index-database",
"rev": "82befcf7dc77c909b0f2a09f5da910ec95c5b78f",
"rev": "8f590b832326ab9699444f3a48240595954a4b10",
"type": "github"
},
"original": {
@@ -285,11 +285,11 @@
},
"nixos-hardware": {
"locked": {
"lastModified": 1767185284,
"narHash": "sha256-ljDBUDpD1Cg5n3mJI81Hz5qeZAwCGxon4kQW3Ho3+6Q=",
"lastModified": 1771423359,
"narHash": "sha256-yRKJ7gpVmXbX2ZcA8nFi6CMPkJXZGjie2unsiMzj3Ig=",
"owner": "NixOS",
"repo": "nixos-hardware",
"rev": "40b1a28dce561bea34858287fbb23052c3ee63fe",
"rev": "740a22363033e9f1bb6270fbfb5a9574067af15b",
"type": "github"
},
"original": {
@@ -301,16 +301,16 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1768250893,
"narHash": "sha256-fWNJYFx0QvnlGlcw54EoOYs/wv2icINHUz0FVdh9RIo=",
"lastModified": 1771369470,
"narHash": "sha256-0NBlEBKkN3lufyvFegY4TYv5mCNHbi5OmBDrzihbBMQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "3971af1a8fc3646b1d554cb1269b26c84539c22e",
"rev": "0182a361324364ae3f436a63005877674cf45efb",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "master",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
@@ -344,11 +344,11 @@
]
},
"locked": {
"lastModified": 1766321686,
"narHash": "sha256-icOWbnD977HXhveirqA10zoqvErczVs3NKx8Bj+ikHY=",
"lastModified": 1770659507,
"narHash": "sha256-RVZno9CypFN3eHxfULKN1K7mb/Cq0HkznnWqnshxpWY=",
"owner": "simple-nixos-mailserver",
"repo": "nixos-mailserver",
"rev": "7d433bf89882f61621f95082e90a4ab91eb0bdd3",
"rev": "781e833633ebc0873d251772a74e4400a73f5d78",
"type": "gitlab"
},
"original": {

10
flake.nix
@@ -1,7 +1,7 @@
{
inputs = {
# nixpkgs
nixpkgs.url = "github:NixOS/nixpkgs/master";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

# Common Utils Among flake inputs
systems.url = "github:nix-systems/default";
@@ -175,10 +175,10 @@
kexec = (mkEphemeral "x86_64-linux").config.system.build.images.kexec;
iso = (mkEphemeral "x86_64-linux").config.system.build.images.iso;
};
"aarch64-linux" = {
kexec = (mkEphemeral "aarch64-linux").config.system.build.images.kexec;
iso = (mkEphemeral "aarch64-linux").config.system.build.images.iso;
};
# "aarch64-linux" = {
#   kexec = (mkEphemeral "aarch64-linux").config.system.build.images.kexec;
#   iso = (mkEphemeral "aarch64-linux").config.system.build.images.iso;
# };
};

overlays.default = import ./overlays { inherit inputs; };
@@ -61,7 +61,8 @@ in

programs.vscode = {
enable = thisMachineIsPersonal;
package = pkgs.vscodium;
# Must use fhs version for vscode-lldb
package = pkgs.vscodium-fhs;
profiles.default = {
userSettings = {
editor.formatOnSave = true;
@@ -78,6 +79,15 @@ in
lsp.serverPort = 6005; # port needs to match Godot configuration
editorPath.godot4 = "godot-mono";
};
rust-analyzer = {
restartServerOnConfigChange = true;
testExplorer = true;
server.path = "rust-analyzer"; # Use the rust-analyzer from PATH (which is set by nixEnvSelector from the project's flake)
};
nixEnvSelector = {
useFlakes = true; # This hasn't ever worked for me and I have to use shell.nix... but maybe someday
suggestion = false; # Stop the really annoying nagging
};
};
extensions = with pkgs.vscode-extensions; [
bbenoist.nix # nix syntax support
@@ -13,6 +13,18 @@
# Upstream interface for sandbox networking (NAT)
networking.sandbox.upstreamInterface = lib.mkDefault "enp191s0";

# Enable sandboxed workspace
sandboxed-workspace = {
enable = true;
workspaces.test-incus = {
type = "incus";
autoStart = true;
config = ./workspaces/test-container.nix;
ip = "192.168.83.90";
hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0SNSy/MdW38NqKzLr1SG8WKrs8XkrqibacaJtJPzgW";
};
};

environment.systemPackages = with pkgs; [
system76-keyboard-configurator
];

@@ -8,6 +8,7 @@
systemRoles = [
"personal"
"dns-challenge"
"ntfy"
];

hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID/Df5lG07Il7fizEgZR/T9bMlR0joESRJ7cqM9BkOyP";

@@ -7,6 +7,7 @@

systemRoles = [
"personal"
"ntfy"
];

hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEQi3q8jU6vRruExAL60J7GFO1gS8HsmXVJuKRT4ljrG";

@@ -1,9 +0,0 @@
{ lib, ... }:

{
imports = [
./hardware-configuration.nix
];

networking.hostName = "phil";
}
@@ -1,46 +0,0 @@
# Do not modify this file! It was generated by ‘nixos-generate-config’
# and may be overwritten by future invocations. Please make changes
# to /etc/nixos/configuration.nix instead.
{ lib, modulesPath, ... }:

{
imports =
[
(modulesPath + "/profiles/qemu-guest.nix")
];

# because grub just doesn't work for some reason
boot.loader.systemd-boot.enable = true;

remoteLuksUnlock.enable = true;
remoteLuksUnlock.enableTorUnlock = false;

boot.initrd.availableKernelModules = [ "xhci_pci" ];
boot.initrd.kernelModules = [ "dm-snapshot" ];
boot.kernelModules = [ ];
boot.extraModulePackages = [ ];

boot.initrd.luks.devices."enc-pv" = {
device = "/dev/disk/by-uuid/d26c1820-4c39-4615-98c2-51442504e194";
allowDiscards = true;
};

fileSystems."/" =
{
device = "/dev/disk/by-uuid/851bfde6-93cd-439e-9380-de28aa87eda9";
fsType = "btrfs";
};

fileSystems."/boot" =
{
device = "/dev/disk/by-uuid/F185-C4E5";
fsType = "vfat";
};

swapDevices =
[{ device = "/dev/disk/by-uuid/d809e3a1-3915-405a-a200-4429c5efdf87"; }];

networking.interfaces.enp0s6.useDHCP = lib.mkDefault true;

nixpkgs.hostPlatform = lib.mkDefault "aarch64-linux";
}
@@ -1,20 +0,0 @@
{
hostNames = [
"phil"
"phil.neet.dev"
];

arch = "aarch64-linux";

systemRoles = [
"server"
"nix-builder"
];

hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBlgRPpuUkZqe8/lHugRPm/m2vcN9psYhh5tENHZt9I2";

remoteUnlock = {
hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK0RodotOXLMy/w70aa096gaNqPBnfgiXR5ZAH4+wGzd";
clearnetHost = "unlock.phil.neet.dev";
};
}
@@ -108,4 +108,12 @@
# librechat
services.librechat-container.enable = true;
services.librechat-container.host = "chat.neet.dev";

# push notifications
services.ntfy-sh.enable = true;
services.ntfy-sh.hostname = "ntfy.neet.dev";

# uptime monitoring
services.gatus.enable = true;
services.gatus.hostname = "status.neet.dev";
}

@@ -15,6 +15,7 @@
"dailybot"
"gitea"
"librechat"
"ntfy"
];

hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMBBlTAIp38RhErU1wNNV5MBeb+WGH0mhF/dxh5RsAXN";

@@ -181,10 +181,20 @@
statusCheck = true;
id = "1_746_wireless";
};
unifi = {
title = "Unifi";
description = "unifi.s0.neet.dev";
icon = "hl-unifi";
url = "https://unifi.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "2_746_unifi";
};
};
networkList = [
networkItems.gateway
networkItems.wireless
networkItems.unifi
];
in
{
@@ -259,14 +269,59 @@
statusCheck = true;
id = "6_836_roundcube";
};
jitsimeet = {
title = "Jitsi Meet";
description = "meet.neet.space";
icon = "hl-jitsimeet";
url = "https://meet.neet.space";
ntfy = {
title = "ntfy";
description = "ntfy.neet.dev";
icon = "hl-ntfy";
url = "https://ntfy.neet.dev";
target = "sametab";
statusCheck = true;
id = "7_836_jitsimeet";
id = "7_836_ntfy";
};
librechat = {
title = "Librechat";
description = "chat.neet.dev";
icon = "hl-librechat";
url = "https://chat.neet.dev";
target = "sametab";
statusCheck = true;
id = "8_836_librechat";
};
owncast = {
title = "Owncast";
description = "live.neet.dev";
icon = "hl-owncast";
url = "https://live.neet.dev";
target = "sametab";
statusCheck = true;
id = "9_836_owncast";
};
navidrome-public = {
title = "Navidrome";
description = "navidrome.neet.cloud";
icon = "hl-navidrome";
url = "https://navidrome.neet.cloud";
target = "sametab";
statusCheck = true;
id = "10_836_navidrome-public";
};
collabora = {
title = "Collabora";
description = "collabora.runyan.org";
icon = "hl-collabora";
url = "https://collabora.runyan.org";
target = "sametab";
statusCheck = true;
id = "11_836_collabora";
};
gatus = {
title = "Gatus";
description = "status.neet.dev";
icon = "hl-gatus";
url = "https://status.neet.dev";
target = "sametab";
statusCheck = true;
id = "12_836_gatus";
};
};
servicesList = [
@@ -276,7 +331,12 @@
servicesItems.git
servicesItems.nextcloud
servicesItems.roundcube
servicesItems.jitsimeet
servicesItems.ntfy
servicesItems.librechat
servicesItems.owncast
servicesItems.navidrome-public
servicesItems.collabora
servicesItems.gatus
];
in
{
@@ -293,5 +353,169 @@
};
}
)

(
let
haItems = {
home-assistant = {
title = "Home Assistant";
description = "ha.s0.neet.dev";
icon = "hl-home-assistant";
url = "https://ha.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "0_4201_home-assistant";
};
esphome = {
title = "ESPHome";
description = "esphome.s0.neet.dev";
icon = "hl-esphome";
url = "https://esphome.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "1_4201_esphome";
};
zigbee2mqtt = {
title = "Zigbee2MQTT";
description = "zigbee.s0.neet.dev";
icon = "hl-zigbee2mqtt";
url = "https://zigbee.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "2_4201_zigbee2mqtt";
};
frigate = {
title = "Frigate";
description = "frigate.s0.neet.dev";
icon = "hl-frigate";
url = "https://frigate.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "3_4201_frigate";
};
valetudo = {
title = "Valetudo";
description = "vacuum.s0.neet.dev";
icon = "hl-valetudo";
url = "https://vacuum.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "4_4201_valetudo";
};
sandman = {
title = "Sandman";
description = "sandman.s0.neet.dev";
icon = "fas fa-bed";
url = "https://sandman.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "5_4201_sandman";
};
};
haList = [
haItems.home-assistant
haItems.esphome
haItems.zigbee2mqtt
haItems.frigate
haItems.valetudo
haItems.sandman
];
in
{
name = "Home Automation";
icon = "fas fa-home";
items = haList;
filteredItems = haList;
displayData = {
sortBy = "default";
rows = 1;
cols = 1;
collapsed = false;
hideForGuests = false;
};
}
)

(
let
prodItems = {
vikunja = {
title = "Vikunja";
description = "todo.s0.neet.dev";
icon = "hl-vikunja";
url = "https://todo.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "0_5301_vikunja";
};
actual = {
title = "Actual Budget";
description = "budget.s0.neet.dev";
icon = "hl-actual-budget";
url = "https://budget.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "1_5301_actual";
};
linkwarden = {
title = "Linkwarden";
description = "linkwarden.s0.neet.dev";
icon = "hl-linkwarden";
url = "https://linkwarden.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "2_5301_linkwarden";
};
memos = {
title = "Memos";
description = "memos.s0.neet.dev";
icon = "hl-memos";
url = "https://memos.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "3_5301_memos";
};
outline = {
title = "Outline";
description = "outline.s0.neet.dev";
icon = "hl-outline";
url = "https://outline.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "4_5301_outline";
};
languagetool = {
title = "LanguageTool";
description = "languagetool.s0.neet.dev";
icon = "hl-languagetool";
url = "https://languagetool.s0.neet.dev";
target = "sametab";
statusCheck = false;
id = "5_5301_languagetool";
};
};
prodList = [
prodItems.vikunja
prodItems.actual
prodItems.linkwarden
prodItems.memos
prodItems.outline
prodItems.languagetool
];
in
{
name = "Productivity";
icon = "fas fa-tasks";
items = prodList;
filteredItems = prodList;
displayData = {
sortBy = "default";
rows = 1;
cols = 1;
collapsed = false;
hideForGuests = false;
};
}
)
];
}

@@ -55,123 +55,135 @@
users.users.googlebot.extraGroups = [ "transmission" ];
users.groups.transmission.gid = config.ids.gids.transmission;

vpn-container.enable = true;
vpn-container.mounts = [
"/var/lib"
"/data/samba/Public"
];
vpn-container.config = {
# servarr services
services.prowlarr.enable = true;
services.sonarr.enable = true;
services.sonarr.user = "public_data";
services.sonarr.group = "public_data";
services.bazarr.enable = true;
services.bazarr.user = "public_data";
services.bazarr.group = "public_data";
services.radarr.enable = true;
services.radarr.user = "public_data";
services.radarr.group = "public_data";
services.lidarr.enable = true;
services.lidarr.user = "public_data";
services.lidarr.group = "public_data";
services.recyclarr = {
enable = true;
configuration = {
radarr.radarr_main = {
api_key = {
_secret = "/run/credentials/recyclarr.service/radarr-api-key";
};
base_url = "http://localhost:7878";
pia-vpn = {
enable = true;
serverLocation = "swiss";

quality_definition.type = "movie";
containers.transmission = {
ip = "10.100.0.10";
mounts."/var/lib".hostPath = "/var/lib";
mounts."/data/samba/Public".hostPath = "/data/samba/Public";
receiveForwardedPort = { protocol = "both"; };
onPortForwarded = ''
# Notify Transmission of the PIA-assigned peer port via RPC
for i in $(seq 1 30); do
curlout=$(curl -s "http://transmission.containers:8080/transmission/rpc" 2>/dev/null) && break
sleep 2
done
regex='X-Transmission-Session-Id: (\w*)'
if [[ $curlout =~ $regex ]]; then
sessionId=''${BASH_REMATCH[1]}
curl -s "http://transmission.containers:8080/transmission/rpc" \
-d "{\"method\":\"session-set\",\"arguments\":{\"peer-port\":$PORT}}" \
-H "X-Transmission-Session-Id: $sessionId"
fi
'';
config = {
services.transmission = {
enable = true;
package = pkgs.transmission_4;
performanceNetParameters = true;
user = "public_data";
group = "public_data";
settings = {
"download-dir" = "/data/samba/Public/Media/Transmission";
"incomplete-dir" = "/var/lib/transmission/.incomplete";
"incomplete-dir-enabled" = true;

"rpc-enabled" = true;
"rpc-port" = 8080;
"rpc-bind-address" = "0.0.0.0";
"rpc-whitelist" = "127.0.0.1,10.100.*.*,192.168.*.*";
"rpc-host-whitelist-enabled" = false;

"port-forwarding-enabled" = true;
"peer-port" = 51413;
"peer-port-random-on-start" = false;

"encryption" = 1;
"lpd-enabled" = true;
"dht-enabled" = true;
"pex-enabled" = true;

"blocklist-enabled" = true;
"blocklist-updates-enabled" = true;
"blocklist-url" = "https://github.com/Naunter/BT_BlockLists/raw/master/bt_blocklists.gz";

"ratio-limit" = 3;
"ratio-limit-enabled" = true;

"download-queue-enabled" = true;
"download-queue-size" = 20;
};
};
# https://github.com/NixOS/nixpkgs/issues/258793
systemd.services.transmission.serviceConfig = {
RootDirectoryStartOnly = lib.mkForce (lib.mkForce false);
RootDirectory = lib.mkForce (lib.mkForce "");
};
sonarr.sonarr_main = {
api_key = {
_secret = "/run/credentials/recyclarr.service/sonarr-api-key";
};
base_url = "http://localhost:8989";

quality_definition.type = "series";
users.groups.public_data.gid = 994;
users.users.public_data = {
isSystemUser = true;
group = "public_data";
uid = 994;
};
};
};

systemd.services.recyclarr.serviceConfig.LoadCredential = [
"radarr-api-key:/run/agenix/radarr-api-key"
"sonarr-api-key:/run/agenix/sonarr-api-key"
];
containers.servarr = {
ip = "10.100.0.11";
mounts."/var/lib".hostPath = "/var/lib";
mounts."/data/samba/Public".hostPath = "/data/samba/Public";
mounts."/run/agenix" = { hostPath = "/run/agenix"; isReadOnly = true; };
config = {
services.prowlarr.enable = true;
services.sonarr.enable = true;
services.sonarr.user = "public_data";
services.sonarr.group = "public_data";
services.bazarr.enable = true;
services.bazarr.user = "public_data";
services.bazarr.group = "public_data";
services.radarr.enable = true;
services.radarr.user = "public_data";
services.radarr.group = "public_data";
services.lidarr.enable = true;
services.lidarr.user = "public_data";
services.lidarr.group = "public_data";
services.recyclarr = {
enable = true;
configuration = {
radarr.radarr_main = {
api_key = {
_secret = "/run/credentials/recyclarr.service/radarr-api-key";
};
base_url = "http://localhost:7878";
quality_definition.type = "movie";
};
sonarr.sonarr_main = {
api_key = {
_secret = "/run/credentials/recyclarr.service/sonarr-api-key";
};
base_url = "http://localhost:8989";
quality_definition.type = "series";
};
};
};

services.transmission = {
enable = true;
package = pkgs.transmission_4;
performanceNetParameters = true;
user = "public_data";
group = "public_data";
settings = {
/* directory settings */
# "watch-dir" = "/srv/storage/Transmission/To-Download";
# "watch-dir-enabled" = true;
"download-dir" = "/data/samba/Public/Media/Transmission";
"incomplete-dir" = "/var/lib/transmission/.incomplete";
"incomplete-dir-enabled" = true;
systemd.services.recyclarr.serviceConfig.LoadCredential = [
"radarr-api-key:/run/agenix/radarr-api-key"
"sonarr-api-key:/run/agenix/sonarr-api-key"
];

/* web interface, accessible from local network */
"rpc-enabled" = true;
"rpc-bind-address" = "0.0.0.0";
"rpc-whitelist" = "127.0.0.1,192.168.*.*,172.16.*.*";
"rpc-host-whitelist" = "void,192.168.*.*,172.16.*.*";
"rpc-host-whitelist-enabled" = false;

"port-forwarding-enabled" = true;
"peer-port" = 50023;
"peer-port-random-on-start" = false;

"encryption" = 1;
"lpd-enabled" = true; /* local peer discovery */
"dht-enabled" = true; /* dht peer discovery in swarm */
"pex-enabled" = true; /* peer exchange */

/* ip blocklist */
"blocklist-enabled" = true;
"blocklist-updates-enabled" = true;
"blocklist-url" = "https://github.com/Naunter/BT_BlockLists/raw/master/bt_blocklists.gz";

/* download speed settings */
# "speed-limit-down" = 1200;
# "speed-limit-down-enabled" = false;
# "speed-limit-up" = 500;
# "speed-limit-up-enabled" = true;

/* seeding limit */
"ratio-limit" = 3;
"ratio-limit-enabled" = true;

"download-queue-enabled" = true;
"download-queue-size" = 20; # gotta go fast
users.groups.public_data.gid = 994;
users.users.public_data = {
isSystemUser = true;
group = "public_data";
uid = 994;
};
};
};
# https://github.com/NixOS/nixpkgs/issues/258793
systemd.services.transmission.serviceConfig = {
RootDirectoryStartOnly = lib.mkForce (lib.mkForce false);
RootDirectory = lib.mkForce (lib.mkForce "");
};

users.groups.public_data.gid = 994;
users.users.public_data = {
isSystemUser = true;
group = "public_data";
uid = 994;
};
};
pia.wireguard.badPortForwardPorts = [
9696 # prowlarr
8989 # sonarr
6767 # bazarr
7878 # radarr
8686 # lidarr
9091 # transmission web
];
age.secrets.radarr-api-key.file = ../../../secrets/radarr-api-key.age;
age.secrets.sonarr-api-key.file = ../../../secrets/sonarr-api-key.age;

@@ -215,12 +227,12 @@
|
||||
};
|
||||
in
|
||||
lib.mkMerge [
|
||||
(mkVirtualHost "bazarr.s0.neet.dev" "http://vpn.containers:6767")
|
||||
(mkVirtualHost "radarr.s0.neet.dev" "http://vpn.containers:7878")
|
||||
(mkVirtualHost "lidarr.s0.neet.dev" "http://vpn.containers:8686")
|
||||
(mkVirtualHost "sonarr.s0.neet.dev" "http://vpn.containers:8989")
|
||||
(mkVirtualHost "prowlarr.s0.neet.dev" "http://vpn.containers:9696")
|
||||
(mkVirtualHost "transmission.s0.neet.dev" "http://vpn.containers:9091")
|
||||
(mkVirtualHost "bazarr.s0.neet.dev" "http://servarr.containers:6767")
|
||||
(mkVirtualHost "radarr.s0.neet.dev" "http://servarr.containers:7878")
|
||||
(mkVirtualHost "lidarr.s0.neet.dev" "http://servarr.containers:8686")
|
||||
(mkVirtualHost "sonarr.s0.neet.dev" "http://servarr.containers:8989")
|
||||
(mkVirtualHost "prowlarr.s0.neet.dev" "http://servarr.containers:9696")
|
||||
(mkVirtualHost "transmission.s0.neet.dev" "http://transmission.containers:8080")
|
||||
(mkVirtualHost "unifi.s0.neet.dev" "https://localhost:8443")
|
||||
(mkVirtualHost "music.s0.neet.dev" "http://localhost:4533")
|
||||
(mkVirtualHost "jellyfin.s0.neet.dev" "http://localhost:8096")

@@ -72,5 +72,5 @@
   };
 };

-powerManagement.cpuFreqGovernor = "powersave";
+powerManagement.cpuFreqGovernor = "schedutil";
 }

@@ -18,6 +18,7 @@
   "linkwarden"
   "outline"
   "dns-challenge"
+  "ntfy"
 ];

 hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwiXcUFtAvZCayhu4+AIcF+Ktrdgv9ee/mXSIhJbp4q";

@@ -8,6 +8,7 @@
 systemRoles = [
   "personal"
   "media-center"
+  "ntfy"
 ];

 hostKey = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHvdC1EiLqSNVmk5L1p7cWRIrrlelbK+NMj6tEBrwqIq";

BIN  secrets/attic-netrc.age  (Normal file; binary file not shown)
BIN  secrets/atticd-credentials.age  (Normal file; binary file not shown)
19  secrets/ntfy-token.age  (Normal file)
@@ -0,0 +1,19 @@
age-encryption.org/v1
-> ssh-ed25519 qEbiMg 5JtpNApPNiFqAB/gQcAsE1gz0Fg/uHW92f6Kx1J2ggQ
RzC1MQxyDYW1IuMo+OtSgcsND4v7XIRn0rCSkKCFA3A
-> ssh-ed25519 N7drjg mn6LWo+2zWEtUavbFQar966+j+g5su+lcBfWYz1aZDQ
EmKpdfkCSQao1+O/HJdOiam7UvBnDYcEEkgH6KrudQI
-> ssh-ed25519 jQaHAA an3Ukqz3BVoz0FEAA6/Lw1XKOkQWHwmTut+XD4E4vS8
9N2ePtXG2FPJSmOwcAO9p92MJKJJpTlEhKSmgMiinB0
-> ssh-ed25519 ZDy34A v98HzmBgwgOpUk2WrRuFsCdNR+nF2veVLzyT2pU2ZXY
o2pO5JbVEeaOFQ3beBvej6qgDdT9mgPCVHxmw2umhA0
-> ssh-ed25519 w3nu8g Uba2LWQueJ50Ds1/RjvkXI+VH7calMiM7dbL02sRJ3U
mFj5skmDXhJV9lK5iwUpebqxqVPAexdUntrbWJEix+Q
-> ssh-ed25519 evqvfg wLEEdTdmDRiFGYDrYFQjvRzc3zmGXgvztIy3UuFXnWg
CtCV/PpaxBmtDV+6InmcbKxNDmbPUTzyCm8tCf1Qw/4
-> ssh-ed25519 6AT2/g NaA1AhOdb+FGOCoWFEX0QN4cXS1CxlpFwpsH1L6vBA0
Z9aMYAofQ4Ath0zsg0YdZG3GTnCN2uQW7EG02bMZRsc
-> ssh-ed25519 hPp1nw +shAZrydcbjXfYxm1UW1YosKg5ZwBBKO6cct4HdotBo
GxnopKlmZQ/I6kMZPNurLgqwwkFHpUradaNYTPnlMFU
--- W6DrhCmU08IEIyPpHDiRV21xVeALNk1bDHrLYc2YcC4
[encrypted binary payload not shown]

@@ -7,6 +7,9 @@ let

   # nobody is using this secret but I still need to be able to r/w it
   nobody = sshKeys.userKeys;
+
+  # For secrets that all machines need to know
+  everyone = lib.unique (roles.personal ++ roles.server);
 in

 with roles;

@@ -22,8 +25,10 @@
 # nix binary cache
 # public key: s0.koi-bebop.ts.net:OjbzD86YjyJZpCp9RWaQKANaflcpKhtzBMNP8I2aPUU=
 "binary-cache-private-key.age".publicKeys = binary-cache;
 # public key: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINpUZFFL9BpBVqeeU63sFPhR9ewuhEZerTCDIGW1NPSB
 "binary-cache-push-sshkey.age".publicKeys = nobody; # this value is directly given to gitea

+# attic binary cache
+"atticd-credentials.age".publicKeys = binary-cache;
+"attic-netrc.age".publicKeys = everyone;

 # vpn
 "pia-login.age".publicKeys = pia;

@@ -38,8 +43,11 @@
 "linkwarden-environment.age".publicKeys = linkwarden;

 # backups
-"backblaze-s3-backups.age".publicKeys = personal ++ server;
-"restic-password.age".publicKeys = personal ++ server;
+"backblaze-s3-backups.age".publicKeys = everyone;
+"restic-password.age".publicKeys = everyone;

+# ntfy alerts
+"ntfy-token.age".publicKeys = everyone;
+
 # gitea actions runner
 "gitea-actions-runner-token.age".publicKeys = gitea-actions-runner;