Compare commits

...

12 commits

Author SHA1 Message Date
fd7048b676
Merge 7bb3e05942 into 81d352cacc 2026-04-20 12:11:58 +00:00
81d352cacc docs: add terminal demo GIF and polish README
- Record real CLI session on greenarch (list, deploy, status, stop)
- Punchier tagline: "One command to deploy your self-hosted stack"
- Feature bullet list, default ports in project table
- Collapsible project-specific notes (Tailscale profiles, Nextcloud, Minecraft)
- last-commit badge, docker_compose badge
- VHS tape included for re-recording
2026-04-15 11:02:38 +08:00
4a674026ea docs: redesign README as project portal with banner and badges
- Centered SVG banner with terminal prompt icon
- shields.io badges (license, release, bash, docker, stars)
- Feature highlights, project table with upstream links
- Project structure diagram, requirements table
- Contributing guide for adding new projects
2026-04-15 10:38:58 +08:00
48cd34ca27 Merge refactor/project-restructure: self-contained project dirs with interactive CLI 2026-04-15 10:36:20 +08:00
81b4d51f80 fix: use pre-increment to avoid set -e trap on ((0))
((ok++)) with ok=0 returns the pre-increment value 0, so the arithmetic
command exits nonzero and set -e aborts the script. Use ((++ok)) so the
expression always evaluates to >= 1.
2026-04-15 10:26:41 +08:00
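The failure mode this commit fixes can be reproduced in a few lines (a standalone sketch, not code from the repo):

```shell
#!/usr/bin/env bash
set -e
ok=0
# ((ok++)) would return the pre-increment value 0 here; bash maps an
# arithmetic result of 0 to exit status 1, so set -e would abort the script.
((++ok))      # pre-increment yields 1, exit status 0, set -e is satisfied
echo "ok=$ok" # prints: ok=1
```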
ccb424286c feat: add -y/--yes non-interactive flag for CI and scripted deploys
Skip all prompts: accept defaults, auto-generate secrets, keep existing
.env files, and auto-confirm deploy. Follows the npm init -y pattern.
2026-04-15 10:23:58 +08:00
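A minimal sketch of how such a flag can short-circuit every prompt through one helper (the helper name and variables are illustrative, not the CLI's actual code):

```shell
#!/usr/bin/env bash
YES=1   # would be set while parsing -y/--yes

# ask <prompt> <default>: return the default immediately in -y mode,
# otherwise fall back to an interactive read.
ask() {
    if (( YES )); then
        echo "$2"
    else
        local reply
        read -rp "$1 [$2]: " reply
        echo "${reply:-$2}"
    fi
}

port=$(ask "HTTP port" "3000")
echo "$port"  # prints: 3000
```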
e5542de818 polish: CLI UX overhaul and rich .env.example metadata
CLI improvements:
- Unicode status indicators (✔ ✘ ▶ ● ○ ⚠) and braille spinners
- Animated spinner for docker pull/up operations
- Project metadata parsed from .env.example (@name, @desc, @url, @port, @note)
- Descriptions shown in list, deploy selection, and status views
- Auto-generate passwords for secret fields (PASSWORD/TOKEN/AUTHKEY)
- Confirmation prompt before deploy with project summary
- Post-deploy access URL hint based on @port metadata
- Divider lines for visual section separation
- Helpful error messages with suggested commands
- Command aliases: ls, st, ps, down, log, configure
- Bash 3.2 compatible (no associative arrays)

.env.example enrichment:
- All projects now have @name, @desc, @url, @port metadata headers
- Inline field descriptions shown as context during interactive config
- Tailscale: @note hints for profile-based DERP deployment
- Structured comments group related settings visually

Installer:
- Prerequisite check with per-tool status (✔/✘)
- Quieter git operations
- Cleaner post-install instructions
2026-04-15 10:15:43 +08:00
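The `@key` convention can be illustrated with a small stand-in for the CLI's parser (the sed one-liner and file contents here are illustrative, not the script's real implementation):

```shell
#!/usr/bin/env bash
# Metadata lives in the comment header of .env.example as "# @key value".
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# @name Forgejo
# @desc Self-hosted Git service (Gitea fork)
# @port FORGEJO_HTTP_PORT
FORGEJO_HTTP_PORT=3000
EOF

# Strip the "# @key " prefix from the first matching header line.
meta() { sed -n "s/^# @$1 //p" "$tmp" | head -n1; }

meta name  # prints: Forgejo
meta desc  # prints: Self-hosted Git service (Gitea fork)
rm -f "$tmp"
```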
adcf0b1884 forgejo: use wget for healthcheck, add SQLite WAL mode
- wget is available in the forgejo image without extra deps (curl isn't)
- increase start_period to 60s for DB migration window
- enable SQLite WAL journal mode for better concurrent read performance
2026-04-15 10:03:30 +08:00
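In compose terms, the change described above looks roughly like this (service name, port, and timings are an illustrative sketch, not the repo's exact file):

```yaml
services:
  forgejo:
    healthcheck:
      # wget ships in the image; --spider probes without saving a response body
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/api/healthz"]
      interval: 30s
      timeout: 5s
      retries: 5
      start_period: 60s   # leave room for the DB migration on first boot
```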
1ef24b3be8 improve: best-practice configs for all projects, CLI UX overhaul
Compose improvements:
- forgejo: add healthcheck (/api/healthz), ROOT_URL + SSH_PORT env, LFS
- tailscale: drop redundant privileged (use cap_add only), use devices
  for /dev/net/tun, mount /lib/modules, reliable healthcheck (tailscale
  status), profiles for opt-in DERP, headscale comment in .env.example
- uptime-kuma: add built-in healthcheck (extra/healthcheck)
- filesuite: add healthchecks for both cloudreve and qbittorrent
- minecraft: add mc-health check (built into itzg image), simplify volumes
- teamspeak: add healthcheck via ServerQuery (nc localhost 10011)
- nextcloud: add healthchecks for all 3 services, depends_on with
  service_healthy conditions so startup order is correct

CLI improvements:
- Fix docker compose detection (was broken with space in arg)
- Use global array for project discovery (no word-splitting bugs)
- Empty selection no longer defaults to "all" (safety)
- Show .env.example comments as hints during interactive configure
- Required fields (empty default) loop until user provides a value
- Disable colors when stdout is not a terminal
- compose() wrapper auto-adds --env-file
- Deduplicate project_exists / project_dir helpers
2026-04-15 10:02:41 +08:00
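The detection fix deserves a note: `command -v "docker compose"` can never succeed, because `command -v` resolves a single executable name and no binary literally named "docker compose" (with a space) exists — the v2 CLI is a plugin. A sketch of the working approach, probing the subcommand and keeping the result in an array so the two words survive quoting (function name illustrative):

```shell
#!/usr/bin/env bash
detect_compose() {
    if docker compose version >/dev/null 2>&1; then
        COMPOSE=(docker compose)       # v2 plugin: two words, so an array
    elif command -v docker-compose >/dev/null 2>&1; then
        COMPOSE=(docker-compose)       # legacy standalone binary
    else
        return 1
    fi
}
# Usage: detect_compose && "${COMPOSE[@]}" up -d
```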
48b32d46b2 remove monitoring, huajibot, dockge, notification-center
These services are not needed and should be decommissioned from production.
2026-04-15 09:56:42 +08:00
3433516287 refactor: restructure as self-contained project dirs with interactive CLI
- Remove old services/, bin/, config.sh, Makefile, setup.sh
- Each Docker Compose project is now a top-level self-contained directory
  with compose.yaml + .env.example (project self-governance)
- Add automa CLI: interactive deploy, status, logs, stop, update, config
- Add install.sh for curl-pipe-bash quick start
- New projects from production: uptime-kuma, tailscale+derp, monitoring
  (prometheus+grafana+blackbox+node-exporter), filesuite (cloudreve+qbt),
  huajibot, dockge, notification-center
- Clean up existing projects: forgejo, minecraft, teamspeak, nextcloud
- Sanitize all .env.example files (no real secrets)
2026-04-15 09:54:23 +08:00
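The discovery rule this layout implies — any top-level directory containing a `compose.yaml` is a deployable project — can be sketched as (paths here are made up for the demo):

```shell
#!/usr/bin/env bash
# Build a demo tree: one real project, one plain directory.
root=$(mktemp -d)
mkdir -p "$root/forgejo" "$root/notes"
touch "$root/forgejo/compose.yaml"

projects=()
for dir in "$root"/*/; do
    # Only directories that ship a compose.yaml count as projects
    [[ -f "$dir/compose.yaml" ]] && projects+=("$(basename "$dir")")
done

echo "${projects[@]}"  # prints: forgejo
rm -rf "$root"
```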
7bb3e05942 feat: add infrastructure services for monitoring and automation
Add an infrastructure layer with the following components:

**Reverse Proxy & SSL:**
- Caddy: Auto HTTPS with Let's Encrypt, simple configuration
- Caddyfile with reverse proxy rules for Nextcloud and Grafana

**Monitoring Stack (Observability):**
- Prometheus: Metrics collection and time-series database
- Grafana: Visualization dashboards with datasource provisioning
- Loki: Lightweight log aggregation
- Promtail: Log collection agent for Docker containers
- cAdvisor: Container resource monitoring

**Automation:**
- Watchtower: Automatic Docker image updates (label-based)
- Duplicati: Remote backup with web UI and encryption support

**Security:**
- Fail2ban: Intrusion prevention and IP banning

**Key Features:**
- All services use official Alpine-based images (lightweight)
- Network isolation (automa-proxy, automa-monitoring)
- Resource limits and health checks configured
- Read-only configs where applicable
- Comprehensive README with setup instructions

**Resource Usage:**
- Total additional overhead: ~1.5GB RAM, ~16GB disk
- Follows KISS principles and Unix philosophy
- All services replaceable and independently scalable

Refs: #3
2026-01-19 16:32:00 +08:00
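"Label-based" Watchtower updates mean containers opt in individually rather than being updated fleet-wide; a compose fragment like this is what the bullet refers to (image name illustrative):

```yaml
services:
  app:
    image: nginx:alpine
    labels:
      # Watchtower only touches containers that opt in with this label
      - "com.centurylinklabs.watchtower.enable=true"
```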
98 changed files with 1868 additions and 5563 deletions

.github/banner.svg (vendored, new file)

@@ -0,0 +1,22 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 600 160" fill="none">
<defs>
<linearGradient id="bg" x1="0" y1="0" x2="600" y2="160" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#0f172a"/>
<stop offset="100%" stop-color="#1e293b"/>
</linearGradient>
<linearGradient id="accent" x1="0" y1="0" x2="1" y2="1">
<stop offset="0%" stop-color="#38bdf8"/>
<stop offset="100%" stop-color="#818cf8"/>
</linearGradient>
</defs>
<rect width="600" height="160" rx="16" fill="url(#bg)"/>
<!-- Terminal prompt icon -->
<g transform="translate(40, 48)">
<rect width="64" height="64" rx="14" fill="url(#accent)" opacity="0.15"/>
<text x="32" y="44" font-family="monospace" font-size="36" font-weight="bold" fill="url(#accent)" text-anchor="middle">$_</text>
</g>
<!-- Title -->
<text x="124" y="78" font-family="system-ui, -apple-system, sans-serif" font-size="42" font-weight="bold" fill="#f8fafc" letter-spacing="-1">automa</text>
<!-- Tagline -->
<text x="124" y="108" font-family="system-ui, -apple-system, sans-serif" font-size="16" fill="#94a3b8">Self-hosted Docker Compose deployer</text>
</svg>

Size: 1.1 KiB

.github/demo.gif (vendored, new binary file, 420 KiB)

Binary file not shown.

.github/demo.tape (vendored, new file)

@@ -0,0 +1,31 @@
# VHS demo tape for automa
# Run: vhs .github/demo.tape
Output .github/demo.gif
Set Shell bash
Set FontSize 15
Set Width 900
Set Height 500
Set Padding 20
Set Theme { "name": "automa", "black": "#0f172a", "red": "#ef4444", "green": "#22c55e", "yellow": "#eab308", "blue": "#3b82f6", "magenta": "#a855f7", "cyan": "#06b6d4", "white": "#f8fafc", "brightBlack": "#64748b", "brightRed": "#f87171", "brightGreen": "#4ade80", "brightYellow": "#facc15", "brightBlue": "#60a5fa", "brightMagenta": "#c084fc", "brightCyan": "#22d3ee", "brightWhite": "#ffffff", "background": "#0f172a", "foreground": "#f8fafc", "selection": "#334155", "cursor": "#38bdf8" }
Sleep 500ms
Type "./automa help"
Enter
Sleep 3s
Type "./automa list"
Enter
Sleep 3s
Type "./automa -y deploy uptime-kuma"
Enter
Sleep 6s
Type "./automa status"
Enter
Sleep 3s
Sleep 1s

.gitignore (vendored)

@@ -3,41 +3,36 @@
 *.env
 !.env.example
-# Docker volumes and data
+# Runtime data
 **/data/
 **/volumes/
+**/tailscale-data/
+**/cloudreve-data/
+**/qbt-config/
+**/downloads/
 # Logs
 *.log
-logs/
 **/logs/
 # Backups
 *.tar.gz
 *.zip
-backups/
 **/backups/
-# OS specific
+# OS
 .DS_Store
 Thumbs.db
 *.swp
-*.swo
 *~
 # IDE
 .vscode/
 .idea/
-*.iml
 # Minecraft specific
-minecraft/mods/
 minecraft/world/
 minecraft/world_*/
 minecraft/server.jar
 minecraft/libraries/
 minecraft/.fabric/
-# Temporary files
-tmp/
-temp/
-*.tmp

Makefile (deleted)

@@ -1,252 +0,0 @@
# Automa - Unified Makefile
# Provides common operations across all services
.PHONY: help all status up down logs restart clean minecraft teamspeak nextcloud
.PHONY: health health-minecraft health-teamspeak health-nextcloud
.PHONY: backup backup-minecraft backup-teamspeak backup-nextcloud backup-list backup-cleanup
.PHONY: deploy-email deploy-nginx deploy-ss-server deploy-ss-client deploy-frp-server deploy-frp-client
# Default target
help:
@echo "Automa - Self-hosted Services Manager"
@echo ""
@echo "Usage: make [target]"
@echo ""
@echo "Global Commands:"
@echo " help Show this help message"
@echo " status Show status of all services"
@echo " all-up Start all services"
@echo " all-down Stop all services"
@echo " health Run health checks on all services"
@echo " backup Backup all services"
@echo " backup-list List available backups"
@echo " backup-cleanup Remove old backups"
@echo ""
@echo "Infrastructure Deploy (set INFRA_DIR first):"
@echo " deploy-email Deploy Postfix+Dovecot+OpenDKIM+SpamAssassin"
@echo " deploy-nginx Deploy Nginx vhosts"
@echo " deploy-ss-server Deploy Shadowsocks server"
@echo " deploy-ss-client Deploy Shadowsocks client + privoxy"
@echo " deploy-frp-server Deploy FRP server (frps)"
@echo " deploy-frp-client Deploy FRP client (frpc)"
@echo ""
@echo "Service-specific Commands:"
@echo " Minecraft:"
@echo " minecraft-up Start Minecraft server"
@echo " minecraft-down Stop Minecraft server"
@echo " minecraft-logs View Minecraft logs"
@echo " minecraft-restart Restart Minecraft server"
@echo " minecraft-status Show server status"
@echo " minecraft-setup Initialize environment"
@echo " minecraft-mods-download Download mods from Modrinth"
@echo " minecraft-mods-list List installed mods"
@echo " minecraft-mods-update Update all mods"
@echo " minecraft-backup Create full backup"
@echo " minecraft-backup-world Backup world data only"
@echo " minecraft-backup-list List available backups"
@echo " health-minecraft Check Minecraft health"
@echo ""
@echo " TeamSpeak:"
@echo " teamspeak-up Start TeamSpeak server"
@echo " teamspeak-down Stop TeamSpeak server"
@echo " teamspeak-logs View TeamSpeak logs"
@echo " teamspeak-restart Restart TeamSpeak server"
@echo " health-teamspeak Check TeamSpeak health"
@echo ""
@echo " Nextcloud:"
@echo " nextcloud-up Start Nextcloud"
@echo " nextcloud-down Stop Nextcloud"
@echo " nextcloud-logs View Nextcloud logs"
@echo " nextcloud-restart Restart Nextcloud"
@echo " health-nextcloud Check Nextcloud health"
@echo ""
@echo "Utility Commands:"
@echo " check Check prerequisites"
@echo " clean Remove stopped containers and unused volumes"
# ============================================================================
# Infrastructure Service Targets
# Requires INFRA_DIR pointing to the corresponding infra module directory.
# ============================================================================
# deploy-email: INFRA_DIR=/path/to/infra/services/email make deploy-email
deploy-email:
@[ -n "$(INFRA_DIR)" ] || { echo "Set INFRA_DIR=/path/to/infra/services/email"; exit 1; }
INFRA_DIR=$(INFRA_DIR) ./services/email/deploy.sh
deploy-nginx:
@[ -n "$(INFRA_DIR)" ] || { echo "Set INFRA_DIR=/path/to/infra/services/nginx"; exit 1; }
INFRA_DIR=$(INFRA_DIR) ./services/nginx/deploy.sh
deploy-ss-server:
@[ -n "$(INFRA_DIR)" ] || { echo "Set INFRA_DIR=/path/to/infra/services/shadowsocks/server"; exit 1; }
INFRA_DIR=$(INFRA_DIR) ./services/shadowsocks/server/deploy.sh
deploy-ss-client:
@[ -n "$(INFRA_DIR)" ] || { echo "Set INFRA_DIR=/path/to/infra/services/shadowsocks/client"; exit 1; }
INFRA_DIR=$(INFRA_DIR) ./services/shadowsocks/client/deploy.sh
deploy-frp-server:
@[ -n "$(INFRA_DIR)" ] || { echo "Set INFRA_DIR=/path/to/infra/services/frp/server"; exit 1; }
INFRA_DIR=$(INFRA_DIR) ./services/frp/server/deploy.sh
deploy-frp-client:
@[ -n "$(INFRA_DIR)" ] || { echo "Set INFRA_DIR=/path/to/infra/services/frp/client"; exit 1; }
INFRA_DIR=$(INFRA_DIR) ./services/frp/client/deploy.sh
# Check prerequisites
check:
@echo "Checking prerequisites..."
@command -v docker >/dev/null 2>&1 || { echo "Docker not found. Install: https://docs.docker.com/get-docker/"; exit 1; }
@command -v docker compose >/dev/null 2>&1 || command -v docker-compose >/dev/null 2>&1 || { echo "Docker Compose not found."; exit 1; }
@echo "✓ All prerequisites satisfied"
# Status check for all services
status:
@echo "=== Service Status ==="
@echo ""
@echo "Minecraft:"
@cd minecraft && docker compose ps 2>/dev/null || echo " Not running"
@echo ""
@echo "TeamSpeak:"
@cd teamspeak && docker compose ps 2>/dev/null || echo " Not running"
@echo ""
@echo "Nextcloud:"
@cd nextcloud && docker compose ps 2>/dev/null || echo " Not running"
# Start all services
all-up:
@echo "Starting all services..."
@cd minecraft && docker compose up -d
@cd teamspeak && docker compose up -d
@cd nextcloud && docker compose up -d
@echo "✓ All services started"
# Stop all services
all-down:
@echo "Stopping all services..."
@cd minecraft && docker compose down
@cd teamspeak && docker compose down
@cd nextcloud && docker compose down
@echo "✓ All services stopped"
# Minecraft
minecraft-up:
@cd minecraft && docker compose up -d
@echo "✓ Minecraft server started"
minecraft-down:
@cd minecraft && docker compose down
@echo "✓ Minecraft server stopped"
minecraft-logs:
@cd minecraft && docker compose logs -f
minecraft-restart:
@cd minecraft && docker compose restart
@echo "✓ Minecraft server restarted"
minecraft-status:
@cd minecraft && ./scripts/monitor.sh status
minecraft-setup:
@cd minecraft && ./scripts/setup.sh
minecraft-mods-download:
@cd minecraft && ./scripts/mod-manager.sh download
minecraft-mods-list:
@cd minecraft && ./scripts/mod-manager.sh list
minecraft-mods-update:
@cd minecraft && ./scripts/mod-manager.sh update
minecraft-mods-check:
@cd minecraft && ./scripts/mod-manager.sh check
minecraft-backup:
@cd minecraft && ./scripts/backup.sh backup all
minecraft-backup-world:
@cd minecraft && ./scripts/backup.sh backup world
minecraft-backup-list:
@cd minecraft && ./scripts/backup.sh list
minecraft-backup-cleanup:
@cd minecraft && ./scripts/backup.sh cleanup
# TeamSpeak
teamspeak-up:
@cd teamspeak && docker compose up -d
@echo "✓ TeamSpeak server started"
teamspeak-down:
@cd teamspeak && docker compose down
@echo "✓ TeamSpeak server stopped"
teamspeak-logs:
@cd teamspeak && docker compose logs -f
teamspeak-restart:
@cd teamspeak && docker compose restart
@echo "✓ TeamSpeak server restarted"
# Nextcloud
nextcloud-up:
@cd nextcloud && docker compose up -d
@echo "✓ Nextcloud started"
nextcloud-down:
@cd nextcloud && docker compose down
@echo "✓ Nextcloud stopped"
nextcloud-logs:
@cd nextcloud && docker compose logs -f
nextcloud-restart:
@cd nextcloud && docker compose restart
@echo "✓ Nextcloud restarted"
# Cleanup
clean:
@echo "Cleaning up Docker resources..."
@docker container prune -f
@docker volume prune -f
@echo "✓ Cleanup complete"
# ============================================================================
# Health Check Targets
# ============================================================================
health:
@./bin/healthcheck.sh all
health-minecraft:
@./bin/healthcheck.sh minecraft
health-teamspeak:
@./bin/healthcheck.sh teamspeak
health-nextcloud:
@./bin/healthcheck.sh nextcloud
# ============================================================================
# Backup Targets (using bin/backup.sh)
# ============================================================================
backup:
@./bin/backup.sh backup all
backup-minecraft:
@./bin/backup.sh backup minecraft
backup-teamspeak:
@./bin/backup.sh backup teamspeak
backup-nextcloud:
@./bin/backup.sh backup nextcloud
backup-list:
@./bin/backup.sh list
backup-cleanup:
@./bin/backup.sh cleanup

README.md

@@ -1,240 +1,156 @@
-# Automa
-Deployment scripts for self-hosted infrastructure. Pairs with [infra](https://github.com/m1ngsama/infra) (private) for configuration.
-```
-infra/services/<name>/.env → automa/services/<name>/deploy.sh
-```
-## Relationship with infra
-**infra** (private) holds config templates and `.env.example` files — the "what" and "how to configure".
-**automa** (public) holds deployment scripts — the "how to deploy". Zero hardcoded values, zero domain names.
-Workflow:
-1. Clone infra (private), fill in `.env` files for each service you want
-2. Clone automa (public), run the matching deploy script
-3. Each script reads `INFRA_DIR` to locate the corresponding `.env`
+<p align="center">
+  <img src=".github/banner.svg" alt="automa" width="600">
+</p>
+<p align="center">
+  <b>One command to deploy your self-hosted stack.</b><br>
+  <sub>Interactive CLI for Docker Compose — guided setup, auto-generated secrets, zero YAML editing.</sub>
+</p>
+<p align="center">
+  <a href="https://github.com/m1ngsama/automa/releases"><img src="https://img.shields.io/github/v/release/m1ngsama/automa?style=flat-square&color=green" alt="Release"></a>
+  <a href="https://github.com/m1ngsama/automa/blob/main/LICENSE"><img src="https://img.shields.io/github/license/m1ngsama/automa?style=flat-square" alt="License"></a>
+  <a href="https://github.com/m1ngsama/automa/commits/main"><img src="https://img.shields.io/github/last-commit/m1ngsama/automa?style=flat-square" alt="Last commit"></a>
+  <img src="https://img.shields.io/badge/bash-%3E%3D4.0-4EAA25?style=flat-square&logo=gnubash&logoColor=white" alt="Bash 4+">
+  <img src="https://img.shields.io/badge/docker_compose-v2-2496ED?style=flat-square&logo=docker&logoColor=white" alt="Docker Compose v2">
+</p>
+<br>
+<p align="center">
+  <img src=".github/demo.gif" alt="automa demo" width="700">
+</p>
+---
+## Quick Start
 ```bash
-# Example
-cd infra/services/email && cp .env.example .env && $EDITOR .env
-cd automa/services/email
-INFRA_DIR=../../infra/services/email ./deploy.sh
+curl -fsSL https://raw.githubusercontent.com/m1ngsama/automa/main/install.sh | bash
+cd ~/automa
+./automa deploy
 ```
-## Philosophy
-This project embraces Unix principles:
-- **Modularity**: Each service is self-contained
-- **Simplicity**: Minimal dependencies, clear configuration
-- **Composability**: Tools work together through standard interfaces
-- **Transparency**: Plain text configuration, readable scripts
-## Infrastructure Services
-System services deployed from infra module configs.
-### Email
-Postfix + Dovecot + OpenDKIM + SpamAssassin.
+That's it. The installer checks prerequisites, clones the repo, and you're ready to deploy.
+## Features
+- **Interactive CLI** — select projects from a numbered menu, guided `.env` setup with hints
+- **Zero config** — sensible defaults for every project, passwords and secrets auto-generated
+- **Non-interactive mode** — `automa -y deploy` accepts all defaults for CI/scripts
+- **Self-contained projects** — each is an independent directory, no shared dependencies
+- **Production-ready** — health checks, security hardening, least-privilege containers
+- **Extensible** — drop in a `compose.yaml` + `.env.example` and automa discovers it
+## Bundled Projects
+| Project | Description | Default Port | Upstream |
+|---------|-------------|:------------:|----------|
+| **Forgejo** | Self-hosted Git service (Gitea fork) | `3000` | [forgejo.org](https://forgejo.org) |
+| **Nextcloud** | Private cloud with MariaDB + Redis | `8080` | [nextcloud.com](https://nextcloud.com) |
+| **Uptime Kuma** | Uptime monitoring dashboard | `3001` | [GitHub](https://github.com/louislam/uptime-kuma) |
+| **Tailscale** | Mesh VPN client + optional DERP relay | host | [tailscale.com](https://tailscale.com) |
+| **Filesuite** | Cloudreve cloud storage + qBittorrent | `5212` `8090` | [cloudreve.org](https://cloudreve.org) |
+| **Minecraft** | Fabric server (itzg/minecraft-server) | `25565` | [Docs](https://docker-minecraft-server.readthedocs.io) |
+| **TeamSpeak** | Voice communication server | `9987/udp` | [teamspeak.com](https://teamspeak.com) |
+<details>
+<summary><b>Project-specific notes</b></summary>
+#### Tailscale
+Uses Docker Compose profiles — deploy only the VPN client or include the DERP relay:
 ```bash
-INFRA_DIR=/path/to/infra/services/email ./services/email/deploy.sh
+# Tailscale client only
+docker compose --profile tailscale up -d
+# Client + DERP relay
+docker compose --profile derp up -d
 ```
-### Nginx
-Web server and reverse proxy vhosts.
+#### Nextcloud
+Ships with MariaDB 11 and Redis 7 as backing services. All three containers have health checks with `depends_on` ordering. First startup takes ~60s for database migration.
+#### Minecraft
+Uses the [itzg/minecraft-server](https://docker-minecraft-server.readthedocs.io) image with Fabric mod loader. Mods go in `minecraft/data/mods/`. First startup takes ~2 minutes to download server files.
+</details>
+## Usage
 ```bash
-INFRA_DIR=/path/to/infra/services/nginx ./services/nginx/deploy.sh
+automa deploy                    # interactive project selection
+automa deploy forgejo nextcloud  # deploy specific projects
+automa -y deploy forgejo         # non-interactive (CI/scripts)
+automa status                    # overview dashboard
+automa logs minecraft            # follow container logs
+automa stop forgejo              # stop a project
+automa restart nextcloud         # restart a project
+automa update nextcloud          # pull latest images & recreate
+automa config tailscale          # reconfigure .env
+automa list                      # list all projects
 ```
-### Shadowsocks
-GFW-resistant proxy (legacy; new deployments should use sing-box).
+## How It Works
+```
+~/automa/
+├── automa            # CLI entry point (single bash script)
+├── install.sh        # curl-pipe-bash installer
+├── forgejo/
+│   ├── compose.yaml  # Docker Compose definition
+│   ├── .env.example  # Template with @metadata + defaults
+│   └── .env          # Your config (gitignored, created by CLI)
+├── nextcloud/
+│   └── ...
+└── ...
+```
+Each `.env.example` carries metadata that the CLI reads:
 ```bash
-# Server (VPS)
-INFRA_DIR=/path/to/infra/services/shadowsocks/server ./services/shadowsocks/server/deploy.sh
-# Client (home machine)
-INFRA_DIR=/path/to/infra/services/shadowsocks/client ./services/shadowsocks/client/deploy.sh
+# @name Forgejo
+# @desc Self-hosted Git service (Gitea fork)
+# @url https://forgejo.org
+# @port FORGEJO_HTTP_PORT
 ```
-### Sing-box
-Multi-protocol proxy (VLESS/Reality, VMess/WS, Hysteria2). Config generated once
-by [sing-box-yg](https://github.com/yonggekkk/sing-box-yg), then stored in infra.
+The CLI uses these annotations to show project names, descriptions, docs links, and access URLs — all without extra configuration files.
+## Requirements
+| Dependency | Minimum | Check |
+|------------|---------|-------|
+| Docker | 20.10+ | `docker --version` |
+| Docker Compose | v2 (plugin) | `docker compose version` |
+| Bash | 4.0+ | `bash --version` |
+| Git | any | `git --version` |
+The installer verifies all prerequisites automatically.
+## Uninstall
 ```bash
-# Server (VPS)
-INFRA_DIR=/path/to/infra/services/sing-box/server ./services/sing-box/server/deploy.sh
-# Client (home machine)
-INFRA_DIR=/path/to/infra/services/sing-box/client ./services/sing-box/client/deploy.sh
+cd ~/automa
+./automa stop <each-project>   # stop running containers
+cd ~ && rm -rf ~/automa        # remove automa
 ```
-### FRP
-Reverse tunnel — expose home services through VPS.
-```bash
-# Server (VPS)
-INFRA_DIR=/path/to/infra/services/frp/server ./services/frp/server/deploy.sh
-# Client (home machine)
-INFRA_DIR=/path/to/infra/services/frp/client ./services/frp/client/deploy.sh
-```
-### TNT
-SSH-based terminal chat server.
-```bash
-INFRA_DIR=/path/to/infra/services/tnt ./services/tnt/deploy.sh
-```
-### MinIO
-S3-compatible object storage.
-```bash
-INFRA_DIR=/path/to/infra/services/minio ./services/minio/deploy.sh
-```
-### Galene
-WebRTC video conferencing server.
-```bash
-INFRA_DIR=/path/to/infra/services/galene ./services/galene/deploy.sh
-```
-## Home Services
-Docker-based services with their own config.
-### Minecraft Server
-Automated Minecraft Fabric server deployment with mod management.
-**Location**: `minecraft/`
-**Quick Start**:
-```bash
-cd minecraft
-cp .env.example .env  # Edit as needed
-docker compose up -d
-```
-See [minecraft/README.md](minecraft/README.md) for details.
-### TeamSpeak Server
-Voice communication server with minimal configuration.
-**Location**: `teamspeak/`
-**Quick Start**:
-```bash
-cd teamspeak
-cp .env.example .env  # Edit as needed
-docker compose up -d
-```
-See [teamspeak/README.md](teamspeak/README.md) for details.
-### Nextcloud
-Self-hosted file sync and collaboration platform.
-**Location**: `nextcloud/`
-**Quick Start**:
-```bash
-cd nextcloud
-cp .env.example .env  # Edit as needed
-docker compose up -d
-```
-See [nextcloud/README.md](nextcloud/README.md) for details.
-## Utilities
-### Organization Repository Cloner
-Batch clone all repositories from a GitHub organization.
-**Location**: `bin/org-clone.sh`
-**Usage**:
-```bash
-./bin/org-clone.sh <org-name>
-```
-## Prerequisites
-- Docker & Docker Compose
-- Bash 4.0+
-- Git
-## Project Structure
-```
-automa/
-├── bin/                 # Utility scripts
-│   └── lib/common.sh    # Shared logging + env helpers
-├── services/            # Infrastructure deploy scripts (reads infra .env)
-│   ├── email/deploy.sh
-│   ├── nginx/deploy.sh
-│   ├── shadowsocks/
-│   │   ├── server/deploy.sh
-│   │   └── client/deploy.sh
-│   ├── sing-box/
-│   │   ├── server/deploy.sh
-│   │   └── client/deploy.sh
-│   ├── frp/
-│   │   ├── server/deploy.sh
-│   │   └── client/deploy.sh
-│   ├── tnt/deploy.sh
-│   ├── minio/deploy.sh
-│   └── galene/deploy.sh
-├── minecraft/           # Minecraft server (Docker)
-├── teamspeak/           # TeamSpeak server (Docker)
-├── nextcloud/           # Nextcloud (Docker)
-└── README.md
-```
-## Common Operations
-All services follow consistent patterns:
-### Start a Service
-```bash
-cd <service-name>
-docker compose up -d
-```
-### View Logs
-```bash
-cd <service-name>
-docker compose logs -f
-```
-### Stop a Service
-```bash
-cd <service-name>
-docker compose down
-```
-### Update a Service
-```bash
-cd <service-name>
-docker compose pull
-docker compose up -d
-```
-## Security Notes
-- Always change default passwords in `.env` files
-- Keep `.env` files out of version control
-- Use strong passwords for production deployments
-- Review exposed ports before deployment
+Data is stored in each project's `data/` directory. Back up before removing if needed.
 ## Contributing
-Contributions welcome. Keep changes:
-- Simple and focused
-- Well-documented
-- Following existing patterns
-- Unix philosophy aligned
-See existing projects for reference.
+Contributions welcome! To add a new project:
+1. Create a directory with `compose.yaml` (include health checks)
+2. Add `.env.example` with metadata headers (`@name`, `@desc`, `@url`, `@port`)
+3. Open a pull request
 ## License
-MIT License - See [LICENSE](LICENSE) file for details.
+[MIT](LICENSE) &copy; [m1ngsama](https://github.com/m1ngsama)

automa (new executable file)

@@ -0,0 +1,699 @@
#!/usr/bin/env bash
# automa - interactive Docker Compose project deployer
#
# Quick start:
# curl -fsSL https://raw.githubusercontent.com/m1ngsama/automa/main/install.sh | bash
# cd ~/automa && ./automa deploy
set -euo pipefail
AUTOMA_VERSION="1.0.0"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
YES=0 # set to 1 by -y/--yes to skip all prompts
# ============================================================================
# Terminal
# ============================================================================
if [[ -t 1 ]]; then
RED='\033[0;31m' GREEN='\033[0;32m' YELLOW='\033[1;33m'
BLUE='\033[0;34m' CYAN='\033[0;36m'
BOLD='\033[1m' DIM='\033[2m' NC='\033[0m'
COLS=$(tput cols 2>/dev/null || echo 80)
else
RED='' GREEN='' YELLOW='' BLUE='' CYAN=''
BOLD='' DIM='' NC=''
COLS=80
fi
# ============================================================================
# Output helpers
# ============================================================================
info() { echo -e " ${GREEN}\xe2\x9c\x94${NC} $*"; }
warn() { echo -e " ${YELLOW}\xe2\x9a\xa0${NC} $*"; }
error() { echo -e " ${RED}\xe2\x9c\x98${NC} $*" >&2; }
step() { echo -e " ${CYAN}\xe2\x96\xb6${NC} ${BOLD}$*${NC}"; }
dim() { echo -e " ${DIM}$*${NC}"; }
divider() {
printf " ${DIM}"
printf '%.0s\xe2\x94\x80' $(seq 1 $(( (COLS - 4) / 3 + 1 )) )
printf "${NC}\n"
}
# Spinner for long operations
spinner() {
local pid=$1 msg="${2:-Working...}"
local frames=('⠋' '⠙' '⠹' '⠸' '⠼' '⠴' '⠦' '⠧' '⠇' '⠏')
local i=0
while kill -0 "$pid" 2>/dev/null; do
printf "\r ${CYAN}%s${NC} %s" "${frames[$i]}" "$msg"
i=$(( (i + 1) % ${#frames[@]} ))
sleep 0.1
done
wait "$pid"
local rc=$?
printf "\r\033[2K" # clear line
return $rc
}
run_with_spinner() {
local msg="$1"; shift
"$@" &>/dev/null &
local pid=$!
if spinner "$pid" "$msg"; then
info "$msg"
return 0
else
error "$msg"
return 1
fi
}
# ============================================================================
# Prerequisites
# ============================================================================
check_docker() {
if ! command -v docker &>/dev/null; then
echo ""
error "Docker is not installed"
echo ""
dim "Install Docker:"
dim " curl -fsSL https://get.docker.com | sh"
dim ""
dim "Or visit: https://docs.docker.com/engine/install/"
echo ""
exit 1
fi
if ! docker compose version &>/dev/null; then
echo ""
error "Docker Compose plugin is not installed"
dim "Install: https://docs.docker.com/compose/install/"
echo ""
exit 1
fi
}
# ============================================================================
# Project helpers
# ============================================================================
PROJECTS=()
discover_projects() {
PROJECTS=()
for dir in "$SCRIPT_DIR"/*/; do
[[ -f "$dir/compose.yaml" ]] && PROJECTS+=("$(basename "$dir")")
done
}
project_exists() { [[ -f "$SCRIPT_DIR/$1/compose.yaml" ]]; }
# Parse @key from .env.example header
project_meta() {
local slug="$1" key="$2"
local env_example="$SCRIPT_DIR/$slug/.env.example"
[[ -f "$env_example" ]] || return
while IFS= read -r line; do
if [[ "$line" =~ ^#\ @${key}\ (.+) ]]; then
echo "${BASH_REMATCH[1]}"
return
fi
[[ ! "$line" =~ ^# ]] && return # stop at first non-comment
done < "$env_example"
}
# Collect all @note lines
project_notes() {
local slug="$1"
local env_example="$SCRIPT_DIR/$slug/.env.example"
[[ -f "$env_example" ]] || return
while IFS= read -r line; do
[[ "$line" =~ ^#\ @note\ (.+) ]] && echo "${BASH_REMATCH[1]}"
[[ ! "$line" =~ ^# ]] && return
done < "$env_example"
}
project_status() {
local slug="$1"
if [[ ! -f "$SCRIPT_DIR/$slug/.env" ]]; then
echo "not_configured"
elif compose "$slug" ps --status running -q 2>/dev/null | grep -q .; then  # -q: IDs only, so a header line can't count as a match
echo "running"
else
echo "stopped"
fi
}
status_badge() {
case "$1" in
running) echo -e "${GREEN}● running${NC}" ;;
stopped) echo -e "${YELLOW}● stopped${NC}" ;;
not_configured) echo -e "${DIM}○ not configured${NC}" ;;
esac
}
# compose wrapper
compose() {
local slug="$1"; shift
local dir="$SCRIPT_DIR/$slug"
local args=(-f "$dir/compose.yaml")
[[ -f "$dir/.env" ]] && args+=(--env-file "$dir/.env")
docker compose "${args[@]}" "$@"
}
# Get access URL after deploy
access_hint() {
local slug="$1"
local port_var
port_var=$(project_meta "$slug" "port")
[[ -z "$port_var" ]] && return
local env_file="$SCRIPT_DIR/$slug/.env"
[[ -f "$env_file" ]] || return
local port_val
port_val=$(grep "^${port_var}=" "$env_file" 2>/dev/null | cut -d= -f2-) || true
[[ -z "$port_val" ]] && return
if [[ "$port_val" == *:* ]]; then
echo "http://${port_val}"
elif [[ "$port_val" == */* ]]; then
echo "$port_val"
else
echo "http://localhost:${port_val}"
fi
}
# ============================================================================
# Interactive .env configuration
# ============================================================================
generate_password() {
# If tr fails (e.g. no /dev/urandom), head still exits 0 with empty output,
# so the openssl fallback must key off the captured value, not the pipeline.
local pw
pw=$(LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom 2>/dev/null | head -c 24) || true
[[ -n "$pw" ]] && { echo "$pw"; return; }
openssl rand -hex 12
}
configure_env() {
local slug="$1"
local dir="$SCRIPT_DIR/$slug"
local env_example="$dir/.env.example"
local env_file="$dir/.env"
if [[ ! -f "$env_example" ]]; then
warn "No .env.example found, skipping"
return 0
fi
local name
name=$(project_meta "$slug" "name")
name="${name:-$slug}"
# Handle existing .env
if [[ -f "$env_file" ]]; then
if [[ $YES -eq 1 ]]; then
info "Keeping existing .env for ${name}"
return 0
fi
echo ""
dim ".env already exists for ${BOLD}${name}${NC}"
echo ""
echo -e " ${BOLD}k${NC} Keep current configuration"
echo -e " ${BOLD}r${NC} Reconfigure from scratch"
echo -e " ${BOLD}v${NC} View current values"
echo ""
while true; do
read -rp " Choose [k]: " choice
case "${choice:-k}" in
k) info "Keeping existing .env"; return 0 ;;
r) break ;;
v)
echo ""
divider
while IFS= read -r line; do
echo -e " ${DIM}${line}${NC}"
done < "$env_file"
divider
echo ""
;;
*) dim "Enter k, r, or v" ;;
esac
done
fi
echo ""
step "Configure ${name}"
local desc
desc=$(project_meta "$slug" "desc")
local url
url=$(project_meta "$slug" "url")
[[ -n "$desc" ]] && dim "$desc"
[[ -n "$url" ]] && dim "Docs: ${url}"
# Show notes
local has_notes=0
while IFS= read -r note; do
[[ -z "$note" ]] && continue
[[ $has_notes -eq 0 ]] && echo ""
echo -e " ${YELLOW}!${NC} ${DIM}${note}${NC}"
has_notes=1
done < <(project_notes "$slug")
local tmp_env
tmp_env="$(mktemp)"
if [[ $YES -eq 1 ]]; then
# Non-interactive: accept all defaults, generate secrets
while IFS= read -r line; do
[[ -z "$line" || "$line" =~ ^# ]] && continue
local key="${line%%=*}"
local default="${line#*=}"
if [[ -z "$default" && "$key" =~ PASSWORD|SECRET|TOKEN|AUTHKEY ]]; then
default=$(generate_password)
fi
echo "${key}=${default}" >> "$tmp_env"
done < "$env_example"
info "Configuration generated for ${name}"
else
echo ""
divider
dim "Press ${BOLD}Enter${NC}${DIM} to accept [default] values${NC}"
echo ""
while IFS= read -r line; do
# Blank line
[[ -z "$line" ]] && continue
# Skip @metadata
[[ "$line" =~ ^#\ @(name|desc|url|port|note) ]] && continue
# Comment → show as hint
if [[ "$line" =~ ^#.* ]]; then
echo -e " ${DIM}${line#\# }${NC}"
continue
fi
local key="${line%%=*}"
local default="${line#*=}"
if [[ -n "$default" ]]; then
read -rp " ${BOLD}${key}${NC} [${DIM}${default}${NC}]: " val
echo "${key}=${val:-$default}" >> "$tmp_env"
else
# Required — check if it's a secret
if [[ "$key" =~ PASSWORD|SECRET|TOKEN|AUTHKEY ]]; then
echo -e " ${DIM}Leave blank to auto-generate${NC}"
read -rp " ${BOLD}${key}${NC}: " val
if [[ -z "$val" ]]; then
val=$(generate_password)
echo -e " ${DIM}Generated: ${val}${NC}"
fi
else
while true; do
read -rp " ${BOLD}${key}${NC} ${RED}(required)${NC}: " val
[[ -n "$val" ]] && break
echo -e " ${RED}This field cannot be empty${NC}"
done
fi
echo "${key}=${val}" >> "$tmp_env"
fi
done < "$env_example"
echo ""
divider
fi
mv "$tmp_env" "$env_file"
chmod 600 "$env_file"
info "Configuration saved"
}
# ============================================================================
# Commands
# ============================================================================
cmd_list() {
banner
check_docker
discover_projects
if [[ ${#PROJECTS[@]} -eq 0 ]]; then
warn "No projects found"
return 1
fi
local i=1
for slug in "${PROJECTS[@]}"; do
local st
st=$(project_status "$slug")
local badge
badge=$(status_badge "$st")
local desc
desc=$(project_meta "$slug" "desc")
printf " ${BOLD}%2d${NC} %-20s %b\n" "$i" "$slug" "$badge"
[[ -n "$desc" ]] && echo -e " ${DIM}${desc}${NC}"
((i++))
done
echo ""
}
cmd_deploy() {
banner
check_docker
discover_projects
if [[ ${#PROJECTS[@]} -eq 0 ]]; then
error "No projects found"
return 1
fi
# Direct deploy
if [[ $# -gt 0 ]]; then
local ok=0 fail=0
for name in "$@"; do
echo ""
if deploy_project "$name"; then ((++ok)); else ((++fail)); fi
done
deploy_summary $ok $fail
return
fi
# Interactive
step "Select projects to deploy"
echo ""
local i=1
for slug in "${PROJECTS[@]}"; do
local st
st=$(project_status "$slug")
local badge
badge=$(status_badge "$st")
local desc
desc=$(project_meta "$slug" "desc")
printf " ${BOLD}%2d${NC} %-20s %b\n" "$i" "$slug" "$badge"
[[ -n "$desc" ]] && echo -e " ${DIM}${desc}${NC}"
((i++))
done
echo ""
dim "Enter numbers separated by spaces, e.g. ${BOLD}1 3 5${NC}"
dim "Type ${BOLD}all${NC} to deploy everything, or ${BOLD}q${NC} to quit"
echo ""
read -rp " > " selection
[[ -z "$selection" || "$selection" == "q" ]] && return 0
local selected=()
if [[ "$selection" == "all" ]]; then
selected=("${PROJECTS[@]}")
else
for num in $selection; do
if [[ "$num" =~ ^[0-9]+$ ]] && ((num > 0 && num <= ${#PROJECTS[@]})); then
selected+=("${PROJECTS[$((num-1))]}")
else
warn "Skipping invalid: $num"
fi
done
fi
[[ ${#selected[@]} -eq 0 ]] && return 0
# Confirmation
echo ""
divider
step "Will deploy:"
for s in "${selected[@]}"; do
local desc
desc=$(project_meta "$s" "desc")
echo -e " ${CYAN}▶${NC} ${s} ${DIM}${desc:-}${NC}"
done
echo ""
if [[ $YES -eq 0 ]]; then
read -rp " Proceed? [Y/n] " confirm
[[ "${confirm:-y}" =~ ^[Nn] ]] && { echo ""; dim "Cancelled."; return 0; }
fi
divider
local ok=0 fail=0
for name in "${selected[@]}"; do
echo ""
if deploy_project "$name"; then ((++ok)); else ((++fail)); fi
done
deploy_summary $ok $fail
}
deploy_project() {
local slug="$1"
if ! project_exists "$slug"; then
error "Project not found: ${slug}"
dim "Run ${BOLD}automa list${NC}${DIM} to see available projects${NC}"
return 1
fi
local name
name=$(project_meta "$slug" "name")
name="${name:-$slug}"
local desc
desc=$(project_meta "$slug" "desc")
step "${name}"
[[ -n "$desc" ]] && dim "${desc}"
configure_env "$slug"
if [[ ! -f "$SCRIPT_DIR/$slug/.env" ]]; then
error "No .env — run: ${BOLD}automa config $slug${NC}"
return 1
fi
echo ""
if run_with_spinner "Pulling images..." compose "$slug" pull; then
if run_with_spinner "Starting containers..." compose "$slug" up -d; then
local url
url=$(access_hint "$slug")
[[ -n "$url" ]] && dim "Access: ${BOLD}${url}${NC}"
return 0
fi
fi
return 1
}
deploy_summary() {
local ok=$1 fail=$2
echo ""
divider
echo ""
if [[ $fail -eq 0 ]]; then
info "${BOLD}All done!${NC} ${ok} project(s) deployed"
else
warn "${BOLD}Done.${NC} ${GREEN}${ok} deployed${NC}, ${RED}${fail} failed${NC}"
fi
echo ""
dim "Useful commands:"
dim " ${BOLD}automa status${NC}${DIM} — check running state${NC}"
dim " ${BOLD}automa logs${NC}${DIM} <project> — view logs${NC}"
dim " ${BOLD}automa update${NC}${DIM} <project> — pull & restart${NC}"
echo ""
}
cmd_stop() {
local slug="${1:-}"
if [[ -z "$slug" ]]; then
error "Usage: ${BOLD}automa stop <project>${NC}"
dim "Run ${BOLD}automa list${NC}${DIM} to see available projects${NC}"
return 1
fi
if ! project_exists "$slug"; then
error "Project not found: ${slug}"
return 1
fi
run_with_spinner "Stopping ${slug}..." compose "$slug" down
}
cmd_logs() {
local slug="${1:-}"
if [[ -z "$slug" ]]; then
error "Usage: ${BOLD}automa logs <project>${NC}"
return 1
fi
shift
if ! project_exists "$slug"; then
error "Project not found: ${slug}"
return 1
fi
compose "$slug" logs -f "$@"
}
cmd_status() {
banner
check_docker
discover_projects
if [[ ${#PROJECTS[@]} -eq 0 ]]; then
warn "No projects found"
return 1
fi
for slug in "${PROJECTS[@]}"; do
local st name
st=$(project_status "$slug")
name=$(project_meta "$slug" "name")
name="${name:-$slug}"
local badge
badge=$(status_badge "$st")
echo -e " ${BOLD}${name}${NC} ${badge}"
if [[ "$st" == "running" ]]; then
compose "$slug" ps --format "table {{.Name}}\t{{.Status}}" 2>/dev/null \
| tail -n +2 \
| while IFS= read -r line; do
echo -e " ${DIM}${line}${NC}"
done
fi
echo ""
done
}
cmd_restart() {
local slug="${1:-}"
if [[ -z "$slug" ]]; then
error "Usage: ${BOLD}automa restart <project>${NC}"
return 1
fi
if ! project_exists "$slug"; then
error "Project not found: ${slug}"
return 1
fi
run_with_spinner "Restarting ${slug}..." compose "$slug" restart
}
cmd_config() {
local slug="${1:-}"
if [[ -z "$slug" ]]; then
error "Usage: ${BOLD}automa config <project>${NC}"
return 1
fi
if ! project_exists "$slug"; then
error "Project not found: ${slug}"
return 1
fi
configure_env "$slug"
}
cmd_update() {
local slug="${1:-}"
if [[ -z "$slug" ]]; then
error "Usage: ${BOLD}automa update <project>${NC}"
return 1
fi
if ! project_exists "$slug"; then
error "Project not found: ${slug}"
return 1
fi
local name
name=$(project_meta "$slug" "name")
name="${name:-$slug}"
echo ""
step "Updating ${name}"
run_with_spinner "Pulling latest images..." compose "$slug" pull
run_with_spinner "Recreating containers..." compose "$slug" up -d
echo ""
info "Update complete"
echo ""
}
banner() {
echo ""
echo -e " ${BOLD}${CYAN}automa${NC} ${DIM}v${AUTOMA_VERSION}${NC}"
dim "Self-hosted Docker Compose deployer"
echo ""
}
cmd_help() {
banner
cat <<EOF
${BOLD}Usage${NC}
automa <command> [options]
${BOLD}Global flags${NC}
${BOLD}-y, --yes${NC} Skip all prompts (accept defaults, auto-generate secrets)
${BOLD}Commands${NC}
${BOLD}deploy${NC} [project...] Deploy projects interactively or by name
${BOLD}list${NC} List all projects and their status
${BOLD}status${NC} Show running containers
${BOLD}config${NC} <project> Configure environment variables
${BOLD}stop${NC} <project> Stop a running project
${BOLD}restart${NC} <project> Restart a project
${BOLD}update${NC} <project> Pull latest images and recreate
${BOLD}logs${NC} <project> Follow container logs
${BOLD}help${NC} Show this help message
${BOLD}Examples${NC}
${DIM}\$${NC} automa deploy ${DIM}# interactive selection${NC}
${DIM}\$${NC} automa deploy forgejo nextcloud ${DIM}# deploy by name${NC}
${DIM}\$${NC} automa -y deploy forgejo ${DIM}# non-interactive (CI/scripts)${NC}
${DIM}\$${NC} automa status ${DIM}# overview dashboard${NC}
${DIM}\$${NC} automa logs minecraft ${DIM}# follow logs${NC}
${BOLD}Quick start${NC}
${DIM}\$${NC} curl -fsSL https://raw.githubusercontent.com/m1ngsama/automa/main/install.sh | bash
${DIM}\$${NC} cd ~/automa && ./automa deploy
EOF
}
# ============================================================================
# Main
# ============================================================================
main() {
# Parse global flags
while [[ "${1:-}" =~ ^- ]]; do
case "$1" in
-y|--yes) YES=1; shift ;;
-h|--help) cmd_help; return ;;
-v|--version) echo "automa v${AUTOMA_VERSION}"; return ;;
*) break ;;
esac
done
local cmd="${1:-}"
[[ -z "$cmd" ]] && { cmd_help; return; }
shift
case "$cmd" in
deploy) cmd_deploy "$@" ;;
list|ls) cmd_list ;;
status|st|ps) cmd_status ;;
stop|down) cmd_stop "$@" ;;
restart) cmd_restart "$@" ;;
logs|log) cmd_logs "$@" ;;
config|configure) cmd_config "$@" ;;
update|upgrade) cmd_update "$@" ;;
help|-h|--help) cmd_help ;;
version|-v|--version) echo "automa v${AUTOMA_VERSION}" ;;
*)
error "Unknown command: ${cmd}"
dim "Run ${BOLD}automa help${NC}${DIM} for usage${NC}"
exit 1
;;
esac
}
main "$@"
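The `.env.example` header convention that `project_meta` expects can be exercised in isolation; a minimal sketch with a throwaway project file (all names are placeholders, not part of the repo):

```shell
#!/usr/bin/env bash
# Sketch: the "# @key value" header convention parsed by project_meta.
# Metadata must sit in the leading comment block; parsing stops at the
# first non-comment line, so @keys after real assignments are ignored.
dir=$(mktemp -d)
cat > "$dir/.env.example" <<'EOF'
# @name Demo
# @desc Example service
DEMO_PORT=8080
# @note this line is below the header and is never seen
EOF
meta() {
  local key="$1" line
  while IFS= read -r line; do
    [[ "$line" =~ ^#\ @${key}\ (.+) ]] && { echo "${BASH_REMATCH[1]}"; return; }
    [[ "$line" =~ ^# ]] || return 0   # stop at first non-comment line
  done < "$dir/.env.example"
}
echo "name=$(meta name)"   # prints: name=Demo
echo "note=$(meta note)"   # prints: note=
```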


@@ -1,295 +0,0 @@
#!/usr/bin/env bash
# Backup utility for all services
# Usage: ./bin/backup.sh [command] [service]
set -euo pipefail
# Source shared library and config
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
source "$SCRIPT_DIR/lib/common.sh"
source "$PROJECT_ROOT/config.sh"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
readonly TIMESTAMP
# ============================================================================
# Pre-flight checks
# ============================================================================
check_prerequisites() {
if ! require_command "docker"; then
log_error "Docker is required for backup operations"
exit 1
fi
}
check_container_running() {
local container_name="$1"
local service_name="$2"
if ! check_container_health "$container_name"; then
log_warn "$service_name container ($container_name) is not running"
log_warn "Some backup operations may fail"
return 1
fi
return 0
}
# ============================================================================
# Backup functions
# ============================================================================
backup_minecraft() {
log_info "Backing up Minecraft server..."
local backup_dir="$BACKUP_ROOT/minecraft/$TIMESTAMP"
ensure_dir "$backup_dir"
# Check if container is running (warning only)
check_container_running "$CONTAINER_MINECRAFT" "Minecraft" || true
# Backup world data
if [[ -d "$PROJECT_ROOT/minecraft/data" ]]; then
log_info " Archiving world data..."
tar -czf "$backup_dir/world-data.tar.gz" -C "$PROJECT_ROOT/minecraft" data 2>/dev/null || {
log_error " Failed to backup world data"
return 1
}
log_info " ✓ World data backed up"
else
log_warn " No world data directory found"
fi
# Backup configs
if [[ -d "$PROJECT_ROOT/minecraft/configs" ]]; then
log_info " Archiving configs..."
tar -czf "$backup_dir/configs.tar.gz" -C "$PROJECT_ROOT/minecraft" configs 2>/dev/null || {
log_warn " Failed to backup configs"
}
fi
# Create manifest
cat > "$backup_dir/manifest.txt" <<EOF
Minecraft Backup
Created: $(date)
Location: $backup_dir
Container: $CONTAINER_MINECRAFT
Contents:
- World data
- Configuration files
EOF
log_info " ✓ Backup complete: $backup_dir"
}
backup_teamspeak() {
log_info "Backing up TeamSpeak server..."
local backup_dir="$BACKUP_ROOT/teamspeak/$TIMESTAMP"
ensure_dir "$backup_dir"
# Check if container is running
check_container_running "$CONTAINER_TEAMSPEAK" "TeamSpeak" || true
# Export Docker volume
if docker volume inspect teamspeak_data &>/dev/null; then
log_info " Exporting volume data..."
docker run --rm -v teamspeak_data:/data -v "$(cd "$backup_dir" && pwd)":/backup \
alpine tar -czf /backup/teamspeak-data.tar.gz -C /data . 2>/dev/null || {
log_error " Failed to export volume"
return 1
}
log_info " ✓ Volume data backed up"
else
log_warn " No TeamSpeak volume found"
fi
log_info " ✓ Backup complete: $backup_dir"
}
backup_nextcloud() {
log_info "Backing up Nextcloud..."
local backup_dir="$BACKUP_ROOT/nextcloud/$TIMESTAMP"
ensure_dir "$backup_dir"
# Load Nextcloud environment if available
local nextcloud_env="$PROJECT_ROOT/nextcloud/.env"
if [[ -f "$nextcloud_env" ]]; then
log_info " Loading Nextcloud environment..."
load_env "$nextcloud_env"
else
log_warn " No .env file found at $nextcloud_env"
log_warn " Using default credentials (not recommended)"
fi
# Validate required environment variables
if [[ -z "${MYSQL_PASSWORD:-}" ]]; then
log_warn " MYSQL_PASSWORD not set, database backup may fail"
fi
# Check if database container is running
if ! check_container_running "$CONTAINER_NEXTCLOUD_DB" "Nextcloud DB"; then
log_error " Database container must be running for backup"
return 1
fi
# Backup database (use environment variable, no default password)
log_info " Backing up database..."
if [[ -n "${MYSQL_PASSWORD:-}" ]]; then
docker exec "$CONTAINER_NEXTCLOUD_DB" mariadb-dump \
-u"${MYSQL_USER:-nextcloud}" \
-p"$MYSQL_PASSWORD" \
--single-transaction \
"${MYSQL_DATABASE:-nextcloud}" > "$backup_dir/database.sql" 2>/dev/null || {
log_error " Database backup failed"
}
else
log_error " Skipping database backup: MYSQL_PASSWORD not set"
fi
# Export volumes
for vol in nextcloud_html nextcloud_data nextcloud_config nextcloud_apps; do
if docker volume inspect "$vol" &>/dev/null; then
log_info " Exporting $vol..."
docker run --rm -v "$vol":/data -v "$(cd "$backup_dir" && pwd)":/backup \
alpine tar -czf "/backup/${vol}.tar.gz" -C /data . 2>/dev/null || {
log_warn " Failed to export $vol"
}
fi
done
# Create manifest
cat > "$backup_dir/manifest.txt" <<EOF
Nextcloud Backup
Created: $(date)
Location: $backup_dir
Containers: $CONTAINER_NEXTCLOUD, $CONTAINER_NEXTCLOUD_DB, $CONTAINER_NEXTCLOUD_REDIS
Contents:
- MariaDB database dump
- Application volumes
- User data
EOF
log_info " ✓ Backup complete: $backup_dir"
}
# ============================================================================
# Utility functions
# ============================================================================
list_backups() {
log_info "Available backups:"
echo
local found=0
for service in minecraft teamspeak nextcloud; do
if [[ -d "$BACKUP_ROOT/$service" ]]; then
found=1
echo "=== $service ==="
ls -lh "$BACKUP_ROOT/$service" 2>/dev/null | tail -n +2 || echo " (empty)"
echo
fi
done
if [[ $found -eq 0 ]]; then
log_info "No backups found in $BACKUP_ROOT"
fi
}
cleanup_old_backups() {
local keep_days="${1:-$BACKUP_RETENTION_DAYS}"
if [[ ! -d "$BACKUP_ROOT" ]]; then
log_info "No backup directory found"
return 0
fi
log_info "Cleaning up backups older than $keep_days days..."
local count_before
count_before=$(find "$BACKUP_ROOT" -type f -name "*.tar.gz" 2>/dev/null | wc -l)
find "$BACKUP_ROOT" -type f -name "*.tar.gz" -mtime +"$keep_days" -delete 2>/dev/null || true
find "$BACKUP_ROOT" -type f -name "*.sql" -mtime +"$keep_days" -delete 2>/dev/null || true
find "$BACKUP_ROOT" -type f -name "manifest.txt" -mtime +"$keep_days" -delete 2>/dev/null || true
find "$BACKUP_ROOT" -type d -empty -delete 2>/dev/null || true
local count_after
count_after=$(find "$BACKUP_ROOT" -type f -name "*.tar.gz" 2>/dev/null | wc -l)
local removed=$((count_before - count_after))
log_info " ✓ Cleanup complete (removed $removed archive(s))"
}
# ============================================================================
# Main
# ============================================================================
show_usage() {
cat <<EOF
Usage: $0 <command> [options]
Commands:
backup [service] Create backup (default: all)
list List available backups
cleanup [days] Remove backups older than N days (default: $BACKUP_RETENTION_DAYS)
Services:
minecraft, teamspeak, nextcloud, all
Examples:
$0 backup minecraft
$0 backup all
$0 list
$0 cleanup 30
Environment:
BACKUP_ROOT Backup directory (default: ./backups)
BACKUP_RETENTION_DAYS Days to keep backups (default: 7)
EOF
exit 1
}
main() {
check_prerequisites
local action="${1:-backup}"
local service="${2:-all}"
case "$action" in
backup)
case "$service" in
minecraft)
backup_minecraft
;;
teamspeak)
backup_teamspeak
;;
nextcloud)
backup_nextcloud
;;
all)
backup_minecraft || true
backup_teamspeak || true
backup_nextcloud || true
;;
*)
log_error "Unknown service: $service"
show_usage
;;
esac
;;
list)
list_backups
;;
cleanup)
cleanup_old_backups "${2:-$BACKUP_RETENTION_DAYS}"  # $service defaults to "all", which is not a day count
;;
-h|--help|help)
show_usage
;;
*)
log_error "Unknown command: $action"
show_usage
;;
esac
}
main "$@"
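The `tar -czf … -C <dir> <name>` pattern used throughout the backup functions restores with the mirror `tar -xzf`; a self-contained round-trip with throwaway paths (no docker required):

```shell
#!/usr/bin/env bash
# Round-trip sketch of the archive format backup.sh writes.
set -euo pipefail
src=$(mktemp -d); out=$(mktemp -d); restore=$(mktemp -d)
mkdir -p "$src/data"
echo "hello world" > "$src/data/level.dat"
# Archive step — same flags as backup_minecraft: -C keeps paths relative
tar -czf "$out/world-data.tar.gz" -C "$src" data
# Restore step — extract into any target directory
tar -xzf "$out/world-data.tar.gz" -C "$restore"
cat "$restore/data/level.dat"   # prints: hello world
```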


@@ -1,111 +0,0 @@
#!/usr/bin/env bash
# Health check script for all services
# Usage: ./bin/healthcheck.sh [service]
set -euo pipefail
# Source shared library and config
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
source "$SCRIPT_DIR/lib/common.sh"
source "$PROJECT_ROOT/config.sh"
check_minecraft() {
log_info "Checking Minecraft server..."
if check_container_health "$CONTAINER_MINECRAFT"; then
log_info " ✓ Container is running"
else
log_error " ✗ Container is not running"
return 1
fi
if check_port "localhost" "$PORT_MINECRAFT"; then
log_info " ✓ Server port $PORT_MINECRAFT is accessible"
else
log_warn " ⚠ Server port $PORT_MINECRAFT is not accessible"
fi
if check_port "localhost" "$PORT_MINECRAFT_RCON"; then
log_info " ✓ RCON port $PORT_MINECRAFT_RCON is accessible"
else
log_warn " ⚠ RCON port $PORT_MINECRAFT_RCON is not accessible"
fi
}
check_teamspeak() {
log_info "Checking TeamSpeak server..."
if check_container_health "$CONTAINER_TEAMSPEAK"; then
log_info " ✓ Container is running"
else
log_error " ✗ Container is not running"
return 1
fi
if check_port "localhost" "$PORT_TEAMSPEAK_QUERY"; then
log_info " ✓ Query port $PORT_TEAMSPEAK_QUERY is accessible"
else
log_warn " ⚠ Port $PORT_TEAMSPEAK_QUERY is not accessible"
fi
}
check_nextcloud() {
log_info "Checking Nextcloud..."
if check_container_health "$CONTAINER_NEXTCLOUD"; then
log_info " ✓ Nextcloud container is running"
else
log_error " ✗ Nextcloud container is not running"
return 1
fi
if check_container_health "$CONTAINER_NEXTCLOUD_DB"; then
log_info " ✓ Database container is running"
else
log_error " ✗ Database container is not running"
fi
if check_container_health "$CONTAINER_NEXTCLOUD_REDIS"; then
log_info " ✓ Redis container is running"
else
log_warn " ⚠ Redis container is not running"
fi
if check_port "localhost" "$PORT_NEXTCLOUD_WEB"; then
log_info " ✓ Web interface port $PORT_NEXTCLOUD_WEB is accessible"
else
log_warn " ⚠ Port $PORT_NEXTCLOUD_WEB is not accessible"
fi
}
main() {
local service="${1:-all}"
case "$service" in
minecraft)
check_minecraft
;;
teamspeak)
check_teamspeak
;;
nextcloud)
check_nextcloud
;;
all)
echo "=== Health Check Report ==="
echo
check_minecraft || true
echo
check_teamspeak || true
echo
check_nextcloud || true
;;
*)
echo "Usage: $0 [minecraft|teamspeak|nextcloud|all]"
exit 1
;;
esac
}
main "$@"
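All of the port probes above funnel through `check_port` from `lib/common.sh`, which relies on bash's built-in `/dev/tcp` pseudo-device instead of `nc` or `curl`; the technique, standalone (port number is just an illustration):

```shell
#!/usr/bin/env bash
# Pure-bash TCP probe: redirecting into /dev/tcp/<host>/<port> makes bash
# attempt a connection; `timeout` bounds the wait on filtered ports.
port_open() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/${1}/${2}" 2>/dev/null
}
if port_open localhost 25565; then
  echo "minecraft port reachable"
else
  echo "minecraft port not reachable"
fi
```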


@@ -1,119 +0,0 @@
#!/usr/bin/env bash
# Shared utility library for all scripts
# Source this file: source "$(dirname "$0")/lib/common.sh"
# Prevent multiple sourcing
[[ -n "${_COMMON_SH_LOADED:-}" ]] && return
readonly _COMMON_SH_LOADED=1
# ============================================================================
# Color definitions
# ============================================================================
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly BLUE='\033[0;34m'
readonly NC='\033[0m' # No Color
# ============================================================================
# Logging functions
# ============================================================================
log_info() { echo -e "${GREEN}[INFO]${NC} $*"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
log_error() { echo -e "${RED}[ERROR]${NC} $*" >&2; }
log_debug() { [[ "${DEBUG:-}" == "1" ]] || return 0; echo -e "${BLUE}[DEBUG]${NC} $*"; }
# ============================================================================
# Container utilities
# ============================================================================
# Check if a container is running
# Usage: check_container_health "container_name"
# Returns: 0 if running, 1 otherwise
check_container_health() {
local container_name="$1"
if ! docker ps --filter "name=$container_name" --format '{{.Names}}' | grep -q "^${container_name}$"; then
return 1
fi
local status
status=$(docker inspect --format='{{.State.Status}}' "$container_name" 2>/dev/null)
[[ "$status" == "running" ]]
}
# Check if a port is accessible
# Usage: check_port "host" "port"
# Returns: 0 if accessible, 1 otherwise
check_port() {
local host="${1:-localhost}"
local port="$2"
if timeout 2 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
return 0
else
return 1
fi
}
# ============================================================================
# File utilities
# ============================================================================
# Ensure a directory exists
# Usage: ensure_dir "/path/to/dir"
ensure_dir() {
local dir="$1"
[[ -d "$dir" ]] || mkdir -p "$dir"
}
# Check if a command exists
# Usage: require_command "docker" "https://docs.docker.com/get-docker/"
require_command() {
local cmd="$1"
local install_url="${2:-}"
if ! command -v "$cmd" &>/dev/null; then
log_error "$cmd is not installed"
[[ -n "$install_url" ]] && log_info "Install from: $install_url"
return 1
fi
return 0
}
# ============================================================================
# Environment utilities
# ============================================================================
# Load .env file if it exists
# Usage: load_env "/path/to/.env"
load_env() {
local env_file="${1:-.env}"
if [[ -f "$env_file" ]]; then
set -a
# shellcheck source=/dev/null
source "$env_file"
set +a
return 0
fi
return 1
}
# Validate that required environment variables are set
# Usage: require_env "VAR1" "VAR2" "VAR3"
require_env() {
local missing=()
for var in "$@"; do
if [[ -z "${!var:-}" ]]; then
missing+=("$var")
fi
done
if [[ ${#missing[@]} -gt 0 ]]; then
log_error "Missing required environment variables: ${missing[*]}"
return 1
fi
return 0
}
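`load_env` leans on `set -a`, which auto-exports every variable assigned while the file is sourced, so the values reach child processes such as `docker compose`; a sketch with a throwaway file:

```shell
#!/usr/bin/env bash
# set -a / set +a bracket: assignments made while sourcing are exported.
envfile=$(mktemp)
printf 'DB_USER=nextcloud\nDB_PORT=3306\n' > "$envfile"
set -a
# shellcheck source=/dev/null
source "$envfile"
set +a
bash -c 'echo "$DB_USER:$DB_PORT"'   # child process sees: nextcloud:3306
```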


@@ -1,120 +0,0 @@
#!/usr/bin/env bash
# Clone all repositories from a GitHub organization
# Requires: gh (GitHub CLI)
#
# Usage: ./org-clone.sh <org-name> [destination-dir]
# Example: ./org-clone.sh myorg ~/repos/myorg
set -euo pipefail
# Source shared library
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/lib/common.sh"
# Check prerequisites
check_prerequisites() {
if ! command -v gh &>/dev/null; then
log_error "GitHub CLI (gh) is not installed"
log_info "Install from: https://cli.github.com/"
exit 1
fi
if ! gh auth status &>/dev/null; then
log_error "Not authenticated with GitHub CLI"
log_info "Run: gh auth login"
exit 1
fi
}
# Show usage
usage() {
cat <<EOF
Usage: $0 <org-name> [destination-dir]
Clone all repositories from a GitHub organization.
Arguments:
org-name GitHub organization name (required)
destination-dir Target directory (default: current directory)
Examples:
$0 myorg
$0 myorg ~/repos/myorg
Requirements:
- GitHub CLI (gh) installed and authenticated
EOF
exit 1
}
# Main function
main() {
local org_name="${1:-}"
local dest_dir="${2:-.}"
# Validate arguments
if [[ -z "$org_name" ]]; then
log_error "Organization name is required"
usage
fi
check_prerequisites
# Create destination directory if needed
if [[ ! -d "$dest_dir" ]]; then
log_info "Creating directory: $dest_dir"
mkdir -p "$dest_dir"
fi
# Change to destination directory
cd "$dest_dir" || {
log_error "Cannot access directory: $dest_dir"
exit 1
}
log_info "Cloning repositories from organization: $org_name"
log_info "Destination: $(pwd)"
# Count repositories
local repo_count
repo_count=$(gh repo list "$org_name" --limit 4000 --json name --jq '. | length')
if [[ "$repo_count" -eq 0 ]]; then
log_warn "No repositories found for organization: $org_name"
exit 0
fi
log_info "Found $repo_count repositories"
# Clone repositories
local success=0
local failed=0
while IFS= read -r repo; do
local repo_name="${repo##*/}"
if [[ -d "$repo_name" ]]; then
log_warn "Skipping $repo_name (already exists)"
continue
fi
log_info "Cloning: $repo"
if gh repo clone "$repo" "$repo_name" 2>/dev/null; then
((++success))  # pre-increment: ((var++)) is falsy at 0 and trips set -e
log_info "✓ Successfully cloned: $repo_name"
else
((++failed))
log_error "✗ Failed to clone: $repo_name"
fi
done < <(gh repo list "$org_name" --limit 4000 --json nameWithOwner --jq '.[].nameWithOwner')
# Summary
echo
log_info "=== Clone Summary ==="
log_info "Success: $success"
if [[ $failed -gt 0 ]]; then log_warn "Failed: $failed"; else log_info "Failed: $failed"; fi
log_info "Total: $repo_count"
}
main "$@"


@@ -1,43 +0,0 @@
#!/usr/bin/env bash
# Centralized configuration for all services
# Source this file to get consistent container names and settings
# Prevent multiple sourcing
[[ -n "${_CONFIG_SH_LOADED:-}" ]] && return
readonly _CONFIG_SH_LOADED=1
# ============================================================================
# Container Names
# ============================================================================
# These are the canonical container names used across all scripts.
# Update here if container names change in docker-compose.yml files.
readonly CONTAINER_MINECRAFT="${CONTAINER_MINECRAFT:-mc-fabric-1.21.1}"
readonly CONTAINER_TEAMSPEAK="${CONTAINER_TEAMSPEAK:-teamspeak-server}"
readonly CONTAINER_NEXTCLOUD="${CONTAINER_NEXTCLOUD:-nextcloud}"
readonly CONTAINER_NEXTCLOUD_DB="${CONTAINER_NEXTCLOUD_DB:-nextcloud-db}"
readonly CONTAINER_NEXTCLOUD_REDIS="${CONTAINER_NEXTCLOUD_REDIS:-nextcloud-redis}"
# ============================================================================
# Service Ports
# ============================================================================
readonly PORT_MINECRAFT=25565
readonly PORT_MINECRAFT_RCON=25575
readonly PORT_TEAMSPEAK_VOICE=9987
readonly PORT_TEAMSPEAK_FILETRANSFER=30033
readonly PORT_TEAMSPEAK_QUERY=10011
readonly PORT_NEXTCLOUD_WEB=8080
# ============================================================================
# Backup Configuration
# ============================================================================
readonly BACKUP_ROOT="${BACKUP_ROOT:-./backups}"
readonly BACKUP_RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-7}"
# ============================================================================
# Helper function to get project root
# ============================================================================
get_project_root() {
local script_path="${BASH_SOURCE[1]:-$0}"
# Subshell so the caller's working directory is untouched
( cd "$(dirname "$script_path")" && pwd )
}

filesuite/.env.example Normal file

@@ -0,0 +1,20 @@
# @name Filesuite
# @desc Cloudreve cloud storage + qBittorrent downloader
# @url https://cloudreve.org
# @port CLOUDREVE_PORT
TZ=Asia/Shanghai
PUID=1000
PGID=1000
# Shared download directory — both services read/write here
# Use an absolute path for external drives (e.g. /mnt/data/downloads)
DOWNLOADS_DIR=./downloads
# Cloudreve — web file manager
CLOUDREVE_PORT=5212
# qBittorrent — torrent client
QB_WEBUI_PORT=8090
# BT listen port — must be forwarded in your router/firewall
QB_BT_PORT=44773

filesuite/compose.yaml Normal file

@@ -0,0 +1,42 @@
services:
cloudreve:
image: cloudreve/cloudreve:latest
container_name: cloudreve
environment:
TZ: "${TZ:-Asia/Shanghai}"
CR_ENABLE_ARIA2: "${CR_ENABLE_ARIA2:-0}"
volumes:
- ./cloudreve-data:/cloudreve/data
- ${DOWNLOADS_DIR:-./downloads}:/data/downloads
ports:
- "${CLOUDREVE_PORT:-5212}:5212"
healthcheck:
test: ["CMD-SHELL", "curl -fSs http://localhost:5212/ || exit 1"]
interval: 30s
timeout: 5s
retries: 3
start_period: 15s
restart: unless-stopped
qbittorrent:
image: lscr.io/linuxserver/qbittorrent:latest
container_name: qbittorrent
environment:
PUID: "${PUID:-1000}"
PGID: "${PGID:-1000}"
TZ: "${TZ:-Asia/Shanghai}"
WEBUI_PORT: "${QB_WEBUI_PORT:-8090}"
volumes:
- ./qbt-config:/config
- ${DOWNLOADS_DIR:-./downloads}:/downloads
ports:
- "${QB_WEBUI_PORT:-8090}:${QB_WEBUI_PORT:-8090}"
- "${QB_BT_PORT:-44773}:${QB_BT_PORT:-44773}"
- "${QB_BT_PORT:-44773}:${QB_BT_PORT:-44773}/udp"
healthcheck:
test: ["CMD-SHELL", "curl -fSs http://localhost:${QB_WEBUI_PORT:-8090}/ || exit 1"]
interval: 30s
timeout: 5s
retries: 3
start_period: 15s
restart: unless-stopped

forgejo/.env.example Normal file

@@ -0,0 +1,11 @@
# @name Forgejo
# @desc Self-hosted Git service (Gitea fork)
# @url https://forgejo.org
# @port FORGEJO_HTTP_PORT
# Web and SSH access ports
FORGEJO_HTTP_PORT=3000
FORGEJO_SSH_PORT=2223
# Public URL — set this to your domain when behind a reverse proxy
FORGEJO_ROOT_URL=http://localhost:3000

forgejo/compose.yaml Normal file

@@ -0,0 +1,25 @@
services:
forgejo:
image: codeberg.org/forgejo/forgejo:9
container_name: forgejo
environment:
USER_UID: "${FORGEJO_UID:-1000}"
USER_GID: "${FORGEJO_GID:-1000}"
FORGEJO__server__ROOT_URL: "${FORGEJO_ROOT_URL:-http://localhost:3000}"
FORGEJO__server__SSH_PORT: "${FORGEJO_SSH_PORT:-2223}"
FORGEJO__server__LFS_START_SERVER: "true"
FORGEJO__database__SQLITE_JOURNAL_MODE: "WAL"
ports:
- "${FORGEJO_HTTP_PORT:-3000}:3000"
- "${FORGEJO_SSH_PORT:-2223}:22"
volumes:
- ./data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/healthz"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
restart: unless-stopped

infrastructure/README.md Normal file
View file

@ -0,0 +1,173 @@
# Infrastructure Services
Core infrastructure components for the automa self-hosted platform.
## Quick Start
### 1. Create Networks
```bash
docker network create automa-proxy
docker network create automa-monitoring
```
### 2. Setup Environment
```bash
# Copy global env file
cp ../.env.example ../.env
# Edit with your values
vim ../.env
```
Required variables:
```bash
DOMAIN=example.com
GRAFANA_ADMIN_PASSWORD=changeme
TZ=Asia/Shanghai
```
### 3. Start Infrastructure
```bash
# Start all at once
cd caddy && docker compose up -d && cd ..
cd monitoring && docker compose up -d && cd ..
cd watchtower && docker compose up -d && cd ..
cd duplicati && docker compose up -d && cd ..
cd fail2ban && docker compose up -d && cd ..
# Or use Makefile
make infra-up
```
### 4. Verify
```bash
docker ps
docker network ls | grep automa
```
## Services
### Caddy (Reverse Proxy)
- **Port**: 80, 443
- **Web**: N/A (proxy only)
- **Config**: `caddy/Caddyfile`
- Auto HTTPS via Let's Encrypt
### Grafana (Monitoring Dashboard)
- **Port**: 3000 (localhost only; proxied by Caddy)
- **Web**: https://grafana.example.com
- **User**: admin
- **Pass**: (from .env)
Import dashboards:
- 11074 - Node Exporter
- 193 - Docker
- 12486 - Loki Logs
### Prometheus (Metrics)
- **Port**: 9090 (localhost only)
- **Web**: http://localhost:9090
- **Config**: `monitoring/prometheus.yml`
### Loki (Logs)
- **Port**: 3100 (internal)
- No direct web UI (use Grafana)
### Duplicati (Remote Backup)
- **Port**: 8200 (localhost only)
- **Web**: http://localhost:8200
- Setup backup jobs via web UI
### Watchtower (Auto Update)
- No ports exposed
- Checks for updates every 24 hours (`WATCHTOWER_POLL_INTERVAL=86400`)
- Only updates containers with label:
```yaml
labels:
- "com.centurylinklabs.watchtower.enable=true"
```
### Fail2ban (Security)
- No ports exposed
- Monitors logs and bans IPs
- Config: `fail2ban/data/jail.d/`
## Network Architecture
```
Internet
   │
   ▼
Caddy (80/443)
├─→ automa-proxy ─→ Nextcloud, Grafana
└─→ automa-monitoring ─→ Prometheus, Loki, etc.
```
## Updating Services
### Manual Update
```bash
cd monitoring
docker compose pull
docker compose up -d
```
### Auto Update (via Watchtower)
- Runs daily automatically
- Only updates labeled containers
- To disable for a service, set label to `false`
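For example, a service that should keep running on a pinned image carries the opt-out label in its compose definition (the same label the monitoring stack uses for Prometheus):

```yaml
services:
  prometheus:
    # Watchtower skips this container because the label is false
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
```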
## Troubleshooting
### Check logs
```bash
docker logs automa-caddy
docker logs automa-prometheus
```
### Restart service
```bash
cd monitoring
docker compose restart grafana
```
### Reset service
```bash
cd monitoring
docker compose down
docker compose up -d
```
### Test Caddy config
```bash
docker exec -it automa-caddy caddy validate --config /etc/caddy/Caddyfile
```
## Resource Usage
Typical usage per service:
| Service | CPU | RAM | Disk |
|---------|-----|-----|------|
| Caddy | 0.1 | 50M | 50M |
| Prometheus | 0.5 | 500M | 10G |
| Grafana | 0.1 | 200M | 500M |
| Loki | 0.2 | 300M | 5G |
| Promtail | 0.02 | 50M | 10M |
| cAdvisor | 0.1 | 100M | 10M |
| Watchtower | 0.01 | 30M | 10M |
| Duplicati | 0.05 | 100M | 100M |
| Fail2ban | 0.02 | 50M | 100M |
| **Total** | **~1.2** | **~1.4G** | **~16G** |
## Security Notes
- Grafana, Prometheus, Loki, and Duplicati ports are bound to 127.0.0.1 (Grafana is reachable externally only through Caddy)
- Add firewall rules to restrict access
- Change default passwords
- Enable 2FA where supported
- Review logs regularly

View file

@ -0,0 +1,39 @@
# Global options
{
# ACME email for Let's Encrypt
email admin@{$DOMAIN}
# Disable admin API in production
admin off
}
# Nextcloud
cloud.{$DOMAIN} {
reverse_proxy nextcloud:80 {
header_up X-Forwarded-Proto {scheme}
header_up X-Real-IP {remote_host}
}
encode gzip
# Security headers
header Strict-Transport-Security "max-age=31536000;"
header X-Content-Type-Options "nosniff"
header X-Frame-Options "SAMEORIGIN"
}
# Grafana (monitoring dashboard)
grafana.{$DOMAIN} {
reverse_proxy grafana:3000
encode gzip
}
# Health check endpoint (no SSL)
http://health.{$DOMAIN} {
respond "OK" 200
}
# Default catch-all
{$DOMAIN} {
respond "Automa Services" 404
}

View file

@ -0,0 +1,42 @@
services:
caddy:
image: caddy:2-alpine
container_name: automa-caddy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp" # HTTP/3
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
environment:
- DOMAIN=${DOMAIN:-example.com}
networks:
- automa-proxy
labels:
- "com.automa.service=caddy"
- "com.centurylinklabs.watchtower.enable=true"
healthcheck:
      test: ["CMD", "caddy", "version"]  # the Caddyfile sets "admin off", so localhost:2019/config/ is not served
interval: 30s
timeout: 10s
retries: 3
volumes:
caddy_data:
name: automa_caddy_data
caddy_config:
name: automa_caddy_config
networks:
automa-proxy:
name: automa-proxy
external: true

View file

@ -0,0 +1,33 @@
services:
duplicati:
image: lscr.io/linuxserver/duplicati:latest
container_name: automa-duplicati
restart: unless-stopped
environment:
- PUID=1000
- PGID=1000
- TZ=${TZ:-Asia/Shanghai}
volumes:
- duplicati_config:/config
- ../../backups:/source:ro # Read-only access to local backups
ports:
- "127.0.0.1:8200:8200" # Only accessible locally
labels:
- "com.automa.service=duplicati"
- "com.centurylinklabs.watchtower.enable=true"
volumes:
duplicati_config:
name: automa_duplicati_config
# Setup:
# 1. Open http://localhost:8200
# 2. Add backup job
# 3. Source: /source (local backups)
# 4. Destination: S3/SFTP/WebDAV/etc
# 5. Schedule: Daily at 3 AM
# 6. Retention: Keep 30 days

View file

@ -0,0 +1,26 @@
services:
fail2ban:
image: crazymax/fail2ban:latest
container_name: automa-fail2ban
restart: unless-stopped
network_mode: host
cap_add:
- NET_ADMIN
- NET_RAW
environment:
- TZ=${TZ:-Asia/Shanghai}
- F2B_LOG_LEVEL=INFO
volumes:
- fail2ban_data:/data
- /var/log:/var/log:ro
labels:
- "com.automa.service=fail2ban"
volumes:
fail2ban_data:
name: automa_fail2ban_data

View file

@ -0,0 +1,137 @@
services:
# Prometheus - Metrics collection
prometheus:
image: prom/prometheus:v2.48-alpine
container_name: automa-prometheus
restart: unless-stopped
ports:
- "127.0.0.1:9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=30d'
- '--storage.tsdb.retention.size=10GB'
- '--web.enable-lifecycle'
networks:
- automa-monitoring
- automa-proxy
labels:
- "com.automa.service=prometheus"
- "com.centurylinklabs.watchtower.enable=false"
# Grafana - Visualization
grafana:
image: grafana/grafana:10-alpine
container_name: automa-grafana
restart: unless-stopped
ports:
- "127.0.0.1:3000:3000"
volumes:
- grafana_data:/var/lib/grafana
- ./grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml:ro
environment:
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD:-changeme}
- GF_ANALYTICS_REPORTING_ENABLED=false
- GF_SERVER_ROOT_URL=https://grafana.${DOMAIN:-example.com}
networks:
- automa-monitoring
- automa-proxy
labels:
- "com.automa.service=grafana"
- "com.centurylinklabs.watchtower.enable=true"
# Loki - Log aggregation
loki:
image: grafana/loki:2-alpine
container_name: automa-loki
restart: unless-stopped
ports:
- "127.0.0.1:3100:3100"
volumes:
- ./loki-config.yml:/etc/loki/loki-config.yml:ro
- loki_data:/loki
command: -config.file=/etc/loki/loki-config.yml
networks:
- automa-monitoring
labels:
- "com.automa.service=loki"
# Promtail - Log collection
promtail:
image: grafana/promtail:2-alpine
container_name: automa-promtail
restart: unless-stopped
volumes:
- ./promtail-config.yml:/etc/promtail/promtail-config.yml:ro
- /var/log:/var/log:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
command: -config.file=/etc/promtail/promtail-config.yml
networks:
- automa-monitoring
labels:
- "com.automa.service=promtail"
# cAdvisor - Container metrics
cadvisor:
image: gcr.io/cadvisor/cadvisor:latest
container_name: automa-cadvisor
restart: unless-stopped
ports:
- "127.0.0.1:8080:8080"
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker:/var/lib/docker:ro
privileged: true
networks:
- automa-monitoring
labels:
- "com.automa.service=cadvisor"
command:
- '--docker_only=true'
- '--housekeeping_interval=30s'
volumes:
prometheus_data:
name: automa_prometheus_data
grafana_data:
name: automa_grafana_data
loki_data:
name: automa_loki_data
networks:
automa-monitoring:
name: automa-monitoring
external: true
automa-proxy:
name: automa-proxy
external: true

View file

@ -0,0 +1,15 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
access: proxy
url: http://prometheus:9090
isDefault: true
editable: false
- name: Loki
type: loki
access: proxy
url: http://loki:3100
editable: false

View file

@ -0,0 +1,34 @@
auth_enabled: false
server:
http_listen_port: 3100
common:
path_prefix: /loki
storage:
filesystem:
chunks_directory: /loki/chunks
rules_directory: /loki/rules
replication_factor: 1
ring:
kvstore:
store: inmemory
schema_config:
configs:
- from: 2023-01-01
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 24h
limits_config:
retention_period: 30d
max_query_length: 721h
compactor:
working_directory: /loki/compactor
shared_store: filesystem
retention_enabled: true

View file

@ -0,0 +1,23 @@
global:
scrape_interval: 30s
evaluation_interval: 30s
scrape_configs:
# Prometheus self-monitoring
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
labels:
service: 'prometheus'
# Container metrics
- job_name: 'cadvisor'
static_configs:
- targets: ['cadvisor:8080']
labels:
service: 'cadvisor'
# Add more targets as needed
# - job_name: 'nextcloud'
# static_configs:
# - targets: ['nextcloud-exporter:9205']

View file

@ -0,0 +1,35 @@
server:
http_listen_port: 9080
grpc_listen_port: 0
positions:
filename: /tmp/positions.yaml
clients:
- url: http://loki:3100/loki/api/v1/push
scrape_configs:
# Docker containers
- job_name: docker
docker_sd_configs:
- host: unix:///var/run/docker.sock
refresh_interval: 5s
relabel_configs:
- source_labels: ['__meta_docker_container_name']
regex: '/(.*)'
target_label: 'container'
- source_labels: ['__meta_docker_container_label_com_automa_service']
target_label: 'service'
pipeline_stages:
- docker: {}
# System logs
- job_name: system
static_configs:
- targets:
- localhost
labels:
job: syslog
__path__: /var/log/syslog

View file

@ -0,0 +1,23 @@
services:
watchtower:
image: containrrr/watchtower:latest
container_name: automa-watchtower
restart: unless-stopped
environment:
- WATCHTOWER_CLEANUP=true # Remove old images
- WATCHTOWER_POLL_INTERVAL=86400 # Check every 24 hours
- WATCHTOWER_LABEL_ENABLE=true # Only update labeled containers
- WATCHTOWER_INCLUDE_STOPPED=false # Skip stopped containers
- TZ=${TZ:-Asia/Shanghai}
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "com.automa.service=watchtower"
- "com.centurylinklabs.watchtower.enable=false" # Don't update itself
# Add this label to containers you want to auto-update:
# labels:
# - "com.centurylinklabs.watchtower.enable=true"

install.sh Executable file
View file

@ -0,0 +1,66 @@
#!/usr/bin/env bash
# automa installer
# Usage: curl -fsSL https://raw.githubusercontent.com/m1ngsama/automa/main/install.sh | bash
set -euo pipefail
REPO="https://github.com/m1ngsama/automa.git"
INSTALL_DIR="${AUTOMA_DIR:-$HOME/automa}"
RED='\033[0;31m' GREEN='\033[0;32m' CYAN='\033[0;36m'
BOLD='\033[1m' DIM='\033[2m' NC='\033[0m'
info() { echo -e " ${GREEN}\xe2\x9c\x94${NC} $*"; }
error() { echo -e " ${RED}\xe2\x9c\x98${NC} $*" >&2; }
step() { echo -e " ${CYAN}\xe2\x96\xb6${NC} ${BOLD}$*${NC}"; }
echo ""
echo -e " ${BOLD}${CYAN}automa${NC}${BOLD} installer${NC}"
echo ""
# Check prerequisites
missing=0
for cmd in git docker; do
if command -v "$cmd" &>/dev/null; then
info "$cmd found"
else
error "$cmd is not installed"
missing=1
fi
done
if docker compose version &>/dev/null; then
info "docker compose plugin found"
else
error "docker compose plugin is not installed"
echo -e " ${DIM}Install: https://docs.docker.com/compose/install/${NC}"
missing=1
fi
if [[ $missing -eq 1 ]]; then
echo ""
error "Please install missing dependencies and try again"
exit 1
fi
echo ""
# Clone or update
if [[ -d "$INSTALL_DIR/.git" ]]; then
step "Updating existing installation..."
git -C "$INSTALL_DIR" pull --ff-only --quiet
info "Updated"
else
step "Installing to ${INSTALL_DIR}..."
git clone --quiet "$REPO" "$INSTALL_DIR"
info "Cloned"
fi
chmod +x "$INSTALL_DIR/automa"
echo ""
echo -e " ${GREEN}${BOLD}Ready!${NC}"
echo ""
echo -e " ${DIM}Get started:${NC}"
echo -e " ${BOLD}cd ${INSTALL_DIR}${NC}"
echo -e " ${BOLD}./automa deploy${NC}"
echo ""

View file

@ -1,12 +0,0 @@
# .env file: stores sensitive or variable configuration to avoid hard-coding it in YAML.
# Edit the values below yourself and keep the file permissions tight (e.g. chmod 600 .env).
# User permissions (avoids container file-permission issues; replace with your host user's UID/GID, check with the 'id' command)
UID=1000
GID=1000
# RCON password (remote-control password; must be complex and secure; the concrete value was removed from the original config)
RCON_PASSWORD=your_rcon_password_here
# Timezone (adjust to the server's location, e.g. Asia/Shanghai for China Standard Time)
TZ=Asia/Shanghai

View file

@ -1,19 +1,23 @@
# @name Minecraft
# @desc Fabric Minecraft server (itzg/minecraft-server)
# @url https://docker-minecraft-server.readthedocs.io
# @port MC_PORT
# Timezone (adjust based on server location)
TZ=Asia/Shanghai
# Server type and version
MC_TYPE=FABRIC
MC_VERSION=1.21.1
# Memory allocation — adjust based on player count and mods
MC_MEMORY=4G
# Set to true for Mojang account verification
MC_ONLINE_MODE=false
# Ports
MC_PORT=25565
RCON_PORT=25575
# RCON password for remote console access
RCON_PASSWORD=
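Since `RCON_PASSWORD` ships empty, one way to fill it before first start is to generate a random value, as in this sketch (a temp directory stands in for `minecraft/`; `openssl` is assumed to be installed):

```shell
#!/usr/bin/env bash
# Sketch: generate a strong RCON password and write a permission-tight .env.
# The directory is a temp stand-in for the minecraft/ project directory.
dir=$(mktemp -d)
pass=$(openssl rand -base64 32)            # 32 random bytes -> 44-char base64 string
umask 177                                  # files created below get mode 600
printf 'RCON_PASSWORD=%s\n' "$pass" > "$dir/.env"
echo "${#pass}"                            # prints 44
```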

View file

@ -1,184 +0,0 @@
# Minecraft Automation Refactor Log
## 2025-12-09 - Automation architecture refactor
### 🎯 Goals
Merge the previous local deployment approach (`src/automatic/`) and the Docker Compose approach into a single, unified automation management system.
### ✨ New features
#### 1. Unified script suite
A new `scripts/` directory with the following modules:
- **utils.sh** - shared utility library
  - colored log output
  - Docker environment checks
  - container state management
  - file backup helpers
  - network connectivity checks
- **setup.sh** - environment initialization
  - system environment checks
  - directory structure initialization
  - config file validation
  - automatic permission fixes
- **mod-manager.sh** - mods management
  - automatic downloads from Modrinth
  - batch mod updates
  - list installed mods
  - cleanup and status checks
- **backup.sh** - backup management
  - world data backups
  - config file backups
  - mods backups
  - backup restore
  - automatic cleanup of old backups
- **monitor.sh** - server monitoring
  - container status checks
  - resource usage monitoring
  - online player queries
  - log analysis
  - continuous watch mode
#### 2. Makefile integration
New targets in the root `Makefile`:
**Server management**
- `make minecraft-status` - show server status
- `make minecraft-setup` - initialize the environment
**Mods management**
- `make minecraft-mods-download` - download mods
- `make minecraft-mods-list` - list mods
- `make minecraft-mods-update` - update mods
- `make minecraft-mods-check` - check mods status
**Backup management**
- `make minecraft-backup` - full backup
- `make minecraft-backup-world` - back up the world
- `make minecraft-backup-list` - list backups
- `make minecraft-backup-cleanup` - clean up backups
#### 3. Complete documentation
`minecraft/README.md` rewritten with:
- a detailed quick-start guide
- a full command reference
- advanced usage examples
- a troubleshooting guide
- a migration guide
### 🔄 Architecture improvements
#### Strengths inherited from the old approach
1. **Logging system**
   - kept the colored-output design from `logger.sh`
   - extended it (system info, timestamps, log files)
2. **Mods download logic**
   - improved on `download-mods.sh`
   - standardized on the `extras/mods.txt` format
   - added error handling and retries
3. **Deployment flow**
   - borrowed the backup logic from `deploy.sh`
   - adapted it to the Docker Compose environment
#### New advantages
1. **Docker first**
   - fully containerized deployment
   - consistent runtime environment
   - simplified dependency management
2. **Modular design**
   - each script has a single responsibility
   - shared functionality lives in `utils.sh`
   - easy to maintain and extend
3. **Unified management**
   - single entry point via the Makefile
   - consistent command format
   - integrates with the other services (TeamSpeak, Nextcloud)
### 📂 Directory changes
```
Old layout:
minecraft/
├── src/automatic/
│   ├── deploy.sh
│   ├── download-mods.sh
│   ├── logger.sh
│   └── requirements.txt
New layout:
minecraft/
├── scripts/          # new: unified script directory
│   ├── utils.sh
│   ├── setup.sh
│   ├── mod-manager.sh
│   ├── backup.sh
│   └── monitor.sh
├── extras/
│   └── mods.txt      # unified mods config
├── backups/          # new: automatic backup directory
├── logs/             # new: script log directory
└── src/automatic/    # kept (for reference)
```
### 🔧 Technical details
1. **Error handling**
   - all scripts use `set -e`
   - thorough return-code checks
   - detailed error messages
2. **Cross-platform compatibility**
   - macOS- and Linux-compatible commands
   - auto-detection of `docker compose` vs `docker-compose`
   - handling of the differing `stat` command formats
3. **Security**
   - sensitive values managed via `.env`
   - RCON password validation
   - confirmation step before backups
### 📋 Migration advice
If you are using the old `src/automatic/` scripts:
1. The old scripts still work (nothing was deleted)
2. Migrating to the new Docker approach is recommended
3. The new approach offers more automation features
4. Managing everything through the Makefile is more convenient
### 🎯 Follow-up plans
- [ ] Add systemd/cron templates for scheduled backups
- [ ] Integrate Prometheus metrics monitoring
- [ ] Add automatic update checks
- [ ] Web control panel integration
- [ ] Multi-server management support
### 📝 Config compatibility
- ✅ `docker-compose.yml` - unchanged
- ✅ `.env` - unchanged
- ✅ `configs/` - unchanged
- ✅ `mods/` - unchanged
- ✅ `extras/mods.txt` - format compatible with the old `requirements.txt`
### 🙏 Acknowledgements
The refactor keeps the best parts of the original design:
- the logging system's design ideas
- the Modrinth API integration logic
- deployment-flow best practices
View file

@ -1,316 +0,0 @@
# Automa Minecraft Server
A Docker Compose based Minecraft Fabric 1.21.1 server with a complete automation toolkit.
## 📁 Directory layout
```
minecraft/
├── docker-compose.yml      # Docker Compose configuration
├── .env                    # Environment variables (customize)
├── configs/                # Server config files
│   ├── server.properties   # Server properties
│   └── whitelist.json      # Whitelist
├── mods/                   # Mods directory
├── data/                   # Persistent data (world, logs, ...)
├── backups/                # Automatic backups
├── logs/                   # Automation script logs
├── extras/
│   └── mods.txt            # Modrinth mods list
└── scripts/                # Automation scripts
    ├── utils.sh            # Utility library
    ├── setup.sh            # Environment initialization
    ├── mod-manager.sh      # Mods management
    ├── backup.sh           # Backup management
    └── monitor.sh          # Server monitoring
```
## 🚀 Quick start
### 1. Initialize the environment
```bash
# Check the environment and initialize the directory structure
make minecraft-setup
```
### 2. Configure the server
Edit `.env` and set the required values:
```bash
# User permissions (check with the id command)
UID=1000
GID=1000
# RCON password (remote management)
RCON_PASSWORD=your_secure_password
# Timezone
TZ=Asia/Shanghai
```
### 3. Download mods (optional)
```bash
# Download mods from Modrinth (per extras/mods.txt)
make minecraft-mods-download
# Or drop mod jars into the mods/ directory manually
```
### 4. Start the server
```bash
# Start the server
make minecraft-up
# Follow the logs
make minecraft-logs
```
## 📋 Common commands
### Server management
```bash
make minecraft-up       # Start the server
make minecraft-down     # Stop the server
make minecraft-restart  # Restart the server
make minecraft-logs     # Follow live logs
make minecraft-status   # Show server status
```
### Mods management
```bash
make minecraft-mods-download  # Download all mods
make minecraft-mods-list      # List installed mods
make minecraft-mods-update    # Update all mods
make minecraft-mods-check     # Check mods status
```
Mods are configured in `extras/mods.txt`, one Modrinth slug per line:
```
fabric-api
sodium
lithium
iris
```
### Backup management
```bash
make minecraft-backup          # Full backup (world, configs, mods)
make minecraft-backup-world    # Back up world data only
make minecraft-backup-list     # List all backups
make minecraft-backup-cleanup  # Clean up old backups
```
Backups are stored under `backups/`, grouped by type:
- `backups/worlds/` - world data backups
- `backups/configs/` - config file backups
- `backups/mods/` - mods backups
## 🔧 Advanced usage
### Using the scripts directly
All automation scripts live in `scripts/`:
```bash
# Environment initialization
./scripts/setup.sh
# Mods management
./scripts/mod-manager.sh download   # Download mods
./scripts/mod-manager.sh list       # List mods
./scripts/mod-manager.sh update     # Update mods
./scripts/mod-manager.sh clean      # Clean mods
# Backup management
./scripts/backup.sh backup all      # Full backup
./scripts/backup.sh backup world    # World only
./scripts/backup.sh list            # List backups
./scripts/backup.sh restore <file>  # Restore a backup
./scripts/backup.sh cleanup 10      # Keep the 10 most recent backups
# Server monitoring
./scripts/monitor.sh status         # Full status
./scripts/monitor.sh resources      # Resource usage
./scripts/monitor.sh players        # Online players
./scripts/monitor.sh logs 50        # Last 50 log lines
./scripts/monitor.sh watch 10       # Continuous watch, every 10 s
```
### Custom configuration
#### Changing server settings
Edit `configs/server.properties`, then restart the server:
```bash
vim configs/server.properties
make minecraft-restart
```
#### Adding new mods
1. Find the mod's slug on Modrinth (the ID in its URL)
2. Add it to `extras/mods.txt`
3. Run the download command:
```bash
make minecraft-mods-download
make minecraft-restart
```
#### Configuring the whitelist
Edit `configs/whitelist.json`, then enable the whitelist in `.env` or `configs/server.properties`:
```properties
white-list=true
enforce-whitelist=true
```
## 🔍 Monitoring and maintenance
### Live monitoring
```bash
# Full status (container, resources, players, errors)
make minecraft-status
# Continuous watch mode (refresh every 5 s)
cd minecraft && ./scripts/monitor.sh watch 5
```
### Log management
```bash
# Docker container logs
make minecraft-logs
# Automation script logs
ls -lh logs/
tail -f logs/automation-*.log
```
### Scheduled backups
Setting up cron jobs for regular backups is recommended:
```bash
# Back up world data every day at 3 AM
0 3 * * * cd /path/to/automa && make minecraft-backup-world
# Clean up old backups weekly (keep the 10 most recent)
0 4 * * 0 cd /path/to/automa/minecraft && ./scripts/backup.sh cleanup 10
```
## 📊 Performance tuning
Performance-optimization mods are already included:
- **Sodium** - rendering optimization
- **Lithium** - server performance optimization
- **Iris** - shader support
Memory is configured in `docker-compose.yml`:
```yaml
environment:
  MEMORY: "4G"       # Maximum memory
  INIT_MEMORY: "2G"  # Initial memory
```
## 🛡️ Security recommendations
1. **Change the RCON password**: set a strong password in `.env`
2. **Configure the firewall**: open only the required ports (25565, 25575)
3. **Enable the whitelist**: configure it in `configs/server.properties`
4. **Back up regularly**: use the automated backup scripts
5. **Watch the logs**: check error logs periodically
## 🔄 Migration guide
### Migrating from the old version
If you are using the old scripts under `src/automatic/`:
1. The new approach uses Docker Compose, which is easier to deploy and maintain
2. Mods management standardizes on the `extras/mods.txt` format
3. All automation is consolidated in the `scripts/` directory
4. Everything is managed through the Makefile
Migration steps:
```bash
# 1. Back up old data
cp -r old_server_dir/world minecraft/data/
# 2. Copy mods (if managed manually)
cp -r old_mods_dir/* minecraft/mods/
# 3. Initialize the new environment
make minecraft-setup
# 4. Start the server
make minecraft-up
```
## 📝 Troubleshooting
### The container won't start
```bash
# Check the Docker daemon
docker info
# Inspect container logs
make minecraft-logs
# Verify the environment config
cat .env
```
### Mods fail to download
```bash
# Check network connectivity
curl -I https://api.modrinth.com
# Inspect the detailed logs
cat logs/automation-*.log
# Or download mods manually into the mods/ directory
```
### Performance problems
```bash
# Check resource usage
cd minecraft && ./scripts/monitor.sh resources
# Adjust the memory settings
vim docker-compose.yml  # change MEMORY and INIT_MEMORY
# Restart the server
make minecraft-restart
```
## 📚 Resources
- [Docker documentation](https://docs.docker.com/)
- [itzg/minecraft-server image](https://github.com/itzg/docker-minecraft-server)
- [Modrinth](https://modrinth.com/) - mod hosting platform
- [Fabric](https://fabricmc.net/) - mod loader
## 🤝 Contributing
Issues and pull requests are welcome!
## 📄 License
MIT License

minecraft/compose.yaml Normal file
View file

@ -0,0 +1,28 @@
services:
mc:
image: itzg/minecraft-server:latest
container_name: mc-fabric
ports:
- "${MC_PORT:-25565}:25565"
- "${RCON_PORT:-25575}:25575"
environment:
EULA: "TRUE"
TYPE: "${MC_TYPE:-FABRIC}"
VERSION: "${MC_VERSION:-1.21.1}"
MEMORY: "${MC_MEMORY:-4G}"
ONLINE_MODE: "${MC_ONLINE_MODE:-false}"
ENABLE_RCON: "true"
RCON_PORT: 25575
RCON_PASSWORD: "${RCON_PASSWORD}"
TZ: "${TZ:-Asia/Shanghai}"
volumes:
- ./data:/data
healthcheck:
test: mc-health
interval: 30s
timeout: 10s
retries: 5
start_period: 120s
restart: unless-stopped
tty: true
stdin_open: true

View file

@ -1,70 +0,0 @@
# server.properties: core Minecraft server configuration.
# This file is volume-mounted into the container and takes precedence over environment variables.
# All values are examples; adjust as needed.
# Note: the original rcon.password value was removed; RCON_PASSWORD from .env is injected instead.
#Minecraft server properties
# Generation timestamp example (auto-updated, no need to edit)
#Tue Nov 04 17:06:20 CST 2025
accepts-transfers=false # Allow players to transfer from other servers (default false)
allow-flight=false # Allow flight (false helps prevent cheating)
allow-nether=true # Enable the Nether (true for standard survival)
broadcast-console-to-ops=true # Broadcast console messages to OPs
broadcast-rcon-to-ops=true # Broadcast RCON messages to OPs
bug-report-link= # Bug report link (leave empty or customize)
difficulty=normal # Difficulty: peaceful/easy/normal/hard (normal is standard)
enable-command-block=false # Enable command blocks (false prevents abuse)
enable-jmx-monitoring=false # Enable JMX monitoring (for advanced debugging)
enable-query=false # Enable the query port (for external server-status tools)
enable-rcon=true # Enable RCON (true; used with the password from .env)
enable-status=true # Enable server status queries
enforce-secure-profile=true # Enforce secure profiles (related to online-mode auth)
enforce-whitelist=false # Enforce the whitelist (false = disabled; set true and configure whitelist.json to enable)
entity-broadcast-range-percentage=100 # Entity broadcast range percentage (100 is default)
force-gamemode=false # Force the default game mode
function-permission-level=2 # Function permission level (2 is standard)
gamemode=survival # Default game mode: survival/creative/adventure/spectator
generate-structures=true # Generate structures (e.g. villages)
generator-settings={} # Generator settings (JSON; empty for defaults)
hardcore=false # Hardcore mode (players are banned after death)
hide-online-players=false # Hide the online player list
initial-disabled-packs= # Initially disabled data packs
initial-enabled-packs=vanilla,fabric,easyauth,fabric-convention-tags-v2,server_translations_api # Initially enabled data packs (based on Fabric and mods)
level-name=world # World name (world is the default folder)
level-seed= # World seed (empty for random)
level-type=minecraft\:normal # World type (normal is standard)
log-ips=true # Log player IPs (true for security auditing)
max-chained-neighbor-updates=1000000 # Max chained neighbor updates (prevents lag)
max-players=20 # Max players (was 114514; changed to a generic example of 20, tune to server resources)
max-tick-time=60000 # Max tick time in ms (server restarts on timeout)
max-world-size=29999984 # Max world size
motd=A Minecraft Server # Server MOTD (generic example, was personalized; supports color codes like §6)
network-compression-threshold=256 # Network compression threshold
online-mode=false # Online-mode auth (false allows non-premium clients)
op-permission-level=4 # OP permission level (4 is highest)
player-idle-timeout=0 # Player idle timeout (0 = unlimited)
prevent-proxy-connections=false # Block proxy connections
pvp=true # Enable PVP
query.port=25565 # Query port (same as the server port)
rate-limit=0 # Rate limit (0 = none)
rcon.password= # RCON password (left empty; injected dynamically from RCON_PASSWORD in .env)
rcon.port=25575 # RCON port
region-file-compression=deflate # Region file compression (deflate is standard)
require-resource-pack=false # Require a resource pack
resource-pack= # Resource pack URL
resource-pack-id= # Resource pack ID
resource-pack-prompt= # Resource pack prompt
resource-pack-sha1= # Resource pack SHA1
server-ip= # Server IP (empty binds all interfaces)
server-port=25565 # Server port
simulation-distance=10 # Simulation distance (affects performance)
spawn-animals=true # Spawn animals
spawn-monsters=true # Spawn monsters
spawn-npcs=true # Spawn NPCs
spawn-protection=16 # Spawn protection radius (protects new players)
sync-chunk-writes=true # Synchronous chunk writes (prevents data loss)
text-filtering-config= # Text filtering config
use-native-transport=true # Use native transport (performance optimization)
view-distance=10 # View distance (affects performance)
white-list=false # Enable the whitelist (false = disabled; set true to enable)
View file

@ -1,17 +0,0 @@
[
  // whitelist.json: JSON whitelist; each entry pairs a "uuid" with a player "name".
  // Look up a player's UUID at mcuuid.net.
  {
    "uuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "name": "example_player1"
  },
  {
    "uuid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "name": "example_player2"
  }
]

View file

@ -1,66 +0,0 @@
# docker-compose.yml: Docker Compose configuration for the Minecraft Fabric server container.
# Version: 3.8 (compatible with most Docker releases).
# This file is sanitized; all sensitive data lives in .env or configs/.
# Run with: docker compose up -d (detached)
version: "3.8"
services:
  minecraft: # Service name (customizable)
    image: itzg/minecraft-server:latest # Image: itzg/minecraft-server (supports Fabric and automatic mod management)
    container_name: mc-fabric-1.21.1 # Container name (easy to spot in docker ps)
    ports:
      - "25565:25565/tcp" # Main Minecraft port (TCP, client connections)
      - "25565:25565/udp" # Query port (UDP, server-list queries)
      - "25575:25575/tcp" # RCON port (TCP, remote control; matches server.properties)
    environment: # Environment variables for dynamic server configuration
      UID: "${UID}" # Host user UID from .env (avoids permission issues)
      GID: "${GID}" # Host user GID from .env
      EULA: "TRUE" # Accept the Minecraft EULA (must be TRUE or the server refuses to start)
      TYPE: "FABRIC" # Server type: FABRIC (mod-loader support)
      VERSION: "1.21.1" # Minecraft version (pinned to 1.21.1)
      FABRIC_VERSION: "latest" # Fabric loader version (latest = automatic)
      MEMORY: "4G" # Max memory (example; tune to host RAM, 4G+ recommended with many mods)
      INIT_MEMORY: "2G" # Initial memory (example)
      # Core server settings (may override values in server.properties)
      ONLINE_MODE: "false" # Online-mode auth (false allows non-premium clients; true recommended in production)
      ENABLE_RCON: "true" # Enable RCON (true; combined with the .env password)
      RCON_PASSWORD: "${RCON_PASSWORD}" # RCON password (loaded from .env, set by the user)
      RCON_PORT: 25575 # RCON port (matches server.properties)
      DIFFICULTY: "normal" # Difficulty (normal is default)
      GAMEMODE: "survival" # Game mode (survival)
      MAX_PLAYERS: 20 # Max players (generic example, was a specific value; tune to resources)
      MOTD: "A Minecraft Server" # MOTD (generic example, supports color codes)
      VIEW_DISTANCE: 10 # View distance (10 balances performance)
      SIMULATION_DISTANCE: 10 # Simulation distance (10 balances performance)
      SPAWN_PROTECTION: 16 # Spawn protection (16 is default)
      PVP: "true" # Enable PVP (true)
      WHITE_LIST: "false" # Enable the whitelist (set true and configure whitelist.json to enable)
      ENFORCE_WHITELIST: "false" # Enforce the whitelist (false)
      # Other tweaks
      TZ: "${TZ}" # Timezone (from .env)
      REMOVE_OLD_MODS: "true" # Automatically remove old mods (keeps mods/ clean)
      # Optional: automatic mod downloads (uncomment if mods/ is not mounted)
      # MODRINTH_PROJECTS: "@/extras/mods.txt" # Load Modrinth slugs from mods.txt
      # MODRINTH_DOWNLOAD_DEPENDENCIES: "required" # Download required dependencies only
    volumes: # Volume mounts: persistent data and custom files
      - ./data:/data # Persists world, backups, logs, etc. (important, prevents data loss)
      - ./mods:/data/mods # Mount user mod jars (copied from the original server)
      - ./configs/server.properties:/data/server.properties:ro # Custom server.properties (ro = read-only so the container cannot modify it)
      - ./configs/whitelist.json:/data/whitelist.json:ro # Custom whitelist
      - ./configs/ops.json:/data/ops.json:ro # Custom OPs
      # Optional extra mounts (uncomment as needed)
      # - ./data/backups:/data/backups # Backup directory
      # - ./configs/EasyAuth:/data/EasyAuth:ro # EasyAuth config (if present)
      # - ./extras:/extras:ro # mods.txt directory (if auto-download is enabled)
    restart: unless-stopped # Restart policy: auto-restart unless stopped manually (improves availability)
    stdin_open: true # Keep stdin open (for attaching to the container)
    tty: true # Allocate a tty (interactive debugging)
    logging: # Logging config (prevents oversized log files)
      driver: "json-file" # Log driver (json-file is the Docker default)
      options:
        max-size: "10m" # Max size per log file (example)
        max-file: "3" # Max number of rotated files (3)

View file

@ -1,36 +0,0 @@
# mods.txt: list of Modrinth slugs for automatic mod downloads.
# One slug per line (from the Modrinth site); the image downloads the latest 1.21.1-compatible versions.
# If you already have jar files, prefer mounting the mods/ directory; otherwise enable this file.
# Inferred from the original mods list: performance, compatibility, and utility mods.
# Core API and loader
fabric-api
# Performance mods (reduce latency, improve TPS)
sodium
iris
lithium
krypton
architectury
# Carpet mods (debugging and server tweaks)
carpet
carpet-extra
# Version-compatibility mods (let older clients connect)
viafabric
viaversion
viabackwards
# Map mods (minimap and world map)
xaeros-minimap
xaeros-world-map
# Other utilities (map rendering, backups, auth, ...)
bluemap
ftbbackups2
easyauth
no-chat-reports
polylib
energy
minimotd # Assumed slug for MiniMOTD; verify on Modrinth

View file

@ -1,376 +0,0 @@
#!/usr/bin/env bash
# Minecraft 服务器备份管理
# 支持世界数据、配置文件、mods 的备份和恢复
set -e
# 加载工具库
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/utils.sh"
# ============================================
# 配置变量
# ============================================
readonly PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
readonly BACKUP_DIR="$PROJECT_ROOT/backups"
readonly DATA_DIR="$PROJECT_ROOT/data"
readonly MODS_DIR="$PROJECT_ROOT/mods"
readonly CONFIGS_DIR="$PROJECT_ROOT/configs"
readonly CONTAINER_NAME="mc-fabric-1.21.1"
# ============================================
# 备份函数
# ============================================
# 备份世界数据
backup_world() {
log_info "备份世界数据..."
local world_dir="$DATA_DIR/world"
if [[ ! -d "$world_dir" ]]; then
log_warning "世界数据不存在: $world_dir"
return 1
fi
# 通知服务器保存
if docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
log_info "通知服务器保存世界..."
docker exec "$CONTAINER_NAME" rcon-cli save-all flush 2>/dev/null || true
sleep 3
fi
# 创建备份
local backup_file=$(create_backup "$world_dir" "$BACKUP_DIR/worlds")
if [[ -n "$backup_file" ]]; then
log_success "世界备份完成"
return 0
else
log_error "世界备份失败"
return 1
fi
}
# 备份配置文件
backup_configs() {
log_info "备份配置文件..."
if [[ ! -d "$CONFIGS_DIR" ]]; then
log_warning "配置目录不存在: $CONFIGS_DIR"
return 1
fi
local backup_file=$(create_backup "$CONFIGS_DIR" "$BACKUP_DIR/configs")
if [[ -n "$backup_file" ]]; then
log_success "配置备份完成"
return 0
else
log_error "配置备份失败"
return 1
fi
}
# 备份 mods
backup_mods() {
log_info "备份 Mods..."
if [[ ! -d "$MODS_DIR" || -z "$(ls -A "$MODS_DIR" 2>/dev/null)" ]]; then
log_warning "Mods 目录为空"
return 1
fi
local backup_file=$(create_backup "$MODS_DIR" "$BACKUP_DIR/mods")
if [[ -n "$backup_file" ]]; then
log_success "Mods 备份完成"
return 0
else
log_error "Mods 备份失败"
return 1
fi
}
# 完整备份
backup_all() {
log_info "开始完整备份..."
log_separator
local success=0
local failed=0
# 备份世界
if backup_world; then
((success++))
else
((failed++))
fi
log_separator
# 备份配置
if backup_configs; then
((success++))
else
((failed++))
fi
log_separator
# 备份 mods
if backup_mods; then
((success++))
else
((failed++))
fi
log_separator
log_info "备份完成 - 成功: $success, 失败: $failed"
return $failed
}
# ============================================
# 恢复函数
# ============================================
# 列出可用备份
list_backups() {
local type="${1:-all}"
log_info "可用备份:"
case "$type" in
world | worlds)
list_backup_type "worlds" "世界数据"
;;
config | configs)
list_backup_type "configs" "配置文件"
;;
mods)
list_backup_type "mods" "Mods"
;;
all | *)
list_backup_type "worlds" "世界数据"
list_backup_type "configs" "配置文件"
list_backup_type "mods" "Mods"
;;
esac
}
# 列出特定类型的备份
list_backup_type() {
local subdir="$1"
local name="$2"
local backup_path="$BACKUP_DIR/$subdir"
echo ""
log_info "=== $name ==="
if [[ ! -d "$backup_path" ]]; then
log_warning "无可用备份"
return
fi
local count=0
while IFS= read -r file; do
local filename=$(basename "$file")
local size=$(du -h "$file" | cut -f1)
local date=$(echo "$filename" | grep -oP '\d{8}-\d{6}' || echo "未知")
printf " ${CYAN}${NC} %-60s %8s\n" "$filename" "$size"
((count++))
done < <(find "$backup_path" -name "*.tar.gz" -type f 2>/dev/null | sort -r)
if [[ $count -eq 0 ]]; then
log_warning "无可用备份"
else
log_info "$count 个备份"
fi
}
# 恢复备份
restore_backup() {
local backup_file="$1"
if [[ ! -f "$backup_file" ]]; then
log_error "备份文件不存在: $backup_file"
return 1
fi
log_warning "恢复备份将覆盖现有数据"
log_info "备份文件: $(basename "$backup_file")"
# 确定恢复目标
local target_dir=""
if [[ "$backup_file" =~ /worlds/ ]]; then
target_dir="$DATA_DIR"
elif [[ "$backup_file" =~ /configs/ ]]; then
target_dir=$(dirname "$CONFIGS_DIR")
elif [[ "$backup_file" =~ /mods/ ]]; then
target_dir=$(dirname "$MODS_DIR")
else
log_error "无法确定备份类型"
return 1
fi
log_info "恢复目标: $target_dir"
# 解压备份
if tar -xzf "$backup_file" -C "$target_dir"; then
log_success "备份恢复完成"
return 0
else
log_error "备份恢复失败"
return 1
fi
}
# ============================================
# Cleanup functions
# ============================================
# Remove old backups
cleanup_backups() {
local keep="${1:-5}"
log_info "Cleaning up old backups (keeping the latest $keep)..."
log_separator
cleanup_old_backups "$BACKUP_DIR/worlds" "$keep"
cleanup_old_backups "$BACKUP_DIR/configs" "$keep"
cleanup_old_backups "$BACKUP_DIR/mods" "$keep"
log_success "Backup cleanup finished"
}
# Show backup statistics
show_backup_stats() {
log_info "Backup statistics"
log_separator
local types=("worlds:world data" "configs:config files" "mods:mods")
for type_pair in "${types[@]}"; do
local type="${type_pair%%:*}"
local name="${type_pair##*:}"
local backup_path="$BACKUP_DIR/$type"
if [[ -d "$backup_path" ]]; then
local count=$(find "$backup_path" -name "*.tar.gz" 2>/dev/null | wc -l)
local total_size=$(du -sh "$backup_path" 2>/dev/null | cut -f1)
log_info "$name: $count backup(s), total size: $total_size"
else
log_info "$name: no backups"
fi
done
log_separator
if [[ -d "$BACKUP_DIR" ]]; then
local total_size=$(du -sh "$BACKUP_DIR" 2>/dev/null | cut -f1)
log_info "Total backup size: $total_size"
fi
}
# ============================================
# Main
# ============================================
show_usage() {
cat <<EOF
Minecraft server backup manager
Usage: $(basename "$0") [command] [options]
Commands:
backup [type] Create a backup
type:
world Back up world data only
config Back up config files only
mods Back up mods only
all Full backup (default)
list [type] List available backups
restore <file> Restore the given backup
cleanup [num] Remove old backups (keeps 5 by default)
stats Show backup statistics
help Show this help message
Examples:
$(basename "$0") backup all # Full backup
$(basename "$0") backup world # World data only
$(basename "$0") list # List all backups
$(basename "$0") cleanup 10 # Keep the latest 10 backups
Backup location: $BACKUP_DIR
EOF
}
main() {
local command="${1:-help}"
shift || true
# Initialize logging
init_log "Minecraft backup manager"
log_system_info
case "$command" in
backup)
local type="${1:-all}"
case "$type" in
world | worlds)
backup_world
;;
config | configs)
backup_configs
;;
mods)
backup_mods
;;
all | *)
backup_all
;;
esac
;;
list)
list_backups "${1:-all}"
;;
restore)
local file="$1"
if [[ -z "$file" ]]; then
log_error "Please specify a backup file"
exit 1
fi
restore_backup "$file"
;;
cleanup)
cleanup_backups "${1:-5}"
;;
stats)
show_backup_stats
;;
help | --help | -h)
show_usage
;;
*)
log_error "Unknown command: $command"
echo ""
show_usage
exit 1
;;
esac
show_log_file
}
# Run main
main "$@"


@ -1,344 +0,0 @@
#!/usr/bin/env bash
# Minecraft mods manager
# Download, update, and clean mods from Modrinth
set -e
# Load shared utilities
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/utils.sh"
# ============================================
# Configuration
# ============================================
readonly PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
readonly MODS_DIR="$PROJECT_ROOT/mods"
readonly MODS_LIST="$PROJECT_ROOT/extras/mods.txt"
readonly MC_VERSION="1.21.1"
readonly LOADER="fabric"
readonly MODRINTH_API="https://api.modrinth.com/v2"
# ============================================
# Modrinth API helpers
# ============================================
# Fetch project info
get_project_info() {
local slug="$1"
local response
response=$(curl -s "$MODRINTH_API/project/$slug")
if [[ $? -ne 0 || -z "$response" ]]; then
return 1
fi
echo "$response"
}
# Fetch the latest version
get_latest_version() {
local slug="$1"
local game_version="$2"
local loader="$3"
local url="$MODRINTH_API/project/$slug/version"
url+="?game_versions=%5B%22${game_version}%22%5D"
url+="&loaders=%5B%22${loader}%22%5D"
# Declare and assign separately: "local response=$(...)" would make $?
# reflect "local" (always 0) instead of curl's exit status.
local response
response=$(curl -s "$url")
if [[ $? -ne 0 || -z "$response" || "$response" == "[]" ]]; then
return 1
fi
echo "$response"
}
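```shell
# The query parameters above are JSON arrays that must be URL-encoded:
# %5B%22...%22%5D decodes to ["..."]. Building the same request URL for
# fabric-api (an illustrative slug) shows what the API actually receives:
slug="fabric-api"; game_version="1.21.1"; loader="fabric"
url="https://api.modrinth.com/v2/project/$slug/version"
url+="?game_versions=%5B%22${game_version}%22%5D"
url+="&loaders=%5B%22${loader}%22%5D"
echo "$url"
# -> https://api.modrinth.com/v2/project/fabric-api/version?game_versions=%5B%221.21.1%22%5D&loaders=%5B%22fabric%22%5D
```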
# Parse download info from a version response
parse_download_info() {
local json="$1"
# Extract the file name and download URL
local filename=$(echo "$json" | grep -o '"filename":"[^"]*"' | head -1 | cut -d'"' -f4)
local url=$(echo "$json" | grep -o '"url":"[^"]*"' | head -1 | cut -d'"' -f4)
if [[ -z "$filename" || -z "$url" ]]; then
return 1
fi
echo "$filename|$url"
}
# ============================================
# Mod download functions
# ============================================
# Download a single mod
download_mod() {
local slug="$1"
local mod_name="${2:-$slug}"
log_info "Processing: $mod_name ($slug)"
# Fetch version info (declare and assign separately so $? reflects the
# command, not the "local" builtin)
local versions
versions=$(get_latest_version "$slug" "$MC_VERSION" "$LOADER")
if [[ $? -ne 0 ]]; then
log_error "No compatible version found: $slug"
return 1
fi
# Parse download info
local download_info
download_info=$(parse_download_info "$versions")
if [[ $? -ne 0 ]]; then
log_error "Failed to parse download info: $slug"
return 1
fi
local filename=$(echo "$download_info" | cut -d'|' -f1)
local download_url=$(echo "$download_info" | cut -d'|' -f2)
# Skip if the file already exists
if [[ -f "$MODS_DIR/$filename" ]]; then
log_info "Already present: $filename"
return 0
fi
# Download the file
log_info "Downloading: $filename"
if curl -L -o "$MODS_DIR/$filename" "$download_url" 2>/dev/null; then
# Sanity-check the file size
local size=$(stat -f%z "$MODS_DIR/$filename" 2>/dev/null || stat -c%s "$MODS_DIR/$filename" 2>/dev/null)
if [[ $size -gt 1000 ]]; then
log_success "Downloaded: $filename ($((size / 1024)) KB)"
return 0
else
log_error "Unexpected file size: $filename ($size bytes)"
rm -f "$MODS_DIR/$filename"
return 1
fi
else
log_error "Download failed: $slug"
return 1
fi
}
# Download every mod in the list
download_all_mods() {
if [[ ! -f "$MODS_LIST" ]]; then
log_error "Mods list not found: $MODS_LIST"
return 1
fi
log_info "Starting batch mod download"
log_info "Minecraft version: $MC_VERSION"
log_info "Loader: $LOADER"
log_info "Target directory: $MODS_DIR"
mkdir -p "$MODS_DIR"
local success_count=0
local fail_count=0
while IFS= read -r line; do
# Skip blank lines and comments
line=$(echo "$line" | xargs)
if [[ -z "$line" || "$line" =~ ^# ]]; then
continue
fi
log_separator
if download_mod "$line"; then
((++success_count))
else
((++fail_count))
fi
# Throttle to stay under the API rate limit
sleep 1
done <"$MODS_LIST"
log_separator
log_info "Download finished - succeeded: $success_count, failed: $fail_count"
return 0
}
# ============================================
# Mod management functions
# ============================================
# List installed mods
list_mods() {
log_info "Installed mods:"
if [[ ! -d "$MODS_DIR" || -z "$(ls -A "$MODS_DIR" 2>/dev/null)" ]]; then
log_warning "No mods found"
return 0
fi
local count=0
while IFS= read -r file; do
local filename=$(basename "$file")
local size=$(stat -f%z "$file" 2>/dev/null || stat -c%s "$file" 2>/dev/null)
local size_kb=$((size / 1024))
printf " ${GREEN}${NC} %-50s %8s KB\n" "$filename" "$size_kb"
((++count))
done < <(find "$MODS_DIR" -name "*.jar" -type f 2>/dev/null | sort)
log_info "Total: $count mod(s)"
}
# Remove mods
clean_mods() {
local backup="${1:-true}"
if [[ ! -d "$MODS_DIR" ]]; then
log_info "Mods directory does not exist, nothing to clean"
return 0
fi
local count=$(find "$MODS_DIR" -name "*.jar" 2>/dev/null | wc -l)
if [[ $count -eq 0 ]]; then
log_info "No mods to clean"
return 0
fi
log_warning "About to remove $count mod(s)"
# Create a backup first
if [[ "$backup" == "true" ]]; then
create_backup "$MODS_DIR"
fi
# Delete all jar files
find "$MODS_DIR" -name "*.jar" -type f -delete
log_success "Mods cleaned"
}
# Update all mods
update_mods() {
log_info "Updating all mods"
# Back up and remove existing mods
clean_mods true
# Re-download
download_all_mods
}
# Check mod status
check_mods() {
log_info "Checking mod status"
if [[ ! -d "$MODS_DIR" ]]; then
log_warning "Mods directory does not exist"
return 1
fi
local jar_count=$(find "$MODS_DIR" -name "*.jar" 2>/dev/null | wc -l)
local total_size=$(du -sh "$MODS_DIR" 2>/dev/null | cut -f1)
log_info "Mod count: $jar_count"
log_info "Total size: $total_size"
# Check required core mods
local required_mods=("fabric-api")
local missing_count=0
for mod in "${required_mods[@]}"; do
if ! find "$MODS_DIR" -name "*${mod}*.jar" | grep -q .; then
log_warning "Missing core mod: $mod"
((++missing_count))
fi
done
if [[ $missing_count -gt 0 ]]; then
log_warning "$missing_count core mod(s) missing; consider running the download command"
else
log_success "Core mods complete"
fi
return 0
}
# ============================================
# Main
# ============================================
show_usage() {
cat <<EOF
Minecraft mods manager
Usage: $(basename "$0") [command]
Commands:
download Download all mods (from extras/mods.txt)
list List installed mods
update Update all mods (back up, clean, re-download)
clean Remove all mods (backed up automatically)
check Check mod status
help Show this help message
Examples:
$(basename "$0") download # Download all mods
$(basename "$0") list # List installed mods
$(basename "$0") update # Update all mods
Configuration:
Mods list: $MODS_LIST
Mods directory: $MODS_DIR
EOF
}
main() {
local command="${1:-help}"
# Initialize logging
init_log "Minecraft mods manager"
log_system_info
case "$command" in
download)
check_network || exit 1
download_all_mods
;;
list)
list_mods
;;
update)
check_network || exit 1
update_mods
;;
clean)
clean_mods true
;;
check)
check_mods
;;
help | --help | -h)
show_usage
;;
*)
log_error "Unknown command: $command"
echo ""
show_usage
exit 1
;;
esac
show_log_file
}
# Run main
main "$@"


@ -1,315 +0,0 @@
#!/usr/bin/env bash
# Minecraft server monitoring script
# Monitors container status, resource usage, online players, and more
set -e
# Load shared utilities
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/utils.sh"
# ============================================
# Configuration
# ============================================
readonly PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
readonly CONTAINER_NAME="mc-fabric-1.21.1"
# ============================================
# Container status check
# ============================================
check_container() {
log_info "Checking container status..."
local status=$(check_container_status "$CONTAINER_NAME")
case "$status" in
running)
log_success "Container is running"
return 0
;;
stopped)
log_warning "Container is stopped"
return 1
;;
not_found)
log_error "Container does not exist"
return 2
;;
esac
}
# ============================================
# Resource usage
# ============================================
show_resource_usage() {
log_info "Resource usage"
log_separator
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
log_warning "Container is not running"
return 1
fi
# Fetch container stats
local stats=$(docker stats --no-stream --format "{{.CPUPerc}}|{{.MemUsage}}|{{.NetIO}}|{{.BlockIO}}" "$CONTAINER_NAME")
local cpu=$(echo "$stats" | cut -d'|' -f1)
local mem=$(echo "$stats" | cut -d'|' -f2)
local net=$(echo "$stats" | cut -d'|' -f3)
local disk=$(echo "$stats" | cut -d'|' -f4)
printf "${CYAN}CPU usage:${NC} %s\n" "$cpu"
printf "${CYAN}Memory usage:${NC} %s\n" "$mem"
printf "${CYAN}Network I/O:${NC} %s\n" "$net"
printf "${CYAN}Disk I/O:${NC} %s\n" "$disk"
log_separator
}
# ============================================
# Server info
# ============================================
show_server_info() {
log_info "Server info"
log_separator
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
log_warning "Container is not running"
return 1
fi
# Container start time (date -j -f is the BSD/macOS form; elsewhere the
# raw timestamp is shown as a fallback)
local uptime=$(docker inspect --format='{{.State.StartedAt}}' "$CONTAINER_NAME")
uptime=$(date -j -f "%Y-%m-%dT%H:%M:%S" "${uptime:0:19}" "+%Y-%m-%d %H:%M:%S" 2>/dev/null || echo "$uptime")
printf "${CYAN}Started at:${NC} %s\n" "$uptime"
# Port mappings
local ports=$(docker port "$CONTAINER_NAME" 2>/dev/null | grep "25565" | head -1)
printf "${CYAN}Server port:${NC} %s\n" "${ports:-not mapped}"
# Container IP
local container_ip=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$CONTAINER_NAME")
printf "${CYAN}Container IP:${NC} %s\n" "${container_ip:-unknown}"
log_separator
}
# ============================================
# Player info
# ============================================
show_players() {
log_info "Online players"
log_separator
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
log_warning "Container is not running"
return 1
fi
# Try to fetch the player list via rcon
local player_list=$(docker exec "$CONTAINER_NAME" rcon-cli list 2>/dev/null || echo "unable to fetch player info")
if [[ "$player_list" =~ "There are" ]]; then
echo "$player_list"
else
log_warning "Unable to fetch player info (RCON may need to be configured)"
fi
log_separator
}
# ============================================
# Log monitoring
# ============================================
show_recent_logs() {
local lines="${1:-20}"
log_info "Last $lines log lines"
log_separator
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
log_warning "Container is not running"
return 1
fi
docker logs --tail "$lines" "$CONTAINER_NAME" 2>&1
log_separator
}
# Scan the log for errors
check_errors() {
log_info "Checking for errors in the log"
log_separator
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
log_warning "Container is not running"
return 1
fi
# grep -c already prints 0 when nothing matches (and exits non-zero), so
# use "|| true" - "|| echo 0" would emit a second line and break the tests
local error_count=$(docker logs --tail 1000 "$CONTAINER_NAME" 2>&1 | grep -ic "error" || true)
local warn_count=$(docker logs --tail 1000 "$CONTAINER_NAME" 2>&1 | grep -ic "warn" || true)
if [[ $error_count -gt 0 ]]; then
log_warning "Found $error_count error(s)"
else
log_success "No errors in the log"
fi
if [[ $warn_count -gt 0 ]]; then
log_warning "Found $warn_count warning(s)"
fi
log_separator
}
# ============================================
# Full status report
# ============================================
show_full_status() {
log_info "=== Minecraft server full status ==="
log_separator
# Container check
if ! check_container; then
return 1
fi
log_separator
# Server info
show_server_info
# Resource usage
show_resource_usage
# Player info
show_players
# Error check
check_errors
log_success "Status check finished"
}
# ============================================
# Continuous monitoring
# ============================================
watch_mode() {
local interval="${1:-5}"
log_info "Starting continuous monitoring (refreshes every ${interval}s, Ctrl+C to quit)"
log_separator
while true; do
clear
echo "=== Minecraft server monitor ==="
echo "Refreshed at: $(date '+%Y-%m-%d %H:%M:%S')"
echo ""
show_server_info 2>/dev/null || true
show_resource_usage 2>/dev/null || true
show_players 2>/dev/null || true
sleep "$interval"
done
}
# ============================================
# Main
# ============================================
show_usage() {
cat <<EOF
Minecraft server monitoring tool
Usage: $(basename "$0") [command] [options]
Commands:
status Show the full status (default)
container Check container status
resources Show resource usage
server Show server info
players Show online players
logs [n] Show the last n log lines (default: 20)
errors Scan the log for errors
watch [sec] Continuous monitoring (refreshes every 5s by default)
help Show this help message
Examples:
$(basename "$0") # Full status
$(basename "$0") resources # Resource usage only
$(basename "$0") logs 50 # Last 50 log lines
$(basename "$0") watch 10 # Refresh every 10 seconds
Container name: $CONTAINER_NAME
EOF
}
main() {
local command="${1:-status}"
shift || true
# Initialize logging
init_log "Minecraft server monitor"
case "$command" in
status)
show_full_status
;;
container)
check_container
;;
resources | res)
show_resource_usage
;;
server | info)
show_server_info
;;
players | player)
show_players
;;
logs | log)
show_recent_logs "${1:-20}"
;;
errors | error)
check_errors
;;
watch)
watch_mode "${1:-5}"
;;
help | --help | -h)
show_usage
;;
*)
log_error "Unknown command: $command"
echo ""
show_usage
exit 1
;;
esac
if [[ "$command" != "watch" ]]; then
show_log_file
fi
}
# Run main
main "$@"


@ -1,184 +0,0 @@
#!/usr/bin/env bash
# Minecraft server initialization script
# For first-time deployment or environment resets
set -e
# Load shared utilities
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/utils.sh"
# ============================================
# Configuration
# ============================================
readonly PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
# ============================================
# Initialization checks
# ============================================
check_prerequisites() {
log_info "Checking system environment..."
# Check Docker
if ! check_docker; then
log_error "Docker environment check failed"
exit 1
fi
# Check required files
local required_files=(
"$PROJECT_ROOT/docker-compose.yml"
"$PROJECT_ROOT/.env"
)
for file in "${required_files[@]}"; do
if ! check_file "$file"; then
log_error "Required file missing: $file"
exit 1
fi
done
log_success "Environment check passed"
}
# ============================================
# Directory structure
# ============================================
init_directories() {
log_info "Initializing directory structure..."
local dirs=(
"$PROJECT_ROOT/data"
"$PROJECT_ROOT/mods"
"$PROJECT_ROOT/configs"
"$PROJECT_ROOT/backups"
"$PROJECT_ROOT/logs"
)
for dir in "${dirs[@]}"; do
if [[ ! -d "$dir" ]]; then
mkdir -p "$dir"
log_success "Created directory: $(basename "$dir")"
else
log_info "Directory already exists: $(basename "$dir")"
fi
done
}
# ============================================
# Configuration file checks
# ============================================
check_env_config() {
log_info "Checking environment configuration..."
local env_file="$PROJECT_ROOT/.env"
# Check critical settings
if grep -q "your_rcon_password_here" "$env_file"; then
log_warning "RCON_PASSWORD in .env is not configured"
log_warning "Please edit .env and set a secure password"
fi
# Check UID/GID
local current_uid=$(id -u)
local current_gid=$(id -g)
if ! grep -q "UID=$current_uid" "$env_file"; then
log_info "Current user UID: $current_uid"
log_warning "Consider updating UID in .env to: $current_uid"
fi
if ! grep -q "GID=$current_gid" "$env_file"; then
log_info "Current user GID: $current_gid"
log_warning "Consider updating GID in .env to: $current_gid"
fi
}
# ============================================
# Mods check
# ============================================
init_mods() {
log_info "Checking mod status..."
local mods_count=$(find "$PROJECT_ROOT/mods" -name "*.jar" 2>/dev/null | wc -l)
if [[ $mods_count -eq 0 ]]; then
log_warning "No mods found"
log_info "Consider running: make minecraft-mods-download"
else
log_success "Found $mods_count mod(s)"
fi
}
# ============================================
# Permissions
# ============================================
fix_permissions() {
log_info "Fixing file permissions..."
# Make scripts executable
find "$PROJECT_ROOT/scripts" -name "*.sh" -type f -exec chmod +x {} \;
# Make the data directory writable
if [[ -d "$PROJECT_ROOT/data" ]]; then
chmod -R 755 "$PROJECT_ROOT/data"
fi
log_success "Permissions fixed"
}
# ============================================
# Configuration summary
# ============================================
show_summary() {
log_separator
log_success "Initialization complete!"
echo ""
log_info "Configuration summary:"
log_info " Project directory: $PROJECT_ROOT"
log_info " Docker Compose: $(get_docker_compose_cmd)"
log_info " Mod count: $(find "$PROJECT_ROOT/mods" -name "*.jar" 2>/dev/null | wc -l)"
echo ""
log_info "Next steps:"
log_info " 1. Review .env and set RCON_PASSWORD"
log_info " 2. To download mods: make minecraft-mods-download"
log_info " 3. Start the server: make minecraft-up"
log_info " 4. View logs: make minecraft-logs"
echo ""
show_log_file
}
# ============================================
# Main
# ============================================
main() {
# Initialize logging
init_log "Minecraft server initialization"
log_system_info
log_info "Initializing the Minecraft server environment..."
log_separator
# Run initialization steps
check_prerequisites
init_directories
check_env_config
init_mods
fix_permissions
# Show summary
show_summary
}
# Run main
main "$@"


@ -1,339 +0,0 @@
#!/usr/bin/env bash
# Minecraft automation - shared utility library
# Provides logging, environment checks, Docker helpers, and more
# ============================================
# Color definitions
# ============================================
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly BLUE='\033[0;34m'
readonly CYAN='\033[0;36m'
readonly NC='\033[0m' # No Color
# ============================================
# Globals
# ============================================
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
LOG_DIR="$PROJECT_ROOT/logs"
LOG_FILE="$LOG_DIR/automation-$(date +%Y%m%d).log"
# Ensure the log directory exists
mkdir -p "$LOG_DIR"
# ============================================
# Logging functions
# ============================================
# Initialize the log file
init_log() {
local title="${1:-Minecraft automation task}"
{
echo "=========================================="
echo "$title"
echo "Started at: $(date '+%Y-%m-%d %H:%M:%S')"
echo "Working directory: $(pwd)"
echo "User: $(whoami)"
echo "=========================================="
echo ""
} >>"$LOG_FILE"
}
# Log an info message
log_info() {
local msg="$1"
local timestamp=$(date '+%H:%M:%S')
echo -e "${BLUE}[INFO]${NC} ${timestamp} $msg"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [INFO] $msg" >>"$LOG_FILE"
}
# Log a success message
log_success() {
local msg="$1"
local timestamp=$(date '+%H:%M:%S')
echo -e "${GREEN}[✓]${NC} ${timestamp} $msg"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [SUCCESS] $msg" >>"$LOG_FILE"
}
# Log a warning
log_warning() {
local msg="$1"
local timestamp=$(date '+%H:%M:%S')
echo -e "${YELLOW}[⚠]${NC} ${timestamp} $msg"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [WARNING] $msg" >>"$LOG_FILE"
}
# Log an error
log_error() {
local msg="$1"
local timestamp=$(date '+%H:%M:%S')
echo -e "${RED}[✗]${NC} ${timestamp} $msg"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [ERROR] $msg" >>"$LOG_FILE"
}
# Log an executed command
log_command() {
local cmd="$1"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [COMMAND] $cmd" >>"$LOG_FILE"
}
# Log a separator line
log_separator() {
echo "" | tee -a "$LOG_FILE"
echo "==========================================" >>"$LOG_FILE"
echo "" | tee -a "$LOG_FILE"
}
# Show the log file location
show_log_file() {
log_info "Full log: $LOG_FILE"
}
# ============================================
# Environment checks
# ============================================
# Check that a command exists
check_command() {
local cmd="$1"
if ! command -v "$cmd" &>/dev/null; then
log_error "Command not found: $cmd"
return 1
fi
return 0
}
# Check the Docker environment
check_docker() {
log_info "Checking Docker environment..."
if ! check_command docker; then
log_error "Docker is not installed. See: https://docs.docker.com/get-docker/"
return 1
fi
if ! docker info &>/dev/null; then
log_error "Docker daemon is not running"
return 1
fi
# "command -v docker compose" would look for a binary literally named
# "compose"; probe the plugin with "docker compose version" instead
if ! docker compose version &>/dev/null && ! command -v docker-compose &>/dev/null; then
log_error "Docker Compose is not installed"
return 1
fi
log_success "Docker environment OK"
return 0
}
# Check that a file exists
check_file() {
local file="$1"
local desc="${2:-File}"
if [[ ! -f "$file" ]]; then
log_error "$desc does not exist: $file"
return 1
fi
return 0
}
# Check that a directory exists
check_dir() {
local dir="$1"
local desc="${2:-Directory}"
if [[ ! -d "$dir" ]]; then
log_error "$desc does not exist: $dir"
return 1
fi
return 0
}
# ============================================
# Docker helpers
# ============================================
# Resolve the Docker Compose command
get_docker_compose_cmd() {
if command -v docker &>/dev/null && docker compose version &>/dev/null; then
echo "docker compose"
elif command -v docker-compose &>/dev/null; then
echo "docker-compose"
else
log_error "Docker Compose not found"
return 1
fi
}
# Check container status
check_container_status() {
local container_name="${1:-mc-fabric-1.21.1}"
if docker ps --format '{{.Names}}' | grep -q "^${container_name}$"; then
echo "running"
return 0
elif docker ps -a --format '{{.Names}}' | grep -q "^${container_name}$"; then
echo "stopped"
return 1
else
echo "not_found"
return 2
fi
}
# Wait for a container to start
wait_for_container() {
local container_name="${1:-mc-fabric-1.21.1}"
local timeout="${2:-60}"
local elapsed=0
log_info "Waiting for container to start: $container_name"
while [[ $elapsed -lt $timeout ]]; do
if docker ps --format '{{.Names}}' | grep -q "^${container_name}$"; then
log_success "Container is up"
return 0
fi
sleep 2
((elapsed += 2))
echo -n "."
done
echo ""
log_error "Timed out waiting for container"
return 1
}
# Run a Docker Compose command
docker_compose_exec() {
local cmd="$1"
shift
local compose_cmd=$(get_docker_compose_cmd)
if [[ -z "$compose_cmd" ]]; then
return 1
fi
log_command "$compose_cmd $cmd $*"
cd "$PROJECT_ROOT" && $compose_cmd $cmd "$@"
}
# ============================================
# File helpers
# ============================================
# Create a backup
create_backup() {
local source="$1"
local backup_dir="${2:-$PROJECT_ROOT/backups}"
local name=$(basename "$source")
local timestamp=$(date +%Y%m%d-%H%M%S)
local backup_file="$backup_dir/${name}-${timestamp}.tar.gz"
mkdir -p "$backup_dir"
log_info "Creating backup: $name"
if [[ -d "$source" ]]; then
if tar -czf "$backup_file" -C "$(dirname "$source")" "$name" 2>/dev/null; then
local size=$(du -h "$backup_file" | cut -f1)
log_success "Backup created: $(basename "$backup_file") ($size)"
echo "$backup_file"
return 0
fi
elif [[ -f "$source" ]]; then
if cp "$source" "$backup_file"; then
log_success "Backup created: $(basename "$backup_file")"
echo "$backup_file"
return 0
fi
fi
log_error "Backup failed: $source"
return 1
}
# Remove old backups
cleanup_old_backups() {
local backup_dir="$1"
local keep_count="${2:-5}"
if [[ ! -d "$backup_dir" ]]; then
return 0
fi
log_info "Cleaning up old backups (keeping the latest $keep_count)"
local backup_count=$(find "$backup_dir" -name "*.tar.gz" | wc -l)
if [[ $backup_count -le $keep_count ]]; then
log_info "Current backup count: $backup_count, nothing to clean"
return 0
fi
find "$backup_dir" -name "*.tar.gz" -type f -printf '%T@ %p\n' | \
sort -rn | \
tail -n +$((keep_count + 1)) | \
cut -d' ' -f2- | \
while read -r file; do
rm -f "$file"
log_info "Deleted old backup: $(basename "$file")"
done
log_success "Backup cleanup finished"
}
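```shell
# Sketch of the keep-newest-N pipeline above, run against throwaway files
# with fabricated mtimes (requires GNU find/touch; -printf '%T@' prints the
# mtime epoch used for the newest-first sort, and tail -n +K drops the
# kept entries from the deletion list):
dir=$(mktemp -d)
for i in 1 2 3 4; do touch -d "@$((1000 + i))" "$dir/b$i.tar.gz"; done
keep=2
find "$dir" -name "*.tar.gz" -type f -printf '%T@ %p\n' |
sort -rn |
tail -n +$((keep + 1)) |
cut -d' ' -f2- |
while read -r file; do rm -f "$file"; done
ls "$dir"   # only the two newest remain: b3.tar.gz b4.tar.gz
```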
# ============================================
# Network checks
# ============================================
# Check network connectivity
check_network() {
local url="${1:-https://api.modrinth.com/v2/project/fabric-api}"
if curl -s --connect-timeout 5 "$url" >/dev/null; then
return 0
else
log_warning "Network check failed: $url"
return 1
fi
}
# ============================================
# System info
# ============================================
# Log system information
log_system_info() {
{
echo "=== System info ==="
echo "Host: $(hostname)"
echo "OS: $(uname -s) $(uname -r)"
echo "Architecture: $(uname -m)"
if command -v docker &>/dev/null; then
echo "Docker: $(docker --version 2>/dev/null || echo 'not installed')"
fi
if command -v free &>/dev/null; then
echo "Memory: $(free -h | grep Mem: | awk '{print $3 "/" $2}')"
fi
echo "Disk: $(df -h "$PROJECT_ROOT" | tail -1 | awk '{print $3 "/" $2 " (" $5 " used)"}')"
echo "===================="
} >>"$LOG_FILE"
}
# ============================================
# Main
# ============================================
# If this script is executed directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
echo "This is a utility library; load it with the source command"
echo "Usage: source $(basename "$0")"
exit 1
fi


@ -1,11 +0,0 @@
```bash
# Structure
minecraft/
├── automatic/
│ ├── deploy.sh # main deployment script
│ ├── download-mods.sh # mod download script
│ ├── logger.sh # logging module
│ ├── requirements.txt # mods list
│ └── server.properties # server configuration
└── mods/ # downloaded mods (managed by the scripts)
```


@ -1,256 +0,0 @@
#!/usr/bin/env bash
# Minecraft server automated deployment
# Deploys mods and configuration to server directory
# Load the logging module
source "$(dirname "$0")/logger.sh"
# Configuration
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
MODS_SOURCE_DIR="$PROJECT_ROOT/mods"
SERVER_TARGET_DIR="$HOME/repo/mc-fabric"
BACKUP_DIR="$SERVER_TARGET_DIR/backups"
# Initialize logging
main init
log_start "Minecraft server automated deployment"
START_TIME=$(date +%s)
log_info "=== Minecraft server automated deployment ==="
log_info "Script directory: $SCRIPT_DIR"
log_info "Project root: $PROJECT_ROOT"
log_info "Mods source directory: $MODS_SOURCE_DIR"
log_info "Server target directory: $SERVER_TARGET_DIR"
# Check that required directories exist
check_directories() {
log_info "Checking directory structure..."
if [[ ! -d "$MODS_SOURCE_DIR" ]]; then
log_warning "Mods source directory missing, creating: $MODS_SOURCE_DIR"
mkdir -p "$MODS_SOURCE_DIR"
fi
if [[ ! -d "$SERVER_TARGET_DIR" ]]; then
log_error "Server target directory does not exist: $SERVER_TARGET_DIR"
log_error "Create the server directory first or change SERVER_TARGET_DIR"
exit 1
fi
log_success "Directory check passed"
}
# Back up existing mods
backup_existing_mods() {
local backup_name="mods-backup-$(date +%Y%m%d-%H%M%S)"
local target_mods_dir="$SERVER_TARGET_DIR/mods"
if [[ -d "$target_mods_dir" && "$(ls -A "$target_mods_dir" 2>/dev/null)" ]]; then
log_info "Backing up existing mods..."
mkdir -p "$BACKUP_DIR"
if tar -czf "$BACKUP_DIR/$backup_name.tar.gz" -C "$SERVER_TARGET_DIR" "mods"; then
log_success "Mods backup created: $backup_name.tar.gz"
# Backup statistics
local file_count=$(find "$target_mods_dir" -name "*.jar" | wc -l)
local total_size=$(du -sh "$target_mods_dir" | cut -f1)
log_info "Backed up: $file_count mod file(s), total size: $total_size"
else
log_error "Mods backup failed"
return 1
fi
else
log_info "No existing mods to back up"
fi
}
# Download mods if needed
download_mods_if_needed() {
log_info "Checking mod download state..."
local mods_count=$(find "$MODS_SOURCE_DIR" -name "*.jar" | wc -l)
if [[ $mods_count -eq 0 ]]; then
log_warning "No mods found, starting download..."
if [[ -f "$SCRIPT_DIR/download-mods.sh" ]]; then
if "$SCRIPT_DIR/download-mods.sh"; then
log_success "Mods downloaded"
else
log_error "Mod download failed"
return 1
fi
else
log_error "Download script not found: $SCRIPT_DIR/download-mods.sh"
return 1
fi
else
log_success "Found $mods_count mod file(s), skipping download"
# Show the mod list
log_info "Current mod list:"
find "$MODS_SOURCE_DIR" -name "*.jar" -exec basename {} \; | while read -r mod; do
log_info " - $mod"
done
fi
}
# Deploy mods to the server
deploy_mods() {
log_info "Deploying mods to the server..."
local target_mods_dir="$SERVER_TARGET_DIR/mods"
# Create the target mods directory
mkdir -p "$target_mods_dir"
# Empty the target directory (after the backup)
if [[ -d "$target_mods_dir" ]]; then
log_info "Emptying target mods directory..."
rm -f "$target_mods_dir"/*.jar
# Verify the directory was emptied
local remaining_files=$(find "$target_mods_dir" -name "*.jar" | wc -l)
if [[ $remaining_files -ne 0 ]]; then
log_warning "$remaining_files file(s) could not be deleted; they may be in use"
fi
fi
# Copy the new mods
log_info "Copying mod files..."
local copied_count=0
for mod_file in "$MODS_SOURCE_DIR"/*.jar; do
if [[ -f "$mod_file" ]]; then
local filename=$(basename "$mod_file")
if cp "$mod_file" "$target_mods_dir/"; then
log_success "Copied: $filename"
((++copied_count))
else
log_error "Copy failed: $filename"
fi
fi
done
if [[ $copied_count -eq 0 ]]; then
log_error "No mod files were copied"
return 1
fi
log_success "Mods deployed: copied $copied_count file(s)"
# Verify the deployment
local deployed_count=$(find "$target_mods_dir" -name "*.jar" | wc -l)
log_info "Files now in the server mods directory: $deployed_count"
}
# Deploy server configuration
deploy_server_config() {
log_info "Deploying server configuration..."
local config_source="$SCRIPT_DIR/server.properties"
local config_target="$SERVER_TARGET_DIR/server.properties"
if [[ -f "$config_source" ]]; then
# Snapshot the old config BEFORE overwriting it, so the diff below
# actually shows what changed (the original copied to .old afterwards,
# which saved the new file instead)
if [[ -f "$config_target" ]]; then
cp "$config_target" "${config_target}.old"
fi
if cp "$config_source" "$config_target"; then
log_success "Server configuration deployed"
# Show the config diff (if an old config exists)
if [[ -f "${config_target}.old" ]]; then
log_info "Configuration changes:"
diff -u "${config_target}.old" "$config_target" | head -20
fi
else
log_error "Server configuration deployment failed"
fi
else
log_warning "Server configuration file not found: $config_source"
fi
}
# Check server status
check_server_status() {
log_info "Checking server status..."
# Look for the server process
if pgrep -f "fabric-server-mc" >/dev/null; then
log_warning "Server is running; it must be restarted after deployment"
SERVER_RUNNING=true
else
log_info "Server is not currently running"
SERVER_RUNNING=false
fi
# Check essential files
local essential_files=(
"$SERVER_TARGET_DIR/eula.txt"
"$SERVER_TARGET_DIR/server.properties"
"$SERVER_TARGET_DIR/fabric-server-mc.1.21.1-loader.0.17.2-launcher.1.1.0.jar"
)
for file in "${essential_files[@]}"; do
if [[ -f "$file" ]]; then
log_success "Present: $(basename "$file")"
else
log_warning "Missing: $(basename "$file")"
fi
done
}
# Show deployment summary
show_deployment_summary() {
log_separator
log_success "=== Deployment summary ==="
local deployed_mods=$(find "$SERVER_TARGET_DIR/mods" -name "*.jar" | wc -l)
local source_mods=$(find "$MODS_SOURCE_DIR" -name "*.jar" | wc -l)
log_info "Deployed mods: $deployed_mods"
log_info "Source mods: $source_mods"
log_info "Backup location: $BACKUP_DIR"
if [[ "$SERVER_RUNNING" == "true" ]]; then
log_warning "⚠️ Server is running and must be restarted to apply changes"
log_info "Restart: cd $SERVER_TARGET_DIR && ./boot.sh stop && ./boot.sh start"
else
log_info "Start the server: cd $SERVER_TARGET_DIR && ./boot.sh start"
fi
log_info "View server logs: tail -f $SERVER_TARGET_DIR/logs/latest.log"
}
# Main deployment flow
main_deployment() {
log_info "Starting automated deployment..."
# Run each step
check_directories
check_server_status
backup_existing_mods
download_mods_if_needed
deploy_mods
deploy_server_config
log_success "All deployment steps finished"
}
# Run the deployment
if main_deployment; then
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
show_deployment_summary
log_end 0 "$DURATION"
show_log_location
else
log_error "Deployment failed"
log_end 1
exit 1
fi


@ -1,130 +0,0 @@
#!/usr/bin/env bash
# Download Minecraft mods from Modrinth
# Uses requirements.txt to specify mods
# Load the logging module
source "$(dirname "$0")/logger.sh"
MODS_FILE="./requirements.txt"
MODS_DIR="./data/mods"
# Initialize logging
main init
log_start "Minecraft 1.21.1 Fabric mods download"
START_TIME=$(date +%s)
mkdir -p "$MODS_DIR"
log_info "Starting Minecraft 1.21.1 Fabric mods download..."
log_info "Mods file: $MODS_FILE"
log_info "Target directory: $MODS_DIR"
# Test network connectivity
log_info "Testing Modrinth API connectivity..."
if ! curl -s "https://api.modrinth.com/v2/project/fabric-api" >/dev/null; then
log_error "Cannot reach the Modrinth API; please check the network"
exit 1
fi
log_success "API connection OK"
download_mod() {
local project_slug="$1"
local mod_name="$2"
log_info "Processing: $mod_name ($project_slug)"
# Fetch project info
project_info=$(curl -s "https://api.modrinth.com/v2/project/$project_slug")
if [[ $? -ne 0 || -z "$project_info" ]]; then
log_error "Cannot fetch project info: $project_slug"
return 1
fi
# Fetch version info
versions_url="https://api.modrinth.com/v2/project/$project_slug/version?game_versions=%5B%221.21.1%22%5D&loaders=%5B%22fabric%22%5D"
log_info "Fetching version info..."
versions_response=$(curl -s "$versions_url")
if [[ $? -ne 0 || -z "$versions_response" ]]; then
log_error "Cannot fetch version info"
return 1
fi
# Check for an empty result
if [[ "$versions_response" == "[]" ]]; then
log_warning "No 1.21.1 Fabric build found for $mod_name"
return 1
fi
# Extract the latest version's details
filename=$(echo "$versions_response" | grep -o '"filename":"[^"]*"' | head -1 | cut -d'"' -f4)
download_url=$(echo "$versions_response" | grep -o '"url":"[^"]*"' | head -1 | cut -d'"' -f4)
if [[ -z "$filename" || -z "$download_url" ]]; then
log_error "Cannot parse download info"
return 1
fi
log_info "File name: $filename"
log_info "Download URL: $download_url"
# Download the file
log_info "Downloading..."
if curl -L -o "$MODS_DIR/$filename" "$download_url"; then
file_size=$(stat -c%s "$MODS_DIR/$filename" 2>/dev/null || stat -f%z "$MODS_DIR/$filename" 2>/dev/null)
if [[ $file_size -gt 1000 ]]; then
log_success "Downloaded: $filename ($((file_size / 1024)) KB)"
return 0
else
log_error "Unexpected file size: $filename (only $file_size bytes)"
rm -f "$MODS_DIR/$filename"
return 1
fi
else
log_error "Download failed"
return 1
fi
}
# Read the mods file
success_count=0
fail_count=0
while IFS='|' read -r mod_name project_slug version; do
# Trim whitespace
mod_name=$(echo "$mod_name" | xargs)
project_slug=$(echo "$project_slug" | xargs)
version=$(echo "$version" | xargs)
# Skip blank lines and comments
if [[ -z "$mod_name" || "$mod_name" == \#* ]]; then
continue
fi
log_separator
if download_mod "$project_slug" "$mod_name"; then
# Pre-increment: ((success_count++)) evaluates to 0 on the first hit,
# which set -e treats as a failure
((++success_count))
else
((++fail_count))
fi
sleep 1
done <"$MODS_FILE"
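A note on the pre-increment form used above: under `set -e`, `((expr))` returns status 1 when the expression evaluates to 0, so `((n++))` starting from zero aborts the script while `((++n))` does not. A minimal reproduction:

```shell
# ((n++)) expands to the old value (0 -> status 1, script dies);
# ((++n)) expands to the new value (1 -> status 0, script continues).
post=$(bash -c 'set -e; n=0; ((n++)); echo reached' || echo aborted)
pre=$(bash -c 'set -e; n=0; ((++n)); echo reached')
echo "n++: $post, ++n: $pre"   # n++: aborted, ++n: reached
```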
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
log_separator
log_info "Minecraft 1.21.1 Fabric mods download finished!"
log_info "Location: $MODS_DIR"
log_info "Stats: $success_count succeeded, $fail_count failed"
# Record the final file listing (log_command both echoes and logs it)
log_info "Final file listing:"
log_command "ls -la \"$MODS_DIR\""
log_end 0 "$DURATION"
show_log_location


@ -1,169 +0,0 @@
#!/usr/bin/env bash
# Logger utility for Minecraft automation scripts
# Provides consistent logging across all scripts
LOG_FILE="./logs/mod-install-log.txt"
mkdir -p "$(dirname "$LOG_FILE")"
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Initialize the log file
init_log() {
echo "==================================================" >"$LOG_FILE"
echo "Minecraft mod installation log" >>"$LOG_FILE"
echo "Started: $(date)" >>"$LOG_FILE"
echo "==================================================" >>"$LOG_FILE"
echo "" >>"$LOG_FILE"
}
# Log system information
log_system_info() {
{
echo "=== System info ==="
echo "Hostname: $(hostname)"
echo "OS: $(uname -s) $(uname -r)"
echo "Architecture: $(uname -m)"
# CPU info
if command -v nproc >/dev/null; then
echo "CPU cores: $(nproc)"
fi
if command -v lscpu >/dev/null; then
echo "CPU model: $(lscpu | grep "Model name" | cut -d: -f2 | xargs)"
fi
# Memory info
if command -v free >/dev/null; then
echo "Total memory: $(free -h | grep Mem: | awk '{print $2}')"
echo "Available memory: $(free -h | grep Mem: | awk '{print $7}')"
fi
# Disk info
echo "Disk usage:"
df -h . | tail -1 | awk '{print "  Total: " $2 ", available: " $4 ", used: " $5}'
# Network info
echo "Public IP: $(curl -s ifconfig.me 2>/dev/null || echo "unavailable")"
echo ""
} >>"$LOG_FILE"
}
# Log the start of a command
log_start() {
local command_name="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
{
echo "╔═══════════════════════════════════════════════"
echo "║ Command: $command_name"
echo "║ Started: $timestamp"
echo "║ Working directory: $(pwd)"
echo "║ User: $(whoami)"
echo "╚═══════════════════════════════════════════════"
echo ""
} >>"$LOG_FILE"
}
# Log the end of a command
log_end() {
local exit_code="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
local duration="${2:-0}"
{
echo ""
echo "╔═══════════════════════════════════════════════"
echo "║ Finished: $timestamp"
if [[ -n "$duration" && "$duration" != "0" ]]; then
echo "║ Duration: ${duration}s"
fi
echo "║ Exit code: $exit_code"
echo "║ Status: $([ $exit_code -eq 0 ] && echo "success" || echo "failure")"
echo "╚═══════════════════════════════════════════════"
echo ""
echo ""
} >>"$LOG_FILE"
}
# Log an informational message (to both screen and log file)
log_info() {
local message="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
echo -e "${BLUE}[INFO]${NC} $message"
echo "[$timestamp] [INFO] $message" >>"$LOG_FILE"
}
# Log a success message
log_success() {
local message="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
echo -e "${GREEN}[SUCCESS]${NC} $message"
echo "[$timestamp] [SUCCESS] $message" >>"$LOG_FILE"
}
# Log a warning
log_warning() {
local message="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
echo -e "${YELLOW}[WARNING]${NC} $message"
echo "[$timestamp] [WARNING] $message" >>"$LOG_FILE"
}
# Log an error
log_error() {
local message="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
echo -e "${RED}[ERROR]${NC} $message"
echo "[$timestamp] [ERROR] $message" >>"$LOG_FILE"
}
# Run a command, logging its output
log_command() {
local command="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
{
echo "[$timestamp] [COMMAND] Running: $command"
echo "[$timestamp] [OUTPUT]"
} >>"$LOG_FILE"
# Execute the command and capture its output
eval "$command" 2>&1 | while IFS= read -r line; do
echo "[$timestamp] [OUTPUT] $line" >>"$LOG_FILE"
echo "$line" # also echo to the screen
done
local exit_code=${PIPESTATUS[0]}
echo "[$timestamp] [COMMAND] Exit code: $exit_code" >>"$LOG_FILE"
return $exit_code
}
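`${PIPESTATUS[0]}` is only valid in the very next statement after the pipeline; any intervening command overwrites it. A quick check of the capture pattern used in `log_command`:

```shell
# The left-hand command's status survives into the immediately
# following statement, even though the pipeline as a whole succeeded.
false | cat
left=${PIPESTATUS[0]}    # false's status, not cat's
echo "left exit: $left"
```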
# Log a separator line
log_separator() {
echo "--------------------------------------------------" >>"$LOG_FILE"
}
# Show where the log file lives
show_log_location() {
log_info "Detailed log saved to: $LOG_FILE"
log_info "Follow it with: tail -f $LOG_FILE"
}
# Main entry point
main() {
if [[ "$1" == "init" ]]; then
init_log
log_system_info
fi
}
# Initialize when executed directly rather than sourced
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi


@ -1,19 +0,0 @@
# Minecraft 1.21.1 Fabric Mods
# Format: mod name | project slug | version (optional)
# Fabric API (required)
fabric-api | fabric-api | 0.100.3+1.21.1
# Lithium - performance optimization
lithium | lithium | 0.12.2+1.21.1
# Phosphor - lighting optimization
phosphor | phosphor | 0.8.2+1.21.1
# Sodium - rendering optimization
sodium | sodium | 0.5.11+1.21.1
# Iris Shaders - shader support
iris | iris | 1.7.2+1.21.1
# Note: the version may be omitted; the latest build is downloaded automatically
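The `name | slug | version` lines can be sanity-checked without hitting the API; a small sketch (sample list written to a temp file, field handling as in the download script):

```shell
# List the project slug of every non-comment, non-blank entry.
mods_file=$(mktemp)
cat > "$mods_file" <<'EOF'
# Fabric API (required)
fabric-api | fabric-api | 0.100.3+1.21.1
lithium | lithium
EOF
slugs=$(grep -vE '^[[:space:]]*(#|$)' "$mods_file" \
  | awk -F'|' '{gsub(/[[:space:]]/, "", $2); print $2}')
echo "$slugs"
rm -f "$mods_file"
```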


@ -1,53 +0,0 @@
# Minecraft server configuration
enable-jmx-monitoring=false
rcon.port=25575
level-seed=
gamemode=survival
enable-command-block=false
enable-query=false
generator-settings={}
enforce-secure-profile=true
level-name=world
motd=My Automated Fabric Server
query.port=25565
pvp=true
generate-structures=true
max-chained-neighbor-updates=1000000
difficulty=normal
network-compression-threshold=256
max-tick-time=60000
require-resource-pack=false
use-native-transport=true
max-players=20
online-mode=false
enable-status=true
allow-flight=false
broadcast-rcon-to-ops=true
view-distance=10
server-ip=
resource-pack-prompt=
allow-nether=true
server-port=25565
enable-rcon=true
sync-chunk-writes=true
op-permission-level=4
prevent-proxy-connections=false
hide-online-players=false
resource-pack=
entity-broadcast-range-percentage=100
simulation-distance=10
rcon.password=changeme123
player-idle-timeout=0
force-gamemode=false
rate-limit=0
hardcore=false
white-list=false
broadcast-console-to-ops=true
spawn-npcs=true
spawn-animals=true
function-permission-level=2
level-type=minecraft\:normal
spawn-monsters=true
enforce-whitelist=false
spawn-protection=16
max-world-size=29999984


@ -1,16 +1,25 @@
-# Time zone
-TZ=Asia/Shanghai
+# @name Nextcloud
+# @desc Nextcloud private cloud with MariaDB + Redis
+# @url https://nextcloud.com
+# @port NC_PORT
+TZ=Asia/Shanghai
-# Nextcloud admin account (used for first-run initialization)
-NEXTCLOUD_ADMIN_USER=admin
-NEXTCLOUD_ADMIN_PASSWORD=ChangeMe123!
-NEXTCLOUD_TRUSTED_DOMAINS=localhost
+# Web interface
+NC_PORT=8080
+# Admin account — created on first startup
+NC_ADMIN_USER=admin
+NC_ADMIN_PASSWORD=
+# Trusted domains — space-separated list of domains/IPs that can access Nextcloud
+NC_TRUSTED_DOMAINS=localhost
-# MariaDB database settings
-MYSQL_ROOT_PASSWORD=ChangeRoot123!
+# MariaDB database
 MYSQL_DATABASE=nextcloud
 MYSQL_USER=nextcloud
-MYSQL_PASSWORD=ChangeDb123!
+MYSQL_PASSWORD=
+MYSQL_ROOT_PASSWORD=
-# Redis password
-REDIS_PASSWORD=ChangeRedis123!
+# Redis cache
+REDIS_PASSWORD=


@ -1,40 +0,0 @@
# Nextcloud local storage hub
This directory provides a quick-start local private Nextcloud cloud (Nextcloud, MariaDB, and Redis).
## Directory layout
```
nextcloud/
├── compose.yaml      # Docker Compose configuration
├── .env.example      # Environment variable template; copy to .env and edit
└── README.md         # This document
```
## Usage
1. Copy the environment file and adjust as needed:
```bash
cp .env.example .env
```
2. Start the services:
```bash
docker compose up -d
```
3. After the first start, open `http://localhost:8080`, log in with the admin account configured in `.env`, and complete the initialization.
## Default components
- Nextcloud `nextcloud:stable-apache` (exposes port `8080`)
- MariaDB 11 (persisted in the `nextcloud_db` volume)
- Redis 7 (password-protected, used for caching)
## Data persistence
All critical data is mounted on named volumes under the local Docker data directory; switch to host bind mounts if preferred:
- `nextcloud_html` / `nextcloud_data` / `nextcloud_config` / `nextcloud_apps`
- `nextcloud_db`
- `nextcloud_redis`
## Common commands
- Tail logs: `docker compose logs -f nextcloud`
- Stop the stack: `docker compose down`
- Back up the database: `docker exec nextcloud-db mariadb-dump -unextcloud -p<password> nextcloud > backup.sql`
Extend as needed (reverse proxy, object storage backends, and so on).


@ -1,71 +1,75 @@
-version: "3.9"
 services:
   nextcloud:
     image: nextcloud:stable-apache
     container_name: nextcloud
-    restart: unless-stopped
     depends_on:
-      - db
-      - redis
+      db:
+        condition: service_healthy
+      redis:
+        condition: service_healthy
     ports:
-      - "8080:80"
+      - "${NC_PORT:-8080}:80"
     environment:
-      - TZ=${TZ:-Asia/Shanghai}
-      - NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER:-admin}
-      - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD:-ChangeMe123!}
-      - NEXTCLOUD_TRUSTED_DOMAINS=${NEXTCLOUD_TRUSTED_DOMAINS:-localhost}
-      - MYSQL_HOST=db
-      - MYSQL_DATABASE=${MYSQL_DATABASE:-nextcloud}
-      - MYSQL_USER=${MYSQL_USER:-nextcloud}
-      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-ChangeDb123!}
-      - REDIS_HOST=redis
-      - REDIS_HOST_PASSWORD=${REDIS_PASSWORD:-ChangeRedis123!}
+      TZ: "${TZ:-Asia/Shanghai}"
+      NEXTCLOUD_ADMIN_USER: "${NC_ADMIN_USER:-admin}"
+      NEXTCLOUD_ADMIN_PASSWORD: "${NC_ADMIN_PASSWORD}"
+      NEXTCLOUD_TRUSTED_DOMAINS: "${NC_TRUSTED_DOMAINS:-localhost}"
+      MYSQL_HOST: db
+      MYSQL_DATABASE: "${MYSQL_DATABASE:-nextcloud}"
+      MYSQL_USER: "${MYSQL_USER:-nextcloud}"
+      MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
+      REDIS_HOST: redis
+      REDIS_HOST_PASSWORD: "${REDIS_PASSWORD}"
     volumes:
-      - nextcloud_html:/var/www/html
-      - nextcloud_data:/var/www/html/data
-      - nextcloud_config:/var/www/html/config
-      - nextcloud_apps:/var/www/html/custom_apps
-    networks:
-      - nextcloud-network
+      - nc-html:/var/www/html
+      - nc-data:/var/www/html/data
+      - nc-config:/var/www/html/config
+      - nc-apps:/var/www/html/custom_apps
+    healthcheck:
+      test: ["CMD", "curl", "-fSs", "http://localhost/status.php"]
+      interval: 30s
+      timeout: 5s
+      retries: 5
+      start_period: 60s
+    restart: unless-stopped
   db:
     image: mariadb:11
     container_name: nextcloud-db
-    restart: unless-stopped
     command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb_read_only_compressed=OFF
     environment:
-      - TZ=${TZ:-Asia/Shanghai}
-      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-ChangeRoot123!}
-      - MYSQL_DATABASE=${MYSQL_DATABASE:-nextcloud}
-      - MYSQL_USER=${MYSQL_USER:-nextcloud}
-      - MYSQL_PASSWORD=${MYSQL_PASSWORD:-ChangeDb123!}
+      TZ: "${TZ:-Asia/Shanghai}"
+      MYSQL_ROOT_PASSWORD: "${MYSQL_ROOT_PASSWORD}"
+      MYSQL_DATABASE: "${MYSQL_DATABASE:-nextcloud}"
+      MYSQL_USER: "${MYSQL_USER:-nextcloud}"
+      MYSQL_PASSWORD: "${MYSQL_PASSWORD}"
     volumes:
-      - nextcloud_db:/var/lib/mysql
-    networks:
-      - nextcloud-network
+      - nc-db:/var/lib/mysql
+    healthcheck:
+      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
+      start_period: 30s
+    restart: unless-stopped
   redis:
     image: redis:7-alpine
     container_name: nextcloud-redis
+    command: ["redis-server", "--requirepass", "${REDIS_PASSWORD}"]
+    volumes:
+      - nc-redis:/data
+    healthcheck:
+      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
+      interval: 10s
+      timeout: 5s
+      retries: 3
     restart: unless-stopped
-    command: ["redis-server", "--requirepass", "${REDIS_PASSWORD:-ChangeRedis123!}"]
-    environment:
-      - TZ=${TZ:-Asia/Shanghai}
-      - REDIS_PASSWORD=${REDIS_PASSWORD:-ChangeRedis123!}
-    volumes:
-      - nextcloud_redis:/data
-    networks:
-      - nextcloud-network
 volumes:
-  nextcloud_html:
-  nextcloud_data:
-  nextcloud_config:
-  nextcloud_apps:
-  nextcloud_db:
-  nextcloud_redis:
+  nc-html:
+  nc-data:
+  nc-config:
+  nc-apps:
+  nc-db:
+  nc-redis:
-networks:
-  nextcloud-network:
-    driver: bridge


@ -1,7 +0,0 @@
# role: vps
# description: Postfix + Dovecot + OpenDKIM + SpamAssassin full email stack
DOMAIN=your-domain.com
MAIL_HOST=mail.your-domain.com
SERVER_IP=x.x.x.x
MAIL_USER=contact


@ -1,37 +0,0 @@
#!/usr/bin/env bash
# Deploys Postfix + Dovecot + OpenDKIM + SpamAssassin email stack.
# Usage: INFRA_DIR=/path/to/infra/services/email ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env DOMAIN MAIL_HOST MAIL_USER
log_info "Installing packages..."
apt-get install -y postfix dovecot-core dovecot-imapd dovecot-pop3d dovecot-lmtpd \
dovecot-sieve opendkim opendkim-tools spamassassin spamc
log_info "Deploying Postfix config..."
envsubst < "${INFRA_DIR}/postfix/main.cf" > /etc/postfix/main.cf
cp "${INFRA_DIR}/postfix/aliases" /etc/aliases
newaliases
log_info "Deploying Dovecot config..."
cp "${INFRA_DIR}/dovecot/dovecot.conf" /etc/dovecot/dovecot.conf
cp "${INFRA_DIR}/dovecot/99-stats-fix.conf" /etc/dovecot/conf.d/99-stats-fix.conf
log_info "Adding postfix to dovecot group..."
usermod -aG dovecot postfix
log_info "Enabling services..."
systemctl enable --now postfix dovecot opendkim spamassassin
log_info "Email stack deployed. Remaining manual steps:"
echo " 1. Run certbot for mail.${DOMAIN}"
echo " 2. Generate DKIM key: opendkim-genkey -b 2048 -d ${DOMAIN} -s mail -D /etc/opendkim/keys/${DOMAIN}/"
echo " 3. Add DNS records (see services/email/README.md)"


@ -1,10 +0,0 @@
# role: vps
# description: Forgejo self-hosted git service with optional GitHub mirror sync
GIT_DOMAIN=git.your-domain.com
# Optional: GitHub → Forgejo mirror sync (leave blank to skip cron setup)
GITHUB_USER=
GITHUB_TOKEN=
FORGEJO_URL=https://git.your-domain.com
FORGEJO_TOKEN=


@ -1,67 +0,0 @@
#!/usr/bin/env bash
# Deploys Forgejo (self-hosted git) via Docker on a VPS.
# Usage: INFRA_DIR=/path/to/infra/services/forgejo ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env GIT_DOMAIN
require_command docker "https://docs.docker.com/engine/install/"
find_template() {
local f="$1"
if [[ -n "${INFRA_DIR:-}" && -f "${INFRA_DIR}/$f" ]]; then
echo "${INFRA_DIR}/$f"
elif [[ -f "$SCRIPT_DIR/$f" ]]; then
echo "$SCRIPT_DIR/$f"
else
log_error "Template not found: $f"
return 1
fi
}
DEPLOY_DIR="/opt/forgejo"
log_info "Creating deploy directory $DEPLOY_DIR..."
mkdir -p "$DEPLOY_DIR/data"
log_info "Deploying docker-compose.yml..."
cp "$(find_template docker-compose.yml)" "$DEPLOY_DIR/docker-compose.yml"
log_info "Starting Forgejo container..."
docker compose -f "$DEPLOY_DIR/docker-compose.yml" up -d
log_info "Deploying nginx vhost for ${GIT_DOMAIN}..."
envsubst '${GIT_DOMAIN}' < "$(find_template nginx.conf.example)" > "/etc/nginx/sites-available/forgejo"
ln -sf /etc/nginx/sites-available/forgejo /etc/nginx/sites-enabled/forgejo
nginx -t
systemctl reload nginx
# Optional: set up GitHub mirror sync cron
if [[ -n "${GITHUB_USER:-}" && -n "${GITHUB_TOKEN:-}" && -n "${FORGEJO_URL:-}" && -n "${FORGEJO_TOKEN:-}" ]]; then
SYNC_SCRIPT="$(find_template migrate_github_to_forgejo.py 2>/dev/null || true)"
if [[ -n "$SYNC_SCRIPT" ]]; then
log_info "Installing GitHub mirror sync script..."
cp "$SYNC_SCRIPT" "$DEPLOY_DIR/migrate_github_to_forgejo.py"
mkdir -p "$DEPLOY_DIR/logs"
CRON_LINE="0 3 * * * cd $DEPLOY_DIR && GITHUB_USER=${GITHUB_USER} GITHUB_TOKEN=${GITHUB_TOKEN} FORGEJO_URL=${FORGEJO_URL} FORGEJO_TOKEN=${FORGEJO_TOKEN} python3 migrate_github_to_forgejo.py >> $DEPLOY_DIR/logs/mirror-sync.log 2>&1"
(crontab -l 2>/dev/null | grep -v "migrate_github_to_forgejo"; echo "$CRON_LINE") | crontab -
log_info "Cron sync installed (daily 03:00)"
else
log_warn "migrate_github_to_forgejo.py not found in INFRA_DIR — skipping cron setup"
fi
else
log_info "GitHub sync vars not set — skipping cron setup"
fi
log_info "Forgejo deployed at http://localhost:3000"
echo ""
echo "Remaining manual steps:"
echo " 1. Get TLS cert: certbot --nginx -d ${GIT_DOMAIN}"
echo " 2. Complete Forgejo initial setup at https://${GIT_DOMAIN}"
echo " 3. Generate Forgejo API token: https://${GIT_DOMAIN}/user/settings/applications"


@ -1,23 +0,0 @@
version: '3'
networks:
forgejo:
external: false
services:
server:
image: codeberg.org/forgejo/forgejo:9
container_name: forgejo
environment:
- USER_UID=1000
- USER_GID=1000
restart: always
networks:
- forgejo
volumes:
- /opt/forgejo/data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "2223:22"


@ -1,26 +0,0 @@
server {
listen 80;
server_name ${GIT_DOMAIN};
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name ${GIT_DOMAIN};
ssl_certificate /etc/letsencrypt/live/${GIT_DOMAIN}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/${GIT_DOMAIN}/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
client_max_body_size 512M;
}


@ -1,6 +0,0 @@
# role: homeserver
# description: FRP client (frpc) — tunnels local services through VPS
FRP_SERVER_ADDR=your-vps-ip
FRP_SERVER_PORT=7000
FRP_TOKEN=


@ -1,52 +0,0 @@
#!/usr/bin/env bash
# Deploys frpc (FRP client) on home machine.
# Usage: INFRA_DIR=/path/to/infra/services/frp/client ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env FRP_SERVER_ADDR FRP_SERVER_PORT FRP_TOKEN
find_template() {
local f="$1"
if [[ -n "${INFRA_DIR:-}" && -f "${INFRA_DIR}/$f" ]]; then
echo "${INFRA_DIR}/$f"
elif [[ -f "$SCRIPT_DIR/$f" ]]; then
echo "$SCRIPT_DIR/$f"
else
log_error "Template not found: $f"
return 1
fi
}
FRPC_BIN="/opt/frp/frpc"
if [[ -x "$FRPC_BIN" ]]; then
log_info "frpc already at $FRPC_BIN, skipping download"
else
log_info "Downloading FRP..."
VERSION=$(curl -s https://api.github.com/repos/fatedier/frp/releases/latest \
| python3 -c "import sys,json; print(json.load(sys.stdin)['tag_name'][1:])")
ARCHIVE="frp_${VERSION}_linux_amd64.tar.gz"
wget -q "https://github.com/fatedier/frp/releases/download/v${VERSION}/${ARCHIVE}"
tar -xf "$ARCHIVE"
mkdir -p /opt/frp
cp "frp_${VERSION}_linux_amd64/frpc" /opt/frp/
chmod +x /opt/frp/frpc
rm -rf "$ARCHIVE" "frp_${VERSION}_linux_amd64"
fi
log_info "Deploying config..."
envsubst < "$(find_template frpc.toml.example)" > /opt/frp/frpc.toml
log_info "Installing service..."
cp "$(find_template frpc.service)" /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now frpc
log_info "FRP client deployed, connecting to ${FRP_SERVER_ADDR}:${FRP_SERVER_PORT}"


@ -1,15 +0,0 @@
[Unit]
Description=FRP Client
After=network.target
[Service]
Type=simple
ExecStart=/opt/frp/frpc -c /opt/frp/frpc.toml
WorkingDirectory=/opt/frp
Restart=on-failure
RestartSec=5
User=root
Group=root
[Install]
WantedBy=multi-user.target


@ -1,21 +0,0 @@
serverAddr = "${FRP_SERVER_ADDR}"
serverPort = ${FRP_SERVER_PORT}
auth.method = "token"
auth.token = "${FRP_TOKEN}"
# Example: expose home SSH
[[proxies]]
name = "home-ssh"
type = "tcp"
localIP = "127.0.0.1"
localPort = 22
remotePort = 1234
# Example: expose Minecraft
[[proxies]]
name = "minecraft"
type = "tcp"
localIP = "127.0.0.1"
localPort = 25565
remotePort = 25565


@ -1,7 +0,0 @@
# role: vps
# description: FRP server (frps) — public entry point for reverse tunnels
FRP_TOKEN=
FRP_WEB_USER=admin
FRP_WEB_PASSWORD=
FRP_BIND_PORT=7000


@ -1,52 +0,0 @@
#!/usr/bin/env bash
# Deploys frps (FRP server) on VPS.
# Usage: INFRA_DIR=/path/to/infra/services/frp/server ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env FRP_TOKEN FRP_WEB_USER FRP_WEB_PASSWORD FRP_BIND_PORT
find_template() {
local f="$1"
if [[ -n "${INFRA_DIR:-}" && -f "${INFRA_DIR}/$f" ]]; then
echo "${INFRA_DIR}/$f"
elif [[ -f "$SCRIPT_DIR/$f" ]]; then
echo "$SCRIPT_DIR/$f"
else
log_error "Template not found: $f"
return 1
fi
}
FRPS_BIN="/opt/frp/frps"
if [[ -x "$FRPS_BIN" ]]; then
log_info "frps already at $FRPS_BIN, skipping download"
else
log_info "Downloading FRP..."
VERSION=$(curl -s https://api.github.com/repos/fatedier/frp/releases/latest \
| python3 -c "import sys,json; print(json.load(sys.stdin)['tag_name'][1:])")
ARCHIVE="frp_${VERSION}_linux_amd64.tar.gz"
wget -q "https://github.com/fatedier/frp/releases/download/v${VERSION}/${ARCHIVE}"
tar -xf "$ARCHIVE"
mkdir -p /opt/frp
cp "frp_${VERSION}_linux_amd64/frps" /opt/frp/
chmod +x /opt/frp/frps
rm -rf "$ARCHIVE" "frp_${VERSION}_linux_amd64"
fi
log_info "Deploying config..."
envsubst < "$(find_template frps.toml.example)" > /opt/frp/frps.toml
log_info "Installing service..."
cp "$(find_template frps.service)" /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now frps
log_info "FRP server deployed on port ${FRP_BIND_PORT}"


@ -1,15 +0,0 @@
[Unit]
Description=FRP Server
After=network.target
[Service]
Type=simple
ExecStart=/opt/frp/frps -c /opt/frp/frps.toml
WorkingDirectory=/opt/frp
Restart=on-failure
RestartSec=5
User=root
Group=root
[Install]
WantedBy=multi-user.target


@ -1,17 +0,0 @@
bindAddr = "0.0.0.0"
bindPort = ${FRP_BIND_PORT}
vhostHTTPPort = 8080
vhostHTTPSPort = 8443
webServer.addr = "127.0.0.1"
webServer.port = 7500
webServer.user = "${FRP_WEB_USER}"
webServer.password = "${FRP_WEB_PASSWORD}"
log.to = "./frps.log"
log.level = "info"
log.maxDays = 3
auth.method = "token"
auth.token = "${FRP_TOKEN}"


@ -1,6 +0,0 @@
# role: vps
# description: Galene video conferencing server
GALENE_VERSION=0.9.2
GALENE_HTTP_ADDR=127.0.0.1:8443
GALENE_TURN_ADDR=x.x.x.x:1194
GALENE_UDP_RANGE=10000-10100


@ -1,81 +0,0 @@
#!/usr/bin/env bash
# Deploys Galene video conferencing server.
# https://github.com/jech/galene
#
# Usage: INFRA_DIR=/path/to/infra/services/galene ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env GALENE_VERSION GALENE_HTTP_ADDR GALENE_TURN_ADDR
INSTALL_DIR="/opt/galene"
BIN="$INSTALL_DIR/galene"
if [[ -x "$BIN" ]]; then
log_info "galene already at $BIN, skipping download"
else
log_info "Downloading Galene ${GALENE_VERSION}..."
ARCH="$(uname -m)"
case "$ARCH" in
x86_64) ARCH="amd64" ;;
aarch64) ARCH="arm64" ;;
*) log_error "Unsupported arch: $ARCH"; exit 1 ;;
esac
URL="https://github.com/jech/galene/releases/download/galene-${GALENE_VERSION}/galene-${GALENE_VERSION}-linux-${ARCH}.tar.gz"
TMP="$(mktemp -d)"
wget -qO "$TMP/galene.tar.gz" "$URL"
mkdir -p "$INSTALL_DIR"
tar -xf "$TMP/galene.tar.gz" -C "$INSTALL_DIR" --strip-components=1
chmod +x "$BIN"
rm -rf "$TMP"
fi
log_info "Creating directories..."
mkdir -p "$INSTALL_DIR"/{data,groups,static}
log_info "Deploying groups config from INFRA_DIR..."
if [[ -d "${INFRA_DIR}/groups" ]]; then
cp -r "${INFRA_DIR}/groups/." "$INSTALL_DIR/groups/"
fi
log_info "Installing systemd service..."
cat > /etc/systemd/system/galene.service <<EOF
[Unit]
Description=Galene videoconference server
After=network.target
[Service]
Type=simple
WorkingDirectory=$INSTALL_DIR
ExecStart=$BIN \\
-insecure \\
-http ${GALENE_HTTP_ADDR} \\
-static $INSTALL_DIR/static \\
-groups $INSTALL_DIR/groups \\
-data $INSTALL_DIR/data \\
-turn ${GALENE_TURN_ADDR} \\
-udp-range ${GALENE_UDP_RANGE:-10000-10100}
Restart=always
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now galene
log_info "Galene deployed"
echo " Listening: ${GALENE_HTTP_ADDR}"
echo ""
echo "Remaining manual steps:"
echo " 1. Configure nginx reverse proxy (see infra/services/nginx/sites/)"
echo " 2. Get TLS cert for frontend domain"
echo " 3. Open UDP ports ${GALENE_UDP_RANGE:-10000-10100} in firewall"


@ -1,8 +0,0 @@
# role: vps
# description: MinIO object storage server
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=
MINIO_VOLUMES=/data/minio
MINIO_OPTS=--console-address :9001
MINIO_BROWSER_REDIRECT_URL=https://console.example.com
MINIO_SERVER_URL=https://oss.example.com


@ -1,83 +0,0 @@
#!/usr/bin/env bash
# Deploys MinIO object storage server.
# https://min.io/docs/minio/linux/index.html
#
# Usage: INFRA_DIR=/path/to/infra/services/minio ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env MINIO_ROOT_USER MINIO_ROOT_PASSWORD MINIO_VOLUMES
BIN="/usr/local/bin/minio"
if [[ -x "$BIN" ]]; then
log_info "minio already at $BIN, skipping download"
else
log_info "Downloading MinIO..."
wget -qO "$BIN" https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x "$BIN"
fi
log_info "Creating minio-user..."
if ! id minio-user &>/dev/null; then
useradd --system --no-create-home --shell /usr/sbin/nologin minio-user
fi
log_info "Creating data directory: ${MINIO_VOLUMES}..."
mkdir -p "${MINIO_VOLUMES}"
chown minio-user:minio-user "${MINIO_VOLUMES}"
log_info "Writing /etc/default/minio..."
cat > /etc/default/minio <<EOF
MINIO_ROOT_USER=${MINIO_ROOT_USER}
MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}
MINIO_VOLUMES=${MINIO_VOLUMES}
MINIO_OPTS=${MINIO_OPTS:---console-address :9001}
MINIO_BROWSER_REDIRECT_URL=${MINIO_BROWSER_REDIRECT_URL:-}
MINIO_SERVER_URL=${MINIO_SERVER_URL:-}
EOF
chmod 640 /etc/default/minio
chown root:minio-user /etc/default/minio
log_info "Installing systemd service..."
cat > /etc/systemd/system/minio.service <<'EOF'
[Unit]
Description=MinIO
Documentation=https://min.io/docs/minio/linux/index.html
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/local/bin/minio
[Service]
WorkingDirectory=/usr/local
User=minio-user
Group=minio-user
EnvironmentFile=/etc/default/minio
ExecStartPre=/bin/bash -c 'if [ -z "${MINIO_VOLUMES}" ]; then echo "MINIO_VOLUMES not set"; exit 1; fi'
ExecStart=/usr/local/bin/minio server $MINIO_OPTS $MINIO_VOLUMES
Restart=always
LimitNOFILE=65536
TasksMax=infinity
TimeoutStopSec=infinity
SendSIGKILL=no
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now minio
log_info "MinIO deployed"
echo " API: http://localhost:9000"
echo " Console: http://localhost:9001"
echo ""
echo "Remaining manual steps:"
echo " 1. Configure nginx reverse proxy (see infra/services/nginx/sites/)"
echo " 2. Get TLS cert: certbot --nginx -d ${MINIO_SERVER_URL#https://}"


@ -1,8 +0,0 @@
# role: vps
# description: Nginx web server and reverse proxy vhosts
DOMAIN=your-domain.com
BLOG_DOMAIN=blog.your-domain.com
CHAN_DOMAIN=chan.your-domain.com
MAIL_DOMAIN=mail.your-domain.com
GIT_DOMAIN=git.your-domain.com


@ -1,36 +0,0 @@
#!/usr/bin/env bash
# Deploys Nginx web server and vhost configs.
# Usage: INFRA_DIR=/path/to/infra/services/nginx ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env DOMAIN
log_info "Installing nginx..."
apt-get install -y nginx certbot python3-certbot-nginx
log_info "Deploying nginx.conf..."
cp "${INFRA_DIR}/nginx.conf" /etc/nginx/nginx.conf
log_info "Deploying vhost configs..."
mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled
for conf in "${INFRA_DIR}/sites/"*.conf; do
name="$(basename "$conf" .conf)"
envsubst '${DOMAIN}${BLOG_DOMAIN}${CHAN_DOMAIN}${MAIL_DOMAIN}${GIT_DOMAIN}' < "$conf" > "/etc/nginx/sites-available/${name}"
ln -sf "/etc/nginx/sites-available/${name}" "/etc/nginx/sites-enabled/${name}"
log_info " Deployed ${name}"
done
log_info "Testing nginx config..."
nginx -t
log_info "Nginx deployed. Remaining manual steps:"
echo " 1. Get TLS certs: certbot --nginx -d ${DOMAIN} -d ${CHAN_DOMAIN:-chan.${DOMAIN}} -d ${BLOG_DOMAIN:-blog.${DOMAIN}}"
echo " 2. systemctl reload nginx"


@ -1,7 +0,0 @@
# role: homeserver
# description: sslocal + privoxy proxy chain (SOCKS5 :1080, HTTP :8118)
SS_SERVER=your-vps-ip
SS_PORT=41268
SS_PASSWORD=
SS_METHOD=2022-blake3-aes-256-gcm


@ -1,11 +0,0 @@
{
"server": "${SS_SERVER}",
"server_port": ${SS_PORT},
"password": "${SS_PASSWORD}",
"method": "${SS_METHOD}",
"local_address": "127.0.0.1",
"local_port": 1080,
"timeout": 300,
"fast_open": true,
"mode": "tcp_and_udp"
}

View file

@ -1,70 +0,0 @@
#!/usr/bin/env bash
# Installs shadowsocks-rust client (sslocal) + privoxy proxy chain.
# Usage: INFRA_DIR=/path/to/infra/services/shadowsocks/client ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env SS_SERVER SS_PORT SS_PASSWORD SS_METHOD
# Find a template file: prefer INFRA_DIR override, fall back to bundled default.
find_template() {
local f="$1"
if [[ -n "${INFRA_DIR:-}" && -f "${INFRA_DIR}/$f" ]]; then
echo "${INFRA_DIR}/$f"
elif [[ -f "$SCRIPT_DIR/$f" ]]; then
echo "$SCRIPT_DIR/$f"
else
log_error "Template not found: $f"
return 1
fi
}
SSLOCAL_BIN="/usr/local/bin/sslocal"
# Install sslocal — skip if already present, symlink if installed elsewhere.
if [[ -x "$SSLOCAL_BIN" ]]; then
log_info "sslocal already at $SSLOCAL_BIN ($($SSLOCAL_BIN --version 2>&1 | head -1)), skipping download"
elif command -v sslocal &>/dev/null; then
existing="$(command -v sslocal)"
log_info "sslocal found at $existing, symlinking to $SSLOCAL_BIN"
ln -sf "$existing" "$SSLOCAL_BIN"
else
log_info "Downloading shadowsocks-rust client..."
VERSION=$(curl -s https://api.github.com/repos/shadowsocks/shadowsocks-rust/releases/latest \
| python3 -c "import sys,json; print(json.load(sys.stdin)['tag_name'])")
ARCHIVE="shadowsocks-${VERSION}.x86_64-unknown-linux-gnu.tar.xz"
wget -q "https://github.com/shadowsocks/shadowsocks-rust/releases/download/${VERSION}/${ARCHIVE}"
tar -xf "$ARCHIVE"
cp sslocal "$SSLOCAL_BIN"
chmod +x "$SSLOCAL_BIN"
rm -f "$ARCHIVE" ssserver sslocal ssurl ssmanager redir tunnel
fi
log_info "Installing privoxy..."
if command -v pacman &>/dev/null; then
pacman -S --noconfirm --needed privoxy
else
apt-get install -y privoxy
fi
log_info "Deploying configs..."
mkdir -p /etc/shadowsocks-rust
envsubst < "$(find_template config.json.example)" > /etc/shadowsocks-rust/client.json
cp "$(find_template privoxy.conf.example)" /etc/privoxy/config
log_info "Stopping any existing shadowsocks service on port 1080..."
systemctl stop shadowsocks 2>/dev/null || true
log_info "Installing service..."
cp "$(find_template shadowsocks-client.service)" /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now shadowsocks-client
systemctl enable --now privoxy
log_info "Proxy chain ready: SOCKS5 at 127.0.0.1:1080, HTTP at 127.0.0.1:8118"


@ -1,5 +0,0 @@
# Privoxy config — bridges SOCKS5 (sslocal) to HTTP proxy
# Listens on :8118, forwards to sslocal SOCKS5 on :1080
listen-address 127.0.0.1:8118
forward-socks5t / 127.0.0.1:1080 .


@ -1,14 +0,0 @@
[Unit]
Description=Shadowsocks-Rust Client
Documentation=https://github.com/shadowsocks/shadowsocks-rust
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/sslocal -c /etc/shadowsocks-rust/client.json
Restart=on-failure
RestartSec=5
LimitNOFILE=51200
[Install]
WantedBy=multi-user.target


@ -1,6 +0,0 @@
# role: vps
# description: Shadowsocks-Rust server (GFW-resistant proxy)
SS_PORT=41268
SS_PASSWORD=
SS_METHOD=2022-blake3-aes-256-gcm


@ -1,10 +0,0 @@
{
"server": "0.0.0.0",
"server_port": ${SS_PORT},
"password": "${SS_PASSWORD}",
"method": "${SS_METHOD}",
"timeout": 300,
"fast_open": true,
"no_delay": true,
"mode": "tcp_and_udp"
}


@ -1,56 +0,0 @@
#!/usr/bin/env bash
# Installs shadowsocks-rust server and configures systemd service.
# Usage: INFRA_DIR=/path/to/infra/services/shadowsocks/server ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env SS_PORT SS_PASSWORD SS_METHOD
find_template() {
local f="$1"
if [[ -n "${INFRA_DIR:-}" && -f "${INFRA_DIR}/$f" ]]; then
echo "${INFRA_DIR}/$f"
elif [[ -f "$SCRIPT_DIR/$f" ]]; then
echo "$SCRIPT_DIR/$f"
else
log_error "Template not found: $f"
return 1
fi
}
SSSERVER_BIN="/usr/local/bin/ssserver-rust"
if [[ -x "$SSSERVER_BIN" ]]; then
log_info "ssserver-rust already at $SSSERVER_BIN ($($SSSERVER_BIN --version 2>&1 | head -1)), skipping download"
elif command -v ssserver &>/dev/null; then
existing="$(command -v ssserver)"
log_info "ssserver found at $existing, symlinking to $SSSERVER_BIN"
ln -sf "$existing" "$SSSERVER_BIN"
else
log_info "Downloading shadowsocks-rust..."
VERSION=$(curl -s https://api.github.com/repos/shadowsocks/shadowsocks-rust/releases/latest \
| python3 -c "import sys,json; print(json.load(sys.stdin)['tag_name'])")
ARCHIVE="shadowsocks-${VERSION}.x86_64-unknown-linux-gnu.tar.xz"
wget -q "https://github.com/shadowsocks/shadowsocks-rust/releases/download/${VERSION}/${ARCHIVE}"
tar -xf "$ARCHIVE"
cp ssserver "$SSSERVER_BIN"
chmod +x "$SSSERVER_BIN"
rm -f "$ARCHIVE" ssserver sslocal ssurl ssmanager redir tunnel
fi
log_info "Deploying config..."
mkdir -p /etc/shadowsocks-rust
envsubst < "$(find_template config.json.example)" > /etc/shadowsocks-rust/config.json
log_info "Installing service..."
cp "$(find_template shadowsocks-rust.service)" /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now shadowsocks-rust
log_info "Shadowsocks server deployed on port ${SS_PORT}"


@ -1,14 +0,0 @@
[Unit]
Description=Shadowsocks-Rust Server
Documentation=https://github.com/shadowsocks/shadowsocks-rust
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/ssserver-rust -c /etc/shadowsocks-rust/config.json
Restart=on-failure
RestartSec=5
LimitNOFILE=51200
[Install]
WantedBy=multi-user.target


@ -1,3 +0,0 @@
# role: homeserver
# description: sing-box transparent proxy client connecting to sing-box server
SING_BOX_VERSION=1.11.0


@ -1,66 +0,0 @@
#!/usr/bin/env bash
# Deploys sing-box as a transparent proxy client (home machine / LAN gateway).
#
# Usage: INFRA_DIR=/path/to/infra/services/sing-box/client ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env SING_BOX_VERSION
INSTALL_DIR="/etc/sing-box"
BIN="$INSTALL_DIR/sing-box"
if [[ -x "$BIN" ]]; then
log_info "sing-box already at $BIN, skipping download"
else
log_info "Downloading sing-box ${SING_BOX_VERSION}..."
ARCH="$(uname -m)"
case "$ARCH" in
x86_64) ARCH="amd64" ;;
aarch64) ARCH="arm64" ;;
*) log_error "Unsupported arch: $ARCH"; exit 1 ;;
esac
URL="https://github.com/SagerNet/sing-box/releases/download/v${SING_BOX_VERSION}/sing-box-${SING_BOX_VERSION}-linux-${ARCH}.tar.gz"
TMP="$(mktemp -d)"
wget -qO "$TMP/sing-box.tar.gz" "$URL"
tar -xf "$TMP/sing-box.tar.gz" -C "$TMP"
mkdir -p "$INSTALL_DIR"
install -m 755 "$TMP/sing-box-${SING_BOX_VERSION}-linux-${ARCH}/sing-box" "$BIN"
rm -rf "$TMP"
fi
log_info "Deploying client config..."
CONFIG_SRC="${INFRA_DIR}/sing_box_client.json"
[ -f "$CONFIG_SRC" ] || { log_error "sing_box_client.json not found in INFRA_DIR"; exit 1; }
cp "$CONFIG_SRC" "$INSTALL_DIR/config.json"
log_info "Installing systemd service..."
cat > /etc/systemd/system/sing-box-client.service <<'EOF'
[Unit]
Description=sing-box Client
After=network.target nss-lookup.target
[Service]
User=root
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW
ExecStart=/etc/sing-box/sing-box run -c /etc/sing-box/config.json
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now sing-box-client
log_info "sing-box client deployed"
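Both sing-box deploy scripts map `uname -m` output to the arch names used in sing-box release asset filenames. The same mapping as a stand-alone sketch:

```shell
#!/usr/bin/env bash
# Map kernel arch names (uname -m) to sing-box release-asset arch names.
map_arch() {
  case "$1" in
    x86_64)  echo amd64 ;;
    aarch64) echo arm64 ;;
    *) echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

map_arch x86_64    # → amd64
map_arch aarch64   # → arm64
```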


@ -1,3 +0,0 @@
# role: vps
# description: sing-box proxy server (VLESS/Reality, VMess/WS, Hysteria2 via sing-box-yg)
SING_BOX_VERSION=1.11.0


@ -1,77 +0,0 @@
#!/usr/bin/env bash
# Deploys sing-box proxy server on VPS.
#
# Config generated by https://github.com/yonggekkk/sing-box-yg — run that
# script once interactively to create /etc/s-box/sb.json, certs, and keys.
# Then commit the generated files into infra for future re-deployment.
#
# Usage: INFRA_DIR=/path/to/infra/services/sing-box/server ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env SING_BOX_VERSION
INSTALL_DIR="/etc/s-box"
BIN="$INSTALL_DIR/sing-box"
if [[ -x "$BIN" ]]; then
log_info "sing-box already at $BIN, skipping download"
else
log_info "Downloading sing-box ${SING_BOX_VERSION}..."
ARCH="$(uname -m)"
case "$ARCH" in
x86_64) ARCH="amd64" ;;
aarch64) ARCH="arm64" ;;
*) log_error "Unsupported arch: $ARCH"; exit 1 ;;
esac
URL="https://github.com/SagerNet/sing-box/releases/download/v${SING_BOX_VERSION}/sing-box-${SING_BOX_VERSION}-linux-${ARCH}.tar.gz"
TMP="$(mktemp -d)"
wget -qO "$TMP/sing-box.tar.gz" "$URL"
tar -xf "$TMP/sing-box.tar.gz" -C "$TMP"
mkdir -p "$INSTALL_DIR"
install -m 755 "$TMP/sing-box-${SING_BOX_VERSION}-linux-${ARCH}/sing-box" "$BIN"
rm -rf "$TMP"
fi
log_info "Deploying config from INFRA_DIR..."
for f in sb.json cert.pem private.key public.key; do
src="${INFRA_DIR}/$f"
if [[ -f "$src" ]]; then
cp "$src" "$INSTALL_DIR/$f"
log_info " copied $f"
fi
done
log_info "Installing systemd service..."
cat > /etc/systemd/system/sing-box.service <<'EOF'
[Unit]
After=network.target nss-lookup.target
[Service]
User=root
WorkingDirectory=/root
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW
ExecStart=/etc/s-box/sing-box run -c /etc/s-box/sb.json
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=10
LimitNOFILE=infinity
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now sing-box
log_info "sing-box server deployed"
echo ""
echo "Note: initial config must be generated via sing-box-yg:"
echo " bash <(curl -Ls https://raw.githubusercontent.com/yonggekkk/sing-box-yg/main/sb.sh)"


@ -1,7 +0,0 @@
# role: any
# description: TNT terminal chat server (SSH-based, github.com/m1ngsama/TNT)
TNT_PORT=2222
TNT_ACCESS_TOKEN=
TNT_BIND_ADDR=0.0.0.0
TNT_MAX_CONNECTIONS=50
TNT_MAX_CONN_PER_IP=3


@ -1,70 +0,0 @@
#!/usr/bin/env bash
# Deploys TNT terminal chat server.
# https://github.com/m1ngsama/TNT
#
# Usage: INFRA_DIR=/path/to/infra/services/tnt ./deploy.sh
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/../../bin/lib/common.sh"
ENV_FILE="${INFRA_DIR:-.}/.env"
[ -f "$ENV_FILE" ] || { log_error "No .env found at $ENV_FILE"; exit 1; }
set -a; source "$ENV_FILE"; set +a
require_env TNT_PORT TNT_ACCESS_TOKEN
BIN="/usr/local/bin/tnt"
DATA_DIR="/var/lib/tnt"
if [[ -x "$BIN" ]]; then
log_info "tnt already at $BIN, skipping download"
else
log_info "Installing tnt via official installer..."
curl -sSL https://raw.githubusercontent.com/m1ngsama/TNT/main/install.sh | sh
fi
log_info "Setting up data directory..."
mkdir -p "$DATA_DIR"
# Create unprivileged user if not exists
if ! id tnt &>/dev/null; then
useradd --system --no-create-home --shell /usr/sbin/nologin tnt
fi
chown tnt:tnt "$DATA_DIR"
log_info "Installing systemd service..."
cat > /etc/systemd/system/tnt.service <<EOF
[Unit]
Description=TNT Terminal Chat Server
After=network.target
[Service]
Type=simple
User=tnt
Group=tnt
WorkingDirectory=$DATA_DIR
ExecStart=$BIN -p ${TNT_PORT}
Restart=always
RestartSec=5
Environment="TNT_ACCESS_TOKEN=${TNT_ACCESS_TOKEN}"
Environment="TNT_BIND_ADDR=${TNT_BIND_ADDR:-0.0.0.0}"
Environment="TNT_MAX_CONNECTIONS=${TNT_MAX_CONNECTIONS:-50}"
Environment="TNT_MAX_CONN_PER_IP=${TNT_MAX_CONN_PER_IP:-3}"
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=$DATA_DIR
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now tnt
log_info "TNT deployed on port ${TNT_PORT}"
echo " Connect: ssh -p ${TNT_PORT} <host>"

setup.sh

@ -1,337 +0,0 @@
#!/usr/bin/env bash
# Interactive installer for infra services.
#
# Usage:
# ./setup.sh # fully interactive
# ./setup.sh --infra-dir PATH # use pre-filled .env files from local infra checkout
# ./setup.sh --dry-run # show what would be deployed, no changes
# INFRA_DIR=/path/to/infra ./setup.sh # same as --infra-dir via env var
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/bin/lib/common.sh"
# ============================================================================
# Args
# ============================================================================
INFRA_DIR="${INFRA_DIR:-}"
DRY_RUN=0
while [[ $# -gt 0 ]]; do
case "$1" in
--infra-dir) INFRA_DIR="$2"; shift 2 ;;
--dry-run) DRY_RUN=1; shift ;;
*) log_error "Unknown argument: $1"; exit 1 ;;
esac
done
# ============================================================================
# Banner
# ============================================================================
echo ""
echo " ┌─────────────────────────────────────┐"
echo " │ automa setup │"
echo " │ interactive infrastructure deploy │"
echo " └─────────────────────────────────────┘"
echo ""
# ============================================================================
# Prerequisites (skipped in dry-run)
# ============================================================================
check_prerequisites() {
log_info "Checking prerequisites..."
local missing=0
for cmd in curl wget systemctl envsubst; do
if ! command -v "$cmd" &>/dev/null; then
log_warn " missing: $cmd"
missing=1
fi
done
if [[ "$missing" -eq 1 ]]; then
log_error "Install missing tools before continuing."
exit 1
fi
log_info "Prerequisites OK."
}
[[ "$DRY_RUN" -eq 0 ]] && check_prerequisites
# ============================================================================
# Infra dir resolution
# ============================================================================
if [[ -n "$INFRA_DIR" ]]; then
if [[ ! -d "$INFRA_DIR" ]]; then
log_error "INFRA_DIR does not exist: $INFRA_DIR"
exit 1
fi
log_info "Using infra dir: $INFRA_DIR"
else
echo ""
echo " No --infra-dir specified."
echo " Point to a local infra checkout (with pre-filled .env files) for auto-config,"
echo " or leave blank to enter all values interactively."
echo ""
read -rp " Path to local infra repo [blank = interactive mode]: " infra_input
if [[ -n "$infra_input" ]]; then
infra_input="${infra_input/#\~/$HOME}"
if [[ -d "$infra_input" ]]; then
INFRA_DIR="$infra_input"
log_info "Using infra dir: $INFRA_DIR"
else
log_warn "Directory not found, continuing in interactive mode."
fi
fi
fi
# ============================================================================
# Module discovery
# Parallel arrays indexed by position: MODULES[i], ROLES[i], DESCS[i]
# MENU_MODS[menu_number] = module_path (1-based; index 0 = "")
# ============================================================================
MODULES=()
ROLES=()
DESCS=()
MENU_MODS=("") # index 0 unused — menu numbers start at 1
discover_modules() {
while IFS= read -r deploy; do
local mod env_ex role desc r d
mod="$(dirname "$deploy" | sed "s|^$SCRIPT_DIR/services/||")"
env_ex="$SCRIPT_DIR/services/$mod/.env.example"
role="any"
desc="$mod"
if [[ -f "$env_ex" ]]; then
# grep returns 1 if no match — use || true to avoid pipefail exit
r="$(grep '^# role:' "$env_ex" | head -1 | sed 's/^# role: *//' || true)"
d="$(grep '^# description:' "$env_ex" | head -1 | sed 's/^# description: *//' || true)"
[[ -n "$r" ]] && role="$r"
[[ -n "$d" ]] && desc="$d"
fi
MODULES+=("$mod")
ROLES+=("$role")
DESCS+=("$desc")
done < <(find "$SCRIPT_DIR/services" -name "deploy.sh" | sort)
}
discover_modules
# ============================================================================
# Interactive menu (grouped by role)
# Populates MENU_MODS as a side effect.
# ============================================================================
print_menu() {
local printed_any=0
for role_group in "vps" "homeserver" "any"; do
local label header_printed=0
case "$role_group" in
vps) label="VPS services" ;;
homeserver) label="Home server services" ;;
any) label="Any machine" ;;
esac
for i in "${!MODULES[@]}"; do
[[ "${ROLES[$i]}" != "$role_group" ]] && continue
if [[ "$header_printed" -eq 0 ]]; then
echo " $label:"
header_printed=1
printed_any=1
fi
local menu_idx=${#MENU_MODS[@]}
MENU_MODS+=("${MODULES[$i]}")
printf " [%2d] %-30s %s\n" "$menu_idx" "${MODULES[$i]}" "${DESCS[$i]}"
done
[[ "$header_printed" -eq 1 ]] && echo ""
done
if [[ "$printed_any" -eq 0 ]]; then
log_error "No deployable modules found."
exit 1
fi
}
print_menu
echo " Enter module numbers to deploy (space-separated, e.g. \"1 3\")."
echo " Press Enter to select all, or type \"none\" to exit."
echo ""
read -rp " > " selection
if [[ "$selection" == "none" ]]; then
echo " Exiting."
exit 0
fi
SELECTED_MODULES=()
if [[ -z "$selection" ]]; then
SELECTED_MODULES=("${MODULES[@]}")
else
for num in $selection; do
if [[ "$num" =~ ^[0-9]+$ ]] \
&& [[ "$num" -gt 0 ]] \
&& [[ "$num" -lt "${#MENU_MODS[@]}" ]]; then
SELECTED_MODULES+=("${MENU_MODS[$num]}")
else
log_warn "Unknown module number: $num (skipped)"
fi
done
fi
if [[ ${#SELECTED_MODULES[@]} -eq 0 ]]; then
log_error "No valid modules selected."
exit 1
fi
echo ""
log_info "Selected: ${SELECTED_MODULES[*]}"
# ============================================================================
# Env resolution
# Priority:
# 1. $INFRA_DIR/services/$mod/.env (pre-filled, use as-is)
# 2. $INFRA_DIR/services/$mod/.env.example (prompt, fill from infra template)
# 3. $SCRIPT_DIR/services/$mod/.env.example (prompt, fill from automa template)
# Prints the resolved .env path to stdout.
# ============================================================================
fill_env_interactive() {
# All user-visible output → stderr so caller can capture stdout for the path.
local env_example="$1"
local output="$2"
printf "" > "$output"
echo " Fill in values (Enter = accept default shown in [brackets]):" >&2
echo "" >&2
while IFS= read -r line; do
if [[ "$line" =~ ^#\ (role|description): ]]; then
continue
fi
if [[ -z "$line" ]]; then
continue
fi
if [[ "$line" =~ ^# ]]; then
printf " %s\n" "$line" >&2
continue
fi
local key default hint val
key="${line%%=*}"
default="${line#*=}"
hint=""
if [[ -n "$default" && "$default" != "your-"* && "$default" != "x.x.x.x" ]]; then
hint=" [$default]"
fi
# stdin inside this loop is the template file — prompt from the terminal instead
read -rp " $key$hint: " val < /dev/tty
printf "%s=%s\n" "$key" "${val:-$default}" >> "$output"
done < "$env_example"
}
resolve_env_for_module() {
# Prints resolved .env path to stdout; all messages → stderr.
local mod="$1"
local dry_run="${2:-0}"
local infra_mod_dir=""
if [[ -n "$INFRA_DIR" ]]; then
infra_mod_dir="$INFRA_DIR/services/$mod"
fi
# Case 1: pre-filled .env in infra — use directly
if [[ -n "$infra_mod_dir" && -f "$infra_mod_dir/.env" ]]; then
log_info " Using pre-filled .env from infra" >&2
printf "%s" "$infra_mod_dir/.env"
return 0
fi
# Determine .env.example template source
local env_example=""
if [[ -n "$infra_mod_dir" && -f "$infra_mod_dir/.env.example" ]]; then
env_example="$infra_mod_dir/.env.example"
elif [[ -f "$SCRIPT_DIR/services/$mod/.env.example" ]]; then
env_example="$SCRIPT_DIR/services/$mod/.env.example"
else
log_error " No .env.example found for: $mod" >&2
return 1
fi
# Dry-run: skip interactive fill, just return path to the example
if [[ "$dry_run" -eq 1 ]]; then
printf "%s" "$env_example"
return 0
fi
local tmp_env
tmp_env="$(mktemp /tmp/automa-env-XXXXXX)"
fill_env_interactive "$env_example" "$tmp_env"
printf "%s" "$tmp_env"
}
# ============================================================================
# Deployment
# ============================================================================
DEPLOYED=()
FAILED=()
for mod in "${SELECTED_MODULES[@]}"; do
echo ""
echo " ── $mod ──"
local_env=""
if ! local_env="$(resolve_env_for_module "$mod" "$DRY_RUN")"; then
FAILED+=("$mod")
continue
fi
deploy_script="$SCRIPT_DIR/services/$mod/deploy.sh"
if [[ ! -x "$deploy_script" ]]; then
log_error " deploy.sh not found or not executable: $deploy_script"
FAILED+=("$mod")
continue
fi
if [[ "$DRY_RUN" -eq 1 ]]; then
log_info " [dry-run] would run $deploy_script (env: $local_env)"
DEPLOYED+=("$mod")
continue
fi
if INFRA_DIR="$(dirname "$local_env")" bash "$deploy_script"; then
DEPLOYED+=("$mod")
else
log_error " Deployment failed: $mod"
FAILED+=("$mod")
fi
done
# ============================================================================
# Summary
# ============================================================================
echo ""
echo " ┌─────────────────────────────────────┐"
echo " │ Summary │"
echo " └─────────────────────────────────────┘"
if [[ ${#DEPLOYED[@]} -gt 0 ]]; then
echo ""
log_info "Deployed (${#DEPLOYED[@]}):"
for m in "${DEPLOYED[@]}"; do echo "$m"; done
fi
if [[ ${#FAILED[@]} -gt 0 ]]; then
echo ""
log_error "Failed (${#FAILED[@]}):"
for m in "${FAILED[@]}"; do echo "$m"; done
exit 1
fi
echo ""
log_info "Done."
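The 1-based `MENU_MODS` mapping and the selection validation above can be exercised in isolation; a sketch with hypothetical module names:

```shell
#!/usr/bin/env bash
# Sketch of setup.sh's menu-number mapping (module names here are hypothetical).
MENU_MODS=("")   # index 0 unused — menu numbers start at 1
for mod in shadowsocks/server sing-box/client tnt; do
  MENU_MODS+=("$mod")
done

# Valid numbers are 1..len-1, mirroring the checks in the selection loop.
validate() {
  local num="$1"
  [[ "$num" =~ ^[0-9]+$ ]] && (( num > 0 && num < ${#MENU_MODS[@]} ))
}

for num in 2 0 99 abc; do
  if validate "$num"; then
    echo "$num -> ${MENU_MODS[$num]}"
  else
    echo "$num -> invalid (skipped)"
  fi
done
```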

tailscale/.env.example Normal file

@ -0,0 +1,28 @@
# @name Tailscale + DERP
# @desc Tailscale mesh VPN client with optional DERP relay
# @url https://tailscale.com/kb/1282/docker
# @note Deploy tailscale only: docker compose --profile tailscale up -d
# @note Deploy with DERP relay: docker compose --profile derp up -d
TZ=Asia/Shanghai
# Hostname shown in the Tailscale admin console
TS_HOSTNAME=
# Auth key — generate at https://login.tailscale.com/admin/settings/keys
# For headscale: generate via headscale CLI
TS_AUTHKEY=
# Extra arguments passed to tailscaled
# For headscale users, add: --login-server=https://your.headscale.host
TS_EXTRA_ARGS=--advertise-tags=tag:container
# Networking mode: false = kernel (better performance), true = userspace
TS_USERSPACE=false
TS_FIREWALL_MODE=nftables
# DERP relay settings (only used with --profile derp)
# Public IP of this server — clients connect to this address
DERP_HOST=
DERP_PORT=443
STUN_PORT=3478

tailscale/compose.yaml Normal file

@ -0,0 +1,52 @@
services:
tailscale:
image: tailscale/tailscale:latest
container_name: tailscale
hostname: "${TS_HOSTNAME}"
profiles: ["tailscale", "derp"]
cap_add:
- NET_ADMIN
- SYS_MODULE
- NET_RAW
devices:
- /dev/net/tun:/dev/net/tun
network_mode: host
environment:
TS_AUTHKEY: "${TS_AUTHKEY}"
TS_EXTRA_ARGS: "${TS_EXTRA_ARGS:---advertise-tags=tag:container}"
TS_STATE_DIR: /var/lib/tailscale
TS_SOCKET: /var/run/tailscale/tailscaled.sock
TS_USERSPACE: "${TS_USERSPACE:-false}"
TS_DEBUG_FIREWALL_MODE: "${TS_FIREWALL_MODE:-nftables}"
TS_HOSTNAME: "${TS_HOSTNAME}"
TZ: "${TZ:-Asia/Shanghai}"
volumes:
- ./tailscale-data:/var/lib/tailscale
- /var/run/tailscale:/var/run/tailscale
- /lib/modules:/lib/modules:ro
healthcheck:
test: ["CMD-SHELL", "tailscale status"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
restart: unless-stopped
derp-server:
image: ghcr.io/nbtca/tailscale-derp:edge
container_name: tailscale-derp
profiles: ["derp"]
network_mode: host
depends_on:
tailscale:
condition: service_healthy
environment:
TZ: "${TZ:-Asia/Shanghai}"
DERP_HOST: "${DERP_HOST}"
DERP_PORT: "${DERP_PORT:-443}"
STUN_PORT: "${STUN_PORT:-3478}"
HTTP_PORT: "-1"
VERIFY_CLIENTS: "true"
volumes:
- /var/run/tailscale:/var/run/tailscale
restart: unless-stopped


@ -1,3 +1,7 @@
-# TeamSpeak Server Admin Password
-# Change this to a strong password before deployment
-TS3_ADMIN_PASSWORD=ChangeThisPassword123!
+# @name TeamSpeak
+# @desc TeamSpeak voice communication server
+# @url https://teamspeak.com
+# @port 9987/udp
+# Server admin password — set on first run, used for ServerQuery
+TS3_ADMIN_PASSWORD=


@ -1,37 +0,0 @@
## Usage
1. **Create the project directory**
```bash
mkdir teamspeak-server && cd teamspeak-server
```
2. **Create the compose.yaml file**
3. **Create a .env file with the password**
```bash
echo "TS3_ADMIN_PASSWORD=your-strong-password" > .env
```
4. **Start the service**
```bash
docker-compose up -d
```
5. **Check the logs for the admin privilege key**
```bash
docker-compose logs teamspeak | grep "token="
```
## Ports
- **9987/udp**: voice communication
- **10011**: ServerQuery (raw)
- **10022**: ServerQuery (SSH)
- **10080**: WebQuery (HTTP)
- **10443**: WebQuery (HTTPS)
- **30033**: file transfer
- **41144**: TSDNS


@ -1,29 +1,23 @@
-version: "3.8"
 services:
   teamspeak:
     image: teamspeak:latest
-    container_name: teamspeak-server
+    container_name: teamspeak
     ports:
-      - "9987:9987/udp"   # voice server port
-      - "10011:10011"     # file transfer
-      - "10022:10022"     # ServerQuery (SSH)
-      - "10080:10080"     # file transfer
-      - "10443:10443"     # main TS3 server port
-      - "30033:30033"     # file transfer
-      - "41144:41144"     # TSDNS
+      - "9987:9987/udp"
+      - "10011:10011"
+      - "30033:30033"
     environment:
-      - TS3SERVER_LICENSE=accept
-      - TS3SERVER_SERVERADMIN_PASSWORD=${TS3_ADMIN_PASSWORD:-password123}
+      TS3SERVER_LICENSE: accept
+      TS3SERVER_SERVERADMIN_PASSWORD: "${TS3_ADMIN_PASSWORD}"
+    volumes:
+      - ts-data:/var/ts3server/
+    healthcheck:
+      test: ["CMD-SHELL", "echo quit | nc localhost 10011 | grep -q TS3"]
+      interval: 30s
+      timeout: 5s
+      retries: 3
+      start_period: 15s
     restart: unless-stopped
-    volumes:
-      - teamspeak_data:/var/ts3server/
-    networks:
-      - teamspeak-network
 volumes:
-  teamspeak_data:
+  ts-data:
-networks:
-  teamspeak-network:
-    driver: bridge

uptime-kuma/.env.example Normal file

@ -0,0 +1,7 @@
# @name Uptime Kuma
# @desc Uptime monitoring dashboard
# @url https://github.com/louislam/uptime-kuma
# @port KUMA_PORT
# Bind address — default localhost only; use 0.0.0.0:3001 for external access
KUMA_PORT=127.0.0.1:3001
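The compose file publishes `"${KUMA_PORT:-127.0.0.1:3001}:3001"`; compose's `${VAR:-default}` substitution mirrors the shell's, so the resulting mapping can be previewed directly in bash:

```shell
#!/usr/bin/env bash
# Preview the compose port mapping under both settings (same :-default syntax).
unset KUMA_PORT
echo "default:  ${KUMA_PORT:-127.0.0.1:3001}:3001"   # loopback only

KUMA_PORT=0.0.0.0:3001
echo "external: ${KUMA_PORT:-127.0.0.1:3001}:3001"   # reachable from outside
```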

uptime-kuma/compose.yaml Normal file

@ -0,0 +1,15 @@
services:
uptime-kuma:
image: louislam/uptime-kuma:1
container_name: uptime-kuma
volumes:
- ./data:/app/data
ports:
- "${KUMA_PORT:-127.0.0.1:3001}:3001"
healthcheck:
test: ["CMD-SHELL", "extra/healthcheck"]
interval: 60s
timeout: 30s
retries: 5
start_period: 180s
restart: unless-stopped