Compare commits


3 Commits

Author SHA1 Message Date
52faebb63a chore(dr): update disaster recovery export to 2025-12-07
- Add latest infrastructure snapshot (homelab-export-20251207-120040)
- Include VM 101 (monitoring-docker) in inventory
- Include CT 112 (twingate-connector) in inventory
- Archive previous export as homelab-export-20251207-120040.tar.gz
- Update storage utilization statistics
- Remove outdated export from 2025-12-02
- Update .gitignore to allow DR exports and archives

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-07 12:42:07 -07:00
d4d8e69262 feat(monitoring): add Prometheus/Grafana monitoring stack
- Add Grafana dashboard service (port 3000)
- Add Prometheus time-series database (port 9090)
- Add PVE Exporter for Proxmox metrics (port 9221)
- Deploy on VM 101 (monitoring-docker) at 192.168.2.114
- Configure scraping for Proxmox node 192.168.2.100
- Add docker-compose configurations for all services
- Add template files for sensitive credentials (pve.yml.template, .env.template)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-07 12:41:22 -07:00
f42eeaba92 feat(docs): update documentation for monitoring stack and infrastructure changes
- Update INDEX.md with VM 101 (monitoring-docker) and CT 112 (twingate-connector)
- Update README.md with monitoring and security sections
- Update CLAUDE.md with new architecture patterns
- Update services/README.md with monitoring stack documentation
- Update CLAUDE_STATUS.md with current infrastructure state
- Update infrastructure counts: 10 VMs, 4 Containers
- Update storage stats: PBS 27.43%, Vault 10.88%
- Create comprehensive monitoring/README.md
- Add .gitignore rules for monitoring sensitive files (pve.yml, .env)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-12-07 12:41:08 -07:00
78 changed files with 1718 additions and 1396 deletions

.gitignore vendored

@@ -35,6 +35,7 @@ auth.json # Authentication files
 # Backup and Export Files
 # ----------------------
 *.tar.gz                              # Compressed archives
+!archive-homelab/*.tar.gz             # EXCEPT archives in archive-homelab directory
 *.tgz                                 # Compressed archives
 *.zip                                 # Zip archives
 *.bak                                 # Backup files
@@ -42,7 +43,9 @@ auth.json # Authentication files
 backups/                              # Backup directory
 exports/                              # Export directory (if not needed in git)
 homelab-export-*/                     # Your homelab export directories
+!disaster-recovery/homelab-export-*/  # EXCEPT exports in disaster-recovery directory
 *.log                                 # Log files (unless you specifically want to track them)
+!disaster-recovery/**/*.log           # EXCEPT log files in disaster-recovery exports
 # Temporary Files
 # --------------
@@ -134,6 +137,11 @@ services/homepage/services.yaml
 # Template files (.template) are tracked for reference
 scripts/fixers/fix_n8n_db_c_locale.sh
+# Monitoring Stack Sensitive Files
+# --------------------------------
+# Exclude files containing Proxmox credentials and local paths
+**/pve.yml                            # Proxmox credentials for exporters (NOT templates)
 # Custom Exclusions
 # ----------------
 # Add any custom patterns specific to your homelab below:
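The negation ordering above can be sanity-checked in a throwaway repo with `git check-ignore`; the scratch paths below are hypothetical, not part of this repository:

```shell
# Scratch repo to verify that !archive-homelab/*.tar.gz re-includes
# archives only inside archive-homelab/ (last matching pattern wins).
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
cat > .gitignore <<'EOF'
*.tar.gz
!archive-homelab/*.tar.gz
EOF
mkdir -p archive-homelab
touch foo.tar.gz archive-homelab/keep.tar.gz
git check-ignore foo.tar.gz || true                          # prints the path: ignored
git check-ignore archive-homelab/keep.tar.gz || echo "kept"  # negation wins: not ignored
```

Note that git never re-includes a file whose parent directory is excluded, which is why the `!disaster-recovery/homelab-export-*/` directory negation must exist before `!disaster-recovery/**/*.log` can take effect on files inside those exports.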


@@ -21,9 +21,11 @@ The infrastructure employs full VMs for services requiring kernel-level isolatio
 | VM ID | Name | Purpose | Notes |
 |-------|------|---------|-------|
 | 100 | docker-hub | Container registry/Docker hub mirror | Local container image caching |
-| 101 | gitlab | GitLab CE/EE instance | Source control, CI/CD platform |
+| 101 | monitoring-docker | Monitoring stack | Grafana/Prometheus/PVE Exporter at 192.168.2.114 |
+| 104 | ubuntu-dev | Ubuntu development environment | Additional dev workstation |
 | 105 | dev | Development environment | General-purpose development workstation |
 | 106 | Ansible-Control | Automation control node | IaC orchestration, configuration management |
+| 107 | ubuntu-docker | Ubuntu Docker host | Docker-focused environment |
 | 108 | CML | Cisco Modeling Labs | Network simulation/testing environment |
 | 109 | web-server-01 | Web application server | Production-like web tier (clustered) |
 | 110 | web-server-02 | Web application server | Load-balanced pair with web-server-01 |
@@ -35,9 +37,10 @@ Lightweight services leveraging LXC for reduced overhead and faster provisioning
 | CT ID | Name | Purpose | Notes |
 |-------|------|---------|-------|
-| 102 | nginx | Reverse proxy/load balancer | Front-end traffic management |
+| 102 | nginx | Reverse proxy/load balancer | Front-end traffic management (NPM) |
 | 103 | netbox | Network documentation/IPAM | Infrastructure source of truth |
-| 112 | Anytype | Knowledge management | Personal/team documentation |
+| 112 | twingate-connector | Zero-trust network access | Secure remote access connector |
+| 113 | n8n | Workflow automation | n8n.io platform at 192.168.2.107 |
 ### Storage Architecture
@@ -45,10 +48,10 @@ The storage layout demonstrates a well-organized approach to data separation:
 | Storage Pool | Type | Usage | Purpose |
 |--------------|------|-------|---------|
-| local | Directory | 14.8% | System files, ISOs, templates |
+| local | Directory | 15.13% | System files, ISOs, templates |
 | local-lvm | LVM-Thin | 0.0% | VM disk images (thin provisioned) |
-| Vault | NFS/Directory | 11.9% | Secure storage for sensitive data |
+| Vault | NFS/Directory | 10.88% | Secure storage for sensitive data |
-| PBS-Backups | Proxmox Backup Server | 21.6% | Automated backup repository |
+| PBS-Backups | Proxmox Backup Server | 27.43% | Automated backup repository |
 | iso-share | NFS/CIFS | 1.4% | Installation media library |
 | localnetwork | Network share | N/A | Shared resources across infrastructure |
@@ -60,7 +63,11 @@ The storage layout demonstrates a well-organized approach to data separation:
 **Network Simulation Capability**: CML (108) suggests network engineering activities, possibly testing configurations before production deployment.
-**Container Strategy**: The selective use of LXC for stateless or lightweight services (nginx, netbox) vs full VMs for complex applications demonstrates thoughtful resource optimization.
+**Container Strategy**: The selective use of LXC for stateless or lightweight services (nginx, netbox, twingate, n8n) vs full VMs for complex applications demonstrates thoughtful resource optimization.
+**Monitoring & Observability**: The dedicated monitoring VM (101) with Grafana, Prometheus, and PVE Exporter provides comprehensive infrastructure visibility, enabling proactive capacity planning and performance optimization.
+**Zero-Trust Security**: Implementation of Twingate connector (CT 112) demonstrates modern security practices, providing secure remote access without traditional VPN complexity.
 ## Working with This Environment

File diff suppressed because it is too large.


@@ -309,13 +309,14 @@ cat scripts/crawlers-exporters/COLLECTION-GUIDE.md
 ## Your Infrastructure
-Based on the latest export (2025-12-02 20:49:54), your environment includes:
+Based on the latest export (2025-12-07 12:00:40), your environment includes:
-### Virtual Machines (QEMU/KVM) - 9 VMs
+### Virtual Machines (QEMU/KVM) - 10 VMs
 | VM ID | Name | Status | Purpose |
 |-------|------|--------|---------|
 | 100 | docker-hub | Running | Container registry/Docker hub mirror |
+| 101 | monitoring-docker | Running | Monitoring stack (Grafana/Prometheus/PVE Exporter) at 192.168.2.114 |
 | 104 | ubuntu-dev | Stopped | Ubuntu development environment |
 | 105 | dev | Stopped | General-purpose development workstation |
 | 106 | Ansible-Control | Running | IaC orchestration, configuration management |
@@ -325,23 +326,24 @@ Based on the latest export (2025-12-02 20:49:54), your environment includes:
 | 110 | web-server-02 | Running | Load-balanced pair with web-server-01 |
 | 111 | db-server-01 | Running | Backend database server |
-**Note**: VM 101 (gitlab) has been removed from the infrastructure.
+**Recent Changes**: Added VM 101 (monitoring-docker) for dedicated observability infrastructure.
-### Containers (LXC) - 3 Containers
+### Containers (LXC) - 4 Containers
 | CT ID | Name | Status | Purpose |
 |-------|------|--------|---------|
 | 102 | nginx | Running | Reverse proxy/load balancer |
 | 103 | netbox | Stopped | Network documentation/IPAM |
-| 113 | n8n | Running | Workflow automation platform |
+| 112 | twingate-connector | Running | Zero-trust network access connector |
+| 113 | n8n | Running | Workflow automation platform at 192.168.2.107 |
-**Note**: CT 112 (Anytype) has been replaced by CT 113 (n8n).
+**Recent Changes**: Added CT 112 (twingate-connector) for zero-trust security, CT 113 (n8n) for workflow automation.
 ### Storage Pools
-- **local** (Directory) - 14.8% used - System files, ISOs, templates
+- **local** (Directory) - 15.13% used - System files, ISOs, templates
 - **local-lvm** (LVM-Thin) - 0.0% used - VM disk images (thin provisioned)
-- **Vault** (NFS/Directory) - 11.9% used - Secure storage for sensitive data
+- **Vault** (NFS/Directory) - 10.88% used - Secure storage for sensitive data
-- **PBS-Backups** (Proxmox Backup Server) - 21.6% used - Automated backup repository
+- **PBS-Backups** (Proxmox Backup Server) - 27.43% used - Automated backup repository
 - **iso-share** (NFS/CIFS) - 1.4% used - Installation media library
 - **localnetwork** (Network share) - Shared resources across infrastructure
@@ -349,8 +351,8 @@ All of these are documented in your collection exports!
 ## Latest Export Information
-- **Export Directory**: `/home/jramos/homelab/homelab-export-20251202-204939/`
+- **Export Directory**: `/home/jramos/homelab/disaster-recovery/homelab-export-20251207-120040/`
-- **Collection Date**: 2025-12-02 20:49:54
+- **Collection Date**: 2025-12-07 12:00:40
 - **Hostname**: serviceslab
 - **Collection Level**: full
 - **Script Version**: 1.0.0
@@ -439,6 +441,40 @@ For detailed troubleshooting, see: **[troubleshooting/BUGFIX-SUMMARY.md](trouble
 | **Output (standard)** | 2-6 MB | Per collection run |
 | **Output (full)** | 5-20 MB | Per collection run |
+## Monitoring Stack
+The infrastructure now includes a comprehensive monitoring and observability stack deployed on VM 101 (monitoring-docker) at 192.168.2.114:
+### Components
+- **Grafana** (Port 3000): Visualization and dashboards
+- **Prometheus** (Port 9090): Metrics collection and time-series database
+- **PVE Exporter** (Port 9221): Proxmox VE metrics exporter
+### Features
+- Real-time Proxmox infrastructure monitoring
+- VM and container resource utilization tracking
+- Storage pool metrics and capacity planning
+- Network traffic analysis
+- Pre-configured dashboards for Proxmox VE
+- Alerting capabilities (configurable)
+### Access
+- **Grafana UI**: http://192.168.2.114:3000
+- **Prometheus UI**: http://192.168.2.114:9090
+- **Metrics Endpoint**: http://192.168.2.114:9221/pve
+### Documentation
+For comprehensive setup, configuration, and troubleshooting:
+- **Monitoring Guide**: `monitoring/README.md`
+- **Docker Compose Configs**: `monitoring/grafana/`, `monitoring/prometheus/`, `monitoring/pve-exporter/`
+### Key Metrics
+- Node CPU, memory, and disk usage
+- VM/CT resource consumption
+- Storage pool utilization trends
+- Backup job success rates
+- Network interface statistics
 ## Service Management
 ### n8n Workflow Automation
@@ -531,8 +567,8 @@ bash scripts/crawlers-exporters/collect.sh
 ---
-**Repository Version:** 2.0.0
+**Repository Version:** 2.1.0
-**Last Updated**: 2025-12-02
+**Last Updated**: 2025-12-07
-**Latest Export**: homelab-export-20251202-204939
+**Latest Export**: disaster-recovery/homelab-export-20251207-120040
-**Infrastructure**: 9 VMs, 3 Containers, Proxmox VE 8.3.3
+**Infrastructure**: 10 VMs, 4 Containers, Proxmox VE 8.3.3
 **Maintained by**: Your homelab automation system
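For context on the scrape setup described in the monitoring documentation: prometheus-pve-exporter uses Prometheus's multi-target relabel pattern, so a minimal `prometheus.yml` job might look like the sketch below. The node address 192.168.2.100 and exporter address 192.168.2.114:9221 come from this export; the rest is an assumption following the upstream exporter's documented configuration, not copied from this repository.

```yaml
scrape_configs:
  - job_name: pve
    metrics_path: /pve              # exporter endpoint shown above
    params:
      module: [default]             # credentials module defined in pve.yml
    static_configs:
      - targets: ['192.168.2.100']  # the Proxmox node to monitor
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target    # pass the node as ?target=
      - source_labels: [__param_target]
        target_label: instance          # keep the node name on metrics
      - target_label: __address__
        replacement: 192.168.2.114:9221 # actual scrape hits the exporter
```

The relabeling is what lets one exporter instance serve metrics for any number of Proxmox nodes: Prometheus rewrites the scrape address to the exporter and hands the original target over as a query parameter.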


@@ -16,18 +16,21 @@ This repository contains configuration files, scripts, and documentation for man
 ### Virtual Machines (QEMU/KVM)
 - **100** - docker-hub: Container registry and Docker hub mirror
-- **101** - gitlab: GitLab CE/EE for source control and CI/CD
+- **101** - monitoring-docker: Monitoring stack (Grafana/Prometheus/PVE Exporter) at 192.168.2.114
+- **104** - ubuntu-dev: Ubuntu development environment
 - **105** - dev: General-purpose development environment
 - **106** - Ansible-Control: Infrastructure automation control node
+- **107** - ubuntu-docker: Ubuntu Docker host
 - **108** - CML: Cisco Modeling Labs for network simulation
 - **109** - web-server-01: Web application server (clustered)
 - **110** - web-server-02: Web application server (load-balanced)
 - **111** - db-server-01: Database server
 ### Containers (LXC)
-- **102** - nginx: Reverse proxy and load balancer
+- **102** - nginx: Reverse proxy and load balancer (Nginx Proxy Manager)
 - **103** - netbox: Network documentation and IPAM
-- **112** - Anytype: Knowledge management system
+- **112** - twingate-connector: Zero-trust network access connector
+- **113** - n8n: Workflow automation platform at 192.168.2.107
 ### Storage Pools
 - **local**: System files, ISOs, and templates
@@ -49,6 +52,40 @@ homelab/
 └── README.md # This file
 ```
+## Monitoring & Observability
+The infrastructure includes a comprehensive monitoring stack deployed on VM 101 (monitoring-docker) at 192.168.2.114:
+### Components
+- **Grafana** (Port 3000): Visualization and dashboards
+- **Prometheus** (Port 9090): Metrics collection and time-series database
+- **PVE Exporter** (Port 9221): Proxmox VE metrics exporter
+### Features
+- Real-time infrastructure monitoring
+- Resource utilization tracking for VMs and containers
+- Storage pool metrics and trends
+- Network traffic analysis
+- Pre-configured Proxmox VE dashboards
+- Alerting capabilities
+**Documentation**: See `monitoring/README.md` for complete setup and configuration guide.
+## Network Security
+### Zero-Trust Access
+- **CT 112** - twingate-connector: Provides secure remote access without traditional VPN
+- **Technology**: Twingate zero-trust network access
+- **Benefits**: Simplified secure access, no complex VPN configurations
+## Automation & Integration
+### Workflow Automation
+- **CT 113** - n8n at 192.168.2.107
+- **Database**: PostgreSQL 15+
+- **Features**: API integrations, scheduled workflows, webhook triggers
+- **Documentation**: See `services/README.md` for n8n setup and troubleshooting
 ## Quick Start
 ### Prerequisites
@@ -137,5 +174,6 @@ For questions about:
 ---
-*Last Updated: 2025-11-29*
+*Last Updated: 2025-12-07*
 *Proxmox Version: 8.3.3*
+*Infrastructure: 10 VMs, 4 LXC Containers*
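The commits reference per-service docker-compose configurations under `monitoring/`; a condensed single-file sketch of such a stack is shown below. Image names and mounts are assumptions based on the upstream projects (grafana, prometheus, prometheus-pve-exporter), not copied from this repository.

```yaml
services:
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
    volumes: ["grafana-data:/var/lib/grafana"]   # persist dashboards/users
    restart: unless-stopped
  prometheus:
    image: prom/prometheus
    ports: ["9090:9090"]
    volumes: ["./prometheus.yml:/etc/prometheus/prometheus.yml:ro"]
    restart: unless-stopped
  pve-exporter:
    image: prompve/prometheus-pve-exporter
    ports: ["9221:9221"]
    volumes: ["./pve.yml:/etc/prometheus/pve.yml:ro"]  # Proxmox credentials; gitignored, template tracked
    restart: unless-stopped
volumes:
  grafana-data:
```

Keeping `pve.yml` as a bind-mounted, gitignored file while tracking only `pve.yml.template` matches the .gitignore changes in this compare.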

Binary file not shown.


@@ -1,88 +0,0 @@
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/docs
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/configs/proxmox
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/configs/vms
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/configs/lxc
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/configs/storage
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/configs/network
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/configs/backup
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/exports/system
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/exports/cluster
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/exports/guests
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/scripts
[2025-12-02 20:49:39] [DEBUG] Created directory: ./homelab-export-20251202-204939/diagrams
[2025-12-02 20:49:39] [SUCCESS] Directory structure created at: ./homelab-export-20251202-204939
[2025-12-02 20:49:40] [SUCCESS] Collected Proxmox VE version
[2025-12-02 20:49:40] [SUCCESS] Collected Hostname
[2025-12-02 20:49:40] [SUCCESS] Collected Kernel information
[2025-12-02 20:49:40] [SUCCESS] Collected System uptime
[2025-12-02 20:49:40] [SUCCESS] Collected System date/time
[2025-12-02 20:49:40] [SUCCESS] Collected CPU information
[2025-12-02 20:49:40] [SUCCESS] Collected Detailed CPU info
[2025-12-02 20:49:40] [SUCCESS] Collected Memory information
[2025-12-02 20:49:40] [SUCCESS] Collected Detailed memory info
[2025-12-02 20:49:40] [SUCCESS] Collected Filesystem usage
[2025-12-02 20:49:40] [SUCCESS] Collected Block devices
[2025-12-02 20:49:40] [DEBUG] Command 'pvdisplay' is available
[2025-12-02 20:49:40] [SUCCESS] Collected LVM physical volumes
[2025-12-02 20:49:40] [SUCCESS] Collected LVM volume groups
[2025-12-02 20:49:40] [SUCCESS] Collected LVM logical volumes
[2025-12-02 20:49:40] [SUCCESS] Collected IP addresses
[2025-12-02 20:49:40] [SUCCESS] Collected Routing table
[2025-12-02 20:49:40] [SUCCESS] Collected Listening sockets
[2025-12-02 20:49:40] [DEBUG] Command 'dpkg' is available
[2025-12-02 20:49:40] [SUCCESS] Collected Installed packages
[2025-12-02 20:49:40] [SUCCESS] Collected Datacenter config
[2025-12-02 20:49:40] [SUCCESS] Collected Storage config
[2025-12-02 20:49:40] [SUCCESS] Collected User config
[2025-12-02 20:49:40] [DEBUG] Source does not exist: /etc/pve/domains.cfg (Authentication domains)
[2025-12-02 20:49:40] [SUCCESS] Collected Auth public key
[2025-12-02 20:49:40] [WARN] Failed to copy directory HA configuration from /etc/pve/ha
[2025-12-02 20:49:40] [SUCCESS] Collected VM 100 (docker-hub) config
[2025-12-02 20:49:40] [SUCCESS] Collected VM 104 (ubuntu-dev) config
[2025-12-02 20:49:40] [SUCCESS] Collected VM 105 (dev) config
[2025-12-02 20:49:40] [SUCCESS] Collected VM 106 (Ansible-Control) config
[2025-12-02 20:49:40] [SUCCESS] Collected VM 107 (ubuntu-docker) config
[2025-12-02 20:49:40] [SUCCESS] Collected VM 108 (CML) config
[2025-12-02 20:49:40] [SUCCESS] Collected VM 109 (web-server-01) config
[2025-12-02 20:49:40] [SUCCESS] Collected VM 110 (web-server-02) config
[2025-12-02 20:49:40] [SUCCESS] Collected VM 111 (db-server-01) config
[2025-12-02 20:49:40] [SUCCESS] Collected Container 102 (nginx) config
[2025-12-02 20:49:40] [SUCCESS] Collected Container 103 (netbox) config
[2025-12-02 20:49:40] [SUCCESS] Collected Container 113 (n8n) config
[2025-12-02 20:49:40] [SUCCESS] Collected Network interfaces config
[2025-12-02 20:49:40] [WARN] Failed to copy directory Additional interface configs from /etc/network/interfaces.d
[2025-12-02 20:49:40] [WARN] Failed to copy directory SDN configuration from /etc/pve/sdn
[2025-12-02 20:49:40] [SUCCESS] Collected Hosts file
[2025-12-02 20:49:40] [SUCCESS] Collected DNS resolver config
[2025-12-02 20:49:40] [DEBUG] Command 'pvesm' is available
[2025-12-02 20:49:42] [SUCCESS] Collected Storage status
[2025-12-02 20:49:42] [DEBUG] Command 'zpool' is available
[2025-12-02 20:49:42] [SUCCESS] Collected ZFS pool status
[2025-12-02 20:49:42] [SUCCESS] Collected ZFS pool list
[2025-12-02 20:49:42] [DEBUG] Command 'zfs' is available
[2025-12-02 20:49:42] [SUCCESS] Collected ZFS datasets
[2025-12-02 20:49:42] [SUCCESS] Collected Samba config
[2025-12-02 20:49:42] [SUCCESS] Collected iSCSI initiator config
[2025-12-02 20:49:42] [SUCCESS] Collected Vzdump config
[2025-12-02 20:49:42] [DEBUG] Command 'pvecm' is available
[2025-12-02 20:49:42] [WARN] Failed to execute: pvecm status (Cluster status)
[2025-12-02 20:49:43] [WARN] Failed to execute: pvecm nodes (Cluster nodes)
[2025-12-02 20:49:43] [DEBUG] Command 'pvesh' is available
[2025-12-02 20:49:44] [SUCCESS] Collected Cluster resources
[2025-12-02 20:49:45] [SUCCESS] Collected Recent tasks
[2025-12-02 20:49:45] [DEBUG] Command 'qm' is available
[2025-12-02 20:49:46] [SUCCESS] Collected VM list
[2025-12-02 20:49:46] [DEBUG] Command 'pct' is available
[2025-12-02 20:49:47] [SUCCESS] Collected Container list
[2025-12-02 20:49:47] [DEBUG] Command 'pvesh' is available
[2025-12-02 20:49:49] [SUCCESS] Collected All guests (JSON)
[2025-12-02 20:49:49] [SUCCESS] Collected Systemd services
[2025-12-02 20:49:54] [SUCCESS] Generated README.md
[2025-12-02 20:49:58] [SUCCESS] Generated SUMMARY.md
[2025-12-02 20:49:58] [SUCCESS] Total items collected: 50
[2025-12-02 20:49:58] [INFO] Total items skipped: 1
[2025-12-02 20:49:58] [WARN] Total errors: 5
[2025-12-02 20:49:58] [WARN] Review ./homelab-export-20251202-204939/collection.log for details
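Logs in this format are easy to triage with grep; a small sketch against a hypothetical sample (not the real log above):

```shell
# Count successful items and surface the warnings worth reviewing
# in a collector log of the [timestamp] [LEVEL] message format.
log=$(mktemp)
cat > "$log" <<'EOF'
[2025-12-02 20:49:40] [SUCCESS] Collected VM 100 (docker-hub) config
[2025-12-02 20:49:40] [WARN] Failed to copy directory HA configuration from /etc/pve/ha
[2025-12-02 20:49:42] [WARN] Failed to execute: pvecm status (Cluster status)
EOF
grep -c '\[SUCCESS\]' "$log"   # -> 1
grep '\[WARN\]' "$log"         # the lines to investigate
```

The `pvecm` warnings in the real log are expected on a standalone (non-clustered) Proxmox node, since cluster status commands fail when no cluster is configured.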


@@ -1,9 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuH77Q3gsq0eSe+iUFGk0
VliLvw4A/JbEkRnW3B8D+iNeN41sm0Py7AkqlKy3X4LE8UQQ6Yu+nyxBfZMr5Sim
41FbnxxflXfXVvCcbfJe0PW9iRuXATqhBZtKbkcE4y2C/FCnQEq9d3LY8gKTHRJ3
7NQ4TEe0njNpeJ8TthzFJwFLwybO40XuVdjyvoDNRLyOqxLUc4ju0VQjZRJwE6hI
8vUv/o+d4n5eGq5s+wu3kgiI8NztPjiZhWuW0Kc/pkanHt1hSvoJzICWsr3pcU/F
nrTP0q56voFwnyEFxZ6qZhTxq/Xe1JFxYI0fA2PZYGguwx1tLGbrV1DBD0A9RBc+
GwIDAQAB
-----END PUBLIC KEY-----


@@ -1,163 +0,0 @@
UNIT LOAD ACTIVE SUB DESCRIPTION
apparmor.service loaded active exited Load AppArmor profiles
apt-daily-upgrade.service loaded inactive dead Daily apt upgrade and clean activities
apt-daily.service loaded inactive dead Daily apt download activities
● auditd.service not-found inactive dead auditd.service
auth-rpcgss-module.service loaded inactive dead Kernel Module supporting RPCSEC_GSS
beszel-agent-update.service loaded inactive dead Update beszel-agent if needed
beszel-agent.service loaded active running Beszel Agent Service
blk-availability.service loaded active exited Availability of block devices
chrony.service loaded active running chrony, an NTP client/server
● connman.service not-found inactive dead connman.service
console-getty.service loaded inactive dead Console Getty
● console-screen.service not-found inactive dead console-screen.service
console-setup.service loaded active exited Set console font and keymap
corosync.service loaded inactive dead Corosync Cluster Engine
cron.service loaded active running Regular background program processing daemon
dbus.service loaded active running D-Bus System Message Bus
● display-manager.service not-found inactive dead display-manager.service
dm-event.service loaded active running Device-mapper event daemon
dpkg-db-backup.service loaded inactive dead Daily dpkg database backup service
● dracut-mount.service not-found inactive dead dracut-mount.service
e2scrub_all.service loaded inactive dead Online ext4 Metadata Check for All Filesystems
e2scrub_reap.service loaded inactive dead Remove Stale Online ext4 Metadata Check Snapshots
emergency.service loaded inactive dead Emergency Shell
● exim4.service not-found inactive dead exim4.service
● fcoe.service not-found inactive dead fcoe.service
fstrim.service loaded inactive dead Discard unused blocks on filesystems from /etc/fstab
getty-static.service loaded inactive dead getty on tty2-tty6 if dbus and logind are not available
getty@tty1.service loaded active running Getty on tty1
● glusterd.service not-found inactive dead glusterd.service
● gssproxy.service not-found inactive dead gssproxy.service
ifupdown2-pre.service loaded active exited Helper to synchronize boot up for ifupdown
initrd-cleanup.service loaded inactive dead Cleaning Up and Shutting Down Daemons
initrd-parse-etc.service loaded inactive dead Mountpoints Configured in the Real Root
initrd-switch-root.service loaded inactive dead Switch Root
initrd-udevadm-cleanup-db.service loaded inactive dead Cleanup udev Database
● iscsi-shutdown.service not-found inactive dead iscsi-shutdown.service
iscsid.service loaded inactive dead iSCSI initiator daemon (iscsid)
● kbd.service not-found inactive dead kbd.service
keyboard-setup.service loaded active exited Set the console keyboard layout
kmod-static-nodes.service loaded active exited Create List of Static Device Nodes
ksmtuned.service loaded active running Kernel Samepage Merging (KSM) Tuning Daemon
logrotate.service loaded inactive dead Rotate log files
lvm2-lvmpolld.service loaded inactive dead LVM2 poll daemon
lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
lxc-monitord.service loaded active running LXC Container Monitoring Daemon
lxc-net.service loaded active exited LXC network bridge setup
lxc.service loaded active exited LXC Container Initialization and Autoboot Code
lxcfs.service loaded active running FUSE filesystem for LXC
man-db.service loaded inactive dead Daily man-db regeneration
modprobe@configfs.service loaded inactive dead Load Kernel Module configfs
modprobe@dm_mod.service loaded inactive dead Load Kernel Module dm_mod
modprobe@drm.service loaded inactive dead Load Kernel Module drm
modprobe@efi_pstore.service loaded inactive dead Load Kernel Module efi_pstore
modprobe@fuse.service loaded inactive dead Load Kernel Module fuse
modprobe@loop.service loaded inactive dead Load Kernel Module loop
● multipathd.service not-found inactive dead multipathd.service
networking.service loaded active exited Network initialization
● NetworkManager.service not-found inactive dead NetworkManager.service
● nfs-kernel-server.service not-found inactive dead nfs-kernel-server.service
● nfs-server.service not-found inactive dead nfs-server.service
nfs-utils.service loaded inactive dead NFS server and client services
● ntp.service not-found inactive dead ntp.service
● ntpsec.service not-found inactive dead ntpsec.service
open-iscsi.service loaded inactive dead Login to default iSCSI targets
● openntpd.service not-found inactive dead openntpd.service
● plymouth-quit-wait.service not-found inactive dead plymouth-quit-wait.service
● plymouth-start.service not-found inactive dead plymouth-start.service
postfix.service loaded active exited Postfix Mail Transport Agent
postfix@-.service loaded active running Postfix Mail Transport Agent (instance -)
promtail.service loaded active running Promtail service for Loki log shipping
proxmox-boot-cleanup.service loaded inactive dead Clean up bootloader next-boot setting
proxmox-firewall.service loaded active running Proxmox nftables firewall
pve-cluster.service loaded active running The Proxmox VE cluster filesystem
pve-container@102.service loaded active running PVE LXC Container: 102
pve-container@113.service loaded active running PVE LXC Container: 113
pve-daily-update.service loaded inactive dead Daily PVE download activities
pve-firewall.service loaded active running Proxmox VE firewall
pve-guests.service loaded active exited PVE guests
pve-ha-crm.service loaded active running PVE Cluster HA Resource Manager Daemon
pve-ha-lrm.service loaded active running PVE Local HA Resource Manager Daemon
pve-lxc-syscalld.service loaded active running Proxmox VE LXC Syscall Daemon
pve-query-machine-capabilities.service loaded active exited PVE Query Machine Capabilities
pvebanner.service loaded active exited Proxmox VE Login Banner
pvedaemon.service loaded active running PVE API Daemon
pvefw-logger.service loaded active running Proxmox VE firewall logger
pvenetcommit.service loaded active exited Commit Proxmox VE network changes
pveproxy.service loaded active running PVE API Proxy Server
pvescheduler.service loaded active running Proxmox VE scheduler
pvestatd.service loaded active running PVE Status Daemon
pveupload-cleanup.service loaded inactive dead Clean up old Proxmox pveupload files in /var/tmp
qmeventd.service loaded active running PVE Qemu Event Daemon
rbdmap.service loaded active exited Map RBD devices
rc-local.service loaded inactive dead /etc/rc.local Compatibility
rescue.service loaded inactive dead Rescue Shell
rpc-gssd.service loaded inactive dead RPC security service for NFS client and server
rpc-statd-notify.service loaded active exited Notify NFS peers of a restart
rpc-svcgssd.service loaded inactive dead RPC security service for NFS server
rpcbind.service loaded active running RPC bind portmap service
rrdcached.service loaded active running LSB: start or stop rrdcached
● sendmail.service not-found inactive dead sendmail.service
smartmontools.service loaded active running Self Monitoring and Reporting Technology (SMART) Daemon
● smb.service not-found inactive dead smb.service
spiceproxy.service loaded active running PVE SPICE Proxy Server
ssh.service loaded active running OpenBSD Secure Shell server
● syslog.service not-found inactive dead syslog.service
systemd-ask-password-console.service loaded inactive dead Dispatch Password Requests to Console
systemd-ask-password-wall.service loaded inactive dead Forward Password Requests to Wall
systemd-binfmt.service loaded active exited Set Up Additional Binary Formats
systemd-boot-system-token.service loaded inactive dead Store a System Token in an EFI Variable
systemd-firstboot.service loaded inactive dead First Boot Wizard
systemd-fsck-root.service loaded inactive dead File System Check on Root Device
systemd-fsck@dev-disk-by\x2duuid-20FD\x2d8DBD.service loaded active exited File System Check on /dev/disk/by-uuid/20FD-8DBD
systemd-fsckd.service loaded inactive dead File System Check Daemon to report status
● systemd-hwdb-update.service not-found inactive dead systemd-hwdb-update.service
systemd-initctl.service loaded inactive dead initctl Compatibility Daemon
systemd-journal-flush.service loaded active exited Flush Journal to Persistent Storage
systemd-journald.service loaded active running Journal Service
systemd-logind.service loaded active running User Login Management
systemd-machine-id-commit.service loaded inactive dead Commit a transient machine-id on disk
systemd-modules-load.service loaded active exited Load Kernel Modules
systemd-networkd.service loaded inactive dead Network Configuration
● systemd-oomd.service not-found inactive dead systemd-oomd.service
systemd-pcrphase-initrd.service loaded inactive dead TPM2 PCR Barrier (initrd)
systemd-pcrphase-sysinit.service loaded inactive dead TPM2 PCR Barrier (Initialization)
systemd-pcrphase.service loaded inactive dead TPM2 PCR Barrier (User)
systemd-pstore.service loaded inactive dead Platform Persistent Storage Archival
systemd-quotacheck.service loaded inactive dead File System Quota Check
systemd-random-seed.service loaded active exited Load/Save Random Seed
systemd-remount-fs.service loaded active exited Remount Root and Kernel File Systems
systemd-repart.service loaded inactive dead Repartition Root Disk
systemd-rfkill.service loaded inactive dead Load/Save RF Kill Switch Status
systemd-sysctl.service loaded active exited Apply Kernel Variables
systemd-sysext.service loaded inactive dead Merge System Extension Images into /usr/ and /opt/
systemd-sysusers.service loaded active exited Create System Users
systemd-tmpfiles-clean.service loaded inactive dead Cleanup of Temporary Directories
systemd-tmpfiles-setup-dev.service loaded active exited Create Static Device Nodes in /dev
systemd-tmpfiles-setup.service loaded active exited Create System Files and Directories
systemd-udev-settle.service loaded active exited Wait for udev To Complete Device Initialization
systemd-udev-trigger.service loaded active exited Coldplug All udev Devices
systemd-udevd.service loaded active running Rule-based Manager for Device Events and Files
● systemd-update-done.service not-found inactive dead systemd-update-done.service
systemd-update-utmp-runlevel.service loaded inactive dead Record Runlevel Change in UTMP
systemd-update-utmp.service loaded active exited Record System Boot/Shutdown in UTMP
systemd-user-sessions.service loaded active exited Permit User Sessions
● systemd-vconsole-setup.service not-found inactive dead systemd-vconsole-setup.service
user-runtime-dir@0.service loaded active exited User Runtime Directory /run/user/0
user@0.service loaded active running User Manager for UID 0
watchdog-mux.service loaded active running Proxmox VE watchdog multiplexer
wazuh-agent.service loaded active running Wazuh agent
zfs-import-cache.service loaded inactive dead Import ZFS pools by cache file
zfs-import-scan.service loaded active exited Import ZFS pools by device scanning
zfs-import@Vault.service loaded active exited Import ZFS pool Vault
zfs-mount.service loaded active exited Mount ZFS filesystems
zfs-share.service loaded active exited ZFS file system shares
zfs-volume-wait.service loaded active exited Wait for ZFS Volume (zvol) links in /dev
zfs-zed.service loaded active running ZFS Event Daemon (zed)
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
156 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
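The legend above maps directly onto the columns of each row; a minimal parsing sketch in Python (the function name and dict layout are illustrative, not part of any systemd tooling):

```python
# Split one line of `systemctl --type=service` output into its columns.
# Assumes the whitespace-separated layout shown above; the leading "●"
# bullet that marks not-found/failed units is stripped first.
def parse_unit_line(line: str) -> dict:
    line = line.lstrip("\u25cf ").strip()  # drop a leading ● marker, if any
    unit, load, active, sub, *desc = line.split()
    return {
        "unit": unit,
        "load": load,        # whether the unit definition was properly loaded
        "active": active,    # high-level activation state (generalizes SUB)
        "sub": sub,          # low-level, unit-type-specific state
        "description": " ".join(desc),
    }

row = parse_unit_line("● smb.service not-found inactive dead smb.service")
```

Run over the whole listing, this makes it easy to count, say, the `not-found` units.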

View File

@@ -1,6 +0,0 @@
Name Type Status Total Used Available %
PBS-Backups pbs active 1009313392 245697628 712271792 24.34%
Vault zfspool active 4546625536 487890744 4058734792 10.73%
iso-share nfs active 3298592768 46755840 3251836928 1.42%
local dir active 45024148 6655328 36049256 14.78%
local-lvm lvmthin active 68988928 6898 68982029 0.01%

View File

@@ -1,15 +0,0 @@
NAME USED AVAIL REFER MOUNTPOINT
Vault 465G 3.78T 104K /Vault
Vault/base-104-disk-0 38.4G 3.81T 5.87G -
Vault/base-107-disk-0 56.5G 3.83T 5.69G -
Vault/subvol-102-disk-0 721M 1.30G 721M /Vault/subvol-102-disk-0
Vault/subvol-103-disk-0 1.68G 2.32G 1.68G /Vault/subvol-103-disk-0
Vault/subvol-113-disk-0 2.16G 17.9G 2.14G /Vault/subvol-113-disk-0
Vault/vm-100-disk-0 102G 3.85T 33.3G -
Vault/vm-105-disk-0 32.5G 3.80T 16.3G -
Vault/vm-106-disk-0 32.5G 3.80T 11.3G -
Vault/vm-107-cloudinit 6M 3.78T 72K -
Vault/vm-108-disk-0 102G 3.87T 14.0G -
Vault/vm-109-disk-0 32.5G 3.81T 233M -
Vault/vm-110-disk-0 32.5G 3.81T 3.85G -
Vault/vm-111-disk-0 32.5G 3.81T 4.63G -

View File

@@ -1 +0,0 @@
[{"cpu":0.0145668121932816,"disk":0,"diskread":8754925056,"diskwrite":98623655936,"id":"qemu/100","maxcpu":4,"maxdisk":107374182400,"maxmem":8598323200,"mem":8118095872,"name":"docker-hub","netin":10940443180,"netout":433401918,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":5471864,"vmid":100},{"cpu":0.000396259427189655,"disk":756023296,"diskread":56942592,"diskwrite":0,"id":"lxc/102","maxcpu":1,"maxdisk":2147483648,"maxmem":2147483648,"mem":111960064,"name":"nginx","netin":6466470348,"netout":1025645316,"node":"serviceslab","status":"running","template":0,"type":"lxc","uptime":6223975,"vmid":102},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"lxc/103","maxcpu":2,"maxdisk":4294967296,"maxmem":2147483648,"mem":0,"name":"netbox","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"community-script;network","template":0,"type":"lxc","uptime":0,"vmid":103},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/104","maxcpu":2,"maxdisk":34359738368,"maxmem":5242880000,"mem":0,"name":"ubuntu-dev","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"template","template":1,"type":"qemu","uptime":0,"vmid":104},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/105","maxcpu":4,"maxdisk":34359738368,"maxmem":16777216000,"mem":0,"name":"dev","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":105},{"cpu":0.00859680719603501,"disk":0,"diskread":20044764516,"diskwrite":44196287488,"id":"qemu/106","maxcpu":2,"maxdisk":34359738368,"maxmem":4294967296,"mem":3740889088,"name":"Ansible-Control","netin":8096398402,"netout":77216446,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":2712772,"vmid":106},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/107","maxcpu":2,"maxdisk":53687091200,"maxmem":4294967296,"mem":0,"name":"ubuntu-docker","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"template","template":1,"ty
pe":"qemu","uptime":0,"vmid":107},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/108","maxcpu":4,"maxdisk":107374182400,"maxmem":33554432000,"mem":0,"name":"CML","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":108},{"cpu":0.0315216263854617,"disk":0,"diskread":572292626,"diskwrite":1008925696,"id":"qemu/109","maxcpu":1,"maxdisk":34359738368,"maxmem":2147483648,"mem":209444864,"name":"web-server-01","netin":4917297893,"netout":3941494,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":2697856,"vmid":109},{"cpu":0.00477600399779723,"disk":0,"diskread":5130442360,"diskwrite":21638925824,"id":"qemu/110","maxcpu":1,"maxdisk":34359738368,"maxmem":4294967296,"mem":2422759424,"name":"web-server-02","netin":6548190260,"netout":24100161,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":2692898,"vmid":110},{"cpu":0.00668640559691612,"disk":0,"diskread":4973196920,"diskwrite":22098824704,"id":"qemu/111","maxcpu":1,"maxdisk":34359738368,"maxmem":4294967296,"mem":2348294144,"name":"db-server-01","netin":6555995304,"netout":20880204,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":2691960,"vmid":111},{"cpu":0.000594389140784483,"disk":2294022144,"diskread":0,"diskwrite":114688,"id":"lxc/113","maxcpu":2,"maxdisk":21474836480,"maxmem":4294967296,"mem":498679808,"name":"n8n","netin":1092635479,"netout":20852346,"node":"serviceslab","status":"running","template":0,"type":"lxc","uptime":201526,"vmid":113},{"cgroup-mode":2,"cpu":0.00678020181071272,"disk":6814695424,"id":"node/serviceslab","level":"","maxcpu":24,"maxdisk":46104727552,"maxmem":185885036544,"mem":84348379136,"node":"serviceslab","status":"online","type":"node","uptime":6224083},{"content":"images,rootdir","disk":7064466,"id":"storage/serviceslab/local-lvm","maxdisk":70644662272,"node":"serviceslab","plugintype":"lvmthin","shared":0,"status":"available","storage":"local-lvm","type
":"storage"},{"content":"images,rootdir","disk":499600146432,"id":"storage/serviceslab/Vault","maxdisk":4655744548864,"node":"serviceslab","plugintype":"zfspool","shared":0,"status":"available","storage":"Vault","type":"storage"},{"content":"iso","disk":47877980160,"id":"storage/serviceslab/iso-share","maxdisk":3377758994432,"node":"serviceslab","plugintype":"nfs","shared":1,"status":"available","storage":"iso-share","type":"storage"},{"content":"vztmpl,backup,iso","disk":6814699520,"id":"storage/serviceslab/local","maxdisk":46104727552,"node":"serviceslab","plugintype":"dir","shared":0,"status":"available","storage":"local","type":"storage"},{"content":"backup","disk":251594371072,"id":"storage/serviceslab/PBS-Backups","maxdisk":1033536913408,"node":"serviceslab","plugintype":"pbs","shared":1,"status":"available","storage":"PBS-Backups","type":"storage"},{"id":"sdn/serviceslab/localnetwork","node":"serviceslab","sdn":"localnetwork","status":"ok","type":"sdn"}]

File diff suppressed because one or more lines are too long

View File

@@ -1 +0,0 @@
[{"cpu":0.0186187802692886,"disk":0,"diskread":8754925056,"diskwrite":98623840256,"id":"qemu/100","maxcpu":4,"maxdisk":107374182400,"maxmem":8598323200,"mem":8120344576,"name":"docker-hub","netin":10940472600,"netout":433402096,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":5471875,"vmid":100},{"cpu":0.000396373773600793,"disk":756023296,"diskread":56942592,"diskwrite":0,"id":"lxc/102","maxcpu":1,"maxdisk":2147483648,"maxmem":2147483648,"mem":111960064,"name":"nginx","netin":6466499856,"netout":1025651322,"node":"serviceslab","status":"running","template":0,"type":"lxc","uptime":6223985,"vmid":102},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"lxc/103","maxcpu":2,"maxdisk":4294967296,"maxmem":2147483648,"mem":0,"name":"netbox","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"community-script;network","template":0,"type":"lxc","uptime":0,"vmid":103},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/104","maxcpu":2,"maxdisk":34359738368,"maxmem":5242880000,"mem":0,"name":"ubuntu-dev","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"template","template":1,"type":"qemu","uptime":0,"vmid":104},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/105","maxcpu":4,"maxdisk":34359738368,"maxmem":16777216000,"mem":0,"name":"dev","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":105},{"cpu":0.0119351155572363,"disk":0,"diskread":20044764516,"diskwrite":44196287488,"id":"qemu/106","maxcpu":2,"maxdisk":34359738368,"maxmem":4294967296,"mem":3740889088,"name":"Ansible-Control","netin":8096426464,"netout":77216446,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":2712783,"vmid":106},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/107","maxcpu":2,"maxdisk":53687091200,"maxmem":4294967296,"mem":0,"name":"ubuntu-docker","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"template","template":1,"typ
e":"qemu","uptime":0,"vmid":107},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/108","maxcpu":4,"maxdisk":107374182400,"maxmem":33554432000,"mem":0,"name":"CML","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":108},{"cpu":0.0267346588482093,"disk":0,"diskread":572292626,"diskwrite":1008925696,"id":"qemu/109","maxcpu":1,"maxdisk":34359738368,"maxmem":2147483648,"mem":209444864,"name":"web-server-01","netin":4917325955,"netout":3941494,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":2697866,"vmid":109},{"cpu":0.00286442773373671,"disk":0,"diskread":5130442360,"diskwrite":21638929920,"id":"qemu/110","maxcpu":1,"maxdisk":34359738368,"maxmem":4294967296,"mem":2422759424,"name":"web-server-02","netin":6548218322,"netout":24100161,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":2692908,"vmid":110},{"cpu":0.00381923697831561,"disk":0,"diskread":4973196920,"diskwrite":22098824704,"id":"qemu/111","maxcpu":1,"maxdisk":34359738368,"maxmem":4294967296,"mem":2348294144,"name":"db-server-01","netin":6556023366,"netout":20880204,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":2691971,"vmid":111},{"cpu":0.000396373773600793,"disk":2294022144,"diskread":0,"diskwrite":114688,"id":"lxc/113","maxcpu":2,"maxdisk":21474836480,"maxmem":4294967296,"mem":498909184,"name":"n8n","netin":1092664063,"netout":20852346,"node":"serviceslab","status":"running","template":0,"type":"lxc","uptime":201537,"vmid":113}]

View File

@@ -1 +0,0 @@
Tue Dec 2 08:49:40 PM MST 2025

View File

@@ -1 +0,0 @@
20:49:40 up 72 days, 54 min, 3 users, load average: 0.14, 0.21, 0.23

View File

@@ -4,9 +4,9 @@ This directory contains a complete snapshot of your Proxmox-based homelab infras
 ## Collection Information
-- **Collection Date**: 2025-12-02 20:49:54
+- **Collection Date**: 2025-12-07 12:00:51
 - **Proxmox Node**: serviceslab
-- **Collection Level**: full
+- **Collection Level**: standard
 - **Sanitization Applied**: IPs=false, Passwords=true, Tokens=true
 ## Directory Structure

View File

@@ -2,9 +2,9 @@
 ## Collection Metadata
-- **Date/Time**: 2025-12-02 20:49:54
+- **Date/Time**: 2025-12-07 12:00:51
 - **Hostname**: serviceslab
-- **Collection Level**: full
+- **Collection Level**: standard
 - **Script Version**: 1.0.0
 ## Sanitization Settings
@@ -16,7 +16,7 @@
 ## Collection Statistics
 ### Successfully Collected
-Total items collected: 50
+Total items collected: 51
 - Proxmox VE version
 - Hostname
@@ -41,6 +41,7 @@ Total items collected: 50
 - User config
 - Auth public key
 - VM 100 (docker-hub) config
+- VM 101 (monitoring-docker) config
 - VM 104 (ubuntu-dev) config
 - VM 105 (dev) config
 - VM 106 (Ansible-Control) config
@@ -51,6 +52,7 @@ Total items collected: 50
 - VM 111 (db-server-01) config
 - Container 102 (nginx) config
 - Container 103 (netbox) config
+- Container 112 (twingate-connector) config
 - Container 113 (n8n
 n8n
 n8n) config
@@ -69,7 +71,6 @@ n8n) config
 - VM list
 - Container list
 - All guests (JSON)
-- Systemd services
 ### Skipped Items
 Total items skipped: 1
@@ -152,14 +153,15 @@ zfsutils-linux: 2.2.7-pve1
 ```
 VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
 100 docker-hub running 8200 100.00 1370101
+101 monitoring-docker running 4096 50.00 3956419
 104 ubuntu-dev stopped 5000 32.00 0
 105 dev stopped 16000 32.00 0
-106 Ansible-Control running 4096 32.00 1020188
+106 Ansible-Control stopped 4096 32.00 0
 107 ubuntu-docker stopped 4096 50.00 0
 108 CML stopped 32000 100.00 0
-109 web-server-01 running 2048 32.00 1124720
-110 web-server-02 running 4096 32.00 1159023
-111 db-server-01 running 4096 32.00 1165739
+109 web-server-01 stopped 2048 32.00 0
+110 web-server-02 stopped 4096 32.00 0
+111 db-server-01 stopped 4096 32.00 0
 ```
 ### Containers
@@ -167,16 +169,17 @@ zfsutils-linux: 2.2.7-pve1
 VMID Status Lock Name
 102 running nginx
 103 stopped netbox
+112 running twingate-connector
 113 running n8n
 ```
 ### Storage
 ```
 Name Type Status Total Used Available %
-PBS-Backups pbs active 1009313392 245697632 712271788 24.34%
-Vault zfspool active 4546625536 487890756 4058734780 10.73%
-iso-share nfs active 3298592768 46755840 3251836928 1.42%
-local dir active 45024148 6655444 36049140 14.78%
+PBS-Backups pbs active 1009313392 276840184 681129236 27.43%
+Vault zfspool active 4546625536 494635624 4051989912 10.88%
+iso-share nfs active 3267232768 46755840 3220476928 1.43%
+local dir active 45024148 6813960 35890624 15.13%
 local-lvm lvmthin active 68988928 6898 68982029 0.01%
 ```
@@ -184,19 +187,20 @@ local-lvm lvmthin active 68988928 6898 689820
 ```
 Filesystem Size Used Avail Use% Mounted on
 udev 87G 0 87G 0% /dev
-tmpfs 18G 4.7M 18G 1% /run
-/dev/mapper/pve-root 43G 6.4G 35G 16% /
+tmpfs 18G 3.6M 18G 1% /run
+/dev/mapper/pve-root 43G 6.5G 35G 16% /
 tmpfs 87G 46M 87G 1% /dev/shm
 tmpfs 5.0M 0 5.0M 0% /run/lock
 efivarfs 64K 39K 21K 66% /sys/firmware/efi/efivars
 /dev/sda2 1022M 12M 1011M 2% /boot/efi
 Vault 3.8T 128K 3.8T 1% /Vault
-Vault/subvol-102-disk-0 2.0G 721M 1.3G 36% /Vault/subvol-102-disk-0
+Vault/subvol-102-disk-0 2.0G 722M 1.3G 36% /Vault/subvol-102-disk-0
 Vault/subvol-103-disk-0 4.0G 1.7G 2.4G 43% /Vault/subvol-103-disk-0
-/dev/fuse 128M 24K 128M 1% /etc/pve
-192.168.2.150:/mnt/Vauly/iso-vault 3.1T 45G 3.1T 2% /mnt/pve/iso-share
-192.168.2.150:/mnt/Vauly/anytype 3.1T 0 3.1T 0% /mnt/pve/anytype
+/dev/fuse 128M 28K 128M 1% /etc/pve
+192.168.2.150:/mnt/Vauly/iso-vault 3.1T 45G 3.0T 2% /mnt/pve/iso-share
+192.168.2.150:/mnt/Vauly/anytype 3.0T 0 3.0T 0% /mnt/pve/anytype
 Vault/subvol-113-disk-0 20G 2.2G 18G 11% /Vault/subvol-113-disk-0
+Vault/subvol-112-disk-0 3.0G 466M 2.6G 16% /Vault/subvol-112-disk-0
 tmpfs 18G 4.0K 18G 1% /run/user/0
 ```
@@ -209,4 +213,4 @@ tmpfs 18G 4.0K 18G 1% /run/user/0
 5. Create diagrams and additional documentation in respective folders
 ---
-*Report generated 2025-12-02 20:49:58*
+*Report generated 2025-12-07 12:00:55*

View File

@@ -0,0 +1,90 @@
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/docs
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/configs/proxmox
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/configs/vms
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/configs/lxc
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/configs/storage
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/configs/network
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/configs/backup
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/exports/system
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/exports/cluster
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/exports/guests
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/scripts
[2025-12-07 12:00:40] [DEBUG] Created directory: ./homelab-export-20251207-120040/diagrams
[2025-12-07 12:00:40] [SUCCESS] Directory structure created at: ./homelab-export-20251207-120040
[2025-12-07 12:00:41] [SUCCESS] Collected Proxmox VE version
[2025-12-07 12:00:41] [SUCCESS] Collected Hostname
[2025-12-07 12:00:41] [SUCCESS] Collected Kernel information
[2025-12-07 12:00:41] [SUCCESS] Collected System uptime
[2025-12-07 12:00:41] [SUCCESS] Collected System date/time
[2025-12-07 12:00:41] [SUCCESS] Collected CPU information
[2025-12-07 12:00:41] [SUCCESS] Collected Detailed CPU info
[2025-12-07 12:00:41] [SUCCESS] Collected Memory information
[2025-12-07 12:00:41] [SUCCESS] Collected Detailed memory info
[2025-12-07 12:00:41] [SUCCESS] Collected Filesystem usage
[2025-12-07 12:00:41] [SUCCESS] Collected Block devices
[2025-12-07 12:00:41] [DEBUG] Command 'pvdisplay' is available
[2025-12-07 12:00:41] [SUCCESS] Collected LVM physical volumes
[2025-12-07 12:00:41] [SUCCESS] Collected LVM volume groups
[2025-12-07 12:00:41] [SUCCESS] Collected LVM logical volumes
[2025-12-07 12:00:41] [SUCCESS] Collected IP addresses
[2025-12-07 12:00:41] [SUCCESS] Collected Routing table
[2025-12-07 12:00:41] [SUCCESS] Collected Listening sockets
[2025-12-07 12:00:41] [DEBUG] Command 'dpkg' is available
[2025-12-07 12:00:41] [SUCCESS] Collected Installed packages
[2025-12-07 12:00:41] [SUCCESS] Collected Datacenter config
[2025-12-07 12:00:41] [SUCCESS] Collected Storage config
[2025-12-07 12:00:41] [SUCCESS] Collected User config
[2025-12-07 12:00:41] [DEBUG] Source does not exist: /etc/pve/domains.cfg (Authentication domains)
[2025-12-07 12:00:41] [SUCCESS] Collected Auth public key
[2025-12-07 12:00:41] [WARN] Failed to copy directory HA configuration from /etc/pve/ha
[2025-12-07 12:00:42] [SUCCESS] Collected VM 100 (docker-hub) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 101 (monitoring-docker) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 104 (ubuntu-dev) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 105 (dev) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 106 (Ansible-Control) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 107 (ubuntu-docker) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 108 (CML) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 109 (web-server-01) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 110 (web-server-02) config
[2025-12-07 12:00:42] [SUCCESS] Collected VM 111 (db-server-01) config
[2025-12-07 12:00:42] [SUCCESS] Collected Container 102 (nginx) config
[2025-12-07 12:00:42] [SUCCESS] Collected Container 103 (netbox) config
[2025-12-07 12:00:42] [SUCCESS] Collected Container 112 (twingate-connector) config
[2025-12-07 12:00:42] [SUCCESS] Collected Container 113 (n8n
n8n
n8n) config
[2025-12-07 12:00:42] [SUCCESS] Collected Network interfaces config
[2025-12-07 12:00:42] [WARN] Failed to copy directory Additional interface configs from /etc/network/interfaces.d
[2025-12-07 12:00:42] [WARN] Failed to copy directory SDN configuration from /etc/pve/sdn
[2025-12-07 12:00:42] [SUCCESS] Collected Hosts file
[2025-12-07 12:00:42] [SUCCESS] Collected DNS resolver config
[2025-12-07 12:00:42] [DEBUG] Command 'pvesm' is available
[2025-12-07 12:00:43] [SUCCESS] Collected Storage status
[2025-12-07 12:00:43] [DEBUG] Command 'zpool' is available
[2025-12-07 12:00:43] [SUCCESS] Collected ZFS pool status
[2025-12-07 12:00:43] [SUCCESS] Collected ZFS pool list
[2025-12-07 12:00:43] [DEBUG] Command 'zfs' is available
[2025-12-07 12:00:43] [SUCCESS] Collected ZFS datasets
[2025-12-07 12:00:43] [SUCCESS] Collected Samba config
[2025-12-07 12:00:43] [SUCCESS] Collected iSCSI initiator config
[2025-12-07 12:00:43] [SUCCESS] Collected Vzdump config
[2025-12-07 12:00:43] [DEBUG] Command 'pvecm' is available
[2025-12-07 12:00:44] [WARN] Failed to execute: pvecm status (Cluster status)
[2025-12-07 12:00:44] [WARN] Failed to execute: pvecm nodes (Cluster nodes)
[2025-12-07 12:00:44] [DEBUG] Command 'pvesh' is available
[2025-12-07 12:00:46] [SUCCESS] Collected Cluster resources
[2025-12-07 12:00:47] [SUCCESS] Collected Recent tasks
[2025-12-07 12:00:47] [DEBUG] Command 'qm' is available
[2025-12-07 12:00:48] [SUCCESS] Collected VM list
[2025-12-07 12:00:48] [DEBUG] Command 'pct' is available
[2025-12-07 12:00:49] [SUCCESS] Collected Container list
[2025-12-07 12:00:49] [DEBUG] Command 'pvesh' is available
[2025-12-07 12:00:51] [SUCCESS] Collected All guests (JSON)
[2025-12-07 12:00:51] [INFO] Skipping service configs (collection level: standard)
[2025-12-07 12:00:51] [SUCCESS] Generated README.md
[2025-12-07 12:00:55] [SUCCESS] Generated SUMMARY.md
[2025-12-07 12:00:55] [SUCCESS] Total items collected: 51
[2025-12-07 12:00:55] [INFO] Total items skipped: 1
[2025-12-07 12:00:55] [WARN] Total errors: 5
[2025-12-07 12:00:55] [WARN] Review ./homelab-export-20251207-120040/collection.log for details
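The closing totals (51 collected, 1 skipped, 5 errors) can be cross-checked by tallying the bracketed level tag on each line; a minimal sketch, with a few sample lines copied from the log above:

```python
import re
from collections import Counter

log_lines = [
    "[2025-12-07 12:00:41] [SUCCESS] Collected Proxmox VE version",
    "[2025-12-07 12:00:41] [WARN] Failed to copy directory HA configuration from /etc/pve/ha",
    "[2025-12-07 12:00:51] [INFO] Skipping service configs (collection level: standard)",
]

# The second bracketed token on each line is the log level.
levels = Counter(
    re.match(r"\[[^\]]+\] \[(\w+)\]", line).group(1) for line in log_lines
)
```

Run against the full `collection.log`, the `WARN` count should match the reported total of 5 errors.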

View File

@@ -0,0 +1,38 @@
#<div align='center'>
# <a href='https://Helper-Scripts.com' target='_blank' rel='noopener noreferrer'>
# <img src='https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/images/logo-81x112.png' alt='Logo' style='width:81px;height:112px;'/>
# </a>
#
# <h2 style='font-size: 24px; margin: 20px 0;'>Twingate-Connector LXC</h2>
#
# <p style='margin: 16px 0;'>
# <a href='https://ko-fi.com/community_scripts' target='_blank' rel='noopener noreferrer'>
# <img src='https://img.shields.io/badge/&#x2615;-Buy us a coffee-blue' alt='spend Coffee' />
# </a>
# </p>
#
# <span style='margin: 0 10px;'>
# <i class="fa fa-github fa-fw" style="color: #f5f5f5;"></i>
# <a href='https://github.com/community-scripts/ProxmoxVE' target='_blank' rel='noopener noreferrer' style='text-decoration: none; color: #00617f;'>GitHub</a>
# </span>
# <span style='margin: 0 10px;'>
# <i class="fa fa-comments fa-fw" style="color: #f5f5f5;"></i>
# <a href='https://github.com/community-scripts/ProxmoxVE/discussions' target='_blank' rel='noopener noreferrer' style='text-decoration: none; color: #00617f;'>Discussions</a>
# </span>
# <span style='margin: 0 10px;'>
# <i class="fa fa-exclamation-circle fa-fw" style="color: #f5f5f5;"></i>
# <a href='https://github.com/community-scripts/ProxmoxVE/issues' target='_blank' rel='noopener noreferrer' style='text-decoration: none; color: #00617f;'>Issues</a>
# </span>
#</div>
arch: amd64
cores: 1
features: keyctl=1,nesting=1
hostname: twingate-connector
memory: 1024
net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:BD:7B:AB,ip=dhcp,type=veth
onboot: 1
ostype: ubuntu
rootfs: Vault:subvol-112-disk-0,size=3G
swap: 512
tags: community-script;connector;network;twingate
unprivileged: 1
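Guest config files like the one above are plain `key: value` text once the banner comments are skipped; a minimal parsing sketch (the function name is chosen here for illustration, it is not a Proxmox API):

```python
# Sample lines copied from the twingate-connector container config above.
config_text = """# banner comment (ignored)
arch: amd64
cores: 1
memory: 1024
hostname: twingate-connector"""

def parse_guest_config(text: str) -> dict:
    """Parse `pct config`/`qm config` style `key: value` lines into a dict."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and banner comments
            continue
        key, _, value = line.partition(": ")
        cfg[key] = value
    return cfg

cfg = parse_guest_config(config_text)
```

Values stay as strings; callers that need numbers (cores, memory) convert explicitly.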

View File

@@ -0,0 +1,9 @@
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiEK1snWs+diuBS9UtKiF
cn1vr7aCWix7jBicpSlsgXH505vHdVirlPH8Bb+0n9WCAfcw78vYWYQMRcit//kr
PUULOBo6TatFE+1zu2Q5EuoY51x/8p7tvVg46LfQn+GiBmQBxeFsv61SHFG891IS
6QsDcWgvdxGPa2SnTLcWR5uALArSrqYowJwaBXBdj/STS56FFC91KQSBmEsq9pu6
9BpDsqOfpUkHuRwEOam+ZKfofHCNzd2Js3ioAllpGJkjctdBvAgcwyreas6t/bzW
0/SzvH4kKiTS7aVojFZ7hUMBaLct//6i5+OAd2/G/xVy5k7ih4LCYqvV0+xBIMLG
rQIDAQAB
-----END PUBLIC KEY-----

View File

@@ -1,5 +1,6 @@
 user:api@pam:1:0::::::
 token:api@pam!homepage:0:1::
+user:monitoring@pve:1:0::::::
 user:root@pam:1:0:::jramosdirect2@gmail.com:::
 token:root@pam!packer:0:0::
 token:root@pam!tui:0:0::
@@ -13,5 +14,6 @@ group:terraform:terraform@pam::
 role:TerraformProvision:Datastore.AllocateSpace,Datastore.Audit,Pool.Allocate,SDN.Use,Sys.Audit,Sys.Console,Sys.Modify,Sys.PowerMgmt,VM.Allocate,VM.Audit,VM.Clone,VM.Config.CDROM,VM.Config.CPU,VM.Config.Cloudinit,VM.Config.Disk,VM.Config.HWType,VM.Config.Memory,VM.Config.Network,VM.Config.Options,VM.Migrate,VM.Monitor,VM.PowerMgmt:
 acl:1:/:root@pam!packer:Administrator:
+acl:1:/:monitoring@pve:PVEAdmin:
 acl:1:/:@api-ro,api@pam!homepage:PVEAuditor:
 acl:1:/:@terraform:TerraformProvision:

View File

@@ -0,0 +1,6 @@
Name Type Status Total Used Available %
PBS-Backups pbs active 1009313392 276840176 681129244 27.43%
Vault zfspool active 4546625536 494635612 4051989924 10.88%
iso-share nfs active 3267232768 46755840 3220476928 1.43%
local dir active 45024148 6813872 35890712 15.13%
local-lvm lvmthin active 68988928 6898 68982029 0.01%
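The `%` column in the `pvesm status` table is simply Used divided by Total (both reported in KiB blocks); a quick check against the PBS-Backups row:

```python
# Totals copied from the PBS-Backups row of the storage table above (KiB blocks).
total_kib, used_kib = 1009313392, 276840176

# Utilization as a percentage, rounded the way pvesm displays it.
pct = round(100 * used_kib / total_kib, 2)
```

The result agrees with the 27.43% shown for PBS-Backups.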

View File

@@ -0,0 +1,18 @@
NAME USED AVAIL REFER MOUNTPOINT
Vault 472G 3.77T 112K /Vault
Vault/base-104-disk-0 38.4G 3.81T 5.87G -
Vault/base-107-disk-0 56.5G 3.82T 5.69G -
Vault/subvol-102-disk-0 721M 1.30G 721M /Vault/subvol-102-disk-0
Vault/subvol-103-disk-0 1.68G 2.32G 1.68G /Vault/subvol-103-disk-0
Vault/subvol-112-disk-0 466M 2.55G 466M /Vault/subvol-112-disk-0
Vault/subvol-113-disk-0 2.17G 17.9G 2.14G /Vault/subvol-113-disk-0
Vault/vm-100-disk-0 102G 3.84T 33.3G -
Vault/vm-101-cloudinit 6M 3.77T 72K -
Vault/vm-101-disk-0 5.96G 3.77T 9.21G -
Vault/vm-105-disk-0 32.5G 3.79T 16.3G -
Vault/vm-106-disk-0 32.5G 3.79T 11.3G -
Vault/vm-107-cloudinit 6M 3.77T 72K -
Vault/vm-108-disk-0 102G 3.86T 14.0G -
Vault/vm-109-disk-0 32.5G 3.81T 235M -
Vault/vm-110-disk-0 32.5G 3.80T 4.32G -
Vault/vm-111-disk-0 32.5G 3.80T 4.54G -
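`zfs list` prints human-readable sizes with binary K/M/G/T suffixes; comparing or summing them takes a small conversion helper. A sketch, assuming the suffix convention shown above (the helper name is illustrative):

```python
# Convert a zfs-list style size string ("472G", "3.77T", "112K") to bytes.
def zfs_size_to_bytes(size: str) -> float:
    units = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}
    if size[-1] in units:
        return float(size[:-1]) * units[size[-1]]
    return float(size)  # bare number, already bytes

used = zfs_size_to_bytes("472G")   # USED column for the Vault pool root above
```

Note the values are floats because `zfs list` rounds its output, so round-tripping to exact byte counts is not possible from this view.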

View File

@@ -1,2 +1,2 @@
 NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
-Vault 4.36T 99.9G 4.26T - - 8% 2% 1.00x ONLINE -
+Vault 4.36T 107G 4.26T - - 8% 2% 1.00x ONLINE -

View File

@@ -0,0 +1,17 @@
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide0: Vault:vm-101-cloudinit,media=cdrom,size=4M
ide2: iso-share:iso/ubuntu-24.04.2-desktop-amd64.iso,media=cdrom,size=6194550K
memory: 4096
meta: creation-qemu=9.0.2,ctime=1749061520
name: monitoring-docker
net0: virtio=BC:24:11:94:63:50,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: Vault:base-107-disk-0/vm-101-disk-0,iothread=1,size=50G
scsihw: virtio-scsi-single
smbios1: uuid=9eea22c7-6662-4cd9-b0e4-b6d821d5f438
sockets: 1
tags: template
vmgenid: 3f7cbc60-9184-4b98-948a-c35672ad5195

View File

@@ -2,7 +2,7 @@ boot: order=scsi0;ide2;net0
 cores: 2
 cpu: host
 ide0: Vault:vm-107-cloudinit,media=cdrom
-ide2: local:iso/ubuntu-24.04.1-desktop-amd64.iso,media=cdrom,size=6057964K
+ide2: iso-share:iso/ubuntu-24.04.2-desktop-amd64.iso,media=cdrom,size=6194550K
 memory: 4096
 meta: creation-qemu=9.0.2,ctime=1749061520
 name: ubuntu-docker

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -0,0 +1 @@
[{"cpu":0.0162004185838582,"disk":0,"diskread":9329237504,"diskwrite":106667067904,"id":"qemu/100","maxcpu":4,"maxdisk":107374182400,"maxmem":8598323200,"mem":7929741312,"name":"docker-hub","netin":12083321006,"netout":460533575,"node":"serviceslab","status":"running","template":0,"type":"qemu","uptime":5872131,"vmid":100},{"cpu":0.0166769014833835,"disk":0,"diskread":4561243264,"diskwrite":12045452288,"id":"qemu/101","maxcpu":2,"maxdisk":53687091200,"maxmem":4294967296,"mem":3871657984,"name":"monitoring-docker","netin":2943925010,"netout":164801680,"node":"serviceslab","status":"running","tags":"template","template":0,"type":"qemu","uptime":314004,"vmid":101},{"cpu":0.000693404881572702,"disk":756547584,"diskread":56942592,"diskwrite":0,"id":"lxc/102","maxcpu":1,"maxdisk":2147483648,"maxmem":2147483648,"mem":118231040,"name":"nginx","netin":7575938643,"netout":1224826348,"node":"serviceslab","status":"running","template":0,"type":"lxc","uptime":6624241,"vmid":102},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"lxc/103","maxcpu":2,"maxdisk":4294967296,"maxmem":2147483648,"mem":0,"name":"netbox","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"community-script;network","template":0,"type":"lxc","uptime":0,"vmid":103},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/104","maxcpu":2,"maxdisk":34359738368,"maxmem":5242880000,"mem":0,"name":"ubuntu-dev","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"template","template":1,"type":"qemu","uptime":0,"vmid":104},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/105","maxcpu":4,"maxdisk":34359738368,"maxmem":16777216000,"mem":0,"name":"dev","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":105},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/106","maxcpu":2,"maxdisk":34359738368,"maxmem":4294967296,"mem":0,"name":"Ansible-Control","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,
"type":"qemu","uptime":0,"vmid":106},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/107","maxcpu":2,"maxdisk":53687091200,"maxmem":4294967296,"mem":0,"name":"ubuntu-docker","netin":0,"netout":0,"node":"serviceslab","status":"stopped","tags":"template","template":1,"type":"qemu","uptime":0,"vmid":107},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/108","maxcpu":4,"maxdisk":107374182400,"maxmem":33554432000,"mem":0,"name":"CML","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":108},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/109","maxcpu":1,"maxdisk":34359738368,"maxmem":2147483648,"mem":0,"name":"web-server-01","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":109},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/110","maxcpu":1,"maxdisk":34359738368,"maxmem":4294967296,"mem":0,"name":"web-server-02","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":110},{"cpu":0,"disk":0,"diskread":0,"diskwrite":0,"id":"qemu/111","maxcpu":1,"maxdisk":34359738368,"maxmem":4294967296,"mem":0,"name":"db-server-01","netin":0,"netout":0,"node":"serviceslab","status":"stopped","template":0,"type":"qemu","uptime":0,"vmid":111},{"cpu":0.00178304112404409,"disk":488112128,"diskread":0,"diskwrite":114688,"id":"lxc/112","maxcpu":1,"maxdisk":3221225472,"maxmem":1073741824,"mem":52203520,"name":"twingate-connector","netin":156009188,"netout":10896198,"node":"serviceslab","status":"running","tags":"community-script;connector;network;twingate","template":0,"type":"lxc","uptime":10756,"vmid":112},{"cpu":0.000396231360898687,"disk":2300313600,"diskread":0,"diskwrite":114688,"id":"lxc/113","maxcpu":2,"maxdisk":21474836480,"maxmem":4294967296,"mem":529104896,"name":"n8n","netin":2103919448,"netout":34073042,"node":"serviceslab","status":"running","template":0,"type":"lxc","uptime":601793,"vmid":113}]


@@ -1,4 +1,5 @@
 VMID Status Lock Name
 102 running nginx
 103 stopped netbox
+112 running twingate-connector
 113 running n8n


@@ -1,10 +1,11 @@
 VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
 100 docker-hub running 8200 100.00 1370101
+101 monitoring-docker running 4096 50.00 3956419
 104 ubuntu-dev stopped 5000 32.00 0
 105 dev stopped 16000 32.00 0
-106 Ansible-Control running 4096 32.00 1020188
+106 Ansible-Control stopped 4096 32.00 0
 107 ubuntu-docker stopped 4096 50.00 0
 108 CML stopped 32000 100.00 0
-109 web-server-01 running 2048 32.00 1124720
+109 web-server-01 stopped 2048 32.00 0
-110 web-server-02 running 4096 32.00 1159023
+110 web-server-02 stopped 4096 32.00 0
-111 db-server-01 running 4096 32.00 1165739
+111 db-server-01 stopped 4096 32.00 0


@@ -0,0 +1 @@
Sun Dec 7 12:00:41 PM MST 2025


@@ -1,16 +1,17 @@
 Filesystem Size Used Avail Use% Mounted on
 udev 87G 0 87G 0% /dev
-tmpfs 18G 4.7M 18G 1% /run
+tmpfs 18G 3.6M 18G 1% /run
-/dev/mapper/pve-root 43G 6.4G 35G 16% /
+/dev/mapper/pve-root 43G 6.5G 35G 16% /
 tmpfs 87G 46M 87G 1% /dev/shm
 tmpfs 5.0M 0 5.0M 0% /run/lock
 efivarfs 64K 39K 21K 66% /sys/firmware/efi/efivars
 /dev/sda2 1022M 12M 1011M 2% /boot/efi
 Vault 3.8T 128K 3.8T 1% /Vault
-Vault/subvol-102-disk-0 2.0G 721M 1.3G 36% /Vault/subvol-102-disk-0
+Vault/subvol-102-disk-0 2.0G 722M 1.3G 36% /Vault/subvol-102-disk-0
 Vault/subvol-103-disk-0 4.0G 1.7G 2.4G 43% /Vault/subvol-103-disk-0
-/dev/fuse 128M 24K 128M 1% /etc/pve
+/dev/fuse 128M 28K 128M 1% /etc/pve
-192.168.2.150:/mnt/Vauly/iso-vault 3.1T 45G 3.1T 2% /mnt/pve/iso-share
+192.168.2.150:/mnt/Vauly/iso-vault 3.1T 45G 3.0T 2% /mnt/pve/iso-share
-192.168.2.150:/mnt/Vauly/anytype 3.1T 0 3.1T 0% /mnt/pve/anytype
+192.168.2.150:/mnt/Vauly/anytype 3.0T 0 3.0T 0% /mnt/pve/anytype
 Vault/subvol-113-disk-0 20G 2.2G 18G 11% /Vault/subvol-113-disk-0
+Vault/subvol-112-disk-0 3.0G 466M 2.6G 16% /Vault/subvol-112-disk-0
 tmpfs 18G 0 18G 0% /run/user/0


@@ -52,38 +52,6 @@
 link/ether ba:3a:c1:aa:10:50 brd ff:ff:ff:ff:ff:ff
 44: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
 link/ether 06:d4:ea:b0:f6:d7 brd ff:ff:ff:ff:ff:ff
-54: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr106i0 state UNKNOWN group default qlen 1000
-link/ether 86:77:e4:f6:85:ad brd ff:ff:ff:ff:ff:ff
-55: fwbr106i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
-link/ether 5e:06:2d:be:20:c3 brd ff:ff:ff:ff:ff:ff
-56: fwpr106p0@fwln106i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
-link/ether 22:57:54:82:7c:8d brd ff:ff:ff:ff:ff:ff
-57: fwln106i0@fwpr106p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr106i0 state UP group default qlen 1000
-link/ether 5e:06:2d:be:20:c3 brd ff:ff:ff:ff:ff:ff
-74: tap109i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr109i0 state UNKNOWN group default qlen 1000
-link/ether 96:8b:b9:f5:70:bc brd ff:ff:ff:ff:ff:ff
-75: fwbr109i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
-link/ether 12:0a:af:36:77:84 brd ff:ff:ff:ff:ff:ff
-76: fwpr109p0@fwln109i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
-link/ether 62:5d:ea:2f:8e:6a brd ff:ff:ff:ff:ff:ff
-77: fwln109i0@fwpr109p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr109i0 state UP group default qlen 1000
-link/ether 12:0a:af:36:77:84 brd ff:ff:ff:ff:ff:ff
-78: tap110i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr110i0 state UNKNOWN group default qlen 1000
-link/ether 62:90:76:ad:7f:7a brd ff:ff:ff:ff:ff:ff
-79: fwbr110i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
-link/ether 86:52:66:ba:37:7c brd ff:ff:ff:ff:ff:ff
-80: fwpr110p0@fwln110i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
-link/ether 02:14:10:45:0c:37 brd ff:ff:ff:ff:ff:ff
-81: fwln110i0@fwpr110p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr110i0 state UP group default qlen 1000
-link/ether 86:52:66:ba:37:7c brd ff:ff:ff:ff:ff:ff
-82: tap111i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr111i0 state UNKNOWN group default qlen 1000
-link/ether 12:9c:5b:86:20:37 brd ff:ff:ff:ff:ff:ff
-83: fwbr111i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
-link/ether 82:e3:73:ed:a5:38 brd ff:ff:ff:ff:ff:ff
-84: fwpr111p0@fwln111i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
-link/ether da:c8:08:78:66:ed brd ff:ff:ff:ff:ff:ff
-85: fwln111i0@fwpr111p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr111i0 state UP group default qlen 1000
-link/ether 82:e3:73:ed:a5:38 brd ff:ff:ff:ff:ff:ff
 98: veth113i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr113i0 state UP group default qlen 1000
 link/ether fe:70:23:4c:19:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 1
 99: fwbr113i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
@@ -92,3 +60,13 @@
 link/ether f6:b3:32:40:56:71 brd ff:ff:ff:ff:ff:ff
 101: fwln113i0@fwpr113p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr113i0 state UP group default qlen 1000
 link/ether 02:a5:f8:57:c2:8b brd ff:ff:ff:ff:ff:ff
+110: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
+link/ether 36:9c:79:1f:d7:93 brd ff:ff:ff:ff:ff:ff
+111: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
+link/ether 0e:f9:19:4e:c9:6f brd ff:ff:ff:ff:ff:ff
+112: fwpr101p0@fwln101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
+link/ether fe:81:a9:d2:9b:2d brd ff:ff:ff:ff:ff:ff
+113: fwln101i0@fwpr101p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr101i0 state UP group default qlen 1000
+link/ether 0e:f9:19:4e:c9:6f brd ff:ff:ff:ff:ff:ff
+114: veth112i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
+link/ether fe:2a:fe:84:b7:86 brd ff:ff:ff:ff:ff:ff link-netnsid 2


@@ -26,6 +26,7 @@ zd16 230:16 0 100G 0 disk
 ├─zd16p1 230:17 0 1M 0 part
 └─zd16p2 230:18 0 100G 0 part
 zd32 230:32 0 4M 0 disk
+zd48 230:48 0 4M 0 disk
 zd64 230:64 0 50G 0 disk
 ├─zd64p1 230:65 0 1M 0 part
 └─zd64p2 230:66 0 50G 0 part
@@ -36,6 +37,20 @@ zd96 230:96 0 32G 0 disk
 ├─zd96p1 230:97 0 1M 0 part
 └─zd96p2 230:98 0 32G 0 part
 zd112 230:112 0 32G 0 disk
+├─zd112p1 230:113 0 1M 0 part
+└─zd112p2 230:114 0 32G 0 part
 zd128 230:128 0 32G 0 disk
+├─zd128p1 230:129 0 300M 0 part
+├─zd128p2 230:130 0 3.9G 0 part
+└─zd128p3 230:131 0 27.8G 0 part
 zd144 230:144 0 32G 0 disk
+├─zd144p1 230:145 0 1M 0 part
+├─zd144p2 230:146 0 2G 0 part
+└─zd144p3 230:147 0 30G 0 part
 zd160 230:160 0 32G 0 disk
+├─zd160p1 230:161 0 1M 0 part
+├─zd160p2 230:162 0 2G 0 part
+└─zd160p3 230:163 0 30G 0 part
+zd176 230:176 0 50G 0 disk
+├─zd176p1 230:177 0 1M 0 part
+└─zd176p2 230:178 0 50G 0 part


@@ -1,3 +1,3 @@
 total used free shared buff/cache available
-Mem: 173Gi 76Gi 71Gi 103Mi 25Gi 96Gi
+Mem: 173Gi 80Gi 67Gi 102Mi 26Gi 92Gi
 Swap: 8.0Gi 0B 8.0Gi


@@ -5,7 +5,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 1943.100
+cpu MHz : 3229.042
 cache size : 12288 KB
 physical id : 1
 siblings : 12
@@ -33,7 +33,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 2437.923
+cpu MHz : 1997.944
 cache size : 12288 KB
 physical id : 0
 siblings : 12
@@ -89,7 +89,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 3191.160
+cpu MHz : 2925.820
 cache size : 12288 KB
 physical id : 0
 siblings : 12
@@ -145,7 +145,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 3191.651
+cpu MHz : 2925.820
 cache size : 12288 KB
 physical id : 0
 siblings : 12
@@ -173,7 +173,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 1601.008
+cpu MHz : 2925.820
 cache size : 12288 KB
 physical id : 1
 siblings : 12
@@ -201,7 +201,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 3090.356
+cpu MHz : 1598.267
 cache size : 12288 KB
 physical id : 0
 siblings : 12
@@ -229,7 +229,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 2566.098
+cpu MHz : 2925.820
 cache size : 12288 KB
 physical id : 1
 siblings : 12
@@ -257,7 +257,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 3221.735
+cpu MHz : 1597.709
 cache size : 12288 KB
 physical id : 0
 siblings : 12
@@ -313,7 +313,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 2925.820
+cpu MHz : 2784.113
 cache size : 12288 KB
 physical id : 0
 siblings : 12
@@ -369,7 +369,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 2925.820
+cpu MHz : 3214.111
 cache size : 12288 KB
 physical id : 0
 siblings : 12
@@ -397,7 +397,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 1597.742
+cpu MHz : 2925.820
 cache size : 12288 KB
 physical id : 1
 siblings : 12
@@ -453,7 +453,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 1598.649
+cpu MHz : 2252.346
 cache size : 12288 KB
 physical id : 1
 siblings : 12
@@ -509,7 +509,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 3015.939
+cpu MHz : 2925.820
 cache size : 12288 KB
 physical id : 1
 siblings : 12
@@ -537,7 +537,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 2925.820
+cpu MHz : 1756.832
 cache size : 12288 KB
 physical id : 0
 siblings : 12
@@ -565,7 +565,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 2925.820
+cpu MHz : 3191.556
 cache size : 12288 KB
 physical id : 1
 siblings : 12
@@ -621,7 +621,7 @@ model : 44
 model name : Intel(R) Xeon(R) CPU X5670 @ 2.93GHz
 stepping : 2
 microcode : 0x1f
-cpu MHz : 2925.820
+cpu MHz : 1845.241
 cache size : 12288 KB
 physical id : 1
 siblings : 12


@@ -1,44 +1,44 @@
 MemTotal: 181528356 kB
-MemFree: 75114964 kB
+MemFree: 70582004 kB
-MemAvailable: 100892388 kB
+MemAvailable: 96684740 kB
-Buffers: 286508 kB
+Buffers: 287032 kB
-Cached: 23702512 kB
+Cached: 23946144 kB
 SwapCached: 0 kB
-Active: 21658520 kB
+Active: 15532828 kB
-Inactive: 22755424 kB
+Inactive: 22901216 kB
-Active(anon): 20523992 kB
+Active(anon): 14298292 kB
 Inactive(anon): 0 kB
-Active(file): 1134528 kB
+Active(file): 1234536 kB
-Inactive(file): 22755424 kB
+Inactive(file): 22901216 kB
 Unevictable: 30536 kB
 Mlocked: 25416 kB
 SwapTotal: 8388604 kB
 SwapFree: 8388604 kB
 Zswap: 0 kB
 Zswapped: 0 kB
-Dirty: 1704 kB
+Dirty: 1360 kB
 Writeback: 0 kB
-AnonPages: 20455468 kB
+AnonPages: 14231404 kB
-Mapped: 415160 kB
+Mapped: 461320 kB
-Shmem: 105696 kB
+Shmem: 105136 kB
-KReclaimable: 3213592 kB
+KReclaimable: 3293104 kB
-Slab: 5329888 kB
+Slab: 5957828 kB
-SReclaimable: 3213592 kB
+SReclaimable: 3293104 kB
-SUnreclaim: 2116296 kB
+SUnreclaim: 2664724 kB
-KernelStack: 12096 kB
+KernelStack: 12176 kB
-PageTables: 69952 kB
+PageTables: 56172 kB
-SecPageTables: 12776 kB
+SecPageTables: 11892 kB
 NFS_Unstable: 0 kB
 Bounce: 0 kB
 WritebackTmp: 0 kB
 CommitLimit: 99152780 kB
-Committed_AS: 29996872 kB
+Committed_AS: 18033820 kB
 VmallocTotal: 34359738367 kB
-VmallocUsed: 1868488 kB
+VmallocUsed: 2642056 kB
 VmallocChunk: 0 kB
-Percpu: 51840 kB
+Percpu: 53856 kB
 HardwareCorrupted: 0 kB
-AnonHugePages: 18647040 kB
+AnonHugePages: 12511232 kB
 ShmemHugePages: 0 kB
 ShmemPmdMapped: 0 kB
 FileHugePages: 0 kB


@@ -5,13 +5,13 @@ udp UNCONN 0 0 [::]:111 [::]:* users:(("rpcbin
 udp UNCONN 0 0 [::1]:323 [::]:* users:(("chronyd",pid=1485,fd=6))
 tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1481,fd=3))
 tcp LISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=1249,fd=4),("systemd",pid=1,fd=89))
-tcp LISTEN 0 4096 127.0.0.1:85 0.0.0.0:* users:(("pvedaemon worke",pid=3144344,fd=6),("pvedaemon worke",pid=3135828,fd=6),("pvedaemon worke",pid=1932152,fd=6),("pvedaemon",pid=1918,fd=6))
+tcp LISTEN 0 4096 127.0.0.1:85 0.0.0.0:* users:(("pvedaemon worke",pid=2066260,fd=6),("pvedaemon worke",pid=2061273,fd=6),("pvedaemon worke",pid=2059558,fd=6),("pvedaemon",pid=1918,fd=6))
 tcp LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=1680,fd=13))
 tcp LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=1680,fd=14))
-tcp LISTEN 0 4096 *:8006 *:* users:(("pveproxy worker",pid=3312091,fd=6),("pveproxy worker",pid=3294452,fd=6),("pveproxy worker",pid=3270004,fd=6),("pveproxy",pid=1927,fd=6))
+tcp LISTEN 0 4096 *:8006 *:* users:(("pveproxy worker",pid=2104704,fd=6),("pveproxy worker",pid=2089989,fd=6),("pveproxy worker",pid=2079540,fd=6),("pveproxy",pid=1927,fd=6))
 tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1481,fd=4))
 tcp LISTEN 0 4096 [::]:111 [::]:* users:(("rpcbind",pid=1249,fd=6),("systemd",pid=1,fd=91))
-tcp LISTEN 0 4096 *:3128 *:* users:(("spiceproxy work",pid=2122012,fd=6),("spiceproxy",pid=1933,fd=6))
+tcp LISTEN 0 4096 *:3128 *:* users:(("spiceproxy work",pid=1781025,fd=6),("spiceproxy",pid=1933,fd=6))
 tcp LISTEN 0 4096 *:9080 *:* users:(("promtail",pid=1424,fd=7))
 tcp LISTEN 0 4096 *:33683 *:* users:(("promtail",pid=1424,fd=8))
-tcp LISTEN 0 4096 *:45876 *:* users:(("beszel-agent",pid=741889,fd=8))
+tcp LISTEN 0 4096 *:45876 *:* users:(("beszel-agent",pid=3442072,fd=8))


@@ -0,0 +1 @@
12:00:41 up 76 days, 16:05, 3 users, load average: 0.29, 0.24, 0.32

monitoring/README.md Normal file

@@ -0,0 +1,755 @@
# Monitoring Stack
Comprehensive monitoring and observability stack for the Proxmox homelab environment, providing real-time metrics, visualization, and alerting capabilities.
## Overview
The monitoring stack consists of three primary components deployed on VM 101 (monitoring-docker) at 192.168.2.114:
- **Grafana**: Visualization and dashboards (Port 3000)
- **Prometheus**: Metrics collection and time-series database (Port 9090)
- **PVE Exporter**: Proxmox VE metrics exporter (Port 9221)
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ Proxmox Host (serviceslab) │
│ 192.168.2.200 │
└────────────────────────────┬────────────────────────────────────┘
│ API (8006)
┌────────▼────────┐
│ PVE Exporter │
│ Port: 9221 │
│ (VM 101) │
└────────┬────────┘
│ Metrics
┌────────▼────────┐
│ Prometheus │
│ Port: 9090 │
│ (VM 101) │
└────────┬────────┘
│ Query
┌────────▼────────┐
│ Grafana │
│ Port: 3000 │
│ (VM 101) │
└─────────────────┘
│ HTTPS
┌────────▼────────┐
│ Nginx Proxy │
│ (CT 102) │
│ 192.168.2.101 │
└─────────────────┘
```
## Components
### VM 101: monitoring-docker
**Specifications**:
- **IP Address**: 192.168.2.114
- **Operating System**: Ubuntu 22.04/24.04 LTS
- **Docker Version**: 24.0+
- **Purpose**: Dedicated monitoring infrastructure host
**Resource Allocation**:
- **CPU**: 2-4 cores
- **Memory**: 4-8 GB
- **Storage**: 50-100 GB (thin provisioned)
### Grafana
**Version**: Latest stable
**Port**: 3000
**Access**: http://192.168.2.114:3000
**Features**:
- Pre-configured Proxmox VE dashboards
- Prometheus data source integration
- User authentication and authorization
- Dashboard templating and variables
- Alerting capabilities
- Panel plugins for advanced visualizations
**Default Credentials**:
- Username: `admin`
- Password: Check `.env` file or initial setup
**Key Dashboards**:
- Proxmox Host Overview
- VM Resource Utilization
- Container Resource Utilization
- Storage Pool Metrics
- Network Traffic Analysis
### Prometheus
**Version**: Latest stable
**Port**: 9090
**Access**: http://192.168.2.114:9090
**Configuration**: `/home/jramos/homelab/monitoring/prometheus/prometheus.yml`
**Scrape Targets**:
```yaml
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'pve'
static_configs:
- targets: ['pve-exporter:9221']
metrics_path: /pve
params:
module: [default]
```
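Note that `prometheus-pve-exporter` normally receives the Proxmox node as a `?target=` query parameter on each scrape. If the job above comes back empty, a relabeling variant along the lines of the exporter's upstream examples can pass the node address explicitly (the node IP below is this lab's Proxmox host; this snippet is a sketch, not the config committed in this repo):
```yaml
scrape_configs:
  - job_name: 'pve'
    static_configs:
      - targets: ['192.168.2.200']    # Proxmox node(s) to monitor
    metrics_path: /pve
    params:
      module: [default]
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target  # node IP becomes the ?target= parameter
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: pve-exporter:9221  # the actual HTTP scrape hits the exporter
```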
**Features**:
- Time-series metrics database
- PromQL query language
- Service discovery
- Alert manager integration (configurable)
- Data retention policies
- Remote storage support
**Retention Policy**: 15 days (configurable via command line args)
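The 15-day default can be pinned explicitly in the Prometheus compose file. A minimal sketch (service, volume, and file names are assumptions, not copied from the repo's actual `docker-compose.yml`):
```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=15d'   # adjust to taste
    ports:
      - '9090:9090'
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus           # persist the TSDB across restarts

volumes:
  prometheus-data:
```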
### PVE Exporter
**Version**: prompve/prometheus-pve-exporter:latest
**Port**: 9221
**Access**: http://192.168.2.114:9221
**Configuration**:
- File: `/home/jramos/homelab/monitoring/pve-exporter/pve.yml`
- Environment: `/home/jramos/homelab/monitoring/pve-exporter/.env`
**Proxmox Connection**:
```yaml
default:
user: monitoring@pve
password: <stored in .env>
verify_ssl: false
```
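As an alternative to a stored password, `prometheus-pve-exporter` also accepts Proxmox API tokens in `pve.yml`, which avoids keeping the account password on disk (token name and value below are placeholders):
```yaml
default:
  user: monitoring@pve
  token_name: exporter          # created with: pveum user token add monitoring@pve exporter
  token_value: <token secret>   # placeholder; store the real value outside git
  verify_ssl: false
```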
**Metrics Exported**:
- Proxmox cluster status
- Node CPU, memory, disk usage
- VM/CT status and resource usage
- Storage pool utilization
- Network interface statistics
- Backup job status
- Service health
**Environment Variables**:
- `PVE_USER`: Proxmox API user (typically `monitoring@pve`)
- `PVE_PASSWORD`: API user password
- `PVE_VERIFY_SSL`: SSL verification (false for self-signed certs)
## Deployment
### Prerequisites
1. **VM 101 Setup**:
```bash
# Install Docker and Docker Compose
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Verify installation
docker --version
docker compose version
```
2. **Proxmox API User**:
```bash
# On Proxmox host, create monitoring user
pveum user add monitoring@pve
pveum passwd monitoring@pve
pveum aclmod / -user monitoring@pve -role PVEAuditor
```
3. **Clone Repository**:
```bash
cd /home/jramos
git clone <repository-url> homelab
cd homelab/monitoring
```
### Configuration
1. **PVE Exporter Environment**:
```bash
cd pve-exporter
nano .env
```
Add:
```env
PVE_USER=monitoring@pve
PVE_PASSWORD=your-secure-password
PVE_VERIFY_SSL=false
```
2. **Verify Configuration Files**:
```bash
# Check PVE exporter config
cat pve-exporter/pve.yml
# Check Prometheus config
cat prometheus/prometheus.yml
```
### Deployment Steps
1. **Deploy PVE Exporter**:
```bash
cd /home/jramos/homelab/monitoring/pve-exporter
docker compose up -d
docker compose logs -f
```
2. **Deploy Prometheus**:
```bash
cd /home/jramos/homelab/monitoring/prometheus
docker compose up -d
docker compose logs -f
```
3. **Deploy Grafana**:
```bash
cd /home/jramos/homelab/monitoring/grafana
docker compose up -d
docker compose logs -f
```
4. **Verify All Services**:
```bash
# Check running containers
docker ps
# Test PVE Exporter
curl "http://192.168.2.114:9221/pve?target=192.168.2.200&module=default"  # quote the URL so & is not treated as a shell operator
# Test Prometheus
curl http://192.168.2.114:9090/-/healthy
# Test Grafana
curl http://192.168.2.114:3000/api/health
```
### Initial Grafana Setup
1. **Access Grafana**:
- Navigate to http://192.168.2.114:3000
- Login with default credentials (admin/admin)
- Change password when prompted
2. **Add Prometheus Data Source**:
- Go to Configuration → Data Sources
- Click "Add data source"
- Select "Prometheus"
- URL: `http://prometheus:9090`
- Click "Save & Test"
3. **Import Proxmox Dashboard**:
- Go to Dashboards → Import
- Dashboard ID: 10347 (Proxmox VE)
- Select Prometheus data source
- Click "Import"
4. **Configure Alerting** (Optional):
- Go to Alerting → Notification channels
- Add email, Slack, or other notification methods
- Create alert rules in dashboards
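Steps 2 and 3 above can also be captured as code so a rebuilt Grafana container comes up pre-wired. A sketch of a data source provisioning file, mounted into the container at `/etc/grafana/provisioning/datasources/` (file name assumed):
```yaml
# datasources.yml - Grafana data source provisioning
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                  # Grafana proxies queries server-side
    url: http://prometheus:9090    # container name on the shared Docker network
    isDefault: true
```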
## Network Configuration
### Internal Access
All services are accessible within the homelab network:
- **Grafana**: http://192.168.2.114:3000
- **Prometheus**: http://192.168.2.114:9090
- **PVE Exporter**: http://192.168.2.114:9221
### External Access (via Nginx Proxy Manager)
Configure reverse proxy on CT 102 (nginx at 192.168.2.101):
1. **Create Proxy Host**:
- Domain: `monitoring.yourdomain.com`
- Scheme: `http`
- Forward Hostname: `192.168.2.114`
- Forward Port: `3000`
2. **SSL Configuration**:
- Enable "Force SSL"
- Request Let's Encrypt certificate
- Enable HTTP/2
3. **Access List** (Optional):
- Create access list for authentication
- Apply to proxy host for additional security
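For reference, if plain nginx were used instead of the Nginx Proxy Manager UI, the equivalent proxy host is roughly the following server block (domain and certificate paths are placeholders; the `Upgrade` headers keep Grafana's websocket features working behind the proxy):
```nginx
server {
    listen 443 ssl http2;
    server_name monitoring.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/monitoring.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/monitoring.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://192.168.2.114:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # websocket support for Grafana Live
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```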
## Maintenance
### Update Services
```bash
# Update all monitoring services
cd /home/jramos/homelab/monitoring
# Update PVE Exporter
cd pve-exporter
docker compose pull
docker compose up -d
# Update Prometheus
cd ../prometheus
docker compose pull
docker compose up -d
# Update Grafana
cd ../grafana
docker compose pull
docker compose up -d
```
### Backup Grafana Dashboards
```bash
# Backup Grafana data
docker exec -t grafana tar czf - /var/lib/grafana > grafana-backup-$(date +%Y%m%d).tar.gz
# Or use Grafana's provisioning
# Dashboards can be exported as JSON and stored in git
```
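To run the backup unattended, a weekly crontab entry is enough; note that `%` must be escaped in crontab entries (destination path is an assumption):
```bash
# crontab -e on VM 101: Sundays at 03:00
0 3 * * 0 docker exec grafana tar czf - /var/lib/grafana > /backup/grafana-$(date +\%Y\%m\%d).tar.gz
```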
### Prometheus Data Retention
```bash
# Check Prometheus storage size
docker exec prometheus du -sh /prometheus
# Adjust retention in docker-compose.yml:
# command:
# - '--storage.tsdb.retention.time=30d'
# - '--storage.tsdb.retention.size=50GB'
```
### View Logs
```bash
# PVE Exporter logs
cd /home/jramos/homelab/monitoring/pve-exporter
docker compose logs -f
# Prometheus logs
cd /home/jramos/homelab/monitoring/prometheus
docker compose logs -f
# Grafana logs
cd /home/jramos/homelab/monitoring/grafana
docker compose logs -f
# All logs together
docker logs -f pve-exporter
docker logs -f prometheus
docker logs -f grafana
```
## Troubleshooting
### PVE Exporter Cannot Connect to Proxmox
**Symptoms**: No metrics from Proxmox, connection refused errors
**Solutions**:
1. Verify Proxmox API is accessible:
```bash
curl -k https://192.168.2.200:8006/api2/json/version
```
2. Check PVE Exporter environment variables:
```bash
cd /home/jramos/homelab/monitoring/pve-exporter
cat .env
docker compose config
```
3. Test authentication:
```bash
# From VM 101
curl -k -d "username=monitoring@pve&password=yourpassword" \
https://192.168.2.200:8006/api2/json/access/ticket
```
4. Verify user permissions on Proxmox:
```bash
# On Proxmox host
pveum user list
pveum aclmod / -user monitoring@pve -role PVEAuditor
```
### Prometheus Not Scraping Targets
**Symptoms**: Targets shown as down in Prometheus UI
**Solutions**:
1. Check Prometheus targets:
- Navigate to http://192.168.2.114:9090/targets
- Verify target status and error messages
2. Verify network connectivity (the `prom/prometheus` image is busybox-based, so use `wget` rather than `curl`; because each component runs in its own compose project, reach the exporter via the host IP):
```bash
docker exec prometheus wget -qO- 'http://192.168.2.114:9221/pve?target=192.168.2.100'
```
3. Check Prometheus configuration:
```bash
cd /home/jramos/homelab/monitoring/prometheus
docker compose exec prometheus promtool check config /etc/prometheus/prometheus.yml
```
4. Reload Prometheus configuration:
```bash
docker compose restart prometheus
# Or, without a restart, if Prometheus runs with --web.enable-lifecycle:
# curl -X POST http://192.168.2.114:9090/-/reload
```
### Grafana Shows No Data
**Symptoms**: Dashboards display "No data" or empty graphs
**Solutions**:
1. Verify Prometheus data source:
- Go to Configuration → Data Sources
- Test connection to Prometheus
   - URL should be `http://prometheus:9090` if Grafana shares a Docker network with Prometheus, or `http://192.168.2.114:9090` otherwise
2. Check Prometheus has data:
- Navigate to http://192.168.2.114:9090
- Run query: `up`
- Should show all scrape targets
3. Verify dashboard queries:
- Edit panel
- Check PromQL query syntax
- Test query in Prometheus UI first
4. Check time range:
- Ensure dashboard time range includes recent data
- Prometheus retention period not exceeded
### Docker Compose Network Issues
**Symptoms**: Containers cannot communicate
**Solutions**:
1. Check Docker networks (each compose project creates its own `<project>_default` network):
```bash
docker network ls
docker network inspect prometheus_default
```
2. Verify container connectivity (container names do not resolve across separate compose networks, so test via the host IP):
```bash
docker exec prometheus wget -qO- 'http://192.168.2.114:9221/pve?target=192.168.2.100'
docker exec grafana wget -qO- http://192.168.2.114:9090/-/healthy
```
3. Recreate networks (repeat per component directory):
```bash
cd /home/jramos/homelab/monitoring/<component>
docker compose down
docker network prune
docker compose up -d
```
### High Memory Usage
**Symptoms**: VM 101 running out of memory
**Solutions**:
1. Check container memory usage:
```bash
docker stats
```
2. Reduce Prometheus retention:
```yaml
# In prometheus/docker-compose.yml
command:
  - '--storage.tsdb.retention.time=7d'
  - '--storage.tsdb.retention.size=10GB'
```
3. Disable Grafana remote image rendering (leave the rendering variables unset or empty):
```yaml
# In grafana/docker-compose.yml
environment:
  - GF_RENDERING_SERVER_URL=
  - GF_RENDERING_CALLBACK_URL=
```
4. Increase VM memory allocation in Proxmox
### SSL/TLS Certificate Errors
**Symptoms**: PVE Exporter cannot verify SSL certificate
**Solutions**:
1. Set `verify_ssl: false` in `pve.yml` (for self-signed certs)
2. Or import Proxmox CA certificate:
```bash
# Copy CA from Proxmox to VM 101
scp root@192.168.2.200:/etc/pve/pve-root-ca.pem .
# Add to trust store
sudo cp pve-root-ca.pem /usr/local/share/ca-certificates/pve-root-ca.crt
sudo update-ca-certificates
```
## Metrics Reference
### Key Proxmox Metrics
PVE Exporter distinguishes nodes, guests, and storage via the `id` label (`node/<name>`, `qemu/<vmid>`, `lxc/<vmid>`, `storage/<node>/<name>`). Verify the exact series names against the exporter's output at http://192.168.2.114:9221/pve?target=192.168.2.100.
**Node Metrics** (`id=~"node/.+"`):
- `pve_up`: Node reachability (1 = up)
- `pve_cpu_usage_ratio`: CPU utilization (0-1)
- `pve_memory_usage_bytes`: Memory used
- `pve_memory_size_bytes`: Total memory
- `pve_disk_usage_bytes`: Root disk used
- `pve_uptime_seconds`: Node uptime
**VM/CT Metrics** (`id=~"qemu/.+"` or `id=~"lxc/.+"`):
- `pve_guest_info`: Guest information (labels: id, name, type, node)
- `pve_cpu_usage_ratio`: Guest CPU usage
- `pve_memory_usage_bytes`: Guest memory used
- `pve_disk_read_bytes` / `pve_disk_write_bytes`: Cumulative disk I/O
- `pve_network_receive_bytes` / `pve_network_transmit_bytes`: Cumulative network traffic
**Storage Metrics** (`id=~"storage/.+"`):
- `pve_disk_usage_bytes`: Storage used
- `pve_disk_size_bytes`: Total storage size
- `pve_storage_info`: Storage information (labels: id, node, storage)
### Useful PromQL Queries
**CPU Usage by VM**:
```promql
pve_cpu_usage_ratio{id=~"qemu/.+"} * 100
```
**Memory Usage Percentage**:
```promql
pve_memory_usage_bytes{id=~"qemu/.+"} / pve_memory_size_bytes{id=~"qemu/.+"} * 100
```
**Storage Usage Percentage**:
```promql
pve_disk_usage_bytes{id=~"storage/.+"} / pve_disk_size_bytes{id=~"storage/.+"} * 100
```
**Network Bandwidth (rate)**:
```promql
rate(pve_network_transmit_bytes{id=~"qemu/.+"}[5m])
```
**Top 5 VMs by CPU**:
```promql
topk(5, pve_cpu_usage_ratio{id=~"qemu/.+"})
```
## Security Considerations
### API Credentials
1. **PVE Exporter `.env` file**:
- Never commit to version control
- Use strong passwords
- Restrict file permissions: `chmod 600 .env`
2. **Proxmox API User**:
- Use dedicated monitoring user
- Grant minimal required permissions (PVEAuditor role)
- Consider token-based authentication
3. **Grafana Authentication**:
- Change default admin password
- Enable OAuth/LDAP for user authentication
- Use role-based access control
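Token-based authentication (item 2 above) avoids storing the account password; a sketch, assuming a token created on the Proxmox host with `pveum user token add monitoring@pve exporter --privsep 0` (the token name `exporter` is illustrative):

```yaml
# In pve.yml — token credentials replace the password field
default:
  user: monitoring@pve
  token_name: exporter
  token_value: <uuid printed by pveum user token add>
  verify_ssl: false
```

With `--privsep 0` the token inherits the user's PVEAuditor permissions; otherwise the token needs its own ACL entries.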
### Network Security
1. **Firewall Rules**:
```bash
# On VM 101, restrict access
ufw allow from 192.168.2.0/24 to any port 3000
ufw allow from 192.168.2.0/24 to any port 9090
ufw allow from 192.168.2.0/24 to any port 9221
```
2. **Reverse Proxy**:
- Use Nginx Proxy Manager for SSL termination
- Implement access lists
- Enable fail2ban for brute force protection
3. **Docker Security**:
- Run containers as non-root users
- Use read-only filesystems where possible
- Limit container capabilities
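The points above can be sketched as compose options (illustrative values; Grafana's default UID is 472, and dropping all capabilities may need per-service exceptions):

```yaml
# Hardening sketch for grafana/docker-compose.yml
services:
  grafana:
    user: "472:472"
    cap_drop:
      - ALL
    security_opt:
      - no-new-privileges:true
```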
## Performance Tuning
### Prometheus Optimization
**Scrape Interval**:
```yaml
global:
  scrape_interval: 30s  # Increase for less frequent scraping
  evaluation_interval: 30s
```
**Metric Relabeling**:
```yaml
# Drop series no dashboard uses, e.g. per-guest disk I/O counters
metric_relabel_configs:
  - source_labels: [__name__]
    regex: 'pve_disk_(read|write)_bytes'
    action: drop
```
### Grafana Optimization
**Query Optimization**:
- Use recording rules in Prometheus for complex queries
- Set appropriate refresh intervals on dashboards
- Limit time range on expensive queries
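A recording-rule sketch for the first point (the file name `recording_rules.yml` and metric names are assumptions — match them to your exporter's output and list the file under `rule_files` in prometheus.yml):

```yaml
# recording_rules.yml
groups:
  - name: pve_recording
    interval: 60s
    rules:
      - record: id:pve_memory_usage:ratio
        expr: pve_memory_usage_bytes / pve_memory_size_bytes
```

Dashboards then query the precomputed `id:pve_memory_usage:ratio` series instead of re-evaluating the division on every refresh.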
**Caching** (data source query caching is a Grafana Enterprise feature):
```ini
# In grafana.ini or via environment variables
[caching]
enabled = true
```
## Advanced Configuration
### Alerting with Alertmanager
1. **Add Alertmanager to stack**:
```bash
cd /home/jramos/homelab/monitoring
# Create alertmanager directory with docker-compose.yml
```
2. **Configure alerts in Prometheus**:
```yaml
# In prometheus.yml (use the host IP, since Alertmanager runs in its own compose project)
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['192.168.2.114:9093']
rule_files:
  - 'alerts.yml'
```
3. **Example alert rules**:
```yaml
# alerts.yml
groups:
  - name: proxmox
    interval: 30s
    rules:
      - alert: HighCPUUsage
        expr: pve_cpu_usage_ratio{id=~"node/.+"} > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.id }}"
```
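For step 1, a minimal Alertmanager compose file might look like this (a sketch; image and port follow upstream defaults, the volume path is an assumption):

```yaml
# monitoring/alertmanager/docker-compose.yml
services:
  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    restart: unless-stopped
    ports:
      - "9093:9093"
    volumes:
      - ./alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
```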
### Multi-Node Proxmox Cluster
For clustered Proxmox environments:
```yaml
# In pve.yml
cluster1:
  user: monitoring@pve
  password: ${PVE_PASSWORD}
  verify_ssl: false
cluster2:
  user: monitoring@pve
  password: ${PVE_PASSWORD}
  verify_ssl: false
```
### Dashboard Provisioning
Store dashboards as code:
```bash
# Create provisioning directory
mkdir -p grafana/provisioning/dashboards
# Add provisioning config
# grafana/provisioning/dashboards/dashboards.yml
```
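The provisioning file itself might look like this (a sketch following Grafana's file-provider schema; the provider name and container path are assumptions):

```yaml
# grafana/provisioning/dashboards/dashboards.yml
apiVersion: 1
providers:
  - name: homelab
    type: file
    disableDeletion: false
    options:
      path: /etc/grafana/provisioning/dashboards
```

The `grafana/provisioning` directory then needs to be mounted at `/etc/grafana/provisioning` inside the Grafana container.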
## Integration with Other Services
### n8n Workflow Automation
Create workflows in n8n (CT 113) to:
- Send alerts to Slack/Discord based on Prometheus alerts
- Generate daily/weekly infrastructure reports
- Automate backup verification checks
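With Alertmanager in place, Prometheus alerts can reach n8n through a webhook receiver (a sketch; the webhook path is a hypothetical n8n Webhook-node URL on CT 113):

```yaml
# In alertmanager.yml
route:
  receiver: n8n
receivers:
  - name: n8n
    webhook_configs:
      - url: http://192.168.2.107:5678/webhook/prometheus-alerts
```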
### NetBox IPAM
Sync monitoring targets with NetBox (CT 103):
- Automatically discover new VMs/CTs
- Update service inventory
- Link metrics to network documentation
## Additional Resources
### Documentation
- [Prometheus Documentation](https://prometheus.io/docs/)
- [Grafana Documentation](https://grafana.com/docs/)
- [PVE Exporter GitHub](https://github.com/prometheus-pve/prometheus-pve-exporter)
- [Proxmox API Documentation](https://pve.proxmox.com/pve-docs/api-viewer/)
### Community Dashboards
- Grafana Dashboard 10347: Proxmox VE
- Grafana Dashboard 15356: Proxmox Cluster
- Grafana Dashboard 15362: Proxmox Summary
### Related Homelab Documentation
- [Homelab Overview](../README.md)
- [Services Documentation](../services/README.md)
- [Infrastructure Index](../INDEX.md)
- [n8n Setup Guide](../services/README.md#n8n-workflow-automation)
---
**Last Updated**: 2025-12-07
**Maintainer**: jramos
**VM**: 101 (monitoring-docker) at 192.168.2.114
**Stack Version**: Prometheus 2.x, Grafana 10.x, PVE Exporter latest

**File**: monitoring/grafana/docker-compose.yml

@@ -0,0 +1,9 @@
services:
  grafana:
    image: grafana/grafana-enterprise
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - "/home/server-admin/grafana/grafana-storage:/var/lib/grafana"

**File**: monitoring/prometheus/docker-compose.yml

@@ -0,0 +1,8 @@
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - "/home/server-admin/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml"
      - "/home/server-admin/prometheus/data:/prometheus"
    ports:
      - 9090:9090

**File**: monitoring/prometheus/prometheus.yml

@@ -0,0 +1,17 @@
scrape_configs:
  - job_name: 'pve'
    static_configs:
      - targets:
          - 192.168.2.100  # Proxmox VE node
    metrics_path: /pve
    params:
      module: [default]
      cluster: ['1']
      node: ['1']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 192.168.2.114:9221  # PVE Exporter address

**File**: monitoring/pve-exporter/.env.template

@@ -0,0 +1 @@
PVE_CONFIG_PATH=/path/to/your/pve.yml

**File**: monitoring/pve-exporter/docker-compose.yml

@@ -0,0 +1,14 @@
version: '3.8'
services:
  pve-exporter:
    image: prompve/prometheus-pve-exporter:latest
    container_name: pve-exporter
    ports:
      - "9221:9221"
    restart: unless-stopped
    volumes:
      - ${PVE_CONFIG_PATH}:/etc/prometheus/pve.yml:ro
    env_file:
      - .env
    labels:
      org.label-schema.group: "monitoring"

**File**: monitoring/pve-exporter/pve.yml.template

@@ -0,0 +1,4 @@
default:
  user: monitoring@pve
  password: YOUR_MONITORING_USER_PASSWORD_HERE
  verify_ssl: false

**File**: services/README.md

@@ -132,6 +132,205 @@ cd speedtest-tracker
docker compose up -d
```
## Monitoring Stack (VM-based)
**Deployment**: VM 101 (monitoring-docker) at 192.168.2.114
**Technology**: Docker Compose
**Components**: Grafana, Prometheus, PVE Exporter
### Overview
Comprehensive monitoring and observability stack for the Proxmox homelab environment providing real-time metrics, visualization, and alerting capabilities.
### Components
**Grafana** (Port 3000):
- Visualization and dashboards
- Pre-configured Proxmox VE dashboards
- User authentication and RBAC
- Alerting capabilities
- Access: http://192.168.2.114:3000
**Prometheus** (Port 9090):
- Metrics collection and time-series database
- PromQL query language
- 15-day retention (configurable)
- Service discovery
- Access: http://192.168.2.114:9090
**PVE Exporter** (Port 9221):
- Proxmox VE metrics exporter
- Connects to Proxmox API
- Exports node, VM, CT, and storage metrics
- Access: http://192.168.2.114:9221
### Key Features
- Real-time Proxmox infrastructure monitoring
- VM and container resource utilization tracking
- Storage pool capacity planning
- Network traffic analysis
- Backup job status monitoring
- Custom alerting rules
### Deployment
```bash
# Navigate to monitoring directory
cd /home/jramos/homelab/monitoring
# Deploy PVE Exporter
cd pve-exporter
docker compose up -d
# Deploy Prometheus
cd ../prometheus
docker compose up -d
# Deploy Grafana
cd ../grafana
docker compose up -d
# Verify all services
docker ps | grep -E 'grafana|prometheus|pve-exporter'
```
### Configuration
**PVE Exporter**:
- Environment file: `monitoring/pve-exporter/.env`
- Configuration: `monitoring/pve-exporter/pve.yml`
- Requires Proxmox API user with PVEAuditor role
**Prometheus**:
- Configuration: `monitoring/prometheus/prometheus.yml`
- Scrapes PVE Exporter every 30 seconds
- Targets: localhost:9090, pve-exporter:9221
**Grafana**:
- Default credentials: admin/admin (change on first login)
- Data source: Prometheus at `http://prometheus:9090` (use `http://192.168.2.114:9090` if Grafana and Prometheus are on separate Docker networks)
- Recommended dashboard: Grafana ID 10347 (Proxmox VE)
### Maintenance
```bash
# Update images
cd /home/jramos/homelab/monitoring/<component>
docker compose pull
docker compose up -d
# View logs
docker compose logs -f
# Restart services
docker compose restart
```
### Troubleshooting
**PVE Exporter connection issues**:
1. Verify Proxmox API is accessible: `curl -k https://192.168.2.200:8006`
2. Check credentials in `.env` file
3. Verify user has PVEAuditor role: `pveum user list` (on Proxmox)
**Grafana shows no data**:
1. Verify Prometheus data source configuration
2. Check Prometheus targets: http://192.168.2.114:9090/targets
3. Test queries in Prometheus UI before using in Grafana
**High memory usage**:
1. Reduce Prometheus retention period
2. Limit Grafana concurrent queries
3. Increase VM 101 memory allocation
**Complete Documentation**: See `/home/jramos/homelab/monitoring/README.md`
---
## Twingate Connector
**Deployment**: CT 112 (twingate-connector)
**Technology**: LXC Container
**Purpose**: Zero-trust network access
### Overview
Lightweight connector providing secure remote access to homelab resources without traditional VPN complexity. Part of Twingate's zero-trust network access (ZTNA) solution.
### Features
- **Zero-Trust Architecture**: Grant access to specific resources, not entire networks
- **No VPN Required**: Simplified connection without VPN client configuration
- **Identity-Based Access**: User and device authentication
- **Automatic Updates**: Connector auto-updates for security patches
- **Low Resource Overhead**: Minimal CPU and memory footprint
### Architecture
```
External User → Twingate Cloud → Twingate Connector (CT 112) → Homelab Resources
```
### Deployment Considerations
**LXC vs Docker**:
- LXC chosen for lightweight, always-on service
- Minimal resource consumption
- System-level integration
- Quick restart and recovery
**Network Placement**:
- Deployed on homelab management network (192.168.2.0/24)
- Access to all internal resources
- No inbound port forwarding required
### Configuration
The Twingate connector is configured via the Twingate Admin Console:
1. **Create Connector** in Twingate Admin Console
2. **Generate Token** for connector authentication
3. **Deploy Container** with provided token
4. **Configure Resources** to route through connector
5. **Assign Users** to resources
### Maintenance
**Health Monitoring**:
- Check connector status in Twingate Admin Console
- Monitor CPU/memory usage on CT 112
- Review connection logs
**Updates**:
- Connector auto-updates by default
- Manual updates: Restart container or redeploy
**Troubleshooting**:
- Verify network connectivity to Twingate cloud
- Check connector token validity
- Review resource routing configuration
- Ensure firewall allows outbound HTTPS
### Security Best Practices
1. **Least Privilege**: Grant access only to required resources
2. **MFA Enforcement**: Require multi-factor authentication for users
3. **Device Trust**: Enable device posture checks
4. **Audit Logs**: Regularly review access logs in Twingate Console
5. **Connector Isolation**: Consider dedicated network segment for connector
### Integration with Homelab
**Protected Resources**:
- Proxmox Web UI (192.168.2.200:8006)
- Grafana Monitoring (192.168.2.114:3000)
- Nginx Proxy Manager (192.168.2.101:81)
- n8n Workflows (192.168.2.107:5678)
- Development VMs and services
**Access Policies**:
- Admin users: Full access to all resources
- Monitoring users: Read-only Grafana access
- Developers: Access to dev VMs and services
---
## General Deployment Instructions
### Prerequisites
@@ -308,6 +507,39 @@ Several services have embedded secrets in their docker-compose.yaml files:
2. Verify host directory ownership: `chown -R <user>:<group> /path/to/volume`
3. Check SELinux context (if applicable): `ls -Z /path/to/volume`
### Monitoring Stack Issues
**Metrics Not Appearing**:
1. Verify PVE Exporter can reach Proxmox API
2. Check Prometheus scrape targets status
3. Ensure Grafana data source is configured correctly
4. Review retention policies (data may be expired)
**Authentication Failures (PVE Exporter)**:
1. Verify Proxmox user credentials in `.env` file
2. Check user has PVEAuditor role
3. Test API access: `curl -k https://192.168.2.200:8006/api2/json/version`
**High Resource Usage**:
1. Adjust Prometheus retention: `--storage.tsdb.retention.time=7d`
2. Reduce scrape frequency in prometheus.yml
3. Limit Grafana query concurrency
4. Increase VM 101 resources if needed
### Twingate Connector Issues
**Connector Offline**:
1. Check CT 112 is running: `pct status 112`
2. Verify network connectivity from container
3. Check connector token validity in Twingate Console
4. Review container logs for error messages
**Cannot Access Resources**:
1. Verify resource is configured in Twingate Console
2. Check user has permission to access resource
3. Ensure connector is online and healthy
4. Verify network routes on CT 112
## Migration Notes
### Post-Migration Tasks ### Post-Migration Tasks
@@ -353,6 +585,7 @@ For homelab-specific questions or issues:
---
**Last Updated**: 2025-12-07
**Maintainer**: jramos
**Repository**: http://192.168.2.102:3060/jramos/homelab
**Infrastructure**: 10 VMs, 4 LXC Containers