refactor(repo): reorganize repository structure for improved navigation and maintainability

Implement comprehensive directory reorganization to improve discoverability,
logical grouping, and separation of concerns across documentation, scripts,
and infrastructure snapshots.

Major Changes:

1. Documentation Reorganization:
   - Created start-here-docs/ for onboarding documentation
     * Moved QUICK-START.md, START-HERE.md, GIT-SETUP-GUIDE.md
     * Moved GIT-QUICK-REFERENCE.md, SCRIPT-USAGE.md, SETUP-COMPLETE.md
   - Created troubleshooting/ directory
     * Moved BUGFIX-SUMMARY.md for centralized issue resolution
   - Created mcp/ directory for Model Context Protocol configurations
     * Moved OBSIDIAN-MCP-SETUP.md to mcp/obsidian/

2. Scripts Reorganization:
   - Created scripts/crawlers-exporters/ for infrastructure collection
     * Moved collect*.sh scripts and collection documentation
     * Consolidates Proxmox homelab export tooling
   - Created scripts/fixers/ for operational repair scripts
     * Moved fix_n8n_db_*.sh scripts
     * Isolated scripts with embedded credentials (excluded from git; credential-free templates remain tracked)
   - Created scripts/qol/ for quality-of-life utilities
     * Moved git-aliases.sh and git-first-commit.sh
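
The script moves above can be sketched with `git mv` so the renames stay tracked as renames. This is an illustration only: the filenames come from this message, but the exact commands and ordering used in the actual reorganization are assumptions.

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
# Stand-in for the pre-reorganization flat scripts/ layout.
mkdir -p scripts
touch scripts/fix_n8n_db_c_locale.sh scripts/git-aliases.sh scripts/git-first-commit.sh
git add -A
# Regroup: operational repair scripts vs. quality-of-life helpers.
mkdir -p scripts/fixers scripts/qol
git mv scripts/fix_n8n_db_c_locale.sh scripts/fixers/
git mv scripts/git-aliases.sh scripts/git-first-commit.sh scripts/qol/
```

Because `git mv` records the moves in the index, a later `git commit` shows them as renames rather than delete/add pairs, which is what produces the "73 renames/moves" count in a diffstat.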

3. Infrastructure Snapshots:
   - Created disaster-recovery/ for active infrastructure state
     * Moved latest homelab-export-20251202-204939/ snapshot
     * Contains current VM/CT configurations and system state
   - Created archive-homelab/ for historical snapshots
     * Moved homelab-export-*.tar.gz archives
     * Preserves point-in-time backups for reference
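
The archive workflow above can be reproduced with plain `tar`. The directory name below is hypothetical (it only follows the `homelab-export-YYYYMMDD-HHMMSS` naming convention seen in this commit); the actual snapshots were moved, not recreated.

```shell
set -e
work="$(mktemp -d)"
cd "$work"
# Hypothetical snapshot directory following the export naming convention.
mkdir -p homelab-export-20250101-000000
echo "vmid: 100" > homelab-export-20250101-000000/qemu.conf
# Compress the point-in-time snapshot and park it under archive-homelab/.
mkdir -p archive-homelab
tar czf archive-homelab/homelab-export-20250101-000000.tar.gz \
    homelab-export-20250101-000000
# List the archive contents to confirm the snapshot round-trips.
tar tzf archive-homelab/homelab-export-20250101-000000.tar.gz
```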

4. Agent Definitions:
   - Created sub-agents/ directory
     * Added backend-builder.md (development agent)
     * Added lab-operator.md (infrastructure operations agent)
     * Added librarian.md (git/version control agent)
     * Added scribe.md (documentation agent)

5. Updated INDEX.md:
   - Reflects new directory structure throughout
   - Updated all file path references
   - Enhanced navigation with new sections
   - Added agent roles documentation
   - Updated quick reference commands

6. Security Improvements:
   - Updated .gitignore to match reorganized file locations
   - Corrected path for scripts/fixers/fix_n8n_db_c_locale.sh exclusion
   - Maintained template-based credential management pattern
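
The pattern behind items 2 and 3 can be verified with `git check-ignore`. The `.template` suffix below is a hypothetical naming choice, not confirmed by this commit; only the `scripts/fixers/fix_n8n_db_c_locale.sh` exclusion path is taken from the message.

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
# The credentialed fixer is ignored; a credential-free template stays tracked.
cat > .gitignore <<'EOF'
scripts/fixers/fix_n8n_db_c_locale.sh
EOF
mkdir -p scripts/fixers
touch scripts/fixers/fix_n8n_db_c_locale.sh
touch scripts/fixers/fix_n8n_db_c_locale.sh.template   # hypothetical name
# Prints the path when (and only when) the file matches an ignore rule.
git check-ignore scripts/fixers/fix_n8n_db_c_locale.sh
```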

Infrastructure State Update:
   - Latest snapshot: 2025-12-02 20:49:54
   - Removed: VM 101 (gitlab), CT 112 (Anytype)
   - Added: CT 113 (n8n)
   - Total: 9 VMs, 3 Containers

Impact:
   - Improved repository navigation and discoverability
   - Logical separation of documentation, scripts, and snapshots
   - Clearer onboarding path for new users
   - Enhanced maintainability through organized structure
   - Foundation for multi-agent workflow support

Files changed: 90 files (+935/-349)
   - 3 modified, 14 new files, 73 renames/moves

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Commit: 4f69420aaa (parent eec4c4b298)
Date: 2025-12-02 21:39:33 -07:00


@@ -0,0 +1,20 @@
# vzdump default settings
#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#performance: [max-workers=N][,pbs-entries-max=N]
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#prune-backups: keep-INTERVAL=N[,...]
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N
#notes-template: {{guestname}}
#pbs-change-detection-mode: legacy|data|metadata
#fleecing: enabled=BOOLEAN,storage=STORAGE_ID
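
The template above ships with every option commented out. As an illustration only (all values below are hypothetical and not this host's settings), a filled-in vzdump.conf using keys from that template might look like:

```
# Illustrative values only -- not taken from this snapshot
storage: PBS-Backups
mode: snapshot
bwlimit: 100000
mailto: admin@example.com
prune-backups: keep-daily=7,keep-weekly=4
notes-template: {{guestname}}
```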


@@ -0,0 +1,13 @@
#.101
arch: amd64
cores: 1
features: nesting=1
hostname: nginx
memory: 2048
nameserver: 8.8.8.8
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.2.1,hwaddr=BC:24:11:A6:98:63,ip=192.168.2.101/24,type=veth
onboot: 1
ostype: alpine
rootfs: Vault:subvol-102-disk-0,size=2G
swap: 512
unprivileged: 1


@@ -0,0 +1,39 @@
#<div align='center'>
# <a href='https://Helper-Scripts.com' target='_blank' rel='noopener noreferrer'>
# <img src='https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/images/logo-81x112.png' alt='Logo' style='width:81px;height:112px;'/>
# </a>
#
# <h2 style='font-size: 24px; margin: 20px 0;'>NetBox LXC</h2>
#
# <p style='margin: 16px 0;'>
# <a href='https://ko-fi.com/community_scripts' target='_blank' rel='noopener noreferrer'>
# <img src='https://img.shields.io/badge/&#x2615;-Buy us a coffee-blue' alt='spend Coffee' />
# </a>
# </p>
#
# <span style='margin: 0 10px;'>
# <i class="fa fa-github fa-fw" style="color: #f5f5f5;"></i>
# <a href='https://github.com/community-scripts/ProxmoxVE' target='_blank' rel='noopener noreferrer' style='text-decoration: none; color: #00617f;'>GitHub</a>
# </span>
# <span style='margin: 0 10px;'>
# <i class="fa fa-comments fa-fw" style="color: #f5f5f5;"></i>
# <a href='https://github.com/community-scripts/ProxmoxVE/discussions' target='_blank' rel='noopener noreferrer' style='text-decoration: none; color: #00617f;'>Discussions</a>
# </span>
# <span style='margin: 0 10px;'>
# <i class="fa fa-exclamation-circle fa-fw" style="color: #f5f5f5;"></i>
# <a href='https://github.com/community-scripts/ProxmoxVE/issues' target='_blank' rel='noopener noreferrer' style='text-decoration: none; color: #00617f;'>Issues</a>
# </span>
#</div>
#<b>.104</b>
arch: amd64
cores: 2
features: keyctl=1,nesting=1
hostname: netbox
memory: 2048
net0: name=eth0,bridge=vmbr0,gw=192.168.2.1,hwaddr=BC:24:11:61:7D:2B,ip=192.168.2.104/24,type=veth
onboot: 1
ostype: debian
rootfs: Vault:subvol-103-disk-0,size=4G
swap: 512
tags: community-script;network
unprivileged: 1


@@ -0,0 +1,46 @@
arch: amd64
cores: 2
features: nesting=1
hostname: n8n
memory: 4096
nameserver: 8.8.8.8 8.8.4.4 1.1.1.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.2.1,hwaddr=BC:24:11:BD:35:B7,ip=192.168.2.113/24,type=veth
ostype: debian
parent: pre-db-permission-fix
rootfs: Vault:subvol-113-disk-0,size=20G
searchdomain: apophisnetworking.net
swap: 2048
unprivileged: 1
[pre-db-permission-fix]
#Before PostgreSQL schema permission fix
arch: amd64
cores: 2
features: nesting=1
hostname: n8n
memory: 4096
nameserver: 8.8.8.8 8.8.4.4 1.1.1.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.2.1,hwaddr=BC:24:11:BD:35:B7,ip=192.168.2.113/24,type=veth
ostype: debian
parent: pre-n8n-fix
rootfs: Vault:subvol-113-disk-0,size=20G
searchdomain: apophisnetworking.net
snaptime: 1764644598
swap: 2048
unprivileged: 1
[pre-n8n-fix]
#Before encryption key fix 2025-12-01_12:58
arch: amd64
cores: 2
features: nesting=1
hostname: n8n
memory: 4096
nameserver: 8.8.8.8 8.8.4.4 1.1.1.1
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.2.1,hwaddr=BC:24:11:BD:35:B7,ip=192.168.2.113/24,type=veth
ostype: debian
rootfs: Vault:subvol-113-disk-0,size=20G
searchdomain: apophisnetworking.net
snaptime: 1764619109
swap: 2048
unprivileged: 1


@@ -0,0 +1,11 @@
127.0.0.1 localhost.localdomain localhost
192.168.2.100 serviceslab.apophisnetworking.net serviceslab
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts


@@ -0,0 +1,50 @@
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual
iface enp6s0f0 inet manual
iface enp6s0f1 inet manual
iface enp7s0f0 inet manual
iface enp7s0f1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.2.100/24
        gateway 192.168.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr1
iface vmbr1 inet static
        address 192.168.3.0/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

source /etc/network/interfaces.d/*


@@ -0,0 +1,2 @@
search apophisnetworking.net
nameserver 8.8.8.8


@@ -0,0 +1,9 @@
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuH77Q3gsq0eSe+iUFGk0
VliLvw4A/JbEkRnW3B8D+iNeN41sm0Py7AkqlKy3X4LE8UQQ6Yu+nyxBfZMr5Sim
41FbnxxflXfXVvCcbfJe0PW9iRuXATqhBZtKbkcE4y2C/FCnQEq9d3LY8gKTHRJ3
7NQ4TEe0njNpeJ8TthzFJwFLwybO40XuVdjyvoDNRLyOqxLUc4ju0VQjZRJwE6hI
8vUv/o+d4n5eGq5s+wu3kgiI8NztPjiZhWuW0Kc/pkanHt1hSvoJzICWsr3pcU/F
nrTP0q56voFwnyEFxZ6qZhTxq/Xe1JFxYI0fA2PZYGguwx1tLGbrV1DBD0A9RBc+
GwIDAQAB
-----END PUBLIC KEY-----


@@ -0,0 +1 @@
keyboard: en-us


@@ -0,0 +1,30 @@
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

zfspool: Vault
        pool Vault
        content rootdir,images
        mountpoint /Vault
        nodes serviceslab

pbs: PBS-Backups
        datastore backups
        server 192.168.2.151
        content backup
        fingerprint dc:7c:c6:19:f3:79:1c:f0:a9:36:3c:b0:6d:9f:8e:9a:53:c3:70:de:b8:a8:7a:c9:3a:4e:38:fb:60:f9:10:8f
        prune-backups keep-all=1
        username root@pam

nfs: iso-share
        export /mnt/Vauly/iso-vault
        path /mnt/pve/iso-share
        server 192.168.2.150
        content iso
        prune-backups keep-all=1


@@ -0,0 +1,17 @@
user:api@pam:1:0::::::
token:api@pam!homepage:0:1::
user:root@pam:1:0:::jramosdirect2@gmail.com:::
token:root@pam!packer:0:0::
token:root@pam!tui:0:0::
user:terraform@pam:1:0::::::
token:terraform@pam!terraform:0:0::
group:api-ro:api@pam::
group:terraform:terraform@pam::
role:TerraformProvision:Datastore.AllocateSpace,Datastore.Audit,Pool.Allocate,SDN.Use,Sys.Audit,Sys.Console,Sys.Modify,Sys.PowerMgmt,VM.Allocate,VM.Audit,VM.Clone,VM.Config.CDROM,VM.Config.CPU,VM.Config.Cloudinit,VM.Config.Disk,VM.Config.HWType,VM.Config.Memory,VM.Config.Network,VM.Config.Options,VM.Migrate,VM.Monitor,VM.PowerMgmt:
acl:1:/:root@pam!packer:Administrator:
acl:1:/:@api-ro,api@pam!homepage:PVEAuditor:
acl:1:/:@terraform:TerraformProvision:


@@ -0,0 +1,163 @@
UNIT LOAD ACTIVE SUB DESCRIPTION
apparmor.service loaded active exited Load AppArmor profiles
apt-daily-upgrade.service loaded inactive dead Daily apt upgrade and clean activities
apt-daily.service loaded inactive dead Daily apt download activities
● auditd.service not-found inactive dead auditd.service
auth-rpcgss-module.service loaded inactive dead Kernel Module supporting RPCSEC_GSS
beszel-agent-update.service loaded inactive dead Update beszel-agent if needed
beszel-agent.service loaded active running Beszel Agent Service
blk-availability.service loaded active exited Availability of block devices
chrony.service loaded active running chrony, an NTP client/server
● connman.service not-found inactive dead connman.service
console-getty.service loaded inactive dead Console Getty
● console-screen.service not-found inactive dead console-screen.service
console-setup.service loaded active exited Set console font and keymap
corosync.service loaded inactive dead Corosync Cluster Engine
cron.service loaded active running Regular background program processing daemon
dbus.service loaded active running D-Bus System Message Bus
● display-manager.service not-found inactive dead display-manager.service
dm-event.service loaded active running Device-mapper event daemon
dpkg-db-backup.service loaded inactive dead Daily dpkg database backup service
● dracut-mount.service not-found inactive dead dracut-mount.service
e2scrub_all.service loaded inactive dead Online ext4 Metadata Check for All Filesystems
e2scrub_reap.service loaded inactive dead Remove Stale Online ext4 Metadata Check Snapshots
emergency.service loaded inactive dead Emergency Shell
● exim4.service not-found inactive dead exim4.service
● fcoe.service not-found inactive dead fcoe.service
fstrim.service loaded inactive dead Discard unused blocks on filesystems from /etc/fstab
getty-static.service loaded inactive dead getty on tty2-tty6 if dbus and logind are not available
getty@tty1.service loaded active running Getty on tty1
● glusterd.service not-found inactive dead glusterd.service
● gssproxy.service not-found inactive dead gssproxy.service
ifupdown2-pre.service loaded active exited Helper to synchronize boot up for ifupdown
initrd-cleanup.service loaded inactive dead Cleaning Up and Shutting Down Daemons
initrd-parse-etc.service loaded inactive dead Mountpoints Configured in the Real Root
initrd-switch-root.service loaded inactive dead Switch Root
initrd-udevadm-cleanup-db.service loaded inactive dead Cleanup udev Database
● iscsi-shutdown.service not-found inactive dead iscsi-shutdown.service
iscsid.service loaded inactive dead iSCSI initiator daemon (iscsid)
● kbd.service not-found inactive dead kbd.service
keyboard-setup.service loaded active exited Set the console keyboard layout
kmod-static-nodes.service loaded active exited Create List of Static Device Nodes
ksmtuned.service loaded active running Kernel Samepage Merging (KSM) Tuning Daemon
logrotate.service loaded inactive dead Rotate log files
lvm2-lvmpolld.service loaded inactive dead LVM2 poll daemon
lvm2-monitor.service loaded active exited Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
lxc-monitord.service loaded active running LXC Container Monitoring Daemon
lxc-net.service loaded active exited LXC network bridge setup
lxc.service loaded active exited LXC Container Initialization and Autoboot Code
lxcfs.service loaded active running FUSE filesystem for LXC
man-db.service loaded inactive dead Daily man-db regeneration
modprobe@configfs.service loaded inactive dead Load Kernel Module configfs
modprobe@dm_mod.service loaded inactive dead Load Kernel Module dm_mod
modprobe@drm.service loaded inactive dead Load Kernel Module drm
modprobe@efi_pstore.service loaded inactive dead Load Kernel Module efi_pstore
modprobe@fuse.service loaded inactive dead Load Kernel Module fuse
modprobe@loop.service loaded inactive dead Load Kernel Module loop
● multipathd.service not-found inactive dead multipathd.service
networking.service loaded active exited Network initialization
● NetworkManager.service not-found inactive dead NetworkManager.service
● nfs-kernel-server.service not-found inactive dead nfs-kernel-server.service
● nfs-server.service not-found inactive dead nfs-server.service
nfs-utils.service loaded inactive dead NFS server and client services
● ntp.service not-found inactive dead ntp.service
● ntpsec.service not-found inactive dead ntpsec.service
open-iscsi.service loaded inactive dead Login to default iSCSI targets
● openntpd.service not-found inactive dead openntpd.service
● plymouth-quit-wait.service not-found inactive dead plymouth-quit-wait.service
● plymouth-start.service not-found inactive dead plymouth-start.service
postfix.service loaded active exited Postfix Mail Transport Agent
postfix@-.service loaded active running Postfix Mail Transport Agent (instance -)
promtail.service loaded active running Promtail service for Loki log shipping
proxmox-boot-cleanup.service loaded inactive dead Clean up bootloader next-boot setting
proxmox-firewall.service loaded active running Proxmox nftables firewall
pve-cluster.service loaded active running The Proxmox VE cluster filesystem
pve-container@102.service loaded active running PVE LXC Container: 102
pve-container@113.service loaded active running PVE LXC Container: 113
pve-daily-update.service loaded inactive dead Daily PVE download activities
pve-firewall.service loaded active running Proxmox VE firewall
pve-guests.service loaded active exited PVE guests
pve-ha-crm.service loaded active running PVE Cluster HA Resource Manager Daemon
pve-ha-lrm.service loaded active running PVE Local HA Resource Manager Daemon
pve-lxc-syscalld.service loaded active running Proxmox VE LXC Syscall Daemon
pve-query-machine-capabilities.service loaded active exited PVE Query Machine Capabilities
pvebanner.service loaded active exited Proxmox VE Login Banner
pvedaemon.service loaded active running PVE API Daemon
pvefw-logger.service loaded active running Proxmox VE firewall logger
pvenetcommit.service loaded active exited Commit Proxmox VE network changes
pveproxy.service loaded active running PVE API Proxy Server
pvescheduler.service loaded active running Proxmox VE scheduler
pvestatd.service loaded active running PVE Status Daemon
pveupload-cleanup.service loaded inactive dead Clean up old Proxmox pveupload files in /var/tmp
qmeventd.service loaded active running PVE Qemu Event Daemon
rbdmap.service loaded active exited Map RBD devices
rc-local.service loaded inactive dead /etc/rc.local Compatibility
rescue.service loaded inactive dead Rescue Shell
rpc-gssd.service loaded inactive dead RPC security service for NFS client and server
rpc-statd-notify.service loaded active exited Notify NFS peers of a restart
rpc-svcgssd.service loaded inactive dead RPC security service for NFS server
rpcbind.service loaded active running RPC bind portmap service
rrdcached.service loaded active running LSB: start or stop rrdcached
● sendmail.service not-found inactive dead sendmail.service
smartmontools.service loaded active running Self Monitoring and Reporting Technology (SMART) Daemon
● smb.service not-found inactive dead smb.service
spiceproxy.service loaded active running PVE SPICE Proxy Server
ssh.service loaded active running OpenBSD Secure Shell server
● syslog.service not-found inactive dead syslog.service
systemd-ask-password-console.service loaded inactive dead Dispatch Password Requests to Console
systemd-ask-password-wall.service loaded inactive dead Forward Password Requests to Wall
systemd-binfmt.service loaded active exited Set Up Additional Binary Formats
systemd-boot-system-token.service loaded inactive dead Store a System Token in an EFI Variable
systemd-firstboot.service loaded inactive dead First Boot Wizard
systemd-fsck-root.service loaded inactive dead File System Check on Root Device
systemd-fsck@dev-disk-by\x2duuid-20FD\x2d8DBD.service loaded active exited File System Check on /dev/disk/by-uuid/20FD-8DBD
systemd-fsckd.service loaded inactive dead File System Check Daemon to report status
● systemd-hwdb-update.service not-found inactive dead systemd-hwdb-update.service
systemd-initctl.service loaded inactive dead initctl Compatibility Daemon
systemd-journal-flush.service loaded active exited Flush Journal to Persistent Storage
systemd-journald.service loaded active running Journal Service
systemd-logind.service loaded active running User Login Management
systemd-machine-id-commit.service loaded inactive dead Commit a transient machine-id on disk
systemd-modules-load.service loaded active exited Load Kernel Modules
systemd-networkd.service loaded inactive dead Network Configuration
● systemd-oomd.service not-found inactive dead systemd-oomd.service
systemd-pcrphase-initrd.service loaded inactive dead TPM2 PCR Barrier (initrd)
systemd-pcrphase-sysinit.service loaded inactive dead TPM2 PCR Barrier (Initialization)
systemd-pcrphase.service loaded inactive dead TPM2 PCR Barrier (User)
systemd-pstore.service loaded inactive dead Platform Persistent Storage Archival
systemd-quotacheck.service loaded inactive dead File System Quota Check
systemd-random-seed.service loaded active exited Load/Save Random Seed
systemd-remount-fs.service loaded active exited Remount Root and Kernel File Systems
systemd-repart.service loaded inactive dead Repartition Root Disk
systemd-rfkill.service loaded inactive dead Load/Save RF Kill Switch Status
systemd-sysctl.service loaded active exited Apply Kernel Variables
systemd-sysext.service loaded inactive dead Merge System Extension Images into /usr/ and /opt/
systemd-sysusers.service loaded active exited Create System Users
systemd-tmpfiles-clean.service loaded inactive dead Cleanup of Temporary Directories
systemd-tmpfiles-setup-dev.service loaded active exited Create Static Device Nodes in /dev
systemd-tmpfiles-setup.service loaded active exited Create System Files and Directories
systemd-udev-settle.service loaded active exited Wait for udev To Complete Device Initialization
systemd-udev-trigger.service loaded active exited Coldplug All udev Devices
systemd-udevd.service loaded active running Rule-based Manager for Device Events and Files
● systemd-update-done.service not-found inactive dead systemd-update-done.service
systemd-update-utmp-runlevel.service loaded inactive dead Record Runlevel Change in UTMP
systemd-update-utmp.service loaded active exited Record System Boot/Shutdown in UTMP
systemd-user-sessions.service loaded active exited Permit User Sessions
● systemd-vconsole-setup.service not-found inactive dead systemd-vconsole-setup.service
user-runtime-dir@0.service loaded active exited User Runtime Directory /run/user/0
user@0.service loaded active running User Manager for UID 0
watchdog-mux.service loaded active running Proxmox VE watchdog multiplexer
wazuh-agent.service loaded active running Wazuh agent
zfs-import-cache.service loaded inactive dead Import ZFS pools by cache file
zfs-import-scan.service loaded active exited Import ZFS pools by device scanning
zfs-import@Vault.service loaded active exited Import ZFS pool Vault
zfs-mount.service loaded active exited Mount ZFS filesystems
zfs-share.service loaded active exited ZFS file system shares
zfs-volume-wait.service loaded active exited Wait for ZFS Volume (zvol) links in /dev
zfs-zed.service loaded active running ZFS Event Daemon (zed)
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
156 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
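
The trailing "156 loaded units listed." summary above is derived from the LOAD column. A small awk sketch over the same column layout (sample lines inlined below rather than a live `systemctl list-units` call) shows how such a count falls out:

```shell
# Three sample rows in `systemctl list-units --type=service --all` layout:
# UNIT LOAD ACTIVE SUB DESCRIPTION
sample='apparmor.service loaded active exited Load AppArmor profiles
auditd.service not-found inactive dead auditd.service
chrony.service loaded active running chrony, an NTP client/server'
# Count rows whose second field (LOAD) is "loaded"; not-found units are skipped.
loaded=$(printf '%s\n' "$sample" | awk '$2 == "loaded" { n++ } END { print n+0 }')
echo "$loaded loaded units listed."
# prints: 2 loaded units listed.
```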


@@ -0,0 +1,348 @@
#
# Open-iSCSI default configuration.
# Could be located at /etc/iscsi/iscsid.conf or ~/.iscsid.conf
#
# Note: To set any of these values for a specific node/session run
# the iscsiadm --mode node --op command for the value. See the README
# and man page for iscsiadm for details on the --op command.
#
######################
# iscsid daemon config
######################
#
# If you want iscsid to start the first time an iscsi tool
# needs to access it, instead of starting it when the init
# scripts run, set the iscsid startup command here. This
# should normally only need to be done by distro package
# maintainers. If you leave the iscsid daemon running all
# the time then leave this attribute commented out.
#
# Default for Fedora and RHEL. Uncomment to activate.
# iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket
#
# Default for Debian and Ubuntu. Uncomment to activate.
iscsid.startup = /bin/systemctl start iscsid.socket
#
# Default if you are not using systemd. Uncomment to activate.
# iscsid.startup = /usr/bin/service start iscsid
# Check for active mounts on devices reachable through a session
# and refuse to logout if there are any. Defaults to "No".
# iscsid.safe_logout = Yes
# Only require UID auth for MGMT IPCs, and not username.
# Useful if you want to run iscsid in a constrained environment.
# Note: Only do this if you are aware of the security implications.
# Defaults to "No".
# iscsid.ipc_auth_uid = Yes
#############################
# NIC/HBA and driver settings
#############################
# open-iscsi can create a session and bind it to a NIC/HBA.
# To set this up see the example iface config file.
#*****************
# Startup settings
#*****************
# To request that the iscsi service scripts startup a session, use "automatic":
# node.startup = automatic
#
# To manually startup the session, use "manual". The default is manual.
node.startup = manual
# For "automatic" startup nodes, setting this to "Yes" will try logins on each
# available iface until one succeeds, and then stop. The default "No" will try
# logins on all available ifaces simultaneously.
node.leading_login = No
# *************
# CHAP Settings
# *************
# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
#node.session.auth.authmethod = CHAP
# To configure which CHAP algorithms to enable, set
# node.session.auth.chap_algs to a comma separated list.
# The algorithms should be listed in order of decreasing
# preference — in particular, with the most preferred algorithm first.
# Valid values are MD5, SHA1, SHA256, and SHA3-256.
# The default is MD5.
#node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5
# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
#node.session.auth.username = username
#node.session.auth.password = password
# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#node.session.auth.username_in = username_in
#node.session.auth.password_in = password_in
# To enable CHAP authentication for a discovery session to the target,
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
#discovery.sendtargets.auth.authmethod = CHAP
# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
#discovery.sendtargets.auth.username = username
#discovery.sendtargets.auth.password = password
# To set a discovery session CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#discovery.sendtargets.auth.username_in = username_in
#discovery.sendtargets.auth.password_in = password_in
# ********
# Timeouts
# ********
#
# See the iSCSI README's Advanced Configuration section for tips
# on setting timeouts when using multipath or doing root over iSCSI.
#
# To specify the length of time to wait for session re-establishment
# before failing SCSI commands back to the application when running
# the Linux SCSI Layer error handler, edit the line.
# The value is in seconds and the default is 120 seconds.
# Special values:
# - If the value is 0, IO will be failed immediately.
# - If the value is less than 0, IO will remain queued until the session
# is logged back in, or until the user runs the logout command.
node.session.timeo.replacement_timeout = 120
# To specify the time to wait for login to complete, edit the line.
# The value is in seconds and the default is 15 seconds.
node.conn[0].timeo.login_timeout = 15
# To specify the time to wait for logout to complete, edit the line.
# The value is in seconds and the default is 15 seconds.
node.conn[0].timeo.logout_timeout = 15
# Time interval to wait for on connection before sending a ping.
# The value is in seconds and the default is 5 seconds.
node.conn[0].timeo.noop_out_interval = 5
# To specify the time to wait for a Nop-out response before failing
# the connection, edit this line. Failing the connection will
# cause IO to be failed back to the SCSI layer. If using dm-multipath
# this will cause the IO to be failed to the multipath layer.
# The value is in seconds and the default is 5 seconds.
node.conn[0].timeo.noop_out_timeout = 5
# To specify the time to wait for an abort response before
# failing the operation and trying a logical unit reset, edit the line.
# The value is in seconds and the default is 15 seconds.
node.session.err_timeo.abort_timeout = 15
# To specify the time to wait for a logical unit response
# before failing the operation and trying session re-establishment,
# edit the line.
# The value is in seconds and the default is 30 seconds.
node.session.err_timeo.lu_reset_timeout = 30
# To specify the time to wait for a target response
# before failing the operation and trying session re-establishment,
# edit the line.
# The value is in seconds and the default is 30 seconds.
node.session.err_timeo.tgt_reset_timeout = 30
# The value is in seconds and the default is 60 seconds.
node.session.err_timeo.host_reset_timeout = 60
#******
# Retry
#******
# To specify the number of times iscsid should retry a login
# if the login attempt fails due to the node.conn[0].timeo.login_timeout
# expiring, modify the following line. Note that if the login fails
# quickly (before node.conn[0].timeo.login_timeout fires) because the network
# layer or the target returns an error, iscsid may retry the login more than
# node.session.initial_login_retry_max times.
#
# This retry count along with node.conn[0].timeo.login_timeout
# determines the maximum amount of time iscsid will try to
# establish the initial login. node.session.initial_login_retry_max is
# multiplied by the node.conn[0].timeo.login_timeout to determine the
# maximum amount.
#
# The default node.session.initial_login_retry_max is 8 and
# node.conn[0].timeo.login_timeout is 15 so we have:
#
# node.conn[0].timeo.login_timeout * node.session.initial_login_retry_max = 120s
#
# Valid values are any integer value. This only
# affects the initial login. Setting it to a high value can slow
# down the iscsi service startup. Setting it to a low value can
# cause a session to not get logged into, if there are disruptions
# during startup or if the network is not ready at that time.
node.session.initial_login_retry_max = 8
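
The 120 s bound quoted in the comment block above is just the product of the two defaults, which a one-line arithmetic check confirms:

```shell
login_timeout=15             # node.conn[0].timeo.login_timeout default
initial_login_retry_max=8    # node.session.initial_login_retry_max default
# Maximum time iscsid spends on the initial login before giving up.
max_initial_login=$((login_timeout * initial_login_retry_max))
echo "${max_initial_login}s"   # prints: 120s
```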
################################
# session and device queue depth
################################
# To control how many commands the session will queue, set
# node.session.cmds_max to an integer between 2 and 2048 that is also
# a power of 2. The default is 128.
node.session.cmds_max = 128
# To control the device's queue depth, set node.session.queue_depth
# to a value between 1 and 1024. The default is 32.
node.session.queue_depth = 32
##################################
# MISC SYSTEM PERFORMANCE SETTINGS
##################################
# For software iscsi (iscsi_tcp) and iser (ib_iser), each session
# has a thread used to transmit or queue data to the hardware. For
# cxgb3i, you will get a thread per host.
#
# Setting the thread's priority to a lower value can lead to higher throughput
# and lower latencies. The lowest value is -20. Setting the priority to
# a higher value, can lead to reduced IO performance, but if you are seeing
# the iscsi or scsi threads dominate the use of the CPU then you may want
# to set this value higher.
#
# Note: For cxgb3i, you must set all sessions to the same value.
# Otherwise the behavior is not defined.
#
# The default value is -20. The setting must be between -20 and 20.
node.session.xmit_thread_priority = -20
#***************
# iSCSI settings
#***************
# To enable R2T flow control (i.e., the initiator must wait for an R2T
# command before sending any data), uncomment the following line:
#
#node.session.iscsi.InitialR2T = Yes
#
# To disable R2T flow control (i.e., the initiator has an implied
# initial R2T of "FirstBurstLength" at offset 0), uncomment the following line:
#
# The default is No.
node.session.iscsi.InitialR2T = No
#
# To disable immediate data (i.e., the initiator does not send
# unsolicited data with the iSCSI command PDU), uncomment the following line:
#
#node.session.iscsi.ImmediateData = No
#
# To enable immediate data (i.e., the initiator sends unsolicited data
# with the iSCSI command packet), uncomment the following line:
#
# The default is Yes.
node.session.iscsi.ImmediateData = Yes
# To specify the maximum number of unsolicited data bytes the initiator
# can send in an iSCSI PDU to a target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 262144.
node.session.iscsi.FirstBurstLength = 262144
# To specify the maximum SCSI payload that the initiator will negotiate
# with the target for, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 16776192.
node.session.iscsi.MaxBurstLength = 16776192
# To specify the maximum number of data bytes the initiator can receive
# in an iSCSI PDU from a target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 262144.
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
# To specify the maximum number of data bytes the initiator will send
# in an iSCSI PDU to the target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1).
# Zero is a special case. If set to zero, the initiator will use
# the target's MaxRecvDataSegmentLength for the MaxXmitDataSegmentLength.
# The default is 0.
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
# To specify the maximum number of data bytes the initiator can receive
# in an iSCSI PDU from a target during a discovery session, edit the
# following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 32768.
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# To allow the targets to control the setting of the digest checking,
# with the initiator requesting a preference of enabling the checking,
# uncomment one or both of the following lines:
#node.conn[0].iscsi.HeaderDigest = CRC32C,None
#node.conn[0].iscsi.DataDigest = CRC32C,None
#
# To allow the targets to control the setting of the digest checking,
# with the initiator requesting a preference of disabling the checking,
# uncomment one or both of the following lines:
#node.conn[0].iscsi.HeaderDigest = None,CRC32C
#node.conn[0].iscsi.DataDigest = None,CRC32C
#
# To enable CRC32C digest checking for the header and/or data part of
# iSCSI PDUs, uncomment one or both of the following lines:
#node.conn[0].iscsi.HeaderDigest = CRC32C
#node.conn[0].iscsi.DataDigest = CRC32C
#
# To disable digest checking for the header and/or data part of
# iSCSI PDUs, uncomment one or both of the following lines:
#node.conn[0].iscsi.HeaderDigest = None
#node.conn[0].iscsi.DataDigest = None
#
# The default is to never use DataDigests or HeaderDigests.
#
# For multipath configurations, you may want more than one session to be
# created on each iface record. If node.session.nr_sessions is greater
# than 1, performing a 'login' for that node will ensure that the
# appropriate number of sessions is created.
node.session.nr_sessions = 1
# When iscsid starts up, it recovers existing sessions (if possible).
# If the target for a session has gone away when this occurs, the
# iscsid daemon normally tries to reestablish each session,
# in succession, in the background, by trying again every two
# seconds until all sessions are restored. This configuration
# variable can limit the number of retries for each session.
# For example, setting reopen_max=150 would mean that each session
# recovery was limited to about five minutes.
node.session.reopen_max = 0
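The reopen_max arithmetic in the comment above can be sketched as a quick shell check (the two-second retry interval is the behavior described above; 150 is the example value from the comment):

```shell
# Sketch of the recovery-window math from the comment above:
# iscsid retries a dropped session about every 2 seconds, so
# reopen_max bounds the total recovery time (0 = retry forever).
reopen_max=150                            # example value from the comment
retry_interval=2                          # seconds between reconnect attempts
window=$(( reopen_max * retry_interval )) # 150 * 2 = 300 seconds
echo "recovery window: ${window}s (~$(( window / 60 )) min)"
```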
#************
# Workarounds
#************
# Some targets like IET prefer that an initiator does not respond to PDUs like
# R2Ts after it has sent a task management function like an ABORT TASK or a
# LOGICAL UNIT RESET. To adopt this behavior, uncomment the following line.
# The default is Yes.
node.session.iscsi.FastAbort = Yes
# Some targets like Equalogic prefer that an initiator continue to respond to
# R2Ts after it has sent a task management function like an ABORT TASK or a
# LOGICAL UNIT RESET. To adopt this behavior, uncomment the following line.
# node.session.iscsi.FastAbort = No
# To prevent doing automatic scans that would add unwanted luns to the system,
# we can disable them and have sessions only do manually requested scans.
# Automatic scans are performed on startup, on login, and on AEN/AER reception
# on devices supporting it. For HW drivers, all sessions will use the value
# defined in the configuration file. This configuration option is independent
# of the scsi_mod.scan parameter. The default is auto.
node.session.scan = auto
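A minimal sketch of applying one of these values per node record with iscsiadm rather than editing this file globally. The IQN and portal below are placeholders, not values from this export, and the privileged commands are left commented:

```shell
# Hypothetical node record -- replace with a real target IQN and portal.
TARGET="iqn.2005-10.org.example:target0"
PORTAL="192.168.1.50:3260"

# Override a single setting for that node (needs root and open-iscsi):
# iscsiadm -m node -T "$TARGET" -p "$PORTAL" \
#     -o update -n node.session.nr_sessions -v 2

# After logging back in, check what was actually negotiated:
# iscsiadm -m session -P 3 | grep -E 'FirstBurstLength|MaxBurstLength'
echo "override staged for $TARGET at $PORTAL"
```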

@@ -0,0 +1,6 @@
Name             Type     Status         Total        Used   Available      %
PBS-Backups       pbs     active    1009313392   245697628   712271792  24.34%
Vault         zfspool     active    4546625536   487890744  4058734792  10.73%
iso-share         nfs     active    3298592768    46755840  3251836928   1.42%
local             dir     active      45024148     6655328    36049256  14.78%
local-lvm     lvmthin     active      68988928        6898    68982029   0.01%

@@ -0,0 +1,236 @@
#
# Sample configuration file for the Samba suite for Debian GNU/Linux.
#
#
# This is the main Samba configuration file. You should read the
# smb.conf(5) manual page in order to understand the options listed
# here. Samba has a huge number of configurable options most of which
# are not shown in this example
#
# Some options that are often worth tuning have been included as
# commented-out examples in this file.
# - When such options are commented with ";", the proposed setting
# differs from the default Samba behaviour
# - When commented with "#", the proposed setting is the default
# behaviour of Samba but the option is considered important
# enough to be mentioned here
#
# NOTE: Whenever you modify this file you should run the command
# "testparm" to check that you have not made any basic syntactic
# errors.
#======================= Global Settings =======================
[global]
## Browsing/Identification ###
# Change this to the workgroup/NT-domain name your Samba server will be part of
workgroup = WORKGROUP
#### Networking ####
# The specific set of interfaces / networks to bind to
# This can be either the interface name or an IP address/netmask;
# interface names are normally preferred
; interfaces = 127.0.0.0/8 eth0
# Only bind to the named interfaces and/or networks; you must use the
# 'interfaces' option above to use this.
# It is recommended that you enable this feature if your Samba machine is
# not protected by a firewall or is a firewall itself. However, this
# option cannot handle dynamic or non-broadcast interfaces correctly.
; bind interfaces only = yes
#### Debugging/Accounting ####
# This tells Samba to use a separate log file for each machine
# that connects
log file = /var/log/samba/log.%m
# Cap the size of the individual log files (in KiB).
max log size = 1000
# We want Samba to only log to /var/log/samba/log.{smbd,nmbd}.
# Append syslog@1 if you want important messages to be sent to syslog too.
logging = file
# Do something sensible when Samba crashes: mail the admin a backtrace
panic action = /usr/share/samba/panic-action %d
####### Authentication #######
# Server role. Defines in which mode Samba will operate. Possible
# values are "standalone server", "member server", "classic primary
# domain controller", "classic backup domain controller", "active
# directory domain controller".
#
# Most people will want "standalone server" or "member server".
# Running as "active directory domain controller" will require first
# running "samba-tool domain provision" to wipe databases and create a
# new domain.
server role = standalone server
obey pam restrictions = yes
# This boolean parameter controls whether Samba attempts to sync the Unix
# password with the SMB password when the encrypted SMB password in the
# passdb is changed.
unix password sync = yes
# For Unix password sync to work on a Debian GNU/Linux system, the following
# parameters must be set (thanks to Ian Kahan <kahan@informatik.tu-muenchen.de> for
# sending the correct chat script for the passwd program in Debian Sarge).
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
# This boolean controls whether PAM will be used for password changes
# when requested by an SMB client instead of the program listed in
# 'passwd program'. The default is 'no'.
pam password change = yes
# This option controls how unsuccessful authentication attempts are mapped
# to anonymous connections
map to guest = bad user
########## Domains ###########
#
# The following settings only take effect if 'server role = classic
# primary domain controller', 'server role = classic backup domain controller'
# or 'domain logons' is set
#
# It specifies the location of the user's
# profile directory (from the client point of view). The following
# requires a [profiles] share to be set up on the samba server (see
# below)
; logon path = \\%N\profiles\%U
# Another common choice is storing the profile in the user's home directory
# (this is Samba's default)
# logon path = \\%N\%U\profile
# The following setting only takes effect if 'domain logons' is set
# It specifies the location of a user's home directory (from the client
# point of view)
; logon drive = H:
# logon home = \\%N\%U
# The following setting only takes effect if 'domain logons' is set
# It specifies the script to run during logon. The script must be stored
# in the [netlogon] share
# NOTE: Must be stored in 'DOS' file format convention
; logon script = logon.cmd
# This allows Unix users to be created on the domain controller via the SAMR
# RPC pipe. The example command creates a user account with a disabled Unix
# password; please adapt to your needs
; add user script = /usr/sbin/useradd --create-home %u
# This allows machine accounts to be created on the domain controller via the
# SAMR RPC pipe.
# The following assumes a "machines" group exists on the system
; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u
# This allows Unix groups to be created on the domain controller via the SAMR
# RPC pipe.
; add group script = /usr/sbin/addgroup --force-badname %g
############ Misc ############
# Using the following line enables you to customise your configuration
# on a per machine basis. The %m gets replaced with the netbios name
# of the machine that is connecting
; include = /home/samba/etc/smb.conf.%m
# Some defaults for winbind (make sure you're not using the ranges
# for something else.)
; idmap config * : backend = tdb
; idmap config * : range = 3000-7999
; idmap config YOURDOMAINHERE : backend = tdb
; idmap config YOURDOMAINHERE : range = 100000-999999
; template shell = /bin/bash
# Setup usershare options to enable non-root users to share folders
# with the net usershare command.
# Maximum number of usershares. 0 means that usershare is disabled.
# usershare max shares = 100
# Allow users who've been granted usershare privileges to create
# public shares, not just authenticated ones
usershare allow guests = yes
#======================= Share Definitions =======================
[homes]
comment = Home Directories
browseable = no
# By default, the home directories are exported read-only. Change the
# next parameter to 'no' if you want to be able to write to them.
read only = yes
# File creation mask is set to 0700 for security reasons. If you want to
# create files with group=rw permissions, set next parameter to 0775.
create mask = 0700
# Directory creation mask is set to 0700 for security reasons. If you want to
# create dirs. with group=rw permissions, set next parameter to 0775.
directory mask = 0700
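The masking described in the two comments above is a bitwise AND of the client-requested mode with the mask; a small sketch (the 0666 request is an assumed example):

```shell
# A client asking for 0666 under "create mask = 0700" ends up with 0600,
# because the requested permission bits are ANDed with the mask.
requested=0666                 # assumed client-requested mode
mask=0700                      # the create mask set above
effective=$(printf '%o' $(( requested & mask )))
echo "effective mode: 0${effective}"
```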
# By default, \\server\username shares can be connected to by anyone
# with access to the samba server.
# The following parameter makes sure that only "username" can connect
# to \\server\username
# This might need tweaking when using external authentication schemes
valid users = %S
# Un-comment the following and create the netlogon directory for Domain Logons
# (you need to configure Samba to act as a domain controller too.)
;[netlogon]
; comment = Network Logon Service
; path = /home/samba/netlogon
; guest ok = yes
; read only = yes
# Un-comment the following and create the profiles directory to store
# users profiles (see the "logon path" option above)
# (you need to configure Samba to act as a domain controller too.)
# The path below should be writable by all users so that their
# profile directory may be created the first time they log on
;[profiles]
; comment = Users profiles
; path = /home/samba/profiles
; guest ok = no
; browseable = no
; create mask = 0600
; directory mask = 0700
[printers]
comment = All Printers
browseable = no
path = /var/tmp
printable = yes
guest ok = no
read only = yes
create mask = 0700
# Windows clients look for this share name as a source of downloadable
# printer drivers
[print$]
comment = Printer Drivers
path = /var/lib/samba/printers
browseable = yes
read only = yes
guest ok = no
# Uncomment to allow remote administration of Windows print drivers.
# You may need to replace 'lpadmin' with the name of the group your
# admin users are members of.
# Please note that you also need to set appropriate Unix permissions
# to the drivers directory for these users to have write rights in it
; write list = root, @lpadmin
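Per the NOTE at the top of this file, testparm should be run after any edit; below is a sketch of that check plus the usershare publishing enabled in the Misc section. The config path, share name, and folder are examples, and the daemon-dependent commands are left commented:

```shell
CONF=/etc/samba/smb.conf       # assumed location of this file

# Syntax-check the config and print the effective settings:
# testparm -s "$CONF"

# With usershares enabled, a non-root user can publish a folder:
# net usershare add projects /srv/projects "Team share" Everyone:R guest_ok=n
# net usershare info projects
echo "would validate $CONF"
```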

@@ -0,0 +1,15 @@
NAME                       USED  AVAIL  REFER  MOUNTPOINT
Vault                      465G  3.78T   104K  /Vault
Vault/base-104-disk-0     38.4G  3.81T  5.87G  -
Vault/base-107-disk-0     56.5G  3.83T  5.69G  -
Vault/subvol-102-disk-0    721M  1.30G   721M  /Vault/subvol-102-disk-0
Vault/subvol-103-disk-0   1.68G  2.32G  1.68G  /Vault/subvol-103-disk-0
Vault/subvol-113-disk-0   2.16G  17.9G  2.14G  /Vault/subvol-113-disk-0
Vault/vm-100-disk-0        102G  3.85T  33.3G  -
Vault/vm-105-disk-0       32.5G  3.80T  16.3G  -
Vault/vm-106-disk-0       32.5G  3.80T  11.3G  -
Vault/vm-107-cloudinit       6M  3.78T    72K  -
Vault/vm-108-disk-0        102G  3.87T  14.0G  -
Vault/vm-109-disk-0       32.5G  3.81T   233M  -
Vault/vm-110-disk-0       32.5G  3.81T  3.85G  -
Vault/vm-111-disk-0       32.5G  3.81T  4.63G  -

@@ -0,0 +1,2 @@
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Vault  4.36T  99.9G  4.26T        -         -     8%     2%  1.00x  ONLINE  -

@@ -0,0 +1,10 @@
  pool: Vault
 state: ONLINE
  scan: scrub repaired 0B in 00:09:21 with 0 errors on Sun Nov 9 00:33:22 2025
config:

	NAME                                      STATE     READ WRITE CKSUM
	Vault                                     ONLINE       0     0     0
	  scsi-3600508e00000000012df59c25c59f20a  ONLINE       0     0     0

errors: No known data errors

@@ -0,0 +1,17 @@
#.102
agent: 1
boot: order=scsi0;net0
cores: 4
cpu: host
memory: 8200
meta: creation-qemu=9.0.2,ctime=1739318083
name: docker-hub
net0: virtio=BC:24:11:5B:F5:95,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: Vault:vm-100-disk-0,iothread=1,size=100G
scsihw: virtio-scsi-single
smbios1: uuid=851c9177-a62a-4b55-b495-31680929bed4
sockets: 1
vmgenid: 4ec3beb0-5855-4855-ac77-4552d8848429

@@ -0,0 +1,19 @@
#preparation template vvm
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide0: local-lvm:vm-104-cloudinit,media=cdrom
ide2: none,media=cdrom
memory: 5000
meta: creation-qemu=9.0.2,ctime=1748116781
name: ubuntu-dev
net0: virtio=BC:24:11:86:18:0B,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: Vault:base-104-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=4d84ed03-fc39-40e0-bdee-dc9a31b20e0d
sockets: 1
tags: template
template: 1
vmgenid: f26c1136-40de-4497-b286-50e9df75d977

@@ -0,0 +1,17 @@
boot: order=scsi0;net0;ide2
cores: 4
cpu: host
ide2: iso-share:iso/ubuntu-24.04.2-desktop-amd64.iso,media=cdrom,size=6194550K
memory: 16000
meta: creation-qemu=9.0.2,ctime=1747705140
name: dev
net0: virtio=BC:24:11:0B:94:9A,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: Vault:vm-105-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=94e9fcd6-460b-4ba5-bbaa-419b5cf30491
sockets: 1
spice_enhancements: foldersharing=1,videostreaming=all
vga: qxl
vmgenid: 23985a38-d287-4b9c-97d6-6ac8056090bc

@@ -0,0 +1,15 @@
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide2: iso-share:iso/ubuntu-24.04.2-desktop-amd64.iso,media=cdrom,size=6194550K
memory: 4096
meta: creation-qemu=9.0.2,ctime=1762020925
name: Ansible-Control
net0: virtio=BC:24:11:19:EA:A0,bridge=vmbr0,firewall=1,tag=5
numa: 0
ostype: l26
scsi0: Vault:vm-106-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=adbbe001-4198-4a4d-ba99-3052f7483c10
sockets: 1
vmgenid: 013f169c-4f1f-4f79-af63-fd57d5dc155a

@@ -0,0 +1,18 @@
boot: order=scsi0;ide2;net0
cores: 2
cpu: host
ide0: Vault:vm-107-cloudinit,media=cdrom
ide2: local:iso/ubuntu-24.04.1-desktop-amd64.iso,media=cdrom,size=6057964K
memory: 4096
meta: creation-qemu=9.0.2,ctime=1749061520
name: ubuntu-docker
net0: virtio=BC:24:11:DB:15:0C,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: Vault:base-107-disk-0,iothread=1,size=50G
scsihw: virtio-scsi-single
smbios1: uuid=e63b1ff1-07f8-4809-a3d6-ca03cab2698e
sockets: 1
tags: template
template: 1
vmgenid: c59cfcf1-fa27-4438-b707-7f3b7bdc4a2d

@@ -0,0 +1,17 @@
bios: ovmf
boot: order=scsi0;net0;ide0
cores: 4
cpu: host
efidisk0: local-lvm:vm-108-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
ide0: iso-share:iso/refplat-20241223-fcs.iso,media=cdrom,size=12426624K
memory: 32000
meta: creation-qemu=9.0.2,ctime=1751066715
name: CML
net0: virtio=BC:24:11:70:E6:08,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: Vault:vm-108-disk-0,iothread=1,size=100G
scsihw: virtio-scsi-single
smbios1: uuid=36809984-61ba-452d-8fa3-78cea42b5e57
sockets: 1
vmgenid: 7c6b3c35-3e83-4c3e-ac89-823f4395b3dc

@@ -0,0 +1,15 @@
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide2: iso-share:iso/alpine-standard-3.21.0-x86_64.iso,media=cdrom,size=240M
memory: 2048
meta: creation-qemu=9.0.2,ctime=1762035910
name: web-server-01
net0: virtio=BC:24:11:5A:51:E5,bridge=vmbr0,firewall=1,tag=5
numa: 0
ostype: l26
scsi0: Vault:vm-109-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=483f4dca-bbd5-4a7d-b796-f1b44d2e6caa
sockets: 1
vmgenid: 593fcc09-1934-4e9e-af9a-235ac850db41

@@ -0,0 +1,15 @@
boot: order=scsi0;ide2;net0
cores: 1
cpu: host
ide2: iso-share:iso/ubuntu-24.04.3-live-server-amd64.iso,media=cdrom,size=3226020K
memory: 4096
meta: creation-qemu=9.0.2,ctime=1762040863
name: web-server-02
net0: virtio=BC:24:11:19:CE:FF,bridge=vmbr0,firewall=1,tag=5
numa: 0
ostype: l26
scsi0: Vault:vm-110-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=3b8489d7-fdf8-4f2a-8650-9c1327de1cdf
sockets: 1
vmgenid: 646e0531-ddab-48f5-98a4-f4d15bf32cc7

@@ -0,0 +1,15 @@
boot: order=scsi0;ide2;net0
cores: 1
cpu: host
ide2: iso-share:iso/ubuntu-24.04.3-live-server-amd64.iso,media=cdrom,size=3226020K
memory: 4096
meta: creation-qemu=9.0.2,ctime=1762041805
name: db-server-01
net0: virtio=BC:24:11:C0:5F:B4,bridge=vmbr0,firewall=1,tag=5
numa: 0
ostype: l26
scsi0: Vault:vm-111-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=856b9b58-9146-46fd-88c5-eb8d08fc5f7e
sockets: 1
vmgenid: 6864f7c4-1576-403f-b7e2-1d542bd1e252