
=========================================

Proxmox Architecture – TorresVault 2.0 (Current State, 2026)

=========================================

This document describes the current, single-node Proxmox architecture powering TorresVault 2.0.

This page replaces and supersedes all references to:

  • pve1
  • pve2
  • The old 2-node cluster
  • The Raspberry Pi qdevice
  • All Intel-based legacy hardware

All of that hardware has been decommissioned.

The sole hypervisor is now:

▶ PVE-NAS (192.168.1.153)

It runs on enterprise-grade Ryzen hardware with TrueNAS virtualized via HBA passthrough, and acts as the centralized compute and storage backbone for TorresVault 2.0.

Future expansions (backup NAS, mini-PC cluster, GPU with Jarvis, Flex 10G, etc.) will be documented on a separate roadmap page.


1. High-Level Overview

Hypervisor Platform

  • Proxmox VE 9.x
  • Single-node design (no cluster)
  • System name: pve-nas
  • Management IP: 192.168.1.153
  • IPMI: 192.168.1.145

Storage Layer (under TrueNAS VM)

  • 8 × Samsung PM863 1.92 TB enterprise SSDs passed directly to TrueNAS via HBA
  • TrueNAS manages all storage pools
  • PVE-NAS uses:
    • NVMe mirror → Proxmox OS
    • 1.9 TB SSDs → VM storage
    • ZFS replication & snapshots inside TrueNAS
    • PBS nightly backups
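The ZFS snapshot and replication bullet above can be sketched as follows. The pool and dataset names (ssd-pool/immich, backup-pool) are hypothetical placeholders, not taken from the actual TrueNAS config:

```shell
# Inside the TrueNAS VM (hypothetical dataset names).
# Take a dated snapshot of the Immich dataset:
zfs snapshot ssd-pool/immich@nightly-2026-01-01

# Replicate it incrementally to another pool (or another host over SSH):
zfs send -i ssd-pool/immich@nightly-2025-12-31 ssd-pool/immich@nightly-2026-01-01 \
  | zfs recv backup-pool/immich
```

In practice TrueNAS schedules this via its Periodic Snapshot Tasks and Replication Tasks UI rather than hand-run commands.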

Backup Layer

  • PBS VM on PVE-NAS
  • Writes into pbs-main datastore on TrueNAS

Workload Layer

Core services:

  • Immich
  • Nextcloud
  • Jellyfin
  • Web landing page
  • NPM reverse proxy
  • Prometheus / Grafana
  • Kuma
  • Wiki
  • n8n automations

Automation Layer

  • Home Assistant (external Pi 4)
  • BLE tracking
  • FPP (192.168.60.55)
  • WLED (including car warning system)

This is currently the entire virtualization footprint for TorresVault 2.0.


2. Physical Host: PVE-NAS

Hardware Summary

Component            Details
-------------------  ------------------------------------------------------
Motherboard          ASRock Rack X570D4U-2L2T
CPU                  AMD Ryzen 7 5700G (8 cores / 16 threads)
RAM                  64 GiB DDR4 ECC
Boot                 2 × NVMe SSD (ZFS mirror)
VM Storage           2 × Samsung PM863 1.92 TB SSD (Proxmox local storage)
HBA                  1 × LSI IT-mode HBA (passthrough)
TrueNAS Pool Drives  8 × Samsung PM863 1.92 TB SSD (full passthrough)
Networking           2 × 1G + 2 × 10G (X550 NICs)
IPMI                 192.168.1.145

This is now the single most powerful and consolidated host in TorresVault.


3. Network Design

Proxmox sees only the main LAN and the storage networks defined on the host.

Management & LAN

Interface        IP             Purpose
---------------  -------------  -----------------------------------------------
vmbr0            192.168.1.153  Main LAN bridge & VM network
eno1 / eno2      (bridged)      1G LAN & VM connectivity
ens1f0 / ens1f1  (available)    Dual 10GbE for future storage network / Flex 10G
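A minimal /etc/network/interfaces sketch matching the table above. Interface names come from the table; the gateway address and the bridge options are assumptions, and the real file on pve-nas may differ:

```
auto lo
iface lo inet loopback

iface eno1 inet manual

# Main LAN bridge carrying the management IP and all VM traffic
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.153/24
    gateway 192.168.1.1        # assumed gateway, not stated on this page
    bridge-ports eno1          # eno2 could be added via a bond instead
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes      # lets VMs tag VLANs 10/20/30/50/60
```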

VLANs (available to VMs)

  • VLAN10 – User
  • VLAN20 – IoT
  • VLAN30 – Guest
  • VLAN50 – IoT+
  • VLAN60 – Lighting
    (all managed via UniFi)
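To put a VM on one of these VLANs, the tag is set on its virtual NIC. VM ID 112 and VLAN 60 here are just an example pairing:

```shell
# Attach VM 112's first NIC to vmbr0, tagged with the Lighting VLAN (60).
# Requires vmbr0 to be VLAN-aware (or a dedicated per-VLAN bridge instead).
qm set 112 --net0 virtio,bridge=vmbr0,tag=60
```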

IPMI

  • 192.168.1.145
    Always available even if Proxmox is offline.

4. Storage Architecture (Current)

There are three main storage components:


4.1 Proxmox Local Storage (OS + VM disks)

Storage Name  Description        Backed By
------------  -----------------  ----------------
local         ISOs, templates    NVMe mirror
local-lvm     VM disks           NVMe mirror
ssd-backups   Local staging      1.9 TB PM863 SSD
immich-nfs    Immich share       TrueNAS
nas-zfs       ZFS datasets       TrueNAS
nas-local     VM backups / misc  TrueNAS
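The TrueNAS-backed entries (immich-nfs, nas-zfs, nas-local) are registered as Proxmox storages. An NFS-backed one could have been added roughly like this; the export path is a hypothetical placeholder:

```shell
# Register a TrueNAS NFS export as Proxmox storage "immich-nfs".
pvesm add nfs immich-nfs \
  --server 192.168.1.108 \
  --export /mnt/ssd-pool/immich \
  --content images,backup

pvesm status   # verify the new storage shows up as "active"
```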

4.2 TrueNAS VM (ID 108)

Component  Details
---------  ------------------------------------------------------------------
Disks      8 × PM863 1.92 TB SSDs via HBA passthrough
Role       All primary storage for Immich, Nextcloud, Jellyfin, PBS datastore
IP         192.168.1.108
Pools      ssd-pool, temp-pool, hdd-pool (if present)

TrueNAS acts as the central storage authority.
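The HBA passthrough mentioned above is configured as PCI passthrough on the TrueNAS VM. The PCI address below is a placeholder, to be replaced with the real one from lspci:

```shell
# Identify the LSI HBA's PCI address on the host:
lspci | grep -i LSI

# Pass the whole HBA through to the TrueNAS VM (ID 108).
# pcie=1 assumes the VM uses the q35 machine type; the address is hypothetical.
qm set 108 --hostpci0 0000:01:00.0,pcie=1
```

Passing through the whole controller (rather than individual disks) gives TrueNAS direct SMART access and full control of the drives.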


4.3 Proxmox Backup Server VM (ID 105)

Component   Details
----------  -------------------------------
Datastore   pbs-main
Backed By   TrueNAS
Backed Up?  No (PBS never backs up itself)

PBS backs up:

  • Immich
  • Nextcloud
  • Jellyfin
  • Web / Wiki
  • Prometheus / Kuma
  • n8n
  • NPM

Excluded:

  • PBS (cannot back itself)
  • TrueNAS VM (contains the datastore itself)
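The selection and exclusions above map onto a nightly vzdump job. A one-off CLI equivalent, assuming the PBS datastore is attached as a Proxmox storage also named pbs-main:

```shell
# Back up all guests except PBS (105) and TrueNAS (108).
# The iperf lab VMs could be appended to --exclude as well.
vzdump --all --exclude 105,108 \
  --storage pbs-main \
  --mode snapshot \
  --mailnotification failure
```

The recurring job itself lives under Datacenter → Backup in the Proxmox UI.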

5. Workload Layout (Current)

VMs Hosted on PVE-NAS

VM ID            Name        Purpose
---------------  ----------  ---------------------
100              web         TorresVault home page
101              Kuma        Uptime monitoring
102              next        Nextcloud
103              immich      Photo/video backup
104              jellyfin    Media server
105              pbs         Backup server
106              n8n         Automations
107              npm         Reverse proxy
108              truenas     Core storage
110              Prometheus  Monitoring
116              wiki        DokuWiki
111/112/113/114  iperf-x     VLAN testing tools

Everything is now consolidated on one hypervisor.


6. Backup Strategy

Nightly PBS Backup Jobs

Backed up nightly:

  • Core services (web, Nextcloud, Immich, Jellyfin)
  • Monitoring stack
  • Wiki
  • n8n
  • NPM
  • Portainer (if re-added)
  • All IPERF lab images (optional)

Excluded

  • TrueNAS VM (contains datastore)
  • PBS VM (cannot self-backup)
  • Bulk data held on TrueNAS shares (e.g., Nextcloud files) — the VM itself is backed up, but the NAS-hosted data is protected by ZFS snapshots instead

Restore Flow

  1. In PBS: pick snapshot
  2. Restore to local-lvm or ZFS
  3. Boot VM
  4. Validate with service health checks
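Steps 1–3 have a CLI equivalent. The snapshot name below is a hypothetical example of PBS's vm/<id>/<timestamp> naming, and 102 (Nextcloud) is just an example VM:

```shell
# List snapshots available on the PBS storage for VM 102:
pvesm list pbs-main --vmid 102

# Restore a chosen snapshot into local-lvm:
qmrestore pbs-main:backup/vm/102/2026-01-01T02:00:00Z 102 --storage local-lvm

# Boot and verify:
qm start 102
```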

7. Monitoring

Monitoring stack includes:

Node-level

  • Proxmox UI (graphs)
  • ZFS ARC graphs
  • IO delay graphs

Service-level

  • Prometheus (metrics)
  • Grafana dashboards
  • Kuma for ping/HTTP checks

Storage-level

  • TrueNAS SMART monitoring
  • PBS datastore stats
  • Verification/prune jobs
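The verification and prune jobs run on the PBS side. A manual equivalent, with retention numbers assumed rather than taken from this page:

```shell
# On the PBS VM: verify all snapshots in the datastore.
proxmox-backup-manager verify pbs-main

# Prune one VM's backup group with an assumed retention policy:
proxmox-backup-client prune vm/102 \
  --repository localhost:pbs-main \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 3
```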

Network-level

  • UniFi metrics
  • HA sensors & automations

8. Operations

Power-Down Order

  1. Apps (Immich, Nextcloud, NPM, web, wiki)
  2. Monitoring (Prometheus, Kuma)
  3. PBS
  4. TrueNAS
  5. PVE-NAS
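The order above can be scripted as a dry run. VM IDs come from the workload table; the echoes would be replaced with the real commands when run on pve-nas:

```shell
#!/usr/bin/env bash
# Dry-run of the power-down order: prints each command instead of running it.
apps=(103 102 107 100 116 104)   # Immich, Nextcloud, NPM, web, wiki, Jellyfin
monitoring=(110 101)             # Prometheus, Kuma
pbs=105
truenas=108

for vmid in "${apps[@]}" "${monitoring[@]}" "$pbs" "$truenas"; do
  echo "qm shutdown $vmid --timeout 120"
done
echo "shutdown -h now  # last of all: the PVE-NAS host itself"
```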

Power-Up Order

  1. Network gear
  2. PVE-NAS
  3. TrueNAS
  4. PBS
  5. Core apps
  6. Monitoring

Updating Proxmox

apt update && apt full-upgrade
reboot

9. Risks & Constraints

  • Single-node setup (no HA)
  • TrueNAS + PBS + all VMs on same hardware = consolidated risk
  • No shared storage
  • Heavy workloads can spike RAM usage (steady state is currently ~65%)
  • Future GPU / AI workloads may require more RAM

These are acceptable for home lab usage.


10. Future Upgrades (TorresVault 2.0 Roadmap)

  • Add FLEX 10G for 10GbE uplink
  • Build backup NAS with matching SSDs
  • Add UM890 Pro mini-PC cluster
  • Add Jarvis AI GPU node
  • Scale-out TrueNAS pool to 10–11 SSDs
  • Offload PBS to dedicated hardware
  • Move Home Assistant to a VM
  • Add ZFS replication between NAS → backup NAS

Roadmap page will detail this further.

proxmox/cluster.txt · Last modified: by 192.168.1.189
