TorresVault Architecture Overview
This page documents the high-level architecture of the TorresVault home lab:
- Proxmox cluster and storage
- Network (UniFi, VLANs, WiFi)
- Core services (DNS, reverse proxy, storage, monitoring)
- Automation & lighting (Home Assistant, FPP)
- Future expansion plans (NAS hybrid box, mini PC cluster)
1. High-Level Diagram
This is the bird's-eye view of TorresVault as it exists today, plus near-term plans.
┌──────────────── Internet ────────────────┐
│            Ting Fiber (WAN1)             │
└────────────────────┬─────────────────────┘
                     │
              [ WAN1 @ Port 5 ]
                     │
   ┌─────────────────────────────────┐
   │      UCG Max (192.168.1.1)      │
   │      - Router / Firewall        │
   │      - DHCP for all VLANs       │
   └─────┬───────────┬───────────────┘
         │           │
VLAN 1 (192.168.1.0/24)  VLAN 10 (192.168.10.0/24)
         │           │
   ┌─────┴───────┐   │
   │ USW-Lite-8  │   │
   │    -PoE     │   │
   │192.168.1.194│   │
   └─────┬───────┘   │
         │           │
   ┌─────┼───────────┼────────────┬──────────────┐
   │     │           │            │              │
[Hallway AP]   [Front-end    [Other wired   [Downstream
192.168.1.236   devices]      devices]       uplinks]
(WiFi for multiple
 VLANs via SSIDs)

Hallway AP (mesh) ──► UDB Switch (192.168.1.98)

─────────── Proxmox / Compute Layer ───────────
VLAN 10 / 20 uplinks via:
  - USW Flex (192.168.10.7)
  - USW Flex 2.5G 5 (192.168.10.104)

┌─────────────────────────────────────────────┐
│               Proxmox Cluster               │
│                                             │
│  PVE1: (details TBD)                        │
│    - CPU:                                   │
│    - RAM:                                   │
│    - Storage: 12–14 × 1TB 2.5" disks        │
│    - HBA:                                   │
│                                             │
│  PVE2: (details TBD)                        │
│    - CPU:                                   │
│    - RAM:                                   │
│    - Storage: 12–14 × 1TB 2.5" disks        │
│    - HBA:                                   │
│                                             │
│  QDevice: Raspberry Pi (corosync qdevice)   │
└─────────────────────────────────────────────┘

┌─────────────────────────────────────────────┐
│  Future: NAS / Proxmox Hybrid (Define 7)    │
│    - ASRock Rack X570D4U                    │
│    - Ryzen CPU                              │
│    - RAM: TBD                               │
│    - 2 × HBAs                               │
│    - 16 × 6TB SAS enterprise drives         │
│    - Dual Intel X550 10GbE                  │
│    - 1 mgmt, 2 × 1G, 2 × 10G                │
└─────────────────────────────────────────────┘

┌─────────────────────────────────────────────┐
│        Services Cluster (Mini PCs)          │
│    - 2 × MINISFORUM UM890 Pro               │
│    - Future use: k3s / services / AI        │
└─────────────────────────────────────────────┘

──────────── Automation & Lighting Layer ────────────
VLAN 60 (Torres Family Lights) → 192.168.60.0/24
  - FPP Controller: 192.168.60.55
  - Kulp controllers / smart receivers
  - WLED instances (including wled_car_warning)
2. Network & VLAN Layout
The network core is provided by the UCG Max gateway and a UniFi switch/AP stack.
Core UniFi Devices
- UCG Max – 192.168.1.1
  - WAN1: Ting Fiber
  - Handles DHCP for all VLANs
  - Router for all subnets
- Switches
  - USW Flex – 192.168.10.7 (uplink from UCG Max Port 4)
  - USW Flex 2.5G 5 – 192.168.10.104 (uplink from USW Flex Port 5)
  - USW-Lite-8-PoE – 192.168.1.194 (uplink from UCG Max Port 1)
  - UDB Switch – 192.168.1.98 (meshed via Hallway AP)
- Access Points
  - Master Bedroom AP – 192.168.10.201 (uplink from USW Flex Port 4)
  - Hallway AP – 192.168.1.236 (uplink from USW-Lite-8-PoE, provides mesh to UDB)
VLANs & Subnets
Current layer-3 networks:
| Name | VLAN ID | Subnet | DHCP | Notes |
|---|---|---|---|---|
| Default | 1 | 192.168.1.0/24 | Yes | Core LAN / Infra |
| stark_user | 10 | 192.168.10.0/24 | Yes | User devices |
| stark_IOT | 20 | 192.168.20.0/24 | Yes | Home IoT |
| guest | 30 | 192.168.30.0/24 | Yes | Guest WiFi |
| IOT+ | 50 | 192.168.50.0/24 | Yes | Higher-trust IoT / bridge |
| Torres Family Lights | 60 | 192.168.60.0/24 | Yes | FPP, controllers, WLED, etc. |
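The VLAN IDs map onto the third octet of each subnet (VLAN N → 192.168.N.0/24), which makes membership checks easy to script. A minimal sketch using Python's stdlib `ipaddress` module; the helper name is illustrative, the table data is from above:

```python
import ipaddress

# VLAN ID -> subnet, following the 192.168.<VLAN>.0/24 convention above.
# VLAN 1 is the Default/core LAN.
VLANS = {
    1: "192.168.1.0/24",    # Default
    10: "192.168.10.0/24",  # stark_user
    20: "192.168.20.0/24",  # stark_IOT
    30: "192.168.30.0/24",  # guest
    50: "192.168.50.0/24",  # IOT+
    60: "192.168.60.0/24",  # Torres Family Lights
}

def vlan_for_ip(ip: str):
    """Return the VLAN ID whose subnet contains `ip`, or None."""
    addr = ipaddress.ip_address(ip)
    for vlan_id, subnet in VLANS.items():
        if addr in ipaddress.ip_network(subnet):
            return vlan_id
    return None

print(vlan_for_ip("192.168.60.55"))  # FPP controller -> 60
```

Handy for sanity-checking static reservations against the table when adding devices.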
WiFi SSIDs
| SSID | VLAN / Network | Bands | Purpose |
|---|---|---|---|
| stark_IOT | stark_IOT (20) | 2.4 / 5 GHz | Bulk IoT |
| stark_user | stark_user (10) | 2.4 / 5 GHz | User phones / laptops |
| stark_IOT+ | IOT+ (50) | 2.4 / 5 GHz | Special IoT / bridges |
3. Proxmox Cluster Architecture
The hypervisor layer currently consists of two main Proxmox nodes plus a qdevice, with a future third node / NAS hybrid.
PVE1
- Hostname: pve1
- CPU: Intel Core i5-2500 @ 3.30 GHz (4 cores / 4 threads, 1 socket)
- RAM: 32 GB DDR3L 1600 MHz
  - 4 × 8 GB Timetec DDR3L (PC3L-12800) UDIMMs
- Disks (approximate):
  - Multiple 1 TB WDC WD1003FBYX enterprise HDDs
  - Multiple 1 TB Seagate ST91000640NS drives
  - Total of ~12 × 1 TB disks for VM storage
- Storage stack:
  - System disk on onboard Intel SATA controller
  - Data disks on GLOTRENDS SATA card, grouped into Proxmox storage (LVM/ZFS + zvols)
- HBAs / SATA:
  - Onboard Intel SATA controller (RAID mode)
  - ASMedia ASM1064 SATA controller
  - GLOTRENDS SA3112-C 12-Port PCIe x1 SATA Expansion Card
- Networking:
  - Onboard Intel 82579LM Gigabit NIC
  - Intel I350 quad-port 1 GbE PCIe NIC
  - vmbr interfaces used for:
    - LAN / management
    - Cluster interconnect (10.10.10.0/30 link to PVE2)
- Roles:
  - Hosts many of the core VMs (Nextcloud, NPM, Jellyfin, Prometheus/Grafana, etc.)
  - Part of the 2-node Proxmox cluster
PVE2
- Hostname: pve2
- CPU: Intel Core i5-4570 @ 3.20 GHz (4 cores / 4 threads, 1 socket)
- RAM: 32 GB DDR3L 1600 MHz
  - Same Timetec 4 × 8 GB kit as PVE1
- Disks (approximate):
  - Multiple 1 TB Seagate ST91000640NS drives
  - Total of ~12 × 1 TB disks for VM storage
- Storage stack:
  - System disk on onboard Intel 9-Series SATA controller (AHCI)
  - Data disks on GLOTRENDS SATA card
- HBAs / SATA:
  - Intel 9 Series SATA controller (AHCI mode)
  - ASMedia ASM1064 SATA controller
  - GLOTRENDS SA3112-C 12-Port PCIe x1 SATA Expansion Card
- Networking:
  - Same Intel I350 quad-port 1 GbE NIC family as PVE1 (4 ports)
  - Bridges mirror the PVE1 layout for easy VM migration
- Roles:
  - Redundant node for critical services
  - General lab workloads and testing
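The bridge layout above can be sketched as a Proxmox `/etc/network/interfaces` fragment. NIC names and the management address are placeholders; only the 192.168.1.1 gateway and the 10.10.10.0/30 interconnect come from this page:

```
# /etc/network/interfaces (sketch -- NIC names are placeholders)
auto lo
iface lo inet loopback

iface eno1 inet manual          # onboard NIC, LAN/management uplink
iface enp2s0f0 inet manual      # I350 port used for the cluster link

auto vmbr0                      # LAN / management bridge
iface vmbr0 inet static
    address 192.168.1.x/24      # node address (placeholder)
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1                      # cluster interconnect (10.10.10.0/30)
iface vmbr1 inet static
    address 10.10.10.1/30       # 10.10.10.2/30 on PVE2
    bridge-ports enp2s0f0
    bridge-stp off
    bridge-fd 0
```

Mirroring bridge names on both nodes (as noted above) is what lets migrated VMs keep their network config unchanged.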
QDevice
- Hardware: Raspberry Pi
- Purpose: runs corosync-qdevice to provide a third quorum vote for the 2-node Proxmox cluster
- Goal: avoid split-brain if one Proxmox node goes offline
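With two nodes alone, losing either one drops the cluster below majority; the qdevice adds a third vote (in Proxmox it is registered with `pvecm qdevice setup <pi-ip>`). The majority arithmetic can be illustrated as:

```python
def has_quorum(votes_present: int, total_votes: int) -> bool:
    """Strict majority: more than half of the total expected votes."""
    return votes_present > total_votes // 2

# 2-node cluster without a qdevice: losing one node loses quorum.
print(has_quorum(1, 2))  # False -> the surviving node cannot safely continue

# 2 nodes + qdevice = 3 votes: one node plus the qdevice still form a majority.
print(has_quorum(2, 3))  # True
```

This is why the Pi earns its keep: a surviving node plus the qdevice keeps VMs running instead of the whole cluster freezing.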
Future: NAS / Proxmox Hybrid (Define 7 XL)
- Case: Fractal Design Define 7 XL
- Motherboard: ASRock Rack X570D4U
- CPU: Ryzen (exact model TBD)
- RAM: TBD (planned upgrade path from 32 GB → higher)
- Disks: 16 × 6 TB SAS enterprise drives via dual HBAs
- Network:
  - 1 × dedicated management port
  - 2 × 1 GbE
  - 2 × 10 GbE (Intel X550)
- Role:
  - High-capacity NAS for the cluster
  - Additional Proxmox node for storage-heavy workloads
  - Long-term "set it and forget it" anchor of TorresVault 2.0
Future: Services / Mini-PC Cluster
- 2 × MINISFORUM UM890 Pro mini PCs
- Planned roles:
  - Lightweight Kubernetes / k3s or Docker Swarm node(s)
  - Local AI / automation services
  - Offload non-critical or experimental workloads from PVE1/PVE2
4. Core Services Layout
Key always-on services and where they live conceptually:
- DNS & Filtering
  - Pi-hole pair with VIP 192.168.1.5
  - Handles internal DNS, including `torresvault.com` / `in.torresvault.com`
- Reverse Proxy
  - NGINX Proxy Manager
  - Exposes external services under `torresvault.com`
  - Internal apps reachable via `in.torresvault.com`
- Storage & Files
  - Nextcloud VM
  - Backed by Proxmox storage + future NAS
- Monitoring
  - Prometheus + Grafana
  - Targets: Proxmox nodes, UniFi, FPP, key VMs & containers
- Home Automation
  - Home Assistant (currently on a Pi)
  - Integrations:
    - UniFi presence / network health
    - BLE tracking
    - FPP (192.168.60.55)
    - WLED (including car warning instance)
    - Zigbee / Z-Wave / other smart devices
- Media
  - Jellyfin VM
  - Access protected via NPM / auth
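One common way to float the 192.168.1.5 VIP between a Pi-hole pair is VRRP via keepalived; a sketch under that assumption (this page doesn't record the actual failover mechanism, and the interface name, router ID, priorities, and password are placeholders):

```
# /etc/keepalived/keepalived.conf on the primary Pi-hole (sketch)
vrrp_instance PIHOLE_VIP {
    state MASTER            # BACKUP on the secondary
    interface eth0          # placeholder NIC name
    virtual_router_id 51
    priority 150            # lower (e.g. 100) on the secondary
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme  # placeholder
    }
    virtual_ipaddress {
        192.168.1.5/24      # DNS VIP from above
    }
}
```

Clients only ever point at the VIP, so either Pi-hole can be taken down for updates without breaking DNS.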
5. Automation & Lighting (Torres Family Lights)
The holiday light show runs on a dedicated VLAN and infrastructure:
- VLAN 60 – Torres Family Lights – 192.168.60.0/24
- FPP primary controller – 192.168.60.55
- Kulp 32 controllers and smart receivers
  - Mega tree, matrix, rooflines, and other props
- Home Assistant controls:
  - Start/stop show
  - Sequence selection
  - Monitoring FPP state
- WLED instances:
  - `wled_car_warning` used for in-car item reminders
This layer is intentionally isolated using its own VLAN and firewall rules, while still tightly integrated with Home Assistant for automations.
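Home Assistant's start/stop controls typically boil down to calls against FPP's HTTP API. A minimal sketch of building such a request with only the stdlib; the `/api/command` endpoint, payload shape, and the `MainShow` playlist name are assumptions to verify against the installed FPP version's docs:

```python
import json
from urllib.request import Request

FPP_HOST = "192.168.60.55"  # FPP primary controller (from above)

def fpp_command(command: str, args: list) -> Request:
    """Build (but do not send) a POST to FPP's command endpoint."""
    payload = json.dumps({"command": command, "args": args}).encode()
    return Request(
        f"http://{FPP_HOST}/api/command",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Illustrative: start a playlist once ("false" = do not repeat).
req = fpp_command("Start Playlist", ["MainShow", "false"])
print(req.full_url)  # http://192.168.60.55/api/command
```

Sending it is one `urllib.request.urlopen(req)` away, which is essentially what a Home Assistant RESTful command wraps.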
6. Future Direction (TorresVault 2.0)
Planned upgrades and architectural goals:
- Bring NAS / Proxmox hybrid online as a third cluster member and storage anchor.
- Deploy the 2 × MINISFORUM UM890 Pro mini PCs as a lightweight services/AI cluster.
- Migrate more VMs to containerized services (Docker / k3s) where it makes sense.
- Standardize on voice + automation (Home Assistant Voice, local AI).
- Tighten monitoring + alerting across Proxmox, UniFi, FPP, Pi-hole, and services.
- Document every major component and procedure in this wiki for future you.
architecture/overview.1769218030.txt.gz · Last modified: by nathna
