torresvault:architecture:overview · Last modified: 2026/01/23 14:03 by nathna
  
----

==== 3. Proxmox Cluster Architecture ====
=== PVE1 ===

  * Hostname: **pve1**
  * CPU: **Intel Core i5-2500 @ 3.30 GHz (4 cores / 4 threads, 1 socket)**
  * RAM: **32 GB DDR3L 1600 MHz**
    * 4 × 8 GB Timetec DDR3L (PC3L-12800) UDIMMs
  * Disks (approximate):
    * Multiple **1 TB WDC WD1003FBYX** enterprise HDDs
    * Multiple **1 TB Seagate ST91000640NS** drives
    * Total of ~12 × 1 TB disks for VM storage
  * Storage stack:
    * System disk on onboard Intel SATA controller
    * Data disks on GLOTRENDS SATA card, grouped into Proxmox storage (LVM/ZFS + zvols)
  * HBAs / SATA:
    * Onboard **Intel SATA controller (RAID mode)**
    * **ASMedia ASM1064 SATA controller**
    * **GLOTRENDS SA3112-C 12-Port PCIe x1 SATA Expansion Card**
  * Networking:
    * Onboard **Intel 82579LM Gigabit NIC**
    * **Intel I350 quad-port 1 GbE** PCIe NIC
    * vmbr interfaces used for:
      * LAN / management
      * Cluster interconnect (10.10.10.0/30 link to PVE2)
  * Roles:
    * Hosts many of the core VMs (Nextcloud, NPM, Jellyfin, Prometheus/Grafana, etc.)
    * Part of 2-node Proxmox cluster
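The /30 on the cluster interconnect is deliberately the smallest subnet that still fits a point-to-point link. A quick sketch with Python's `ipaddress` module (only the 10.10.10.0/30 prefix comes from this page; assigning .1 to pve1 and .2 to pve2 is an assumption for illustration):

```python
# Sketch: why a /30 suits the PVE1 <-> PVE2 interconnect.
# 10.10.10.0/30 is from this page; the per-node .1/.2 split is assumed.
import ipaddress

link = ipaddress.ip_network("10.10.10.0/30")
usable = [str(h) for h in link.hosts()]

print(link.num_addresses)  # 4 (network + 2 usable hosts + broadcast)
print(usable)              # ['10.10.10.1', '10.10.10.2'] -> one per node
```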
  
=== PVE2 ===

  * Hostname: **pve2**
  * CPU: **Intel Core i5-4570 @ 3.20 GHz (4 cores / 4 threads, 1 socket)**
  * RAM: **32 GB DDR3L 1600 MHz**
    * Same Timetec 4 × 8 GB kit as PVE1
  * Disks (approximate):
    * Multiple **1 TB Seagate ST91000640NS** drives
    * Total of ~12 × 1 TB disks for VM storage
  * Storage stack:
    * System disk on onboard Intel 9-Series SATA controller (AHCI)
    * Data disks on GLOTRENDS SATA card
  * HBAs / SATA:
    * **Intel 9 Series SATA controller (AHCI mode)**
    * **ASMedia ASM1064 SATA controller**
    * **GLOTRENDS SA3112-C 12-Port PCIe x1 SATA Expansion Card**
  * Networking:
    * Same **Intel I350 quad-port 1 GbE** NIC family as PVE1 (4 ports)
    * Bridges mirror PVE1 layout for easy VM migration
  * Roles:
    * Redundant node for critical services
    * General lab workloads and testing
  
=== QDevice ===

  * Hardware: **Raspberry Pi**
  * Purpose: runs **corosync-qdevice** to provide quorum for the 2-node Proxmox cluster
  * Goal: avoid split-brain if one Proxmox node goes offline
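Wiring the Pi in typically follows the standard Proxmox QDevice procedure. A sketch of the usual commands (they only run on the actual cluster; the Pi address `10.0.0.5` is a placeholder, not from this page):

```shell
# On the Raspberry Pi (external vote daemon):
apt install corosync-qnetd

# On both PVE nodes (client side):
apt install corosync-qdevice

# On one PVE node, register the QDevice (placeholder IP):
pvecm qdevice setup 10.0.0.5

# Verify: the 2-node cluster should now show 3 expected votes.
pvecm status
```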
  
=== Future: NAS / Proxmox Hybrid (Define 7 XL) ===

  * Case: **Fractal Design Define 7 XL**
  * Motherboard: **ASRock Rack X570D4U**
  * CPU: **Ryzen (exact model TBD)**
  * RAM: **TBD (planned upgrade path from 32 GB → higher)**
  * Disks: **16 × 6 TB SAS enterprise drives** via dual HBAs
  * Network:
    * 1 × dedicated management port
    * 2 × 1 GbE
    * 2 × 10 GbE (Intel X550)
  * Role:
    * High-capacity NAS for the cluster
    * Additional Proxmox node for storage-heavy workloads
    * Long-term “set it and forget it” anchor of **TorresVault 2.0**
=== Future: Services / Mini-PC Cluster ===

  * **2 × MINISFORUM UM890 Pro** mini PCs
  * Planned roles:
    * Lightweight Kubernetes / k3s or Docker swarm node(s)
    * Local AI / automation services
    * Offload non-critical or experimental workloads from PVE1/PVE2
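If the k3s route is chosen, bootstrapping the two UM890 Pros is one command per box using the upstream installer. A sketch (these commands install software over the network; `<server-ip>` and `<token>` are placeholders to fill in):

```shell
# On the first mini PC (k3s server / control plane):
curl -sfL https://get.k3s.io | sh -

# Read the join token from the server:
cat /var/lib/rancher/k3s/server/node-token

# On the second mini PC, join as an agent (placeholder IP and token):
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```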
  
----