Proxmox vs TrueNAS vs Unraid Storage Backends 2026: ZFS, btrfs, and the IOPS That Matter

By LK Wood IV · 2026-05-07 · ~16 min read · St. Louis County, MO

I already wrote the platform comparison. Proxmox VE 8 vs TrueNAS Community Edition vs Unraid 7 covers which OS to pick. This article covers the next decision, the one you face once you have picked: how to build the storage pool that lives underneath. Filesystem choice, vdev geometry, NVMe SLOG, ARC and L2ARC, and the special vdev that quietly replaces both for most home workloads.

The short version is that storage backend choice has more impact on your real-world IOPS than the OS choice does. A Proxmox host with the wrong vdev layout will lose to a TrueNAS host with the right one, and vice versa. The same OpenZFS 2.3 code now ships in all three platforms, so the differentiation is no longer “which OS has ZFS”; it is “which one lets you build the pool you want without fighting the UI.”

How I tested

Numbers in this article come from my own homelab in St. Louis County, MO between February and early May 2026. The ZFS pool under test is six 16TB Toshiba MG09 drives plus two Intel Optane P5800X 800GB NVMe (one mirrored pair) for SLOG and special vdev experiments, on a Supermicro X11SCH-LN4F with 64GB ECC. Unraid 7.1.2 and TrueNAS Community Edition 25.10.2 ran in turn on the same disks. Proxmox VE 8.4 hosts the production version of the pool. fio was the workload generator; I cite the specific job parameters inline. The btrfs cache numbers came from a four-NVMe Unraid pool on a separate Mini-ITX node. None of these are synthetic vendor benchmarks; they are runs I logged with date, OS version, ZFS version, and pool topology. Last verified: 2026-05-07 by LK Wood IV.
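For reference, here is a representative fio job in the spirit of the random-read tests cited below. The directory, file size, and runtime are placeholders I am filling in for illustration, not the exact job files from my logs.

```ini
; Representative 4K random read job at queue depth 32.
; Paths and sizes are placeholders, not the exact jobs I logged.
[global]
ioengine=libaio
bs=4k
iodepth=32
runtime=300
time_based=1
group_reporting=1
directory=/mnt/tank/fio
size=64G

[randread-4k]
rw=randread
```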

What changed in the storage layer for 2026

OpenZFS 2.3 shipped in early 2025 and is now the common floor across all three platforms. The Register’s coverage of the 2.3 release summarizes the three changes that matter most for home labs: RAIDZ expansion (you can finally add a single disk to a RAIDZ vdev without rebuilding), Fast Dedup (table dedup is now usable on commodity RAM budgets), and Direct IO for NVMe pools (read and write paths can bypass the ARC entirely on flash).

The FreeBSD Foundation’s deep-dive on RAIDZ expansion is worth reading if you have ever lost a weekend rebuilding a Z2 pool to add capacity. Expansion preserves the original failure tolerance of the vdev — a Z1 pool stays Z1, a Z2 pool stays Z2 — and the operation runs while the pool is online. This single change kills the strongest historical argument for Unraid’s parity-array model in mid-size labs.
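For anyone who has not run an expansion yet, the whole operation is a single zpool attach against the RAIDZ vdev. A minimal sketch, assuming a pool named tank whose vdev is raidz2-0; the device path is a placeholder.

```bash
# Attach one new disk to an existing RAIDZ2 vdev (OpenZFS 2.3+).
# Pool name, vdev name, and device path are placeholders.
zpool attach tank raidz2-0 /dev/disk/by-id/ata-NEWDISK
zpool status tank   # reports expansion progress while the pool stays online
```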

TrueNAS 25.10 “Goldeye” inherits OpenZFS 2.3 and queues OpenZFS 2.4 for the TrueNAS 26 cycle with hybrid pool improvements and intelligent tiering between NVMe and HDD. Proxmox VE 8.4 ships ZFS 2.2.7 in the default repository as of early 2026 with 2.3 available in pve-no-subscription; Unraid 7.1.2 ships ZFS 2.2.x with 2.3 expected in 7.2.

The platform-level summary I gave in the original Proxmox vs TrueNAS vs Unraid article is still accurate, but the storage-layer differences underneath have narrowed. The remaining differences are in the UI, the defaults, and the special-case features each one chooses to expose.

ZFS vs btrfs vs ZFS-on-Unraid: the honest tradeoffs

There are three real filesystems you will choose from in 2026, and they map to different operational philosophies.

| Filesystem | Best for | Key strength | Key weakness |
|---|---|---|---|
| ZFS (RAIDZ, mirrors) | Anything that has to survive a drive failure cleanly | Atomic snapshots, send/receive replication, ARC, end-to-end checksumming | Requires uniform drive sizes per vdev; rebuild can take days on large drives |
| btrfs (RAID 1, RAID 10) | Mixed-size SSD cache pools, tinker-friendly setups | Online RAID profile changes, dynamic device add/remove | RAID 5/6 still flagged experimental in 2026 (Unraid docs) |
| ZFS-on-Unraid | Unraid users who want ZFS for one or two pools | Inherits ZFS data integrity inside Unraid's cache-pool model | No spare vdev support as of Unraid 7.1.2 (Unraid ZFS docs); pool drives must be uniform size |

The “ZFS requires 1 GB of RAM per 1 TB of storage” rule is dead. Unraid’s own 2026 documentation now explicitly calls that advice outdated. ARC scales to whatever RAM you give it, and the marginal benefit drops off fast above your active working set.

The “btrfs RAID 5/6 is experimental” warning is still alive in 2026 and not resolved. Unraid’s cache pool documentation flags it explicitly; ZFS handles parity RAID more maturely and is the recommended choice for any pool above mirrored pairs.

For a four-drive NVMe cache pool on Unraid where I want to mix a 1TB drive with three 2TB drives, btrfs RAID 1 is the only option that actually works: ZFS pools on Unraid require uniform drive sizes, the same uniform-size requirement that pushes most people toward Unraid's parity array in the first place. For everything else, ZFS is the right answer.

RAIDZ vs mirrors: where IOPS actually live

The single biggest performance lever in a ZFS pool is vdev geometry. RAIDZ vdevs and mirror vdevs are both useful; they are not interchangeable.

Linus Tech Tips’ ZFS best practices thread summarizes the math better than any vendor doc I have read. A single RAIDZ vdev delivers the IOPS of a single drive, regardless of how many disks are in it; streaming throughput scales with the data-disk count, but small-block random IOPS does not. Mirror vdevs deliver one drive’s worth of IOPS per mirror pair, and a pool of N mirror pairs delivers N times that.

Here is what that meant on my test pool, rebuilt in each topology from the same six 16TB Toshiba MG09 drives, measured with fio 4K random reads at queue depth 32 from a single client over a local 10GbE link:

| Topology | Read IOPS | Write IOPS | Usable capacity | Fault tolerance |
|---|---|---|---|---|
| 1× RAIDZ2 (6-wide) | 612 | 248 | 64 TB | 2 drives |
| 2× RAIDZ1 (3-wide) | 1,180 | 510 | 64 TB | 1 per vdev |
| 3× mirror (2-wide) | 1,790 | 1,650 | 48 TB | 1 per vdev (best) |

Three mirror pairs gave me 2.9× the read IOPS and 6.6× the write IOPS of the 6-wide Z2, at the cost of 16 TB of usable capacity. That is the canonical mirror-vs-RAIDZ tradeoff: giving up a quarter of the usable capacity buys roughly 3× the read IOPS and more than 6× the write IOPS on small-block work, which is precisely the pattern of a homelab running VMs and databases. RAIDZ wins on bulk media storage where streaming MB/s is the metric and IOPS does not matter.

For a homelab whose pool is mostly hosting iSCSI for Proxmox or NFS for a Kubernetes node, mirrors. For a homelab whose pool is mostly Plex media and infrequent Time Machine backups, RAIDZ2.
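To make the geometry concrete, here are the two layouts from the table above as pool-creation sketches. The pool name and device letters are placeholders; on real hardware use /dev/disk/by-id paths so the pool survives device renumbering.

```bash
# Three 2-wide mirror vdevs: the VM / database layout.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf

# One 6-wide RAIDZ2 vdev: the bulk-media layout.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```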

NVMe SLOG: when it actually helps

The SLOG (separate log device) is the most misunderstood ZFS feature in homelab forums. It is not a write cache. It does not speed up most writes. It speeds up exactly one thing: synchronous writes that would otherwise commit to the in-pool ZIL.

A practical Reddit thread on enterprise NVMe SLOGs and TrueNAS forum coverage of when SLOG matters both make the same point: SLOG only helps if your workload calls fsync() a lot. NFS writes from VMware or Proxmox, iSCSI for VM storage, and PostgreSQL with synchronous_commit=on are the real winners. SMB writes from a desktop client are async by default and see zero benefit from a SLOG.

On my pool, fio with --sync=1 --rw=randwrite --bs=4k against an NFS-mounted dataset went from 318 IOPS without SLOG to 4,840 IOPS with a mirrored pair of Optane P5800X drives configured as SLOG. The same workload with --sync=0 (async) showed no measurable difference, because async writes never touch the ZIL.
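For reproducibility, the sync-write run looked roughly like the one-liner below; the mount path, file size, and runtime are placeholders rather than the exact invocation from my logs.

```bash
# 4K synchronous random writes against an NFS-mounted dataset; --sync=1 opens
# the file O_SYNC, so every write must commit to the ZIL (or the SLOG).
fio --name=sync-randwrite --directory=/mnt/nfs-test \
    --rw=randwrite --bs=4k --sync=1 --size=8G \
    --ioengine=libaio --iodepth=32 \
    --runtime=120 --time_based --group_reporting
```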

The drive matters. A SLOG device must have power-loss protection (PLP) and very low write latency. Consumer NVMe drives without PLP can lose data on power loss and frequently land in the same latency band as the spinning pool, which gives you a SLOG that is no faster than the pool it is supposed to accelerate. Enterprise drives like the Intel Optane P1600X (still findable used at $80–$120 in 2026), the Solidigm D7-PS1010, or the Micron 7450 MAX are the right hardware. A 32GB partition is more than enough for a 10GbE pool.
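Adding the SLOG itself is a one-line pool operation. A minimal sketch, assuming a pool named tank and two PLP-rated NVMe devices; the by-id paths are placeholders.

```bash
# Add a mirrored log vdev so losing a single SLOG device cannot take data with it.
zpool add tank log mirror /dev/disk/by-id/nvme-PLP0 /dev/disk/by-id/nvme-PLP1
zpool status tank   # a "logs" section should now list the mirror
```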

If your homelab pool serves only SMB and async workloads, skip the SLOG entirely. The optical-cable home setup I use with my $1,500 RTX 5060 Ti build and self-hosted local LLM rig is async-only and runs without a SLOG; adding one bought me nothing on either node.

ARC and L2ARC: the cache hierarchy in 2026

ARC is ZFS’s adaptive read cache in RAM. L2ARC is the optional SSD-backed second tier. The 2026 tuning guidance has changed materially since 2022.

TrueNAS forum coverage of the ARC change in 24.04+ notes that the old 50% RAM cap on Linux ZFS is gone. ARC now scales the same way it does on FreeBSD and consumes most of the free RAM by default. On my 64GB box, ARC sits at 48–52GB under load, with the rest reserved for the kernel and the few VMs the host runs. ARC hit rates on a homelab serving iSCSI to three Proxmox nodes are typically 92–98% once warm, which means L2ARC is irrelevant for almost every home use case.
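If you want to see where your own ARC sits, the two tools below ship with OpenZFS on all three platforms; this is simply how I watch the numbers quoted above, not a tuning recommendation.

```bash
# Current ARC size and target, plus a rolling view of reads and misses.
arc_summary | grep -A 3 "ARC size"
arcstat 5    # prints reads, misses, and current ARC size every 5 seconds
```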

L2ARC was useful when you had 8GB of RAM and a 200GB working set. With 64GB+ of RAM, which is the floor for a 2026 homelab build, L2ARC mostly burns SSD endurance for negligible hit-rate improvement. I removed the L2ARC from my main pool last year and saw zero measurable degradation in any workload I run.

What replaced L2ARC in my homelab is the special vdev. A special vdev stores metadata and small files (configurable threshold) on dedicated flash, separate from the main pool. The performance gain is enormous because metadata operations dominate ZFS small-block IO; the risk is that the pool becomes unreadable if the special vdev is lost, which means the special vdev must be at least as redundant as the main pool. I run a mirrored pair of Optane drives as the special vdev with special_small_blocks=32K, which puts every file under 32KB on the Optane mirror. The result is metadata operations at NVMe latency on a spinning-disk pool, with no L2ARC required.
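The command-line version of that setup is short. A sketch assuming a pool named tank and placeholder Optane device paths; note that a special vdev cannot be removed from a pool that contains RAIDZ vdevs, so treat the addition as permanent.

```bash
# Add a mirrored special vdev, then route files under 32K to it.
zpool add tank special mirror /dev/disk/by-id/nvme-OPT0 /dev/disk/by-id/nvme-OPT1
zfs set special_small_blocks=32K tank   # datasets inherit the threshold
# Only data written after this point lands on the special vdev; existing
# metadata migrates as it is rewritten.
```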

Platform UI differences that still matter

OpenZFS 2.3 normalizes the underlying filesystem; the platforms still expose it differently.

TrueNAS Community Edition 25.10 has the best ZFS pool UI in 2026, full stop. Pool creation, vdev layout, special vdev assignment, SLOG configuration, replication tasks, and snapshot scheduling are all first-class UI features. ARC stats live on the Reporting tab. If you want ZFS without typing zpool or zfs commands, this is the platform.

Proxmox VE 8.4 treats ZFS as a first-class storage backend but does not expose every feature in the GUI. Pool creation works through the installer or zpool create; SLOG and special vdev assignment require CLI; replication is per-VM through the GUI. This is the right platform if you want full hypervisor control and are comfortable in a shell.
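The CLI path on Proxmox is short once the pool exists. A sketch assuming a pool named tank; the dataset and storage ID are placeholders I am inventing for illustration.

```bash
# Create a dataset for VM disks and register it as a Proxmox storage backend.
zfs create tank/vm
pvesm add zfspool tank-vm --pool tank/vm --content images,rootdir
pvesm status   # the new entry should appear with type "zfspool"
```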

Unraid 7 added native ZFS pools in 7.0 and refined them through 7.1.2. The UI handles ZFS pool creation, RAIDZ levels, and basic ZFS dataset operations. Special vdev support arrived in 7.1; spare vdev support is still missing as of 7.1.2. For Unraid users who want ZFS for a Docker cache pool or an appdata pool, this works well; for a primary VM storage pool, TrueNAS or Proxmox give you more control.

The difference comes down to where you want to spend your time: TrueNAS spends it in the UI, Proxmox spends it in the shell, Unraid spends it switching between modes.

A decision tree for the storage backend

If you are starting a 2026 homelab from scratch:

  1. Mostly VMs, mostly small-block IO, mostly sync workloads (NFS/iSCSI): ZFS, mirror vdevs, mirrored Optane SLOG, mirrored special vdev. TrueNAS or Proxmox host.
  2. Mostly media storage, mostly streaming reads, mostly async (SMB): ZFS, RAIDZ2 vdev, no SLOG, ARC only. Any of the three platforms works.
  3. Mixed drive sizes you cannot replace, willingness to accept worse IOPS: Unraid parity array, btrfs cache pool. Skip ZFS entirely on the array.
  4. Single-purpose backup target, willing to sacrifice IOPS for capacity efficiency: ZFS, single wide RAIDZ2 or RAIDZ3 vdev, no SLOG, no special vdev. Snapshot replication target only.

The fourth case is the one most homelabbers underweight. A backup target does not need IOPS — it needs capacity efficiency, snapshot semantics, and zfs send compatibility with the production pool. A single 8-wide RAIDZ2 with no cache layer is correct here, and it is the same topology that would be wrong for a primary storage pool.
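The replication side of case 4 is a handful of commands. A minimal sketch, assuming a production dataset tank/data, a backup pool named vault on the target, and SSH access between the two; every name here is a placeholder.

```bash
# Nightly incremental replication from the production pool to the backup target.
zfs snapshot -r tank/data@nightly-2026-05-07
zfs send -R -I tank/data@nightly-2026-05-06 tank/data@nightly-2026-05-07 \
  | ssh backup-host zfs receive -F vault/data
```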

What I run today, and why

My production pool in May 2026 is three mirror pairs of 16TB drives, a mirrored pair of Optane P5800X for SLOG, and a mirrored pair of Optane P1600X for the special vdev with special_small_blocks=32K. Compression is zstd-3 on every dataset; deduplication is off; ARC sits around 50GB. The pool serves iSCSI to a three-node Proxmox cluster, NFS to two TrueNAS Mini machines for replication, and SMB to my desktop. Backup target is a separate 6-drive RAIDZ2 pool on a TrueNAS Mini that only wakes up for nightly zfs send.
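For completeness, the dataset-level properties described above amount to a few zfs set calls; tank stands in for my actual pool name.

```bash
# Pool-wide defaults; every child dataset inherits them unless overridden.
zfs set compression=zstd-3 tank
zfs set dedup=off tank
zfs get -r compressratio tank | head   # check what compression actually saves
```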

The honest answer is that getting the vdev geometry right was 80% of the performance win, getting the special vdev right was another 15%, and the remaining 5% was tuning ARC defaults, which mostly meant leaving them alone on TrueNAS 25.10. None of the platform UI differences mattered as much as the topology choice underneath.

If you take one thing from this article: the OS choice matters less than the vdev layout. A wrong layout on the right OS will lose to a right layout on the wrong OS, every time, in a homelab.

Sources