In the ever-evolving landscape of virtualization, efficient memory management is crucial to ensuring optimal performance and resource utilization. VMware ESXi, a powerful hypervisor, introduces an innovative feature called Memory Tiering that revolutionizes how virtual machines (VMs) interact with system memory. This blog post delves into the intricacies of ESXi Memory Tiering, exploring its benefits, implementation, and real-world impact on data center operations.
NVMe PCIe Storage and Memory Tiering
- High-Speed Interface: NVMe PCIe is a high-speed, low-latency storage interface designed for SSDs (Solid-State Drives). It provides significantly faster data transfer rates compared to traditional SATA-based SSDs.
- Direct Access to Memory: NVMe devices attach directly to the PCIe bus and move data to and from the host’s system memory (RAM) via DMA. This bypasses the traditional storage controller stack, resulting in even lower latency and higher throughput for memory operations.
- Performance Benefits: With NVMe, the slower tier of memory (e.g., SSDs or persistent memory) can still offer decent performance. This is because NVMe SSDs have much faster read/write speeds, enabling quicker movement of pages between tiers.
Why NVMe Matters for Memory Tiering
- Reduced Latency: Lower latency access to storage means faster page movement and improved overall system responsiveness, which are crucial for time-sensitive applications.
- High Throughput: NVMe SSDs offer higher data transfer rates, enabling efficient handling of large memory pages and bulk data transfers during VM operations.
- Cost-Effectiveness: By utilizing NVMe storage in the slower tiers, organizations can achieve cost savings while maintaining high performance for critical workloads.
Best Practices
- Storage Configuration: Ensure that the ESXi host has the necessary PCIe slots and support for NVMe devices. Properly configure the storage to align with memory tier requirements.
- Performance Monitoring: Continuously monitor VM performance and memory utilization to fine-tune Memory Tiering policies and ensure optimal page placement.
- Hardware Compatibility: Verify that all hardware components, including memory modules, storage drives, and PCIe cards, are compatible with NVMe to avoid performance bottlenecks.

The integration of NVMe PCIe storage enhances VMware ESXi Memory Tiering’s capabilities, making it a powerful solution for data centers seeking to maximize memory utilization and application performance.
How To Configure Memory Tiering:
SSH into each ESXi host. If your ESXi hosts are managed by VCF/SDDC Manager, you will need to look up the password in SDDC Manager.

Enable Memory Tiering with the command below. To revert and disable it, set the value back to FALSE, then place the host in maintenance mode and reboot it.
esxcli system settings kernel set -s MemoryTiering -v TRUE
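As a sketch, the enable step plus a quick sanity check might look like the following. This assumes the `esxcli system settings kernel list` subcommand with the `-o` option filter is available on your build; the setting still only takes effect after a reboot.

```shell
# Enable the Memory Tiering kernel setting (takes effect after reboot)
esxcli system settings kernel set -s MemoryTiering -v TRUE

# Confirm the configured value before rebooting
esxcli system settings kernel list -o MemoryTiering
```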
Choose the NVMe device to use as tiered memory and note its device path (under /vmfs/devices/disks/).

Create the tier partition on the NVMe device by passing its path under /vmfs/devices/disks/ to esxcli system tierdevice create. Mine is below as an example:
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____INTEL_SSDPED1D280GAH____________________000142FC3BE4D25C
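After creating the tier partition, you can confirm the device was registered. I am assuming here the `list` subcommand of the `tierdevice` namespace described in the Broadcom KB referenced at the end of this post:

```shell
# Verify the NVMe device now appears as a configured tier device
esxcli system tierdevice list
```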


Set the NVMe tier size as a percentage of total DRAM (a host reboot is required for the change to take effect):
esxcli system settings advanced set -o /Mem/TierNvmePct -i 200

Alternatively, in the vSphere Client go to the host -> Configure -> Advanced System Settings and filter for Mem.TierNvmePct. This is where you set the percentage of DRAM to be used as NVMe tiered memory for that specific host.

Configuring the DRAM to NVMe Ratio
As noted in the NVMe Device Recommendations section, hosts are configured by default to use a DRAM to NVMe ratio of 4:1. This can be configured per host to evaluate performance when using different ratios.
The host advanced setting Mem.TierNvmePct sets the amount of NVMe to be used as tiered memory, expressed as a percentage of the total amount of DRAM. A host reboot is required for any changes to this setting to take effect.
For example:
- A value of 25 uses an amount of NVMe equivalent to 25% of total DRAM, a DRAM to NVMe ratio of 4:1. A host with 1 TB of DRAM would use 256 GB of NVMe as tiered memory.
- A value of 50 uses an amount of NVMe equivalent to 50% of total DRAM, a ratio of 2:1. A host with 1 TB of DRAM would use 512 GB of NVMe as tiered memory.
- A value of 100 uses an amount of NVMe equivalent to 100% of total DRAM, a ratio of 1:1. A host with 1 TB of DRAM would use 1 TB of NVMe as tiered memory.
It is recommended that the amount of NVMe configured as tiered memory does not exceed the
total amount of DRAM.
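The ratio arithmetic above can be sketched in a few lines of Python. The helper names are hypothetical (not part of any VMware tooling), and 1 TB is treated as 1024 GB to match the examples:

```python
from fractions import Fraction

def nvme_tier_size_gb(dram_gb: float, tier_nvme_pct: int) -> float:
    """NVMe used as tiered memory: a percentage of total DRAM capacity."""
    return dram_gb * tier_nvme_pct / 100

def dram_to_nvme_ratio(tier_nvme_pct: int) -> str:
    """Express a Mem.TierNvmePct value as a DRAM:NVMe ratio, e.g. 25 -> '4:1'."""
    f = Fraction(100, tier_nvme_pct)  # parts of DRAM per one part of NVMe
    return f"{f.numerator}:{f.denominator}"

# A host with 1 TB (1024 GB) of DRAM:
print(nvme_tier_size_gb(1024, 25))  # 256.0 GB of NVMe tiered memory
print(dram_to_nvme_ratio(25))       # 4:1
print(dram_to_nvme_ratio(50))       # 2:1
```

Staying at or below a value of 100 keeps the NVMe tier within the recommended limit of the total DRAM capacity.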
Reference – https://knowledge.broadcom.com/external/article/311934/using-the-memory-tiering-over-nvme-featu.html