Category: VMware vSAN

Networking · VMware vSAN

VMware vSAN and Remote Direct Memory Access

by Tommy Grot · January 5, 2024 · 3 minute read

Welcome to the first blog post of 2024! We are kicking off the year with a topic that is bound to ignite your vSAN cluster: implementing RDMA (Remote Direct Memory Access) with VMware vSAN. RDMA lets hosts move data directly between each other's memory without the overhead of the TCP/IP stack, promising lower latency and less CPU spent on storage traffic. Whether you are a tech enthusiast, a system administrator, or simply curious about the latest advancements, this post will walk you through configuring RDMA for vSAN. So buckle up and let's get started!

Let's configure your ESXi hosts to be ready for RDMA with vSAN

First, you will want your core networking switches to have Data Center Bridging (DCB) configured on all interfaces that connect to your vSAN cluster (Link to Arista).

Example syntax from my Arista DCS-7050QX-32S-F:
   description ESX01-VDS01-1-VMNIC4
   mtu 9214
   dcbx mode ieee
   speed forced 40gfull
   switchport mode trunk
   priority-flow-control on
   priority-flow-control priority 3 no-drop


Now that the networking is prepared, we will need to SSH into each ESXi host and configure the settings below.

(Screenshot: vSAN cluster health reporting RDMA as not configured.)

While in the SSH session, you will need to configure each host with the parameters below; each code block shows the corresponding esxcli command.

dcbx (int) – DCBX operational mode.
Values: 0 – Disabled; 1 – Enabled, Hardware Mode; 2 – Enabled, Software Mode; 3 – Enable Hardware Mode if supported, otherwise enable Software Mode.

esxcli system module parameters set -m nmlx5_core -p dcbx=3
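If you want to confirm the parameter took before moving on, you can read it back with the standard module-parameter listing (the grep filter is just to shorten the output):

esxcli system module parameters list -m nmlx5_core | grep dcbx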

pfctx (int) – Priority-based Flow Control policy on TX.
Values: 0-255. This is an 8-bit mask in which each bit corresponds to a priority [0-7]. Bit value:
1 – generate pause frames according to the RX buffer threshold on that priority.
0 – never generate pause frames on that priority.
Notes: must be equal to pfcrx. Default: 0.

pfcrx (int) – Priority-based Flow Control policy on RX.
Values: 0-255. This is an 8-bit mask in which each bit corresponds to a priority [0-7]. Bit value:
1 – respect incoming pause frames on that priority.
0 – ignore incoming pause frames on that priority.
Notes: must be equal to pfctx. Default: 0.

We set both to 0x08, which sets bit 3 – priority 3 – matching the no-drop priority 3 configured on the switch above.

trust_state (int) – Port policy used to calculate the switch priority and packet color for incoming packets.
Values: 1 – TRUST_PCP; 2 – TRUST_DSCP. Default: 1.

esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08 trust_state=2 max_vfs=0"

pcp_force (int) – PCP value to force on outgoing RoCE traffic. Cannot be active when dscp_to_pcp is enabled.
Values: -1 – Disabled; 0-7 – PCP value to force. Default: -1.

dscp_force (int) – DSCP value to force on outgoing RoCE traffic.
Values: -1 – Disabled; 0-63 – DSCP value to force. Default: -1.

Since trust_state is set to TRUST_DSCP above, we leave pcp_force disabled (-1) and force DSCP 26 on outgoing RoCE traffic.

esxcli system module parameters set -m nmlx5_rdma -p "pcp_force=-1 dscp_force=26"
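As a final sanity check, it may help to list everything we changed on both modules in one pass (a minimal sketch; it assumes the Mellanox nmlx5_core/nmlx5_rdma drivers used throughout this post):

# Show only the parameters changed on each module
esxcli system module parameters list -m nmlx5_core | grep -E 'dcbx|pfctx|pfcrx|trust_state'
esxcli system module parameters list -m nmlx5_rdma | grep -E 'pcp_force|dscp_force'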

Repeat the commands above on each ESXi host in the cluster. Once the parameters are set, place each host in maintenance mode and reboot it so the new module parameters take effect.
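If you would rather do the maintenance mode and reboot from the same SSH session, something like the following should work (a sketch; the ensureObjectAccessibility vSAN data-evacuation mode and the reboot reason text are my assumptions):

# Enter maintenance mode while keeping vSAN objects accessible
esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
# Reboot so the new module parameters load with the driver
esxcli system shutdown reboot -r "Apply RDMA module parameters"
# After the host comes back up, exit maintenance mode
esxcli system maintenanceMode set -e false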

Once all ESXi hosts are configured and rebooted, vSAN health should report the RDMA configuration as healthy.
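You can also confirm from the host itself that the RDMA devices are up; for example (vmrdma0 is a typical device name for the first RDMA-capable uplink, but yours may differ):

# List RDMA devices, their state, and paired uplinks
esxcli rdma device list
# Per-device statistics for a quick sanity check
esxcli rdma device stats get -d vmrdma0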

Below are some network backbone tests over RDMA!

VMware vSAN

Reuse Ex-vSAN Host, Remove vSAN configuration with esxcli

by Tommy Grot · October 17, 2020 · 1 minute read

Trying to reuse a previous VMware ESXi host, but the vSAN disks are still claimed? No worries – below we will walk you through how to release the drives and get your disks back so you can rebuild your vSAN cluster!

First, run this command to check whether the disks are part of a vSAN cluster:

esxcli vsan storage list | grep naa

Once you have the list of claimed disks, run the command below to make sure automatic disk claiming is disabled for this procedure:

esxcli vsan storage automode set --enabled false
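As a side note, when you later rebuild the cluster and want automatic disk claiming back, the same command with --enabled true should re-enable it:

esxcli vsan storage automode set --enabled true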

With automatic claiming disabled, you may now remove the vSAN disk groups with the command below. Replace (VSAN Disk Group Name) with the corresponding naa ID from the list above.

esxcli vsan storage remove -s (VSAN Disk Group Name)
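For example, with a hypothetical device ID (the naa ID below is made up; substitute one returned by the list command):

esxcli vsan storage remove -s naa.55cd2e404c1a2b3c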

After all vSAN drives have been removed, verify that they no longer show up by running the same command as in the beginning. If nothing is returned, you have successfully removed all vSAN disks.

esxcli vsan storage list | grep naa

To finalize the removal, the host needs to leave the vSAN cluster, so run this command:

esxcli vsan cluster leave
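You can confirm the host has actually left the cluster by querying its cluster status:

esxcli vsan cluster get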

You will see a "vSAN Clustering is not enabled on this host" message.

Once you have completed all those steps, the SAS SSDs are no longer claimed by vSAN, and the ESXi host is ready for a fresh installation!

VMware vSAN

Setting Up – Nested ESXi 6.7 on vSAN

by Tommy Grot · April 12, 2019 · 1 minute read

Today we will be installing Nested ESXi 6.7 on vSAN.

This is not a supported type of deployment for Production Environments. USE AT YOUR OWN RISK!

Before installing ESXi 6.7, you will need to run an ESXCLI command via SSH on each host that is part of the vSAN cluster.

To enable SSH on ESXi 6.7, go to the desired host, click Configure -> System -> Services, select SSH, and click Start.

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
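To double-check the setting afterward, you can read the advanced option back out on each host:

esxcli system settings advanced list -o /VSAN/FakeSCSIReservations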

After the command has been entered, continue the installation.

After installation, set up ESXi with a static IP address. Once the IP address is set, apply the settings and select Y – Yes (this will not interrupt VM traffic, only management).
