Cloud

Deploying VMware Cloud Director Availability 4.3

by Tommy Grot March 24, 2022
written by Tommy Grot 4 minutes read

Today's topic is deploying VMware Cloud Director Availability for VMware Cloud Director! This walkthrough will guide you through deploying VCDA from an OVA to a working appliance. All of this is built within my home lab. A separate guide will cover how to set up VCDA with multiple VCDs, create a peer between them, and show some migrations and so on! 🙂

A little about VCDA! – From VMware’s site

VMware Cloud Director Availability™ is a Disaster Recovery-as-a-Service (DRaaS) solution. Between multi-tenant clouds and on-premises, with asynchronous replications, VMware Cloud Director Availability migrates, protects, fails over, and reverses failover of vApps and virtual machines. VMware Cloud Director Availability is available through the VMware Cloud Provider Program. VMware Cloud Director Availability introduces a unified architecture for the disaster recovery and migration of VMware vSphere® workloads. With VMware Cloud Director Availability, the service providers and their tenants can migrate and protect vApps and virtual machines:

  • From an on-premises vCenter Server site to a VMware Cloud Director™ site
  • From a VMware Cloud Director site to an on-premises vCenter Server site
  • From one VMware Cloud Director site to another VMware Cloud Director site

Cloud Site

In a single cloud site, one VMware Cloud Director Availability instance consists of:

  • One Cloud Replication Management Appliance
  • One or more Cloud Replicator Appliance instances
  • One Cloud Tunnel Appliance

Links!

Replication Flow – Link to VMware

  • Multiple Availability cloud sites can coexist in one VMware Cloud Director instance. In a site, all the cloud appliances operate together to support managing replications for virtual machines, secure SSL communication, and storage of the replicated data. The service providers can support recovery for multiple tenant environments that can scale to handle the increasing workloads.

Upload the OVA for VCDA

Create a friendly name within this deployment; I like to create a name that is meaningful and correlates to the service.

Proceed to step 4

Accept this lovely EULA 😛

Since this is my lab, I did a combined appliance for this deployment. I will also do a separate appliance for each service configuration.

Choose the network segment your VCDA appliance will live on; I put my appliance on an NSX-T backed segment on the overlay network.

Fill in the required information, and also create an A record (with a matching PTR record) for the VCDA appliance so that when it does its reverse DNS lookup it can successfully generate a self-signed certificate and the appliance can keep building successfully.
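Before kicking off the deployment, it can save a failed build to confirm the record resolves both ways. A quick check with nslookup from any workstation (vcda.lab.local is just an example hostname – use the FQDN you created the A record for, and the IP you plan to assign, such as the 172.16.204.100 used later in this post):

nslookup vcda.lab.local
nslookup 172.16.204.100

The forward lookup should return the appliance IP, and the reverse lookup should return the FQDN that the self-signed certificate will be generated against.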

After you hit submit, watch the deployment; you can open the VMware web/remote console and watch for any issues or errors that may cause the deployment to fail.

I ran into a snag! The network configuration did not accept all the information I filled in for the network adapter during the VCDA appliance OVA deployment. So here I had to log in as root on the VCDA appliance; it forced me to reset the password that I originally set during the OVA deployment.

Connect to the VMware Cloud Director Availability by using a Secure Shell (SSH) client.

Open an SSH connection to Appliance-IP-Address.
Log in as the root user.

To retrieve all available network adapters, run: /opt/vmware/h4/bin/net.py nics-status

To check the status of a specific adapter: /opt/vmware/h4/bin/net.py nic-status ens160

To reconfigure the adapter with a static address and gateway: /opt/vmware/h4/bin/net.py configure-nic ens160 --static --address 172.16.204.100/24 --gateway 172.16.204.1

After you have updated the network configuration, retrieve the status of the adapter again to confirm the changes:

/opt/vmware/h4/bin/net.py nic-status ens160

After the networking is all good, you may go back to your web browser and open the VCDA UI. Here we will configure the next few steps.

Add the license you have received for VCDA – this license is different from the one VMware Cloud Director utilizes.

Configure the Site Details for your VCDA. I chose the Classic data engine since I do not have VMware Cloud on AWS.

Add your first VMware Cloud Director instance in this next step.

Once you have added the first VCD, you will be asked for the next few steps. Here we will add the Lookup Service (the vCenter Server lookup service) along with Replicator 1; since my setup uses a combined appliance, the Replicator IP is the same as my VCDA deployment and only the port is different.

Then I created a basic password for this lab simulation. Use a secure password!! 🙂

Once all is completed you should see a dashboard like the one below. We have successfully deployed VMware Cloud Director Availability! In the next blog post we will get into the nitty gritty of migrations, RPOs, and SLAs as we explore this new service, which is an add-on to VMware Cloud Director!

Cloud

Photon OS Emergency Mode – Fix Corrupt Disk

by Tommy Grot March 15, 2022
written by Tommy Grot 0 minutes read

For this little walkthrough, we will be using the VMware Cloud Director 10.3.2a appliance I have in my lab; it did not shut down safely, so we will repair it! 🙂

Reboot the VMware Cloud Director appliance, then press ‘e’ immediately to load into GRUB, and at the end of $systemd_cmdline add the following:

systemd.unit=emergency.target

Then hit F10 to boot

Run the following command to repair the disk:

e2fsck -y /dev/sda3
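If you want to double-check which partition holds the damaged filesystem, or confirm the repair before rebooting, a couple of optional commands can help. This is my own addition and assumes the default appliance disk layout (and that lsblk is available in emergency mode):

lsblk -f               # confirm /dev/sda3 is the partition carrying the damaged ext filesystem
e2fsck -n /dev/sda3    # read-only pass after the repair; it should now report a clean filesystem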

Once repaired, shut down the VMware Cloud Director appliance and then power it back on.

VCD is now loading!

Successfully repaired a corrupted disk on Photon OS!

Networking, VMware NSX

NSX-T 3.2 VRF Lite – Overview & Configuration

by Tommy Grot February 20, 2022
written by Tommy Grot 7 minutes read

Hello! Today's blog post will be about building an NSX-T VRF Tier-0 with a VRF Lite topology and how it is set up! We will be doing an Active/Active topology with VRF.

A little about what a VRF (Virtual Routing and Forwarding) is – it allows you to carve a logical router into multiple routers, so you can have multiple identical networks logically segmented off into their own routing instances. Each VRF has its own independent routing table; this lets multiple networks stay segmented from each other, avoid overlapping, and still function!

The benefit of NSX-T VRF Lite is that it allows you to have multiple virtual networks on the same Tier-0 without needing to build a separate NSX edge node and consume more resources just to segment and isolate one routing instance from another.

Image from VMware

What is a Transport Zone? A Transport Zone (TZ) defines the span of logical networks over the physical infrastructure. Types of Transport Zones: Overlay or VLAN.

When an NSX-T Tier-0 VRF is attached to a parent Tier-0, there are multiple parameters that are inherited by design and cannot be changed:

  • Edge Cluster
  • High Availability mode (Active/Active – Active/Standby)
  • BGP Local AS Number
  • Internal Transit Subnet
  • Tier-0, Tier-1 Transit Subnet.

All other configuration parameters can be independently managed on the Tier-0:

  • External Interface IP addresses
  • BGP neighbors
  • Prefix list, route-map, Redistribution
  • Firewall rules
  • NAT rules

First things first – log into NSX-T Manager. Once you are logged in, you will have to prepare the network and transport zones for this VRF Lite topology to work properly, as it resides within the overlay network within NSX-T!

Go to Tier-0 Gateways -> Select one of your Tier-0 Routers that you have configured during initial setup. I will be using my two Tier-0’s, ec-edge-01-Tier0-gw and ec-edge-02-Tier0-gw for this tutorial along with a new Tier-0 VRF which will be attached to the second Tier-0 gateway.

So, the first thing we need to do is prepare two (2) segments for our VRF T0s to ride the overlay network.

Go to – Segments -> Add a Segment, the two segments will ride the overlay transport zone, no vlan and no gateway attached. Add the segments and click on No for configuring the segment. Repeat for Second segment.

Below is the new segment that will be used for the Transit for the VRF Tier-0.

Just a reminder – this segment will not be connected to any gateway or subnets or vlans

Here are my 2 overlay backed segments, these will traverse the network backbone for the VRF Tier-0 to the ec-edge-01-Tier0-gw.

But the VRF Tier-0 will be attached to the second Tier-0 (ec-edge-02-Tier0-gw), which is on two separate edge nodes (nsx-edge-03, nsx-edge-04) for an Active/Active topology.

Once the segments have been created, we can go and create a VRF T0. Go back to the Tier-0 Gateway window and click on Add Gateway -> VRF.

Name the VRF gateway ec-vrf-t0-gw and attach it to ec-edge-02-Tier0-gw, enable BGP and set an AS number (I used 65101); as the second Tier-0 gateway, it will act as a ghost router for those VRFs.

Once you finish, click Save and continue configuring that VRF Tier-0; next we will configure the interfaces.

Now, we will need to create interfaces on the ec-edge-01-Tier0-gw. Expand Interfaces and click on the number in blue, for my deployment my NSX-T Tier 0 right now has 2 interfaces.

Once you create the 2 Interfaces on that Tier 0 the number of interfaces will change.

Click on Add Interfaces -> Create a unique name for that first uplink which will peer via BGP with the VRF T0.

Allocate a couple of IP addresses. I am using 172.16.233.5/29 for the first interface on ec-edge-01-Tier0-gw, which lives on nsx-edge-01 in my deployment; the VRF T0 will have 172.16.233.6/29. Connect that interface to the overlay segment you created earlier.

Then I created the second interface with the IP 172.16.234.5/29, and the VRF Tier-0 will have 172.16.234.6/29. Each interface is attached to a different edge node, so the first IP (172.16.233.5/29) is attached to edge node 1 and the second IP is on edge node 2.

  • ec-t0-1-vrf-01-a – 172.16.233.5/29 – on segment ec-t0-vrf-transport-1, VRF side 172.16.233.6/29 (overlay segment)
  • ec-t0-1-vrf-01-b – 172.16.234.5/29 – on segment ec-t0-vrf-transport-2, VRF side 172.16.234.6/29 (overlay segment)
  • Jumbo frames: 9000 MTU on both interfaces

Once you have created all required interfaces (below is an example of what I created), make sure everything is set up correctly or the T0 and VRF T0 will not peer up!

Then go to BGP configuration for that nsx-edge-01 and nsx-edge-02 and prepare the peers from it to the VRF Tier-0 router.

Next, we will create another set of interfaces for the VRF T0 itself, these will live on nsx-edge-03 and nsx-edge-04. Same steps as what we created for nsx-edge-01 and nsx-edge-02, just flip it!

  • ec-t0-1-vrf-01-a – 172.16.233.6/29 – nsx-edge-03
  • ec-t0-1-vrf-01-b – 172.16.234.6/29 – nsx-edge-04
  • Jumbo frames: 9000 MTU

Once both interfaces are configured for the Tier-0s, you should have two interfaces with different subnets for the transit between the VRF T0 and the edge 01 gateway Tier-0, with the interfaces created on the specific NSX edges.

Verify the interfaces and the correct segments and if everything is good, click save and proceed to next step.

Everything we just created rides the overlay segments, now we will configure BGP on each of the T0s.

Expand BGP – Click on the External and Service Interfaces (number) mine has 2.

Click Edit on the Tier-0 ec-edge-01-Tier0-gw, expand BGP, and click on BGP Neighbors.

Create the BGP peers on the VRF T0. You will see the interface IPs we created earlier under “Source Addresses”; those are attached to each specific interface on the overlay segments we created for the VRF Lite model.

172.16.233.5 – 172.16.233.6 – nsx-edge-03
172.16.234.5 – 172.16.234.6 – nsx-edge-04

Click Save and proceed to creating the second BGP peer, which will be for nsx-edge-04.

If everything went smoothly, you should be able to verify your BGP peers between the Tier-0 and the VRF Tier-0 as shown below.
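If you prefer to double-check from the command line as well, the NSX edge node admin CLI can show the BGP state per logical router (commands from the NSX-T edge CLI as I recall them – verify against your version, and the VRF ID below is only an example):

get logical-routers
vrf 3
get bgp neighbor summary

List the logical routers first, note the VRF ID of the SERVICE_ROUTER belonging to the VRF Tier-0, enter that VRF context, and the peers toward the transit IPs above should show as Established.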

After you have created the networks, you may create a T1 and attach it to that VRF T0, which you can consume within VMware Cloud Director or keep as a standalone T1 attached to that VRF. Next we will attach a test segment to the VRF T0 we just created!

Once you create that segment with a subnet attached to the Tier-1, you will want to verify whether any routes are being advertised to your ToR router; for my lab I am using an Arista DCS 7050QX-F-S which is running BGP.

I ran the command show ip route on my Arista core switch.

You will see many different routes, but the one we are interested in is the 172.16.66.0/24 we just advertised.
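On Arista EOS you can also narrow the output down instead of scrolling the whole routing table, for example:

show ip route bgp
show ip bgp summary

The first command lists only BGP-learned prefixes, where the 172.16.66.0/24 should appear; the second confirms the eBGP sessions toward the NSX edge uplinks are Established.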

If you do not see any routes coming from the new VRF T0 we created, you will want to configure route redistribution for that T0: click on the VRF T0, edit it, and go to Route Re-distribution.

For the walkthrough I redistributed everything for testing purposes, but for your use case you will only want to redistribute the specific subnets, NATs, forwarded IPs, etc.

The overall topology of what we created is on the left with the two T0s; the T0 on the right is my current infrastructure.

That is it! In this walkthrough we created a VRF T0, attached it to the second edge T0 router, and then peered the VRF T0 up to ec-edge-01-Tier0-gw.

Cloud

VMware Cloud Director 10.3.2 Installation / Configuration

by Tommy Grot January 19, 2022
written by Tommy Grot 1 minutes read

Installing VMware Cloud Director: this walkthrough will guide you through deploying VMware Cloud Director 10.3.2. My next blog post will cover how to configure tenants and different network topologies within VCD.

Download the OVA from VMware's website; a login will be required to gain access to the installation medium.

Log into vCenter, then right-click on the cluster and select Deploy OVF Template.

Select the VMware Cloud Director OVA and then click Next

Choose the naming convention of your VCD instance.

Select the Compute Cluster that you wish to deploy VCD on

Review the details

Accept that lovely EULA! 🙂

Select the configuration of the VCD instance. Each configuration has different resource allocations.

Select the storage you wish to deploy the VCD instance to; for mine I chose my vSAN storage.

Select the networks that VCD will utilize. For my setup I am using two NSX-T overlay-backed segments, with the database segment being isolated and the VCD segment being routable.

Verify all settings before hitting Finish!

After the deployment is completed you can integrate VCD with NSX-T and vCenter

Log in via https://x.x.x.x/provider (this will allow you to log into VCD as the provider).

Once logged in, go to Infrastructure Resources.

Click ADD – to add vCenter server

Once you have accepted the SSL certificate from vCenter, enable tenant access and click Finish. After vCenter has been added you will see the overall vCenter info, like in the screenshots below.

After vCenter has been added, you may add NSX-T managers

Click on ADD – fill in the NSX-T Manager(s) URL/IP and user account

Trust the certificate from NSX-T Managers, then you are all set!

Hardware Tips & Tricks

Dell PowerEdge IPMI Fan Control

by Tommy Grot December 31, 2021
written by Tommy Grot 2 minutes read

Controlling Dell PowerEdge fans on 12th/13th generation servers is possible by communicating with the BMC using IPMI commands. In this walkthrough we will go through the steps to granularly control the fan speed you want.

Requirements:

Opensource IPMI Tool or Dell Open Manage BMC Utility Tool installed on your desktop

DISCLAIMER! – Do this at your own RISK. Be aware of what hexadecimal values you input for the fan speed; this walkthrough will keep them at 45%.

Be careful what you do – if you overheat and damage your server, it is at your own fault!!

Website Link for : Hexadecimal Calculator

IPMI Syntax:

  • Sensor readout: ipmitool -I lanplus -H [IP Address] -U [username] -P [password] sdr list full
  • Enable manual/static fan control: ipmitool -I lanplus -H [IP Address] -U [username] -P [password] raw 0x30 0x30 0x01 0x00
  • Disable manual/static fan control: ipmitool -I lanplus -H [IP Address] -U [username] -P [password] raw 0x30 0x30 0x01 0x01
  • Set a static fan speed (0x2D = 45%): ipmitool -I lanplus -H [IP Address] -U [username] -P [password] raw 0x30 0x30 0x02 0xff 0x2D

Note:

Enable IPMI over LAN within the iDRAC7 web interface
Replace [IP Address] with your iDRAC IP address
Replace [username] with an iDRAC admin user
Replace [password] with the iDRAC admin password

This Enables Static Fan Control

ipmitool -I lanplus -H [iDRAC IP ADDRESS] -U [username] -P [Password] raw 0x30 0x30 0x01 0x00

This sets the Fans to a Static Speed to 45%

ipmitool -I lanplus -H [iDRAC IP ADDRESS] -U [username] -P [Password] raw 0x30 0x30 0x02 0xff 0x2D

This will read out the server's SDR

ipmitool -I lanplus -H [iDRAC IP ADDRESS] -U [username] -P [Password] sdr list full

Log into the iDRAC web interface, go to iDRAC Settings -> Network -> IPMI over LAN (enable it).

Open Command Prompt and Change Directory to C:\Program Files (x86)\Dell\SysMgt\bmc

If you want to verify the directory contents, type in “dir”.

Below is an example: ipmitool.exe -I lanplus -H 0.0.0.0 -U root -P Password raw 0x30 0x30 0x01 0x00 – replace all required fields with your own information.

This next step below, will enable the static fan control for the server with IPMI.

With the next command you can control the fan speed via the hexadecimal value at the end; as you can see, 0x2D = 45%, so you will need to figure out the value for the speed you want the server to run at statically.
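If you do not want to open the calculator every time, a quick printf in any Bash shell can do the conversion for you, for example:

printf '0x%02X\n' 45
printf '0x%02X\n' 30

The first line prints 0x2D, the 45% value used in this walkthrough; the second prints 0x1E if you wanted 30% instead.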

Let's go verify the static fan controls in the iDRAC web UI, or you can also run the SDR command via IPMI.

Networking

NSX-T 3.2 Add Edge Nodes

by Tommy Grot December 29, 2021
written by Tommy Grot 3 minutes read

Today's topic is deploying an NSX-T Edge node. During this process I have to grow my Edge nodes to NSX Edge Large so I can utilize IPS/IDS within the new release of NSX-T! The Edge node specifications are:

NSX Edge VM Resource Requirements

  • NSX Edge Small – 4 GB memory, 2 vCPU, 200 GB disk, VM hardware version 11 or later (vSphere 6.0 or later). Proof-of-concept deployments only. Note: L7 rules for firewall, load balancing and so on are not realized on a Tier-1 gateway if you deploy a small sized NSX Edge VM.
  • NSX Edge Medium – 8 GB memory, 4 vCPU, 200 GB disk, VM hardware version 11 or later (vSphere 6.0 or later). Suitable when only L2 through L4 features such as NAT, routing, L4 firewall, L4 load balancer are required and the total throughput requirement is less than 2 Gbps.
  • NSX Edge Large – 32 GB memory, 8 vCPU, 200 GB disk, VM hardware version 11 or later (vSphere 6.0 or later). Suitable when only L2 through L4 features such as NAT, routing, L4 firewall, L4 load balancer are required and the total throughput is 2 ~ 10 Gbps. It is also suitable when an L7 load balancer, for example SSL offload, is required. See Scaling Load Balancer Resources in the NSX-T Data Center Administration Guide; for more information about what the different load balancer sizes and NSX Edge form factors can support, see https://configmax.vmware.com.
  • NSX Edge Extra Large – 64 GB memory, 16 vCPU, 200 GB disk, VM hardware version 11 or later (vSphere 6.0 or later). Suitable when the total throughput required is multiple Gbps for L7 load balancer and VPN. See Scaling Load Balancer Resources in the NSX-T Data Center Administration Guide; for more information about what the different load balancer sizes and NSX Edge form factors can support, see https://configmax.vmware.com.

Credit @VMware NSX-T Website

Let's begin! You will need to log into your NSX-T Manager, then go to the System tab -> Fabric -> Nodes.

Then, click on ADD EDGE NODE. You will need to prep an A record and a free static IP address, predefining the A record within your domain controller or DNS server of choice. (The Extra Large option is required for IPS/malware threat prevention, which I will try out later.)

Create the administrative account and password that you desire.

Add the edge node to the correct Compute Manager, along with Cluster and Datastore. If you have resource pools then you can select them and preconfigure that.

Here you will input the IP address and Default Gateway, the IP address will be the one you preconfigured for the A record on the DNS server.

Select the Port Group you want the Management interface of the NSX Edge Node to live on.

Preconfigure the Search Domains, DNS Servers and NTP Servers.

This will vary with each deployment; since my NSX-T environment is backed by dual 10Gbit networks that peer up to my Arista 7050QX via eBGP, I chose the vmnic uplink profile.

Below are the uplink trunks that the NSX-T will run on. Each interface of the Edge Node will need a trunk uplink

Click Finish!

Now you see the 2 new Edge Nodes in large size being deployed!

Networking

Upgrading NSX-T 3.1.3.3 to NSX-T 3.2.0

by Tommy Grot December 17, 2021
written by Tommy Grot 3 minutes read

A little about NSX-T 3.2 – there are lots of improvements within this release, from strong multi-cloud security to gateway firewalls and overall better networking and policy enhancements. If you want to read more about those, check out the original blog post from VMware.

Download your bytes off VMware's NSX-T website, grabbing all required packages depending on your implementation. You will want to have a backup of your NSX-T environment prior to upgrading.

Below is a step by step walk through on how to upload the VMware-NSX-upgrade-bundle-3.2.0.0.0.19067070.mub upgrade file and proceed with the NSX-T Upgrade

Once you login, go to the System Tab

Go to Lifecycle Management -> Upgrade

Well, for my NSX-T environment I have already upgraded before, from 3.1.3.0 to 3.1.3.3, and that is why you will see a (green) Complete at the top right of the NSX appliances. Proceed with the UPGRADE NSX button.

Here you will find the VMware-NSX-upgrade-bundle-3.2.0.0.0.19067070.mub file

Continuation of Uploading file

Now you will start uploading it. This will take some time so grab a snack! 🙂

Once it has uploaded you will see “Upgrade Bundle retrieved successfully”; now you can proceed with the upgrade – click on the UPGRADE button below.

This lovely EULA! 🙂 Well you gotta accept it if you want to upgrade…

This will prompt you one more time, before you execute the full upgrade process of NSX-T

Once the upgrade bundle has been extracted and the Upgrade Coordinator has been restarted, your upgrade path is ready and you can start upgrading the Edges.

For my NSX-T, I ran the Upgrade Pre Checks – to ensure that there are no issues before I did any major upgrades.

The results of the pre-checks had a few issues, but nothing alarming for my situation.

Below, I am upgrading the Edges serially so that I can keep my services up and running with minimal to no downtime. When I upgraded the NSX-T Edges, I only saw 1 ping drop from a continuous ping to one of my web servers.

More progress on the Edge Upgrades

Let's check up on the NSX Edges; below is a snip of one of the Edges that got upgraded to NSX-T 3.2.0.

Now that the Edges have upgraded successfully, we can proceed to the hosts.

Time to upgrade the hosts! Make sure your hosts are not running production workloads; this upgrade process will put the hosts into maintenance mode, so make sure you have enough resources.

Now that the host is free of VMs, the upgrade installs the NSX bits on the host; this process will need to be repeated as many times as there are hosts within your cluster.

All the hosts got upgraded successfully with no issues encountered – the next step is to upgrade the NSX Managers.

In the next screenshot below, you will see that the Node OS upgrade process is next. Click Start to initiate the NSX Manager upgrade. If you want to see the status of all the NSX Managers, click on 1. Node OS Upgrade.

After you click Start, you will see a dialog window pop open warning you not to create any objects within the NSX Manager. Also, later in the upgrade, if the web interface is down you may log into the NSX Managers via web console through vCenter and run this command – ‘get upgrade progress-status’

This is what the Node Upgrade Status looks like; you can see that upgrades are happening on the second and third NSX Managers.

Below is a sample screen snip of the NSX01 console and I executed the command to see its status.

Now that the NSX Managers have upgraded the OS, there are still many services that need to be upgraded. Below is a screenshot of the current progress.

All NSX Managers have been upgraded to NSX-T 3.2.0 – Click Done

The upgrade is now complete! 🙂

VMware vCenter

vCenter 7.0 Install with vSAN

by Tommy Grot February 6, 2021
written by Tommy Grot 4 minutes read

Hello VBs! Today's topic is installing vCenter onto a vSAN cluster, whether it is an all-flash cluster or a mixed-capacity cluster with some spindles and 1 SSD for caching. There are some requirements for a vSAN cluster: it needs a minimum of 3 nodes, or 2 nodes + 1 witness node which resides on a non-vSAN cluster. For this walkthrough we will be concentrating on a 3-node vSAN cluster!

Below are the requirements pulled from VMware.

Storage Component Requirements

  • Cache – One SAS or SATA solid-state disk (SSD) or PCIe flash device. Before calculating the Primary level of failures to tolerate, check the size of the flash caching device in each disk group. Verify that it provides at least 10 percent of the anticipated storage consumed on the capacity devices, not including replicas such as mirrors. vSphere Flash Read Cache must not use any of the flash devices reserved for vSAN cache. The cache flash devices must not be formatted with VMFS or another file system.
  • Virtual machine data storage – For hybrid disk group configuration, make sure that at least one SAS, NL-SAS, or SATA magnetic disk is available. For all-flash disk group configuration, make sure at least one SAS or SATA solid-state disk (SSD), or PCIe flash device, is available.
  • Storage controllers – One SAS or SATA host bus adapter (HBA), or a RAID controller that is in passthrough mode or RAID 0 mode.

Memory

The memory requirements for vSAN depend on the number of disk groups and devices that the ESXi hypervisor must manage. Each host must contain a minimum of 32 GB of memory to accommodate the maximum number of disk groups (5) and maximum number of capacity devices per disk group (7).

Flash Boot Devices

During installation, the ESXi installer creates a coredump partition on the boot device. The default size of the coredump partition satisfies most installation requirements.

  • If the ESXi host has 512 GB of memory or less, you can boot the host from a USB, SD, or SATADOM device. When you boot a vSAN host from a USB device or SD card, the size of the boot device must be at least 4 GB.
  • If the ESXi host has more than 512 GB of memory, you must boot the host from a SATADOM or disk device. When you boot a vSAN host from a SATADOM device, you must use a single-level cell (SLC) device. The size of the boot device must be at least 16 GB.

Now we will install vCenter. Once the ISO has mounted and you start up the executable file, you will see the screen below.

The Platform Services Controller has been deprecated in these new installs of vCenter; if your setup does have a PSC, it is recommended to upgrade or rebuild with a brand new vCenter install.

Specify the VMware ESXi Host or vCenter IP Address – Example – 10.0.0.100 or DNS Name – esxi01.virtualbytes.io ; vcenter.virtualbytes.io

(VMware recommends operating its platform with DNS names; this allows easier management and IP address modifications.)

After all proper information is configured / documented – Click Next

Setup the appliance –

  • Specify a name for your vCenter Server (ex. vCenter, VCSA, etc)
  • Specify the VCSA root password ( If you do lose this password there is a step by step procedure to recover access)

During Stage 1 you can point the VCSA installation at the specified datastore. vCenter will be installed on a vSAN cluster; this installation runs on a single host, and once you have set up vCenter and logged in you will see a vSAN datastore. This is not a complete setup! You will need to add two additional ESXi hosts and set up the rest of the vSAN cluster to have data protection.

In this step you will select the disks. Each environment will have a different number of disks, so you will want to create 2 disk groups that are evenly balanced for this installation.

  • VMware recommends installing vCenter under an FQDN (Fully Qualified Domain Name) – this allows you to change the IP address of the VCSA later.
    Be cautious when filling in the “Configure network settings” page – a misconfiguration here will lead to an inoperable vCenter instance.
  • Recommendation – set up an ‘A’ record on your local network DNS server or domain controller pointing to the fresh new vCenter Server Appliance – see the quick check below.
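A quick sanity check from any workstation before you start the install (vcenter.virtualbytes.io is just the example name used earlier in this post – substitute your own FQDN and the IP you reserved):

nslookup vcenter.virtualbytes.io
nslookup <reserved VCSA IP address>

The forward lookup should return the reserved IP, and the reverse lookup should return the same FQDN you are deploying the VCSA under.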

VERIFY! VERIFY! VERIFY! – If you do not verify your appliance deployment configuration and you mis-configure a setting – You will be reinstalling VCSA from scratch again. (BE CAREFUL!)

After you have verified all information – Click Next

This stage will take few minutes – Grab a coffee or water! 🙂

  • Stage 2 – In this stage you configure the appliance settings such as SSH, SSO, and CEIP.
  • SSH – it is recommended to keep SSH turned off for security reasons, unless you are setting up more than one vCenter Server and configuring vCenter High Availability.
  • SSO – single sign-on; the default domain is vsphere.local. If you have an on-premises domain controller, make sure you do not use the same domain name, as this will prevent you from adding vCenter to the Active Directory domain forest.
Omnissa Horizon

Removing Invalid Instant Clones / Horizon Pool

by Tommy Grot October 19, 2020
written by Tommy Grot 3 minutes read

A quick tip for the day: if you encounter any issues with your Horizon platform – you bring it back up from a backup, or something else goes wrong – and you see that some instant clones are missing, no worries. You will need to do a few things to clean up your Horizon View, using ViewDbChk and ADSI Edit.

First step: log into your Horizon server. Once you are logged in, you will need to open ADSI Edit and connect to the View ADAM database as shown below.

Right click on ADSI Edit -> Connect To

Fill it out EXACTLY like I have it, or you will run into issues or corrupt something else. Once it is all filled out, click OK.

Name: Horizon View ADAM
Connection Point: DC=vdi, DC=vmware, DC=int
Computer: localhost:389

Now that you are connected to the ADAM database, you can drill down to the specific areas.

First you want to remove the specific Horizon pool(s). For example, my lab Horizon server has two pools, Download Desktop and Remote Desktop; these are both test pools. Be very CAREFUL to make sure you delete the proper CN=HORIZON_POOL.

After the desktop pool objects within OU=Applications are deleted, you will want to delete the corresponding CN=HORIZON_POOL in OU=Server Groups; I noticed this leftover entry wouldn't let me rebuild a Horizon pool with the previous name. After the OU=Server Groups entry was removed, it worked and let me create a new pool.

The next step is to run viewdbchk.cmd. You will have to execute this from a Command Prompt run as Administrator; you will not be able to just click on the .cmd file. Below I have provided the path so you may copy and paste it into Command Prompt.

cd C:\Program Files\VMware\VMware View\Server\tools\bin

Once the script is run you will see the syntax and switches to use. I have provided a table of all of them below.

ViewDbChk --findDesktop --desktopName <desktop name> [--verbose]
   Find a desktop pool by name.
ViewDbChk --enableDesktop --desktopName <desktop name> [--verbose]
   Enable a desktop pool.
ViewDbChk --disableDesktop --desktopName <desktop name> [--verbose]
   Disable a desktop pool.
ViewDbChk --findMachine --desktopName <desktop name> --machineName <machine name> [--verbose]
   Find a machine by name.
ViewDbChk --removeMachine --machineName <machine name> [--desktopName <desktop name>] [--force] [--noErrorCheck] [--verbose]
   Remove a machine from a desktop pool.
ViewDbChk --scanMachines [--desktopName <desktop name>] [--limit <maximum deletes>] [--force] [--verbose]
   Scan all machines for problems. The scan can optionally be limited to a specific desktop pool.
ViewDbChk --help [--commandName] [--verbose]
   Display help for all commands, or a specific command.

First, you will need to find the pool you want to operate on. For this tutorial we will be removing the desktop pool, so you will need to scan for it:

ViewDbChk --findDesktop --desktopName <desktop name> --verbose

Once you have found the desktop pool you want, scan it; this will also delete the entries for virtual machines that are orphaned or no longer exist. If you encounter other issues, you may need to use the --force switch to forcefully remove the pool, but in my scenario I was able to remove a large pool of instant clones, as shown in the last image.

ViewDbChk --scanMachines --desktopName <desktop name> --limit <maximum deletes>  --verbose
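As a concrete example, scanning one of the test pools mentioned earlier could look like the line below (the pool name and the limit of 10 are just examples from my lab – substitute your own):

ViewDbChk --scanMachines --desktopName "Remote Desktop" --limit 10 --verbose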

That is it! If you were able to remove all the virtual machines that were stuck in Horizon View, you can now re-use or create a new pool, even with the same name!

VMware Troubleshooting

vSphere ESXi Dump Collector

by Tommy Grot October 17, 2020
written by Tommy Grot 2 minutes read

If any issues or errors occur within the ESXi hypervisor, the ESXi Dump Collector will send the current state of the VMkernel memory, dumping the core to vCenter over the network. So if an ESXi host fails or gets compromised, there will be traces of syslog and other logs sent to the vCenter Server, which could be in the same organization datacenter or reside somewhere else in the cloud.

Cyber Security Tip! – DISABLE SSH after you are done working with it; this is strongly recommended to harden the ESXi host and prevent attacks against SSH (port 22).

The ESXi Dump Collector traffic is not encrypted, so best practice is to put it on an isolated VLAN that the internet and other networks cannot communicate with.

The first step is to log into VMware Server Management, also known as VAMI.

https://YOUR_VCENTER_IP_OR_DNS:5480/

The login credentials to log into VAMI

Username : root

Password : The Password you setup during installation.

Once you are logged into VAMI, go to the Services section and look for VMware vSphere ESXi Dump Collector.

Select it, and click START

After the VMware vSphere ESXi Dump Collector is started and running, log into your ESXi host(s) via SSH.

To enable SSH, log into your vCenter, then go to the ESXi host and click Configure -> System -> Services. You will see SSH; click on it and select START.

Once SSH has started, open up your favorite SSH tool, for this tutorial I am using Putty. You may download it here.

Then log into the ESXi host and execute a few commands to enable the ESXi host to offload the VMkernel core dumps to the vCenter Dump Collector.

esxcli system coredump network set --interface-name vmk0 --server (YOUR vCENTER IP) --server-port 6500
esxcli system coredump network set --enable true
esxcli system coredump network get

After all three commands are executed with your specific vCenter IP, the final command will get the coredump network configuration and display it in the SSH session. Once this is enabled, you will see the alert about ESXi core dumps go away as the dumps are offloaded.
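There is also a built-in connectivity test worth running once everything is configured; it verifies that the configured dump collector is reachable over the network:

esxcli system coredump network check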
