Virtual Bytes
Tag: vsan

Cloud

VMware Cloud Director 10.3.3: Creating a Tenant

by Tommy Grot April 15, 2022
written by Tommy Grot 3 minutes read

A little about what VMware Cloud Director is: it is a CMP, also known as a cloud management plane, which supports, pools, and abstracts the VMware virtualization infrastructure as Virtual Data Centers (VDCs). A provider can offer a tenant many different flavors and specifications, such as Gold, Silver, or Bronze tiers of capacity. This allocation model lets a customer who needs higher guaranteed resources pay for a higher tier, while a customer who just wants to test a few software solutions can use a Bronze tier and save costs.

Once you are logged in, you will want to create a few things first! My previous blog post already explains how to add a vCenter Server and NSX-T integration, so start there if you have not done that yet.

Well, let's begin! First we will create a network pool; this backs the networks that reside within the tenant environment and runs on top of Geneve on the NSX-T overlay!

  • Log in to the VCD provider portal with the administrator account
  • https://<vcd-ip>/provider/

Go to Network Pools

The network will be Geneve backed to ride the NSX-T overlay

Select the NSX-T Manager

The network pool is backed by an NSX-T transport zone; select the overlay transport zone that you created for your edge nodes during the NSX-T setup.

Once you have your network pool set up and have followed the steps, you should see something like this!

Network Pool has been successfully created as shown below

After the network pool has been created, next we will create the Provider VDC (Virtual Data Center).

Select the Provider vCenter you have configured within the Infrastructure portion

Select the Cluster, for me – I have a vSAN Cluster

Once you select the vSAN cluster (or whatever cluster you have in your environment), you may proceed. The hardware version should be left at the default, since this is the maximum hardware version VCD can accept.

Select the vSAN storage policy if you have vSAN; if not, select the appropriate storage policy for your storage platform.
Next comes the network pool we created earlier: here we get to consume it, letting the NSX-T Manager and the Geneve-backed network pool serve our VCD environment.
  • Next, we will create an organization to attach a VDC to. For this walkthrough my org is Lab-01; that is the same name you use when you log in to VCD as a tenant.
  • An organization is simply a logical group of resources presented to a customer. Each organization has its own isolation and security boundaries and its own web UI, and it can integrate an identity provider such as LDAP for seamless user management. (A scripted alternative to the UI click-through is sketched below.)
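If you prefer automation over the UI click-through, below is a rough, hypothetical sketch of creating the same organization through the VCD CloudAPI with curl. The endpoint paths, API version string, and JSON field names are my assumptions based on the VCD 10.3 OpenAPI documentation, and the URL and credentials are placeholders, so verify everything against your own environment before relying on it.

VCD="https://vcd.example.lab"              # placeholder provider URL
API="application/json;version=36.0"        # assumed API version header for VCD 10.3.x

# Authenticate as a provider administrator; the bearer token comes back as a response header
TOKEN=$(curl -sk -D - -o /dev/null -X POST "$VCD/cloudapi/1.0.0/sessions/provider" \
  -H "Accept: $API" -u 'administrator@System:ChangeMe!' \
  | grep -i '^x-vmware-vcloud-access-token' | awk '{print $2}' | tr -d '\r')

# Create the organization (same name you would type in the UI, e.g. Lab-01)
curl -sk -X POST "$VCD/cloudapi/1.0.0/orgs" \
  -H "Accept: $API" -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"name":"Lab-01","displayName":"Lab-01","description":"Lab tenant","isEnabled":true}'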

Once the new organization has been created, next we will create an Organization VDC (Virtual Data Center).

Click on Organization VDCs and click “NEW” to create the Organization VDC

Type a name for the Organization VDC you wish to create

Attach that organization to the provider virtual datacenter we created earlier

Select the allocation model. I have found the Flex model to be the most flexible, giving better control over resources even at the VM level. More information is available on VMware’s website.

For this demonstration I am not limiting any resources; I am giving my tenant unlimited resources from my vSAN cluster. For a production environment you will want to use a proper allocation model and resource limits.

Select the storage policy; I also like to enable thin provisioning to save storage space!

Each organization will have its own network pool, but it will run on top of the Geneve overlay.

About to finish up the setup of a VDC!

We have logged into the new Tenant space we have created! 🙂

Cloud

Upgrading VMware Cloud Director to 10.3.3

by Tommy Grot April 14, 2022
written by Tommy Grot 4 minutes read

Upgrading VMware Cloud Director from 10.3.2.1 to 10.3.3, primarily to fix a Security Vulnerability.

Also, there are some enhancements which follow:

What is New?!

The VMware Cloud Director 10.3.3 release provides bug fixes, API enhancements, and enhancements of the VMware Cloud Director appliance management user interface:

  • Backup and restore of VMware Cloud Director appliance certificates. VMware Cloud Director appliance management interface UI and API backup and restore now includes VMware Cloud Director certificates. See Backup and Restore of VMware Cloud Director Appliance in the VMware Cloud Director Installation, Configuration, and Upgrade Guide.
  • New /admin/user/{id}/action/takeOwnership API to reassign the owner of media.
  • Improved support for routed vApp network configuration of the MoveVApp API.

This release resolves CVE-2022-22966. For information, see https://www.vmware.com/security/advisories.

There are also lots of fixes; if your VCD is having issues, there is a good chance this upgrade resolves them!

All the fixes are listed in the release notes: https://docs.vmware.com/en/VMware-Cloud-Director/10.3.3/rn/vmware-cloud-director-1033-release-notes/index.html

First things first, let’s download the newest release of VMware Cloud Director 10.3.3 – File Name: VMware_Cloud_Director_10.3.3.7659-19610875_update.tar.gz

Then shut down your VCD cells (all of them, if you have multiple). Once they are powered off, take a snapshot of each of them, along with the NFS transfer service server – usually it is a VM, so take a snapshot of it too, just in case you need to roll back if any issues occur. A scripted way to take the snapshots is sketched below.
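If you would rather script those snapshots than click through vCenter, here is a rough sketch using the govc CLI; the vCenter URL, credentials, and VM names are placeholders, and it assumes the cells are already powered off as described above.

export GOVC_URL='https://vcenter.example.lab' GOVC_USERNAME='administrator@vsphere.local' GOVC_PASSWORD='ChangeMe!' GOVC_INSECURE=true

# Snapshot each VCD cell and the NFS transfer server before touching the upgrade
for vm in vcd-cell-01 vcd-cell-02 nfs-transfer-01; do
  govc snapshot.create -vm "$vm" "pre-10.3.3-upgrade"
done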

Next we will upload the tar.gz file via WinSCP to the primary VCD cell. If you have an HA VCD topology, the secondary cells get upgraded after the primary is finished.

I have logged into the VCD appliance with the root account.

Then open up a PuTTY session to the VCD appliance and log in as root.

Then change directory to /tmp/

Once in the directory:

Make a directory with the command below:

mkdir local-update-package

Start to upload the tar.gz file for the upgrade into /tmp/local-update-package via WINSCP

File has been successfully uploaded to the VCD appliance.

For the next steps we will need to prepare the appliance for the upgrade:

We will need to extract the update package in the new directory we created in /tmp/

tar -zxf VMware_Cloud_Director_v.v.v.v-nnnnnnnn_update.tar.gz -C /tmp/local-update-package

You can run the “ls” command and you should see the tar.gz file along with manifest and package-pool

After you have verified the local update directory then we will need to set the update repository.

vamicli update --repo file:///tmp/local-update-package

Check for the update with this command after you have set the update repository:

vamicli update --check

Now we see that we have an upgrade staged and almost ready to run! But first we will need to shut down the cell(s) with this command:

/opt/vmware/vcloud-director/bin/cell-management-tool -u <admin username> cell --shutdown

Next is to take a backup of the database. If your Cloud Director appliance was originally version 10.2.x and you have upgraded it throughout its life span, then your command will be slightly different: /opt/vmware/appliance/bin/create-backup.sh (I have noticed the script gets renamed during the upgrade process from 10.2.x to 10.3.1).

But if your appliance is 10.3.x or newer, then /opt/vmware/appliance/bin/create-db-backup is the backup script to run.

I changed directory all the way down to the “bin” directory containing the backup script and executed it.

Backup was successful! Now, time for the install 🙂

Apply the upgrade for VCD; the command below will install the update:

vamicli update --install latest

Now, the next step is important: if you have any more VCD cell appliances, you will want to repeat the first few steps on them and then run the command below to upgrade the other appliances:

/opt/vmware/vcloud-director/bin/upgrade

Select Y to Proceed with the upgrade

After a successful upgrade, you may reboot the VCD appliance and test!
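For reference, here is a condensed recap of the CLI sequence from this post, run as root on the primary VCD cell. The filenames follow the 10.3.3 build used above; the admin username is a placeholder, and appliances originally deployed as 10.2.x should use create-backup.sh instead of create-db-backup as noted earlier.

mkdir /tmp/local-update-package
cd /tmp/local-update-package                 # the update .tar.gz was uploaded here via WinSCP
tar -zxf VMware_Cloud_Director_10.3.3.7659-19610875_update.tar.gz -C /tmp/local-update-package

vamicli update --repo file:///tmp/local-update-package    # point the appliance at the local repository
vamicli update --check                                     # confirm the 10.3.3 update is staged

/opt/vmware/vcloud-director/bin/cell-management-tool -u <admin username> cell --shutdown
/opt/vmware/appliance/bin/create-db-backup                 # back up the database before installing
vamicli update --install latest

# On any additional cells, after repeating the staging steps above:
/opt/vmware/vcloud-director/bin/upgrade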

Cloud

Deploying VMware Cloud Director Availability 4.3

by Tommy Grot March 24, 2022
written by Tommy Grot 4 minutes read

Today’s topic is deploying VMware Cloud Director Availability for VMware Cloud Director! This walkthrough will guide you on how to deploy VCDA from an OVA to a working appliance. All of this is built within my home lab. A separate guide will cover how to set up VCDA with multiple VCDs, create a peering between them, and show some migrations, and so on! 🙂

A little about VCDA! – From VMware’s site

VMware Cloud Director Availability™ is a Disaster Recovery-as-a-Service (DRaaS) solution. Between multi-tenant clouds and on-premises, with asynchronous replications, VMware Cloud Director Availability migrates, protects, fails over, and reverses failover of vApps and virtual machines. VMware Cloud Director Availability is available through the VMware Cloud Provider Program. VMware Cloud Director Availability introduces a unified architecture for the disaster recovery and migration of VMware vSphere® workloads. With VMware Cloud Director Availability, the service providers and their tenants can migrate and protect vApps and virtual machines:

  • From an on-premises vCenter Server site to a VMware Cloud Director™ site
  • From a VMware Cloud Director site to an on-premises vCenter Server site
  • From one VMware Cloud Director site to another VMware Cloud Director site

Cloud Site – In a single cloud site, one VMware Cloud Director Availability instance consists of:

  • One Cloud Replication Management Appliance
  • One or more Cloud Replicator Appliance instances
  • One Cloud Tunnel Appliance

Links!

Replication Flow – Link to VMware

  • Multiple Availability cloud sites can coexist in one VMware Cloud Director instance. In a site, all the cloud appliances operate together to support managing replications for virtual machines, secure SSL communication, and storage of the replicated data. The service providers can support recovery for multiple tenant environments that can scale to handle the increasing workloads.

Upload the OVA for VCDA

Create a friendly name for this deployment; I like to create a name that is meaningful and correlates to the service.

Proceed to step 4

Accept this lovely EULA 😛

In my lab, for this deployment, I did a combined appliance. I will also do a separate appliance for each service configuration.

Choose the network segment your VCDA appliance will live on; I put my appliance on an NSX-T-backed segment on the overlay network.

Fill in the required information, and also create an A record for the VCDA appliance so that when it does its reverse DNS lookup it will successfully generate a self-signed certificate and allow the appliance to keep building. (A quick way to verify DNS beforehand is shown below.)
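A quick pre-flight check that forward and reverse DNS both resolve for the appliance before deploying the OVA; the FQDN and IP below are examples from my lab addressing, so substitute your own.

dig +short vcda.virtualbytes.io        # should return the appliance IP, e.g. 172.16.204.100
dig +short -x 172.16.204.100           # should return the FQDN from your A/PTR records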

After you hit submit, you can open the VMware web/remote console and watch the deployment for any issues or errors that may cause it to fail.

I ran into a snag! The network configuration did not accept all the information I filled in for the network adapter during the VCDA appliance OVA deployment. So here I had to log in to the VCDA appliance as root; it forced me to reset the password that I originally set during the OVA deployment.

Connect to the VMware Cloud Director Availability by using a Secure Shell (SSH) client.

Open an SSH connection to Appliance-IP-Address.
Log in as the root user.

To retrieve all available network adapters, run: /opt/vmware/h4/bin/net.py nics-status

/opt/vmware/h4/bin/net.py nic-status ens160

/opt/vmware/h4/bin/net.py configure-nic ens160 --static --address 172.16.204.100/24 --gateway 172.16.204.1

After you have updated the network configuration, you can check it with the following.

To retrieve the status of a specific network adapter,

/opt/vmware/h4/bin/net.py nic-status ens160

After the networking is all good, you may go back to your web browser and open the VCDA UI. Here we will configure the next few steps.

Add the license you have received for VCDA – this license is different from the one VMware Cloud Director utilizes.

Configure the Site Details for your VCDA. I did Classic data engines since I do not have VMware on AWS.

Add your first VMware Cloud Director to this next step

Once you have added the first VCD, you will be asked for the next few details. Here we will add the Lookup Service (the vCenter Server lookup service) along with Replicator 1; since my setup uses a combined appliance, the IP is the same as my VCDA deployment but the port is different.

Then I created a basic password for this lab simulation. Use a secure password!! 🙂

Once all is completed you should see a dashboard like the one below. We have successfully deployed VMware Cloud Director Availability! In the next blog post we will get into the nitty-gritty of migrations, RPOs, and SLAs as we explore this new service, which is an add-on to VMware Cloud Director!

Cloud

Photon OS Emergency Mode – Fix Corrupt Disk

by Tommy Grot March 15, 2022
written by Tommy Grot 0 minutes read

For this little walkthrough we will be using the VMware Cloud Director 10.3.2a appliance I have in my lab; it did not shut down safely, so we will repair it! 🙂

Reboot the VMware Cloud Director appliance, then press ‘e’ immediately to load the GRUB menu, and at the end of $systemd_cmdline add the following:

systemd.unit=emergency.target

Then hit F10 to boot

Run the following command to repair the disk.

e2fsck -y /dev/sda3
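If you are not sure which partition holds the appliance’s root filesystem, list the block devices first and then repair the partition reported there; /dev/sda3 is simply what it was on my appliance.

lsblk -f                # show partitions and their filesystem types
e2fsck -y /dev/sda3     # repair the root partition identified above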

Once repaired, shut down the VMware Cloud Director appliance and then power it back on.

VCD is now loading!

Successfully repaired a corrupted disk on Photon OS!

March 15, 2022 0 comments 3.2K views
0 FacebookTwitterLinkedinEmail
Networking, VMware NSX

NSX-T 3.2 VRF Lite – Overview & Configuration

by Tommy Grot February 20, 2022
written by Tommy Grot 7 minutes read

Hello! Today’s blog post is about setting up an NSX-T Tier-0 with a VRF Tier-0 in a VRF Lite topology. We will be doing an Active/Active topology with VRF.

A little about what a VRF (Virtual Routing and Forwarding) is: it allows you to logically carve one logical router into multiple routers, so you can have multiple identical networks logically segmented into their own routing instances. Each VRF has its own independent routing table, which allows multiple networks to be segmented from each other so they do not conflict and still function!

The benefit of NSX-T VRF Lite is that it allows you to have multiple virtual routing instances on the same Tier-0 without needing to build a separate NSX edge node, and without consuming more resources just to segment and isolate one routing instance from another.

Image from VMware

What is a Transport Zone? A transport zone (TZ) defines the span of logical networks over the physical infrastructure. There are two types of transport zones: Overlay and VLAN.

When an NSX-T VRF Tier-0 is attached to a parent Tier-0, several parameters are inherited by design and cannot be changed:

  • Edge Cluster
  • High Availability mode (Active/Active – Active/Standby)
  • BGP Local AS Number
  • Internal Transit Subnet
  • Tier-0, Tier-1 Transit Subnet.

All other configuration parameters can be independently managed on the Tier-0:

  • External Interface IP addresses
  • BGP neighbors
  • Prefix list, route-map, Redistribution
  • Firewall rules
  • NAT rules

First things first: log in to NSX-T Manager. Once you are logged in, you will have to prepare the networks and transport zones for this VRF Lite topology to work properly, as it resides within the NSX-T overlay network!

Go to Tier-0 Gateways -> Select one of your Tier-0 Routers that you have configured during initial setup. I will be using my two Tier-0’s, ec-edge-01-Tier0-gw and ec-edge-02-Tier0-gw for this tutorial along with a new Tier-0 VRF which will be attached to the second Tier-0 gateway.

So, the first thing we need to do is prepare two (2) segments for our VRF T0 to ride the overlay network.

Go to Segments -> Add Segment. The two segments will ride the overlay transport zone, with no VLAN and no gateway attached. Add the segment and click No when asked to continue configuring it. Repeat for the second segment.

Below is the new segment that will be used for the Transit for the VRF Tier-0.

Just a reminder – this segment will not be connected to any gateway or subnets or vlans

Here are my two overlay-backed segments; these will carry the traffic between the VRF Tier-0 and ec-edge-01-Tier0-gw across the network backbone.

The VRF Tier-0, however, will be attached to the second Tier-0 (ec-edge-02-Tier0-gw), which is on two separate edge nodes (nsx-edge-03, nsx-edge-04) for an Active/Active topology.

Once the segments have been created, we can go and create a VRF T0. Go back to the Tier-0 Gateways window and click Add Gateway -> VRF.

Name the VRF gateway ec-vrf-t0-gw and attach it to ec-edge-02-Tier0-gw, enable BGP, and set an AS number (I used 65101); the second Tier-0 gateway will act as a ghost router for those VRFs.

Once you finish, click Save and continue configuring the VRF Tier-0; next we will configure the interfaces.

Now we will need to create interfaces on ec-edge-01-Tier0-gw. Expand Interfaces and click on the number in blue; for my deployment, my NSX-T Tier-0 currently has two interfaces.

Once you create the 2 Interfaces on that Tier 0 the number of interfaces will change.

Click on Add Interfaces -> Create a unique name for that first uplink which will peer via BGP with the VRF T0.

Allocate a couple of IP addresses. I am using 172.16.233.5/29 for the first interface on ec-edge-01-Tier0-gw, which lives on nsx-edge-01 in my deployment; the VRF T0 will have 172.16.233.6/29. Connect that interface to the overlay segment you created earlier.

Then I created the second interface with the IP 172.16.234.5/29; the VRF Tier-0 will have 172.16.234.6/29. Each interface is attached to its own edge node, so the first IP (172.16.233.5/29) is attached to edge node 1 and the second IP is on edge node 2.

ec-t0-1-vrf-01-a – 172.16.233.5/29 – ec-t0-vrf-transport-1 172.16.233.6/29 (overlay segment)

ec-t0-1-vrf-01-b – 172.16.234.5/29 – ec-t0-vrf-transport-2 172.16.234.6/29 (overlay segment)

Jumbo Frames 9000 MTU on both interfaces

Once you have created all the required interfaces, below is an example of what I created. Make sure everything is set up correctly or the T0 and VRF T0 will not peer up!

Then go to BGP configuration for that nsx-edge-01 and nsx-edge-02 and prepare the peers from it to the VRF Tier-0 router.

Next, we will create another set of interfaces for the VRF T0 itself; these will live on nsx-edge-03 and nsx-edge-04. Same steps as for nsx-edge-01 and nsx-edge-02, just flipped!

ec-t0-1-vrf-01-a – 172.16.233.6/29 – nsx-edge-03

ec-t0-1-vrf-01-b – 172.16.234.6/29 – nsx-edge-04

Jumbo frames: 9000 MTU

Once both interfaces are configured for the Tier-0s, you should have two interfaces with different subnets for the transit between the VRF T0 and the edge-01 Tier-0 gateway, with the interfaces created on their specific NSX edges.

Verify the interfaces and the correct segments; if everything is good, click Save and proceed to the next step.

Everything we just created rides the overlay segments, now we will configure BGP on each of the T0s.

Expand BGP and click on the External and Service Interfaces number; mine has 2.

Click Edit on the Tier-0 ec-edge-01-Tier0-gw, expand BGP, and click on BGP Neighbors.

Create the BGP peers toward the VRF T0. You will see the interface IPs we created earlier under “Source Addresses”; those are attached to each specific interface on the overlay segments we created for the VRF Lite model.

172.16.233.5 – 172.16.233.6 – nsx-edge-03
172.16.234.5 – 172.16.234.6 – nsx-edge-04

Click Save and proceed to create the second BGP neighbor, which will be for nsx-edge-04.

If everything went smoothly, you should be able to verify your BGP peers between the Tier-0 and the VRF Tier-0 as shown below.
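If you prefer the CLI, the peering can also be checked from one of the edge nodes over SSH (admin user). These are standard NSX-T edge CLI commands, but the VRF number below is only an example and will differ in your environment:

get logical-routers              # note the VRF number of the Tier-0 / VRF Tier-0 service router
vrf 3                            # enter that VRF context (3 is just an example)
get bgp neighbor summary         # the neighbors should show as Established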

After you have created the networks, you may create a T1 and attach it to that VRF T0, which you can consume within VMware Cloud Director, or keep it as a standalone T1 attached to that VRF. Next we will attach a test segment to the VRF T0 we just created!

Once you create that segment with a subnet attached to the Tier-1, you will want to verify that routes are being advertised to your ToR router. For my lab I am using an Arista DCS-7050QX-F-S, which is running BGP.

I ran the command show ip route on my Arista core switch.

You will see many different routes, but the one we are interested in is the 172.16.66.0/24 we just advertised.
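A couple of quick checks on the Arista ToR itself (EOS syntax; 172.16.66.0/24 is the test segment from this post):

show ip bgp summary              # BGP sessions toward the NSX edges should be Established
show ip route 172.16.66.0/24     # the advertised prefix should show up as a BGP route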

If you do not see any routes coming from the new VRF T0 we created, you will want to configure route redistribution for that T0: click on the VRF T0, click Edit, and go to Route Re-distribution.

For this walkthrough I redistributed everything for testing purposes, but for your use case you will only want to redistribute specific subnets, NATs, forwarded IPs, etc.

The overall topology of what we created is to the left, with the two T0s; the T0 to the right is my current infrastructure.

That is it! For this walkthrough, we created a VRF T0, attached it to the second edge T0 router, and then peered the VRF T0 with ec-edge-01-Tier0-gw.

Cloud

VMware Cloud Director 10.3.2 Installation / Configuration

by Tommy Grot January 19, 2022
written by Tommy Grot 1 minutes read

This walkthrough will guide you on how to deploy VMware Cloud Director 10.3.2. My next blog post will cover how to configure tenants and different network topologies within VCD.

Download the OVA from VMware’s website; a login will be required to gain access to the installation media.

Log in to vCenter, then right-click on the cluster and select Deploy OVF Template.

Select the VMware Cloud Director OVA and then click Next

Choose the name of your VCD instance

Select the Compute Cluster that you wish to deploy VCD on

Review the details

Accept that lovely EULA! 🙂

Select the configuration of the VCD instance. Each configuration has different resource allocations.

Select the storage you wish to deploy the VCD instance to; for mine, I chose my vSAN storage.

Select the networks that VCD will utilize. For my setup I am using two NSX-T overlay-backed segments, with the database segment isolated and the VCD segment routable.

Verify all settings before hitting Finish!

After the deployment is completed you can integrate VCD with NSX-T and vCenter

Login via https://x.x.x.x/provider (this will allow you to login into VCD as the provider)

Once logged in, go to Infrastructure Resources

Click ADD – to add vCenter server

Once you have accepted the SSL certificate from vCenter, enable tenant access and click Finish. After vCenter has been added, you will see the overall vCenter info, as in the screenshots below.

After vCenter has been added, you may add NSX-T managers

Click on ADD – fill in the NSX-T Manager(s) URL/IP and user account

Trust the certificate from NSX-T Managers, then you are all set!

Networking

Upgrading NSX-T 3.1.3.3 to NSX-T 3.2.0

by Tommy Grot December 17, 2021
written by Tommy Grot 3 minutes read

A little about NSX-T 3.2: there are lots of improvements in this release, from stronger multi-cloud security to gateway firewalls and overall better networking and policy enhancements. If you want to read more about them, check out the original blog post from VMware.

Download your bits from VMware’s NSX-T download site. Once you have downloaded all the required packages for your implementation, make sure you have a backup of your NSX-T environment prior to upgrading.

Below is a step-by-step walkthrough on how to upload the VMware-NSX-upgrade-bundle-3.2.0.0.0.19067070.mub upgrade file and proceed with the NSX-T upgrade.

Once you log in, go to the System tab

Go to Lifecycle Management -> Upgrade

For my NSX-T environment, I have already upgraded before from 3.1.3.0 to 3.1.3.3, which is why you see a green “Complete” at the top right of the NSX appliances. Proceed with the UPGRADE NSX button.

Here you will find the VMware-NSX-upgrade-bundle-3.2.0.0.0.19067070.mub file

Continuation of Uploading file

Now you will start uploading it. This will take some time so grab a snack! 🙂

Once it has uploaded, you will see “Upgrade Bundle retrieved successfully”; now you can proceed with the upgrade. Click on the UPGRADE button below.

This lovely EULA! 🙂 Well you gotta accept it if you want to upgrade…

This will prompt you one more time, before you execute the full upgrade process of NSX-T

Once the upgrade bundle has been extracted and the upgrade coordinator has restarted, you are ready to start upgrading the edges.

For my NSX-T, I ran the upgrade pre-checks to ensure there were no issues before I did any major upgrades.

The pre-check results had a few issues, but nothing alarming for my situation.

Below, I am upgrading the edges serially, so that I can keep my services up and running with minimal to no downtime. When I upgraded the NSX-T edges, I only saw one dropped ping out of a continuous ping to one of my web servers.

More progress on the Edge Upgrades

Let’s check up on the NSX edges; below is a snip of one of the edges that was upgraded to NSX-T 3.2.0.

Now that the edges have upgraded successfully, we can proceed to the hosts.

Time to upgrade the hosts! Make sure your hosts are not running production workloads; this upgrade process will put the hosts into maintenance mode, so just make sure you have enough resources.

Now that the host is free of VMs, the upgrade installs the NSX bits on the host. This process is repeated for as many hosts as there are in your cluster.

All the hosts were upgraded successfully with no issues encountered. The next step is to upgrade the NSX Managers.

In the next screenshot below, you will see that the Node OS upgrade process is next. Click Start to initiate the NSX Manager upgrade. If you want to see the status of all the NSX Managers, click on “1. Node OS Upgrade”.

After you click Start, you will see a dialog window pop open warning you not to create any objects within NSX Manager. Later in the upgrade, if the web interface is down, you can log into the NSX Managers via the web console through vCenter and run this command: ‘ get upgrade progress-status ’

This is what the node upgrade status looks like; you can see upgrades happening for the second and third NSX Managers.

Below is a sample screen snip of the NSX01 console where I executed the command to see its status.

Now that the NSX Managers have upgraded their OS, there are still many services that need to be upgraded. Below is a screenshot of the current progress.

All NSX Managers have been upgraded to NSX-T 3.2.0 – Click Done

The upgrade is now complete! 🙂

VMware ESXi

Installing / Upgrading VMware vSphere ESXi 7 Update 1 via iDRAC 8

by Tommy Grot October 7, 2020
written by Tommy Grot 2 minutes read

This post guides you through installing or upgrading VMware ESXi 7 Update 1. The walkthrough is the same for an install or an upgrade; the only difference is that during a fresh install you specify a root password.

Disclaimer: This also works for iDRAC 8 and below, on Dell PowerEdge 13th and 12th generation servers.

Information regarding the VMware vSphere update is on VMware’s website – Click Here to Download; you will need to log in with your VMware credentials.

Name: VMware-VMvisor-Installer-7.0U1-16850804.x86_64
Release Date: 2020-10-06
Build Number: 16850804

First step: log into iDRAC as root. Once logged in, click on Launch Virtual Console as shown in the screenshot below.

Once the Virtual Console window has opened. You will need to mount your VMware-VMvisor-Installer-7.0U1-16850804.x86_64.iso to the Virtual Optical Drive. Then click Map Device.

During the reboot of the server, press F11 to get into the boot menu, select the one-time boot menu, and go down to Virtual Optical Drive.

Once you select Virtual Optical Drive you will see the Dell BIOS loading and running through its boot up processes.

The VMware-VMvisor-Installer-7.0U1-16850804.x86_64.iso is now loading into memory.

Once the VMware ESXi 7.0.1 installer has loaded, you will be greeted with a menu to follow, which is pretty straightforward.

Accept EULA.

Install or upgrade VMware ESXi 7.0.1 on the correct corresponding disk. If it is on an SD card or flash device, it will show up. Make sure not to install it on a VMFS datastore!

During this step you have the ability to do a fresh install or upgrade an existing installation, depending on your scenario. Please follow through carefully and ensure you have backups of your data!

This is the final screen prior to proceeding with the installation or upgrade! Press F11 to upgrade, or accept if it is a fresh install.

Please be patient and let the install finish 🙂

The install/upgrade is complete! Now you may unmap the virtual optical drive and reboot the host. Enjoy the new VMware vSphere 7.0 Update 1!

VMware vCenter

VMware vCenter 7 Update 1 – Installation Walkthrough

by Tommy Grot October 6, 2020
written by Tommy Grot 3 minutes read

 

To install vCenter 7.0 Update 1 with the embedded Platform Services Controller, download the ISO from VMware (a login will be required). Once downloaded, open the VMware VCSA ISO.

  • Select the vcsa-ui-installer folder
  • Then select your operating system’s folder (this tutorial is done on Windows 10)
  • Go to win32
  • Then click on installer.exe

During this install we will be deploying a fresh new instance of VCSA.

  • Select Install

This installation now only offers the embedded Platform Services Controller; the external Platform Services Controller topology is a deprecated model per VMware.

Click Next

  • Accept EULA

Specify the VMware ESXi Host or vCenter IP Address – Example – 10.0.0.100 or DNS Name – esxi01.virtualbytes.io ; vcenter.virtualbytes.io

(VMware recommends operating its platform with DNS names; this allows easier management and IP address changes.)

  • After all the proper information is configured and documented, click Next
  • Accept the certificate warning from the ESXi host
  • Set up the appliance:
    • Specify a name for your vCenter Server (e.g. vCenter, VCSA, etc.)
    • Specify the VCSA root password (if you lose this password, there is a step-by-step procedure to recover access)
  • During Stage 1 you can specify the datastore for the VCSA installation. Enabling thin disk mode allocates only the storage the virtual machine actually uses on the datastore (this keeps the datastore from filling up prematurely)
  • VMware recommends installing vCenter under an FQDN (Fully Qualified Domain Name); with an FQDN you can later change the IP address of the VCSA.
    Be cautious when filling in the “Configure network settings” page – a mistake here will lead to an inoperable vCenter instance.
  • Recommendation – set up an ‘A’ record on your local DNS server or domain controller pointing to the fresh new vCenter Server Appliance

VERIFY! VERIFY! VERIFY! – If you do not verify your appliance deployment configuration and you mis-configure a setting – You will be reinstalling VCSA from scratch again. (BE CAREFUL!)

After you have verified all information – Click Next

  • This stage will take a few minutes – grab a coffee or water! 🙂
  • Once the Deploy vCenter Server Appliance step is completed, you will see another window with “Continue” – click Continue and this will lead you to the next window.
  • Stage 2 – in this stage you configure the appliance settings such as SSH, SSO, and CEIP.
  • SSH – recommended to be turned off for security reasons, unless you are setting up more than one vCenter Server and configuring vCenter High Availability.
  • SSO – single sign-on; the default is vsphere.local. If you have an on-premises domain controller, make sure you do not use the same domain name, as this will prevent you from adding vCenter to the Active Directory domain forest.
  • The recommended option is to synchronize time with the ESXi host, also making sure that the NTP service is running on the ESXi host and is synced with a reliable time server.
  • During the SSO (Single Sign-On) option – if you plan to join your vCenter to an Active Directory domain, make sure to specify a different SSO domain or keep vsphere.local (you cannot use the same AD domain as the SSO domain on the VCSA).

VERY IMPORTANT! Make sure the password you specify in this configuration window is accurate. If not you will be repeating the installation all over again – due to a password mis-configuration

  • Your choice to join the CEIP program (joining CEIP is recommended to get Skyline alerts and health updates)
  • Verify the finalization stage – this stage will configure your vCenter Server Appliance.


  • Once you confirm the final confirmation, there is no going back – unless you took a snapshot of the vCenter Server Appliance prior to starting the vSphere SSO domain portion.
  • Enjoy your coffee or water – and wait patiently (this takes around 10-20 minutes, depending on the hardware).
  • You’ve installed the vCenter Server Appliance! Now you may log into it; the URL for your server will be the DNS A record you configured in your local DNS server.

https://your_vcenter_dns_name

VMware vSAN

Setting Up – Nested ESXi 6.7 on vSAN

by Tommy Grot April 12, 2019
written by Tommy Grot 1 minutes read

Today we will be installing Nested ESXi 6.7 on vSAN.

This is not a supported type of deployment for Production Environments. USE AT YOUR OWN RISK!

Before installing ESXi 6.7, there is a requirement to run an ESXCLI command via SSH on each host that is part of the vSAN cluster.

To enable SSH on ESXi 6.7, go to the desired host, click Configure -> System -> Services, click on SSH, and click Start.

 esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1 

After the command has been entered, continue the installation. (A quick way to apply it across all hosts is sketched below.)
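As referenced above, here is a small sketch for pushing the same advanced setting to every host in the vSAN cluster over SSH; the host names are examples, and SSH must already be enabled on each host as described earlier.

for host in esxi01 esxi02 esxi03 esxi04; do
  ssh root@"$host" "esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1"
done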

After installation, set up ESXi with a static IP address.
Once the IP address is set, apply the settings and select Y – Yes (this will not interrupt VM traffic, only management).
