Virtual Bytes
Cloud

VMware Cloud Director 10.4.X & Terraform Automation Part 2

by Tommy Grot April 13, 2023
written by Tommy Grot 6 minutes read

Tonight’s post is Part 2 of our multi-part series on VMware Cloud Director 10.4.x and Terraform!

With Terraform the possibilities are endless: you can create a virtual data center, tailor it to your liking, and keep the whole thing as an automated deployment. In this multi-part blog series we get into VCD and Terraform Infrastructure-as-Code automation. If you would like to see what we did in Part 1, here is the previous post – VMware Cloud Director 10.4.X & Terraform Automation Part 1

What You Will Need:

  • A Linux VM to execute Terraform from
  • The latest Terraform VCD provider (I am using 3.9.0-beta.2)
  • GitLab / code repo (optional, to store your code)
  • VMware Cloud Director with NSX-T already integrated
  • A local account with provider permissions on VCD (mine is terraform)

Let’s Begin!

In this first part we will add on to the existing Terraform automation we started in Part 1 of this multi-part blog. Below is the provider configuration for reference.

terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.9.0-beta.2"
    }
  }
}

provider "vcd" {
  url                  = "https://cloud.virtualbytes.io/api"
  org                  = "system"
  user                 = "terraform"
  password             = "VMware1!"
  auth_type            = "integrated"
  max_retry_timeout    = 60
  allow_unverified_ssl = true
}
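As a side note – hard-coding the password in main.tf is fine for a lab, but the vcd provider can also pick up its connection settings from environment variables, which keeps credentials out of version control. A minimal sketch, using the variable names documented by the terraform-provider-vcd project (values here are the lab values from this post – adjust to your environment):

```shell
# Export vcd provider settings as environment variables so main.tf
# does not need to embed credentials.
export VCD_USER='terraform'
export VCD_PASSWORD='VMware1!'   # use a secrets manager for production
export VCD_ORG='system'
export VCD_URL='https://cloud.virtualbytes.io/api'
```

With these set, the user, password, org, and url arguments can be dropped from the provider block.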

Next, we will add a Data Center Group to our Terraform template. A virtual data center group can span multiple organization VDCs if need be, but for this demonstration I am using the DCG for Distributed Firewall purposes.

#### Create VDC Org Group 

resource "vcd_vdc_group" "demo-vdc-group" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org                   = "demo-org-10"
  name                  = "demo-vdc-group"
  description           = "Demo Data Center Group"
  starting_vdc_id       = vcd_org_vdc.demo-org-10.id
  participating_vdc_ids = [vcd_org_vdc.demo-org-10.id]
  dfw_enabled           = true
  default_policy_status = true
}

In the next code snippet we configure the Data Center Group firewall, changing the default Internal-to-Internal Drop policy to an Allow rule (out of the box, VCD keeps only the internal DFW rule).

##### DFW VDC Group to Any-Any-Allow
resource "vcd_nsxt_distributed_firewall" "lab-03-pro-dfw" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org = "demo-org-10"
  vdc_group_id = vcd_vdc_group.demo-vdc-group.id
  rule {
    name        = "Default_VdcGroup_demo-vdc-group"
    direction   = "IN_OUT"
    ip_protocol = "IPV4"
    source_ids = [vcd_nsxt_security_group.static_group_1.id]
    destination_ids = []
    action      = "ALLOW"
  }
}

If you want to create multiple rules within a Distributed Firewall, below is an example – this will not be part of the code implementation.

##### Sample DFW Rule Creation
resource "vcd_nsxt_distributed_firewall" "lab-03-pro-dfw-1" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org = "demo-org-10"
  vdc_group_id = vcd_vdc_group.demo-vdc-group.id
  rule {
    name        = "rule-1" # Here you will create your name for the specific firewall rule
    direction   = "IN_OUT" # One of IN, OUT, or IN_OUT. (default IN_OUT)
    ip_protocol = "IPV4"
    source_ids = []
    destination_ids = []
    action      = "ALLOW"
  }
}

Some more detailed information from the Terraform provider documentation –

Each firewall rule contains the following attributes:

  • name – (Required) Explanatory name for firewall rule (uniqueness not enforced)
  • comment – (Optional; VCD 10.3.2+) Comment field shown in UI
  • description – (Optional) Description of firewall rule (not shown in UI)
  • direction – (Optional) One of IN, OUT, or IN_OUT. (default IN_OUT)
  • ip_protocol – (Optional) One of IPV4, IPV6, or IPV4_IPV6 (default IPV4_IPV6)
  • action – (Required) Defines if it should ALLOW, DROP, REJECT traffic. REJECT is only supported in VCD 10.2.2+
  • enabled – (Optional) Defines if the rule is enabled (default true)
  • logging – (Optional) Defines if logging for this rule is enabled (default false)
  • source_ids – (Optional) A set of source object Firewall Groups (IP Sets or Security groups). Leaving it empty matches Any (all)
  • destination_ids – (Optional) A set of destination object Firewall Groups (IP Sets or Security groups). Leaving it empty matches Any (all)
  • app_port_profile_ids – (Optional) An optional set of Application Port Profiles.
  • network_context_profile_ids – (Optional) An optional set of Network Context Profiles. Can be looked up using vcd_nsxt_network_context_profile data source.
  • source_groups_excluded – (Optional; VCD 10.3.2+) – reverses value of source_ids for the rule to match everything except specified IDs.
  • destination_groups_excluded – (Optional; VCD 10.3.2+) – reverses value of destination_ids for the rule to match everything except specified IDs.
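For example, the app_port_profile_ids attribute lets a rule match a specific service instead of all ports. The sketch below is not part of this deployment – it looks up the system-defined HTTPS port profile with the provider’s vcd_nsxt_app_port_profile data source and attaches it to a rule (the rule block goes inside a vcd_nsxt_distributed_firewall resource; names are illustrative):

```hcl
# Look up the built-in (SYSTEM scope) HTTPS application port profile
data "vcd_nsxt_app_port_profile" "https" {
  scope = "SYSTEM"
  name  = "HTTPS"
}

  rule {
    name                 = "allow-https"
    direction            = "IN_OUT"
    ip_protocol          = "IPV4"
    action               = "ALLOW"
    source_ids           = [] # empty matches Any
    destination_ids      = [] # empty matches Any
    app_port_profile_ids = [data.vcd_nsxt_app_port_profile.https.id]
  }
```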

Now that we have established firewall rules within our template, next you can create IP Sets – groups of IP addresses that act like ACL objects, which you can reference in firewall rules, static groups, and so on!

#### Demo Org 10 IP sets
resource "vcd_nsxt_ip_set" "ipset-server-1" {
  org = "demo-org-10" # Optional

  edge_gateway_id = vcd_nsxt_edgegateway.lab-03-pro-gw-01.id

  name        = "first-ip-set"
  description = "IP Set containing IPv4 address for a server"

  ip_addresses = [
    "10.10.10.50",
  ]
}

Static Groups are another great way to assign networks and members. For this example, my Static Group consists of my domain network segment and with this I can utilize the group into firewall rules.

#### Create Static Group
resource "vcd_nsxt_security_group" "static_group_1" {
  org = "demo-org-10"
  edge_gateway_id = vcd_nsxt_edgegateway.lab-03-pro-gw-01.id

  name        = "domain-network"
  description = "Security Group containing domain network"

  member_org_network_ids = [vcd_network_routed_v2.nsxt-backed-2.id]
}

An example of how to use a Static Group within a firewall rule (referencing the static_group_1 resource created above):

#### Static Group within a DFW rule
  rule {
    name        = "domain-network" ## firewall rule name
    action      = "ALLOW"
    direction   = "IN_OUT"
    ip_protocol = "IPV4"
    source_ids      = [vcd_nsxt_security_group.static_group_1.id]
    destination_ids = [vcd_nsxt_security_group.static_group_1.id]
    logging     = true
  }

That is it for the automation for Part 2 of VMware Cloud Director! Stay Tuned for more automation!

Cloud

Load Balancing VMware Cloud Director 10.4.x Cells with NSX ALB (AVI)

by Tommy Grot April 11, 2023
written by Tommy Grot 2 minutes read

Topic of the day – load balancing a VMware Cloud Director 10.4.x multi-cell deployment. For this deployment I am using 3 VCD cells, Small size (2 vCPU and 12 GB RAM – per VMware, these are not recommended specifications for a production appliance).

This walkthrough shows how to load balance the appliances only; we are not integrating NSX ALB into VMware Cloud Director for tenants to consume. Stay tuned for a future walkthrough on VCD and NSX ALB integration!

What you will need:

  • Multiple VCD appliances
  • A certificate with multiple SANs (I used my wildcard cert)
  • Certificates and public addresses already configured on all VCD appliances
  • 4 DNS A records: 1 pointing to the VIP address of the ALB VS pool, and 3 for the individual appliances
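As an illustration, the four A records might look like this in a BIND-style zone fragment – the cell hostnames and addresses below are made up for the example; only the VIP name must match the public address configured on VCD:

```text
; VIP record – clients resolve this name to the ALB virtual service
vcd          IN A 192.168.100.10
; one record per VCD cell, for management access to individual appliances
vcd-cell-01  IN A 192.168.100.11
vcd-cell-02  IN A 192.168.100.12
vcd-cell-03  IN A 192.168.100.13
```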

More information on VMware Cloud Director 10.4.1 Certificate Implementation here

Let’s log in to NSX ALB. Go to Virtual Services and, at the top right, click “Create Virtual Service”

-> Advanced Setup

Select the NSX Cloud into which we will deploy the VIP pool.

Select the VRF Context – for my deployment I used my t1-edge-01-m01-gw, which is my Tier-1 router attached to my primary Tier-0.

Next we will configure the Virtual Service VIP for our ALB Service Engine.

Attach the VsVIP to your Tier-1 logical router.

Add a virtual IP that is free within your VIP pool – pre-allocated manually, or assigned dynamically via IPAM. For my implementation I am setting the IP address statically.

Click Save -> this takes us back to the main page where we are deploying the Virtual Service.

Next we will set the Profile of our Virtual Service to the following:

  • System-TCP-Proxy
  • System-L4-Application

(Side note: VMware Cloud Director works better with a Layer 4 load balancer – issues can occur if a Layer 7 HTTP load balancer is used.)

Now that our Profile is set, next we will create our Pool. I named mine “VMware-Cloud-Director-Appliances-Pool”.

The following settings should be set:

  • Default Server Port: 443
  • Least Connections (you can use other algorithms based on your needs)
  • Tier-1 Logical Router – t1-edge01-m01-gw (this is my Tier-1)
  • Servers – the IP address group you created
  • Health Monitor
  • SSL – System-Standard (Service Edge client certificate)
  • Any other settings will depend on your implementation

Once all settings have been configured, hit Save and proceed to the last page, “Advanced”.

Ensure you select your Service Engine Group, or ALB will deploy to the default group, which might cause issues.

Once the AVI Service Engine is deploying, you can go to VCD and set up Public Addresses. The prerequisite is that VCD already has its SSL certificates (CA-signed or self-signed) configured; you just need to enable Public Addresses for the Web Portal and API.

That’s it! A very simple implementation using VMware NSX Advanced Load Balancer to load balance VMware Cloud Director appliances!

Omnissa Horizon

VMware Horizon 2303 – Deleting / Cleaning up Orphaned VMs / Templates

by Tommy Grot April 10, 2023
written by Tommy Grot 1 minutes read

Tonight’s topic is an issue I encountered with VMware Horizon 2303 and an Instant Clone pool. I deleted the instant clone pool from Horizon administration and it was removed from Horizon, but my cluster was throwing a vSAN error that the objects could not be deleted. So I had to go into the Horizon Connection Server and use the iccleanup.cmd utility.

Connect to your Horizon Connection server via RDP

Then go to ->

C:\Program Files\VMware\VMware View\Server\tools\bin

While you have this Explorer window open, a little shortcut to change directory and open a Command Prompt there: just type CMD into the address bar.

iccleanup.cmd -vc <your-vcsa-fqdn> -uid <administrator@your-sso-domain> -skipCertVeri

Ensure you skip certificate verification if you are using self-signed certs, as they can sometimes cause connection issues.

Next, enter the command “list”.

You will then have the option to unprotect/delete/output/back. For my situation, I unprotected the orphaned cp-template that was causing issues by specifying its index number.

Then I ran the delete command – it found a few more templates from a previous VDI instant clone pool and removed all orphaned templates and VMs!

Commvault Disaster Recovery

Deploying Commvault & Integrating with VMware vSphere 8

by Tommy Grot April 3, 2023
written by Tommy Grot 1 minutes read

Today’s topic: deploying Commvault! Commvault is my favorite disaster recovery solution – it has so many awesome features that are simple to use and secure!

Simple, comprehensive backup and archiving

  • Comprehensive workload coverage (files, apps, databases, virtual, containers, cloud) from a single extensible platform and user interface
  • High-performance backups via storage integrations
  • Automated tiering for long-term retention and archiving

Trusted recovery, ransomware protection, and security

  • Rapid, granular recovery of data and applications, including instant recovery of virtual machines
  • Built-in ransomware protection including anomaly detection and reporting
  • End-to-end encryption, including data-at-rest and data-in-flight encryption, to ensure your data is secure

For more information, visit Commvault’s website here!

First we will set up a dedicated VLAN for our Commvault CommCell. This will be a multi-part post; in this first part we create a virtual machine and deploy Commvault.

Prepare a dedicated network for backups.

Once installation is complete, you will be able to log in to your CommCell via the web UI.

Cloud

VMware Cloud Director 10.4.x & Terraform Automation Part 1

by Tommy Grot April 3, 2023
written by Tommy Grot 5 minutes read

Today’s post is about VMware Cloud Director 10.4.x and Terraform!

With Terraform the possibilities are endless: you can create a virtual data center, tailor it to your liking, and keep the whole thing as an automated deployment. In this multi-part blog series we get into VCD and Terraform Infrastructure-as-Code automation – and we are starting off with Part 1!

What You Will Need:

  • A Linux VM to execute Terraform from
  • The latest Terraform VCD provider (I am using 3.9.0-beta.2)
  • GitLab / code repo (optional, to store your code)
  • VMware Cloud Director with NSX-T already integrated
  • A local account with provider permissions on VCD (mine is terraform)

Let’s Begin!

To begin our Terraform main.tf, we will specify the VCD provider version – I am using 3.9.0-beta.2:

terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.9.0-beta.2"
    }
  }
}

provider "vcd" {
  url                  = "https://cloud.virtualbytes.io/api"
  org                  = "system"
  user                 = "terraform"
  password             = "VMware1!"
  auth_type            = "integrated"
  max_retry_timeout    = 60
  allow_unverified_ssl = true
}
Once you have your Terraform provider configured with an administrative-privilege account, next we will create an Organization within VCD.

# Creating VMware Cloud Director Organization#
resource "vcd_org" "demo-org-10" {
  name             = "demo-org-10"
  full_name        = "demo-org-10"
  description      = ""
  is_enabled       = true
  delete_recursive = true
  delete_force     = true

  vapp_lease {
    maximum_runtime_lease_in_sec          = 3600 # 1 hour
    power_off_on_runtime_lease_expiration = true
    maximum_storage_lease_in_sec          = 0 # never expires
    delete_on_storage_lease_expiration    = false
  }
  vapp_template_lease {
    maximum_storage_lease_in_sec       = 604800 # 1 week
    delete_on_storage_lease_expiration = true
  }
}

Next, the code below will create a Virtual Data Center within the Organization you created above.

resource "vcd_org_vdc" "demo-org-10" {
  depends_on  = [vcd_org.demo-org-10]
  name        = "demo-org-10"
  description = ""
  org         = "demo-org-10"
  allocation_model  = "Flex"
  network_pool_name = "VB-POOL-01"
  provider_vdc_name = "Provider-VDC"
  elasticity = true
  include_vm_memory_overhead = true
  compute_capacity {
    cpu {
      allocated = 2048
    }

    memory {
      allocated = 2048
    }
  }

  storage_profile {
    name    = "vCloud"
    limit   = 10240
    default = true
  }
  network_quota            = 100
  enabled                  = true
  enable_thin_provisioning = true
  enable_fast_provisioning = true
  delete_force             = true
  delete_recursive         = true
}

Next, we will specify the automation to create a template library within that Virtual Data Center.

#Creating Virtual Data Center Catalog#
resource "vcd_catalog" "NewCatalog" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org = "demo-org-10"

  name             = "Templates"
  description      = "Template Library"
  delete_recursive = true
  delete_force     = true
}
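Once the catalog exists, templates can be pushed into it from Terraform as well. A hedged sketch using the provider’s vcd_catalog_vapp_template resource (available in recent provider versions) – the template name and OVA path below are placeholders:

```hcl
# Upload a vApp template (OVA) into the "Templates" catalog created above
resource "vcd_catalog_vapp_template" "ubuntu" {
  org        = "demo-org-10"
  catalog_id = vcd_catalog.NewCatalog.id

  name        = "ubuntu-22.04"        # placeholder template name
  description = "Base Ubuntu template"
  ova_path    = "/path/to/ubuntu.ova" # placeholder local path
}
```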

The next step depends on whether you already have NSX configured with a Tier-0 VRF ready to consume as the Provider Gateway we are about to ingest into this Virtual Data Center. My Tier-0 VRF is labeled vrf-tier-0-edge-03-gw-lab; I point Terraform at the existing NSX data and assign it to this VDC.

# Add NSX Edge Gateway Tier 0 to VDC
data "vcd_nsxt_manager" "main" {
  name = "nsx-m01"
}

data "vcd_nsxt_tier0_router" "vrf-tier-0-edge-03-gw-lab" {
  name            = "vrf-tier-0-edge-03-gw-lab"
  nsxt_manager_id = data.vcd_nsxt_manager.main.id
}

resource "vcd_external_network_v2" "ext-net-nsxt-t0" {
  depends_on = [vcd_org_vdc.demo-org-10]
  name        = "lab-03-pro-gw-01"
  description = "vrf-tier-0-edge-03-gw-lab"

  nsxt_network {
    nsxt_manager_id      = data.vcd_nsxt_manager.main.id
    nsxt_tier0_router_id = data.vcd_nsxt_tier0_router.vrf-tier-0-edge-03-gw-lab.id
  }

  ip_scope {
    enabled       = true
    gateway       = "192.168.249.145"
    prefix_length = "29"

    static_ip_pool {
      start_address = "192.168.249.146"
      end_address   = "192.168.249.149"
    }
  }
}

Now, that we have created a Provider Gateway by consuming a VRF Tier-0 from NSX, next we will create a Tier-1 Gateway and attach it into the Virtual Data Center so we can add segments!

resource "vcd_nsxt_edgegateway" "lab-03-pro-gw-01" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org         = "demo-org-10"
  owner_id    = vcd_vdc_group.demo-vdc-group.id
  name        = "lab-03-pro-gw-01"
  description = "lab-03-pro-gw-01"

  external_network_id = vcd_external_network_v2.ext-net-nsxt-t0.id

  subnet {
    gateway       = "192.168.249.145"
    prefix_length = "29"
    # primary_ip should fall into the defined "allocated_ips"
    # range, as otherwise the next apply will report an additional
    # "allocated_ips" range containing the single "primary_ip"
    # and will cause a non-empty plan.
    primary_ip = "192.168.249.146"
    allocated_ips {
      start_address = "192.168.249.147"
      end_address   = "192.168.249.149"
    }
  }
}

Now we can create a segment and attach it to our Tier-1 Gateway within the Virtual Data Center!

#### Create VMware Management Network /24
resource "vcd_network_routed_v2" "nsxt-backed-1" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org         = "demo-org-10"
  name        = "vmw-nw-routed-01"
  edge_gateway_id = vcd_nsxt_edgegateway.lab-03-pro-gw-01.id
  gateway       = "10.10.10.1"
  prefix_length = 24
  static_ip_pool {
    start_address = "10.10.10.5"
    end_address   = "10.10.10.10"
  }
}
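Part 2 of this series references a second routed network (vcd_network_routed_v2.nsxt-backed-2) from its static group, so if you plan to follow along, you can stamp out another segment the same way – the subnet below is illustrative:

```hcl
#### Create a second routed segment (referenced as nsxt-backed-2 in Part 2)
resource "vcd_network_routed_v2" "nsxt-backed-2" {
  depends_on      = [vcd_org_vdc.demo-org-10]
  org             = "demo-org-10"
  name            = "vmw-nw-routed-02"
  edge_gateway_id = vcd_nsxt_edgegateway.lab-03-pro-gw-01.id
  gateway         = "10.10.20.1" # illustrative subnet
  prefix_length   = 24
  static_ip_pool {
    start_address = "10.10.20.5"
    end_address   = "10.10.20.10"
  }
}
```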

This is it for Part 1! Stay tuned for Part 2 where we will customize this VDC we created with Terraform!

VMware Troubleshooting

VMware vRealize Lifecycle Suite & VMware Cloud Foundation 4.5 Rollback

by Tommy Grot March 20, 2023
written by Tommy Grot 1 minutes read

Today’s topic is VMware Aria Suite Lifecycle (formerly vRSLCM). Have you encountered an issue with vRSLCM, or uploaded a PSPACK that you didn’t want to upload? Here we will walk through how to roll back if you encounter any issues!

Tasks:

  • Create a snapshot of our VCF SDDC Manager VM
  • Update vRSLCM Postgres
  • Delete via Developer Center
  • Re-Deploy

After the snapshot has been created, let’s SSH into the VCF SDDC Manager appliance and elevate to root:

su root

Run the following Postgres SQL command to mark vRSLCM as disabled in the VCF database:

psql -h localhost -U postgres -d platform -c "update vrslcm set status = 'DISABLED'"

Now you should see that vRSLCM has been disabled, which lets VCF know something is wrong with it, so it will allow you to roll back.

Then go back to the VCF UI: Developer Center -> scroll all the way down to APIs for managing vRealize Lifecycle Manager -> select Delete -> Execute.

After vRSLCM is deleted, you will see Roll Back under vRealize Suite, and then you can deploy vRSLCM again!

Cloud

Upgrading VMware Cloud Director to 10.x Versions

by Tommy Grot March 3, 2023
written by Tommy Grot 4 minutes read

This walkthrough is also valid for VMware Cloud Director 10.6.x upgrades!


What’s New

VMware Cloud Director version 10.4.1.1 release provides bug fixes, updates the VMware Cloud Director appliance base OS and the VMware Cloud Director open-source components.

Resolved Issues

  • VMware Cloud Director operations, such as powering a VM on and off, take a longer time to complete after upgrading to VMware Cloud Director 10.4.1. The task displays a Starting virtual machine status and nothing happens. The jms-expired-messages.logs log file displays an error: RELIABLE:LargeServerMessage & expiration=
  • During an upgrade from VMware Cloud Director 10.4 to version 10.4.1, upgrading the standby cell fails with a Failure: Error while running post-install scripts error message. The update-postgres-db.log log file displays an error:
    > INFO: connecting to source node
    > DETAIL: connection string is: host=primary node ip user=repmgr
    > ERROR: connection to database failed
    > DETAIL:
    > connection to server at “primary node ip”, port 5432 failed: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information: No Kerberos credentials available (default cache: FILE:/tmp/krb5cc_1002)
    > connection to server at “primary node ip”, port 5432 failed: timeout expired
More Fixes and Known Issues here

More Information about VMware Cloud Director 10.4.1

VMware Cloud Director 10.4.1 introduces several new concepts that facilitate creating, deploying, running, and managing extensions. Solution Add-Ons are an evolution of VMware Cloud Director extensions that are built, implemented, packaged, deployed, instantiated, and managed following a new extensibility framework. Solution Add-Ons contain custom functionality or services and can be built and packaged by a cloud provider or by an independent software vendor. VMware also develops and publishes its own VMware Cloud Director Solution Add-Ons.

My Versions

  • VMware NSX 4.1.0.0.0.21332672
  • VMware vCSA 8.0.0 21216066
  • VMware Cloud Director 10.4.1

First, properly shut down your VCD cells if you have multiple cells. Once they are powered off, take a snapshot of all of the appliances.

Next we will upload the tar.gz file via WinSCP to the primary VCD cell. If you have a multi-cell deployment you will need to upgrade the first cell, then the second and third.

I have logged into the VCD appliance with the root account.

Open a PuTTY session to the VCD appliance and log in as root.

Then change directory to /tmp:

cd /tmp

Create a directory within /tmp with the command below:

mkdir local-update-package

Start uploading the VMware_Cloud_Director_10.4.1.9360-21373231_update.tar.gz upgrade file to the VCD appliance via WinSCP.

File has been successfully uploaded to the VCD appliance.

Next, we will prepare the appliance for the upgrade by moving VMware_Cloud_Director_10.4.1.9360-21373231_update.tar.gz from /tmp to /tmp/local-update-package/:

mv VMware_Cloud_Director_10.4.1.9360-21373231_update.tar.gz /tmp/local-update-package

Once you are in the local-update-package directory with the VMware_Cloud_Director_10.4.1.9360-21373231_update.tar.gz file in place, run the command below to extract the update package:

tar -zxf VMware_Cloud_Director_10.4.1.9360-21373231_update.tar.gz

You can run the “ls” command, and you should see the VMware_Cloud_Director_10.4.1.9360-21373231_update.tar.gz file along with manifest and package-pool.

After you have verified the local update directory, set the update repository:

vamicli update --repo file:///tmp/local-update-package

Check for the update with this command after you have set the repository address:

vamicli update --check

Now we see that an upgrade is staged and almost ready to run! But first we need to shut down the cell(s) with this command:

/opt/vmware/vcloud-director/bin/cell-management-tool -u administrator cell --shutdown

Next, take a backup of the database. Log into the VMware Cloud Director appliance management UI at https://<your-ip>:5480 – the same port as the vCSA VAMI.

The backup was successful! Now, time for the install.

Apply the upgrade for VCD – the command below will install the update:

vamicli update --install latest

The next step is important: if you have any more VCD cell appliances, repeat the first few steps on them, and then run the command below to upgrade the other appliances:

/opt/vmware/vcloud-director/bin/upgrade 

Select Y to proceed with the upgrade.

After a successful upgrade, reboot the VCD appliance and test; after successful tests, remove your snapshots.

VMware NSX

Upgrading VMware NSX 4.0.1.1 to 4.1.0

by Tommy Grot March 1, 2023
written by Tommy Grot 4 minutes read

Topic of the day – how to upgrade VMware NSX 4.0.1.1 to 4.1.0. In this walkthrough I will go through all the required steps, show what the upgrade process looks like, and note any issues I encounter.

Let’s Begin!

What Will You Need:

  • VMware-NSX-upgrade-bundle-4.1.0.0.0.21332672.mub
  • Successful NSX Configuration Backups on your SFTP Server
  • 1 hour!

Versions I am Running:

  • VMware ESXi 8.0 Build 21203435
  • VMware vCenter Server – 8.0.0.10200

These are the supported upgrade paths for each NSX release version (More Info Here):

  • NSX 3.2.x > NSX 4.1.x.
  • NSX 4.0.x > NSX 4.1.x.

What’s New

Information below is from VMware’s NSX Release Notes – More Info Check out here!

NSX 4.1.0 provides a variety of new features to offer new functionalities for virtualized networking and security for private, public, and multi-clouds. Highlights include new features and enhancements in the following focus areas:

  • IPv6 support for NSX internal control and management planes communication – This release introduces support for Control-plane and Management-plane communication between Transport Nodes and NSX Managers over IPv6. In this release, the NSX manager cluster must still be deployed in dual-stack mode (IPv4 and IPv6) and will be able to communicate to Transport Nodes (ESXi hosts and Edge Nodes) either over IPv4 or IPv6. When the Transport Node is configured with dual-stack (IPv4 and IPv6), IPv6 communication will be always preferred.
  • Multi-tenancy available in UI, API and alarm framework – With this release we are extending the consumption model of NSX by introducing multi-tenancy, allowing multiple users in NSX to consume their own objects, see their own alarms, and monitor their VMs with traceflow. This is made possible by the ability for the Enterprise Admin to segment the platform into Projects, giving different spaces to different users while keeping visibility and control.
  • Antrea to NSX Integration improvements – With NSX 4.1, you can create firewall rules with both K8s and NSX objects. Dynamic groups can also be created based on NSX tags and K8s labels. This improves usability and functionality of using NSX to manage Antrea clusters.
  • Online Diagnostic System provides predefined runbooks that contain debugging steps to troubleshoot a specific issue. These runbooks can be invoked by API and will trigger debugging steps using the CLI, API and Scripts. Recommended actions will be provided post debugging to fix the issue and the artifacts generated related to the debugging can be downloaded for further analysis. Online Diagnostic System helps to automate debugging and simplifies troubleshooting

Interoperability Check for VMware vSphere 8 and NSX 4.1

Before we start the upgrade, make sure that you have successful backups of your current NSX environment! For my implementation I used an SFTP server running on an Ubuntu Linux VM, hosted on a secondary-tier storage SAN.

Now that we have verified our backups are good, let’s begin uploading the MUB file.

Go To Upgrade ->

Click on UPGRADE

Browse to the VMware-NSX-upgrade-bundle-4.1.0.0.0.21332672.mub file and upload it.

Wait a few minutes for the MUB to upload.

Now the MUB has been successfully uploaded to the NSX Manager appliance; we wait while it extracts all the files.

Verifying Stage

Now, we are ready to prepare for the Upgrade! (Accept the EULA at the pop up)

This portion of the upgrade will go through many stages within this little window; for example, you will see: Restarting Upgrade Coordinator.

Click Run Pre-Check before upgrading!

Now let the NSX Edge nodes upgrade – this is a pretty quick automated process!

NSX Edge Nodes successfully upgraded!

Now time to upgrade the Transport Nodes (also known as ESXi Hosts)

You will see the upgrade progress in vCenter tasks. (Make sure to power off any VMs that are not on shared storage, such as vSAN or an iSCSI LUN shared within the cluster.)

In Process upgrade for Hosts.

Successful Upgrade

The final part, we will upgrade the NSX Managers

Confirm to start the upgrade

Upgrade in process for the NSX Managers – it does them one at a time.

Data Migration upgrade process

Unpin API

Unpin UI

There it is! Fully upgraded to NSX 4.1.0. In the next blog post we will get into the new features – but check out that new Projects drop-down for multi-tenancy!

Hardware Tips & Tricks

Palo Alto PPPoE VLAN Tagging w/ Century Link Gigabit Fiber

by Tommy Grot February 22, 2023
written by Tommy Grot 1 minutes read

Today’s topic is Palo Alto, PPPoE interfaces, and VLAN tagging. Currently Palo Alto does not support VLAN tagging on an interface that uses PPPoE authentication for your ISP. Our workaround works well, but you definitely need a managed switch so you can trunk and tag an interface, making the CenturyLink FTTH ADTRAN see VLAN 201 as tagged.

Info –

CenturyLink Fiber Internet utilizes VLAN tagging to segregate its Internet traffic and its TV service.

What You Need:

  • PPPoE Username & Password
  • VLAN ID = 201
  • Managed Switch / Router that supports VLAN Tagging

First we will set up two ports on a managed switch. For my implementation I am using a Cisco Catalyst WS-C3750X-24P-S.

Interface Gi1/0/23 – Syntax below to copy (Just FYI, this is for Cisco)

description Trunk to CenturyLink
switchport trunk allowed vlan 201
switchport trunk encapsulation dot1q
switchport mode trunk
switchport nonegotiate
spanning-tree portfast edge

Interface Gi1/0/24 – Syntax below to copy (Just FYI, this is for Cisco)

description WAN-PAFW
switchport access vlan 201
switchport mode access
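After pasting in both interface configs, it is worth verifying the switch sees things the way we intend. Standard IOS show commands work here (the interface numbers match my Gi1/0/23 and Gi1/0/24 example):

```
show interfaces trunk                  ! Gi1/0/23 should list VLAN 201 as allowed/active
show vlan id 201                       ! Gi1/0/24 should appear as an access member
show interfaces Gi1/0/24 switchport    ! confirms access mode, access VLAN 201
```

If VLAN 201 does not show up in `show vlan id 201`, create it first with `vlan 201` in global config mode.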

Go to Network -> Click on Ethernet1/1

Once the interface is open, make sure your virtual router is configured, then go to IPv4.

Click on General -> select PPPoE -> fill in your username and password for CenturyLink PPPoE authentication -> click OK.

With your CenturyLink PPPoE username and password filled in, you may set a static IP address for your WAN interface under the Advanced tab.

The red Ethernet cable runs from Et1/1 to Gi1/0/24, the port configured as an access port on VLAN 201.

February 22, 2023 0 comments 1K views
Hardware ReviewsHome-Labs

Intel Optane 905P NVMe Opportunity & VMware vSphere 8

by Tommy Grot January 20, 2023
written by Tommy Grot 1 minutes read

Being a part of the VMware vExpert Program is awesome; working with other VMware vExperts and collaborating with them is a blast! Plus, getting sample hardware to install and review in your home lab is pretty sweet, I must say! With #Intel and VMware sending me 10 Intel Optane 905P 280GB SSDs, I have been testing them and putting workloads on top of these NVMe drives. Optane memory delivers extremely low-latency storage access and intense IOPS, allowing many demanding workloads to keep crunching under high load.

In my deployment I have tested many different scenarios; I have a few Optanes in my Dell PowerEdge R740s, a 14th-generation Dell server. These servers have the following specs:

  • Intel(R) Xeon(R) Gold 6138 CPU @ 2.00GHz
  • 512 GB DDR4 Micron memory
  • 2 x Intel Optane 905Ps per Dell PowerEdge R740
  • 2.5TB SAS SSD 12Gb/s
  • 40/50Gb network connectivity (Mellanox ConnectX-4)
  • 1100 Watt Platinum PSUs
Top View of the #Intel Optane 905P
Side View of the #Intel Optane 905P

The three Dell PowerEdge R740s, each with two Intel Optane NVMe 905Ps installed
Wide View

Benchmarks

This test was done on a vSphere 8 VM with CrystalDiskMark –

VM specs: 8 vCPUs, 32 GB of RAM, VM hardware version 20.

CrystalDiskMark – measuring in GB/s!
8 x 16MiB IOPS Test
8 x 128MiB IOPS Test
8x16GiB IOPS Test

Upgrading Firmware of the Intel 905p

Intel MAS Update Tool – Download

I will start the process of upgrading the firmware of all six Intel Optanes installed across the three Dell R740s.

Upload the intel-mas-tool via SCP; I used WinSCP to upload it to the /tmp directory on the ESXi host.

Then I ran:

esxcli software component apply -d /tmp/intel-mas-tool.xxxx
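Once the component is installed (the exact bundle filename is version-specific, hence the xxxx above), the Intel MAS plugin adds an esxcli namespace you can drive the firmware update from. This is a hedged sketch: the `intelmas` namespace and flags below come from Intel's MAS documentation and may differ between tool versions, so check `esxcli intelmas` help output on your host first.

```
# List detected Intel SSDs with their index and current firmware version
esxcli intelmas show -intelssd

# Update firmware on the drive at index 0 (repeat per drive index)
esxcli intelmas load -intelssd 0
```

Run the update against each drive index in turn, and reboot the host afterwards if the tool prompts for it.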


January 20, 2023 0 comments 884 views