Virtual Bytes
Tag: VMware Cloud Foundation

Cloud

Cannot establish a remote console connection in VMware Aria Automation 8.12.x

by Tommy Grot June 1, 2023
written by Tommy Grot 1 minute read

Tonight’s troubleshooting tidbit: I deployed VMware Aria Automation and started doing some automation, then ran into an issue where the Remote Console would not open. It failed with the error: “Cannot establish a remote console connection. Verify that the machine is powered on. If the server has a self-signed certificate, you might need to accept the certificate, then close and retry the connection.”

  1. SSH into one vRA virtual appliance in the cluster.
  2. Edit the provisioning service deployment by running the following command: kubectl -n prelude edit deployment provisioning-service-app
  3. Set the following property in the JAVA_OPTS list to false: -Denable.remote-console-proxy=false

Here you can see the original screenshot with -Denable.remote-console-proxy=true; in the next screenshot we switch it to false.

-Denable.remote-console-proxy=false

After you save with wq!, you will return to the main SSH session. I then executed watch kubectl get pods -n prelude, which let me verify that there were no errors during startup.
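For reference, a condensed sketch of the full sequence as I ran it (namespace and deployment names as shown in the steps above):

# edit the provisioning service deployment on one vRA appliance
kubectl -n prelude edit deployment provisioning-service-app
# under JAVA_OPTS, change -Denable.remote-console-proxy=true to false, save with wq!
# then watch the pods restart and verify there are no errors during startup
watch kubectl get pods -n prelude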

Cloud, Networking, VMware NSX

Deploying VMware NSX Advanced Load Balancer

by Tommy Grot May 3, 2023
written by Tommy Grot 2 minutes read

Today’s topic is VMware NSX Advanced Load Balancer (AVI). We will walk through the steps of deploying NSX ALB overlaid on top of your NSX environment.

Features

  • Multi-Cloud Consistency – Simplify administration with centralized policies and operational consistency
  • Pervasive Analytics – Gain unprecedented insights with application performance monitoring and security
  • Full Lifecycle Automation – Free teams from manual tasks with application delivery automation
  • Future Proof – Extend application services seamlessly to cloud-native and containerized applications

More information at VMware’s site here

What You Will Need:

  • A Configured and running NSX Environment
  • NSX ALB Controller OVA (controller-22.1.3-9096.ova)
  • Supported Avi controller versions: 20.1.7, 21.1.2 or later versions
  • Obtain IP addresses needed to install an appliance:
    • Virtual IP of NSX Advanced Load Balancer appliance cluster
    • Management IP address
    • Management gateway IP address
    • DNS server IP address
  • The cluster VIP and all controllers’ management addresses must be in the same subnet.

Let’s start by deploying the controller OVF.

I like to keep names neat and consistent; the following are the names I utilized:

Virtual Machine Names:
  • nsx-alb-01
  • nsx-alb-02
  • nsx-alb-03

You need a total of three controllers deployed to create a highly available NSX ALB.

Click Ignore All, or you will get the error shown below.

Select your datastore ->

Click Next ->

My DNS Records:

  • nsx-alb-01.virtualbytes.io
  • nsx-alb-02.virtualbytes.io
  • nsx-alb-03.virtualbytes.io

We are deploying!

Access your first appliance via the FQDN you set in the steps above.

Create your password for the local admin account.

Create your passphrase, and set your DNS resolvers and DNS search domains.

Skip SMTP if not needed; if you do need a mail server, fill out the required SMTP IP and port.

  • Tenant context mode – Service Engines are managed within the tenant context and are not shared across tenants.
  • Provider context mode – Service Engines are managed within the provider context and are shared across tenants.

That is it for the initial deployment; next we will add the two additional NSX ALB nodes for the HA setup.

Go to Administration -> Controller -> Nodes

Click Edit ->

For your two additional NSX ALB nodes you will need to provide an IP address, hostname, and password.

A sample of what it should look like for all three ALB appliances:

A simple topology of what we have deployed.
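If you prefer to verify the cluster from the command line instead of the UI, the controller also exposes a REST API. A minimal sketch, assuming the /api/cluster/runtime endpoint available in recent Avi releases (the FQDN is from my lab):

# query cluster runtime state; -u prompts for the admin password
curl -k -u admin https://nsx-alb-01.virtualbytes.io/api/cluster/runtime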

That is it! From here you can configure NSX ALB for whatever use case you need. A future blog post will go through how to set up an NSX-T Cloud.

Licensing flavors – if you click the little cog icon next to Licensing, you will see the different tiers.

The different license tiers that are part of the NSX ALB licensing model.

VMware NSX

VMware NSX – Segment fails to delete from NSX Manager. Status is “Delete in Progress”

by Tommy Grot April 28, 2023
written by Tommy Grot 2 minutes read

Today’s troubleshooting tidbit: if an NSX Segment was removed from the NSX Policy UI but the NSX Manager UI still shows the segment as in use and active and it cannot be deleted, no problem at all. We will clean it up.

For more reference, VMware has published a KB for this here.

Below you will see my vmw-vsan-segment, which was stuck and reported as dependent on another configuration, though it was not. This segment was created from within VMware Cloud Director.

Confirm that no ports are in use on the Logical Switch that was not deleted.

Let’s SSH into one of the NSX Managers and execute the command below. Run get logical-switches on the Local Manager CLI, confirm the stale Logical Switch is listed, and note its UUID.

get logical-switches

Elevate to the root shell with the command below. Use st en to enter engineering mode, which is a root-privileged mode.

st en

Confirm the Logical Switch info can be polled with the API:
curl -k -v -H "Content-Type:application/json" -u admin -X GET "https://{mgr_IP}/api/v1/logical-switches/{LS_UUID}"

Example of my command below:

 curl -k -v -H "Content-Type:application/json" -u admin -X GET "https://172.16.2.201/api/v1/logical-switches/e2f51ece-99fe-417a-b7db-828a6a39234b"

Remove stale Logical Switch objects via API:
curl -k -v -H "Content-Type:application/json" -H "X-Allow-Overwrite:true" -u admin -X DELETE "https://{mgr_IP}/api/v1/logical-switches/{LS_UUID}?cascade=true&detach=true"

Example of my command below:

curl -k -v -H "Content-Type:application/json"  -H "X-Allow-Overwrite:true" -u admin -X DELETE "https://172.16.2.201/api/v1/logical-switches/e2f51ece-99fe-417a-b7db-828a6a39234b?cascade=true&detach=true"

You should see a 200 response code returned if the deletion is successful.
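As a final check, re-run the earlier command on the Local Manager CLI and confirm the stale Logical Switch is no longer listed:

get logical-switches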

That is all, we successfully cleaned up our NSX Segment that was stuck!

VMware ESXi, VMware vCenter

Upgrading to VMware vSphere 8 Update 1

by Tommy Grot April 18, 2023
written by Tommy Grot 4 minutes read

Tonight’s topic is upgrading from vSphere 8.0 to the much-anticipated vSphere 8 Update 1. In this walkthrough we will go step by step through what you need to do before you upgrade your vSphere environment.

What’s New

Some tidbits of information from the release notes below; for more information, check out the release notes here.

  • vSphere 8.0 IA/GA Release Model: For more information on the Release Model of vSphere Update releases, see The vSphere 8 Release Model Evolves.
  • vSphere Configuration Profiles: vSphere 8.0 Update 1 officially launches vSphere Configuration Profiles, which allow you to manage ESXi cluster configurations by specifying a desired host configuration at the cluster level, automate the scanning of ESXi hosts for compliance to the specified Desired Configuration and remediate any host that is not compliant. vSphere Configuration Profiles require that you use vSphere Lifecycle Manager images to manage your cluster lifecycle, a vSphere 8.0 Update 1 environment, and Enterprise Plus or vSphere+ license. For more information, see Using vSphere Configuration Profiles to Manage Host Configuration at a Cluster Level.
  • With vSphere 8.0 Update 1, vSphere Distributed Services Engine adds support for:
    • NVIDIA BlueField-2 DPUs to server designs from Lenovo (Lenovo ThinkSystem SR650 V2).
    • 100G NVIDIA BlueField-2 DPUs to server designs from Dell.
    • UPTv2 for NVIDIA BlueField-2 DPUs.
    • AMD Genoa CPU based server designs from Dell.
  • Support for heterogenous virtual graphics processing unit (vGPU) profiles on the same GPU hardware: vSphere 8.0 Update 1 removes the requirement that all vGPUs on a physical GPU must be of the same type and you can set different vGPU profiles, such as compute, graphics, or Virtual Desktop Infrastructure workload, on one GPU to save cost by higher GPU utilization and reduced workload fragmentation.
  • Integration of VMware Skyline™ Health Diagnostics™ with vCenter: Starting with vSphere 8.0 Update 1, you can detect and remediate issues in your vSphere environment by using the VMware Skyline Health Diagnostics self-service diagnostics platform, which is integrated with the vSphere Client. For more information, see VMware Skyline Health Diagnostics for vSphere Documentation.
  • VM-level power consumption metrics: Starting with vSphere 8.0 Update 1, you as a vSphere admin can track power consumption at a VM level to support the environmental, social, and governance goals of your organization.

What you need:

  • SFTP Server to back up your VCSA
  • ESXi 8.0.1 Image via Customer Connect – (VMware-VMvisor-Installer-8.0U1-21495797.x86_64.iso)
  • A few minutes of preparation

The first thing you want to do is get your vCenter Server Appliance onto the newest version before you upgrade your VMware ESXi hosts to 8.0.1.

Below we will walk through the process to get your VCSA backed up before upgrading!

(Side note: make sure you have SFTP or some other means of backing up your VCSA; this walkthrough will not cover setting up an SFTP server.)

Once the VCSA is backed up successfully, click Stage and Install.

Accept that lovely EULA 🙂 If you don’t then no upgrade for you.

Click Next. Pre-checks will run and then the upgrade process will start. The whole process took me roughly 15 minutes, but this depends on your environment, the size of the database, and how many objects the VCSA maintains.

Install in progress….

During this process it will convert the data from your previous installation over to the new one, so if there are lots of metrics, logs, and historical data it may take a bit.

I took a look at the vSphere Client status page, and there is a new UI that differs from previous vSphere 8.0 deployments.

Let’s log back into vCenter!

We will prep the cluster image; since I have Dell PowerEdge R740 (14th Gen) hardware, I make sure the correct vendor add-on is selected and validated.

After a few minutes of validation, the image for your cluster will be ready to apply.

Let’s start remediating some servers, one by one. As my Dell PowerEdge R740s have Quick Boot enabled, the whole upgrade process took less than 10 minutes per ESXi host.

Upgrade In Process…

While we are waiting, I like to log in to the server’s iDRAC and watch the upgrade process.

A few minutes later we are on VMware ESXi 8.0.1.
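If you want to double-check the build from the host itself, a quick sketch over SSH using standard ESXi commands:

vmware -vl                  # prints the full ESXi version and build number
esxcli system version get   # same information via esxcli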

After all ESXi hosts are upgraded to 8.0.1, go to Configure -> vSAN -> Disk Management and upgrade the disk format version from 17.0 to 18.0.

Some neat additions in vSphere 8 Update 1: I do like the new tiles with more detailed information, and you can also toggle the hamburger menu to collapse the tiles into an easier-to-scan view of all Health Findings.

Also, I am glad the usage tile is back in the vSphere user interface; it is a much-needed and appreciated addition in vSphere 8.0.1.

That is all! After following this walkthrough you should have been able to upgrade from vSphere 8.0 to vSphere 8.0.1.

VMware Troubleshooting

How PCIe NVMe Disks affect VMware ESXi vmnic order assignment

by Tommy Grot April 18, 2023
written by Tommy Grot 3 minutes read

Today’s topic is about VMware Cloud Foundation and homogeneous network mapping when additional PCIe interfaces are present in a server.

VMware KB – How VMware ESXi determines the order in which names are assigned to devices (2091560) covers vmnic ordering and assignment; the post below explains what happens when NVMe PCIe disks are part of a host.

What kind of environment? – VMware Cloud Foundation 4.x

If a system has:

  • Four onboard network ports
  • One dual-port NIC in slot #2
  • One dual-port NIC in slot #4

Then device names should be assigned as:

Physical Port     Device Alias
Onboard port 1    vmnic0
Onboard port 2    vmnic1
Onboard port 3    vmnic2
Onboard port 4    vmnic3
Slot #2 port 1    vmnic4
Slot #2 port 2    vmnic5
Slot #4 port 1    vmnic6
Slot #4 port 2    vmnic7

The problem:

A physical server has more PCIe devices than another server that you want to bring into an existing or new cluster.

An example: a Dell PowerEdge R740 with 24 NVMe PCIe SSDs, two QSFP Mellanox 40Gb PCIe NICs, one NDC LOM Intel X710 quad-port 10Gb SFP+, and a BOSS card, versus another server with the same network cards (two QSFP Mellanox 40Gb PCIe NICs, one NDC LOM Intel X710 quad-port 10Gb SFP+, and a BOSS card) but fewer than 24 drives.

This shifts the server’s PCIe hardware IDs by (N) and throws the vmnic mapping out of order: certain vmnics show up in the wrong positions, which breaks the homogeneous network layout expected for VMware Cloud Foundation ESXi hosts that are part of a workload domain. It is important to have identical hardware in a VCF implementation for a successful workload domain deployment.

This type of issue causes problems for any future VMware Cloud Foundation 4.x deployments. In an existing cluster with a high-density compute node, a vGPU node, or a high-density storage node, it throws off the PCIe mapping and prevents all ESXi hosts from having a homogeneous vmnic-to-physical-NIC mapping.

The Fix:

Before you start any reconfiguration of your ESXi host within VCF, make sure to decommission that host from its cluster within the workload domain.

Once the host is removed from that cluster within the workload domain:

Go to -> Workload Domains -> (Your Domain) ->

Clusters -> Hosts (Tab) -> Select the host you want to remove

Then go back to the main SDDC page -> Hosts -> and decommission the selected ESXi host.

Once the host is decommissioned, wipe all your NVMe disks first, then shut down the ESXi host and pull the NVMe disks out slightly to ensure they do not power on. That way, on the next re-image of the host only one disk is present, which should be your boot drive or BOSS M.2 SSD.

After the server is back up, log back into your ESXi host; the vmnics should now be aligned and showing up correctly in a homogeneous layout.
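To confirm the final ordering from the host, a quick sketch using the standard ESXi CLI:

# list vmnics with their PCI addresses and drivers to verify the vmnic-to-NIC mapping
esxcli network nic list
esxcfg-nics -l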

The Results:

Before

After

Cloud

VMware Cloud Director 10.4.X & Terraform Automation Part 2

by Tommy Grot April 13, 2023
written by Tommy Grot 6 minutes read

Tonight’s multi-post is about VMware Cloud Director 10.4.x and Terraform!

With Terraform there are endless possibilities: you can create a virtual data center, tailor it to your liking, and keep it as an automated deployment. In this multi-part blog post we dig into VCD and Terraform infrastructure-as-code automation. If you would like to see what we did in Part 1, here is the previous post – VMware Cloud Director 10.4.X & Terraform Automation Part 1

What You will Need:

  • A Linux VM to execute Terraform from
  • Latest Terraform provider (I am using 3.9.0-beta.2)
  • Gitlab / Code Repo (Optional to store your code)
  • VMware Cloud Director with NSX-T Integrated already
  • Local Account with Provider Permissions on VCD (mine is terraform)

Lets Begin!

In this first part we will add on to the existing Terraform automation we started in Part 1 of this multi-part blog. Below is the provider configuration for reference.

terraform {
  required_providers {
    vcd = {
      source  = "vmware/vcd"
      version = "3.9.0-beta.2"
    }
  }
}

provider "vcd" {
  url                  = "https://cloud.virtualbytes.io/api"
  org                  = "system"
  user                 = "terraform"
  password             = "VMware1!"
  auth_type            = "integrated"
  max_retry_timeout    = 60
  allow_unverified_ssl = true
}

Next, we will add Data Center Groups to our Terraform template. Here we create a virtual data center group that can span multiple organizations if need be; for this demonstration I am using a DCG for Distributed Firewall purposes.

#### Create VDC Org Group 

resource "vcd_vdc_group" "demo-vdc-group" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org                   = "demo-org-10"
  name                  = "demo-vdc-group"
  description           = "Demo Data Center Group"
  starting_vdc_id       = vcd_org_vdc.demo-org-10.id
  participating_vdc_ids = [vcd_org_vdc.demo-org-10.id]
  dfw_enabled           = true
  default_policy_status = true
}

In the next code snippet we reconfigure the Data Center Group firewall from the default internal-to-internal drop rule to an any/any/allow configuration.

##### DFW VDC Group to Any-Any-Allow
resource "vcd_nsxt_distributed_firewall" "lab-03-pro-dfw" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org = "demo-org-10"
  vdc_group_id = vcd_vdc_group.demo-vdc-group.id
  rule {
    name        = "Default_VdcGroup_demo-vdc-group"
    direction   = "IN_OUT"
    ip_protocol = "IPV4"
    source_ids = [vcd_nsxt_security_group.static_group_1.id]
    destination_ids = []
    action      = "ALLOW"
  }
}

If you want to create multiple rules within a Distributed Firewall, below are some examples; these will not be part of the code implementation.

##### Sample DFW Rule Creation
resource "vcd_nsxt_distributed_firewall" "lab-03-pro-dfw-1" {
  depends_on = [vcd_org_vdc.demo-org-10]
  org = "demo-org-10"
  vdc_group_id = vcd_vdc_group.demo-vdc-group.id
  rule {
    name        = "rule-1" # Here you will create your name for the specific firewall rule
    direction   = "IN_OUT" # One of IN, OUT, or IN_OUT. (default IN_OUT)
    ip_protocol = "IPV4"
    source_ids = []
    destination_ids = []
    action      = "ALLOW"
  }
}

Some more detailed information from the Terraform provider documentation:

Each firewall rule contains the following attributes:

  • name – (Required) Explanatory name for firewall rule (uniqueness not enforced)
  • comment – (Optional; VCD 10.3.2+) Comment field shown in UI
  • description – (Optional) Description of firewall rule (not shown in UI)
  • direction – (Optional) One of IN, OUT, or IN_OUT. (default IN_OUT)
  • ip_protocol – (Optional) One of IPV4, IPV6, or IPV4_IPV6 (default IPV4_IPV6)
  • action – (Required) Defines if it should ALLOW, DROP, REJECT traffic. REJECT is only supported in VCD 10.2.2+
  • enabled – (Optional) Defines if the rule is enabled (default true)
  • logging – (Optional) Defines if logging for this rule is enabled (default false)
  • source_ids – (Optional) A set of source object Firewall Groups (IP Sets or Security groups). Leaving it empty matches Any (all)
  • destination_ids – (Optional) A set of destination object Firewall Groups (IP Sets or Security groups). Leaving it empty matches Any (all)
  • app_port_profile_ids – (Optional) An optional set of Application Port Profiles.
  • network_context_profile_ids – (Optional) An optional set of Network Context Profiles. Can be looked up using vcd_nsxt_network_context_profile data source.
  • source_groups_excluded – (Optional; VCD 10.3.2+) – reverses value of source_ids for the rule to match everything except specified IDs.
  • destination_groups_excluded – (Optional; VCD 10.3.2+) – reverses value of destination_ids for the rule to match everything except specified IDs.

Now that we have established firewall rules within our template, you can create IP Sets, which are groups of addresses you can use like ACLs and reference in firewall rules, static groups, and more!

#### Demo Org 10 IP sets
resource "vcd_nsxt_ip_set" "ipset-server-1" {
  org = "demo-org-10" # Optional

  edge_gateway_id = vcd_nsxt_edgegateway.lab-03-pro-gw-01.id

  name        = "first-ip-set"
  description = "IP Set containing IPv4 address for a server"

  ip_addresses = [
    "10.10.10.50",
  ]
}

Static Groups are another great way to group networks and members. For this example, my Static Group consists of my domain network segment, and with it I can use the group in firewall rules.

#### Create Static Group
resource "vcd_nsxt_security_group" "static_group_1" {
  org = "demo-org-10"
  edge_gateway_id = vcd_nsxt_edgegateway.lab-03-pro-gw-01.id

  name        = "domain-network"
  description = "Security Group containing domain network"

  member_org_network_ids = [vcd_network_routed_v2.nsxt-backed-2.id]
}

An example of how to use a Static Group within a firewall rule (referencing the static_group_1 resource defined above):

  rule {
    name        = "domain-network" ## firewall rule name
    action      = "ALLOW"
    direction   = "IN_OUT"
    ip_protocol = "IPV4"
    source_ids = [vcd_nsxt_security_group.static_group_1.id]
    destination_ids = [vcd_nsxt_security_group.static_group_1.id]
    logging   = true
  }
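With the resources defined, the usual Terraform workflow applies; a minimal sketch, run from the directory containing the template:

terraform init    # downloads the vmware/vcd provider pinned above
terraform plan    # preview the VDC group, DFW rules, IP set, and static group
terraform apply   # create the resources in VMware Cloud Director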

That is it for the automation for Part 2 of VMware Cloud Director! Stay Tuned for more automation!

Cloud

Load Balancing VMware Cloud Director 10.4.x Cells with NSX ALB (AVI)

by Tommy Grot April 11, 2023
written by Tommy Grot 2 minutes read

Topic of the day: load balancing a VMware Cloud Director 10.4.x multi-cell deployment. For this deployment I am using three VCD cells of the Small size (2 vCPU and 12 GB RAM; per VMware, these are not recommended specifications for a production appliance).

This walkthrough shows how to load balance the appliances only; we are not integrating NSX ALB into VMware Cloud Director for tenants to consume. Stay tuned for a future walkthrough on VCD and NSX ALB integration!

What you will need:

  • Multiple VCD Appliances
  • Certificate with multiple SANs (I used my wildcard cert)
  • Certificates and Public Addresses configured already on all VCD Appliances
  • 4 DNS A records: 1 pointing to the VIP address of the ALB virtual service pool, and 3 for the individual appliances

More information on VMware Cloud Director 10.4.1 Certificate Implementation here

Let’s log in to NSX ALB, go to Virtual Services, and at the top right click “Create Virtual Service”.

-> Advanced Setup

Select the NSX Cloud into which we will deploy the VIP pool.

Select the VRF context; for my deployment I used t1-edge-01-m01-gw, which is my Tier-1 router attached to my primary Tier-0.

Next we will configure the virtual service VIP for our ALB Service Engine.

Attach the VsVIP to your Tier 1 Logical Router

Add a virtual IP that is free within your VIP pool, either pre-allocated manually or assigned dynamically via IPAM. For my implementation I am setting the IP address statically.

Click Save; it will take us back to the main page where we are deploying the virtual service.

Next we will set the profile of our virtual service to the following:

  • System-TCP-Proxy
  • System-L4-Application

(Side topic: VMware Cloud Director works better with a Layer 4 load balancer; issues can occur if a Layer 7 HTTP load balancer is used.)

Now that our profile is set, we will create our pool. I named mine “VMware-Cloud-Director-Appliances-Pool”.

The following settings should be set:

  • Default Server Port: 443
  • Least Connections (can use other Algorithms based on your needs)
  • Tier1 Logical Router – t1-edge01-m01-gw (this is my Tier1)
  • Servers – Created IP Address Group
  • Health Monitor
  • SSL – System-Standard, (Service Edge Client Certificate)

  • Any other settings will depend on your implementation

Once all settings have been configured, hit Save and proceed to the last page, “Advanced”.

Be sure to select your Service Engine Group, or ALB will deploy to the default group, which might cause issues.

Once the AVI Service Engine is deploying, you can go to VCD and set up Public Addresses. The prerequisite is that VCD already has a CA-signed or self-signed SSL certificate configured; you just need to enable public addresses for the web portal and API.
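Before pointing users at the new address, it is worth confirming that the VIP answers and presents the right certificate. A hedged sketch using standard tools (the FQDN is a hypothetical example for my lab):

# confirm DNS resolution and inspect the certificate served on the VIP
nslookup vcd.virtualbytes.io
openssl s_client -connect vcd.virtualbytes.io:443 -servername vcd.virtualbytes.io </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer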

That’s it! A very simple implementation using VMware NSX Advanced Load Balancer to load balance the VMware Cloud Director appliances!

Omnissa Horizon

VMware Horizon 2303 – Deleting / Cleaning up Orphaned VMs / Templates

by Tommy Grot April 10, 2023
written by Tommy Grot 1 minute read

Tonight’s topic is an issue I encountered with VMware Horizon 2303 and an Instant Clone pool. I deleted the instant clone pool from Horizon administration, and it was removed from Horizon, but my cluster kept throwing a vSAN error that objects could not be deleted. So I had to go onto the Horizon Connection Server and use the iccleanup.cmd utility.

Connect to your Horizon Connection server via RDP

Then go to ->

C:\Program Files\VMware\VMware View\Server\tools\bin

While you have this Explorer window open, here is a little shortcut to open a Command Prompt in that directory: just type CMD into the address bar.

iccleanup.cmd -vc <your-fqdn-vcsa> -uid [email protected] -skipCertVeri

Be sure to skip certificate verification if you are using self-signed certs; they can sometimes cause connection issues.

Next, enter the command “list”.

Next, you will have the following options: (unprotect/delete/output/back). In my situation, I unprotected the orphaned cp-template that was causing issues by specifying its index number.

Then I ran the delete command; it found a few more templates from a previous VDI Instant Clone pool and removed all the orphaned templates and VMs!
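For reference, a hedged sketch of what the interactive session looks like (the vCenter FQDN and index numbers are examples; verify the exact flags against VMware’s documentation for your Horizon version):

iccleanup.cmd -vc vcsa.example.local -uid administrator@vsphere.local -skipCertVeri
list             # show internal VMs and templates with their index numbers
unprotect -I 1   # unprotect the orphaned cp-template by its index
delete -I 1      # then delete it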

Commvault Disaster Recovery

Deploying Commvault & Integrating with VMware vSphere 8

by Tommy Grot April 3, 2023
written by Tommy Grot 1 minute read

Today’s topic: deploying Commvault! Commvault is my favorite disaster recovery solution; it has so many awesome features that are simple to use and secure!

Simple, comprehensive backup and archiving

  • Comprehensive workload coverage (files, apps, databases, virtual, containers, cloud) from a single extensible platform and user interface
  • High-performance backups via storage integrations
  • Automated tiering for long-term retention and archiving

Trusted recovery, ransomware protection, and security

  • Rapid, granular recovery of data and applications, including instant recovery of virtual machines
  • Built-in ransomware protection including anomaly detection and reporting
  • End-to-end encryption, including data-at-rest and data-in-flight encryption, to ensure your data is secure

More information Visit Commvault’s Website here!

First we will set up a dedicated VLAN for our Commvault CommCell. This will be a multi-part post; in this first part we deploy a virtual machine and install Commvault.

Prepare a dedicated network for backups

Once, installation is complete you will be able to login to your Comm Cell via web UI.

VMware Troubleshooting

VMware vRealize Lifecycle Suite & VMware Cloud Foundation 4.5 Rollback

by Tommy Grot March 20, 2023
written by Tommy Grot 1 minute read

Today’s topic is VMware Aria Suite Lifecycle, formerly vRealize Suite Lifecycle Manager (vRSLCM). Have you encountered an issue with vRSLCM, or uploaded a PSPACK that you didn’t want to upload? Here we will walk through how to roll back if you encounter any issues!

Tasks:

  • Create a snapshot of our SDDC VCF VM
  • Update vRSLCM Postgres
  • Delete via Developer Center
  • Re-Deploy

After the snapshot has been created, SSH into the VCF SDDC Manager appliance, then elevate to root.

su root

Run the Postgres SQL command below to mark vRSLCM as disabled in the VCF database:

psql -h localhost -U postgres -d platform -c "update vrslcm set status = 'DISABLED'"
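If you want to confirm the change took effect before heading back to the UI, a quick read of the same table (same database and table as the update above):

psql -h localhost -U postgres -d platform -c "select status from vrslcm"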

Now you should see that vRSLCM has been disabled, which lets VCF know that something is wrong with it, so it will now let you roll back.

Then go back to the VCF UI -> Developer Center -> scroll all the way down to APIs for managing vRealize Lifecycle Manager -> select Delete -> Execute.

After vRSLCM is deleted, you will see Roll Back under vRealize Suite, and then you can deploy vRSLCM again!
