NSX-T Management Cluster Deployment: Part 1


As discussed in one of my previous blogs, NSX-T Architecture (Revamped): Part 1, the release of NSX-T v2.4 brought simplicity, flexibility and scalability to the management plane. If you are not familiar with the NSX-T architecture, I would highly recommend checking out both parts of the architecture blogs, along with NSX-T Management Cluster: Benefits, Roles, CCP Sharding and Failure Handling.

As I like to split some topics of discussion, this topic is again divided into two parts for easier understanding and context:

  1. NSX-T Management Cluster Deployment: Part 1 (this blog) – shares the general requirements which are common across the deployment options discussed in Part 2.
  2. NSX-T Management Cluster Deployment: Part 2 – uncovers the deployment options and their relevant use cases.

At first, I thought about writing a single blog covering the NSX-T Management Cluster deployment options, but quite a few requirements are common across those options, which led me to split this topic into two parts. This blog is Part 1 and covers the general requirements of the NSX-T Management Cluster:

  1. NSX-T Manager deployment is hypervisor agnostic and is supported on vSphere and KVM-based hypervisors:

| Hypervisor | Version (for NSX-T v2.5) | Version (for NSX-T v2.4) | CPU Cores | Memory (GB) |
| --- | --- | --- | --- | --- |
| vSphere | vCenter v6.5 U2d (and later) with ESXi v6.5 P03 (or later); vCenter 6.7 U1b (or later) with ESXi 6.7 EP06 (or later) | vCenter v6.5 U2d (and later) with ESXi v6.5 P03 (or later); vCenter 6.7 U1b (or later) with ESXi 6.7 EP06 (or later) | 4 | 16 |
| RHEL KVM | 7.6, 7.5, and 7.4 | 7.6, 7.5, and 7.4 | 4 | 16 |
| CentOS KVM | 7.5, 7.4 | 7.4 | 4 | 16 |
| SUSE Linux Enterprise Server KVM | 12 SP3 | 12 SP3, SP4 | 4 | 16 |
| Ubuntu KVM | 18.04.2 LTS* | 18.04 and 16.04.2 LTS | 4 | 16 |

* In NSX-T Data Center v2.5, hosts running Ubuntu 18.04.2 LTS must be upgraded from 16.04. Fresh installs are not supported.

  2. The maximum network latency between NSX Manager nodes in the cluster should be 10 ms.
  3. The maximum network latency between NSX Manager nodes and Transport Nodes should be 150 ms.
  4. NSX Manager must have a static IP address; you cannot change the IP address after installation.
  5. The maximum disk access latency should be 10 ms.
  6. When installing NSX Manager, specify a hostname that does not contain invalid characters such as an underscore or special characters such as a dot (“.”). If the hostname contains any invalid or special characters, the hostname is set to nsx-manager after deployment.
  7. The NSX Manager VM running on ESXi has VMware Tools installed; removing or upgrading VMware Tools is not recommended.
  8. Verify that you have the IP address and gateway, DNS server IP addresses, domain search list, and the NTP server IP address for the NSX Manager to use.
  9. Appropriate privileges to deploy an OVF template.
  10. The Client Integration Plug-in must be installed.
  11. NSX Manager VM resource requirements:

| Appliance Size | Memory (GB) | vCPU | Disk Space | VM Hardware Version | Support |
| --- | --- | --- | --- | --- | --- |
| Extra Small | 8 | 2 | 200 GB | 10 or later | “Cloud Service Manager” role only |
| Small | 16 | 4 | 200 GB | 10 or later | Proof of Concepts or Lab |
| Medium | 24 | 6 | 200 GB | 10 or later | Up to 64 hosts |
| Large | 48 | 12 | 200 GB | 10 or later | Up to 1024 hosts |
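As a quick illustration of the sizing table, a small helper could map a planned host count to an appliance size. This is my own sketch, not a VMware tool; the function name and logic are mine, and the thresholds come straight from the table above ("Extra Small" is omitted since it serves the Cloud Service Manager role only):

```python
def choose_appliance_size(host_count: int, lab: bool = False) -> str:
    """Illustrative helper: pick an NSX Manager appliance size from a
    planned host count, using the thresholds in the sizing table above."""
    if lab:
        return "Small"       # Proof of Concepts or Lab only
    if host_count <= 64:
        return "Medium"      # supports up to 64 hosts
    if host_count <= 1024:
        return "Large"       # supports up to 1024 hosts
    raise ValueError("host count exceeds the documented 1024-host maximum")

# Examples:
# choose_appliance_size(50)   -> "Medium"
# choose_appliance_size(500)  -> "Large"
```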

For further information on configuration maximums, please see the link here.
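The latency requirements above (10 ms between Manager nodes, 150 ms from Managers to Transport Nodes, 10 ms disk access) lend themselves to a simple pre-flight check. A minimal sketch, assuming the measured values come from whatever monitoring you already have; the function name and structure are mine, while the thresholds are from the requirements list:

```python
# Maximum latencies from the requirements list, in milliseconds.
MAX_MGR_TO_MGR_MS = 10    # between NSX Manager nodes in the cluster
MAX_MGR_TO_TN_MS = 150    # between NSX Manager nodes and Transport Nodes
MAX_DISK_MS = 10          # disk access latency

def preflight_latency_check(mgr_to_mgr_ms: float,
                            mgr_to_tn_ms: float,
                            disk_ms: float) -> list[str]:
    """Return a list of human-readable violations; an empty list means pass."""
    problems = []
    if mgr_to_mgr_ms > MAX_MGR_TO_MGR_MS:
        problems.append(f"Manager-to-Manager latency {mgr_to_mgr_ms} ms "
                        f"exceeds {MAX_MGR_TO_MGR_MS} ms")
    if mgr_to_tn_ms > MAX_MGR_TO_TN_MS:
        problems.append(f"Manager-to-Transport-Node latency {mgr_to_tn_ms} ms "
                        f"exceeds {MAX_MGR_TO_TN_MS} ms")
    if disk_ms > MAX_DISK_MS:
        problems.append(f"Disk access latency {disk_ms} ms "
                        f"exceeds {MAX_DISK_MS} ms")
    return problems
```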

General Best Practices:

  • Deploy the NSX-T Manager VMs on shared storage.
  • Configure DRS anti-affinity rules to keep NSX-T Manager VMs on separate hosts.
  • When deploying NSX-T service VMs as a high-availability pair, enable DRS to ensure that vMotion functions properly.
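The anti-affinity recommendation boils down to one invariant: no two Manager VMs on the same host. In practice you configure this as a DRS "separate virtual machines" rule; the sketch below (names and function entirely mine, for illustration only) just checks that invariant against a given placement:

```python
from collections import Counter

def violates_anti_affinity(placement: dict[str, str]) -> bool:
    """placement maps NSX Manager VM name -> ESXi host name.
    Returns True if any two Manager VMs share a host -- the situation a
    DRS 'separate virtual machines' anti-affinity rule is meant to prevent."""
    host_counts = Counter(placement.values())
    return any(count > 1 for count in host_counts.values())

# Example (hypothetical names):
# violates_anti_affinity({"nsx-mgr-1": "esxi-01",
#                         "nsx-mgr-2": "esxi-02",
#                         "nsx-mgr-3": "esxi-02"})  # True: two VMs on esxi-02
```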

This completes Part 1, i.e. the general requirements of the NSX-T Management Cluster deployment. Let's discuss the deployment options and their relevant use cases in Part 2.
