NSX-T: Deploy NSX-T Manager 2.5 with OVFtool
If you have the NSX-T Manager OVA on a fileshare or web server, you can deploy it from the vCenter GUI or with a deployment tool like OVFtool.
Using OVFtool to deploy NSX-T Manager gives you a quick, repeatable deployment, and the config file doubles as documentation of the options you chose.
Ensure you have forward and reverse DNS records in place. In the lab I use PowerShell:
Add-DnsServerResourceRecordA -CreatePtr -Name "sitea-nsxm" -ZoneName "lab.vmw.one" -IPv4Address "172.31.150.14"
Add-DnsServerResourceRecordA -CreatePtr -Name "sitea-nsxm1" -ZoneName "lab.vmw.one" -IPv4Address "172.31.150.15"
Add-DnsServerResourceRecordA -CreatePtr -Name "sitea-nsxm2" -ZoneName "lab.vmw.one" -IPv4Address "172.31.150.16"
Add-DnsServerResourceRecordA -CreatePtr -Name "sitea-nsxm3" -ZoneName "lab.vmw.one" -IPv4Address "172.31.150.17"
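To sanity-check resolution before deploying, you can use Resolve-DnsName (a minimal sketch; assumes the machine you run it from uses the lab DNS server):
# Forward lookups for each manager
'sitea-nsxm1','sitea-nsxm2','sitea-nsxm3' | ForEach-Object {
    Resolve-DnsName "$_.lab.vmw.one" -Type A
}
# Reverse lookups for each manager IP
'172.31.150.15','172.31.150.16','172.31.150.17' | ForEach-Object {
    Resolve-DnsName $_ -Type PTR
}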
Create a config file for the VM parameters. I like to use a separate config file for each deployment, as it serves as a record of the initial configuration.
This configuration file is for NSX-T Manager 2.5. There are some minor differences in NSX-T Manager 3.0, which a follow-up post will cover shortly.
Create sitea-nsxm1.ovftool.cfg:
name=sitea-nsxm1
network=stretched-150
# vmFolder=infrastructure
datastore=vsanDatastore
X:injectOvfEnv
X:logFile=sitea-nsxm1.log
X:logLevel=verbose
allowExtraConfig
acceptAllEulas
noSSLVerify
diskMode=thin
powerOn
deploymentOption=small
# Sets memory shares to default, reservations to 0, limits to 0. Lab deployments only.
viMemoryResource=:0:0
# Sets CPU shares to default, reservations to 0, limits to 0. Lab deployments only.
viCpuResource=:0:0
prop:nsx_hostname=sitea-nsxm1.lab.vmw.one
prop:nsx_domain_0=lab.vmw.one
prop:nsx_ip_0=172.31.150.15
prop:nsx_netmask_0=255.255.255.0
prop:nsx_gateway_0=172.31.150.1
prop:nsx_dns1_0=172.31.3.9 172.31.9.1
prop:nsx_ntp_0=172.31.9.1
prop:nsx_role=NSX Manager
prop:nsx_isSSHEnabled=True
prop:nsx_allowSSHRootLogin=False
prop:nsx_passwd_0=VMware1!VMware1!
prop:nsx_cli_passwd_0=VMware1!VMware1!
I'm using OVFtool version 4.3.0 (build-14746126):
c:\>ovftool --version
VMware ovftool 4.3.0 (build-14746126)
Now run OVFtool with the config file against the target cluster:
Z:\Software\VMware\scripts>ovftool --configFile=sitea-nsxm1.ovftool.cfg "Z:\Software\VMware\NSX-T\nsx-unified-appliance-2.5.1.0.0.15314292.ova" vi://administrator@vsphere.local@sitea-vc.lab.vmw.one/Datacenter/host/vSphere-Cluster
Opening OVA source: Z:\Software\VMware\NSX-T\nsx-unified-appliance-2.5.1.0.0.15314292.ova
The manifest validates
Source is signed and the certificate validates
Enter login information for target vi://sitea-vc.lab.vmw.one/
Username: administrator%40vsphere.local
Password: ********
Opening VI target: vi://administrator%40vsphere.local@sitea-vc.lab.vmw.one:443/Datacenter/host/vSphere-Cluster
Deploying to VI: vi://administrator%40vsphere.local@sitea-vc.lab.vmw.one:443/Datacenter/host/vSphere-Cluster
Transfer Completed
Powering on VM: sitea-nsxm1
Task Completed
Completed successfully
If you want to include the administrator@vsphere.local password in the command, change the target to vi://administrator@vsphere.local:VMware1!@sitea-vc.lab.vmw.one/Datacenter/host/vSphere-Cluster.
If you're using a complex password with special characters, you may need to URL-encode them.
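From PowerShell, [uri]::EscapeDataString is a quick way to do that (a minimal sketch; the password shown is hypothetical, and exactly which characters get escaped varies slightly between Windows PowerShell 5.1 and PowerShell 7):
# Escape reserved characters such as @ and : for the vi:// target
[uri]::EscapeDataString('P@ss:word1')
# Output: P%40ss%3Aword1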
The target locator may seem confusing: Datacenter is the datacenter name within vCenter, host is a fixed keyword, and vSphere-Cluster is the name of your cluster.
Increment the VM name and IP address in the OVFtool config file and run the command twice more to end up with three NSX-T Managers in total; the sketch below shows one way to script that.
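If you'd rather not edit the file by hand, something like this works (a minimal sketch; the template file sitea-nsxm.ovftool.cfg.template and its NODENAME/NODEIP placeholders are my own convention, not part of OVFtool):
# Stamp out a config file per node and deploy it.
# ovftool will prompt for the vCenter password on each run.
$nodes = @(
    @{ Name = 'sitea-nsxm1'; Ip = '172.31.150.15' },
    @{ Name = 'sitea-nsxm2'; Ip = '172.31.150.16' },
    @{ Name = 'sitea-nsxm3'; Ip = '172.31.150.17' }
)
foreach ($n in $nodes) {
    (Get-Content 'sitea-nsxm.ovftool.cfg.template') `
        -replace 'NODENAME', $n.Name `
        -replace 'NODEIP', $n.Ip |
        Set-Content "$($n.Name).ovftool.cfg"
    ovftool --configFile="$($n.Name).ovftool.cfg" `
        'Z:\Software\VMware\NSX-T\nsx-unified-appliance-2.5.1.0.0.15314292.ova' `
        'vi://administrator@vsphere.local@sitea-vc.lab.vmw.one/Datacenter/host/vSphere-Cluster'
}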
Disable Snapshots on NSX-T Appliances
Snapshots on NSX-T appliances are NOT supported. Official documentation here.
To disable snapshots on each NSX-T appliance VM (a PowerCLI alternative follows these steps):
- Power off the VM
- Right click the VM and Edit Settings
- On the VM Options tab, expand Advanced
- In the Configuration Parameters field, click Edit Configuration....
- In the Configuration Parameters window, click Add Configuration Params.
- Enter the following:
  For Name, enter snapshot.MaxSnapshots.
  For Value, enter 0.
- Click OK to save the changes.
- Power the VM back on.
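If you'd rather script this, here's a minimal PowerCLI sketch (assumes the VMware PowerCLI module is installed and you're already connected with Connect-VIServer; power each VM off first, as above):
# Add snapshot.MaxSnapshots=0 to each NSX-T Manager VM
'sitea-nsxm1','sitea-nsxm2','sitea-nsxm3' | ForEach-Object {
    Get-VM -Name $_ | New-AdvancedSetting -Name 'snapshot.MaxSnapshots' -Value '0' -Confirm:$false
}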
Join the NSX-T Managers to form a cluster
If you deployed the NSX-T Managers from the GUI, this step is not required. If you deployed with OVFtool, or you're deploying on KVM, you need to form the NSX-T Manager cluster yourself.
Follow along at the official documentation here, or:
SSH to the first NSX-T Manager and log in as admin. (This is why I always enable SSH.)
Run the following commands to get the thumbprint and cluster-id.
NSX CLI (Manager, Policy, Controller 2.5.1.0.0.15314292). Press ? for command list or enter: help
sitea-nsxm1> get certificate api thumbprint
5974198a52876288a3265d738f5bbae915383f33303e7ac23c5708b36292e0e3
sitea-nsxm1>
sitea-nsxm1> get cluster config
Cluster Id: da6b9c4f-df91-4114-9bd7-4176e8354405
Cluster Configuration Version: 0
Number of nodes in the cluster: 1

Node UUID: be010f42-21fc-85d6-ce89-7827422447fa
Node Status: JOINED
 ENTITY UUID IP ADDRESS PORT FQDN
 HTTPS c64a17dd-6cfe-4e2a-80fd-ae31a72fc24f 172.31.150.15 443 sitea-nsxm.lab.vmw.one
 CONTROLLER 1d953532-dba2-4f8b-8992-acfec5d1de0b 172.31.150.15 - sitea-nsxm.lab.vmw.one
 CLUSTER_BOOT_MANAGER 5edecb35-1862-4743-aa56-1df253c1b40d 172.31.150.15 - sitea-nsxm.lab.vmw.one
 DATASTORE 2d5f1143-c4f3-49c7-a2cb-3c58f929fa20 172.31.150.15 9000 sitea-nsxm.lab.vmw.one
 MANAGER f1f93d7c-809e-43df-9492-96eebc6d07c5 172.31.150.15 - sitea-nsxm.lab.vmw.one
 POLICY 5455fd6a-0943-4595-9d36-21377c5c1df9 172.31.150.15 - sitea-nsxm.lab.vmw.one
Make a note of the thumbprint and cluster-id.
Now SSH to each of the other NSX-T Managers and run:
host> join <NSX-Manager-IP> cluster-id <cluster-id> username <NSX-Manager-username> password <NSX-Manager-password> thumbprint <NSX-Manager-thumbprint>
sitea-nsxm2> join 172.31.150.15 cluster-id da6b9c4f-df91-4114-9bd7-4176e8354405 username admin password VMware1!VMware1! thumbprint 5974198a52876288a3265d738f5bbae915383f33303e7ac23c5708b36292e0e3
Join operation successful. Services are being restarted. Cluster may take some time to stabilize.
sitea-nsxm2>
Do the same for the third NSX-T Manager.
sitea-nsxm3> join 172.31.150.15 cluster-id da6b9c4f-df91-4114-9bd7-4176e8354405 username admin password VMware1!VMware1! thumbprint 5974198a52876288a3265d738f5bbae915383f33303e7ac23c5708b36292e0e3
Join operation successful. Services are being restarted. Cluster may take some time to stabilize.
sitea-nsxm3>
Confirm the results with:
sitea-nsxm1> get cluster config
Cluster Id: da6b9c4f-df91-4114-9bd7-4176e8354405
Cluster Configuration Version: 2
Number of nodes in the cluster: 3

Node UUID: be010f42-21fc-85d6-ce89-7827422447fa
Node Status: JOINED
 ENTITY UUID IP ADDRESS PORT FQDN
 HTTPS c64a17dd-6cfe-4e2a-80fd-ae31a72fc24f 172.31.150.15 443 sitea-nsxm1.lab.vmw.one
 CONTROLLER 1d953532-dba2-4f8b-8992-acfec5d1de0b 172.31.150.15 - sitea-nsxm1.lab.vmw.one
 CLUSTER_BOOT_MANAGER 5edecb35-1862-4743-aa56-1df253c1b40d 172.31.150.15 - sitea-nsxm1.lab.vmw.one
 DATASTORE 2d5f1143-c4f3-49c7-a2cb-3c58f929fa20 172.31.150.15 9000 sitea-nsxm1.lab.vmw.one
 MANAGER f1f93d7c-809e-43df-9492-96eebc6d07c5 172.31.150.15 - sitea-nsxm1.lab.vmw.one
 POLICY 5455fd6a-0943-4595-9d36-21377c5c1df9 172.31.150.15 - sitea-nsxm1.lab.vmw.one

Node UUID: eac10f42-8dd2-4055-d177-4b0460974d5c
Node Status: JOINED
 ENTITY UUID IP ADDRESS PORT FQDN
 HTTPS 872a74b4-81d3-48d3-b4b7-cbd305558468 172.31.150.16 443 sitea-nsxm2.lab.vmw.one
 CONTROLLER 016b9c46-2782-4ea4-9fa0-480bf2335a0e 172.31.150.16 - sitea-nsxm2.lab.vmw.one
 CLUSTER_BOOT_MANAGER 90a5a0ad-30aa-4ff8-bcba-af7ef4454440 172.31.150.16 - sitea-nsxm2.lab.vmw.one
 DATASTORE 506165b4-5cf5-485b-9d31-9e9d719d3d51 172.31.150.16 9000 sitea-nsxm2.lab.vmw.one
 MANAGER c57c98cf-396b-4cf4-ba00-8a3735b4a1c8 172.31.150.16 - sitea-nsxm2.lab.vmw.one
 POLICY 60dacbd7-ad60-4769-891c-000facf6458d 172.31.150.16 - sitea-nsxm2.lab.vmw.one

Node UUID: d6120f42-f4ec-0e06-47dc-5440d39a8b51
Node Status: JOINING
 ENTITY UUID IP ADDRESS PORT FQDN
 HTTPS b07664db-ecc2-4538-af64-a23d37457a92 172.31.150.17 443 sitea-nsxm3.lab.vmw.one
 CONTROLLER 75c7cf3f-45fb-40cc-8be5-7f713d4efa56 172.31.150.17 - sitea-nsxm3.lab.vmw.one
 CLUSTER_BOOT_MANAGER f05eb38a-3de3-4bfa-ac65-5cca75884435 172.31.150.17 - sitea-nsxm3.lab.vmw.one
 DATASTORE 93f12cd5-064a-4e77-9528-07f9bbee19f0 172.31.150.17 9000 sitea-nsxm3.lab.vmw.one
 MANAGER e47592ce-3e5d-459f-a627-208d100f7181 172.31.150.17 - sitea-nsxm3.lab.vmw.one
 POLICY 12cbe74d-7441-44bc-b4ef-dcfc8ecf8ad8 172.31.150.17 - sitea-nsxm3.lab.vmw.one
I checked a bit too soon: as you can see, the Node Status of the last NSX-T Manager still says JOINING. Give it a bit more time and check again.
The next time you log in to any of the NSX-T Managers and browse to System / Appliances, you should see all three NSX-T Managers. Never mind the CPU usage; it settles down after a while.
The last step of deployment is to configure the virtual IP (VIP). This method is only supported when the NSX-T Managers are in the same layer 2 network. Look for Virtual IP: Not Set, click EDIT, add the VIP, and click SAVE. Give it a few minutes to apply the settings.
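You can also set the VIP through the NSX-T REST API (a minimal sketch from PowerShell 7; 172.31.150.14 matches the sitea-nsxm DNS record created earlier, and -SkipCertificateCheck is there for the lab's self-signed certificate):
# Set the cluster virtual IP via the NSX-T API
$cred = Get-Credential -UserName admin -Message 'NSX-T admin password'
Invoke-RestMethod -Method Post -Authentication Basic -Credential $cred -SkipCertificateCheck `
    -Uri 'https://sitea-nsxm1.lab.vmw.one/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=172.31.150.14'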