NSX-T: Configure NSX-T Manager 2.5
If you followed the previous blog NSX-T: Deploy NSX-T Manager 2.5 with OVFtool you should have NSX-T Managers deployed.
There are a few more things we have to configure within NSX-T Manager:
- Add a Compute Manager (when using vCenter)
- Create IP Pools
- Create Transport Zones
- Create Profiles - Uplink & Transport Nodes
- Configure Host Transport Nodes for NSX
- Deploy Edge Transport Node
- Create Edge Cluster
- Create Segments
- Create T0 gateway
- Create T1 gateway
- Static routes
This is for a collapsed compute/edge cluster.
This is based on:
- NSX-T 2.5.1
- vSphere 6.7 U3
- A single host with 4 pNICs
Because I have a single ESXi host, I'm going with a collapsed management/edge cluster, as per figure 7-11 from the NSX-T design guide.
The following prerequisites need to be in place already:
Site A networks:

Network Name | VLAN ID | IP Range | Number of IPs
---|---|---|---
sitea-mgmt | 101 | 172.31.101.0/24 | 0
sitea-tep-hosts | 102 | 172.31.102.0/24 | 6
sitea-tep-edge | 103 | 172.31.103.0/24 | 4
sitea-uplink-1 | 104 | 172.31.104.0/24 | 2
sitea-uplink-2 | 105 | 172.31.105.0/24 | 2
shared | 150 | 172.31.150.0/24 | 3
Site A VDS portgroups:

VDS Portgroup Name | VLAN ID
---|---
sitea-mgmt | 101
shared | 150
nsxt-trunk1 | 103, 104
nsxt-trunk2 | 103, 105
It's a good habit to put all management/control VMs on an ephemeral portgroup.
Because it's a collapsed compute/edge cluster, you need separate VLANs for the Edge TEP and the Host TEP.
Add Compute Manager
Adding a compute manager (vCenter) makes it easier when deploying additional NSX-T Managers or Edge Nodes, as NSX-T deploys them into the cluster for you. Otherwise you need to deploy each appliance yourself and join it manually.
If you deployed additional NSX-T Managers via the GUI, you would have already done this, so you can skip to Create IP Pools. Otherwise, read on.
From the NSX-T Manager appliance, click System / Fabric / Compute Managers, and then ADD.
Fill in the vCenter details and click ADD. When warned about the missing thumbprint, click ADD again to accept the thumbprint retrieved from vCenter.
If the Registration Status shows 'Not Registered' and the Connection Status shows 'Down', click on 'Not Registered' to view the error.
If you get the error Compute Manager vc.vmw.one is already registered with other NSX Manager <IP address>, you need to remove the previous NSX-T Manager registration from vCenter. Remember, a vCenter can only be registered with one NSX-T Manager, but an NSX-T Manager can have many vCenter compute managers.
See NSX-T Troubleshooting: Compute Manager Already Registered with Another NSX Manager on how to remove the previous NSX-T Manager registration.
If you removed a previous Compute Manager using the delete option within NSX-T Manager, it would have cleaned up the vCenter MOB entry.
Eventually you should see a couple of green dots for Registered and Up.
Add Compute Manager using API
To add a Compute Manager using the API instead, you first need the vCenter thumbprint.
# openssl s_client -connect sitea-vc.lab.vmw.one:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -sha256 -noout -in /dev/stdin
SHA256 Fingerprint=18:6F:E4:CB:51:84:92:E8:DA:26:50:55:8A:63:69:FB:BA:F2:A7:74:BF:91:54:D7:BD:16:2E:C8:DE:3D:98:DB
#
Make the following API call to add vCenter as a compute manager, updating the vCenter address, username, password and thumbprint.
POST https://sitea-nsxm.lab.vmw.one/api/v1/fabric/compute-managers
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "server": "sitea-vc.lab.vmw.one",
  "origin_type": "vCenter",
  "credential" : {
    "credential_type" : "UsernamePasswordLoginCredential",
    "username": "administrator@vsphere.local",
    "password": "VMware1!",
    "thumbprint": "18:6F:E4:CB:51:84:92:E8:DA:26:50:55:8A:63:69:FB:BA:F2:A7:74:BF:91:54:D7:BD:16:2E:C8:DE:3D:98:DB"
  }
}
If you receive an error message about the compute manager already being registered with another NSX Manager, follow the instructions in NSX-T Troubleshooting: Compute Manager Already Registered with Another NSX Manager.
Create IP Pools
IP Pools are ranges of IP addresses that NSX-T automatically assigns to components like TEPs (Tunnel Endpoints).
In my case, I need at a minimum:
- 2 IPs per Transport Node (ESXi host) for host TEPs. Total of 6.
- 2 IPs per Edge Transport Node (Edge Node) for Edge Node TEPs. Total of 4.
For simplicity, I'll specify a range in a /24 network.
You can click through the GUI of NSX-T Manager, or you can do it via the API.
Create IP Pools - GUI
From within the NSX-T Manager GUI, go to Networking / IP Address Pools, and click ADD IP ADDRESS POOL. Add a name such as ip-pool-sitea-host-tep-01, and click Set.
Click ADD SUBNET and choose IP Ranges. Enter the IP range, e.g. 172.31.102.31-172.31.102.40, and click Add Item(s): 172.31.102.31-172.31.102.40 so the value is committed to the field. Add the CIDR 172.31.102.0/24 and Gateway IP 172.31.102.1, then click ADD, then APPLY, then SAVE.
Don't stress if the Status is Uninitialized or Unknown. Click REFRESH at the bottom of the screen and it should change to Up.
Do the same again for ip-pool-sitea-edge-tep-01 with a range of 172.31.103.31-172.31.103.40.
Create IP Pools - API
When you're doing the same thing multiple times, it makes sense to use the API. There'll be more posts on the NSX-T API later; it's pretty amazing.
To create the same IP Pools as above, open VS Code with the REST Client extension and submit the following:
PATCH https://sitea-nsxm.lab.vmw.one/policy/api/v1/infra
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "resource_type": "Infra",
  "id": "infra",
  "display_name": "infra",
  "path": "/infra",
  "relative_path": "infra",
  "children": [
    {
      "IpAddressPool": {
        "resource_type": "IpAddressPool",
        "id": "ip-pool-sitea-edge-tep-01",
        "display_name": "ip-pool-sitea-edge-tep-01",
        "description": "Site-A Edge TEP Pool\n172.31.103.0/24",
        "children": [
          {
            "IpAddressPoolSubnet": {
              "cidr": "172.31.103.0/24",
              "gateway_ip": "172.31.103.1",
              "allocation_ranges": [
                {
                  "start": "172.31.103.31",
                  "end": "172.31.103.40"
                }
              ],
              "resource_type": "IpAddressPoolStaticSubnet",
              "id": "ip-pool-sitea-edge-tep-01",
              "display_name": "ip-pool-sitea-edge-tep-01",
              "children": []
            },
            "resource_type": "ChildIpAddressPoolSubnet"
          }
        ]
      },
      "resource_type": "ChildIpAddressPool"
    },
    {
      "IpAddressPool": {
        "resource_type": "IpAddressPool",
        "id": "ip-pool-sitea-host-tep-01",
        "display_name": "ip-pool-sitea-host-tep-01",
        "description": "Site-A Host TEP Pool\n172.31.102.0/24",
        "children": [
          {
            "IpAddressPoolSubnet": {
              "cidr": "172.31.102.0/24",
              "gateway_ip": "172.31.102.1",
              "allocation_ranges": [
                {
                  "start": "172.31.102.31",
                  "end": "172.31.102.40"
                }
              ],
              "resource_type": "IpAddressPoolStaticSubnet",
              "id": "ip-pool-sitea-host-tep-01",
              "display_name": "ip-pool-sitea-host-tep-01",
              "children": []
            },
            "resource_type": "ChildIpAddressPoolSubnet"
          }
        ]
      },
      "resource_type": "ChildIpAddressPool"
    }
  ]
}
The end result should have 2 IP Pools.
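If you want to double-check via the API, listing the pools should return both entries:

GET https://sitea-nsxm.lab.vmw.one/policy/api/v1/infra/ip-pools
Authorization: Basic admin VMware1!VMware1!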
Create Transport Zone
A transport zone defines the boundary of where segments can be attached.
If you plan to run nested NSX-T, you'll need to use the API to set "nested_nsx": true. See below.
Create 3 Transport zones: TZ_Overlay, TZ_VLAN_Edge, TZ_VLAN_Host.
Create Transport Zone - GUI
From NSX-T Manager, click System / Fabric / Transport Zones. Click ADD.
Unless you know what you're doing, for Host Membership Criteria, select Standard (For all hosts). Enhanced Datapath is targeted for NFV workloads and requires specific pNICs. Read the docs.
Name: TZ_Overlay
N-VDS Name: NVDS
Host Membership Criteria: Standard (For all hosts)
Traffic Type: Overlay
Do the same again twice more, but for VLAN transport zones. Because I am using 2 TEPs on the Edge Node, I need to define different Uplink Teaming Policy Names. The Transport Zone for the Transport Nodes (ESXi hosts) can have the Uplink Teaming Policy Names left blank.
Click ADD to add another Transport Zone.
Name: TZ_VLAN_Host
N-VDS Name: NVDS
Host Membership Criteria: Standard (For all hosts)
Traffic Type: VLAN
Click ADD.
Click ADD to add another Transport Zone.
Name: TZ_VLAN_Edge
N-VDS Name: NVDS
Host Membership Criteria: Standard (For all hosts)
Traffic Type: VLAN
Uplink Teaming Policy Names: ToR-left, ToR-right
Click ADD.
Create Transport Zone - API
In NSX-T 2.5 there's no Policy API to create Transport Zones, so you need to use the MP (Management Plane) API.
Note the "nested_nsx" option: this is only because I plan to run nested NSX-T later on.
POST https://sitea-nsxm.lab.vmw.one/api/v1/transport-zones
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "display_name" : "TZ_OVERLAY",
  "description" : "Transport Zone created via API",
  "host_switch_name" : "NVDS",
  "transport_type" : "OVERLAY",
  "nested_nsx" : true
}
POST https://sitea-nsxm.lab.vmw.one/api/v1/transport-zones
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "display_name" : "TZ_VLAN_HOST",
  "description" : "Transport Zone created via API",
  "host_switch_name" : "NVDS",
  "transport_type" : "VLAN",
  "nested_nsx" : true
}
POST https://sitea-nsxm.lab.vmw.one/api/v1/transport-zones
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "display_name" : "TZ_VLAN_Edge",
  "description" : "Transport Zone created via API",
  "host_switch_name" : "NVDS",
  "transport_type" : "VLAN",
  "nested_nsx" : true,
  "uplink_teaming_policy_names": [
    "ToR-left",
    "ToR-right"
  ]
}
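To confirm all three Transport Zones were created, list them:

GET https://sitea-nsxm.lab.vmw.one/api/v1/transport-zones
Authorization: Basic admin VMware1!VMware1!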
Profiles - Uplink & Transport Nodes
Profiles come in two forms: an Uplink Profile, used by both Edge Nodes and ESXi hosts (Transport Nodes), and a Transport Node Profile, which itself references an Uplink Profile.
An Uplink Profile groups settings like the transport VLAN and teaming policy (uplink failover/load balancing) into a profile you can apply to an Edge Node or to clusters of Transport Nodes. After the profile is applied, any changes to it are reflected on the attached Edge Nodes/clusters.
Having multiple uplinks in the Active list assigns multiple TEPs (one per active uplink).
The differences between the Edge Node and Transport Node Uplink Profiles in my design are the transport VLANs and the teaming policies. The Edge Nodes have named teaming policies so each left/right uplink can be pinned individually, whereas the Transport Nodes just load balance. Make sure the named teaming policy names match what's defined on the Transport Zones.
Uplink Profile - Edge Transport Node (Edge Node) - GUI
From NSX-T Manager, click System / Fabric / Profiles / Uplink Profiles. Click ADD.
Uplink Profile - Edge Transport Node (Edge Node) - API
This creates the Edge uplink profile via the MP API, with two active uplinks for multi-TEP plus the two named teaming policies (MTU and standby uplinks are left unset, so the defaults apply):

POST https://sitea-nsxm.lab.vmw.one/api/v1/host-switch-profiles
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "profile-uplink-edge-multi-tep",
  "description": "Multi TEP Edge Uplink Profile",
  "teaming": {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
      {
        "uplink_name": "uplink-1",
        "uplink_type": "PNIC"
      },
      {
        "uplink_name": "uplink-2",
        "uplink_type": "PNIC"
      }
    ]
  },
  "named_teamings": [
    {
      "name": "ToR-left",
      "policy": "FAILOVER_ORDER",
      "active_list": [
        {
          "uplink_name": "uplink-1",
          "uplink_type": "PNIC"
        }
      ]
    },
    {
      "name": "ToR-right",
      "policy": "FAILOVER_ORDER",
      "active_list": [
        {
          "uplink_name": "uplink-2",
          "uplink_type": "PNIC"
        }
      ]
    }
  ],
  "transport_vlan": 103
}
Uplink Profile - Transport Node (ESXi Host) - GUI
From NSX-T Manager, click System / Fabric / Profiles / Uplink Profiles. Click ADD.
Uplink Profile - Transport Node (ESXi Host) - API
And the same again for the host uplink profile, with both uplinks active and the host TEP transport VLAN:

POST https://sitea-nsxm.lab.vmw.one/api/v1/host-switch-profiles
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "profile-uplink-host-multi-tep",
  "description": "Multi TEP Host Uplink Profile",
  "teaming": {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
      {
        "uplink_name": "uplink-1",
        "uplink_type": "PNIC"
      },
      {
        "uplink_name": "uplink-2",
        "uplink_type": "PNIC"
      }
    ]
  },
  "transport_vlan": 102
}
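As a quick sanity check, listing the uplink profiles should now show both new profiles alongside the system defaults:

GET https://sitea-nsxm.lab.vmw.one/api/v1/host-switch-profiles
Authorization: Basic admin VMware1!VMware1!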
Transport Node Profiles
A Transport Node Profile defines:
- Which Transport Zones it is part of
- Which Uplink Profile to use
- Enable/Disable LLDP
- Where it gets the TEP IPs from
- Physical NIC to uplink name mapping
A Transport Node Profile is later applied to a Transport Node (ESXi Host) or cluster, so you can use different Transport Node Profiles (and Uplink Profiles) for different clusters.
Add Transport Node Profile - GUI
Now to combine the host uplink profile into the Transport Node Profile.
From NSX-T Manager, click System / Fabric / Profiles / Transport Node Profiles. Click ADD.
Enter the name and a description, select the TZ_Overlay & TZ_VLAN_Host Transport Zones, and click the arrow to move them to the Selected side.
In the same window, click N-VDS just under the Add Transport Node Profile window title.
Fill in the details as follows.
For the physical NICs, use the physical NICs on your host that you plan to use for TEP traffic. These physical NICs must not already be in use by another vSwitch/VDS, as they will be claimed by the N-VDS on the host.
The end result should look like this.
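For reference, a Transport Node Profile can also be created via the MP API. This is only a sketch: the display name is one I made up, the vmnic names are whatever is free on your host, and the UUIDs are placeholders you'd look up first (uplink profile from GET /api/v1/host-switch-profiles, IP pool from GET /api/v1/pools/ip-pools, transport zones from GET /api/v1/transport-zones):

POST https://sitea-nsxm.lab.vmw.one/api/v1/transport-node-profiles
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "resource_type": "TransportNodeProfile",
  "display_name": "tnp-sitea-hosts",
  "host_switch_spec": {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": [
      {
        "host_switch_name": "NVDS",
        "host_switch_profile_ids": [
          { "key": "UplinkHostSwitchProfile", "value": "<uplink-profile-uuid>" }
        ],
        "pnics": [
          { "device_name": "vmnic2", "uplink_name": "uplink-1" },
          { "device_name": "vmnic3", "uplink_name": "uplink-2" }
        ],
        "ip_assignment_spec": {
          "resource_type": "StaticIpPoolSpec",
          "ip_pool_id": "<ip-pool-sitea-host-tep-01-uuid>"
        },
        "transport_zone_endpoints": [
          { "transport_zone_id": "<TZ_OVERLAY-uuid>" },
          { "transport_zone_id": "<TZ_VLAN_HOST-uuid>" }
        ]
      }
    ]
  }
}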
Configure Host Transport Nodes for NSX
To push the NSX-T VIBs to the hosts, we apply the Transport Node Profile to a Transport Node (ESXi host) or cluster.
Configure Host Transport Nodes for NSX - GUI
From NSX-T Manager, click System / Fabric / Nodes / Host Transport Nodes. Select the cluster, and click CONFIGURE NSX.
Select the Transport Node Profile created earlier.
Give it time while it configures the cluster.
After a few minutes, click the refresh button at the bottom of the screen; the Configuration State should show Success and the Node Status should be Up.
Configure Host Transport Nodes for NSX - API
On my todo list - Sorry!
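In the meantime, here's a rough sketch of what the MP API call should look like. Applying a Transport Node Profile to a cluster is done via a Transport Node Collection, which ties a vCenter compute collection (the cluster) to the profile. Both IDs below are placeholders — the compute collection ID comes from GET /api/v1/fabric/compute-collections, and the profile ID from GET /api/v1/transport-node-profiles:

POST https://sitea-nsxm.lab.vmw.one/api/v1/transport-node-collections
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "resource_type": "TransportNodeCollection",
  "display_name": "tnc-sitea-cluster",
  "compute_collection_id": "<compute-collection-id>",
  "transport_node_profile_id": "<transport-node-profile-uuid>"
}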
Edge Transport Node (Edge Node)
The Edge Node is a VM that will contain the T0 & T1 SR (Service Router) components, as opposed to the DR (Distributed Router) functions.
Deploy Edge Transport Node (Edge Node) - GUI
From NSX-T Manager, click System / Fabric / Nodes / Edge Transport Nodes. Click ADD EDGE VM. To determine which form factor is right for you, check the official doco.
Set the credentials for the admin, root, and audit CLI accounts. There will be more configuration required later, so enable SSH for admin.
Select the compute manager and where you want to deploy the Edge Node.
Set the management IP (including CIDR) and the portgroup the management interface will attach to.
The Edge Node handles TEP & VLAN traffic, so add both TZ_Overlay and TZ_VLAN_Edge Transport Zones.
Make sure you map uplink-1 to sitea-nsxt-trunk1, and uplink-2 to sitea-nsxt-trunk2. sitea-nsxt-trunk1/sitea-nsxt-trunk2 are VDS portgroups trunking the Edge TEP VLAN and the left/right uplink VLANs.
Do it again for a second Edge Node, using a different IP for its management interface.
Once they have finished deploying, if they don't show Success, click REFRESH at the bottom of the window.
Deploy Edge Transport Node (Edge Node) - API
On my todo list - Sorry!
Create Edge Cluster
From NSX-T Manager, click System / Fabric / Nodes / Edge Clusters. Click ADD.
Select both Edge Nodes, move them to the right-hand side, and click ADD.
The Edge Cluster will show 2 Edge Transport Nodes.
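For reference, the equivalent MP API call looks roughly like this — a sketch, where the member IDs are the Edge Transport Node UUIDs from GET /api/v1/transport-nodes and the cluster name is my own choice:

POST https://sitea-nsxm.lab.vmw.one/api/v1/edge-clusters
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "display_name": "sitea-edge-cluster-01",
  "members": [
    { "transport_node_id": "<sitea-en01-uuid>" },
    { "transport_node_id": "<sitea-en02-uuid>" }
  ]
}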
Create Segments
Segments are the NSX-T equivalent of portgroups: layer 2 broadcast domains where you attach VMs so they can communicate on the network. Segments live inside the NSX-T domain, within a Transport Zone.
Segments are also required for uplinks from an Edge Transport Node (Edge Node) to the DVS portgroups.
Create segments for Edge Node VLAN uplinks
From NSX-T Manager, click Networking / Segments / Segments. Click ADD SEGMENT.
Segment Name: ToR-left
Connected Gateway & Type: None
Transport Zone: TZ_VLAN_Edge | VLAN
VLAN: 104
Click SAVE, and then NO when asked to continue configuring this Segment.
Do the same again for ToR-right.
Segment Name: ToR-right
Connected Gateway & Type: None
Transport Zone: TZ_VLAN_Edge | VLAN
VLAN: 105
Click SAVE, and then NO when asked to continue configuring this Segment.
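If you'd rather script these, a VLAN segment can be created with a single Policy API call per segment. A sketch — the transport_zone_path embeds the TZ_VLAN_Edge UUID, which you'd look up first (e.g. from GET /api/v1/transport-zones):

PATCH https://sitea-nsxm.lab.vmw.one/policy/api/v1/infra/segments/ToR-left
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "display_name": "ToR-left",
  "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-vlan-edge-uuid>",
  "vlan_ids": [ "104" ]
}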
Create VM Segments
Create a couple of segments to test VM connectivity.
From NSX-T Manager, click Networking / Segments / Segments. Click ADD SEGMENT.
Segment Name: Test-192.168.0.1
Connected Gateway & Type: T1
Transport Zone: TZ_OVERLAY | Overlay
Click on Set Subnets
Click ADD SUBNET.
Gateway IP/Prefix Length: 192.168.0.1/24
Click ADD. Click APPLY. Click SAVE.
Add another test segment of:
Segment Name: Test-192.168.1.1
Connected Gateway & Type: T1
Transport Zone: TZ_OVERLAY | Overlay
Gateway IP/Prefix Length: 192.168.1.1/24
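The same test segments via the Policy API would look something like this — again a sketch; connectivity_path points at the Tier-1 Gateway (created in the Tier-1 section below), and the UUID/ID placeholders need filling in:

PATCH https://sitea-nsxm.lab.vmw.one/policy/api/v1/infra/segments/Test-192.168.0.1
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "display_name": "Test-192.168.0.1",
  "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<tz-overlay-uuid>",
  "connectivity_path": "/infra/tier-1s/<t1-gateway-id>",
  "subnets": [
    { "gateway_address": "192.168.0.1/24" }
  ]
}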
Tier-0 Gateway
A Tier-0 Gateway provides the on/off ramp for traffic entering and exiting the overlay network. It connects to a physical router, and routes traffic to one or more Tier-1 Gateways.
Create T0 Gateway
From NSX-T Manager, click Networking / Tier-0 Gateways and click ADD TIER-0 GATEWAY.
Fill in the basic details, and click SAVE.
After saving it, click YES to continue configuring this Tier-0 Gateway.
Expand the INTERFACES section, and click Set.
In the Set Interfaces screen, click ADD INTERFACE.
Name: sitea-en01-ToR-left
IP Address / Mask: 172.31.104.2/24
Connected To(Segment): ToR-left
Edge Node: sitea-en01
Click SAVE.
Now to do the same again for the other interface.
In the Set Interfaces screen, click ADD INTERFACE again.
Name: sitea-en01-ToR-right
IP Address / Mask: 172.31.105.2/24
Connected To(Segment): ToR-right
Edge Node: sitea-en01
Click SAVE.
Do the same again, except choose sitea-en02.
Name: sitea-en02-ToR-left
IP Address / Mask: 172.31.104.3/24
Connected To(Segment): ToR-left
Edge Node: sitea-en02
Click SAVE.
Name: sitea-en02-ToR-right
IP Address / Mask: 172.31.105.3/24
Connected To(Segment): ToR-right
Edge Node: sitea-en02
Click SAVE.
Click CLOSE, and then CLOSE EDITING.
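For the API-minded: in the Policy API, the uplink interfaces hang off the Tier-0's locale services. A sketch of one interface — note the 'default' locale-services ID and the edge_path format are assumptions to verify against your environment (GUI-created Tier-0s may use an auto-generated locale-services ID; check GET /policy/api/v1/infra/tier-0s/<t0-id>/locale-services):

PATCH https://sitea-nsxm.lab.vmw.one/policy/api/v1/infra/tier-0s/<t0-id>/locale-services/default/interfaces/sitea-en01-ToR-left
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "display_name": "sitea-en01-ToR-left",
  "segment_path": "/infra/segments/ToR-left",
  "subnets": [
    { "ip_addresses": [ "172.31.104.2" ], "prefix_len": 24 }
  ],
  "edge_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-uuid>/edge-nodes/<edge-node-index>"
}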
Tier-1 Gateway
A Tier-1 Gateway routes overlay traffic between the Segments that VMs are attached to, and also connects to the Tier-0 Gateway for traffic exiting/entering the overlay network.
Create T1 gateway
From NSX-T Manager, click Networking / Tier-1 Gateways and click ADD TIER-1 GATEWAY.
Give it a name and select the Tier-0 Gateway to link it to. Click SAVE, and NO when asked to continue configuring this Tier-1 Gateway.
With the T1 Gateway, you can now create overlay segments and VMs will be able to communicate between segments.
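The equivalent Policy API call is a short one. A sketch — the gateway ID/name is my own, and TIER1_CONNECTED is what advertises the connected segments to the Tier-0:

PATCH https://sitea-nsxm.lab.vmw.one/policy/api/v1/infra/tier-1s/sitea-t1-01
Authorization: Basic admin VMware1!VMware1!
content-type: application/json

{
  "display_name": "sitea-t1-01",
  "tier0_path": "/infra/tier-0s/<t0-id>",
  "route_advertisement_types": [ "TIER1_CONNECTED" ]
}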
Static Routes
Using static routes isn't the best option, but for a lab, it's perfect. If you have the option, use BGP.
On my Cisco 3750X, I set a static route to the NSX-T Overlay networks:
c3750#conf t
Enter configuration commands, one per line. End with CNTL/Z.
c3750(config)#ip route 192.168.0.0 255.255.255.0 172.31.104.2 10
c3750(config)#ip route 192.168.0.0 255.255.255.0 172.31.104.3 10
c3750(config)#ip route 192.168.0.0 255.255.255.0 172.31.105.2 10
c3750(config)#ip route 192.168.0.0 255.255.255.0 172.31.105.3 10
c3750(config)#ip route 192.168.1.0 255.255.255.0 172.31.104.2 10
c3750(config)#ip route 192.168.1.0 255.255.255.0 172.31.104.3 10
c3750(config)#ip route 192.168.1.0 255.255.255.0 172.31.105.2 10
c3750(config)#ip route 192.168.1.0 255.255.255.0 172.31.105.3 10