NSX-T Troubleshooting Tunnel Status

Switching between a mix of product versions, I forget some of the specifics and have wasted too much time troubleshooting things that were actually working fine. Hopefully this saves you some time.

vmk10 / vmk11 / vmk50 not showing up in vCenter

After deploying NSX-T 2.5 in the lab and successfully configuring the ESXi hosts for NSX-T, vmk10/vmk11/vmk50 did not appear under VMkernel adapters for the hosts in vCenter.


FACT: When using an N-VDS, vmk10/vmk11/vmk50 will NOT show up in vCenter. (With NSX-T 3.0 on a VDS, the TEP VMkernel adapters do show up in vCenter for each host.)

Check within NSX-T Manager instead: click on Interface Details for the host to see the TEP IP addresses and subnet masks.

To confirm an ESXi host is configured for NSX-T, SSH to the host and list the installed NSX VIBs:

[root@site-210-esxi1:~] esxcli software vib list |grep -i nsx
nsx-adf                        2.5.1.0.0-6.7.15314402                VMware  VMwareCertified   2020-05-19
nsx-aggservice                 2.5.1.0.0-6.7.15314423                VMware  VMwareCertified   2020-05-19
nsx-cli-libs                   2.5.1.0.0-6.7.15314375                VMware  VMwareCertified   2020-05-19
nsx-common-libs                2.5.1.0.0-6.7.15314375                VMware  VMwareCertified   2020-05-19
nsx-context-mux                2.5.1.0.0esx67-15314456               VMware  VMwareCertified   2020-05-19
nsx-esx-datapath               2.5.1.0.0-6.7.15314311                VMware  VMwareCertified   2020-05-19
nsx-exporter                   2.5.1.0.0-6.7.15314423                VMware  VMwareCertified   2020-05-19
nsx-host                       2.5.1.0.0-6.7.15314289                VMware  VMwareCertified   2020-05-19
nsx-metrics-libs               2.5.1.0.0-6.7.15314375                VMware  VMwareCertified   2020-05-19
nsx-mpa                        2.5.1.0.0-6.7.15314423                VMware  VMwareCertified   2020-05-19
nsx-nestdb-libs                2.5.1.0.0-6.7.15314375                VMware  VMwareCertified   2020-05-19
nsx-nestdb                     2.5.1.0.0-6.7.15314393                VMware  VMwareCertified   2020-05-19
nsx-netcpa                     2.5.1.0.0-6.7.15314440                VMware  VMwareCertified   2020-05-19
nsx-netopa                     2.5.1.0.0-6.7.15314363                VMware  VMwareCertified   2020-05-19
nsx-opsagent                   2.5.1.0.0-6.7.15314423                VMware  VMwareCertified   2020-05-19
nsx-platform-client            2.5.1.0.0-6.7.15314423                VMware  VMwareCertified   2020-05-19
nsx-profiling-libs             2.5.1.0.0-6.7.15314375                VMware  VMwareCertified   2020-05-19
nsx-proxy                      2.5.1.0.0-6.7.15314435                VMware  VMwareCertified   2020-05-19
nsx-python-gevent              1.1.0-9273114                         VMware  VMwareCertified   2020-05-19
nsx-python-greenlet            0.4.9-12819723                        VMware  VMwareCertified   2020-05-19
nsx-python-logging             2.5.1.0.0-6.7.15314402                VMware  VMwareCertified   2020-05-19
nsx-python-protobuf            2.6.1-12818951                        VMware  VMwareCertified   2020-05-19
nsx-rpc-libs                   2.5.1.0.0-6.7.15314375                VMware  VMwareCertified   2020-05-19
nsx-sfhc                       2.5.1.0.0-6.7.15314423                VMware  VMwareCertified   2020-05-19
nsx-shared-libs                2.5.1.0.0-6.7.15036308                VMware  VMwareCertified   2020-05-19
nsx-upm-libs                   2.5.1.0.0-6.7.15314375                VMware  VMwareCertified   2020-05-19
nsx-vdpi                       2.5.1.0.0-6.7.15314422                VMware  VMwareCertified   2020-05-19
nsxcli                         2.5.1.0.0-6.7.15314296                VMware  VMwareCertified   2020-05-19
[root@site-210-esxi1:~]
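As a quick sanity check, all of the NSX-T VIBs on a host should report the same release version. A minimal sketch of that check, run here against a few sample lines standing in for the live `esxcli software vib list | grep -i nsx` output (the bundled third-party VIBs such as `nsx-python-*` carry their own versions, so they are excluded):

```shell
# Sample lines stand in for live `esxcli software vib list | grep -i nsx` output.
vibs='nsx-proxy   2.5.1.0.0-6.7.15314435
nsx-mpa     2.5.1.0.0-6.7.15314423
nsx-sfhc    2.5.1.0.0-6.7.15314423'
# Count distinct release versions (the part before the build suffix);
# more than one suggests a partial install or upgrade.
distinct=$(echo "$vibs" | grep -v 'nsx-python' \
  | awk '{split($2, v, "-"); print v[1]}' | sort -u | wc -l | tr -d ' ')
echo "distinct NSX release versions: $distinct"
```

On a live host, pipe the real `esxcli` output through the same filter instead of the sample variable.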

List vmkernel adapters on the host:

[root@site-210-esxi1:~]  esxcli network ip interface ipv4 get
Name   IPv4 Address   IPv4 Netmask     IPv4 Broadcast   Address Type  Gateway  DHCP DNS
-----  -------------  ---------------  ---------------  ------------  -------  --------
vmk0   172.31.210.11  255.255.255.224  172.31.210.31    STATIC        0.0.0.0     false
vmk10  172.31.210.36  255.255.255.224  172.31.210.63    STATIC        0.0.0.0     false
vmk11  172.31.210.37  255.255.255.224  172.31.210.63    STATIC        0.0.0.0     false
vmk50  169.254.1.1    255.255.0.0      169.254.255.255  STATIC        0.0.0.0     false
[root@site-210-esxi1:~] 

NSX-T Tunnel Status down / Not Available

Tunnel Status shows as Not Available in the NSX-T Manager UI.

There are no details listed in Tunnel Status for the host.

FACT: If there are no workloads running on a host, no tunnels are established.

You can still confirm TEP communication using vmkping.

Confirm the vxlan netstack is present:

[root@site-210-esxi1:~] esxcli network ip netstack list
defaultTcpipStack
   Key: defaultTcpipStack
   Name: defaultTcpipStack
   State: 4660

vxlan
   Key: vxlan
   Name: vxlan
   State: 4660

hyperbus
   Key: hyperbus
   Name: hyperbus
   State: 4660
[root@site-210-esxi1:~] 
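On a healthy transport node both the vxlan and hyperbus netstacks exist. A minimal sketch of that check, with sample output standing in for the live `esxcli network ip netstack list` command:

```shell
# Sample netstack names stand in for live `esxcli network ip netstack list` output.
stacks='defaultTcpipStack
vxlan
hyperbus'
# Report whether each NSX-T netstack is present.
for s in vxlan hyperbus; do
  echo "$stacks" | grep -qx "$s" && echo "$s: present" || echo "$s: MISSING"
done
```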

Use vmkping to confirm connectivity between all TEP interfaces. Don’t forget to test with packets of at least 1600 bytes, since Geneve-encapsulated traffic needs that much headroom.

[root@site-210-esxi1:~] vmkping -I vmk10 -S vxlan 172.31.210.38 -d -s 1572
PING 172.31.210.38 (172.31.210.38): 1572 data bytes
1580 bytes from 172.31.210.38: icmp_seq=0 ttl=64 time=0.585 ms
1580 bytes from 172.31.210.38: icmp_seq=1 ttl=64 time=0.660 ms
1580 bytes from 172.31.210.38: icmp_seq=2 ttl=64 time=0.423 ms

--- 172.31.210.38 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.423/0.556/0.660 ms

[root@site-210-esxi1:~]
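To cover every TEP pair, you can loop over the peer TEP addresses from each local TEP vmk. A sketch with hypothetical peer IPs, written as an `echo` dry run so the commands can be previewed safely; on a live ESXi host, drop the `echo`:

```shell
# Hypothetical peer TEP list; replace with the TEP IPs in your environment.
PEERS="172.31.210.38 172.31.210.39"
for ip in $PEERS; do
  for vmk in vmk10 vmk11; do
    # `echo` makes this a dry run; remove it to actually send the pings.
    echo vmkping -I "$vmk" -S vxlan "$ip" -d -s 1572
  done
done
```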

If it fails on larger packet sizes, check the MTU on everything in the path: the VMkernel adapters, the N-VDS/VDS, each physical switchport, the TEP VLAN interfaces, and every device in between the ESXi hosts. MTU mismatches are the most common issue.

[root@site-210-esxi1:~] vmkping -I vmk11 -S vxlan 172.31.210.38 -d -s 1572
PING 172.31.210.38 (172.31.210.38): 1572 data bytes
sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)

--- 172.31.210.38 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

[root@site-210-esxi1:~]
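The 1572-byte payload used in these tests isn’t arbitrary: with `-d` (don’t fragment) set, the ICMP payload plus the 8-byte ICMP header and 20-byte IPv4 header must add up to the full 1600 bytes being tested. A quick sketch of the arithmetic:

```shell
# Target test size for NSX-T TEP traffic (Geneve needs MTU >= 1600).
TARGET=1600
IP_HDR=20    # IPv4 header, no options
ICMP_HDR=8   # ICMP echo header
PAYLOAD=$((TARGET - IP_HDR - ICMP_HDR))
echo "vmkping payload size: $PAYLOAD"
```

This is where the `-s 1572` in the vmkping examples comes from.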

If vmkping fails to receive packets:

  • Check that the TEP VLAN is configured and trunked on the physical switchports
  • Check that the correct TEP (transport) VLAN is configured in the Uplink Profile used by the Transport Node Profile
Andrew Dauncey
Senior Consultant at VMware PSO