Edge TEP IP on the same subnet as local hypervisor TEP

In this blog post I will describe a new feature in NSX-T 3.1 and later: inter-TEP communication within the same host. The Edge TEP IP can now be on the same subnet and VLAN as the local hypervisor TEPs. This is beneficial for collapsed cluster topologies where Edge Node VMs run on vSphere hosts that are themselves configured as NSX-T Transport Nodes. Many environments use a collapsed Compute & Edge or Management & Edge cluster, and this feature is beneficial in all of those topologies.

It is important to note that this feature requires your Edge Node VMs to be connected to NSX VLAN Segments.

In the following diagram I’m showing two Edge Nodes running an Active/Active Tier-0 Gateway. The Edge Nodes run on vSphere hosts which are themselves configured as NSX-T Transport Nodes in a collapsed Compute & Edge cluster topology. This means the Edge Nodes run in a vSphere cluster alongside general VM workloads. Note that this diagram is based on vSphere 7.0 and later, where NSX-T is configured on the VDS instead of a dedicated N-VDS.

We need three VLANs for an Active/Active Tier-0 design:

  • VLAN 100: This is a VLAN allocated for Geneve Overlay networking. Both Hosts and Edge Nodes will have their Tunnel Endpoint (TEP) interfaces connected to this VLAN.
  • VLAN 101: This is a dedicated Transit VLAN for North-South traffic and BGP peering towards ToR-A.
  • VLAN 102: This is a dedicated Transit VLAN for North-South traffic and BGP peering towards ToR-B.

In this example the vSphere hosts have two pNICs. Obviously, with more pNICs there are more options for design and traffic separation. With two pNICs we are sharing Management, vMotion and NSX traffic, both North-South and East-West, which means performance sizing is important and higher-speed pNICs are preferred.

A single VDS 7.0 is configured and VDS port groups for Management and vMotion are created. For the Edge Nodes this means you can connect the Edge Node management interface (vNIC0) to a shared Management port group, for example the one your NSX Managers are connected to. This creates an out-of-band interface for the Edge Nodes which isn’t dependent on any NSX-configured segment and is automatically excluded from the Distributed Firewall.

For the actual Edge Node connectivity we need to configure two NSX Segments as trunks, allowing guest VLAN tagging for at least the required VLANs (100, 101 & 102). To use the inter-TEP feature it is mandatory to use NSX Segments. We leverage Named Teaming Policies in NSX-T to send the right transit traffic to the right pNIC of the host. It is important to include a standby uplink in the host Uplink Profile to avoid black-holing of overlay traffic from the Edge Node in case of a pNIC or link failure.
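To make the trunk segment configuration concrete, here is a minimal sketch of the two trunk segments expressed as NSX-T Policy API payloads. The segment names, transport zone path and teaming policy names are example values for this topology, not taken from a real environment, so adjust them to your own objects before use.

```python
# Illustrative NSX-T Policy API payloads for the two Edge trunk segments.
# Names and paths below are assumptions for this example topology.

def trunk_segment(name: str, teaming_policy: str) -> dict:
    """Build a VLAN trunk segment payload carrying VLANs 100-102."""
    return {
        "display_name": name,
        # Trunk range so the Edge can guest-tag TEP (100) and transit (101/102) VLANs
        "vlan_ids": ["100-102"],
        "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                               "/transport-zones/VLAN-TZ",   # example path
        "advanced_config": {
            # Named teaming policy pins this trunk to the preferred pNIC
            "uplink_teaming_policy_name": teaming_policy,
        },
    }

trunk_a = trunk_segment("Trunk-A-Segment", "Uplink1-Active")
trunk_b = trunk_segment("Trunk-B-Segment", "Uplink2-Active")
# These would be pushed with PUT /policy/api/v1/infra/segments/<segment-id>
```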

The Edge Nodes will be configured with vNIC1 (fp-eth0) connected to Trunk-A-Segment and vNIC2 (fp-eth1) connected to Trunk-B-Segment. Additionally, each Edge Node is configured with a single N-VDS attached to both the Overlay-TZ and the VLAN-TZ. The Uplink Profile for the Edge Nodes enables both fp-eth0 and fp-eth1 to send overlay traffic (Active/Active), while North-South traffic towards ToR-A or ToR-B is pinned to a specific fp-eth interface, and via Named Teaming Policies to the preferred pNIC on the host.
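The Edge Uplink Profile described above can be sketched as an uplink profile payload: both uplinks active for Multi-TEP overlay, plus two named teaming policies that pin the transit traffic. Uplink and policy names here are illustrative assumptions, not values mandated by NSX-T.

```python
# Hedged sketch of the Edge Node uplink profile: Multi-TEP load balancing
# over both fast-path interfaces, with named teamings pinning transit traffic.
# All display names and uplink names are example values.

edge_uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",
    "transport_vlan": 100,          # Geneve TEP VLAN shared with the hosts
    "teaming": {
        # Both uplinks active => two TEP IPs per Edge Node (Multi-TEP)
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [
        # Pin VLAN 101 transit traffic (ToR-A peering) to uplink-1 / fp-eth0
        {"name": "ToR-A", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]},
        # Pin VLAN 102 transit traffic (ToR-B peering) to uplink-2 / fp-eth1
        {"name": "ToR-B", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}]},
    ],
}
```

Note there is deliberately no standby list in the named teamings: the Tier-0 uplinks are meant to stay pinned so that a link failure is handled by BGP, while the overlay teaming keeps both uplinks active.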

The Active-Active Tier-0 Gateway is configured on the Edge Cluster. We configure two NSX Segments with Named Teaming Policies (VLAN101 & VLAN102). The uplinks for the Tier-0 Gateway will be configured on these NSX Segments.
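One Tier-0 uplink interface on the VLAN101 segment could look like the following Policy API sketch. The IP addressing, segment path and edge path are assumed example values for this topology, not values from the post.

```python
# Sketch of one Tier-0 external interface on the VLAN101 uplink segment.
# IPs, names and paths are illustrative assumptions.

t0_uplink_en1_vlan101 = {
    "resource_type": "Tier0Interface",
    "display_name": "en1-uplink-vlan101",
    "type": "EXTERNAL",
    "segment_path": "/infra/segments/VLAN101",          # uplink segment
    "edge_path": "/infra/sites/default/enforcement-points/default"
                 "/edge-clusters/edge-cluster-1/edge-nodes/0",  # Edge Node 1
    "subnets": [
        {"ip_addresses": ["172.16.101.2"], "prefix_len": 24}  # example subnet
    ],
}
# One such interface exists per Edge Node per transit VLAN: four in total
# for two Edge Nodes peering with both ToR-A and ToR-B.
```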

It is important to configure a sufficiently large IP Pool to accommodate both the Host and Edge TEP IP addresses. Since this design leverages Multi-TEP Active/Active, each host requires two TEP IP addresses (one per pNIC), and each Edge Node requires two TEP IP addresses as well. This way both pNICs of the hosts are used for East-West overlay traffic.
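The pool sizing above boils down to simple arithmetic, sketched here as a small helper (the host and Edge counts in the example are hypothetical):

```python
# Quick sizing check for the shared TEP IP pool: with Multi-TEP, every host
# and every Edge Node consumes two TEP addresses from the same pool.

def tep_pool_size(hosts: int, edge_nodes: int, teps_per_node: int = 2) -> int:
    """Minimum number of usable IPs the TEP pool must provide."""
    return (hosts + edge_nodes) * teps_per_node

# Example: a 4-host collapsed cluster running 2 Edge Node VMs
print(tep_pool_size(hosts=4, edge_nodes=2))  # -> 12
```

In practice you would size the pool with extra headroom for cluster growth rather than to the exact minimum.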

VCDX #284