NSX-T Uplink Profile

An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches, or from NSX-T Edge nodes to top-of-rack switches.

Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes.

Introduction

The settings defined by uplink profiles can include teaming policies, active/standby links, the transport VLAN ID, and the MTU setting.

Standby uplinks are not supported with VM-based NSX-T Edge Nodes: each uplink profile created for a VM-based Edge Node must specify exactly one active uplink and no standby uplink.

Teaming Policy

There are three types of teaming policies:

  • Failover Order
  • Load Balance Source
  • Load Balance Source MAC
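In the NSX-T REST API these UI names correspond to enum values on the teaming policy of an uplink profile. As a sketch (the mapping assumes the `UplinkHostSwitchProfile` schema from the NSX-T API):

```python
# UI teaming policy name -> API enum value
# (UplinkHostSwitchProfile.teaming.policy, assumed from the NSX-T API schema)
TEAMING_POLICY = {
    "Failover Order": "FAILOVER_ORDER",
    "Load Balance Source": "LOADBALANCE_SRCID",      # balances on source port ID
    "Load Balance Source MAC": "LOADBALANCE_SRC_MAC",
}

for ui_name, api_value in TEAMING_POLICY.items():
    print(f"{ui_name} -> {api_value}")
```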

When doing guest VLAN trunking (where multiple MAC addresses come from the same port ID), it is useful to select the Load Balance Source MAC teaming policy.

Note that a LAG is not considered a teaming policy; it is a grouped state of one or more pNICs.

Starting with NSX-T 2.4, an N-VDS can have multiple teaming policies attached.

The NSX-T Edge Node now supports multiple active uplinks with TEPs configured and the Load Balance Source teaming policy. This means an Edge Node can have, for example, two TEPs, each bound to a specific pNIC, which gives better load-balancing capabilities.

Named Teaming Policy

The Named Teaming Policy feature introduced in NSX-T 2.3 (formerly known as VLAN Pinning) has the following common use cases:

  • Pinning VMkernel traffic to a specific pNIC with failover order, which is useful for VMkernel traffic such as Management, vMotion, and vSAN.
  • Providing deterministic NSX-T Edge peering (North-South) with a top-of-rack (ToR) router via a single pNIC.

A Named Teaming Policy is used only for VLAN Logical Switches (Segments); it is configured in the Transport Zone UI and in the Advanced UI on the Logical Switch for that VLAN.

Different Named Teaming Policies can exist under a single N-VDS for different types of VLAN traffic.

Configuration

ESXi Host with two Active Uplinks and Load Balance Source

In this example we create an Uplink Profile for a vSphere ESXi Transport Node with two pNICs available for the N-VDS.

The default teaming policy will be Load Balance Source, so both uplinks are active. The Transport VLAN carrying the Geneve overlay traffic will be VLAN 100, with an MTU of 9000 (the Geneve protocol requires an MTU of at least 1600).

The actual configuration of the Uplink Profile is shown below, with uplink-1 and uplink-2 both specified as active uplinks. When configuring an ESXi host as a Transport Node, the administrator can map the actual vmnic to the required uplink.
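The same profile can also be created through the NSX-T Manager REST API with `POST /api/v1/host-switch-profiles`. The sketch below assumes the `UplinkHostSwitchProfile` schema; the display name is an arbitrary example:

```python
import json

# Hypothetical example payload: ESXi uplink profile with both uplinks active.
esxi_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-uplink-profile",   # example name
    "teaming": {
        "policy": "LOADBALANCE_SRCID",       # "Load Balance Source" in the UI
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "transport_vlan": 100,                   # VLAN carrying Geneve overlay traffic
    "mtu": 9000,                             # well above the Geneve minimum of 1600
}

print(json.dumps(esxi_profile, indent=2))
```

Note that the vmnic-to-uplink mapping is not part of the profile itself; it is supplied later, when the host is configured as a Transport Node.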

KVM Host with two Uplinks in Failover Order

In the following example we create an Uplink Profile for a KVM Transport Node with two pNICs available for the N-VDS. KVM hosts support only Failover Order as the teaming policy.

The default teaming policy will be Failover Order, so one uplink will be active and the other standby. The Transport VLAN carrying the Geneve overlay traffic will be VLAN 100, with an MTU of 9000.

The actual configuration of the Uplink Profile is shown below, with uplink-1 as the active uplink and uplink-2 as the standby uplink.
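As an API payload this profile differs from the ESXi example only in the teaming section (again a sketch, assuming the `UplinkHostSwitchProfile` schema and an example display name):

```python
import json

# Hypothetical example payload: KVM uplink profile with failover-order teaming.
kvm_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "kvm-uplink-profile",    # example name
    "teaming": {
        "policy": "FAILOVER_ORDER",          # the only policy KVM hosts support
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
    },
    "transport_vlan": 100,
    "mtu": 9000,
}

print(json.dumps(kvm_profile, indent=2))
```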

ESXi Host with vSphere and VSAN

In cases where vSphere ESXi hosts have only two pNICs to save on network fabric costs, and the hosts are configured as NSX-T Transport Nodes, it is possible to use a Named Teaming Policy to pin specific VMkernel traffic to a specific uplink. In this example vSphere Management VMkernel traffic will be pinned to uplink-1 and vSAN VMkernel traffic will be pinned to uplink-2.

The default teaming policy will be Load Balance Source, so both uplinks will be active for overlay traffic. The Transport VLAN for Geneve will be VLAN 100 with an MTU of 9000. vSphere Management VMkernel traffic is pinned active on uplink-1 and standby on uplink-2, while vSAN traffic is pinned active on uplink-2 and standby on uplink-1.

When configuring the Transport Zone you must type the name of the Named Teaming Policy exactly as it appears in the Uplink Profile. Segments (formerly known as Logical Switches) can then be created in that VLAN Transport Zone for both types of VMkernel traffic, after which you configure the required Uplink Teaming Policy Name in the Advanced UI. In this example the vSAN Segment (VLAN 10) is configured with the vSAN Uplink Teaming Policy, which makes uplink-2 the active uplink on the host.
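In the API, named teaming policies live alongside the default teaming in the same uplink profile object. The sketch below assumes the `UplinkHostSwitchProfile` schema with its `named_teamings` list; the policy names "MGMT" and "VSAN" are examples, and whatever names are used here must match what the Transport Zone and Segments reference:

```python
import json

# Hypothetical example: default teaming for overlay plus two named teaming
# policies pinning Management and vSAN VMkernel traffic to opposite uplinks.
profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "esxi-2pnic-uplink-profile",  # example name
    "teaming": {                                  # default policy (overlay traffic)
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [
        {   # Management: active on uplink-1, standby on uplink-2
            "name": "MGMT",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
            "standby_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        },
        {   # vSAN: active on uplink-2, standby on uplink-1
            "name": "VSAN",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
            "standby_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        },
    ],
    "transport_vlan": 100,
    "mtu": 9000,
}

print(json.dumps(profile, indent=2))
```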

NSX-T Edge Node BGP Peering Configuration

In NSX-T Edge Node designs we require deterministic peering with the ToR routers. It is a best practice to peer BGP with each ToR router over a dedicated VLAN and uplink, so that a failure of any component in the path results in traffic failing over to the other ToR router. In this example we configure an Uplink Profile for NSX-T Edge Nodes and use a Named Teaming Policy for peering with ToR-A and ToR-B: uplink-1 will be used for traffic towards ToR-A and uplink-2 for traffic towards ToR-B.

The default teaming policy will be Load Balance Source, so we can support two TEPs on the NSX-T Edge Nodes. The Transport VLAN for Geneve will be VLAN 100 with an MTU of 9000. ToR-A traffic is pinned active on uplink-1 and ToR-B traffic is pinned active on uplink-2.

When configuring the Transport Zone you must type the name of the Named Teaming Policy exactly. Segments can then be created in that VLAN Transport Zone for both ToR peering VLANs, after which you configure the required Uplink Teaming Policy Name in the Advanced UI. In this example the VLAN101-ToR-A Segment (VLAN 101) is configured with the ToR-A Uplink Teaming Policy, which makes uplink-1 the active uplink.
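The Edge profile combines the multi-TEP default teaming with single-uplink named teamings for deterministic peering. Again a sketch assuming the `UplinkHostSwitchProfile` schema, with example display and policy names:

```python
import json

# Hypothetical example: Edge uplink profile with two active uplinks (multi-TEP)
# plus named teaming policies pinning each ToR peering VLAN to one uplink.
edge_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",   # example name
    "teaming": {
        "policy": "LOADBALANCE_SRCID",       # supports two TEPs, one per uplink
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "named_teamings": [
        {   # BGP peering towards ToR-A via uplink-1 only
            "name": "ToR-A",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
        },
        {   # BGP peering towards ToR-B via uplink-2 only
            "name": "ToR-B",
            "policy": "FAILOVER_ORDER",
            "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}],
        },
    ],
    "transport_vlan": 100,
    "mtu": 9000,
}

print(json.dumps(edge_profile, indent=2))
```

No standby uplinks are listed in the named teamings here on purpose: if an uplink fails, we want the BGP session over it to go down and traffic to reroute via the other ToR, rather than the peering VLAN silently moving to the other pNIC.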

 

Comments

  1. Tobias

    Great post! Thank you! Helped a lot!

I have a question though. I'm pretty new to NSX-T and right now I'm doing an NSX-T 2.5 deployment. We are deploying a single N-VDS Multi-TEP design. We have two physical NICs available for NSX, over which we are peering BGP. In our case, as we are deploying that single N-VDS Multi-TEP design, our Edge nodes are connected to Trunk Segments, and then the Transport Nodes are connected to the physical network.

    The question I have is related to the last example, “NSX-T Edge Node BGP Peering Configuration”

    In that example you configure an Uplink Profile for NSX-T Edge Nodes but when I try to do the same and try to apply the Uplink Teaming Policy Names on the VLAN transport zone I get the following error:

    “There are 9 TransportNodes with the first 5 IDs […] that do not specify the new uplink teaming name [[Leaf-A][Leaf-B]]. (Error code: 8521)”

    It took me a while, but I realized that if I create the same Named Teaming Policies in the host Uplink Profile as well, it all seems to work. This way the correct Edge interface and host interface are used.
    Is this the right way to do it?

    • Hi, yes that is correct, assuming the hosts use the same VLAN Transport Zone with the same N-VDS name.
