This is the next post in a series on NSX-T Edge Node design topologies.
In this blog post I will describe an Edge Node design topology hosting a Tier-0 Gateway configured with static routing and an HA VIP address.
With Edge Node Virtual Appliances it is important to know which vSphere Hosts the Edge Nodes will run on, how many physical NICs are available, whether the Edge Node runs on top of a VSS/VDS or an N-VDS, and how teaming is configured. In this design topology the Edge Nodes run on hosts with two VDSes and four physical NICs.
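As a rough sketch, the static routing and HA VIP setup described above could be expressed against the NSX-T Policy API with payloads along the following lines. The gateway name, interface paths, subnets, and addresses are placeholders of my choosing, not values from a real deployment, and the exact paths should be verified against the NSX-T API reference.

```python
import json

# Static default route on the Tier-0 Gateway, e.g.:
# PATCH /policy/api/v1/infra/tier-0s/T0-GW/static-routes/default-route
# (gateway name and next-hop address are placeholders)
static_route = {
    "network": "0.0.0.0/0",
    "next_hops": [{"ip_address": "10.10.10.1", "admin_distance": 1}],
}

# HA VIP shared by the uplink interfaces of the two Edge Nodes; part of
# the Tier-0 locale-services, e.g.:
# PATCH /policy/api/v1/infra/tier-0s/T0-GW/locale-services/default
ha_vip = {
    "ha_vip_configs": [{
        "enabled": True,
        "vip_subnets": [{"ip_addresses": ["10.10.10.10"], "prefix_len": 24}],
        "external_interface_paths": [
            "/infra/tier-0s/T0-GW/locale-services/default/interfaces/uplink-en1",
            "/infra/tier-0s/T0-GW/locale-services/default/interfaces/uplink-en2",
        ],
    }],
}

print(json.dumps(static_route, indent=2))
print(json.dumps(ha_vip, indent=2))
```

The HA VIP floats between the two Edge Node uplink interfaces, so the upstream router only needs a single static route towards the VIP rather than routes towards each Edge Node.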
An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches or from NSX-T Edge nodes to top-of-rack switches.
Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes.
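To illustrate, an uplink profile can be created through the NSX-T Manager API with `POST /api/v1/host-switch-profiles`. The sketch below is a hypothetical payload: the profile name, uplink names, transport VLAN, and MTU are assumptions for this example, not recommended values.

```python
import json

# Sketch of an uplink profile payload for POST /api/v1/host-switch-profiles.
# Display name, uplink names, transport VLAN, and MTU are placeholders.
uplink_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",
    "teaming": {
        "policy": "FAILOVER_ORDER",  # one active uplink, ordered failover
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
        ],
        "standby_list": [
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    "transport_vlan": 100,  # VLAN tag applied to overlay (TEP) traffic
    "mtu": 9000,            # larger MTU to accommodate overlay encapsulation
}

print(json.dumps(uplink_profile, indent=2))
```

Because the profile only names logical uplinks (`uplink-1`, `uplink-2`), the same profile can be reused across hosts and Edge Nodes that map different physical NICs to those uplink names.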
Transport Node Profiles are introduced to automatically configure vCenter Clusters for NSX-T. Additionally, a Transport Node Profile maintains the Transport Node configuration at the Cluster level, ensuring that when a vSphere Host is added to or removed from the cluster it is automatically configured or unconfigured. Creating a Transport Node Profile has a lot in common with the Host Migration from VSS/VDS to N-VDS workflow, which is documented in the Host Migration to N-VDS blog post.
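As a sketch of the idea, a Transport Node Profile might be created via `POST /api/v1/transport-node-profiles` with a body like the one below. The switch name, profile ID, and transport zone ID are hypothetical placeholders, and the field layout should be checked against the API reference for your NSX-T version.

```python
import json

# Hypothetical Transport Node Profile payload for
# POST /api/v1/transport-node-profiles. All names and IDs are placeholders;
# the elided "<...>" values would come from an existing deployment.
tn_profile = {
    "resource_type": "TransportNodeProfile",
    "display_name": "compute-cluster-tnp",
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [{
            "host_switch_name": "nvds-overlay",
            "host_switch_profile_ids": [{
                "key": "UplinkHostSwitchProfile",
                "value": "<uplink-profile-id>",   # placeholder, not a real ID
            }],
            # Physical adapters the N-VDS claims on each host in the cluster:
            "pnics": [{"device_name": "vmnic0", "uplink_name": "uplink-1"}],
            # TEP addressing, here assumed to come from DHCP:
            "ip_assignment_spec": {"resource_type": "AssignedByDhcp"},
        }],
    },
    "transport_zone_endpoints": [{
        "transport_zone_id": "<overlay-tz-id>",  # placeholder
    }],
}

print(json.dumps(tn_profile, indent=2))
```

Once this profile is attached to a vCenter Cluster, any host joining the cluster is prepared with the same N-VDS, uplink mapping, and transport zone membership.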
In NSX-T 2.4 the NSX-T Manager is a Converged Appliance: the Policy, Management and Control roles are available on each NSX-T Manager Node, and three nodes form an NSX-T Manager Cluster. The NSX-T Managers in the Cluster also share a Distributed Persistent Datastore where the Desired State is stored. This brings the availability of all management services across the cluster, improves the install and upgrade process, and makes operations easier with fewer systems to monitor and maintain.
This blog post describes the required steps in NSX-T 2.4 to migrate a vSphere Host's physical adapters and associated VMkernel interfaces from a vSphere Distributed Switch to the N-VDS.
Example use cases for migration to the N-VDS:
- Micro-segmentation with NSX-T, which requires the N-VDS. For example, current workloads reside on a Compute vDS and need to be secured with NSX-T.
- Servers with a limited number of physical NICs, or the requirement to limit the amount of physical connectivity towards top-of-rack switches to save costs on the network fabric.
In this blog post I will show how to migrate physical adapters vmnic0 and vmnic1, which are currently connected to a vDS, to an N-VDS. This migration example also shows how to migrate the associated VMkernel interfaces for vSphere Management and vMotion. Depending on your topology this step may not be required, for example when you use separate vDSes for Management and Compute.
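The migration described above is driven through the Transport Node configuration: the physical NICs are claimed by the N-VDS and the VMkernel interfaces are mapped to target logical switches in the same update. The fragment below is a rough sketch of the relevant host-switch section; the switch, device, and destination network names are placeholders, and the exact field names should be verified against the NSX-T 2.4 API reference before use.

```python
import json

# Hypothetical host-switch fragment of a Transport Node payload showing the
# pNIC claim together with the VMkernel migration. In this example vmk0 is
# assumed to carry Management and vmk1 vMotion traffic; the destination
# network IDs are placeholders for existing logical switches.
host_switch = {
    "host_switch_name": "nvds-overlay",
    # vmnic0 and vmnic1 are moved from the vDS to the N-VDS:
    "pnics": [
        {"device_name": "vmnic0", "uplink_name": "uplink-1"},
        {"device_name": "vmnic1", "uplink_name": "uplink-2"},
    ],
    # VMkernel interfaces migrated along with the physical adapters:
    "vmk_install_migration": [
        {"device_name": "vmk0", "destination_network": "<mgmt-ls-id>"},
        {"device_name": "vmk1", "destination_network": "<vmotion-ls-id>"},
    ],
}

print(json.dumps(host_switch, indent=2))
```

Migrating the pNICs and the VMkernel interfaces in one step matters when both vmnics back the same vDS: moving the adapters first would leave the Management vmkernel without connectivity.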