An uplink profile defines policies for the links from hypervisor hosts to NSX-T logical switches or from NSX-T Edge nodes to top-of-rack switches.
Uplink profiles allow you to consistently configure identical capabilities for network adapters across multiple hosts or nodes.
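To make this concrete, the sketch below builds an uplink profile payload roughly in the shape the NSX-T REST API (POST /api/v1/host-switch-profiles) accepts. The field names and the FAILOVER_ORDER teaming policy are based on my reading of the API and should be verified against the API guide for your NSX-T version; the uplink names are placeholders.

```python
import json

def build_uplink_profile(name, active_uplinks, transport_vlan=0, mtu=1600):
    """Build an uplink profile payload resembling what the NSX-T API
    expects for POST /api/v1/host-switch-profiles. Field names here are
    illustrative -- check them against your NSX-T version's API guide."""
    return {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": name,
        "mtu": mtu,
        "transport_vlan": transport_vlan,
        "teaming": {
            "policy": "FAILOVER_ORDER",
            "active_list": [
                {"uplink_name": u, "uplink_type": "PNIC"} for u in active_uplinks
            ],
            "standby_list": [],
        },
    }

# Identical capabilities for every host that consumes this profile.
profile = build_uplink_profile("host-uplink-profile", ["uplink-1", "uplink-2"])
print(json.dumps(profile, indent=2))
```

Because the profile is applied by reference, changing the teaming policy or MTU in one place updates every transport node that uses it.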
Transport Node Profiles are introduced to automatically configure vCenter Clusters for NSX-T. Additionally, a Transport Node Profile maintains the Transport Node Configuration at the Cluster level, so that when a vSphere Host is added to or removed from the cluster it is automatically configured or unconfigured. Creating a Transport Node Profile is very similar to the Host Migration from VSS/VDS to N-VDS workflow, which is documented in the Host Migration to N-VDS blog post.
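The following sketch shows roughly what a Transport Node Profile payload looks like, loosely modelled on POST /api/v1/transport-node-profiles in NSX-T 2.4. The field names, the N-VDS name, and the UUIDs are assumptions for illustration; verify them against the API guide before use.

```python
import json

def build_transport_node_profile(name, host_switch):
    """Illustrative Transport Node Profile payload, loosely modelled on
    POST /api/v1/transport-node-profiles in NSX-T 2.4. Field names and
    IDs are assumptions -- verify against the NSX-T API guide."""
    return {
        "resource_type": "TransportNodeProfile",
        "display_name": name,
        "host_switch_spec": {
            "resource_type": "StandardHostSwitchSpec",
            "host_switches": [host_switch],
        },
    }

# Minimal host switch definition; the UUIDs are placeholders.
host_switch = {
    "host_switch_name": "nvds-1",
    "host_switch_profile_ids": [
        {"key": "UplinkHostSwitchProfile", "value": "uplink-profile-uuid"}
    ],
    "transport_zone_endpoints": [{"transport_zone_id": "tz-uuid"}],
}

tn_profile = build_transport_node_profile("compute-cluster-tn-profile", host_switch)
print(json.dumps(tn_profile, indent=2))
```

Once this profile is attached to a vCenter Cluster, NSX-T uses it as the desired state for every host in that cluster.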
In NSX-T 2.4 the NSX-T Manager is a Converged Appliance where the Policy, Management and Control roles are available on each NSX-T Manager Node; three such nodes form an NSX-T Manager Cluster. The NSX-T Managers in the cluster also share a Distributed Persistent Datastore where the Desired State is stored. This brings the benefit of having all management services available across the cluster, simplifies the install and upgrade process, and makes operations easier with fewer systems to monitor and maintain.
This blog post describes the steps required in NSX-T 2.4 to migrate a vSphere host's physical adapters and their associated VMkernel interfaces from a vSphere Distributed Switch to the N-VDS.
Example use cases for migration to the N-VDS:
- Micro-segmentation with NSX-T, which requires the N-VDS. For example, current workloads are on a Compute vDS and need to be secured with NSX-T.
- Servers with a limited number of physical NICs, or customers who want to limit the amount of physical connectivity towards top-of-rack switches to save costs on the network fabric.
In this blog post I will show how to migrate physical adapters vmnic0 and vmnic1, which are currently connected to a vDS, to an N-VDS. This migration example also shows how to migrate the associated VMkernel interfaces for vSphere Management and vMotion. Depending on your topology this step may not be required, for example when you use separate vDS instances for Management and Compute.
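The steps above can be sketched as the host switch portion of a transport node configuration that claims vmnic0/vmnic1 for the N-VDS and migrates the listed VMkernel interfaces. The field names (including `vmk_install_migration`) follow the NSX-T 2.4 API as I understand it, and all IDs are placeholders; verify against the API guide for your release.

```python
import json

def build_host_switch_with_migration(pnics, vmk_migrations,
                                     transport_zone_id, uplink_profile_id):
    """Sketch of a StandardHostSwitch entry that claims physical NICs for
    the N-VDS and moves VMkernel interfaces onto logical switches during
    install. Field names are assumptions based on the NSX-T 2.4 API."""
    return {
        "host_switch_profile_ids": [
            {"key": "UplinkHostSwitchProfile", "value": uplink_profile_id}
        ],
        # Physical adapters to take over from the vDS, mapped to uplinks.
        "pnics": [
            {"device_name": dev, "uplink_name": up} for dev, up in pnics
        ],
        # VMkernel interfaces to migrate to N-VDS logical switches.
        "vmk_install_migration": [
            {"device_name": vmk, "destination_network": ls_id}
            for vmk, ls_id in vmk_migrations
        ],
        "transport_zone_endpoints": [
            {"transport_zone_id": transport_zone_id}
        ],
        "ip_assignment_spec": {"resource_type": "AssignedByDhcp"},
    }

host_switch = build_host_switch_with_migration(
    pnics=[("vmnic0", "uplink-1"), ("vmnic1", "uplink-2")],
    vmk_migrations=[("vmk0", "mgmt-ls-uuid"), ("vmk1", "vmotion-ls-uuid")],
    transport_zone_id="tz-uuid",
    uplink_profile_id="uplink-profile-uuid",
)
print(json.dumps(host_switch, indent=2))
```

Migrating the pNICs and the VMkernel interfaces in one operation matters: moving vmnic0/vmnic1 without vmk0 would cut off host management connectivity.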
In NSX, BGP filters work like access lists for route advertisements (prefixes). A BGP filter is a prefix list containing one or more ordered entries, which are processed sequentially until a match is found. For each prefix entry you can specify inbound or outbound filters to allow or deny certain routes being advertised to or from the Edge Services Gateway/Distributed Logical Router.
For example, suppose you want to prevent the route 10.0.0.0/24 from being advertised in BGP by the NSX Edge Services Gateway.
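This first-match behavior can be sketched with a small simulation (a simplified model of sequential prefix-list processing, not NSX code; entries without an `le` bound match the exact prefix only):

```python
import ipaddress

def evaluate_prefix_list(entries, route):
    """Simulate sequential, first-match processing of prefix-list
    entries. Each entry is (prefix, le, action); without an 'le'
    bound the entry matches the exact prefix only."""
    net = ipaddress.ip_network(route)
    for prefix, le, action in entries:
        entry_net = ipaddress.ip_network(prefix)
        if not net.subnet_of(entry_net):
            continue  # route not covered by this entry at all
        if le is None and net.prefixlen != entry_net.prefixlen:
            continue  # exact-match entry, lengths differ
        if le is not None and net.prefixlen > le:
            continue  # route is more specific than the 'le' bound allows
        return action  # first matching entry wins
    return "deny"  # implicit deny when no entry matches

# Outbound filter: block 10.0.0.0/24, permit everything else.
entries = [
    ("10.0.0.0/24", None, "deny"),
    ("0.0.0.0/0", 32, "permit"),
]
print(evaluate_prefix_list(entries, "10.0.0.0/24"))  # deny
print(evaluate_prefix_list(entries, "10.1.0.0/24"))  # permit
```

With these two entries the Edge Services Gateway would stop advertising 10.0.0.0/24 while all other routes still pass; note that the deny entry must come before the catch-all permit, since the first match wins.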