In NSX-T 2.4 the NSX-T Manager is a converged appliance in which the Policy, Management and Control roles are available on each NSX-T Manager node, and three nodes form an NSX-T Manager Cluster. The NSX-T Managers in the Cluster also share a Distributed Persistent Datastore where the Desired State is stored. This design makes all management services available across the cluster, simplifies the install and upgrade process, and eases operations with fewer systems to monitor and maintain.
This blog post describes the steps required in NSX-T 2.4 to migrate a vSphere host's physical adapters and their associated VMkernel interfaces from a vSphere Distributed Switch (vDS) to the N-VDS.
Example use cases for migration to the N-VDS:
- Micro-segmentation with NSX-T, which requires the N-VDS. For example, current workloads on a Compute vDS that need to be secured with NSX-T.
- Servers with a limited number of physical NICs, or customers who need to limit the amount of physical connectivity towards top-of-rack switches to save costs on the network fabric.
In this blog post I will show how to migrate physical adapters vmnic0 and vmnic1, which are currently connected to a vDS, to an N-VDS. This migration example also shows how to migrate the associated VMkernel interfaces for vSphere Management and vMotion. Depending on your topology this step may not be required, for example when you use separate vDSs for Management and Compute.
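As a rough illustration of what this looks like at the API level, the sketch below shows the shape of a transport node's host switch configuration claiming vmnic0 and vmnic1 for the N-VDS. This is a hedged sketch only: the switch and uplink names are made up for the example, and the exact payload fields and the query parameters used for VMkernel interface migration should be verified against the NSX-T 2.4 API guide.

```json
{
  "host_switch_spec": {
    "resource_type": "StandardHostSwitchSpec",
    "host_switches": [
      {
        "host_switch_name": "nvds-overlay",
        "pnics": [
          { "device_name": "vmnic0", "uplink_name": "uplink-1" },
          { "device_name": "vmnic1", "uplink_name": "uplink-2" }
        ]
      }
    ]
  }
}
```

Here `nvds-overlay`, `uplink-1` and `uplink-2` are placeholder names; in practice the uplink names must match the uplink profile attached to the transport node.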
In NSX, BGP filters work like access lists for route advertisements (prefixes): a filter is a prefix list, which behaves much like a firewall access list. A prefix list contains one or more ordered entries that are processed sequentially. For each entry you specify the inbound or outbound direction and whether matching routes may be advertised to or from the Edge Services Gateway or Distributed Logical Router.
Suppose, for example, that you want to prevent the route for 10.0.0.0/24 from being advertised via BGP by the NSX Edge Services Gateway.
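To make the sequential, first-match semantics concrete, here is a small Python sketch of how such an ordered prefix list evaluates a route. It illustrates prefix-list processing in general and is not NSX code; the entry format and the default action are assumptions made for this example.

```python
import ipaddress

def evaluate(prefix_list, route, default="permit"):
    """Return the action of the first entry matching the route.

    prefix_list is an ordered list of (action, prefix) tuples; the
    default action when nothing matches is an assumption here.
    """
    net = ipaddress.ip_network(route)
    for action, prefix in prefix_list:           # entries processed in order
        if net == ipaddress.ip_network(prefix):  # exact prefix match
            return action
    return default                               # no entry matched

filters = [
    ("deny", "10.0.0.0/24"),   # keep this route out of the advertisements
    ("permit", "10.0.0.0/16"),
]

print(evaluate(filters, "10.0.0.0/24"))   # deny
print(evaluate(filters, "10.0.0.0/16"))   # permit
```

With the deny entry first, the 10.0.0.0/24 route from the example above is filtered, while other prefixes are still advertised.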
One of our customers is preparing to migrate Virtual Machines from VLAN to VXLAN with the NSX L2 Bridge and asked me how to test the L2 Bridge and get confirmation that it is actually configured correctly and operational. All commands in this blog post are from the NSX Troubleshooting Documentation.
We can verify that a bridge is functional by issuing a few commands on the NSX Manager.
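The transcript below sketches what such a check can look like with the Central CLI on the NSX Manager. The host, DLR and bridge IDs are placeholders for this example; verify the exact command syntax against the NSX Troubleshooting documentation mentioned above.

```text
# List the logical routers (the bridge runs in the context of a DLR)
show logical-router list all

# Show the bridge configuration and state on the host running the bridge instance
show logical-router host host-28 dlr edge-4 bridge bridge-1 verbose

# Show the MAC address table; entries on both the VLAN and the VXLAN side
# indicate the bridge is actually learning and forwarding traffic
show logical-router host host-28 dlr edge-4 bridge bridge-1 mac-address-table
```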
In this blog post I would like to share some information regarding possibilities of on-boarding existing workloads or tenants in new or current VMware NSX deployments.
Most of the VMware NSX deployment projects I have been involved in were designed and deployed in a greenfield environment, where the customer invested in hardware and software to run a new cloud environment. From that point forward, new workloads and deployments are aimed at that infrastructure, while the current (brownfield) environment has to be migrated or will be shut down within a certain amount of time. Migrating applications to NSX and securing them with NSX Micro-Segmentation obviously requires good knowledge of your applications. In other words: which Virtual Machines talk to each other, and over which protocols and ports? The more information you have about those applications, the better you are able to secure them. A tool like vRealize Network Insight can help a great deal here, but that is a topic of its own. Another approach is to isolate applications with NSX Distributed Firewall allow rules with logging enabled. If you have a solution like Log Insight, you would then see all that traffic logged, including the protocol communication between source and destination.
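For illustration, a logged hit on such an allow rule ends up in the host's dfwpktlogs.log in roughly the form below. This is a made-up example line (the rule ID, addresses and ports are invented), and the exact format can differ per NSX version.

```text
2017-10-09T12:15:04.345Z 61648 INET match PASS domain-c7/1012 IN 60 TCP 172.16.10.11/49152->172.16.20.12/3306 S
```

From lines like this you can read off the source, destination, protocol and port of each allowed flow, which is exactly the information needed to turn a broad allow rule into specific micro-segmentation rules.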
Figure 1: Micro-segmentation for a 3-tier application