On-boarding existing workloads and tenants to VMware NSX

In this blog post I would like to share some options for on-boarding existing workloads or tenants into new or existing VMware NSX deployments.

Most VMware NSX deployment projects I have been involved in are designed and deployed in a greenfield environment, where the customer has invested in new hardware and software to run their new cloud environment. From that point forward, new workloads and deployments are aimed at that infrastructure, while the current (brownfield) environment has to be migrated or will be shut down within a certain amount of time.

Migrating applications to NSX and securing them with NSX Micro-Segmentation obviously requires good knowledge of your applications. In other words: which Virtual Machines talk to each other, and over which protocols and ports? The more information you have about those applications, the better you are able to secure them. A tool like vRealize Network Insight can help a great deal here, but that is a topic of its own. Another approach is to isolate applications with NSX Distributed Firewall allow rules that have logging enabled. With a solution like Log Insight you can then see all of that traffic logged, including the protocol communication between source and destination.
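The "allow with logging" approach can be sketched as a small script that assembles a DFW rule payload with logging switched on. This is a minimal sketch only: the element names loosely follow the NSX-v XML firewall schema, and the security group and service names (SG-Web-Tier, SG-App-Tier, TCP-8443) are hypothetical placeholders.

```python
import xml.etree.ElementTree as ET

def build_logged_allow_rule(name, source_sg, dest_sg, service):
    """Assemble one DFW allow rule with logging enabled, so every hit
    shows up in a log collector such as Log Insight for flow discovery."""
    rule = ET.Element("rule", logged="true")  # logging on: this is the key attribute
    ET.SubElement(rule, "name").text = name
    ET.SubElement(rule, "action").text = "allow"
    sources = ET.SubElement(rule, "sources")
    ET.SubElement(sources, "source").text = source_sg
    destinations = ET.SubElement(rule, "destinations")
    ET.SubElement(destinations, "destination").text = dest_sg
    services = ET.SubElement(rule, "services")
    ET.SubElement(services, "service").text = service
    return ET.tostring(rule, encoding="unicode")

# Hypothetical security groups and service for a web-to-app flow
rule_xml = build_logged_allow_rule("Web-to-App", "SG-Web-Tier", "SG-App-Tier", "TCP-8443")
print(rule_xml)
```

Once the logged flows have been analysed, the same rules can be kept as the permanent micro-segmentation policy, with logging disabled again where the volume is too high.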

Figure 1: Micro-segmentation for a 3-tier application

Once NSX and its logical switching and routing components are installed and available, the goal is that all new workloads are deployed on Logical Switches in the NSX environment. For existing (brownfield) workloads, there are several on-boarding options:

Several existing clusters can participate as additional payload clusters to host workloads. It is recommended to use a separate VTEP IP Pool for their VTEP interfaces and to add these clusters to the NSX transport zone, so that they receive all attached Logical Switches and VXLAN information in their vSphere Distributed Switch. This makes it possible to consume VXLAN in those clusters and to attach new or existing VMs to Logical Switches.
What if you have existing vSphere clusters with workloads running in the deployed vSphere Distributed Portgroups which need to be isolated and secured? This is achieved by preparing the clusters for NSX and installing the Distributed Firewall VIB. Once the clusters are prepared, it is possible to use micro-segmentation with the Distributed Firewall. Again, you need to know how to group workloads and what the allowed traffic flows are; this takes time to investigate traffic flows properly or to revisit application documentation. It is also recommended to use the "Applied To" function in the Distributed Firewall so that rules are applied only to the specific clusters running those workloads.
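The "Applied To" scoping could look roughly like this: the rule carries an appliedToList that limits enforcement to one cluster. The element names approximate the NSX-v firewall schema, and the cluster MOID (domain-c42) and rule name are made-up examples.

```python
import xml.etree.ElementTree as ET

def scope_rule_to_cluster(rule_name, cluster_moid):
    """Build a DFW rule skeleton whose enforcement scope is limited to a
    single vSphere cluster via an appliedTo entry."""
    rule = ET.Element("rule")
    ET.SubElement(rule, "name").text = rule_name
    applied_to_list = ET.SubElement(rule, "appliedToList")
    applied_to = ET.SubElement(applied_to_list, "appliedTo")
    # The value is the vCenter managed object ID of the cluster (hypothetical here)
    ET.SubElement(applied_to, "value").text = cluster_moid
    ET.SubElement(applied_to, "type").text = "ClusterComputeResource"
    return ET.tostring(rule, encoding="unicode")

scoped_xml = scope_rule_to_cluster("Brownfield-Web-Tier", "domain-c42")
print(scoped_xml)
```

Scoping rules this way keeps the brownfield policy from being pushed to every prepared host in the environment.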

Figure 2: Preparing Clusters for NSX

When migrating Virtual Machines to VXLAN-based Logical Switches, it might be necessary to maintain connectivity to VLAN-based distributed portgroups, because there are still Virtual Machines, physical servers or routers connected to that VLAN. For example, the default gateway may still be the Switched Virtual Interface (SVI) of a physical router. In this case a Bridge can be configured between the VLAN-based portgroup and the VXLAN-based Logical Switch. Note that a Bridge can be a single point of failure, so it should be treated as an intermediate solution during the migration. Once all Virtual Machines are migrated to Logical Switches and no physical servers remain on the VLAN (or the remaining ones can stay connected through a Bridge with acceptable risk), you can migrate the Switched Virtual Interface as described next.

Figure 3: VXLAN to VLAN Bridge

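A bridge like the one in Figure 3 boils down to a mapping between one VXLAN logical switch and one VLAN-backed dvPortgroup. The sketch below assembles such a mapping as an XML fragment; the element names loosely follow the NSX-v bridging schema, and the IDs (virtualwire-10, dvportgroup-100) are hypothetical.

```python
import xml.etree.ElementTree as ET

def build_bridge_config(bridge_name, logical_switch_id, vlan_portgroup_id):
    """Map one VXLAN logical switch to one VLAN-backed dvPortgroup,
    forming a single L2 bridge instance."""
    bridges = ET.Element("bridges")
    bridge = ET.SubElement(bridges, "bridge")
    ET.SubElement(bridge, "name").text = bridge_name
    ET.SubElement(bridge, "virtualWire").text = logical_switch_id  # VXLAN side
    ET.SubElement(bridge, "dvportGroup").text = vlan_portgroup_id  # VLAN side
    return ET.tostring(bridges, encoding="unicode")

bridge_xml = build_bridge_config("web-bridge", "virtualwire-10", "dvportgroup-100")
print(bridge_xml)
```

Because the bridge instance runs on the host where the DLR control VM is active, it remains the single point of failure mentioned above; keep that in mind when deciding how long the bridge stays in place.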
A VLAN-based network can be migrated to a VXLAN-based network. The router interface (SVI) for that network is brought down, and a new VXLAN Logical Switch for those workloads is configured and connected to a Distributed Logical Router. A Logical Interface (LIF) is then configured on the Distributed Logical Router, serving as the new default gateway for the Virtual Machines, and the Virtual Machines are connected to the new Logical Switch. A dynamic routing protocol like OSPF or BGP informs the rest of the network of the path to the new gateway for this network. When planned properly, you can keep the gateway's IP configuration on the Distributed Logical Router, so no Virtual Machine reconfiguration is needed.
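The LIF step above can be sketched as follows: the DLR interface simply reuses the old SVI's gateway address, which is why the Virtual Machines need no reconfiguration. The element names approximate the NSX-v edge/DLR interface schema, and all addresses and IDs here are hypothetical.

```python
import ipaddress
import xml.etree.ElementTree as ET

def build_internal_lif(lif_name, gateway_ip, prefix_len, logical_switch_id):
    """Define a DLR internal LIF that reuses the decommissioned SVI's
    gateway address on the new VXLAN Logical Switch."""
    # Sanity check: the reused gateway must be a valid host address in the subnet
    ipaddress.ip_interface(f"{gateway_ip}/{prefix_len}")
    iface = ET.Element("interface")
    ET.SubElement(iface, "name").text = lif_name
    ET.SubElement(iface, "type").text = "internal"
    groups = ET.SubElement(iface, "addressGroups")
    group = ET.SubElement(groups, "addressGroup")
    ET.SubElement(group, "primaryAddress").text = gateway_ip      # same IP the SVI had
    ET.SubElement(group, "subnetPrefixLength").text = str(prefix_len)
    ET.SubElement(iface, "connectedToId").text = logical_switch_id
    return ET.tostring(iface, encoding="unicode")

lif_xml = build_internal_lif("lif-web", "10.0.10.1", 24, "virtualwire-10")
print(lif_xml)
```

The critical ordering is operational rather than technical: shut the SVI first, then bring up the LIF with the same address, so the gateway IP is never active in two places at once.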
Virtual Machines can also be migrated to VXLAN-based Logical Switches and have their IP addresses changed. This is not the desired method for most organizations because, without automation or orchestration tooling, it is labour intensive.
Another possibility is to allow overlapping networks with the use of NAT. This is a good option for organizations that are cloud-ready, comfortable with overlapping IP networks, and aware of the operational risks.
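In the NAT variant, each tenant keeps its original (possibly overlapping) addressing behind an Edge Services Gateway, and a source NAT rule presents the tenant behind a unique translated address. The sketch below builds such a rule; the element names approximate the NSX-v edge NAT schema, and all addresses are made up.

```python
import xml.etree.ElementTree as ET

def build_snat_rule(original_cidr, translated_ip):
    """Build a source NAT rule so an overlapping tenant subnet is seen
    by the rest of the network behind a unique translated address."""
    nat = ET.Element("natRule")
    ET.SubElement(nat, "action").text = "snat"
    ET.SubElement(nat, "originalAddress").text = original_cidr    # tenant-internal range
    ET.SubElement(nat, "translatedAddress").text = translated_ip  # globally unique address
    return ET.tostring(nat, encoding="unicode")

snat_xml = build_snat_rule("10.0.10.0/24", "192.168.50.10")
print(snat_xml)
```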

One of these scenarios, or a combination of them, may be needed to make a migration to NSX successful. Preparation is key: a good overview of the traffic between Virtual Machines in your existing deployments is required for a solid micro-segmentation implementation.
