This blog post discusses five Edge Cluster designs for NSX-T and vSphere with Tanzu.
Before enabling vSphere with Tanzu on NSX-T, the following requirements must be met:
- NSX-T Managers, Edge Nodes, Edge Cluster(s) and Tier-0 configured
- vSphere Hosts for the Supervisor Clusters configured for NSX-T
- NSX Switch configured with VDS 7.0
Note that multiple vCenter Servers can share the same NSX-T deployment.
In terms of scalability it is important to understand that an Edge Cluster with Large Edge Nodes (VM) can support up to 200 namespaces. The reason behind this number is that each namespace instantiates a Small Load Balancer (LB) instance, which is an Active/Standby service and therefore requires two Edge Nodes. A Large Edge Node can support up to 40 Load Balancer instances. With a maximum of 10 Edge Nodes in an Edge Cluster there is capacity for 400 Small Load Balancer instances, but because each Active/Standby instance consumes capacity on two Edge Nodes, a maximum of 200 Small Load Balancers can be deployed per Edge Cluster.
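The sizing above reduces to simple arithmetic. A minimal sketch, with constants taken from the limits quoted above:

```python
# Edge Cluster capacity arithmetic for Small Load Balancer instances.
# Constants reflect the limits discussed above.
LB_PER_LARGE_EDGE = 40      # Small LB instances a Large Edge Node can host
MAX_EDGES_PER_CLUSTER = 10  # maximum Edge Nodes in an Edge Cluster
NODES_PER_LB = 2            # Active/Standby: each LB occupies two Edge Nodes

def max_namespaces(edge_nodes: int) -> int:
    """Each namespace instantiates one Small LB, so the namespace
    ceiling equals the Small LB ceiling of the Edge Cluster."""
    edge_nodes = min(edge_nodes, MAX_EDGES_PER_CLUSTER)
    total_slots = edge_nodes * LB_PER_LARGE_EDGE
    return total_slots // NODES_PER_LB

print(max_namespaces(10))  # 200 namespaces for a fully scaled-out cluster
print(max_namespaces(2))   # 40 namespaces for a minimal two-node cluster
```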
Option 1 – Single Supervisor Cluster with Single Edge Cluster
The first design option is suitable for customers deploying their first Supervisor Cluster or who are starting with NSX-T and vSphere with Tanzu. This would be the first or only Supervisor Cluster. To keep things simple, only one Edge Cluster is configured with a single shared Tier-0 Gateway. Each Supervisor Cluster has a dedicated Tier-1 Gateway, and all networking services such as LB and NAT are shared in the single Edge Cluster. If performance requirements increase, the Edge Cluster can be scaled out to a maximum of 10 Edge Nodes, providing more North-South bandwidth and more resources for LB and NAT instances.
Each namespace has a dedicated Tier-1 Gateway and Logical Switch and is secured with the Distributed Firewall (DFW). Pod-to-Pod traffic within a given namespace is allowed; traffic going into or out of a namespace is denied by default.
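vSphere with Tanzu creates these DFW rules automatically; purely as an illustration, an equivalent NSX-T Policy API security-policy payload might look like the sketch below. The group path and IDs are hypothetical placeholders, not the names Tanzu actually generates:

```python
# Sketch of a DFW security policy equivalent to the default namespace
# behaviour: allow traffic within the namespace, drop everything else
# in or out. Group path and IDs are hypothetical examples.
ns_group = "/infra/domains/default/groups/demo-namespace-pods"  # hypothetical

policy = {
    "resource_type": "SecurityPolicy",
    "id": "demo-namespace-policy",  # hypothetical ID
    "rules": [
        {   # Pod-to-Pod traffic inside the namespace is allowed
            "resource_type": "Rule",
            "id": "allow-intra-namespace",
            "source_groups": [ns_group],
            "destination_groups": [ns_group],
            "services": ["ANY"],
            "action": "ALLOW",
            "sequence_number": 10,
        },
        {   # Applied to the namespace group: any other traffic into
            # or out of the namespace is dropped by default
            "resource_type": "Rule",
            "id": "default-deny-namespace",
            "source_groups": ["ANY"],
            "destination_groups": ["ANY"],
            "services": ["ANY"],
            "scope": [ns_group],
            "action": "DROP",
            "sequence_number": 20,
        },
    ],
}
```

The allow rule matches first (lower sequence number), so intra-namespace traffic never reaches the drop rule.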
Option 2 – Multiple Supervisor Clusters with Single Edge Cluster
The second design option is suitable for customers adding additional Supervisor Clusters or looking to create separate Supervisor Clusters to divide resources between environments, for example Production, Test, and Development. Since both clusters share the same Edge Cluster, fewer network service resources are available to each Supervisor Cluster. In terms of scalability and performance, the risk is that a customer could hit the Edge Cluster limits earlier. This design option requires the same number of Edge Nodes as option 1, which saves CPU and memory. North-South connectivity is only required towards the cluster where the Edge Nodes are running.
Option 3 – Separation of Edge Clusters for Tier-0 and Tier-1
The following design separates the Edge Clusters for the Tier-0 and Tier-1 Gateways. It is suitable for customers with high North-South performance requirements: a dedicated Edge Cluster forwards traffic for the Tier-0 Gateway, while a shared Edge Cluster hosts network services such as LB and NAT. This ensures there is no resource contention between Tier-0 and Tier-1 Gateways. The Edge Nodes for the Tier-0 Edge Cluster could be placed on a dedicated vSphere Cluster, while the Edge Nodes for the Tier-1 Edge Cluster could reside on the Supervisor Clusters.
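In the NSX-T Policy API, a gateway's Edge Cluster placement is set on its locale services. As a sketch only, assuming hypothetical gateway and Edge Cluster names, the payloads that would pin the Tier-0 to the dedicated Edge Cluster and a Tier-1 to the shared services Edge Cluster could look like this:

```python
# Sketch: separate Edge Cluster placement for Tier-0 and Tier-1 Gateways
# via their locale-services (NSX-T Policy API style payloads).
# All IDs and paths below are hypothetical examples.
T0_EDGE_CLUSTER = ("/infra/sites/default/enforcement-points/default"
                   "/edge-clusters/ec-tier0")
T1_EDGE_CLUSTER = ("/infra/sites/default/enforcement-points/default"
                   "/edge-clusters/ec-services")

# PATCH /policy/api/v1/infra/tier-0s/t0-gw/locale-services/default
tier0_locale = {
    "resource_type": "LocaleServices",
    "edge_cluster_path": T0_EDGE_CLUSTER,  # dedicated North-South Edge Cluster
}

# PATCH /policy/api/v1/infra/tier-1s/t1-supervisor/locale-services/default
tier1_locale = {
    "resource_type": "LocaleServices",
    "edge_cluster_path": T1_EDGE_CLUSTER,  # shared services Edge Cluster (LB/NAT)
}
```

Because the two gateways reference different Edge Clusters, Tier-0 forwarding capacity and Tier-1 service capacity scale independently.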
Option 4 – Dedicated Edge Cluster for Tier-0 and Tier-1 Edge Clusters for each Supervisor Cluster
The following design option expands on design option 3, the main difference being that each Supervisor Cluster now has a dedicated Edge Cluster for its networking services, increasing performance and scalability for the NAT and LB instances it requires. The Edge Nodes of each dedicated Edge Cluster can run on the hosting resources of their specific Supervisor Cluster. This design ensures dedicated resources for a given Supervisor Cluster while keeping North-South connectivity consolidated. It is also suitable for a multi-tenant, Container as a Service (CaaS) design where sharing North-South connectivity is allowed. Each tenant will have a dedicated Supervisor Cluster.
Option 5 – Dedicated Tier-0 and Tier-1 Edge Cluster for each Supervisor Cluster
This design expands further on the multi-tenancy design and suits environments where North-South connectivity must be separated. Each Supervisor Cluster has a dedicated Edge Cluster running its Tier-0 and Tier-1 Gateways. Each Edge Cluster runs on top of its Supervisor Cluster, avoiding any resource contention between Supervisor Clusters. Additionally, each Supervisor Cluster can connect to a different network fabric or VRF.
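For the VRF variant, NSX-T (3.0 and later) allows a VRF Tier-0 gateway to be carved out of a parent Tier-0 that owns the physical uplinks. A hedged sketch of such a payload, with hypothetical names throughout:

```python
# Sketch: a per-tenant VRF Tier-0 gateway linked to a parent Tier-0
# (NSX-T Policy API style payload; names are hypothetical examples).
# PUT /policy/api/v1/infra/tier-0s/t0-tenant-a
vrf_tier0 = {
    "resource_type": "Tier0",
    "id": "t0-tenant-a",  # hypothetical per-tenant gateway ID
    "vrf_config": {
        # Parent Tier-0 that owns the physical uplinks
        "tier0_path": "/infra/tier-0s/t0-parent",
    },
}
```

Each tenant's Supervisor Cluster would then attach its Tier-1 Gateways to its own VRF Tier-0, keeping routing tables isolated per tenant.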