Edge Cluster Design for vSphere with Tanzu

This blog post discusses five Edge Cluster designs for NSX-T and vSphere with Tanzu.

Prerequisites

Before enabling vSphere with Tanzu on NSX-T, the following requirements must be met:

  • NSX-T Managers, Edge Nodes, Edge Cluster(s) and a Tier-0 Gateway configured
  • vSphere hosts for the Supervisor Clusters prepared for NSX-T
  • NSX configured on a vSphere Distributed Switch (VDS) 7.0

Note that multiple vCenter Servers can share the same NSX-T deployment.
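
These prerequisites can also be verified programmatically by querying the NSX-T Policy API for the configured Edge Clusters and Tier-0 Gateways. The snippet below is a minimal sketch, assuming an NSX Manager at the hypothetical address nsx.example.local with placeholder credentials; the endpoint paths follow the NSX-T 3.x Policy API but should be checked against your version.

    import requests

    NSX = "https://nsx.example.local"          # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # placeholder credentials

    def policy_get(path):
        """Read a collection from the NSX-T Policy API and return its results."""
        r = requests.get(f"{NSX}/policy/api/v1{path}", auth=AUTH, verify=False)
        r.raise_for_status()
        return r.json().get("results", [])

    # Edge Clusters registered with the default enforcement point
    edge_clusters = policy_get(
        "/infra/sites/default/enforcement-points/default/edge-clusters")
    print("Edge Clusters:", [ec["display_name"] for ec in edge_clusters])

    # Tier-0 Gateways that the Supervisor Cluster(s) can attach to
    tier0s = policy_get("/infra/tier-0s")
    print("Tier-0 Gateways:", [t0["display_name"] for t0 in tier0s])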

In terms of scalability it is important to understand that an Edge Cluster built from Large Edge Node VMs can support up to 200 namespaces. The reason behind this number is that each namespace instantiates a Small Load Balancer (LB) instance, which is an Active/Standby service and therefore occupies capacity on two Edge Nodes. A Large Edge Node can host up to 40 Load Balancer instances. With a maximum of 10 Edge Nodes in an Edge Cluster there are 400 Load Balancer slots, and because each Active/Standby LB consumes two of them, a maximum of 200 Small Load Balancers can be deployed per Edge Cluster.
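
The arithmetic behind the 200-namespace ceiling can be made explicit. The short calculation below simply restates the numbers from the paragraph above (40 LB instances per Large Edge Node, 10 Edge Nodes per Edge Cluster, 2 Edge Nodes consumed per Active/Standby LB):

    LB_PER_LARGE_EDGE_NODE = 40   # Small LB instances a Large Edge Node can host
    MAX_EDGE_NODES = 10           # maximum Edge Nodes per Edge Cluster
    NODES_PER_LB = 2              # an Active/Standby LB occupies two Edge Nodes

    total_lb_slots = LB_PER_LARGE_EDGE_NODE * MAX_EDGE_NODES   # 400 slots
    max_small_lbs = total_lb_slots // NODES_PER_LB             # 200 LBs

    # One Small LB per namespace -> 200 namespaces per Edge Cluster
    print(f"Maximum namespaces per Edge Cluster: {max_small_lbs}")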

Option 1 – Single Supervisor Cluster with Single Edge Cluster

The first design option is suitable for customers deploying their first Supervisor Cluster or who are just starting with NSX-T and vSphere with Tanzu. This would be the first or only Supervisor Cluster. To keep things simple, only one Edge Cluster is configured with a single shared Tier-0 Gateway. Each Supervisor Cluster has a dedicated Tier-1 Gateway, and all networking services such as LB and NAT run in the single Edge Cluster. If performance requirements increase, the Edge Cluster can be scaled out to a maximum of 10 Edge Nodes, providing more North-South bandwidth and more resources for LB and NAT instances.

Each namespace has a dedicated Tier-1 Gateway and Logical Switch and is secured with the Distributed Firewall (DFW). Pod-to-Pod traffic within a given namespace is allowed, while traffic going into or out of a namespace is denied by default.
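
As an illustration of the resulting topology rather than a manual step (vSphere with Tanzu creates its Tier-1 Gateways automatically), the sketch below shows how a Tier-1 Gateway attached to the shared Tier-0 and the single Edge Cluster would look through the NSX-T Policy API. The manager address, credentials and object IDs (t0-shared, t1-supervisor-01, edge-cluster-01) are all hypothetical.

    import requests

    NSX = "https://nsx.example.local"          # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # placeholder credentials
    EDGE_CLUSTER_PATH = ("/infra/sites/default/enforcement-points/default/"
                         "edge-clusters/edge-cluster-01")   # hypothetical ID

    def policy_patch(path, body):
        """PATCH a Policy API object (create or update)."""
        r = requests.patch(f"{NSX}/policy/api/v1{path}", json=body,
                           auth=AUTH, verify=False)
        r.raise_for_status()

    # Tier-1 Gateway linked to the shared Tier-0 Gateway
    policy_patch("/infra/tier-1s/t1-supervisor-01", {
        "display_name": "t1-supervisor-01",
        "tier0_path": "/infra/tier-0s/t0-shared",
        "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_LB_VIP",
                                      "TIER1_NAT"],
    })

    # Place the Tier-1 services (LB/NAT) on the single shared Edge Cluster
    policy_patch("/infra/tier-1s/t1-supervisor-01/locale-services/default", {
        "edge_cluster_path": EDGE_CLUSTER_PATH,
    })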

Option 2 – Multiple Supervisor Clusters with Single Edge Cluster

The second design option is suitable for customers adding additional Supervisor Clusters or looking to create separate Supervisor Clusters to isolate resources for different environments, for example Production, Test and Development. Since all Supervisor Clusters share the same Edge Cluster, fewer network service resources are available to each Supervisor Cluster. In terms of scalability and performance, the risk is that the Edge Cluster limits are reached sooner. This design option requires the same number of Edge Nodes as option 1, which saves CPU and memory. North-South connectivity is only required towards the cluster where the Edge Nodes are running.

Multiple Supervisor Clusters with Single Edge Cluster
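
Because the Edge Cluster is shared, it is worth keeping an eye on how close the combined Supervisor Clusters get to the Load Balancer ceiling described earlier. A minimal sketch, assuming the same hypothetical NSX Manager and credentials as before, counts the LB services by size via the Policy API; with a single shared Edge Cluster, all of them land on that cluster.

    import requests
    from collections import Counter

    NSX = "https://nsx.example.local"          # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # placeholder credentials
    MAX_SMALL_LBS = 200                        # ceiling for 10 Large Edge Nodes

    r = requests.get(f"{NSX}/policy/api/v1/infra/lb-services",
                     auth=AUTH, verify=False)
    r.raise_for_status()
    lb_services = r.json().get("results", [])

    # Count LB instances by form factor (one Small LB per namespace)
    sizes = Counter(lb.get("size", "SMALL") for lb in lb_services)
    print("LB services by size:", dict(sizes))
    print(f"Small LB usage: {sizes['SMALL']}/{MAX_SMALL_LBS}")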

Option 3 – Separation of Edge Clusters for Tier-0 and Tier-1

The following design separates the Edge Clusters for the Tier-0 and Tier-1 Gateways. It is suitable for customers with high North-South performance requirements: a dedicated Edge Cluster handles the Tier-0 Gateway traffic forwarding, while a shared Edge Cluster hosts network services such as LB and NAT. This ensures there is no resource contention between the Tier-0 and Tier-1 Gateways. The Edge Nodes for the Tier-0 Edge Cluster could be placed on a dedicated vSphere cluster, while the Edge Nodes for the Tier-1 Edge Cluster could reside on the Supervisor Clusters.

Separation of Edge Clusters for Tier-0 and Tier-1
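
The separation is expressed purely through which Edge Cluster each gateway's locale services point at. The sketch below is an illustration of how this looks in the NSX-T 3.x Policy API rather than a step you would necessarily perform by hand; the manager address and the object IDs (t0-shared, t1-supervisor-01, edge-cluster-t0, edge-cluster-services) are hypothetical.

    import requests

    NSX = "https://nsx.example.local"          # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # placeholder credentials
    EP = "/infra/sites/default/enforcement-points/default/edge-clusters"
    T0_EDGE_CLUSTER = f"{EP}/edge-cluster-t0"        # dedicated North-South cluster
    T1_EDGE_CLUSTER = f"{EP}/edge-cluster-services"  # shared LB/NAT cluster

    def policy_patch(path, body):
        """PATCH a Policy API object (create or update)."""
        r = requests.patch(f"{NSX}/policy/api/v1{path}", json=body,
                           auth=AUTH, verify=False)
        r.raise_for_status()

    # Tier-0 forwarding runs on its own Edge Cluster
    policy_patch("/infra/tier-0s/t0-shared/locale-services/default",
                 {"edge_cluster_path": T0_EDGE_CLUSTER})

    # Tier-1 services (LB/NAT) run on the shared services Edge Cluster
    policy_patch("/infra/tier-1s/t1-supervisor-01/locale-services/default",
                 {"edge_cluster_path": T1_EDGE_CLUSTER})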

Option 4 – Dedicated Edge Cluster for Tier-0 and Tier-1 Edge Clusters for each Supervisor Cluster

The following design option expands on option 3, with the main difference being that each Supervisor Cluster now has a dedicated Edge Cluster for its networking services, increasing performance and scalability for the NAT and LB instances it requires. The Edge Nodes for each dedicated Edge Cluster can run on the hosting resources of their specific Supervisor Cluster. This design ensures dedicated resources for a given Supervisor Cluster while keeping North-South connectivity consolidated. This option is also suitable for a multi-tenant, Container as a Service (CaaS) design where sharing North-South connectivity is allowed, with each tenant having a dedicated Supervisor Cluster.

Dedicated Edge Cluster for Tier-0 and Tier-1 Edge Clusters for each Supervisor Cluster
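
One way to reason about this layout is as a simple mapping from each Supervisor Cluster to its own Tier-1 Edge Cluster, with a single Tier-0 Edge Cluster shared by all of them. The sketch below uses hypothetical cluster names throughout and just checks the two invariants of this design: no two Supervisor Clusters share a Tier-1 Edge Cluster, and none of them reuse the Tier-0 Edge Cluster.

    # Hypothetical layout for design option 4: one shared Tier-0 Edge Cluster,
    # one dedicated Tier-1 Edge Cluster per Supervisor Cluster.
    T0_EDGE_CLUSTER = "edge-cluster-t0"

    T1_EDGE_CLUSTERS = {
        "supervisor-prod": "edge-cluster-t1-prod",
        "supervisor-test": "edge-cluster-t1-test",
        "supervisor-dev":  "edge-cluster-t1-dev",
    }

    # Each Supervisor Cluster must have its own Tier-1 Edge Cluster,
    # and none of them may reuse the shared Tier-0 Edge Cluster.
    assert len(set(T1_EDGE_CLUSTERS.values())) == len(T1_EDGE_CLUSTERS)
    assert T0_EDGE_CLUSTER not in T1_EDGE_CLUSTERS.values()

    for supervisor, edge_cluster in T1_EDGE_CLUSTERS.items():
        print(f"{supervisor}: LB/NAT on {edge_cluster}, "
              f"North-South via {T0_EDGE_CLUSTER}")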

Option 5 – Dedicated Tier-0 and Tier-1 Edge Cluster for each Supervisor Cluster

This design expands further on the multi-tenancy design and suits environments with a requirement to separate North-South connectivity. Each Supervisor Cluster has a dedicated Edge Cluster where its Tier-0 and Tier-1 Gateways run. Each Edge Cluster runs on top of its own Supervisor Cluster, avoiding any resource contention between the Supervisor Clusters. Additionally, each Supervisor Cluster has the possibility to connect to a different network fabric or VRF.

Dedicated Tier-0 and Tier-1 Edge Cluster for each Supervisor Cluster
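
In this option each tenant's Supervisor Cluster gets its own Tier-0 Gateway on its own Edge Cluster, which is also where the uplinks to a different fabric or VRF would be attached. The sketch below, again with hypothetical IDs and tenant names against the NSX-T 3.x Policy API, creates a dedicated Tier-0 per tenant and pins it to that tenant's Edge Cluster.

    import requests

    NSX = "https://nsx.example.local"          # hypothetical NSX Manager FQDN
    AUTH = ("admin", "VMware1!VMware1!")       # placeholder credentials
    EP = "/infra/sites/default/enforcement-points/default/edge-clusters"

    # Hypothetical tenants, each with a dedicated Edge Cluster
    TENANTS = {"tenant-a": "edge-cluster-tenant-a",
               "tenant-b": "edge-cluster-tenant-b"}

    def policy_patch(path, body):
        """PATCH a Policy API object (create or update)."""
        r = requests.patch(f"{NSX}/policy/api/v1{path}", json=body,
                           auth=AUTH, verify=False)
        r.raise_for_status()

    for tenant, edge_cluster in TENANTS.items():
        # Dedicated Tier-0 Gateway per tenant (Active/Standby for stateful services)
        policy_patch(f"/infra/tier-0s/t0-{tenant}",
                     {"display_name": f"t0-{tenant}",
                      "ha_mode": "ACTIVE_STANDBY"})
        # Pin the Tier-0 to the tenant's own Edge Cluster; uplinks to the tenant's
        # network fabric or VRF would be configured on this Edge Cluster's interfaces.
        policy_patch(f"/infra/tier-0s/t0-{tenant}/locale-services/default",
                     {"edge_cluster_path": f"{EP}/{edge_cluster}"})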

 

Comments

  1. Kay

    This is good, but is there any place where I can get the basic terminology for each of these components broken down?

  2. Andras Szabo

    Hello Raymond!
    Regarding option 5: do you know if we can use VRFs on a pair of T0s instead of dedicating T0s for every K8S cluster? I have a customer where tenant (landscape) separation is mandatory but they’d like to keep the number of Edge VMs down. We’ve managed to do it with NSX-T 3.1 using vanilla Kubernetes but they are looking into Tanzu and this could possibly be a showstopper.
    Thanks in advance.

    • Hi, as far as I know it is not supported yet. I'm not a Product Manager, so I don't know the roadmap; I will need to ask about this internally.
