In this blog post I am going to deploy a simple multi-tier PHP Guestbook application with Redis on a Kubernetes cluster that has the NSX-T Container Plugin (NCP) installed, and show how the various NSX-T objects are created based on the deployment in Kubernetes.
This example is based on the Kubernetes documentation, and the goal is to highlight the NSX-T objects created by NCP. The example application consists of the following components:
- A single-instance Redis master to store guestbook entries
- Multiple replicated Redis instances to serve reads
- Multiple web frontend instances
- NSX-T Load Balancer
- NSX-T Security Groups and Distributed Firewall Rules
I have installed NSX-T 2.5 and a Kubernetes cluster, and configured NCP 2.5 following the VMware documentation, with the NSX-T resources created via the Networking tab (a.k.a. the Policy UI).
Creating a Namespace
First we create a new namespace in Kubernetes for this application with the command:
kubectl create ns guestbook
Doing so triggers NCP to create a number of required objects in NSX-T 2.5.
Clicking on the Subnets (1) link reveals the Subnet carved out of the NSX-T IP Block reserved for NCP, with the Default Gateway for the subnet configured on the Tier-1 Gateway.
On the k8s-cluster1 Tier-1 Gateway the Segment shows up as linked, and therefore a downlink interface serving as the Default Gateway is created. Note that I am using the new shared Tier-1 feature, meaning that a single Tier-1 Gateway is used and shared by all namespaces, instead of a separate Tier-1 Gateway being created for each namespace.
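As a quick cross-check from the Kubernetes side, you can inspect the namespace itself. Depending on the NCP version, NCP annotates the namespace with the subnet it allocated (the exact annotation keys vary by release, so treat this as an assumption rather than guaranteed output):
kubectl describe ns guestbook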
Now we are ready to deploy the Redis master pod.
Start up the Redis Master
The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis slave instances.
Creating the Redis Master Deployment
The manifest file, included below, specifies a Deployment controller that runs a single replica Redis master Pod.
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Launch a terminal window in the directory where you downloaded the manifest files and apply the Redis Master Deployment from the redis-master-deployment.yaml file:
kubectl apply -f redis-master-deployment.yaml
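Before moving on, you can confirm that the Redis master pod is running and has received an IP address from the namespace subnet. The -n guestbook flag assumes the objects were created in the guestbook namespace; omit it if your current context already points there:
kubectl -n guestbook get pods -l app=redis,role=master -o wide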
Creating the Redis Master Service
The guestbook application needs to communicate to the Redis master to write its data. You need to apply a Service to proxy the traffic to the Redis master Pod. A Service defines a policy to access the Pods.
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Apply the Redis Master Service from the redis-master-service.yaml file:
kubectl apply -f redis-master-service.yaml
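To verify the Service, check that it has been assigned a ClusterIP and that the Redis master pod shows up as its endpoint:
kubectl -n guestbook get svc redis-master
kubectl -n guestbook get endpoints redis-master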
Start up the Redis Slaves
Although the Redis master is a single pod, you can make it highly available to meet traffic demands by adding replica Redis slaves.
Creating the Redis Slave Deployment
Deployments scale based on the configuration set in the manifest file. In this case, the Deployment object specifies two replicas.
If there are no replicas running, this Deployment starts the two replicas on your Kubernetes cluster. Conversely, if there are more than two replicas running, it scales down until two replicas are running.
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Apply the Redis Slave Deployment from the redis-slave-deployment.yaml file:
kubectl apply -f redis-slave-deployment.yaml
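You can watch the Deployment controller reconcile towards the desired count of two replicas:
kubectl -n guestbook get deployment redis-slave
kubectl -n guestbook get pods -l app=redis,role=slave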
Creating the Redis Slave Service
The guestbook application needs to communicate to Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing to a set of Pods.
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Apply the Redis Slave Service from the redis-slave-service.yaml file:
kubectl apply -f redis-slave-service.yaml
Set up and Expose the Guestbook Frontend
The guestbook application has a web frontend, written in PHP, that serves the HTTP requests. It is configured to connect to the redis-master Service for write requests and the redis-slave Service for read requests.
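Because GET_HOSTS_FROM is set to dns in the manifests, the frontend simply resolves the Service names redis-master and redis-slave through the cluster DNS. You can confirm that both Services exist and have ClusterIPs with:
kubectl -n guestbook get svc redis-master redis-slave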
Creating the Guestbook Frontend Deployment
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80
Apply the frontend Deployment from the frontend-deployment.yaml file:
kubectl apply -f frontend-deployment.yaml
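Check that the frontend replicas are all up before exposing them:
kubectl -n guestbook get pods -l app=guestbook,tier=frontend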
Creating the Frontend Service
The redis-slave and redis-master Services you applied are only accessible within the container cluster because the default type for a Service is ClusterIP.
ClusterIP provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the container cluster. In this case we configure an NSX-T Load Balancer by adding the following specification:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  # type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apply the frontend Service from the frontend-service.yaml file:
kubectl apply -f frontend-service.yaml
Viewing the created NSX-T Resources
Now we have a number of objects created in the guestbook namespace. You can view the Deployments and Services that were created with the
kubectl get all -o wide command.
Note the External IP allocated to the frontend Service. This is an IP address taken from the Load Balancer IP Pool we configured in NSX-T and NCP.
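The same address is visible directly on the Service object; the EXTERNAL-IP column shows what NCP allocated from the Load Balancer IP Pool:
kubectl -n guestbook get svc frontend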
Now let's look at what NCP has created in NSX-T. To start, we can take another look at the Segment created for the Guestbook application and see that a number of Segment Ports have been created automatically.
Clicking on Load Balancing and Virtual Servers, we can see the Load Balancer and Virtual Server created by NCP for the Guestbook application frontend as a result of specifying type: LoadBalancer. Also note that you can find the IP address configured on the Virtual Server.
Additionally, a Server Pool is created with the frontend pods as the members of the pool.
Scale the Web Frontend
Scaling up or down is easy because the frontend servers are defined as a Service (type: LoadBalancer) backed by a Deployment controller.
Run the following command to scale up the number of frontend pods:
kubectl scale deployment frontend --replicas=5
This will add two pods, and NCP will automatically create the required Segment Ports and add the new pods to the Load Balancer Server Pool.
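You can confirm the two additional pods from the Kubernetes side as well:
kubectl -n guestbook get pods -l tier=frontend -o wide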
Note the added frontend Segment Ports:
Note the added frontend pods as Pool Members in the Server Pool:
Securing the Guestbook Application
To further secure the Guestbook App we leverage the NSX-T Distributed Firewall (DFW) by creating a Network Policy.
A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
To isolate the Guestbook Application and secure access to and from it, I configure the policy to allow only inbound TCP 6379 to the Redis pods.
# redis guestbook demo network-policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-demo-policy
spec:
  podSelector:
    matchLabels:
      app: redis
  ingress:
  - ports:
    - protocol: TCP
      port: 6379
Apply the NetworkPolicy from the redis-demo-policy.yaml file:
kubectl apply -f redis-demo-policy.yaml
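To review the policy as Kubernetes sees it (the pod selector and the allowed ingress port), describe it:
kubectl -n guestbook describe networkpolicy redis-demo-policy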
When this Network Policy is applied, NCP automatically creates the required NSX-T objects. Clicking on Security and then Distributed Firewall, under the Application category we can see the guestbook DFW Policies that have been added to the DFW. The first DFW rule allows (whitelists) inbound TCP 6379 towards the guestbook Group, to which the IP addresses of the Guestbook Pods are added automatically.
Also note that the rules are only applied to the members of the guestbook-redis-demo-policy-tgt group, which limits the DFW scope so that the rules only apply to the Guestbook Application Pods. There is also an isolation rule applied to the guestbook-redis-demo-policy-tgt group to ensure that everything else, except ingress TCP 6379, is dropped.
Note that a Service object is also added with the destination (ingress) TCP port 6379 specified.
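A quick way to verify the policy from inside the cluster is to open a TCP connection to the Redis master on port 6379 from one of the frontend pods. This is a minimal sketch that assumes the gb-frontend image ships the PHP CLI (it is a PHP/Apache based image); replace <frontend-pod-name> with one of your own pod names:
kubectl -n guestbook get pods -l tier=frontend
kubectl -n guestbook exec <frontend-pod-name> -- php -r '$c=@fsockopen("redis-master", 6379, $e, $s, 2); echo $c ? "6379 reachable" : "blocked";'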
Testing the Guestbook Application
Use a browser to verify that the Guestbook application actually works. Try to reach the Guestbook using the Load Balancer IP address, in my case http://172.16.1.12
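From the command line, a quick check against the Virtual Server should return the guestbook page headers (replace the address with your own Load Balancer IP):
curl -I http://172.16.1.12/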
You can also test Load Balancer performance with a tool like Apache Benchmark. I installed Apache Benchmark on one of my VMs in the lab and used the following command to test the Load Balancer:
ab -k -c 200 -n 100000 172.16.1.12/
On the NSX-T Load Balancer you can find detailed statistics.
This concludes the Kubernetes Guestbook example application deployment with NSX-T and NCP 2.5, which shows how NCP automatically creates the required NSX networking and security components.