Multi-tier Application

Deploy Guestbook

For this deployment we will return to using the default namespace.
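
If your kubectl context is still pointed at a namespace from an earlier exercise, you can switch it back to default. A minimal way to do that and to confirm the active namespace (the exact context name comes from your own lab kubeconfig):


kubectl config set-context $(kubectl config current-context) --namespace=default
kubectl config view --minify -o jsonpath='{..namespace}'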

Step 1 - Redis Master deployment POD controller

Redis is an open-source, in-memory data structure store that can be used as a database. For this lab we use this example application to keep the lab structure and Kubernetes architecture simple, since more complex deployments such as MySQL would require persistent storage volumes.


cat << EOF > ~/redis-master-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
EOF

With the file defined, you now have to create the Redis master deployment.


kubectl create -f ~/redis-master-deployment.yaml

deployment "redis-master" created

The Redis master should now be deployed into the Kubernetes cluster.


kubectl get pods -o wide

NAME                            READY     STATUS    RESTARTS   AGE       IP             NODE
redis-master-2365125485-1hg2z   1/1       Running   0          8m        10.X.X.X   pod09-master.ecatsrtpdmz.cisco.com

The pod can take some time to reach the Running state, because the container image has to be downloaded from the internet before it can be started on the compute host. The same applies to every deployment in this lab. Re-run the command until the containers show Running.
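
Rather than re-running the command repeatedly, you can also wait for the rollout to complete. This is optional and not part of the lab steps:


kubectl rollout status deployment/redis-master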

Step 2 - Deploy Redis Master kubernetes service

With the Redis master deployed, you have to deploy a service for it so that the Redis slaves can reach it. The service definition contains the advertised port and the labels that are used as the selector.


cat << EOF > ~/redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
EOF


kubectl create -f ~/redis-master-service.yaml


kubectl get service 

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
redis-master   ClusterIP   10.XX.XX.XX     <none>        6379/TCP   36s

For more information on the redis-master service that was deployed, run:


kubectl describe service redis-master

In this output, pay attention to the labels: Kubernetes uses them to determine how the different components of the application profile are reached.

Name:              redis-master
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
Annotations:       <none>
Selector:          app=redis,role=master,tier=backend
Type:              ClusterIP
IP:                10.96.X.X
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         10.209.X.X:6379
Session Affinity:  None
Events:            <none>

For example, suppose we want to list the endpoints available for this service by asking the cluster to select them based on the app and role labels.


kubectl get endpoints --selector app=redis,role=master

What happened is that when the deployment for the Redis master was defined, the labels were created along with it. This matters a great deal for application scalability: because pod names have to be unique, you point ingress services at backends that match these labels rather than naming a particular pod. The service then asks for the endpoints that match the labels and gets back all of the pods associated with them.

NAME           ENDPOINTS           AGE
redis-master   10.219.0.138:6379   1d
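
You can run the same kind of label-based query against the pods themselves to see how the service selector lines up with the pod labels (the pod name and hash will differ in your cluster):


kubectl get pods --selector app=redis,role=master --show-labels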

Step 3 - Redis Slave deployment POD

The Redis architecture uses a master-slave model, and Kubernetes is a convenient place to define this architecture and maintain the right connectivity between these pods.


cat << EOF > ~/redis-slave-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-slave
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
EOF


kubectl create -f ~/redis-slave-deployment.yaml 

You can take a look and see that several Kubernetes pods have been created.


kubectl get pods 

It may take a little time to deploy all of these pods.

NAME                            READY     STATUS    RESTARTS   AGE
redis-master-585798d8ff-qh4qs   1/1       Running   0          4m
redis-slave-865486c9df-bvm5r    1/1       Running   0          19s
redis-slave-865486c9df-kbsdc    1/1       Running   0          19s

As before, it can take some time for the containers to be downloaded and started. Re-run the command until all of the containers show Running.
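
If you want to confirm that the slaves can resolve the redis-master service through the cluster DNS that GET_HOSTS_FROM=dns relies on, a quick throwaway lookup works. The busybox:1.28 image and the pod name dns-test are illustrative choices, not part of the lab:


kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup redis-master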

Step 4 - Redis Slave service

Just like with the master, you have to create a service for the slaves. This is where most clients will establish their connections.


cat << EOF > ~/redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
EOF


kubectl create -f ~/redis-slave-service.yaml


kubectl get service 

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
redis-master   ClusterIP   10.XX.XX.XX     <none>        6379/TCP   5m
redis-slave    ClusterIP   10.XX.XX.XX     <none>        6379/TCP   5s
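
Because the slave deployment runs two replicas, the new service should list two endpoints behind it (the IP addresses will differ in your cluster):


kubectl get endpoints redis-slave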

Step 5 - Frontend deployment POD

With the backend services created, you can deploy the PHP front-end application component that will interact with the backend. This deployment file defines how that application component will be created in Kubernetes. In this case you will define it with a replica count of 2, which is how many Kubernetes pods will be built for this component. It also defines the container image and the resource requests.


cat << EOF > ~/gb-frontend-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: gb-frontend
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80
EOF


kubectl create -f ~/gb-frontend-deployment.yaml 


kubectl get pods --selector app=guestbook,tier=frontend 

NAME                       READY     STATUS    RESTARTS   AGE
gb-frontend-67f65745c-bvxrw   1/1       Running   0          2m
gb-frontend-67f65745c-r5l7k   1/1       Running   0          2m

As with the earlier deployments, the images need to be downloaded the first time, so re-run the command until the containers show Running.
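
To verify that the replica count and resource requests match what the YAML asked for, you can describe the deployment (just a sanity check, not a required lab step):


kubectl describe deployment gb-frontend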

Step 6 - Frontend deployment service

With that completed, you have to define the service for the frontend deployment. Here we pick a STATIC IP from the address pool provided when you built the ACI/Kubernetes YAML file. This is an important point to remember: there are two separate pools defined, the static pool and the dynamic pool. When you set loadBalancerIP you must select an address from the static pool. If you do not set loadBalancerIP, Kubernetes will pick an address from the dynamic pool.


cat << EOF > ~/gb-frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gb-frontend-service
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  # type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  loadBalancerIP: 10.0.146.68
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
EOF


kubectl create -f ~/gb-frontend-service.yaml

With the service created, you can now view it.


kubectl get service gb-frontend-service -o wide

NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE       SELECTOR
gb-frontend-service  LoadBalancer   10.96.XX.XX   10.0.146.68   80:32269/TCP   Xm        app=guestbook,tier=frontend
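
If you only want the external address that was assigned to the service, a jsonpath query pulls it out directly (this assumes the service already shows an EXTERNAL-IP as above):


kubectl get service gb-frontend-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'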

Step 7 - Check the APP!

Now that the service is deployed, you should be able to reach the guestbook app from your browser at http://10.0.146.68. The page is deliberately simple (one reason we picked this example): you just enter messages, and as long as the deployment is up, the data will be there.
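
If you prefer to check from the lab workstation's shell instead of a browser, a quick curl against the same address should return the guestbook HTML (assuming the workstation can reach the 10.0.146.68 service address):


curl -s http://10.0.146.68 | head -20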

If you were to look at the ACI fabric, you would see that the PBR service knows about the two hosts it needs to steer traffic to for the defined application, because those hosts have containers running on them that serve that function.

Step 8 - Increase deployment size

In Kubernetes, increasing the scale of a deployment is very easy. If you want to increase the number of front-end web application components for the app you have deployed, you just need to tell Kubernetes to increase the replica count.

If you issue the command:


kubectl get pods 

The output would look something like:

NAME                            READY     STATUS    RESTARTS   AGE
gb-frontend-67f65745c-rqp7x     1/1       Running   0          5h
gb-frontend-67f65745c-zf65b     1/1       Running   0          5h
redis-master-585798d8ff-n8jks   1/1       Running   0          5h
redis-slave-865486c9df-dnhgf    1/1       Running   0          5h
redis-slave-865486c9df-pdkfm    1/1       Running   0          5h

If you issue the following command:


kubectl scale --replicas 4 deployment gb-frontend 

The number of frontend containers built to process requests should now have grown to four:

NAME                            READY     STATUS    RESTARTS   AGE
gb-frontend-67f65745c-4h776     1/1       Running   0          5h
gb-frontend-67f65745c-sn44f     1/1       Running   0          5s
gb-frontend-67f65745c-vjhs9     0/1       Pending   0          5s
gb-frontend-67f65745c-zf65b     1/1       Running   0          5h
redis-master-585798d8ff-n8jks   1/1       Running   0          5h
redis-slave-865486c9df-dnhgf    1/1       Running   0          5h
redis-slave-865486c9df-pdkfm    1/1       Running   0          5h
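
You can also confirm the desired versus ready replica counts at the deployment level (the newest pod may stay Pending briefly while its image is pulled):


kubectl get deployment gb-frontend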

With the application created, we can now show you what ACI did behind the scenes to route traffic from the network into these various components.