In the Docker section we showed you a very simple application: a container running Flask that presents the IP address of the container. Now we are going to deploy the same application inside of Kubernetes.
In Kubernetes you can create deployments directly from the CLI, or you can define them in YAML files. Using YAML files to configure these Kubernetes components is what is called declarative configuration. The advantage of using this method is that it ties directly into two important parts of Kubernetes.
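For comparison, the same deployment could be created imperatively with a single command; the declarative approach used in this lab instead writes the desired state to a file that can be versioned and re-applied. A minimal sketch of the two styles (the lab's own deployment file is created next, so you do not need to run these):

# Imperative: tell Kubernetes what to do; nothing is saved locally
kubectl create deployment mylabapp --image=svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1

# Declarative: describe the desired state in a file, then apply it
kubectl apply -f ~/mylabapp-deployment.yaml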
cat << EOF > ~/mylabapp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mylabapp
  labels:
    app: mylabapp
spec:
  selector:
    matchLabels:
      app: mylabapp
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mylabapp
    spec:
      containers:
      - image: svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1
        name: mylabapp
        ports:
        - containerPort: 8090
          name: mylabapp
EOF
With the YAML definition completed, we can create the deployment.
kubectl create -f ~/mylabapp-deployment.yaml
This will create the Kubernetes deployment.
deployment.apps/mylabapp created
You can describe the deployment to get more information.
kubectl describe deployment mylabapp
The output of the command gives us details on the deployment that was built.
Name:                   mylabapp
Namespace:              default
CreationTimestamp:      Tue, 22 May 2018 09:22:12 -0400
Labels:                 app=mylabapp
Annotations:            deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"mylabapp"},"name":"mylabapp","namespace":"default"},"spec":{"...
Selector:               app=mylabapp
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           Recreate
MinReadySeconds:        0
Pod Template:
  Labels:  app=mylabapp
           tier=backend
  Containers:
   mylabapp:
    Image:        svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1
    Port:         8090/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mylabapp-767db98d99 (1/1 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---- ----                   -------
  Normal  ScalingReplicaSet  58s  deployment-controller  Scaled up replica set mylabapp-767db98d99 to 1
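The NewReplicaSet line in that output shows the ReplicaSet the deployment created to run the pod. As an optional check, you can list ReplicaSets filtered by the app label to see it directly:

kubectl get replicasets -l app=mylabapp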
Using the describe command from kubectl you can also get a glimpse of important information that has been configured in the pod.
kubectl describe pods mylabapp
Name:           mylabapp-767db98d99-94bvn
Namespace:      default
Node:           pod09-nodeX.ecatsrtpdmz.cisco.com/10.0.222.XX
Start Time:     Tue, 22 May 2018 09:22:15 -0400
Labels:         app=mylabapp
                pod-template-hash=3238654855
                tier=backend
Annotations:    opflex.cisco.com/computed-endpoint-group={"policy-space":"k8s_pod09","name":"kubernetes|kube-default"}
                opflex.cisco.com/computed-security-group=[]
Status:         Running
IP:             10.X.X.X
Controlled By:  ReplicaSet/mylabapp-767db98d99
Containers:
  mylabapp:
    Container ID:   docker://5708ccdd9e30b2adbfc95cb744969ebcbded5c27faf44583a194a36162a1c7d8
    Image:          svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1
    Image ID:       docker-pullable://svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1@sha256:3c4cecbf434b3eef4a7b34c4a1a97eea8bc4ded227f375621d52d3565c99746d
    Port:           8090/TCP
    State:          Running
      Started:      Tue, 22 May 2018 09:22:32 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-td2gp (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-td2gp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-td2gp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age  From                                      Message
  ----    ------                 ---- ----                                      -------
  Normal  Scheduled              15m  default-scheduler                         Successfully assigned mylabapp-767db98d99-94bvn to pod18-node2.ecatsrtpdmz.cisco.com
  Normal  SuccessfulMountVolume  15m  kubelet, pod-nodeX.ecatsrtpdmz.cisco.com  MountVolume.SetUp succeeded for volume "default-token-td2gp"
  Normal  Pulling                15m  kubelet, pod-nodeX.ecatsrtpdmz.cisco.com  pulling image "svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1"
  Normal  Pulled                 15m  kubelet, pod-nodeX.ecatsrtpdmz.cisco.com  Successfully pulled image "svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1"
  Normal  Created                15m  kubelet, pod-nodeX.ecatsrtpdmz.cisco.com  Created container
  Normal  Started                15m  kubelet, pod-nodeX.ecatsrtpdmz.cisco.com  Started container
In the output of the pod description you can see the Annotations field. Here we can see that the ACI integration has placed this pod inside a specific policy-space in the fabric. How does that translate into the ACI fabric?
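If you want to pull just the annotations rather than the full describe output, a jsonpath query can do it (substitute the pod name from your own cluster for the example name shown here):

kubectl get pod mylabapp-767db98d99-94bvn -o jsonpath='{.metadata.annotations}'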
If you head over to the ACI fabric you will see a tenant for this lab pod named k8s_pod09. Inside the application profile there is now a series of End Point Groups (EPGs), including kube-default.
As you can see, the ACI fabric is now aware of the Kubernetes pod created for mylabapp. It also knows the IP address that was assigned to the Kubernetes pod and the host that the container is actively running on. This visibility removes much of the opacity that the Kubernetes cluster would otherwise present to the network administrator.
ACI has also created policies in the fabric that make it possible for the different pods to communicate across the context of the Kubernetes overlay.
The ACI/Kubernetes integration policies are implemented both in the ACI fabric and on the host via OpenFlow controls. The integration uses the OpFlex protocol to communicate between the ACI fabric leaf and the compute host. This communication makes it possible to build the proper policies and extend them beyond just iptables on a single host.
ACI also creates a virtual networking integration for the Kubernetes cluster. In it we can see Kubernetes constructs, such as the deployment that was just created. This component-level visibility from the ACI fabric gives the network administrator granular insight into what the DevOps teams are building.
You will now create a quick pod to test reachability in the fabric. We will run it with the parameter --rm, so that it is deleted when we are done, and the parameters -it, so that we can interact with it. Since we are dealing with a fabric overlay orchestrator, the container is created on some particular host, but you will be interacting with it over the network.
kubectl run alpine --rm -it --image=svs-rtp-dmz-registry.ciscolive.com/alpine /bin/sh
Once inside the pod, use the command wget --spider --timeout=1 followed by the IP address that was assigned to the mylabapp pod (you can get it from kubectl or the ACI GUI). You will also have to specify the port, as the application is listening on port 8090.
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 10.209.X.X:8090
Connecting to 10.209.X.X:8090 (10.209.X.X:8090)
/ # exit
This alpine pod that we just created connected from one pod to another pod in the Kubernetes cluster over the ACI fabric.
With the deployment running in the Kubernetes cluster, we can create the service that we wish to expose to the outside world. With this service YAML definition you tell Kubernetes what kind of service this is; in this case, a LoadBalancer service.
In the file we have also defined a specific IP address to be used. This is a slight variation done to make it easier to reach the service URL from this lab manual. In the real world the most prevalent option would be to let Kubernetes assign an IP address from the pool; in some cases these are tied to pre-assigned DNS names that make it easy for developers to reach endpoints.
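For reference only, letting the address be assigned automatically simply means omitting the loadBalancerIP field from the spec, roughly like the fragment below. Do not create this file; the lab's actual service definition follows.

# Reference sketch: same service without a pinned external address
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8090
  selector:
    app: mylabapp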
cat << EOF > ~/mylabapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mylabapp
  labels:
    app: mylabapp
spec:
  type: LoadBalancer
  loadBalancerIP: 10.0.146.67
  ports:
  - port: 80
    targetPort: 8090
  selector:
    app: mylabapp
EOF
kubectl create -f ~/mylabapp-service.yaml
Once completed you can list the services and pods that have been deployed.
kubectl get svc,pods -o wide
The output should be similar to the following, with both a CLUSTER-IP and an EXTERNAL-IP. For this student pod, the external IP address should be the 10.0.146.67 that was defined.
NAME             TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)       AGE   SELECTOR
svc/mylabapp     LoadBalancer   10.X.X.X     10.0.146.67   80:XXXX/TCP   2m    app=mylabapp
svc/kubernetes   ClusterIP      10.96.0.1    <none>        443/TCP       2d    <none>

NAME                           READY   STATUS    RESTARTS   AGE   IP         NODE
po/mylabapp-6ddb9bfd5f-vv5l4   1/1     Running   0          3m    10.X.X.X   podXX-nodeX.ecatsrtpdmz.cisco.com
You can point your browser to that IP address, 10.0.146.67, to reach the deployed service.
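If you would rather test from the command line than a browser, a plain HTTP request to the external IP works as well (assuming curl is available on your lab workstation):

curl http://10.0.146.67/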
One of the big advantages of the integration is that the scalability of the application is managed by Kubernetes. You can easily ask Kubernetes to increase the number of replicas, and the load balancer service will adjust all policies to handle this.
kubectl scale --replicas=4 deployment mylabapp
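If you want to wait until the scale-out has finished before listing resources, kubectl can watch the rollout for you (an optional step):

kubectl rollout status deployment mylabapp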
Once Kubernetes completes, you can list the services and pods again.
kubectl get svc,pods -o wide
NAME             TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE   SELECTOR
svc/mylabapp     LoadBalancer   10.96.XX.XX   10.0.146.67   80:30911/TCP   7m    app=mylabapp
svc/kubernetes   ClusterIP      10.96.0.1     <none>        443/TCP        6d    <none>

NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE
po/mylabapp-587458cc8f-78t4x   1/1     Running   0          5s    10.209.X.X   pod09-node1.ecatsrtpdmz.cisco.com
po/mylabapp-587458cc8f-g8hxg   1/1     Running   0          5s    10.209.X.X   pod09-node2.ecatsrtpdmz.cisco.com
po/mylabapp-587458cc8f-hd8qg   1/1     Running   0          31m   10.209.X.X   pod09-node2.ecatsrtpdmz.cisco.com
po/mylabapp-587458cc8f-nqjfw   1/1     Running   0          5s    10.209.X.X   pod09-node1.ecatsrtpdmz.cisco.com
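You can also confirm that all four pod IPs now sit behind the service by listing its endpoints (an optional check):

kubectl get endpoints mylabapp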
As we will show you in the next section, the load balancing of the service to these four Kubernetes pods is handled by the ACI fabric itself using application service graphs.
Now that you have deployed a service and a Kubernetes pod, let's take a closer look at how ACI, using OpFlex, stores data on the compute components. The OpFlex communication protocol is established between the ACI fabric leaf and the compute node. This protocol exchanges information from the ACI fabric to provide policy enforcement inside the compute node via Open vSwitch. With this protocol the EPGs, contracts, and other ACI constructs are converted into Kubernetes network policies on the host itself. This is the same technology used in the Cisco OpenStack integration with ACI.
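As an optional aside, if you have shell access to a worker node (as you will in the steps below), the standard Open vSwitch CLI can show the bridges and ports the integration has programmed on that host. Bridge names and output vary by installation:

# Run on a worker node: list the Open vSwitch bridges and their ports
ovs-vsctl show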
The first step is to get the UID of the service itself. Using the output modifier -o yaml, we get the full object representation of the service in Kubernetes.
kubectl get service mylabapp -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-06-10T21:14:08Z
  labels:
    app: mylabapp
  name: mylabapp
  namespace: default
  resourceVersion: "1413"
  selfLink: /api/v1/namespaces/default/services/mylabapp
  uid: 36f34183-6cf3-11e8-bab8-005056182503
spec:
  clusterIP: 10.96.XX.XX
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.0.146.67
  ports:
  - nodePort: 30845
    port: 80
    protocol: TCP
    targetPort: 8090
  selector:
    app: mylabapp
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.0.146.67
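If you prefer not to scan the full YAML, a jsonpath query can return just the UID field (a convenience, not required for the lab):

kubectl get service mylabapp -o jsonpath='{.metadata.uid}'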
That UID is what you need to find the OpFlex service definition file. Copy the UID that is unique to your pod and then go to:
cd /var/lib/opflex-agent-ovs/services/
ls -la
-rw-r--r-- 1 root root 690 Jun 10 17:15 36f34183-6cf3-11e8-bab8-005056182503-external.service
-rw-r--r-- 1 root root 604 Jun 10 17:15 36f34183-6cf3-11e8-bab8-005056182503.service
-rw-r--r-- 1 root root 574 Jun 10 17:04 c7d33c70-6cf1-11e8-bab8-005056182503.service
-rw-r--r-- 1 root root 839 Jun 10 17:05 cbbb9c04-6cf1-11e8-bab8-005056182503.service
You should see a list of files; the one whose name matches the UID from your output is the one that provides the service information. The output looks like the following.
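To narrow the listing down to your own service, you can filter on the UID (shown here with the example UID from the output above; use your own):

ls /var/lib/opflex-agent-ovs/services/ | grep 36f34183-6cf3-11e8-bab8-005056182503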
cat the <UID>.service file that is specific to your output above.
cat 36f34183-6cf3-11e8-bab8-005056182503.service
{
  "uuid": "36f34183-6cf3-11e8-bab8-005056182503",
  "domain-policy-space": "common",
  "domain-name": "k8s_vrf",
  "service-mode": "loadbalancer",
  "service-mapping": [
    {
      "service-ip": "10.96.X.X",
      "service-proto": "tcp",
      "service-port": 80,
      "next-hop-ips": [
        "10.209.X.X",
        "10.209.X.X",
        "10.209.X.X",
        "10.209.X.X"
      ],
      "next-hop-port": 8090,
      "conntrack-enabled": true
    }
  ],
  "attributes": {
    "app": "mylabapp",
    "name": "mylabapp",
    "namespace": "default",
    "service-name": "default_mylabapp"
  }
}
The UID information is also available in the Cisco ACI fabric under the VMM domain for the service definition.
There is also an endpoint file definition, located in the directory /var/lib/opflex-agent-ovs/endpoints/. To make the connection, first get the pods with the command:
kubectl get pods -o wide | grep mylabapp
mylabapp-7bbc7bf597-58kft   1/1   Running   0   14m   10.209.0.67   pod09-node1.ecatsrtpdmz.cisco.com   <none>   <none>
mylabapp-7bbc7bf597-bnjz9   1/1   Running   0   14m   10.209.0.5    pod09-node2.ecatsrtpdmz.cisco.com   <none>   <none>
mylabapp-7bbc7bf597-vhlj9   1/1   Running   0   26m   10.209.0.3    pod09-node2.ecatsrtpdmz.cisco.com   <none>   <none>
mylabapp-7bbc7bf597-wxf5v   1/1   Running   0   14m   10.209.0.68   pod09-node1.ecatsrtpdmz.cisco.com   <none>   <none>
From the list of pods, pick one and pull its full definition. Make sure you note the worker node (node1 or node2) that the pod resides on. In the example below, the pod from the output above resides on node1.
kubectl get pod [pod_id that looks like mylabapp-7bbc7bf597-58kft] -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-01-27T09:34:02Z"
  generateName: mylabapp-7bbc7bf597-
  labels:
    app: mylabapp
    pod-template-hash: 7bbc7bf597
  name: mylabapp-7bbc7bf597-58kft
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: mylabapp-7bbc7bf597
    uid: 85e18de2-efe0-4912-8ec2-0846e7ea5dab
  resourceVersion: "128970"
  selfLink: /api/v1/namespaces/default/pods/mylabapp-7bbc7bf597-58kft
  uid: dea12256-f8c1-4c56-a209-4de21d69cb0d
spec:
  <snip>
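As with the service, a jsonpath query can return just the pod UID if you prefer (substitute your own pod name for the example one):

kubectl get pod mylabapp-7bbc7bf597-58kft -o jsonpath='{.metadata.uid}'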
cd /var/lib/opflex-agent-ovs/endpoints/
ls
Find the file that matches that UUID. The file name includes additional information, but it starts with the UUID. If you cat the file (replacing the UUID with your own), the contents will be similar to the output below:
cat dea12256-f8c1-4c56-a209-4de21d69cb0d_31b5fc8269107b29637ab27c81f048ed1e0e8349467c194d5393f9c17d70fded_vetha81f5f4f.ep
{
  "uuid": "dea12256-f8c1-4c56-a209-4de21d69cb0d_31b5fc8269107b29637ab27c81f048ed1e0e8349467c194d5393f9c17d70fded_vetha81f5f4f",
  "eg-policy-space": "k8s_pod17",
  "endpoint-group-name": "kubernetes|kube-default",
  "ip": [
    "10.217.0.67"
  ],
  "mac": "26:eb:36:3a:67:5b",
  "access-interface": "vetha81f5f4f",
  "access-uplink-interface": "pa-vetha81f5f4f",
  "interface-name": "pi-vetha81f5f4f",
  "attributes": {
    "app": "mylabapp",
    "interface-name": "vetha81f5f4f",
    "namespace": "default",
    "pod-template-hash": "7bbc7bf597",
    "vm-name": "mylabapp-7bbc7bf597-58kft"
  }
}
We have included these steps so you have another tool in your troubleshooting belt. These are OpFlex-related files; they feed the Open vSwitch integration and determine how Open vSwitch builds its tables on the compute node based on these policies.