Moving Workloads


As we have seen to this point, all the Pods have been deployed into the same default EPG in the ACI fabric. This is done on purpose. In a DevOps environment, developers are more productive and expect a more open environment without constraints imposed by the network. If you have experience with VMware or OpenStack, it isn't uncommon for large flat subnets to exist where application components are deployed during the development cycle with direct IP connectivity between them.

Kubernetes follows the same concept. Even when you created the Namespace and deployed the application in the new Namespace, the Pods were still deployed in the default EPG inside the ACI fabric. Pods can talk to one another across different namespaces, but they need to include the namespace in the name of the service they want to reach. By default the namespace search list is written to /etc/resolv.conf in the container.
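
As a quick illustration, a Service in another namespace is reachable as <service>.<namespace>, or fully qualified as <service>.<namespace>.svc.cluster.local. The names below are placeholders for illustration only, not objects from this lab:

# A Service named "web" in namespace "team-a" (placeholder names)
wget --spider http://web.team-a
wget --spider http://web.team-a.svc.cluster.local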

With the application now defined inside the Namespace mylabapp that you created, you can move that workload into a separate EPG and apply network policies in the ACI fabric as needed.

Step 1 - Create new EPG

To create the EPG, go to the ACI GUI and navigate to your student pod tenant: k8s_pod09

Click on Create Application EPG and enter the data you can see in the following screen capture. Make sure to select the option to associate the VMM domain.

When you click the + in the EPG Contract Master section, you are using a feature introduced in ACI 3.1 that lets an EPG inherit the contracts that already exist on another EPG.

Since the goal is to duplicate the contracts that already exist on kube-default, this feature makes it very easy. Select the application profile kubernetes, which is the only one under this tenant, and then select the EPG kube-default.

Once that is completed you can click Next to associate the VMM domain.

Now you have to associate the EPG with the VMM domain of the Kubernetes Pods.

Once completed, if you look at the policy topology for the EPG, you will notice that the newly created EPG has an arrow pointing sideways to kube-default. This is the GUI representation of the contract master relationship.
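
If you prefer automation over the GUI, the same EPG could also be created by posting its object tree to the APIC REST API. The following is only a sketch based on the standard ACI object model (fvAEPg with fvRsBd, fvRsDomAtt and fvRsSecInherited); the bridge domain and VMM domain names are placeholders that you would replace with the ones used in your pod:

POST https://<apic>/api/mo/uni/tn-k8s_pod09/ap-kubernetes.xml

<fvAEPg name="kube-mylabapp">
  <fvRsBd tnFvBDName="kube-pod-bd"/>                                        <!-- placeholder: same BD as kube-default -->
  <fvRsDomAtt tDn="uni/vmmp-Kubernetes/dom-k8s_pod09"/>                     <!-- placeholder: Kubernetes VMM domain -->
  <fvRsSecInherited tDn="uni/tn-k8s_pod09/ap-kubernetes/epg-kube-default"/> <!-- contract master: inherit from kube-default -->
</fvAEPg>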

Before you move the workloads, let's run a little experiment and check the reachability of mylabapp. As you remember, we moved the application into a separate Namespace, and that has consequences for reachability: any deployment inside a specific namespace has to be referenced with the namespace as a DNS sub-domain.

If you look again at the service, you should see the application deployed in the Namespace mylabapp:


kubectl get svc,pods -o wide --namespace mylabapp

NAME           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE       SELECTOR
svc/mylabapp   LoadBalancer   10.XXX.XX.XX   10.0.146.67   80:31840/TCP   3h        app=mylabapp

NAME                           READY     STATUS    RESTARTS   AGE       IP             NODE
po/mylabapp-587458cc8f-86ggx   1/1       Running   0          3h        10.XX.XX.XX    pod09-nodeX.ecatsrtpdmz.cisco.com
po/mylabapp-587458cc8f-f9k4c   1/1       Running   0          3h        10.XX.XX.XX    pod09-nodeX.ecatsrtpdmz.cisco.com
po/mylabapp-587458cc8f-m4s57   1/1       Running   0          3h        10.XX.XX.XX    pod09-nodeX.ecatsrtpdmz.cisco.com
po/mylabapp-587458cc8f-sbbxm   1/1       Running   0          3h        10.XX.XX.XX    pod09-nodeX.ecatsrtpdmz.cisco.com

Step 2 - Check connectivity

As you did before, you are going to use an alpine container deployment to connect between pods in the fabric.


kubectl run alpine --rm -it --image=svs-rtp-dmz-registry.ciscolive.com/alpine /bin/sh

Once inside the container try to connect to the service with the wget command.


wget --spider --timeout=1 mylabapp

/ # wget --spider --timeout=1 mylabapp
Spider mode enabled. Check if remote file exists.
--2018-05-28 21:41:44--  http://mylabapp/
Resolving mylabapp... failed: Operation timed out.
wget: unable to resolve host address 'mylabapp'

The reason that you can't connect to the app is that it exists in a separate Namespace. In Kubernetes, when applications are deployed in another Namespace, they are only reachable by adding that Namespace as a sub-domain of the service name. If you run the command more /etc/resolv.conf inside the alpine container, you will see what the default search domains contain.


more /etc/resolv.conf

/ # more /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ecatsrtpdmz.cisco.com

There you can see that the search list only covers the default namespace within the cluster. Now try to reach the service again, this time adding the namespace as a sub-domain:


wget --spider --timeout=1 mylabapp.mylabapp
/ # wget --spider --timeout=1 mylabapp.mylabapp
Spider mode enabled. Check if remote file exists.
--2018-05-28 21:59:18--  http://mylabapp.mylabapp/
Resolving mylabapp.mylabapp... 10.103.51.32
Connecting to mylabapp.mylabapp|10.103.51.32|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3011 (2.9K) [text/html]

The alpine deployment can now reach the service. We wanted to show you this before the next part, where you will see how ACI can help secure these workloads across Pods.

Before we continue, exit the container.


exit

Step 3 - Move the workload

ACI Endpoint Groups (EPGs) provide a new model for mapping applications to the network. Rather than using forwarding constructs such as IP addressing or VLANs to apply connectivity and policy, EPGs use a grouping of application endpoints (EPs). EPGs act as a container for collections of applications, or application components and tiers, that can be used to apply forwarding and policy logic. They allow the separation of network policy, security, and forwarding from addressing, and instead apply them to logical application boundaries.

The ACI and Kubernetes integration makes it possible to place Kubernetes deployments into separate EPGs. Using the namespace construct, you can easily assign workloads to specific EPGs in the ACI fabric, which then makes it possible to apply security constraints protecting the Kubernetes cluster.

You have just created an EPG in the ACI fabric called kube-mylabapp, and you can transfer all the Pods in the namespace mylabapp into this separate EPG with a single command.


acikubectl set default-eg namespace mylabapp -t k8s_pod09 -a kubernetes -g kube-mylabapp 

[root@pod09-master ~]# acikubectl set default-eg namespace mylabapp -t k8s_pod09 -a kubernetes -g kube-mylabapp

Setting default endpoint group:
Endpoint Group:
Tenant: k8s_pod09
App profile: kubernetes
Endpoint group: kube-mylabapp
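
Under the covers, acikubectl annotates the Kubernetes namespace so that the OpFlex agent knows which EPG its Pods belong to. Assuming the annotation key used by the aci-containers integration, a roughly equivalent command would be:

kubectl annotate namespace mylabapp --overwrite \
  opflex.cisco.com/endpoint-group='{"tenant":"k8s_pod09","app-profile":"kubernetes","name":"kube-mylabapp"}'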

If you now look in the ACI fabric under the Operational tab of the EPG, you will see that the Pods for this deployment now all exist in that specific EPG.

This can take some time, and you might have to click between different EPGs so that the ACI GUI refreshes its content while the OpFlex agent reports the move to the ACI fabric.
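
You can also verify the assignment from the command line by reading back the annotation that was applied in the previous step:

kubectl get namespace mylabapp -o jsonpath='{.metadata.annotations}'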

Step 4 - Verify Network Policy

If you still have the Linux alpine container open from the wget tests, re-run the same wget command. If not, here is the deployment run command again.


kubectl run alpine --rm -it --image=svs-rtp-dmz-registry.ciscolive.com/alpine /bin/sh

Once inside the container, run the command wget --spider --timeout=1 mylabapp.mylabapp

/ # wget --spider --timeout=1 mylabapp.mylabapp
Spider mode enabled. Check if remote file exists.
--2018-05-29 01:14:06--  http://mylabapp.mylabapp/
Resolving mylabapp.mylabapp... 10.XX.XX.XX
Connecting to mylabapp.mylabapp|10.XX.XX.XX|:80... failed: Operation timed out.
Retrying.

As you can see, this deployment is failing to reach the service because there is no contract in ACI to allow traffic from the kube-default EPG that you have been using to the new EPG kube-mylabapp that you just created. Any new Pods created in the namespace will automatically be deployed inside this EPG. You can also create different deployments inside the namespace, and they too would be placed in this EPG.
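
If you wanted to deliberately re-open this path, the usual ACI approach is to define a contract, provide it on kube-mylabapp and consume it from kube-default. The fragment below is only a sketch based on the standard ACI contract model (vzBrCP, vzSubj, fvRsProv, fvRsCons); the contract and filter names are placeholders, and in this lab no such contract exists yet, which is exactly why the wget fails:

<vzBrCP name="allow-mylabapp-http">                  <!-- placeholder contract under the tenant -->
  <vzSubj name="http">
    <vzRsSubjFiltAtt tnVzFilterName="http-80"/>      <!-- placeholder filter allowing TCP 80 -->
  </vzSubj>
</vzBrCP>

<fvAEPg name="kube-mylabapp">
  <fvRsProv tnVzBrCPName="allow-mylabapp-http"/>     <!-- kube-mylabapp provides the contract -->
</fvAEPg>
<fvAEPg name="kube-default">
  <fvRsCons tnVzBrCPName="allow-mylabapp-http"/>     <!-- kube-default consumes the contract -->
</fvAEPg>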

The capabilities here are powerful. For example, suppose you have some bare metal database servers that are used by different developer groups. If they are placed in a specific EPG, the fabric can act as the connector between that bare metal resource and the Kubernetes cluster. You can specify precisely the service ports of the database server for a secure connection between the development team and the Kubernetes cluster Pods.
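
As an illustration of that idea, the filter attached to such a contract could be narrowed to just the database service port. The fragment below is a hypothetical sketch using the standard ACI filter model (vzFilter/vzEntry); TCP 5432 is only an example port and is not part of this lab:

<vzFilter name="db-5432">
  <vzEntry name="tcp-5432" etherT="ip" prot="tcp" dFromPort="5432" dToPort="5432"/>  <!-- example database port only -->
</vzFilter>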