Using a real ingress controller


So far we have shown that you can reach services exposed by Kubernetes at a specific IP address. This is a well-known way to expose services: add a DNS name for a particular IP address and assign that IP to the desired service.

Another option is to deploy a wildcard DNS entry. For example, a DNS entry can be added that stipulates that *.ciscolive.com resolves to a specific IP address or set of IP addresses. Users can then simply create a service, assign it any name ending in .ciscolive.com, and requests for that URL will be routed to the cluster.

An L7 controller then acts on the received URL to route the traffic to the correct service. This simplifies application deployments dramatically. For example, if you are developing a web application with this system, the deployed services could be things like js.ciscolive.com, minio.ciscolive.com, calendar.ciscolive.com, and cart.ciscolive.com, each routed to the corresponding service in the kubernetes fabric automatically and programmatically.
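To make the wildcard idea concrete, here is a minimal shell sketch (plain shell, not anything traefik itself runs) of the match an L7 controller effectively applies to each request's Host header: any hostname under the wildcard domain is accepted, anything else falls through. The hostnames are the illustrative examples above.

```shell
# Minimal sketch: the wildcard match an L7 controller applies to each
# request's Host header. Hostnames are the illustrative examples from
# the text; anything outside the wildcard domain falls through.
matched=""
for host in js.ciscolive.com minio.ciscolive.com cart.ciscolive.com other.example.com; do
  case "$host" in
    *.ciscolive.com) matched="$matched $host" ;;  # covered by the wildcard entry
  esac
done
echo "routed into cluster:$matched"
```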

Step 1 - Create configuration file


cat << EOF > ~/traefik.yaml
loadBalancerIP: 10.0.146.69
replicas: 4
debug:
  enabled: true
dashboard:
  enabled: true
  domain: dashboard.acik8s-pod09.ecatsrtpdmz.cisco.com
rbac:
  enabled: true
EOF

Step 2 - Install traefik ingress controller

traefik is an open source application proxy and load balancer written in Go. It is very fast and designed around the needs of containerized application deployments.


helm install stable/traefik --name traefik --namespace kube-system --values ~/traefik.yaml

This will output text explaining what was accomplished:

NAME:   traefik
LAST DEPLOYED: Sat Jun  2 09:31:05 2018
NAMESPACE: kube-system
STATUS: DEPLOYED
[cut]

Step 3 - Verify ingress controller is working

First you can check to see if the deployment has completed.


kubectl get deployment traefik -n kube-system

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
traefik   1         1         1            1           9m  

Then verify that the service has been created for the dashboard portion of traefik.


kubectl get svc traefik -n kube-system

NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
traefik   LoadBalancer   10.96.XX.XX    10.0.146.149   80:31433/TCP,443:31013/TCP   9m

Finally, check that the ingress has been established.


kubectl describe ingress traefik-dashboard -n kube-system

Name:             traefik-dashboard
Namespace:        kube-system
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                                          Path  Backends
  ----                                          ----  --------
  dashboard.acik8s-pod09.ecatsrtpdmz.cisco.com
                                                   traefik-dashboard:80 (10.219.X.X:8080)
Annotations:
Events:  <none>

With that, the traefik dashboard is deployed in kubernetes. The URL http://dashboard.acik8s-pod09.ecatsrtpdmz.cisco.com should get you into the dashboard for the traefik service that we have defined. The web page should look like:
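If DNS is not yet resolvable from your workstation, you can exercise the same path from the CLI by pointing curl at the service's EXTERNAL-IP and supplying the Host header yourself. The IP below is the one from the sample kubectl get svc output above; substitute the EXTERNAL-IP your cluster reports.

```shell
# Hedged example: reach the traefik dashboard without relying on DNS by
# sending the expected Host header straight to the service's external IP.
curl -s -H "Host: dashboard.acik8s-pod09.ecatsrtpdmz.cisco.com" http://10.0.146.149/ | head -n 5
```

This works because traefik routes on the Host header, exactly as the ingress rule describes.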

Understanding how traefik/kubernetes works

Traefik is just one of many ingress controllers available to advertise services in a kubernetes fabric. Combined with proper DNS naming, it gives DevOps teams an easy way to create services for different clients without creating each DNS entry manually. The first step in this process is telling DNS that a wildcard record exists pointing at the ingress controller. A single IP address is sufficient because ACI manages the load balancing to traefik. With this wildcard in place, any time a service is created with a name of the form [new_service].acik8s-pod09.ecatsrtpdmz.cisco.com, its traffic is routed into the ACI fabric, which then load balances against the traefik ingress controller directly.

Now every time an ingress is created in the cluster, it can be easily mapped to the same ingress controller. The ingress controller listens in the different namespaces for requests to expose a service; it automatically picks up the information from the ingress definition and exposes the service without any intervention.
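You can see everything the controller is reacting to with a single command; each row is an ingress definition the controller has picked up, regardless of namespace.

```shell
# List every ingress across all namespaces that the controller can act on.
kubectl get ingress --all-namespaces
```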

Expose Guestbook

You built the guestbook application in the default namespace and connected to it via a specific IP address assigned to the service. Since the service was of type LoadBalancer, the ACI fabric used PBR to rewrite traffic to the address of the pod inside the fabric.

Now, you want to expose this service via the same DNS method we just described.

Step 4 - Create ingress definition for guestbook


cat << EOF > ~/gb-frontend-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gb-frontend-ingress
  labels:
    app: guestbook
    tier: frontend
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: guestbook.acik8s-pod09.ecatsrtpdmz.cisco.com
    http:
      paths:
      - backend:
          serviceName: gb-frontend-service
          servicePort: 80
EOF

Step 5 - Create guestbook ingress

If you want to see something cool, keep an eye on the traefik dashboard when you issue this command.


kubectl create -f ~/gb-frontend-ingress.yaml 

Once the ingress is created, traefik is immediately notified and builds the ingress model to start load balancing traffic to the pods.

If you look at the traefik dashboard you will see that, from the perspective of traefik, the service backend and frontend are operational and active. You can also check via the CLI with:


kubectl describe ingress gb-frontend-ingress

Name:             gb-frontend-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                                          Path  Backends
  ----                                          ----  --------
  guestbook.acik8s-pod09.ecatsrtpdmz.cisco.com
                                                   gb-frontend-service:80 (<none>)
Annotations:
Events:  <none>

Step 6 - Try connection to service

Now try to connect to the exposed service, guestbook.acik8s-pod09.ecatsrtpdmz.cisco.com.

While the ingress has been created and the service is ready to be exposed, ACI security constructs will not permit this operation to complete. The reason is that we need to allow a specific type of communication between two separate EPGs (endpoint groups) in the ACI fabric.

Contract for the load balancer

One important thing to remember is that sending flows through the ingress controller changes things with regard to the ACI fabric. We have deployed these applications into different kubernetes namespaces, and some of them live in different ACI EPGs. So you now have to explicitly permit the traffic that should be allowed across the ACI fabric.

Look at the following diagram:

As you can see, the ACI contracts allow traffic from the outside into these two different EPGs. This is why you can reach the IP-based guestbook service and the traefik dashboard. But a big change happens when you load balance through traefik: the network traffic hits the traefik load balancer, which terminates the connection and establishes a new connection to the load-balanced service behind it.

As you can imagine, the problem is that the traefik pod can't talk to the guestbook pod on the ports it needs, because ACI has established strict policy enforcement in the fabric for you!

This makes for a more secure kubernetes cluster, where application policy constructs are applied to the network. In this case, the policy we need is based on what the service is requesting. Look at the output of kubectl describe for the ingress that you just created:

Name:             gb-frontend-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                                          Path  Backends
  ----                                          ----  --------
  guestbook.acik8s-pod09.ecatsrtpdmz.cisco.com
                                                   gb-frontend-service:80 (<none>)
Annotations:
Events:  <none>

You can see that the backend is using port 80. We have to open that port between these two EPGs (kube-default and kube-system).
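Before building the contract, you can confirm exactly which port the backend service exposes, so the ACI filter can be scoped to just that port. The service name is the one from the guestbook ingress definition created earlier.

```shell
# Confirm the TCP port exposed by the guestbook frontend service; the
# ACI contract filter should permit exactly this port between the EPGs.
kubectl get svc gb-frontend-service -o jsonpath='{.spec.ports[*].port}'
```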

Step 7 - Create contract

In the ACI fabric, create a contract with a filter that allows TCP port 80.

Step 8 - Tie contracts to End Point Groups

Step 9 - Connect to exposed service

Now the connection to guestbook.acik8s-pod09.ecatsrtpdmz.cisco.com should work.