Introduction

Why Containers?

Understanding Containers

OS-level virtualization has taken the tech world by storm in the form of instances called containers. Originally a Linux concept focused mostly on application development, containers are an architectural method of collecting everything needed to provide a service. Instead of long installation instructions listing the packages required to get a service operational, a container can be built to include everything the service needs to run.

Containers differ from traditional virtual machines (VMs) in that containers running on the same host share the underlying kernel. Because the operating system does not need to be reproduced for each instance, containers offer two important advantages: they start very quickly, and they can be made extremely small.

But what are these services we keep talking about? Since the days of application development on mainframe computers, developers have worked to compartmentalize their applications into smaller, more manageable processes. This gives a programmer an important capability: the ability to change and modify part of an application without impacting and rebuilding the whole application.

Achieving this goal requires writing applications so that their parts can communicate with each other through programmatic mechanisms such as RPC (remote procedure call) or, in more recent terms, an API (application programming interface).

In the past, different processes ran on a single server. These evolved into processes communicating across networks through remote procedure calls, but complexity and a lack of standards made them hard to manage. Standard approaches such as REST were then developed for sending data between network services, and programmatic tooling that simplifies the creation of APIs now makes it easy to break applications into much smaller pieces.
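
Today such a service call is often as simple as an HTTP request returning JSON. For example (the service and endpoint here are hypothetical, not part of this lab):

    # Ask a hypothetical inventory service for item 42 over its REST API.
    curl -s http://inventory.example.com/api/v1/items/42
    # {"id": 42, "name": "widget", "stock": 17}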

With that in mind, containers have evolved from the original LXC concepts into what they are today: a mechanism for developers to create services that are easy to deploy, orchestrate, and manage, and that are small, nimble, and fast. Hence the term microservices was coined.

Container Orchestration

Container orchestration involves tools that manage the lifecycle of an application through automation. This covers aspects such as placement, scheduling, resource management, updates, and monitoring.

One of the key components of container orchestration is the declarative configuration model used throughout the ecosystem. DevOps teams write declarative configuration, usually in YAML or JSON, and the automation applies it throughout the architecture. This configuration contains all the crucial elements the application component requires to function, such as network ports, storage, volumes, and other parameters.
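
As a concrete illustration, here is a minimal sketch of such a declarative configuration, written as a Kubernetes Deployment (the names and image are hypothetical, not taken from this lab):

    # One web container, its network port, and a volume, all described as
    # desired state rather than as installation steps.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web
            image: nginx:1.25
            ports:
            - containerPort: 80          # network port the service listens on
            volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
          volumes:
          - name: content
            emptyDir: {}                 # ephemeral storage, for illustration only

The orchestrator's job is then to make the running state of the cluster match this description.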

It is also the orchestrator's job to ensure proper placement of workloads based on system activity, load, and policy. For example, it makes no sense to provision an application with a high-availability requirement and then place all of its components on the same physical compute node.
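
In Kubernetes, for example, that constraint can itself be expressed declaratively. The sketch below is a pod-spec fragment (the app label reuses the hypothetical example above) that asks the scheduler never to place two such replicas on the same node:

    # Pod anti-affinity: do not schedule this pod on a node that already
    # runs a pod labeled app=web-frontend.
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web-frontend
          topologyKey: kubernetes.io/hostname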

The orchestrator is also responsible for monitoring the health of all the containers that comprise the application component and making sure they are active and functional.
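
In Kubernetes this is typically done with probes. The container-spec fragment below (the /healthz path is an assumption for illustration) has the orchestrator poll the container and restart it when the check keeps failing:

    # Liveness probe: poll an HTTP endpoint every 10 seconds, starting
    # 5 seconds after the container launches; repeated failures trigger
    # a restart of the container.
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10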

There are many container orchestrators on the market; three of the popular ones are Kubernetes, Docker Swarm, and Apache Mesos. Over the past few years Kubernetes has gained traction as a leader in the container orchestration space, which is why, based on Cisco ACI user feedback, Cisco developed this integration model.

Kubernetes

Kubernetes originated as an open-source project at Google, loosely based on Google's own Borg architecture for managing its datacenter workloads (Borg remains internal to Google). Kubernetes was born in 2014 and has been adopted and extended by the community since, with many extensions contributed by other corporations and open-source advocates.

Since that birth in 2014, Kubernetes has continued to garner interest in the market, and different platforms such as Rancher and OpenShift use it as the basis for managing containers. The internet can't stop asking questions about Kubernetes.

Which leads us to why you are here with us today.

About this Lab!

When we embarked on developing this lab, we realized that understanding Kubernetes without some grounding in containers didn't make sense. So we structured the lab to walk through container basics and the network components of containers on a single host. From there we move on to how the Kubernetes network integration with ACI gives the network administrator visibility into what is happening inside the Kubernetes cluster.

We requested information from the engineering team and gathered other feedback on how to structure this lab. One thing we heard clearly is that most users were interested in integration with VMware workloads rather than bare-metal servers, so we designed the lab to show how you can configure and build the environment that way.

At a high level, the lab consists of the following steps:

1. Introduction
2. Understanding Containers
3. ACI Access Policies
4. VMware Domain Definition
5. Linux Host Setup
6. ACI Kubernetes
7. Configure VMware Integration
8. Initialize Kubernetes
9. Deploy Applications
10. Conclusion
11. Reference

We take the student through the following flow of learning:

  • Installing and understanding Docker
    • Here we show you how to pull Docker images, how to run them, and how to modify and store them for your own use (see the first snippet after this list).
  • ACI/VMware policy builds
    • The first steps build the basic ACI access policies and the ACI VMware VMM domain policies.
  • Linux setup for Kubernetes
    • In this section we show all the Linux steps needed to set up your Linux host to integrate properly with ACI. We use CentOS 7, which is the closest to Red Hat Enterprise Linux (a typical preparation sketch follows this list).
  • ACI Kubernetes config
    • The integration configuration between ACI and Kubernetes mostly uses the same declarative model as containers themselves: a single YAML file contains all the parameters, and a single command automates putting it all together (outlined after this list).
    • Afterward you patch things up in VMware and initialize the Kubernetes fabric.
  • Deploy Applications
    • Here you get to play with the integration by deploying various applications in the ACI/Kubernetes fabric (a simple example closes this section). We show you the details of what happens under the hood and how ACI helps the network administrator run the network with visibility between these different components.
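
The Docker snippet below previews the kind of commands used in that first section (the image and names are illustrative):

    # Pull a public image, run it, modify it, and store the result as a
    # new image for your own use.
    docker pull nginx:1.25
    docker run -d --name web -p 8080:80 nginx:1.25
    docker exec web sh -c 'echo "hello from the lab" > /usr/share/nginx/html/index.html'
    docker commit web my-nginx:custom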
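
For the Linux setup, the sketch below shows typical CentOS 7 preparation for a kubeadm-based cluster. This is an assumption of the usual prerequisites rather than the lab's exact steps, and the ACI-specific interface and VLAN configuration is covered in the lab itself:

    # Typical host preparation (assumes the Kubernetes yum repository is
    # already configured on the machine).
    sudo setenforce 0                    # relax SELinux for the kubelet
    sudo swapoff -a                      # the kubelet requires swap disabled
    sudo systemctl enable --now docker   # container runtime
    sudo yum install -y kubelet kubeadm kubectl
    sudo systemctl enable --now kubelet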
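
The ACI Kubernetes step revolves around Cisco's acc-provision tool. The outline below is a rough sketch; the exact flags and flavor string depend on the acc-provision version used in the lab:

    # -c: the single YAML file of integration parameters
    # -f: target flavor (the Kubernetes version string varies)
    # -a with -u/-p: push the generated policy to the APIC
    # -o: the generated CNI manifests for the cluster
    acc-provision -c aci-containers-config.yaml -f kubernetes-1.18 \
        -a -u admin -p '<password>' -o aci-containers.yaml
    kubectl apply -f aci-containers.yaml   # install the ACI CNI components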
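
Finally, the application-deployment exercises use standard kubectl commands along the lines of this sketch (the deployment name and image are illustrative):

    # Run an application, expose it, scale it, and look at where the pods
    # landed; the pod IPs are what ACI sees as endpoints on the fabric.
    kubectl create deployment demo --image=nginx:1.25
    kubectl expose deployment demo --port=80 --type=NodePort
    kubectl scale deployment demo --replicas=3
    kubectl get pods -o wide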

If you have any questions, please reach out to the proctors. We are here to help and to answer any questions you may have; if we don't know the answer, we will gladly find it for you.