Initialize Kubernetes


Step 1 - Initialize the Kubernetes fabric with kubeadm

The kubeadm initialization procedure is the standard way to bootstrap a Kubernetes cluster and is widely used to deploy clusters. The two flags included here define the pod network subnet (--pod-network-cidr) and the service subnet (--service-cidr).


cd ~
kubeadm init --apiserver-advertise-address=10.0.144.67 --pod-network-cidr=10.209.0.1/16 --service-cidr 10.96.0.1/16

This process triggers a series of events, including configuration of the ACI fabric. The output will look as follows:

[root@pod09-master ~]# kubeadm init --apiserver-advertise-address=10.0.144.67 --pod-network-cidr=10.209.0.1/16 --service-cidr 10.96.0.1/16 

I0126 10:08:37.053715    8635 version.go:248] remote version is much newer: v1.17.2; falling back to: stable-1.15
[init] Using Kubernetes version: v1.15.9
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [pod17-master.ecatsrtpdmz.cisco.com localhost] and IPs [10.0.144.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [pod17-master.ecatsrtpdmz.cisco.com localhost] and IPs [10.0.144.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [pod17-master.ecatsrtpdmz.cisco.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.144.131]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.502916 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node pod17-master.ecatsrtpdmz.cisco.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node pod17-master.ecatsrtpdmz.cisco.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uzv78v.m2h7lfkytpvadl7l
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.X.X.X:6443 --token uzv78v.m2h7lfkytpvadl7l \
    --discovery-token-ca-cert-hash sha256:4df7af45031eff56289b6867f9dd4d9d968580ca2b4450467c40065617e92c29 

You now have to run the kubeadm join command shown above on the other two virtual machines, pod09-node1 and pod09-node2, to join them to the Kubernetes cluster.
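
If the join command has scrolled off your screen, or if the bootstrap token has expired (kubeadm tokens are valid for 24 hours by default), you can print a fresh join command on the master. This is a standard kubeadm command, shown here only as a fallback:

kubeadm token create --print-join-command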

Step 2 - Initialize Kubernetes on pod09-node1

Copy the entire kubeadm join command (the whole line, including the --token and --discovery-token-ca-cert-hash values) and run it on pod09-node1 to initialize that VM.

Here is a sample of the command:

[root@pod09-node1 ~]# kubeadm join 10.X.X.X:6443 --token uzv78v.m2h7lfkytpvadl7l \
>     --discovery-token-ca-cert-hash sha256:4df7af45031eff56289b6867f9dd4d9d968580ca2b4450467c40065617e92c29 
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Step 3 - Initialize Kubernetes on pod09-node2

Copy the entire kubeadm join command (the whole line, including the --token and --discovery-token-ca-cert-hash values) and run it on pod09-node2 to initialize that VM.

[root@pod09-node2 ~]# kubeadm join 10.X.X.X:6443 --token uzv78v.m2h7lfkytpvadl7l \
>     --discovery-token-ca-cert-hash sha256:4df7af45031eff56289b6867f9dd4d9d968580ca2b4450467c40065617e92c29 
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.4. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Step 4 - Set up the kubeadm admin configuration file to run kubectl


cd ~
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
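
These commands copy the admin kubeconfig generated by kubeadm (/etc/kubernetes/admin.conf) into your home directory so that kubectl can authenticate to the cluster, exactly as suggested in the kubeadm init output above. As an optional sanity check, you can confirm that kubectl can reach the API server:

kubectl cluster-info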

Step 5 - Apply the Kubernetes CNI configuration to start the ACI container integration

With this command you apply the definition file for the Kubernetes integration with ACI. It deploys all the ACI containers and starts the integration components. The file was generated earlier by the acc-provision script and is an extensive YAML file containing the configuration for the integration. It follows the same declarative model we will use to deploy everything else into the Kubernetes fabric.
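
Before applying it, you can optionally get a feel for what the file contains. One simple way (assuming the aci-k8s-config.yaml file generated for your pod) is to list the resource kinds it declares:

grep '^kind:' aci-k8s-config.yaml | sort | uniq -c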


cd ~
kubectl apply -f aci-k8s-config.yaml

[root@pod09-master ~]# kubectl apply -f aci-k8s-config.yaml
namespace/aci-containers-system created
customresourcedefinition.apiextensions.k8s.io/snatglobalinfos.aci.snat created
customresourcedefinition.apiextensions.k8s.io/snatlocalinfos.aci.snat created
customresourcedefinition.apiextensions.k8s.io/snatpolicies.aci.snat created
customresourcedefinition.apiextensions.k8s.io/nodeinfos.aci.snat created
configmap/aci-containers-config created
configmap/snat-operator-config created
secret/aci-user-cert created
serviceaccount/aci-containers-controller created
serviceaccount/aci-containers-host-agent created
clusterrole.rbac.authorization.k8s.io/aci-containers:controller created
clusterrole.rbac.authorization.k8s.io/aci-containers:host-agent created
clusterrolebinding.rbac.authorization.k8s.io/aci-containers:controller created
clusterrolebinding.rbac.authorization.k8s.io/aci-containers:host-agent created
daemonset.apps/aci-containers-host created
daemonset.apps/aci-containers-openvswitch created
deployment.apps/aci-containers-controller created
[root@pod09-master ~]#
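
To double-check that the SNAT custom resource definitions listed in the output above were registered, you can optionally query them directly:

kubectl get crd | grep aci.snat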

Step 6 - Copy admin files to pod09-node1 and pod09-node2


cd ~
scp -r .kube/ 10.0.222.35:


cd ~
scp -r .kube/ 10.0.222.36:
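
The two IP addresses above belong to pod09-node1 and pod09-node2; copying the .kube directory lets you run kubectl from the worker nodes as well. As an optional check (assuming kubectl is also installed on the worker nodes), you can verify the copy over SSH:

ssh 10.0.222.35 kubectl get nodes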

Step 7 - Verify that all integration pods are operational

Using kubectl, you can list all the nodes in the cluster and the pods in a particular namespace.

kubectl get nodes

NAME                                 STATUS   ROLES    AGE     VERSION
pod09-master.ecatsrtpdmz.cisco.com   Ready    master   21m     v1.15.7
pod09-node1.ecatsrtpdmz.cisco.com    Ready    <none>   8m55s   v1.15.7
pod09-node2.ecatsrtpdmz.cisco.com    Ready    <none>   6m3s    v1.15.7

kubectl get pods --namespace kube-system

The output should look like the following. It is important to make sure that all the pods show a STATUS of Running.

[root@pod09-master ~]# kubectl get pods --namespace kube-system
NAME                                                         READY   STATUS    RESTARTS   AGE
aci-containers-controller-684d45448b-hlwcv                   2/2     Running   0          4m15s
aci-containers-host-jrb68                                    3/3     Running   0          4m15s
aci-containers-host-k69j4                                    3/3     Running   0          4m15s
aci-containers-host-zmzg6                                    3/3     Running   0          4m15s
aci-containers-openvswitch-dkds8                             1/1     Running   1          4m15s
aci-containers-openvswitch-g2lnn                             1/1     Running   1          4m15s
aci-containers-openvswitch-gbj9r                             1/1     Running   1          4m15s
coredns-5d4dd4b4db-8f859                                     1/1     Running   0          23m
coredns-5d4dd4b4db-prmcs                                     1/1     Running   0          23m
etcd-pod09-master.ecatsrtpdmz.cisco.com                      1/1     Running   0          22m
kube-apiserver-pod09-master.ecatsrtpdmz.cisco.com            1/1     Running   0          22m
kube-controller-manager-pod09-master.ecatsrtpdmz.cisco.com   1/1     Running   0          22m
kube-proxy-4qrx8                                             1/1     Running   0          11m
kube-proxy-h8ggc                                             1/1     Running   0          23m
kube-proxy-x5tqg                                             1/1     Running   0          8m22s
kube-scheduler-pod09-master.ecatsrtpdmz.cisco.com            1/1     Running   0          22m
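
If some of the ACI pods are still in ContainerCreating, give them a minute; they need to pull their images first. Two optional ways to keep an eye on them are to show which node each pod is running on, or to watch the list until everything reaches Running:

kubectl get pods --namespace kube-system -o wide
kubectl get pods --namespace kube-system -w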