Use Ansible to Set Up Compute Nodes


The setup of the compute nodes is best automated in some form. There are many tools available to assist you in the installation of Kubernetes. One of them is Kubespray. This tool is shown by the Cisco Technical Marketing team to customers around the world in conjunction with the ACI integration. A version that is specific to ACI is available as a fork maintained by Camillo Rossi, a Cisco TME focused on Kubernetes. You can view this version here.

For this lab we are going to use a few Ansible playbooks to configure the compute hosts. This re-invents the wheel a little, but it demonstrates the potential of these tools if you would like to tailor things specifically for your production environment. Ansible provides many modules to assist you in the setup of the Linux compute hosts.

For this lab you will also configure two separate nodes besides the Kubernetes master controller. Since the steps are the same on both nodes, explanations are shown for the master node, or where there are differences.

We will be doing this from the master node because we are constrained by the capacity of the lab. In production networks there is usually a dedicated compute host for automation, and with products like Ansible Tower available to corporate network administrators, how these playbooks are invoked can vary.

Step 1 - Upgrade PIP


cd ~
pip install --upgrade pip
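
If you want to confirm the upgrade completed, check the version pip now reports:


pip --version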

Step 2 - PIP install ansible


cd ~
pip install ansible==2.9.2
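
To verify that the pinned Ansible release was installed, check the version; it should report 2.9.2:


ansible --version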

Step 3 - Install GIT

Git is a tool for software version control; it is used to track changes in software. Many developers use Git or older tools like Subversion. We will use it to pull down the Ansible playbook structure we need to automate the install for Node1 and Node2.


yum -y install git
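
You can confirm that Git is installed and on the path before moving on:


git --version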

Step 4 - Clone the Ansible playbook structure

To simplify this for you, we provide a base structure in a public GIT repository, which we will clone with the command below. Don't worry if you don't understand what Git is; it is a tool mostly used by developers for version control of code, and in network engineering circles it can also be used to keep revision control of device configuration files. If you have any questions on Git, please reach out to one of the lab proctors.


cd ~
git clone https://github.com/rafmuller/ansible-nodes-2967.git ansible
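
After the clone completes you can take a quick look at what was pulled down. The exact contents depend on the repository, but you should at least see the inventory and playbooks directories that are referenced in the next steps:


ls ~/ansible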

Step 5 - Setup SSH for Ansible

To allow Ansible to connect from the master to the nodes, we have to set up SSH keys so that we can SSH to the nodes without being prompted for a password.


cd ~
ssh-keygen -f /root/.ssh/id_rsa -t rsa -N "" 

The output would look something like this:

Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:JxhzRnPsh43MkCfIdLC32eae8LZ9mlw6sgwWyRGfEho root@pod9-master.ecatsrtpdmz.cisco.com
The key's randomart image is:
+---[RSA 2048]----+
|     oEo*o.      |
|      o*+*o.     |
|      = *Bo+     |
|       O B* o    |
|      . S +.     |
|         *       |
|        + .   .  |
|       . *o+.+.  |
|         .B+*+   |
+----[SHA256]-----+

Once the SSH key is generated on the master node, we have to copy the public key to the nodes. SSH provides a facility to make this easier.


ssh-copy-id 10.0.222.35

You will be prompted with a yes/no question to confirm the connection. Please type yes. Then enter the password: cisco.123

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '10.0.222.35 (10.0.222.35)' can't be established.
ECDSA key fingerprint is SHA256:qPvdUCCxKFWp1ENrBghXcPr9mqaYpYFZQ9zSywkcwJ8.
ECDSA key fingerprint is MD5:c5:3d:2d:ac:34:5c:3e:54:f1:a3:a1:57:27:be:40:c2.
Are you sure you want to continue connecting (yes/no)?

Now you have to repeat the ssh-copy-id command for node2.


ssh-copy-id 10.0.222.36

Please type yes when prompted and enter the password cisco.123. With this completed, you can now SSH into the hosts without entering a password, which makes it possible for Ansible to automate these tasks without asking for the password hundreds of times.
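
Before handing things over to Ansible, you can confirm that key-based login works by running a command over SSH against each node; it should print the remote hostname without prompting for a password:


ssh 10.0.222.35 hostname
ssh 10.0.222.36 hostname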

Step 6 - Create the Ansible inventory file

Ansible uses the inventory file to identify which devices we are going to execute the playbook against. In this case we will create a single inventory file containing entries for the two separate nodes.


cat << EOF > /root/ansible/inventory/inventory.yaml
---
all:
  vars:
    ansible_python_interpreter: "/usr/bin/python"
    ansible_ssh_private_key_file: "~/.ssh/id_rsa"
    docker_ce_version: "docker-ce-3:19.03.4-3.el7.x86_64"
    docker_ce_cli_version: "docker-ce-cli-1:19.03.4-3.el7.x86_64"
    containerd_io_version: "containerd.io-0:1.2.10-3.2.el7.x86_64"
    k8s_cni_version: "kubernetes-cni-0:0.7.5-0.x86_64"
    k8s_ctl_version: "kubectl-0:1.15.7-0.x86_64"
    k8s_adm_version: "kubeadm-0:1.15.7-0.x86_64"
    k8s_let_version: "kubelet-0:1.15.7-0.x86_64"
    etherdev: "ens224"
    aci_infra_vlan: 3967
    node_net_prefix: 29
    node_net_vlan: 3109
    mtu: 9000
  hosts:
    pod9-node1:
      ansible_host: "10.0.222.35"
      node_net_ip: "10.0.144.68"
      route_address: "10.209.0.0"
      route_netmask: "255.255.0.0"
      route_gateway: "10.0.144.65"
    pod9-node2:
      ansible_host: "10.0.222.36"
      node_net_ip: "10.0.144.69"
      route_address: "10.209.0.0"
      route_netmask: "255.255.0.0"
      route_gateway: "10.0.144.65"
EOF
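
As a quick sanity check of the inventory and the SSH keys, you can run the Ansible ping module against all hosts; it uses the interpreter and private key defined above and should return SUCCESS for pod9-node1 and pod9-node2:


cd ~/ansible
ansible all -i inventory/inventory.yaml -m ping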

Step 7 - Execute the Ansible playbook


cd ~/ansible/
ansible-playbook -i inventory/inventory.yaml playbooks/node-setup.yaml

Once the Ansible playbook finishes, your output should look similar to the play recap shown below:

PLAY RECAP ***********************************************************************************************************************************************************************
pod9-node1                : ok=30   changed=24   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
pod9-node2                : ok=30   changed=24   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
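
If a task fails on one of the nodes, one option is to re-run the playbook with more verbose output, limited to the affected host from the inventory (for example pod9-node1):


cd ~/ansible
ansible-playbook -i inventory/inventory.yaml playbooks/node-setup.yaml --limit pod9-node1 -v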