Integration Config

Template Configuration

Rather than relying on a separately published template, the integration generates a template that matches the version of the integration being installed.

Step 1 - Create the provisioning template

Run the acc-provision tool with the --sample flag and redirect its output to a file for editing:


cd ~
acc-provision -f kubernetes-1.15 --sample > aci-fabric-config.yaml 
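
If you are unsure which flavors your acc-provision build supports, you can list them first and then review the generated sample before editing. The --list-flavors flag exists in recent acc-provision releases; confirm with --help on your version.

acc-provision --list-flavors        # show supported flavors (e.g. kubernetes-1.15)
less ~/aci-fabric-config.yaml       # review the generated sample before editing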

Understanding the ACI Kubernetes template

To assist with creating this template file, we will break down its components and explain them. After that explanation, we provide a single command that creates the file for you with the data specific to your lab POD.

Fabric policy definitions

One of the most important steps in getting the integration working is properly configuring the acc-provision YAML file, which is divided into several sections.

The first section provides the system_id, which must be unique per Kubernetes cluster, along with the APIC hosts to connect to:


#
# Configuration for ACI Fabric
#
aci_config:
  system_id: [value]                        
  apic_hosts:                    
  - [IP ADDRESS OF APIC]

Field Description
system_id The system_id must be unique for each Kubernetes cluster interacting with an ACI fabric. It is used to name many of the policy configurations in the ACI fabric.
apic_hosts List of IP addresses of the APIC controllers.
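
Before going further, it is worth confirming that the APIC is reachable and the credentials work. A minimal sketch against the standard APIC REST login endpoint, using this lab's example IP and credentials (substitute your POD's values):

# A successful login returns a JSON body containing a token.
curl -sk https://10.0.226.41/api/aaaLogin.json \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"cisco.123"}}}'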

  vmm_domain:                    
    encap_type: vxlan/vlan 
    mcast_range: 
      start: X.X.X.X
      end: X.X.X.X
    nested_inside: 
      type: vmware  
      name: [value]

Field Description
vmm_domain Configuration parameters for the Kubernetes VMM domain.
encap_type The encapsulation used by OpFlex between the fabric leaf and the Open vSwitch container running on the host; either vxlan or vlan.
mcast_range Every VMM domain in an ACI fabric must be assigned its own multicast range. This contains potential BUM (broadcast, unknown unicast, multicast) flood traffic within specific multicast topology trees inside the ACI fabric.
nested_inside If you run Kubernetes inside a virtualized platform such as VMware or OpenStack, define the type and the name of the VMM domain you will use.
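
To confirm which VMM domains already exist before choosing a nested_inside name, you can query the APIC object store from the APIC CLI. moquery and the vmmDomP class are standard, though output formatting varies by release:

# On the APIC CLI: list existing VMM domains (VMware, Kubernetes, etc.)
moquery -c vmmDomP | egrep 'name|dn'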
  
  aep:  [value]
  vrf:         
    name: [value]
    tenant: [value] 
  l3out:
    name: [value]   
    external_networks:
    -  [value]   
  
  
Field Description
aep The name of the Attachable Entity Profile created in the ACI fabric for the domain (either physical or virtual). The AEP connects the access policies to the application profile policies in the ACI fabric.
vrf The VRF must already exist in the ACI fabric; provide its name here. In this lab it is defined in the common tenant.
tenant The tenant in which the VRF is defined; this can be the system_id tenant or common.
l3out The name of the L3Out that the integration will use for external services. The name must match exactly; in this lab it is defined in the common tenant.
external_networks The external network EPG(s) defined under the provided L3Out, used for external contracts.
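
These objects must exist before you run the provisioning tool; they are used, not created, by it. A quick pre-flight check from the APIC CLI, using standard object classes (adjust the grep patterns to your names):

# Verify the pre-existing AEP, VRF, and L3Out referenced by the config.
moquery -c infraAttEntityP | grep name     # AEPs
moquery -c fvCtx | grep name               # VRFs
moquery -c l3extOut | grep name            # L3Outs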
    
net_config:
  #Internal Subnets
  pod_subnet: [value]
  node_svc_subnet: [value]
  #External Visible Subnets
  node_subnet: [value]
  extern_dynamic: [value]
  extern_static: [value]
  # VLAN Identifiers
  kubeapi_vlan: [value]
  service_vlan: [value]
  infra_vlan: [value]
    
    
Field Description
pod_subnet The pod subnet is typically the largest address space required by Kubernetes. Each pod is assigned an IP address from this space, and these addresses are internal to the Kubernetes cluster only (no external visibility), e.g. 10.1.0.1/16.
node_svc_subnet The node service subnet is used by the ACI fabric for service graphs. The size of this subnet is directly related to the number of nodes in the cluster (each node is assigned a specific IP address), e.g. 10.2.0.1/16.
node_subnet The externally visible subnet used by the Kubernetes nodes themselves; each node's address on the kubeapi VLAN is taken from this subnet.
extern_dynamic The extern_dynamic definition tells the ACI/Kubernetes integration which subnet to use when dynamically allocating external IP addresses. This is typically a larger subnet that DevOps engineers consume ad hoc for external service definitions.
extern_static The extern_static subnet gives the ACI/Kubernetes integration the subnet to use for statically defined external IP addresses. These are usually tied to specific DNS names and can be used by ingress controllers for wildcard DNS services.
kubeapi_vlan The VLAN encapsulation used by the Kubernetes nodes to talk to each other for Kubernetes API calls. This VLAN and subnet are carried over the ACI fabric, allowing internal Kubernetes API communication.
service_vlan The service VLAN (not to be confused with the Kubernetes service CIDR) is used by the ACI fabric for the service graph PBR (policy-based redirect) functionality.
infra_vlan The ACI fabric infrastructure VLAN. This is configured during fabric initialization and can be found in your fabric under the infra tenant.
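
Note that acc-provision expects each subnet to be written as the gateway address plus prefix length; for example, 10.209.0.1/16 means the gateway is 10.209.0.1 on the 10.209.0.0/16 network. Once the file is filled in, a quick way to review exactly what you entered:

grep -A 10 '^net_config:' ~/aci-fabric-config.yaml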
    
node_config:
 uplink_iface: [value]
 vxlan_uplink_iface: [value]
    
    
Field Description
uplink_iface The primary uplink interface connecting the node to the ACI fabric.
vxlan_uplink_iface The VLAN subinterface, tagged with the infra VLAN, used for VXLAN traffic toward the ACI fabric.
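
On the node itself you can check that the uplink and its infra-VLAN subinterface are present. The interface names below are this lab's examples, and the subinterface is normally created as part of the integration's host setup, so treat this purely as a verification sketch:

# On a Kubernetes node:
ip -d link show ens224            # physical uplink toward the ACI leaf
ip -d link show ens224.3967       # VLAN subinterface on the infra VLAN (3967)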
    
registry:
  image_prefix: noiro 
  # image_pull_secret: secret_name 
    
    
Field Description
image_prefix If your Kubernetes cluster cannot reach external networks and you therefore need to run your own private container registry, you can download the container images from Cisco.com and host them privately. This field provides the registry prefix for those images.
image_pull_secret The name of the Kubernetes secret holding the credentials used to pull images from a private registry.
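
If you do use an authenticated private registry, the secret referenced by image_pull_secret is an ordinary Kubernetes docker-registry secret. A sketch with placeholder values (the server, user, and password below are illustrative, matching the registry.example.com and secret_name placeholders used later in this template):

kubectl -n kube-system create secret docker-registry secret_name \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password>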

Step 2 - Configure the POD template file

To simplify the lab, we have taken the correct parameters for your specific lab POD and configured the deployment file for you. Run the following command to create it:


cat << EOF > ~/aci-fabric-config.yaml
#
# Configuration for ACI Fabric
#
aci_config:
  system_id: k8s_pod09          # Every opflex cluster must have a distinct ID
  apic_hosts:                    # List of APIC hosts to connect for APIC API
  - 10.0.226.41
  vmm_domain:                    # Kubernetes container domain configuration
    encap_type: vxlan            # Encap mode: vxlan or vlan
    mcast_range:                 # Every opflex VMM must use a distinct range
      start: 239.9.1.1
      end: 239.9.255.255
    nested_inside:               # Include if nested inside a VMM
      type: vmware               # Specify the VMM vendor (supported: vmware)
      name: k8s_pod09_dvs        # Specify the name of the VMM domain

  # The following resources must already exist on the APIC,
  # they are used, but not created by the provisioning tool.
  aep: k8s_pod09_aep             # The AEP for ports/VPCs used by this cluster
  vrf:                           # This VRF used to create all kubernetes EPs
    name: k8s_vrf
    tenant: common               # This can be system-id or common
  l3out:
    name: k8s                    # Used to provision external IPs
    external_networks:
    - k8s-epg                    # Used for external contracts

#
# Networks used by Kubernetes
#
net_config:
  #Internal Subnets
  pod_subnet: 10.209.0.1/16       # Subnet to use for Kubernetes Pods
  node_svc_subnet: 10.109.2.1/24  # Subnet to use for service graph
  #External Visible Subnets
  node_subnet: 10.0.144.65/29      # Subnet to use for nodes
  extern_dynamic: 10.0.145.65/29   # Subnet to use for dynamic external IPs
  extern_static: 10.0.146.65/29    # Subnet to use for static external IPs
  # VLAN Identifiers
  kubeapi_vlan: 3109             # The VLAN used by the physdom for nodes
  service_vlan: 3209             # The VLAN used by LoadBalancer services
  infra_vlan: 3967               # The VLAN used by ACI infra

#node_config:
# uplink_iface: ens224 
# vxlan_uplink_iface: ens224.3967
 
#
# Configuration for container registry
# Update if a custom container registry has been setup
#
registry:
  image_prefix: noiro                   # e.g: registry.example.com/noiro
  # image_pull_secret: secret_name      # (if needed)
EOF
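
If you want to preview what acc-provision will generate without touching the APIC, you can run it without the -a flag; in that mode it only writes the deployment YAML. This matches acc-provision's documented behavior, but confirm with --help on your version.

# Optional dry run: no configuration is pushed to the APIC.
acc-provision -c aci-fabric-config.yaml -o /tmp/aci-k8s-preview.yaml -f kubernetes-1.15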

Step 3 - Initialize the ACI fabric Kubernetes configuration


cd ~
acc-provision -c aci-fabric-config.yaml -o aci-k8s-config.yaml -f kubernetes-1.15 -a -u admin -p cisco.123
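
Here -c is the input fabric configuration, -o is the generated Kubernetes deployment file, -f is the flavor, -a tells the tool to apply (provision) the configuration on the APIC, and -u/-p are the APIC credentials. Should you later need to remove the configuration from the fabric, acc-provision also supports a delete mode; a hedged example (check --help on your version):

# Unprovision: removes the objects this tool created on the APIC.
acc-provision -c aci-fabric-config.yaml -f kubernetes-1.15 -d -u admin -p cisco.123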

The output of the provisioning command will look similar to the following:

INFO: Loading configuration from "aci-fabric-config.yaml"
INFO: Using configuration flavor kubernetes-1.15
INFO: Generating certs for kubernetes controller
INFO:   Private key file: "user-k8s_pod09.key"
INFO:   Certificate file: "user-k8s_pod09.crt"
INFO: Provisioning configuration in APIC
WARN: User already exists (k8s_pod09), recreating user
INFO: Using configuration label aci-containers-config-version=d7c16dce-f605-4cb9-b18a-46d35a8733b2
INFO: Writing kubernetes infrastructure YAML to aci-k8s-config.yaml
INFO: Apply infrastructure YAML using:
INFO:   kubectl apply -f aci-k8s-config.yaml
INFO: Delete stale objects from older deployments using:
INFO:   kubectl -n kube-system delete configmap,secret,serviceaccount,daemonset,deployment,clusterrolebinding,clusterrole -l  'aci-containers-config-version,aci-containers-config-version notin (d7c16dce-f605-4cb9-b18a-46d35a8733b2)'
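
As the output itself indicates, the next step is to apply the generated infrastructure YAML to the cluster and then watch the ACI container components come up:

kubectl apply -f aci-k8s-config.yaml
kubectl -n kube-system get pods | grep aci     # aci-containers-* pods should appear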