Deploy Lab MyAPP

Applications Using Compose

Now that we understand a little more about container networks, let's deploy an application made of two separate containers that need to talk to each other to provide a service. This application will need a network on the host to establish that connection.

While we could create all the components with the docker CLI, we are going to use a powerful tool called docker compose. Compose provides a mechanism to define the structure of a service and all the parts required to make it work. A typical compose file contains information on what ports to expose, which volumes to map into the container, and which networks the application needs. A sketch of this structure appears below.
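To give you a feel for the format before we build the real file in Step 4, here is a minimal sketch of a compose file. The service name, image, paths, and ports below are placeholders for illustration only, not part of this lab:

version: '3.5'
services:
  web:                                 # placeholder service name
    image: nginx:alpine                # placeholder image
    ports:
      - 8080:80                        # host port : container port
    volumes:
      - ./html:/usr/share/nginx/html   # host path mapped into the container
    networks:
      - frontend
networks:
  frontend:
    driver: bridge                     # local bridge network for the service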

Step 1 - Install Docker Compose

Since we are using the same node both for the ACI/Kubernetes integration and for showing you some fundamentals of docker, we have to install compose manually. This avoids a conflict that would otherwise arise between the docker-compose that the RPM package for the Cisco ACI integration scripts wants to install and the latest version we could install with yum. In an environment where you are only installing Kubernetes, you would not need to install docker-compose at all. The version available in yum is also quite old, and we can't use some of the parameters available in recent versions.
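For reference, outside this lab the usual manual install pulls the binary straight from the docker/compose GitHub releases page (shown here for the same 1.25.0 version; treat this as a sketch, not a lab step):

curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

In this lab we instead pull the binary from the local lab file server: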


cd ~
wget https://svs-rtp-dmz-files.ciscolive.com/ltraci-2967/docker-compose-Linux-x86_64_v1.25

[root@pod09-master ~]# wget https://svs-rtp-dmz-files.ciscolive.com/ltraci-2967/docker-compose-Linux-x86_64_v1.25
--2020-01-26 06:26:31--  https://svs-rtp-dmz-files.ciscolive.com/ltraci-2967/docker-compose-Linux-x86_64_v1.25
Resolving svs-rtp-dmz-files.ciscolive.com (svs-rtp-dmz-files.ciscolive.com)... 64.102.250.151
Connecting to svs-rtp-dmz-files.ciscolive.com (svs-rtp-dmz-files.ciscolive.com)|64.102.250.151|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17031320 (16M) [application/octet-stream]
Saving to: ‘docker-compose-Linux-x86_64_v1.25’

100%[========================================================================================================================================>] 17,031,320  97.6MB/s   in 0.2s   

2020-01-26 06:26:31 (97.6 MB/s) - ‘docker-compose-Linux-x86_64_v1.25’ saved [17031320/17031320]

mv docker-compose-Linux-x86_64_v1.25 /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Let's verify that the correct version of docker compose has been installed.


docker-compose --version

[root@pod09-master ~]# docker-compose --version
docker-compose version 1.25.0, build 0a186604

Step 2 - Create directory for our compose file


mkdir ~/my_app
cd ~/my_app

Step 3 - Pull myplayapp from registry

We have built a private docker registry that hosts some prebuilt images for you to pull. The myplayapp image is a small application that displays information about its container in a web page.


docker pull svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1

Once the pull completes, the image is available locally and ready to run.


docker images

[root@pod09-master my_app]# docker images
REPOSITORY                                     TAG                 IMAGE ID            CREATED             SIZE
ciscolive/dockerfile-example                   latest              1b1cb09d8a18        37 minutes ago      68.3MB
ciscolive/alpine-python                        latest              faf48dff46c6        38 minutes ago      60.7MB
alpine                                         3.10                af341ccd2df8        2 days ago          5.56MB
ubuntu                                         16.04               96da9143fb18        10 days ago         124MB
svs-rtp-dmz-registry.ciscolive.com/myplayapp   v1                  0e877ddd34b7        20 months ago       277MB

Step 4 - Docker compose file for myplayapp

The following compose file defines the operation of the container you will run. With docker compose you can define one or more containers in a single compose file and specify how these containers can talk to each other over the same network. You can also see that we have mapped an external host port (8000) to the internal container port (8090), which the application listens on while it is operational.

We have also defined a specific network for the container to use. We will examine the container in a little more depth from a network perspective.


cd ~/my_app
cat << EOF > docker-compose.yml
version: '3.5'

services:
  my_app:
    container_name: myplayapp
    image: svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1
    networks:
      - internal
    ports:
      - 8000:8090


networks:
  internal:
    driver: bridge
    name: myappnet
EOF
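Before starting anything, you can optionally ask compose to validate the file and print the parsed result, which makes for a quick syntax check:

docker-compose config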

You may have noticed the version definition at the top of the docker-compose file. This is important because the version dictates which compose file format is used to interpret the file; for example, the name: field we set under networks requires file format 3.5 or later. There is an extensive reference document that covers the different versions and what is supported in each. The advantage of this versioning is that your definition will always be interpreted against the right format and keep working for as long as that version remains supported by compose.

Step 5 - Run Docker Compose

Now that we have created docker-compose.yml, we need to run the docker-compose command. This gives you the ability to start and stop the service we have defined.


docker-compose up -d

[root@pod09-master my_app]# docker-compose up -d
Creating network "myappnet" with driver "bridge"
Creating myplayapp ... done
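For later reference, the same tool can show the service logs and tear everything down again, including the myappnet network it created. Don't run down yet, though; we still need the container for the next steps.

docker-compose logs my_app
docker-compose down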

We can see that the docker container is operational with:


docker ps

[root@pod09-master my_app]# docker ps
CONTAINER ID    IMAGE                                             COMMAND                  CREATED             STATUS              PORTS                              NAMES
0ef302d861eb    svs-rtp-dmz-registry.ciscolive.com/myplayapp:v1   "/bin/sh -c '/usr/bi…"   5 minutes ago       Up 5 minutes        8080/tcp, 0.0.0.0:8000->8090/tcp   myplayapp
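You can also print just the port mappings for a single container with docker port:

docker port myplayapp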

Step 6 - Check the Application

The application should be viewable in your browser via the POD09 My Play App link.
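If you prefer the command line, you can also check the application from the node itself using the host port (8000) we mapped in the compose file; the exact response body depends on the app:

curl http://localhost:8000/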

Step 7 - Understanding the host network a little better

You can list the configured networks on your docker host with docker network ls.


docker network ls

[root@pod09-master my_app]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
[cut]
da8dc16ed064        myappnet            bridge              local

You can inspect the defined network further with the docker network inspect command.


docker network inspect myappnet


[
  {
      "Name": "myappnet",
      "Id": "31a25defe6ea2d8b39e1e2b135ccb9c3064b06da889e462733f76d2ad335754a",
      "Created": "2020-01-26T07:03:01.721656117-05:00",
      "Scope": "local",
      "Driver": "bridge",
      "EnableIPv6": false,
      "IPAM": {
          "Driver": "default",
          "Options": null,
          "Config": [
              {
                  "Subnet": "172.18.0.0/16",
                  "Gateway": "172.18.0.1"
              }
          ]
      },
      "Internal": false,
      "Attachable": true,
      "Ingress": false,
      "ConfigFrom": {
          "Network": ""
      },
      "ConfigOnly": false,
      "Containers": {
          "0ef302d861eb4d17a1ee6e848ffcf8961bc01a97353e8171e7cc978bf57636e0": {
              "Name": "myplayapp",
              "EndpointID": "1845b5edfad4f54548a92eff02ac60702034607ce308ecaf6ba88740a06016c4",
              "MacAddress": "02:42:ac:12:00:02",
              "IPv4Address": "172.18.0.2/16",
              "IPv6Address": ""
          }
      },
      "Options": {},
      "Labels": {
          "com.docker.compose.network": "myappnet",
          "com.docker.compose.project": "my_app",
          "com.docker.compose.version": "1.25.0"
      }
  }
]
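If you only need a specific field out of this JSON, docker network inspect also accepts a Go template via --format; for example, to print just the attached containers and their addresses:

docker network inspect --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{end}}' myappnet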

We can quickly look inside the container and see its IP address (we have also shown it on the web app page).


docker exec -it myplayapp ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
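On the host side, a user-defined bridge network appears as a Linux bridge named br- followed by the short network ID. Using the ID we saw in docker network ls (da8dc16ed064), you can look at the bridge that carries this container's traffic:

ip link show br-da8dc16ed064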

This concludes the Understanding Containers section. We are now ready to work on the ACI integration with Kubernetes.