Post Docker container statistics to Slack

In this post we are going to see how to monitor the resource usage statistics of Docker containers and send alarm notifications to a Slack channel.
The Docker Engine lets you see these statistics by running the docker stats command, which returns a live data stream for the running containers.

Here you can find the official Docker documentation of the command: Docker stats
If you are wondering what Slack is, let me just say that it is an instant messaging and collaboration system based on channels.
You can read more here: Slack

We are going to monitor the containers' resources using a Python script. There are a lot of container management systems, but I found some of them too complicated or not very useful. If you want something light, easy and open-source, I suggest Portainer.io (I am going to write a post about it).

We are going to use the docker-py Python client library to connect to the Docker Remote API.
Here you can find the library's GitHub repository: docker-py.
If you do not want to use Python, here you can find a list of client libraries for other programming languages: Docker Remote API client libraries

Install the library with pip:
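The library's GitHub repository is named docker-py, but recent releases are published on PyPI simply as docker (the 2.x API used in the sketches below):

```
pip install docker
```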

Connect to the Docker daemon and to the Docker Remote API (specify the Docker server address):
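A minimal sketch; the base_url below assumes the daemon listens on the local Unix socket, so replace it with your own server address (e.g. tcp://127.0.0.1:2375 for a remote daemon):

```python
import docker

# Connect to the Docker daemon (assumption: local Unix socket)
client = docker.DockerClient(base_url='unix://var/run/docker.sock')
```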

For each running container we can now stream the resource statistics.
To list all the running containers use the .containers.list() method.
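For example, to print the name of each running container:

```python
# List all running containers and print their names
for container in client.containers.list():
    print(container.name)
```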

To stream the statistics for a given container use the client.stats method. It takes the container name as an argument and returns a generator (Wiki Python Generator).
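A sketch; the container name my-container is a placeholder, and decode=True makes the generator yield parsed dictionaries instead of raw JSON strings:

```python
# The stats method lives on the low-level API client
# (same base_url assumption as above)
api_client = docker.APIClient(base_url='unix://var/run/docker.sock')

# stats() returns a generator yielding one statistics entry per second
for stats in api_client.stats('my-container', decode=True):
    print(stats)
```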

Here you can find the official documentation of the low-level API: docker-py low-level API.
The stats method returns a JSON object with the following format (note the CPU and memory usage information):
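An abbreviated sketch (the real object contains many more fields; the values here are placeholders):

```json
{
  "read": "2016-11-20T10:00:00.000000000Z",
  "precpu_stats": {
    "cpu_usage": { "total_usage": 1234000000 },
    "system_cpu_usage": 9875000000
  },
  "cpu_stats": {
    "cpu_usage": { "total_usage": 1234567890 },
    "system_cpu_usage": 9876543210
  },
  "memory_stats": {
    "usage": 52428800,
    "limit": 2147483648
  }
}
```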

We are now going to analyze the resource usage statistics and, if the usage of some resource exceeds our thresholds, post an alarm notification to Slack.
I am not going to post here how to extract the resource usage from the JSON object, but you can find the full code in this GitHub repository: mz1991/docker-stats-slack.

To send the notification to Slack we will use the Slack Webhook integration. Webhooks are a simple way to post messages from external sources into Slack. They make use of normal HTTP requests with a JSON payload that includes the message text and some options.
You can read more here: Slack Incoming Webhooks.
I assume you configured the Slack Incoming Webhook integration for your Slack team and you have the Webhook URL.
To configure the Incoming Webhook integration for your Slack team, you can use the following URL: https://[your_slack_team].slack.com/apps/A0F7XDUAZ-incoming-webhooks

With the Webhook integration we do not need any Slack library: to post a message to the Slack channel we just need to send an HTTP POST request with the message to the Webhook URL endpoint.

The format of the JSON we are going to post is the following:
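For example (all the values are placeholders to adapt to your team):

```json
{
  "channel": "#my-channel",
  "username": "docker-stats-bot",
  "icon_emoji": ":whale:",
  "text": "Alarm: container CPU usage exceeded the threshold"
}
```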

You need to specify the channel where you want to post the message, the display name and emoji of the posting user, and the text of the message.

You can find the Webhook documentation here: https://api.slack.com/incoming-webhooks
and the list of available emoji here: Slack Emoji Cheat Sheet

To post the message we are going to use the Request class from Python's built-in urllib library (urllib2 in Python 2, urllib.request in Python 3).
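A minimal sketch (the webhook URL and the payload values are placeholders):

```python
import json
from urllib.request import Request, urlopen  # urllib2 on Python 2

# Placeholder: replace with the URL of your Incoming Webhook integration
WEBHOOK_URL = 'https://hooks.slack.com/services/YOUR/WEBHOOK/URL'

payload = {
    'channel': '#my-channel',
    'username': 'docker-stats-bot',
    'icon_emoji': ':whale:',
    'text': 'Alarm: container CPU usage exceeded the threshold',
}

# POST the JSON payload to the webhook endpoint
request = Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)
urlopen(request)
```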

The posted message will look like this:

[Image: the alarm message posted to the Slack channel]
We saw how to stream the container statistics and how to post an alarm message to a Slack channel.
I built a Python script that uses a set of environment variables for the Slack channel configuration and the resource usage thresholds.

These are the environment variables needed:

  • SLACK_WEBHOOK_URL: the webhook url for your Slack team
  • SLACK_CHANNEL: the channel id (where the message will be posted)
  • SLACK_USERNAME: the username for the incoming messages
  • SLACK_EMOJI: the emoji for the incoming messages
  • MEMORY_PERCENTAGE: maximum percentage of RAM used by each container. When the percentage of used memory exceeds this threshold, an alarm is posted to the Slack channel
  • CPU_PERCENTAGE: maximum percentage of CPU usage for each container. When the CPU usage exceeds this threshold, an alarm is posted to the Slack channel
  • SLEEP_TIME (seconds): interval between the messages posted to Slack, applied per container

To run the script use the following commands:
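A sketch of a possible run; the script name docker-stats.py and the threshold values are assumptions, so check the repository for the actual entry point:

```
export SLACK_WEBHOOK_URL=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
export SLACK_CHANNEL=#my-channel
export SLACK_USERNAME=docker-stats-bot
export SLACK_EMOJI=:whale:
export MEMORY_PERCENTAGE=80
export CPU_PERCENTAGE=80
export SLEEP_TIME=60
python docker-stats.py  # assumption: the script name may differ in the repository
```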

All the code from this post can be found in this GitHub repository: mz1991/docker-stats-slack.
Feel free to fork it and add further features.

Elasticsearch and Kibana with Docker

Last weekend, on the occasion of the Docker Global Mentor Week, I attended the Docker meetup in Milan. I improved my knowledge of the container world, so I decided to use Docker and Docker-Compose to ship Elasticsearch and Kibana. I already wrote some posts about Docker; you can find them here: Docker and Docker Compose and Docker Compose and Django.

I suppose you already have a basic knowledge of the main Docker commands (run, pull, etc.).

I have been using Docker version 1.12.3 and Docker-Compose 1.8.1 (be sure your docker-compose version supports version 2 of the compose file format).
We can directly pull the images for Elasticsearch and Kibana (I am using the latest version, 5.0.1):
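Using the official images from Docker Hub:

```
docker pull elasticsearch:5.0.1
docker pull kibana:5.0.1
```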

The Elasticsearch image is based on the openjdk:8-jre image; you can find the Dockerfile here: Elasticsearch 5.0.1 Dockerfile.
The Kibana image is based on the debian:jessie image; you can find the Dockerfile here: Kibana 5.0.1 Dockerfile.

I defined a docker-compose.yml file to ship two containers with the previously pulled images, exposing the default ports: 9200 for Elasticsearch and 5601 for Kibana. The environment variable defined within the Kibana service represents the Elasticsearch URL (within Docker you just need to specify the service name; it is automatically resolved to an IP address).
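A sketch of such a file; the service names are my choice, and ELASTICSEARCH_URL is the variable read by the official Kibana image:

```yaml
version: '2'
services:
  elasticsearch:
    image: elasticsearch:5.0.1
    ports:
      - "9200:9200"
  kibana:
    image: kibana:5.0.1
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
```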

With docker-compose file version 2 you do not have to specify links between the services: they are automatically placed within the same network (unless you specify a custom one).

The latest version of Elasticsearch is stricter about its bootstrap checks, so be sure to correctly set vm.max_map_count and the number of file descriptors (Wiki: file descriptor).
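On the Docker host, vm.max_map_count can be raised like this (262144 is the value recommended by the Elasticsearch documentation):

```
sudo sysctl -w vm.max_map_count=262144
```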

You can read more about these bootstrap checks here: Bootstrap Checks

We can now ship the two containers using the docker-compose up command:
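For example, in detached mode:

```
docker-compose up -d
```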

The two containers have been shipped and are running; we can reach Kibana at http://localhost:5601 and Elasticsearch at http://localhost:9200.

[Image: the Elasticsearch and Kibana containers running]

So with Docker and docker-compose we can easily run Elasticsearch and Kibana, focusing more on application development than on environment setup.

Docker Compose and Django

In this post we are going to see how to build a web application using Django and Docker Compose.
Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.
Docker-Compose is a tool that allows you to define and run multi-container Docker applications (see my previous post for more details and to see how to install it).
I ran this example on an Ubuntu 14.04 machine with Docker version 1.12.1 and Docker-Compose 1.8.0.

Create a directory where you will put the files needed for this example.
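For example (the directory name is arbitrary):

```
mkdir composeexample
cd composeexample
```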

Create a Dockerfile (named exactly Dockerfile, no extension needed):
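A sketch along the lines of the official Docker Compose/Django quickstart of that era:

```dockerfile
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
```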

This Dockerfile defines a basic image based on Python 2.7, creates a folder named code, adds the requirements.txt file to it and runs a pip install command.

We now have to create the requirements.txt file, with the following content:
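No versions are pinned in this sketch:

```
Django
psycopg2
```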

These requirements are needed to run the Django web framework and to connect to a PostgreSQL database.

Now create a docker-compose.yml file that will contain our service definitions: a Django web server and a PostgreSQL database.
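A sketch along the lines of the official quickstart (the db service relies on the postgres image defaults):

```yaml
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
```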

The command section defines the Django command used to run the development web server (see this link for the full documentation about manage.py).

To build the Django web service container we can use the docker-compose run command.
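For example (composeexample is the project name used in the rest of the post):

```
docker-compose run web django-admin.py startproject composeexample .
```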

After we run this command we can see that the Django project (the composeexample folder) has been created.

[Image: the generated Django project files]

The files created by django-admin are owned by the root user. Use this command to change the ownership of those files:
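```
sudo chown -R $USER:$USER .
```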

We now need to edit the Django configuration file (composeexample/settings.py) to set the database connection string. Edit the DATABASES section and add this new configuration (the connection parameters have been defined in the docker-compose.yml file):
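A sketch of the new configuration; the host db and the postgres defaults match the db service of the docker-compose.yml above:

```python
# composeexample/settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'HOST': 'db',  # the service name defined in docker-compose.yml
        'PORT': 5432,
    }
}
```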

Check this link to see the full Database Django configuration documentation.

To start the two containers we can use the docker-compose up command:
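```
docker-compose up
```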

Both services (shipped in two different containers) are now running:
[Image: docker-compose output showing both services running]

To list all the containers (with extra information like ID, status and name) you can use the docker ps command:

[Image: docker ps output]

The Django web server is now running on port 8000.

[Image: the Django welcome page on port 8000]

In case we would like to connect to the running container to perform some Django operation (like creating a super user), we can use the docker exec command:
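A sketch (<container_id> is a placeholder):

```
docker exec -it <container_id> bash
```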

The id of the container is the one shown by the ps command.

We can now create a super user to log in to our Django web application (http://ip_address:8000/admin).
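From the shell opened with docker exec, inside the /code folder:

```
python manage.py createsuperuser
```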

[Image: creating the Django super user]

[Image: the Django admin panel]

We combined Docker and Docker-Compose to create two containers for our Django web application: the first container runs the Django web server and the second one runs the PostgreSQL database. These containers can be hosted on the same machine in a development environment, but can be split up when the application is deployed to a production environment.

Docker and Docker Compose

What is Docker?

Recently I had the opportunity to use Docker for a small project and I realized how cool it is!

But what is Docker? “Docker is the world’s leading software containerization platform” (Docker official site).

It allows you to pack your application into a standardized unit for software development. Docker defines itself as:

  • Lightweight: Containers running on a single machine share the same operating system kernel; they start instantly and use less RAM
  • Open: Docker containers are based on open standards, enabling containers to run on all major Linux distributions and on Microsoft Windows
  • Secure by default: Containers isolate applications from one another and the underlying infrastructure, while providing an added layer of protection for the application

Is the Docker approach similar to the Virtual Machine approach? Containers and virtual machines have similar resource isolation and allocation benefits but a different architectural approach allows containers to be more portable and efficient.

Virtual machine architecture (note that an entire guest operating system is necessary):

[Image: virtual machine architecture]

Docker container architecture (the kernel is shared between the containers):

[Image: Docker container architecture]

So Docker allows you to host different applications (shipped in containers) that share the same operating system kernel while keeping the applications isolated.
Docker comes with a lot of tools, like Docker Engine, Machine, Kitematic and Docker Compose.

Compose is a tool for defining and running multi-container Docker applications. You can create a file (called docker-compose.yml) where you define the services that compose your application.

Using this approach you can build an application (composed of different services) and keep the services in separate containers (which can be deployed anywhere: on the same host or on different hosts).

Install Docker

Now we are going to see how to install Docker. I am using Ubuntu 14.04.4 LTS (codename: trusty). If you want to use a different version of Ubuntu, your kernel must be 3.10 at minimum.

Install the prerequisites:
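A sketch based on the Docker installation documentation of that era:

```
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates
```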

Now add the Docker repository. Edit the file /etc/apt/sources.list.d/docker.list (create it if it does not exist) and add the following line:
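This is the legacy apt.dockerproject.org repository used at the time; the suite name matches Ubuntu 14.04 trusty:

```
deb https://apt.dockerproject.org/repo ubuntu-trusty main
```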

Now update the package index for the added repository and purge any old existing package:
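```
sudo apt-get update
sudo apt-get purge lxc-docker  # remove the old lxc-docker package, if present
```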

And finally install the docker-engine:
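```
sudo apt-get install docker-engine
```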

To check whether Docker is correctly installed, you can check your Docker version with:
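```
docker version
```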

You should see the version of the Docker client and server.

[Image: docker version output]

If you see the error “Docker command can’t connect to docker daemon” while connecting to the Docker server, you need to add your current user to the docker group as follows (suppose you are using an account called ubuntu):
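```
sudo usermod -aG docker ubuntu
```

Then log out and log back in for the group change to take effect.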

To run a test image in a container you can use:
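```
docker run hello-world
```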

Now you can download, create and run your own Docker images (for the full list of Docker commands and images, I suggest you look at the official Docker documentation).

Install Docker-Compose

Now that we have installed Docker, we can install Docker-Compose.
On the official site you can find all the different installation modes, but I suggest you use pip (the Python package installer) to install it.
If you do not have pip installed, you can get it with:
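On Ubuntu, for example:

```
sudo apt-get install python-pip
```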

Now you can easily install docker compose:
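```
sudo pip install docker-compose
```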

To check the Docker-Compose installation run:
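```
docker-compose --version
```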

Now that we have Docker-Compose, we can define a new configuration file to deploy the services of our application in different containers and compose (link) them.
Here is an example of a docker-compose.yml file:
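A sketch based on the Compose documentation's WordPress example (image tags, passwords and ports are placeholders):

```yaml
version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
```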

We defined a db service (with the mysql image) and a wordpress service (which depends on the db service). These two containers can be deployed on the same host or on two different hosts. As you know, when you start a new application project it is common to build everything on the same machine, and it is often impossible to split the services later; this approach helps us split up the application when it grows or when the demand for its different parts changes.

[Image: deploying the composed services to one or more hosts]

So we moved from a single monolithic application to a multi-container application (that can be deployed anywhere!).
In the next post I will describe how to build a Python Django web application using Docker-Compose (web server and PostgreSQL services split into two containers).

References:
Docker Compose file
Docker Installation – Ubuntu
Getting started with Docker