Docker Compose: Scaling Multi-Container Applications

Introduction

In the Docker Compose: Creating Multi-Container Applications blog post we talked about Docker Compose, the benefits it offers, and used it to create a multi-container application. Then, in the Running a .NET application as part of a Docker Compose in Azure blog post, we explained how to create a multi-container application composed of a .NET web application and a Redis service. So far, so good.

However, although we can easily get multi-container applications up and running using Docker Compose, in real environments (e.g. production) we need to ensure that our application will continue responding even if it receives numerous requests. In order to achieve this, those in charge of configuring the environment usually create multiple instances of the web application and set up a load balancer in front of them. So, the question here is: Could we do this using Docker Compose? Fortunately, Docker Compose offers a really simple way to create multiple instances of any of the services defined in the Compose.

Please note that although Docker Compose is not considered production ready yet, the goal of this post is to show how easily a particular service can be scaled using this feature, so you know how to do it when the final version is released.

Running applications in more than one container with “scale”

Docker Compose allows you to run multiple containers for a service, all based on the same image. Using the “scale” command in combination with a load balancer, we can easily configure scalable applications.

The “scale” command sets the number of containers to run for a particular service. For example, if we wanted to run a front-end web application in 10 different containers, we could use a command like the one below.
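
docker-compose scale web=10

Here, “web” is a hypothetical service name used only for illustration; you would use whatever name the service has in your docker-compose.yml file.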

Considering the scenario we worked on in the Running a .NET application as part of a Docker Compose in Azure blog post, how could we scale the .NET web application service to run in 3 different containers at the same time? Let’s see…

Check/update the docker-compose.yml file

The first thing we need to do is ensure that the service we want to scale does not specify the external/host port. If we specified that port, the service could not be scaled, since all the instances would try to bind to the same host port. So, we just need to make sure that the service we want to scale only defines the private port, in order to let Docker choose a random host port when the container instances are created.

But, how do we specify only the private port? The port value can be configured as follows:

  • If we want to specify the external/host port and the private port, the “ports” configuration would look like this:
    “<external-port>:<private-port>”
  • If we want to specify only the private port, this would be the “ports” configuration:
    “<private-port>”
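
For example, a hypothetical “web” service exposing private port 5000 could be configured in either of these two ways, although only the second one can be scaled (both the service name and the “mywebapp” image are placeholders used for illustration):

web:
  image: mywebapp
  ports:
   - "8080:5000"  # fixed host port: only one instance can bind to 8080

web:
  image: mywebapp
  ports:
   - "5000"       # private port only: Docker picks a random host port per instance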

In our scenario, we want to scale the .NET web application service called “net”; therefore, that service should not specify the external port. As you can see in our docker-compose.yml file displayed below, the ports specification for the “net” service only contains one port, which is the private one. So, we are good to go.

net:
  image: websiteondocker
  ports:
   - "210"
  links:
   - redis
redis:
  image: redis

Remember that the private port we specify here must be the one we provided when we published the .NET application from Visual Studio since the application is configured to work on that port.

Scaling a service

Now that we have the proper configuration in the docker-compose.yml file, we are ready to scale the web application.

If we don’t have our Compose running or have modified the docker-compose.yml file, we would need to recreate the Compose by running “docker-compose up -d”.

Once we have the Compose running, let’s check the containers we have running as part of the Compose by executing “docker-compose ps”:

[Screenshot: “docker-compose ps” output listing one container for the “net” service and one for the “redis” service]

As you can see, there is one container running that corresponds to the “net” service (.NET web application) and another container corresponding to the Redis service.

Now, let’s scale our web application to run in 3 containers. To do this, we just need to run the scale command as follows:

docker-compose scale net=3

 

In the previous command, “net” is the name of the service that we want to scale and “3” is the number of instances we want. As a result of running this command, 2 new containers running the .NET web application will be created.

[Screenshot: output of the “docker-compose scale net=3” command creating two new containers]

If we check the Docker Compose containers now, we’ll see the new ones:

[Screenshot: “docker-compose ps” output listing the three “net” containers and the “redis” container]

We need to consider that Docker Compose remembers the number of instances set in the scale command. So, from now on, every time we run “docker-compose up -d” to recreate the Compose, 3 containers running the .NET web application will be created. If we only want 1 instance of the web application again, we can run “docker-compose scale net=1”. In this case, Docker Compose will delete the extra containers.
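
In other words, scaling back down and then recreating the Compose will keep a single web application container:

docker-compose scale net=1  # removes the extra containers
docker-compose up -d        # recreates the Compose with the remembered count (1)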

At this point, we have 3 different containers running the .NET web application. But, how hard would it be to add a load balancer in front of these containers? Well, adding a load balancer container is pretty easy.

Configuring a load balancer

There are several proxy images that can balance the load across multiple containers. We tested one of them: tutum/haproxy.

When we created the .NET web application, we included logic to display the name of the machine where the requests are processed:

@{
    ViewBag.Title = "Home Page";
}
<h3>Hits count: @ViewBag.Message</h3>
<h3>Machine Name: @Environment.MachineName</h3>

So, once we set a load balancer in front of the 3 containers, the application should display different container IDs (inside a Docker container, the machine name defaults to the container ID).

Let’s create the load balancer. In our scenario, we can create a new container using the tutum/haproxy image to balance the load between the web application containers by applying any of the following methods:

  • Manually start the load balancer container:
    We can manually start a container running the tutum/haproxy image by running the command displayed below. We would need to provide the different web app container names in order to indicate to the load balancer where it should send the requests.

docker run -d -p 80:80 --link <web-app-1-container-name>:<web-app-1-container-name> --link <web-app-2-container-name>:<web-app-2-container-name> … --link <web-app-N-container-name>:<web-app-N-container-name> tutum/haproxy
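
In our scenario, with the three web application containers that we will see later in this post (“netcomposetest_net_3”, “netcomposetest_net_4” and “netcomposetest_net_5”), the command would look like this:

docker run -d -p 80:80 --link netcomposetest_net_3:netcomposetest_net_3 --link netcomposetest_net_4:netcomposetest_net_4 --link netcomposetest_net_5:netcomposetest_net_5 tutum/haproxy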

 

  • Include the load balancer configuration as part of the Docker Compose:
    We can update the docker-compose.yml file in order to include the tutum/haproxy configuration. This way, the load balancer would start when the Compose is created and the site would be accessible just by running one command. Below, you can see what the configuration corresponding to the load balancer service would look like. The “haproxy” service definition specifies a link to the “net” service. This is enough to let the load balancer know that it should distribute the requests between the instances of the “net” service, which correspond to the .NET web application.

haproxy:
  image: tutum/haproxy
  links:
   - net
  ports:
   - "80:80"

In our scenario, we will apply the second approach since it allows us to start the whole environment by running just one command. Although in general we think that it is better to include the load balancer configuration in the Compose configuration file, please keep in mind that starting the load balancer together with the rest of the Compose may not always be the best solution. For example, if you scaled the web application service by adding new instances and you want the load balancer to start considering those instances without the site being down for too long, restarting the load balancer container manually may be faster than recreating the whole Compose.
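
In that case, a plain Docker restart of the load balancer container would be enough. The actual container name depends on your Compose project, so the value below is just a placeholder:

docker restart <load-balancer-container-name>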

Continuing with our example, let’s update the “docker-compose.yml” file to include the “haproxy” service configuration.

First, open the file:

vi docker-compose.yml

 

Once the file opens, press i (“Insert”) to start editing the file. Here, we will add the configuration corresponding to the “haproxy” service:

haproxy:
  image: tutum/haproxy
  links:
   - net
  ports:
   - "80:80"
net:
  image: websiteondocker
  ports:
   - "210"
  links:
   - redis
redis:
  image: redis

Finally, to save the changes we made to the file, just press Esc and then :wq (write and quit).

At this point, we are ready to recreate the Compose by running “docker-compose up -d”.

[Screenshot: output of “docker-compose up -d” recreating the existing containers and creating the “haproxy” container]

As you can see in the previous image, the existing containers were recreated and, additionally, a new container corresponding to the “haproxy” service was created.

So, Docker Compose started the load balancer container, but is the site working? Let’s check it out!

First, let’s look at the containers we have running:

[Screenshot: “docker ps” output showing the load balancer container listening on port 80]

As you can see, the load balancer container is up and running on port 80. So, since we already have an endpoint configured in our Azure VM for this port, let’s access the URL corresponding to our VM.

[Screenshot: the site running, showing the request was processed by the “netcomposetest_net_3” container]

The site is running! Notice that the container ID is displayed on the page. Checking the displayed value against the result we got from the “docker ps” command, we can see that the request was processed by the “netcomposetest_net_3” container.

If we reload the page, this time the request should be processed by a different container.

[Screenshot: the site after a reload, showing the request was processed by the “netcomposetest_net_4” container]

This time, the request was processed by the “netcomposetest_net_4” container.
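
If you prefer the command line, you can get the same evidence with a quick loop; replace <your-vm-dns-name> with the actual DNS name of your VM:

for i in 1 2 3 4 5; do curl -s http://<your-vm-dns-name>/ | grep "Machine Name"; done

Each response should show the ID of the container that handled that particular request.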

At this point we have validated that the .NET web application is running in different containers and that the load balancer is working. Plus, we have verified that all the containers are consuming information from the same Redis service instance since, as you can see, the hit count increased even when the requests were processed by different web application instances.

Now, what happens if we need to stop one of the web application containers? Do we need to stop everything? The answer is “No”. We can stop a container, and the load balancer will notice and won’t send new requests to it. The best thing here is that the site continues running!

Let’s validate this in our example. Since we have 3 web application containers running, we can stop 2 of them and then try to access the site.

To stop the containers, we can run the “docker stop <container-name>” command. Looking at the result we got from the “docker ps” command, we can see that our containers are called “netcomposetest_net_3”, “netcomposetest_net_4” and “netcomposetest_net_5”. Let’s stop the “netcomposetest_net_3” and “netcomposetest_net_4” containers.
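
Since “docker stop” accepts several container names at once, both containers can be stopped with a single command:

docker stop netcomposetest_net_3 netcomposetest_net_4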

[Screenshot: stopping the “netcomposetest_net_3” and “netcomposetest_net_4” containers]

Now, if we reload the page, we will see that the site is still working!

[Screenshot: the site still responding, with the request processed by the “netcomposetest_net_5” container]

This time the request was processed by the only web application container we have running: “netcomposetest_net_5”.

If we keep reloading the page, we will see that all the requests are processed by this container.

[Screenshot: repeated reloads, all processed by the “netcomposetest_net_5” container]


