Running multiple Docker applications behind Nginx reverse proxy

December 20, 2018
docker nginx

Docker is no longer a buzzword; it has become a “standard” way of developing, packaging and running web applications. Nginx is one of the most popular reverse proxies, powering up to 39.0% of services worldwide.

This post shows how to run multiple independent Docker applications on the same host, behind a single Nginx server. It assumes the reader is already familiar with both Docker and Nginx.

Intro

When it comes to connecting these tools to achieve our desired result, there are two proper ways to do it:

  1. Manually create nginx configuration files and reload them whenever the dockerized applications change
  2. Use the nginx-proxy Docker image, which automatically generates nginx configurations and updates them whenever the Docker applications change

We’ll go through both options in order to demonstrate how everything works under the hood and how easy the process becomes when using nginx-proxy.

For demonstration purposes, we’ll run 2 containers that will act as 2 independent applications:

  1. A Hello World REST API, based on the crccheck/hello-world image
  2. Portainer, a Docker management UI, based on the portainer/portainer image

Setup common network

According to the Docker docs, when two or more containers are started without a network specified, they are connected to the default bridge network, where they can communicate with each other by container IP address. The default bridge network does not have built-in service discovery that would allow containers to reach each other by name, so in order to support this, a custom network must be created.

A custom network is usually more suitable, because containers can communicate by name and you can define which containers belong to which network.

To create a Docker network, use the command below, where <network_name> is the name of the network. In our case we’ll name the network nginx-proxy:

# docker network create <network_name>
docker network create nginx-proxy
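
To double-check that the network exists and, later on, to see which containers have joined it, the standard network commands can be used (purely a sanity check, not required for the setup):

# list all networks; nginx-proxy should appear with the bridge driver
docker network ls

# show the network's details, including the containers connected to it
docker network inspect nginx-proxy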

From now on, any container we need will join this network when it is started.

Starting dockerized applications

Now that our custom Docker network nginx-proxy is created, let’s start our Docker applications, passing the previously created network as the --network parameter.

Start the Hello World REST API container

docker run --name hello_world_check -p 8000:8000 --network nginx-proxy -d crccheck/hello-world

Start the Portainer container

docker run --name portainer -p 9000:9000 --network nginx-proxy -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data -d portainer/portainer

Navigating to <host_ip_address>:8000 will show a hello world message, while <host_ip_address>:9000 shows the Portainer login page. Both applications are exposed on the host via specific ports, but our goal is to access the applications without explicitly specifying port numbers. The exposed host port for hello_world_check could be changed to :80, so that it becomes available directly at <host_ip_address>. But what about the portainer app? We cannot have more than one application listening on the same port.
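
As a quick command-line sanity check (assuming curl is available on the host), both containers should answer on their published ports, each one on a port of its own:

# should print the ASCII-art hello world page
curl http://<host_ip_address>:8000

# should return the HTML of the Portainer web UI
curl http://<host_ip_address>:9000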

This is where Nginx comes in handy. Nginx will be the single application listening on port :80 and will route traffic to the appropriate application based on a set of rules.

Let’s suppose (if you are not doing so already) that you are running these applications on a private server and that you have two domains for the two applications, with correct DNS configuration (if you don’t own such domains, they can be simulated locally, as shown after the list below). In our case:

hello-world.com for hello world application
docker-dashboard.com for Docker UI application
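
If you don’t own these domains, they can be simulated for local testing by pointing them at the host in /etc/hosts (the IP address below is a placeholder; replace it with your host’s actual address):

# append local DNS overrides; 203.0.113.10 stands in for <host_ip_address>
echo "203.0.113.10 hello-world.com www.hello-world.com" | sudo tee -a /etc/hosts
echo "203.0.113.10 docker-dashboard.com www.docker-dashboard.com" | sudo tee -a /etc/hosts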

We’ll have to configure our Nginx proxy to route traffic based on the requested domain: Nginx will look at the domain of the incoming request and direct it to the corresponding application. As pointed out before, we will do this in both the manual and the automatic way.

Manual configuration

For this step we’ll need a container running nginx with a configuration file that we write ourselves. The container will not run the original nginx image directly; instead, it will run an image that we build on top of it.

The configuration file that nginx must run with is shown below. I tried to keep it as simple as possible and to document how it works. Save it in a file named default.conf.

# default.conf

# this config will route traffic to hello_world_check container
server {
    listen 80;
    server_name hello-world.com www.hello-world.com;

    # default Docker DNS address
    resolver 127.0.0.11;

    location / {
        # using a variable prevents nginx from resolving the container's hostname at startup
        # set the name of the container that nginx should forward requests to;
        # nginx will use Docker's DNS resolver to resolve hello_world_check
        # into an actual IP address at request time
        set $docker_host_hello_world_check "hello_world_check";

        # proxy the request to hello_world_check
        proxy_pass http://$docker_host_hello_world_check:8000;
    }
}


# this config will route traffic to portainer container
server {
    listen 80;
    server_name docker-dashboard.com www.docker-dashboard.com;

    resolver 127.0.0.11;

    location / {
        set $docker_host_portainer "portainer";
        proxy_pass http://$docker_host_portainer:9000;
    }
}

Create a Dockerfile that will serve as the basis for our image. Its job is to replace nginx’s default configuration with our custom one.

# Dockerfile

FROM nginx

COPY ./default.conf /etc/nginx/conf.d/default.conf

At this point we have all the necessary files to build and run our nginx container.

# create the docker image
docker build -t custom_nginx .

# run the container so it joins the network of 2 already running applications
docker run -p 80:80 --network nginx-proxy -d custom_nginx
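
Before pointing real DNS at the host, the routing can be verified by hand: the image’s configuration can be syntax-checked with nginx -t, and requests can be sent with a faked Host header (assuming curl is installed; <host_ip_address> is the same placeholder as before):

# validate the nginx configuration baked into the image
docker run --rm custom_nginx nginx -t

# nginx routes by the Host header, so the domains can be faked per request
curl -H "Host: hello-world.com" http://<host_ip_address>/
curl -H "Host: docker-dashboard.com" http://<host_ip_address>/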

Optionally, one more step is to re-run our applications without publishing any ports to the host. At the moment you can still access each application via <host_ip_address>:<service_port>; if you want to prevent this, recreate the containers with the --expose flag instead of -p, so that they are reachable only by containers in the same network (e.g. nginx-proxy).

# stop and remove containers of previous applications
docker stop hello_world_check && docker rm hello_world_check
docker stop portainer && docker rm portainer

# start containers without publishing any ports to host machine
docker run --name hello_world_check --expose 8000 --network nginx-proxy -d crccheck/hello-world
docker run --name portainer --expose 9000 --network nginx-proxy -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data -d portainer/portainer
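
To confirm that nothing is published on the host anymore, docker port can be used; with --expose alone it prints no mappings at all (just a sanity check):

# empty output means no ports are published on the host
docker port hello_world_check
docker port portainer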

Congratulations, our applications are now accessible at hello-world.com and docker-dashboard.com and work as intended.

Nevertheless, even with this minimal setup, you may need to manually update the nginx configuration whenever one of the containers changes its properties (e.g. its ports change, or the container is stopped).

Wouldn’t it be cool if there was a way to dynamically generate nginx configs based on the state of the containers, so that whenever a container changes, the nginx configs change as well?

Of course it would!

Automatic configuration

nginx-proxy is an automated nginx proxy for Docker containers that generates and updates the nginx config files for you. It works by constantly monitoring the state of the containers via the Docker API.

Start the nginx-proxy container in the already existing nginx-proxy network. (Note that you’ll have to stop the existing custom_nginx container if it’s still running on port :80.)

docker run -p 80:80 --network nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro -d jwilder/nginx-proxy

In order for nginx-proxy to know what to do with incoming requests, a set of environment variables can be passed to the Docker containers when they are started. We are going to (re)create our Docker applications with one additional environment variable.

docker run --name hello_world_check --expose 8000 --network nginx-proxy -e VIRTUAL_HOST=hello-world.com -d crccheck/hello-world
docker run --name portainer --expose 9000 --network nginx-proxy -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data -e VIRTUAL_HOST=docker-dashboard.com -d portainer/portainer

Now, based on the domain name that is accessed, the request will be routed to the container whose VIRTUAL_HOST environment variable matches that domain.
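
If you are curious what nginx-proxy generated for these two containers, the configuration it maintains inside the proxy container can be printed (the ancestor filter below assumes the proxy was started from the jwilder/nginx-proxy image as above, and the path is where nginx-proxy writes its generated config):

# print the nginx configuration generated by nginx-proxy
docker exec $(docker ps -q -f ancestor=jwilder/nginx-proxy) cat /etc/nginx/conf.d/default.conf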

For a bare-minimum example, that’s pretty much it. The applications should now be live, and whenever the containers change, the updates are automatically propagated to their corresponding nginx configuration.

nginx-proxy is much more configurable than this: it allows overriding the default configuration, location blocks, headers and much more. It also comes with a LetsEncrypt companion tool for easy certificate management. Make sure to check out the documentation for more information and more complex configuration details.
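
As a rough sketch of how that companion fits into this setup: the image name, extra volumes and LETSENCRYPT_* variables below follow the companion’s documentation rather than anything shown above, and the container name nginx-proxy-container is just an example, so double-check the current docs before relying on this.

# nginx-proxy needs to listen on :443 and share a few volumes with the companion
docker run -d -p 80:80 -p 443:443 --network nginx-proxy --name nginx-proxy-container -v certs:/etc/nginx/certs -v vhost:/etc/nginx/vhost.d -v html:/usr/share/nginx/html -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# the companion watches the same Docker socket and obtains/renews certificates
docker run -d --network nginx-proxy --volumes-from nginx-proxy-container -v /var/run/docker.sock:/var/run/docker.sock:ro jrcs/letsencrypt-nginx-proxy-companion

# applications additionally set LETSENCRYPT_HOST (and optionally LETSENCRYPT_EMAIL)
docker run --name hello_world_check --expose 8000 --network nginx-proxy -e VIRTUAL_HOST=hello-world.com -e LETSENCRYPT_HOST=hello-world.com -d crccheck/hello-world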
