Improvements in personal website deployment

Posted by Matthias Noback

I wanted to be able to deploy MailComments to my Digital Ocean droplet (VPS) easily and without thinking. Due to a lack of maintenance, some more "operations" work had piled up as well:

  • The Digital Ocean monitoring agent had to be upgraded, but apt didn't have enough memory to do that on this old, small droplet.
  • The Ubuntu version running on that droplet was also a bit old by now.
  • The easiest thing to do was to just create a new droplet and prepare it for deploying my personal websites.
  • Unfortunately, my DNS setup was completely tied to the droplet's IP address, so I couldn't just create a new droplet and quickly switch over; I'd have to wait for the new DNS information to propagate.

These issues were in the way of progress, so I decided to take some more time to rearrange things.

First: I created a droplet in a Digital Ocean region that supports both floating IPs and volumes (more about volumes later). Then I assigned a floating IP to the existing droplet. A floating IP lets you use a single IP address for all incoming traffic, while dynamically assigning that address to any droplet in the same region. This means you can set up a new droplet, assign the floating IP to it when it's ready, and then safely destroy the old droplet without losing any traffic.

Then I started working on that new droplet, setting it up the way I wanted. This was my shopping list:

  • A newer version of Ubuntu (it didn't have to be Ubuntu, but I don't have experience with any of the other distributions)
  • Docker
  • Nothing else really...

If that's your shopping list, it's easy to create a new droplet using Docker Machine, which has a driver for Digital Ocean. I enabled monitoring, used the standard droplet size, and that was it. One advantage: you don't need to set up a root password. In fact, you can't log in with a password at all; you always use an SSH key.

This is a nice way of keeping yourself from logging in to the server and performing all kinds of manual setup steps that you could never reproduce in a script. Those steps make you too attached to this particular server, and afraid of destroying it and starting all over.

Here is the script I created for provisioning a new droplet:

#!/usr/bin/env bash

# Stop at first error; stop at undefined variable
set -eu

# On the local development machine:

# Read environment variables from .env
source .env

DIGITALOCEAN_REGION="${DIGITALOCEAN_REGION:-ams3}"

# Thanks to `set -u`, these assignments fail early if .env doesn't define them
DIGITALOCEAN_ACCESS_TOKEN="${DIGITALOCEAN_ACCESS_TOKEN}"
DIGITALOCEAN_SSH_KEY_FINGERPRINT="${DIGITALOCEAN_SSH_KEY_FINGERPRINT}"

NEW_MACHINE_UUID=$(uuidgen)

docker-machine create --driver digitalocean \
        --digitalocean-access-token="${DIGITALOCEAN_ACCESS_TOKEN}" \
        --digitalocean-ssh-key-fingerprint="${DIGITALOCEAN_SSH_KEY_FINGERPRINT}" \
        --digitalocean-image=ubuntu-18-04-x64 \
        --digitalocean-region="${DIGITALOCEAN_REGION}" \
        --digitalocean-size=s-1vcpu-1gb \
        --digitalocean-monitoring=true \
        "${NEW_MACHINE_UUID}"
echo "${NEW_MACHINE_UUID}" > machine_id

Note: the .env file that's loaded by source looks a bit different from your average .env file:

export DIGITALOCEAN_ACCESS_TOKEN=...
export DIGITALOCEAN_SSH_KEY_FINGERPRINT=...
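The `export` keyword is the difference: variables that are merely assigned by a sourced file are visible to the script itself, but only exported variables reach child processes such as `docker-machine` (which can read `DIGITALOCEAN_ACCESS_TOKEN` from the environment). A small demonstration, with a made-up file name and token value:

```shell
# Create an example env file that exports its variables
cat > /tmp/example.env <<'EOF'
export DIGITALOCEAN_ACCESS_TOKEN=dummy-token
EOF

# Source it in the current shell
source /tmp/example.env

# A child process can now see the variable, because it was exported
sh -c 'echo "token=${DIGITALOCEAN_ACCESS_TOKEN}"'
```

If you prefer a plain `KEY=VALUE` file, `set -a; source .env; set +a` achieves the same effect: with `allexport` enabled, every variable the file assigns gets exported automatically.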

Traefik

On the old droplet I used jwilder/nginx-proxy, since it had a nice setup for automatically creating certificates to support a secure connection. I had been aware of Traefik for some time and thought it would be a great replacement for nginx-proxy. Some things were a bit hard to figure out, but a couple of hours later I had it configured properly.

I wanted to be able to launch any Docker container inside the traefik Docker network and let Traefik recognize it automatically as a service it should route traffic to. Traefik is built for this, and you just have to add a couple of labels to the container definitions:

services:
  matthiasnoback_nl:
    # all the usual things
    # ...
    networks:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.matthiasnoback_nl_https.rule=Host(`matthiasnoback.nl`)"
      - "traefik.http.routers.matthiasnoback_nl_https.entrypoints=websecure"
      - "traefik.http.routers.matthiasnoback_nl_https.tls.certresolver=myhttpchallenge"

Traefik itself is the only service that needs to listen on ports 80 and 443. It forwards traffic to containers based on their routing rules. I decided to make my own Docker image based on Traefik's official image so I could copy the configuration as part of its build:

FROM traefik:v2.0
COPY traefik.toml /etc/traefik/traefik.toml
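For completeness, here's roughly how the Traefik service itself could be declared in docker-compose.yml. This is a sketch, not my exact configuration; the build context and the read-only Docker socket mount (which Traefik's Docker provider needs in order to discover containers) are assumptions:

```yaml
services:
  traefik:
    build: ./traefik  # hypothetical directory containing the Dockerfile and traefik.toml
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Traefik's Docker provider watches the Docker API for labeled containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik

networks:
  traefik:
    external: true
```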

My traefik.toml file is quite basic:

[entryPoints]
  [entryPoints.web]
    address = ":80"

  [entryPoints.websecure]
    address = ":443"

[providers.docker]
  network = "traefik"

[certificatesResolvers.myhttpchallenge.acme]
  [certificatesResolvers.myhttpchallenge.acme.httpChallenge]
    entryPoint = "web"

HTTP to HTTPS

Besides a dynamic configuration based on container labels I wanted to have some static configuration for redirect rules as well. The first rule is an HTTP to HTTPS redirect. All sites use HTTPS (and they should), so the rewrite can be generic. First I declare a file for site-specific configurations:

# in traefik.toml

[providers.file]
  filename = "/etc/traefik/dynamic.toml"

The global HTTP to HTTPS redirection configuration looks like this:

[http.routers]
  [http.routers.redirecttohttps]
    entryPoints = ["web"]
    middlewares = ["httpsredirect"]
    rule = "HostRegexp(`{host:.+}`)"
    service = "noop"

[http.services]
  # noop service; its URL will never be called
  [http.services.noop.loadBalancer]
    [[http.services.noop.loadBalancer.servers]]
      url = "https://localhost"

[http.middlewares]
  [http.middlewares.httpsredirect.redirectScheme]
    scheme = "https"
    permanent = true

Redirects between secure sites

Then I have some old domain names that I wanted to redirect to matthiasnoback.nl. This was more difficult to accomplish. For a while I wondered why Traefik kept giving me a 404 Not Found instead of a redirect, until I realized that a redirect from one secure site to another requires the first site to have a proper certificate as well. The router that processes a request for the "old" domain first has to establish a secure connection, and only then can it return a redirect response to the new (also secure) address.

[http.routers]

  # ...

  [http.routers.mailcomments_com]
    entryPoints = ["websecure"]
    rule = "Host(`mailcomments.com`)"
    middlewares = ["redirect_mailcomments_com"]
    service = "noop"
    [http.routers.mailcomments_com.tls]
      certResolver = "myhttpchallenge"

[http.middlewares]
  [http.middlewares.redirect_mailcomments_com.redirectRegex]
    regex = "(.*)"
    replacement = "https://matthiasnoback.nl/mail-comments"
    permanent = false

Volumes

Traefik stores its certificates in an anonymous Docker volume, but I wanted to be able to back these up, in case I ever create a new droplet hosting the same secure websites. The new MailComments service also needed a volume to store its database (actually a collection of JSON files). On top of that, I wanted the Docker volumes to survive the destruction of a droplet. This means persistent data shouldn't live in an anonymous Docker volume, nor in a bind-mounted directory on the host machine; it should be on external storage that isn't on the droplet at all.

Digital Ocean itself offers block storage volumes for this purpose. A volume is basically an external hard drive that can be mounted inside a droplet. You then have access to the data that's on the volume. When the droplet dies, the volume stays, and you can attach it to a new droplet, where the data on the volume becomes available.

But Digital Ocean volumes aren't automatically Docker volumes. You could bind-mount a directory on a Digital Ocean volume to a location inside a container, but it turns out you can also use Digital Ocean volumes directly as Docker volumes of a specific type ("dobs"). To do that, you can install the RexRay volume plugin for Docker and use its Digital Ocean Block Storage provider. With it, you can create new Digital Ocean volumes on the fly using docker volume create, but you can also create a volume in the web control panel. A volume created there gets recognized automatically by Docker, and you can start using it as an "external" volume.

Because every newly provisioned droplet should have the rexray docker volume plugin installed, I enhanced the provisioning script a bit:

SSH="docker-machine ssh $(cat machine_id)"

# Install the RexRay Digital Ocean Block Storage plugin on the new machine:

${SSH} docker plugin install rexray/dobs --grant-all-permissions \
    DOBS_TOKEN="${DIGITALOCEAN_ACCESS_TOKEN}" \
    DOBS_REGION="${DIGITALOCEAN_REGION}"

${SSH} docker plugin ls

${SSH} docker volume ls
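With the plugin installed, you don't even need the control panel: a Digital Ocean volume can also be created from the command line. A sketch (the RexRay plugin takes the size in GB via `--opt`; I created my volume through the web interface instead, so treat the options as an example):

```shell
docker volume create --driver rexray/dobs --opt size=5 mail-comments-data
```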

I created a volume for the data of the MailComments service through Digital Ocean's web interface. Then I added it as an external volume to the docker-compose.yml file that I use to deploy the services to the droplet.

services:
  mail_comments:
    # ...
    volumes:
      - mail_comments_data:/data

volumes:
  mail_comments_data:
    external:
      name: mail-comments-data

I had some trouble with file permissions, but this was caused by the fact that the directory where the volume was to be mounted inside the mail_comments container wasn't empty. After I made sure this directory was empty during the build of the image, everything worked just fine.
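That fix can be a single instruction at the end of the image's Dockerfile. This is a sketch; /data is the mount point from the compose file above, and the exact step depends on how the image is built:

```dockerfile
# Recreate the mount point as an empty directory, so the external
# volume can be mounted over it without leftover files in the way
RUN rm -rf /data && mkdir -p /data
```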

Rolling out a new droplet

I could still put in a lot of extra effort to finish the remaining 20% of my wish list. Ideally, I would be able to do the following programmatically:

  1. Provision a new droplet
  2. Start all services on it
  3. Reassign the floating IP and volumes to the new droplet
  4. Roll back if anything went wrong

Digital Ocean has an extensive API you can use to perform the necessary actions without any human intervention. As an example, this is what (part of the) scripted solution could look like:

#!/usr/bin/env bash

set -eu

source .env

DIGITALOCEAN_ACCESS_TOKEN="${DIGITALOCEAN_ACCESS_TOKEN}"
NEW_DOCKER_MACHINE_ID=$(cat machine_id)
echo "New Docker Machine: ${NEW_DOCKER_MACHINE_ID}"

# We're using "json" to get the DropletID as a string instead of a number
NEW_DROPLET_ID=$(docker-machine inspect \
    --format '{{json .Driver.DropletID}}' \
    "${NEW_DOCKER_MACHINE_ID}")
echo "New droplet ID: ${NEW_DROPLET_ID}"

DIGITALOCEAN_FLOATING_IP="${DIGITALOCEAN_FLOATING_IP}"

FLOATING_IP_INFO=$(curl -X GET -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" \
    "https://api.digitalocean.com/v2/floating_ips/${DIGITALOCEAN_FLOATING_IP}" 2>/dev/null \
    | jq -c '.floating_ip.droplet | {name, id}')

echo "Floating IP currently assigned to ${FLOATING_IP_INFO}"

ASSIGN_TO_DROPLET_ID="${NEW_DROPLET_ID}"

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${DIGITALOCEAN_ACCESS_TOKEN}" \
  -d "{\"type\":\"assign\",\"droplet_id\": \"${ASSIGN_TO_DROPLET_ID}\"}" \
  "https://api.digitalocean.com/v2/floating_ips/${DIGITALOCEAN_FLOATING_IP}/actions"

By the way, I use jq here, which lets you write simple expressions for extracting specific fields from a JSON structure. Really cool.
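As an illustration, here is the same jq expression applied to a mocked-up response (this JSON is invented for the example, not real API output):

```shell
# A fake (abbreviated) floating IP API response
RESPONSE='{"floating_ip":{"droplet":{"name":"old-droplet","id":12345,"status":"active"}}}'

# Pick only the name and id of the droplet, as a compact JSON object
echo "${RESPONSE}" | jq -c '.floating_ip.droplet | {name, id}'
# → {"name":"old-droplet","id":12345}
```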

Although completing this setup would have been a nice learning experience, I didn't think it would be worth it for my "personal websites" situation. Everything runs fine now, and the new setup is more modern and more flexible than it was before. So, that's it for now.

PHP Docker Digital Ocean Traefik