Installing Ghost via Docker

My blog is running on Ghost. Ghost is cool, but the thing is: most people do not know about Node.js or how to set up a Node.js application.

But you don't have to!

Virtualization or Containers

You could run a virtual machine with Xen or VirtualBox and just use an appliance that somebody already created for you. But this does not scale well when you want to run more than a couple of instances on a server.

So how do you set up a server without knowing exactly what to run (and how), while not using a full virtual machine? The answer to that could be Docker.

1. Set up Docker on Ubuntu (14.04+)

This, like all other steps in this guide, is straightforward:

$ sudo apt-get install docker.io

Docker is a very lightweight pseudo-virtualization framework built on top of Linux cgroups and chroot jails. Think of it as a kind of changeroot environment that is even stricter than a normal chroot jail: it restricts access to all resources that are not part of the jail. You will see the container's processes when you look at the process list from outside the jail, and you may even be able to kill them, but from inside the container you will not be able to access anything on the outside.

So the overhead of the container running the Ghost service will be minimal. It needs somewhat more resources than running the Node server directly (shared libraries have to be loaded twice, for example, and a minimal OS image is installed too), but a good server will be able to run quite a few instances without breaking down.
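
You can see this isolation for yourself once a container is running (we will start one in step 6): from the host, the container's processes show up in the normal process list. The <container-id> is just a placeholder here.

# host view: the container's node process is visible like any other process
$ sudo docker top <container-id>
$ ps aux | grep node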

2. Get the Ghost Docker image

The next step is downloading the official Docker image for the Ghost service:

$ sudo docker pull dockerfile/ghost

This will take a while because the Ghost Docker image is a layered image. Docker creates a differential layer for each action that is run while provisioning the image, so you can see what every single command did to the image, roll back a few steps if something went wrong, or update a single component in the history.

If you want to get an overview of what exactly happened to an image over time, run the history command on it:

$ sudo docker history dockerfile/ghost
IMAGE               CREATED             CREATED BY                                      SIZE
7fd9622ac59b        11 days ago         /bin/sh -c #(nop) EXPOSE map[2368/tcp:{}]       0 B
e052bd394625        11 days ago         /bin/sh -c #(nop) CMD [bash /ghost-start]       0 B
8951c214a253        11 days ago         /bin/sh -c #(nop) WORKDIR /ghost                0 B
083fc6513232        11 days ago         /bin/sh -c #(nop) VOLUME ["/data", "/ghost-ov   0 B
50b7f9b06075        11 days ago         /bin/sh -c #(nop) ENV NODE_ENV=production       0 B
34de0ad45f77        11 days ago         /bin/sh -c #(nop) ADD file:27b2fabfe632ee15b9   880 B
abe6497bce46        11 days ago         /bin/sh -c cd /tmp &&   wget https://ghost.or   71.38 MB
65f8a4200da9        11 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B
5c85a5ac1a37        11 days ago         /bin/sh -c #(nop) WORKDIR /data                 0 B
7a5fa70ca2f3        11 days ago         /bin/sh -c cd /tmp &&   wget http://nodejs.or   17.73 MB
1d73c42b2c8b        12 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B
b80296c7dcea        12 days ago         /bin/sh -c #(nop) WORKDIR /data                 0 B
b90d7c4116a7        12 days ago         /bin/sh -c apt-get update &&   apt-get instal   56.41 MB
036f41962925        12 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B
6ca8ad8beff9        12 days ago         /bin/sh -c #(nop) WORKDIR /root                 0 B
caa6e240bc5e        12 days ago         /bin/sh -c #(nop) ENV HOME=/root                0 B
95d3002f2745        12 days ago         /bin/sh -c #(nop) ADD dir:a0224129e16f61bf5ca   80.57 kB
2cfc7dfeba2d        12 days ago         /bin/sh -c #(nop) ADD file:20736e4136fba11501   532 B
e14a4e231fad        12 days ago         /bin/sh -c #(nop) ADD file:ea96348b2288189f68   1.106 kB
4c325cfdc6d8        12 days ago         /bin/sh -c sed -i 's/# \(.*multiverse$\)/\1/g   221.9 MB
9cbaf023786c        12 days ago         /bin/sh -c #(nop) CMD [/bin/bash]               0 B
03db2b23cf03        12 days ago         /bin/sh -c apt-get update && apt-get dist-upg   0 B
8f321fc43180        12 days ago         /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/   1.895 kB
6a459d727ebb        12 days ago         /bin/sh -c rm -rf /var/lib/apt/lists/*          0 B
2dcbbf65536c        12 days ago         /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic   194.5 kB
97fd97495e49        12 days ago         /bin/sh -c #(nop) ADD file:84c5e0e741a0235ef8   192.6 MB
511136ea3c5a        16 months ago                                                       0 B

As you can see, the official Docker image for Ghost builds on a rather old version of Ubuntu, about 16 months old (it's 12.04 LTS). That old version is currently used because most Docker images are based on it, so if you run multiple different images you'll only spend those 192.6 MB once, as the base layer can be shared.
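
If you are curious what is already stored locally, docker images lists the downloaded images with their virtual sizes; the shared base layers are stored only once on disk, even though each image's virtual size includes them. With -a the intermediate layers show up as well (at least on the Docker versions from the Ubuntu 14.04 repositories).

$ sudo docker images
$ sudo docker images -a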

3. Set up a user to run Ghost from

Of course you could run the Ghost Docker container from your default user on the system, but I prefer to separate services by user, so I created a new user:

$ sudo adduser ghost

You'll have to put that user into the docker group so it is able to run docker commands (the -a flag keeps any supplementary groups the user already has):

$ sudo usermod -aG docker ghost
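
You can verify the membership afterwards; note that the ghost user has to log in (again) before the new group takes effect:

$ id ghost
# the output should list docker among the groups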

4. Download the latest Ghost release and themes

Everything from this step on we will run as the new ghost user, so just switch over:

$ sudo su - ghost

Fetch the latest Ghost release zip to get our hands on the example configuration file and the default Casper theme:

$ wget https://ghost.org/zip/ghost-latest.zip
$ mkdir ghost-latest
$ cd ghost-latest
$ unzip ../ghost-latest.zip

Now we copy the important parts to the data directory that the Ghost instance will use:

$ cd
$ mkdir mysite.com
$ cd mysite.com
$ cp -r ../ghost-latest/content .
$ cp ../ghost-latest/config.example.js config.js

Now we have a working directory structure and an example configuration file that we can adapt to our preferred settings.
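
The result should look roughly like this (the exact contents of the content directory depend on the Ghost release you downloaded):

$ ls ~/mysite.com
config.js  content
$ ls ~/mysite.com/content
apps  data  images  themes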

5. Configuring Ghost

The simplest thing we can do is edit the example config we just copied and add our domain name to it (config->production->url):

var path = require('path'),
    config;

config = {
    production: {
        url: 'http://mysite.com',
        mail: {},
        database: {
            client: 'sqlite3',
            connection: {
                filename: path.join(__dirname, '/content/data/ghost.db')
            },
            debug: false
        },

        server: {
            host: '0.0.0.0',
            port: '2368'
        }
    }
};

// export the configuration so Ghost can pick it up
module.exports = config;

Be sure to set config->production->server->host to 0.0.0.0, so that the server can be reached from outside the container later on.

6. Running Ghost in Docker

We pulled the Docker image in step 2, now let's use it:

$ docker run -d -p 2300:2368 -v $HOME/mysite.com:/ghost-override dockerfile/ghost

If everything worked, we can now access the Ghost instance at http://localhost:2300 or better yet: http://mysite.com:2300
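
A quick check from the server itself, without a browser, is one curl call (apt-get install curl if it is missing):

$ curl -I http://localhost:2300
# an HTTP/1.1 200 OK status line means the blog is up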

If something went wrong, we can inspect the log output of the Ghost instance by running the docker logs <id> command. For that we have to find out the <id> part:

$ docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                              NAMES
e425352a4b49        dockerfile/ghost:latest   bash /ghost-start   2 minutes ago        Up 2 minutes         2300/tcp, 0.0.0.0:2300->2368/tcp   mad_galileo

You can use the container ID or the name of the container for all commands that will accept an ID.

$ docker logs -f mad_galileo

If you specify the -f flag, as in the example, the log viewer works like tail -f (it follows the log as new entries are appended); if you omit -f, it just dumps what has been logged so far and exits.

7. Setting up a reverse proxy to route multiple domains

Normally, if only one instance runs on the server, we could just change the port 2300 in the docker command to 80 and be done. But the whole point of this setup is to have multiple instances running on the same server, sharing port 80 for different domains.

As we cannot bind more than one Docker container to a single port, we have to set up a reverse proxy that listens on port 80 and routes requests to the different Ghost instances for us.

I chose Varnish for that as it is easy to configure. To install Varnish we change back to a user that can use sudo.

7a. Install Varnish

Varnish is in the standard Ubuntu repositories, so installation is just one apt-get away:

$ sudo apt-get install varnish

The default installation of Varnish listens on a somewhat arcane port (6081) so it does not interfere with Apache. As we want Varnish to do the routing, we reconfigure it to listen on port 80 and move Apache out of the way.

To do that:

7b. Move Apache out of the way

Just change the listening port of Apache to some other port not currently in use and make it listen only on 127.0.0.1, as we don't need to access Apache directly from the outside. You'll find the configuration for that in /etc/apache2/ports.conf:

NameVirtualHost 127.0.0.1:8080
Listen 127.0.0.1:8080

<IfModule mod_ssl.c>
    NameVirtualHost *:443
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>
    NameVirtualHost *:443
    Listen 443
</IfModule>

We also have to change all the VirtualHost statements in /etc/apache2/sites-available/* to reflect this change:

<VirtualHost 127.0.0.1:8080>
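
A quick grep lists all VirtualHost lines, so you can spot any site that still uses the old port (adjust the path if your virtual hosts live elsewhere):

$ grep -rn "<VirtualHost" /etc/apache2/sites-available/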

Do not reload the Apache config yet, as your sites would go offline, and we don't want to interrupt anybody, do we?

7c. Set up Varnish to run on port 80

To make Varnish listen on port 80 instead of 6081, we have to change /etc/default/varnish:

DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

7d. Set up routing in Varnish to make Apache available again

We don't want to cut off the Apache daemon, so that "normal" websites keep working alongside our new Ghost instances. To make that possible, we set up a default backend in /etc/varnish/default.vcl:

backend apache {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
}

If you now restart Apache and afterwards the Varnish server, all sites should be accessible as they were before you made the change:

$ sudo service apache2 restart
$ sudo service varnish restart
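
To double-check that Varnish now owns port 80 and Apache answers only locally on 8080, you can look at the listening sockets (netstat is still available on Ubuntu 14.04; ss works as well):

$ sudo netstat -tlnp | grep -E ':80|:8080'
# expect varnishd on port 80 and apache2 on 127.0.0.1:8080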

If everything is working so far, we are ready to add the Ghost instances:

7e. Set up routing to the Ghost instances

As we did for Apache, we just add a few rules to /etc/varnish/default.vcl and restart Varnish afterwards:

backend ghost_mysite_com {
        .host = "127.0.0.1";
        .port = "2300";
}

sub vcl_recv {
        if (req.http.host ~ "mysite.com") {
                set req.backend = ghost_mysite_com;
                return(pass);
        }
}

If you don't want Varnish to act as a cache (which it is very good at), you could use return(pipe) instead to disable that.

Be sure the Apache backend is always the first backend defined, because the first backend is the default Varnish falls back to when no rule in vcl_recv selects another backend.
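
Putting the pieces together, the relevant part of /etc/varnish/default.vcl now looks something like this sketch (Varnish 3 syntax, as shipped with Ubuntu 14.04; the backend name and domain are the examples from above):

# default backend: Apache, defined first so Varnish falls back to it
# whenever no rule in vcl_recv picks another backend
backend apache {
    .host = "127.0.0.1";
    .port = "8080";
}

# one backend per Ghost container, pointing at its published host port
backend ghost_mysite_com {
    .host = "127.0.0.1";
    .port = "2300";
}

sub vcl_recv {
    if (req.http.host ~ "mysite.com") {
        set req.backend = ghost_mysite_com;
        return(pass);
    }
    # everything else falls through to the apache backend
}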

8. Make it permanent

If your server reboots, the currently running Docker containers will not be started again by default, so we add the command that runs them to /etc/rc.local to execute it automatically on boot:

su ghost -c 'docker run -d -p 2300:2368 -v /home/ghost/mysite.com:/ghost-override dockerfile/ghost'

Just append that line before the exit 0 statement at the end.
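
The end of /etc/rc.local then looks roughly like this:

# start the Ghost container for mysite.com on boot
su ghost -c 'docker run -d -p 2300:2368 -v /home/ghost/mysite.com:/ghost-override dockerfile/ghost'

exit 0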

9. Make it a farm

Now what if you want to run multiple instances? (A worked example follows this list.)

  • Add a new data directory in /home/ghost (see steps 4 and 5)
  • Run a new docker instance on a different port (step 6 but change the 2300)
  • Add a rule to /etc/varnish/default.vcl for that instance (step 7e, change the 2300 again)
  • Add a line to /etc/rc.local, use the changed command line from step 6 (or change the 2300 again ;) )
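
Concretely, for a hypothetical second blog mysite2.com served from host port 2301, the additions would look roughly like this (domain and port are just examples):

# step 6: run a second container, publishing a different host port
$ docker run -d -p 2301:2368 -v $HOME/mysite2.com:/ghost-override dockerfile/ghost

# step 7e: another backend in /etc/varnish/default.vcl ...
backend ghost_mysite2_com {
        .host = "127.0.0.1";
        .port = "2301";
}

# ... and another rule inside the existing sub vcl_recv
        if (req.http.host ~ "mysite2.com") {
                set req.backend = ghost_mysite2_com;
                return(pass);
        }

# step 8: one more line in /etc/rc.local, before exit 0
su ghost -c 'docker run -d -p 2301:2368 -v /home/ghost/mysite2.com:/ghost-override dockerfile/ghost'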