xhyve: lightweight vm for Mac OS X

xhyve is a port of bhyve, FreeBSD's QEMU-like hypervisor, to the Mac OS X Hypervisor framework. This means it has zero dependencies that are not already installed on every Mac running at least OS X Yosemite (10.10). The cool thing, though, is that Mac OS X stays in full control of the system the whole time, because no third-party kernel driver hijacks the CPU while a VM is running, so the power management that OS X provides is always in charge of everything. No more battery-draining VMs \o/

xhyve logo

xhyve is Open Source

This is really cool because everyone can hack on it, and so did I. The code is, like every bit of low-level C code I have seen in my life, a bit hard to read and sparsely commented, but it is well structured, so you can easily find the part you want to modify and change it.

The project is quite young, so don't expect miracles. It has, for example, no graphics emulation; running Ubuntu or FreeBSD is reduced to a serial tty and networking. If you want to run a virtual server on your Mac for development purposes it's pretty much perfect, though.

There was one downer that got me: a virtual disk of, say, 30 GB has a backing file that is exactly 30 GB big, even if you only store 400 MB on it. That's bad for everyone running on an SSD, where space is still limited.

Introducing: Sparse HDD-Images for xhyve

Because the VM code is pretty small (the compiled xhyve executable is about 230 KB), I thought it might be possible for me to change this one last thing that prevented me from using xhyve on my MacBook. It turns out the virtual block device subsystem is really easy to hack: all disk access code is neatly contained in one file, blockif.c, which is cleanly separated from the virtio-block and ahci drivers.

So what I went out to do was three things:

  • Split the big disk image file into multiple segments (as for why, read on)
  • Make the disk image segments only store blocks that have actual content in them (vs. storing only zeroes)
  • Make xhyve create the backing image files if they do not exist.

Splitting the disk image into segments

You may ask why. Splitting is mainly an optimization to maintain speed and to aid debugging, but it turned out to have the following advantages:

  • Some file systems only allow files up to a maximum size (prime example: FAT32 caps files at 4 GB)
  • Sparse image lookup tables can be filled with 32 bit values instead of defaulting to 64 bit (which saves 50% space in the lookup tables)
  • Debugging is easier, as you may hexdump those smaller files on the terminal instead of loading a multi-gigabyte file into the hex editor of your choice
  • Fragmentation of sparse images is reduced somewhat (probably not an issue for SSD backed files)
  • Growing disks is easy: just append a segment
  • Shrinking disks should be possible with help of the guest operating system: if it is able to clear the end of the disk of any data, you could just delete a segment.

So splitting was implemented, and it was rather easy to reason about: divide the disk offset by the segment size to find the segment, and use a modulo operation to get the in-segment address. There's one catch: I had to revert from using preadv and pwritev to regular reads and writes. Usually you really want those v functions, as they execute multiple read or write requests in one system call and are therefore atomic. But these functions only work with one file descriptor, and our reads may span multiple segments and thus multiple file descriptors.
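To make the arithmetic concrete, here is a minimal sketch of that translation. The names (to_segment_addr, segment_size and so on) are illustrative, not the actual identifiers used in blockif.c:

#include <stdint.h>

/* Illustrative only: translate a linear disk offset into a segment
   index plus an offset inside that segment. */
struct segment_addr {
    int      segment;   /* which backing file (hdd.img.0000, .0001, ...) */
    uint64_t offset;    /* byte offset inside that segment */
};

static struct segment_addr
to_segment_addr(uint64_t disk_offset, uint64_t segment_size)
{
    struct segment_addr a;
    a.segment = (int)(disk_offset / segment_size);
    a.offset  = disk_offset % segment_size;
    return a;
}

A request that crosses a segment boundary therefore has to be issued as two separate reads or writes on two different file descriptors, which is exactly why the preadv/pwritev shortcut no longer applies.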

To make the thing easier and configurable I introduced two additional parameters for the block device configuration:

  • size: the size of the backing file for the virtual disk
  • split: the segment size. size should be a multiple of split to avoid wasting space.

You may use suffixes like k, m, g (as with the RAM setting) to avoid calculating the real byte sizes in your head ;)

Be aware: You may convert a disk from a plain to a split image either by using dd to split the image file (the exact commands are left as an exercise to the reader), or by setting split to the old size of the image and size to a multiple of split, effectively growing the disk to a multiple of its old size. New segments will be created automatically on the next VM start.

Example config
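Based on the configuration shown in the conclusion below, a split (but not yet sparse) disk would be configured roughly like this; the slot number and path are just placeholders:

-s 4,virtio-blk,test/hdd/hdd.img,size=20G,split=1G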

Implementing sparse images

So the last step to make xhyve usable for me: don't waste my disk space.

There are multiple ways to implement efficient sparse images; I went for the following:

  • Only save sectors that contain actual data and not only zeroes
  • Minimum allocation size is one sector
  • Maintain a lookup table that references where each sector is saved in the image
  • Deallocation of a sector (e.g. one overwritten with zeroes) is only handled offline by a shrink-disk tool

So what does such a lookup table look like?

A sparse disk lookup table is just an array of 32-bit unsigned integers, one for each sector. If you want to read sector 45, you take the value at array position 45, multiply it by the sector size and seek to that address in the image segment to read the data. Simple, isn't it?

In the current implementation the lookup table is written to a separate file with the extension .lut; all writes to this file are synchronous. The segment files are initially created with zero length, and when the guest OS starts writing data, each new sector is appended to the respective segment file and its offset is written to the lookup table.

The lookup table starts out as an array filled with UINT32_MAX values (0xffffffff), the marker meaning that this sector is not yet in the image and should therefore be returned as a series of zeroes. If a read finds an entry other than that marker, the corresponding data is read from the segment file.
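A minimal sketch of that read path (again with illustrative names, not the real blockif.c code):

#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define SECTOR_FREE UINT32_MAX   /* marker: sector not yet allocated */

/* Illustrative only: read one sector through the lookup table.
   `lut` is the table for this segment, `fd` its backing file. */
static int
sparse_read_sector(int fd, const uint32_t *lut, uint64_t sector,
                   void *buf, size_t sector_size)
{
    uint32_t entry = lut[sector];

    if (entry == SECTOR_FREE) {
        /* never written: the guest just sees zeroes */
        memset(buf, 0, sector_size);
        return 0;
    }

    /* otherwise the data lives at entry * sector_size in the segment */
    if (pread(fd, buf, sector_size, (off_t)entry * sector_size) < 0)
        return -1;
    return 0;
}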

All lookup tables for all segment files are appended to the .lut file, so it contains multiple tables, not just one. The upside is that a 32-bit entry can address just under 2^32 sectors, so the maximum segment size is about 4 billion times the sector size. If you use an SSD as backing storage you should probably configure a sector size of 4 KB, as that is the native sector size of most SSDs and yields better performance. That results in a maximum segment size of about 16 TB, and I have never heard of a Mac with that much storage. (If yours has, please send me a photo.)

Writes of new sectors (those appended to a segment file) are synchronous, to avoid ending up with two sectors at the same address. Other writes behave as the user configured them on the command line.
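The write path is the mirror image: a sector that already has a slot is overwritten in place, a new sector is appended and its slot recorded in the table. A sketch, continuing the illustrative code above (the real .lut file concatenates the tables of all segments, so an additional per-segment base offset would apply):

/* Illustrative only: write one sector, allocating it on first use.
   `lut_fd` is the .lut file, `allocated` counts sectors already
   appended to this segment file. */
static int
sparse_write_sector(int fd, int lut_fd, uint32_t *lut, uint64_t sector,
                    const void *buf, size_t sector_size, uint32_t *allocated)
{
    uint32_t entry = lut[sector];

    if (entry == SECTOR_FREE) {
        /* first write: append to the end of the segment file ... */
        entry = (*allocated)++;
        if (pwrite(fd, buf, sector_size, (off_t)entry * sector_size) < 0)
            return -1;
        fsync(fd);
        /* ... then persist the new mapping synchronously, so the same
           address can never be handed out twice */
        lut[sector] = entry;
        if (pwrite(lut_fd, &entry, sizeof(entry),
                   (off_t)(sector * sizeof(entry))) < 0)
            return -1;
        fsync(lut_fd);
        return 0;
    }

    /* already allocated: overwrite in place with the caching behavior
       the user configured on the command line */
    return pwrite(fd, buf, sector_size, (off_t)entry * sector_size) < 0 ? -1 : 0;
}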

To enable sparse disk images just add sparse as a parameter to your configuration.

Sparse config example
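The configuration I ended up using (also shown in the conclusion below) simply adds sectorsize and sparse to the split example from above:

-s 4,virtio-blk,test/hdd/hdd.img,sectorsize=4096,size=20G,split=1G,sparse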

Be aware: You'll have to recreate your disk image to benefit from this setting; sparse disks are not compatible with plain disks.

Conclusion

I used this configuration:

-s 4,virtio-blk,test/hdd/hdd.img,sectorsize=4096,size=20G,split=1G,sparse

So this is how it looks on disk:

$ ls -lah
total 2785264  
drwxr-xr-x  23 dark  staff   782B Jan 16 01:27 .  
drwxr-xr-x   9 dark  staff   306B Jan 16 01:27 ..  
-rw-rw----   1 dark  staff   531M Jan 16 02:50 hdd.img.0000
-rw-rw----   1 dark  staff    12K Jan 16 01:18 hdd.img.0001
-rw-rw----   1 dark  staff   5.9M Jan 16 02:50 hdd.img.0002
-rw-rw----   1 dark  staff    24K Jan 16 01:18 hdd.img.0003
-rw-rw----   1 dark  staff   110M Jan 16 02:48 hdd.img.0004
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0005
-rw-rw----   1 dark  staff   172M Jan 16 02:50 hdd.img.0006
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0007
-rw-rw----   1 dark  staff   207M Jan 16 02:50 hdd.img.0008
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0009
-rw-rw----   1 dark  staff    15M Jan 16 02:48 hdd.img.0010
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0011
-rw-rw----   1 dark  staff    46M Jan 16 01:24 hdd.img.0012
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0013
-rw-rw----   1 dark  staff   151M Jan 16 02:50 hdd.img.0014
-rw-rw----   1 dark  staff    12K Jan 16 01:18 hdd.img.0015
-rw-rw----   1 dark  staff    97M Jan 16 02:50 hdd.img.0016
-rw-rw----   1 dark  staff   5.3M Jan 16 02:48 hdd.img.0017
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0018
-rw-rw----   1 dark  staff   4.0K Jan 16 01:15 hdd.img.0019
-rw-rw----   1 dark  staff    20M Jan 16 02:48 hdd.img.lut

The dump you see here is of an image onto which I just installed Ubuntu Server 15.10.
Instead of wasting 20 GB of space, this one only needs about 1.3 GB. Speed is about the same as before (with an SSD as backing storage) but may suffer severely on a spinning rust disk, as there are far more seeks once the sectors become fragmented.

Where to get it

Currently you will have to compile it yourself: just fetch the sparse-disk-image branch from https://github.com/dunkelstern/xhyve and run make.

Post mortem analysis of Swift server code

Just a very quick idea of how you could handle server-side crashes of a Swift binary. Swift itself has no stack unwinding functions that you could use for debugging purposes, but lldb has.

So what if the crashing program attached lldb to itself and created stack traces before vanishing into nirvana?

import Darwin

private func signalHandler(signal: Int32) {  
    // need my pid for telling lldb to attach to parent
    let pid = getpid()

    // create command file
    let filename = "/tmp/backtrace.\(pid)"
    var fp = fopen(filename, "w")
    if fp == nil {
        print("Could not open command file")
        exit(1)
    }

    // attach to pid
    var cmd = "process attach --pid \(pid)\n"
    fputs(cmd, fp)

    // backtrace for all threads
    cmd = "bt all\n"
    fputs(cmd, fp)

//    // save core dump
//    cmd = "process save-core coredump\n"
//    fputs(cmd, fp)

    // kill the process
    cmd = "process kill\n"
    fputs(cmd, fp)

    // delete the command file
    cmd = "script import os\nscript os.unlink(\"\(filename)\")\n"
    fputs(cmd, fp)

    // quit lldb
    cmd = "quit\n"
    fputs(cmd, fp)
    fclose(fp)

    // add signal type to backtrace.log header
    fp = fopen("backtrace.log", "w")
    if fp == nil {
        print("Could not open log file")
        exit(1)
    }
    fputs("Signal \(signal) caught, executing lldb for backtrace\n", fp)
    fclose(fp)

    // run lldb
    let command = "/Library/Developer/Toolchains/swift-latest.xctoolchain/usr/bin/lldb --file \"\(Process.arguments[0])\" --source \"\(filename)\" >>backtrace.log"
    system(command)
    exit(1)
}

// Install signal handler
signal(SIGILL, signalHandler)  
signal(SIGTRAP, signalHandler)  
signal(SIGABRT, signalHandler)  
signal(SIGSEGV, signalHandler)  
signal(SIGBUS, signalHandler)  
signal(SIGFPE, signalHandler)

// Now crash
print("Hello, World!")  
var forcedUnwrap: Int! = nil

print(forcedUnwrap)  

This code traps all fatal error signals and calls lldb with a small command file, generated by the crashing program itself, which looks like this:

process attach --pid <my_pid>  
bt all  
process kill  
script import os  
script os.unlink("<command_file>")  
quit  

So it attaches lldb to the PID, fetches a backtrace for all threads, kills the parent process and deletes the command file. Log output is redirected to backtrace.log and contains all lldb output.

Additionally you could include process save-core coredump to write a core dump to the current directory, which can be loaded later for further inspection. But beware: a core dump for our simple program above will be around 500 MB and take about 30 seconds to write to disk (SSD).

You can load the core dump like this:

lldb -f <binary> -c coredump  

Now you can inspect memory and variables as if the program had crashed while the debugger was attached.

The cool thing is that this code does not disable the Xcode-internal debugger, so you still get the usual EXC_BAD_INSTRUCTION when running the code in Xcode.

Useful Xcode breakpoints

Here I will document useful breakpoints for developing for OS X or iOS with Xcode. This is primarily for me, to remember what is useful, as I keep googling some of these all the time.

It's sad that there are no "breakpoint templates" that automatically apply to every Xcode project you'll ever create. But enough of the introductory words, here comes the list:

Objective C exception breakpoint

Obviously the most important breakpoint there is; this one has a "template" of some sort, as it has its very own menu entry:

Objective C Exception Breakpoint

So far so good, but the annoying thing is that the exception message will not be printed when the breakpoint is reached, only if you continue (and crash your program for real, which of course voids your stack backtrace).

So add this to the default:

i386/iOS simulator (32Bit)

Objective C Exception Breakpoint modifications

Just add the following actions:

  • po *(id*)($esp+4)

which will print for example:

-[__NSCFConstantString characterAtIndex:]: Range or index out of bounds
(lldb) 

x86_64/iOS simulator (64Bit)

Use the following actions:

  • po $rdx
  • po $rcx
  • po $r8

iOS device (ARM/ARM64)

Use these:

  • po $r2
  • po $r3
  • po *(id*)($sp)

If all this is too verbose for you, look at the prebuilt lldb script from here: http://qwan.org/2013/06/18/how-to-snatch-the-error-code-from-the-trap-frame-in-xcode/

Memory errors

Attention: This may be superseded by the new Address Sanitizer in Xcode 7

Sometimes you find a nasty memory error where it is not at all obvious where it came from. Mostly it appears somewhere far away from the original error, because the stack was smashed or a buffer overflow bled into neighboring variables and overwrote something.

To find those errors you'll usually first enable all the memory protection error loggers that are available:

Memory debugging

And now add a symbolic breakpoint on malloc_error_break to catch the offender that is smashing your stack right away (or at least very near the cause).

Core Graphics Errors

All Core Graphics error logging goes through the function CGPostError, so it is sensible to add a breakpoint at exactly that location. It will break on anything Core Graphics spits out. You know, those "XXX tried to draw to a nil context" errors can be hard to find if a lot is going on in your application at the same time.

Slow loading views

Perhaps because you used a wrong font name? Who knows! Set a symbolic breakpoint on CTFontLogSuboptimalRequest to be notified when Core Text does not find the font directly and has to resort to a detailed search (which is very slow; on an iPad 4 it takes about 3 seconds to complete).

Auto layout

This one is obvious when it happens. But it's better to catch it directly instead of waiting for one to happen, so for completeness: set a symbolic breakpoint on UIViewAlertForUnsatisfiableConstraints.
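If you prefer the lldb console over the Xcode breakpoint navigator, the symbolic breakpoints from the last few sections can also be set during a debug session like this:

(lldb) breakpoint set --name malloc_error_break
(lldb) breakpoint set --name CGPostError
(lldb) breakpoint set --name CTFontLogSuboptimalRequest
(lldb) breakpoint set --name UIViewAlertForUnsatisfiableConstraints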

Improve the debugger

Sometimes lldb can be quite stubborn:

(lldb) p self.window.bounds
error: property 'bounds' not found on object of type 'UIWindow *'
error: 1 errors parsing expression

But this can be fixed:

(lldb) expr @import UIKit
(lldb) p self.window.bounds
(CGRect) $4 = (origin = (x = 0, y = 0), size = (width = 375, height = 667))

Wow, much better. But what has this to do with breakpoints, you may ask? Just try the following: set a new breakpoint in your app delegate that is hit immediately after starting your application, set it to auto-continue, and add the action expr @import UIKit.

The next run of your app will stop briefly at that breakpoint, execute the command and continue running. If you hit any other breakpoint (or press pause), the command has already been executed and lldb knows everything about UIKit! Great!

(You may want to @import Foundation and @import CoreGraphics too)

Xcode build phases

This is just a short brain dump of what to put into each and every project's build phases:

Show all TODO: and FIXME: comments as warnings

Based on a blog article by Jeffrey Sambells, put this as a Run Script build phase into your build process:

TAGS="TODO:|FIXME:|@todo|@fixme|HACK:|@hack"  
echo "searching ${SRCROOT} for ${TAGS}"  
find "${SRCROOT}/Source" \( -name "*.h" -or -name "*.m" \) -print0 | xargs -0 egrep --with-filename --line-number --only-matching "($TAGS).*\$" | sed -e 's/\([^ ]*[0-9][0-9]*:\)\(.*\)/\1 warning: \2/'  

My changes include the doxygen @-tags and replace the perl invocation with a sed script that works better.

Automatically build documentation

Install the awesome appledoc to build better documentation than doxygen would (all with the same comment markup ;) )

Best install it not via Homebrew but via the install script in the git repo (a Homebrew install comes without the templates).

Add a new custom target, name it documentation and add this script to the build phases:

cd "${SRCROOT}"  
if [ -x /usr/local/bin/appledoc ] ; then  
    /usr/local/bin/appledoc . 2>&1 |sed -e '/warning: Ignoring/d' >&2
else  
    echo "error: appledoc not installed" >&2
    exit 1
fi  

You'll need a config-file named AppledocSettings.plist in your $(SRCROOT):

<?xml version="1.0" encoding="UTF-8"?>  
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">  
<plist version="1.0">  
<dict>  
    <key>--ignore</key>
    <array>
        <string>directory/to/ignore</string>
    </array>
    <key>--project-name</key>
    <string>fancy-project</string>
    <key>--project-company</key>
    <string>example</string>
    <key>--company-id</key>
    <string>com.example</string>
    <key>--create-docset</key>
    <false/>
    <key>--install-docset</key>
    <false/>
    <key>--verbose</key>
    <string>2</string>
    <key>--logformat</key>
    <string>xcode</string>
    <key>--input</key>
    <array>
        <string>./path/to/source</string>
    </array>
    <key>--output</key>
    <string>./docs</string>
    <key>--merge-categories</key>
    <true/>
    <key>--warn-undocumented-object</key>
    <true/>
    <key>--warn-undocumented-member</key>
    <true/>
    <key>--warn-missing-arg</key>
    <true/>
</dict>  
</plist>  

Developing ghost templates

As you can see, my blog has a new theme now. I developed that theme based on the original "Casper" theme that is included with a standard installation of ghost (meaning: I kept the Handlebars templates, heavily modified them and scrapped the CSS ;) ).

I like playing with the CSS when I do web design (which is not that often anymore) and I like instant feedback, so the first thing that occurred to me was setting up an environment that lets me do exactly that.

Fortunately the tools that are currently en vogue are all based on node.js, as ghost itself is. So I took everything that could possibly help me from forums and colleagues and implemented a simple system for developing themes.

Introducing gulp and bower

Because nobody wants to write pure CSS anymore but some dialect of it (be it less or sass or whatever your preprocessor of choice is named), a new intermediate step is introduced: compiling the CSS extension language down to standard CSS.

gulp

The tool of choice for that is gulp. gulp is comparable to make (or grunt, if you're more into the javascript side of things) but has one big advantage: it does not rely on temporary files but on streams and is faster than most other (javascript-based) build systems. And the best thing is: it has a file watcher built right in. Every time you save one of the theme files in your project, gulp picks that file up and recompiles it instantly. So in principle you're writing CSS by not writing CSS (at least every other process on your system may think that).

Normally to build a gulp project you'll just call gulp with a target name (or nothing if you want the default target):

gulp default  

livereload

I don't know if you have heard of Livereload already, but if you haven't, let me tell you something: it is the best thing since automated build systems! Livereload is a small javascript shim, embedded into a webpage, that connects to a livereload server via a websocket. Every time the livereload server sees a file change, the page is signalled to reload completely or to swap out its CSS. So you get instant feedback in your browser when you save a file in your editor (or gulp changed/compiled a file for you).

It even works on mobile devices and with multiple browsers connected at the same time. So no more refreshing multiple browsers to compatibility-test a change.

bower

bower is a tool like npm: a simple package manager. This time it's not a package manager for backend or build stuff but for resources you want to include in your product (assets, CSS frameworks, javascript libraries, etc.)

It works exactly like npm and even writes a bower.json file that is very similar to npm's package.json.

To let it create a bower.json file for you:

bower init  

and answer the questions.

To install a package just call:

bower install --save inuit-normalize  

The --save argument tells bower not only to install the package but also to save it to the bower.json file for persistence.

Running the thing in ghost

Of course just editing some template files without a running ghost instance would not get us very far. Because the gulp tool is written in javascript and the gulpfile.js control file is javascript too (as the name suggests), integrating ghost into it is pretty simple.

At first we need a ghost package:

npm install --save-dev ghost  

Now we need to call a ghost server from gulpfile.js:

gulp.task('ghost', ['sass', 'js', 'templates', 'fonts', 'stuff'], function() {  
    var ghost = require('ghost');
    process.env.NODE_ENV = 'development';
    ghost({ config: __dirname + '/ghost-config.js' }).then(function (ghostServer) {
        ghostServer.start();
    });
});

If you run the ghost target of gulp now, you will be greeted by a freshly initialized ghost instance (of course a ghost-config.js has to be available in your current working directory!)

Getting a complete bundle from github

Because I invested quite some time into the gulpfile.js and ironed out some quirks (throwing errors in gulp crashed the ghost instance, and the livereload server and ghost fought over the express server instance, for example), I uploaded the whole thing to github.

Get the ghost theme developers kit on github and checkout the dunkelstern_theme branch for a version of the theme on this blog as an example.

Setup instructions

If you're on a Mac open a terminal and do the following:

  • Get Homebrew if you don't have it already:
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"  
  • Install node.js
brew install node  
  • Get a checkout of the git repository (or download as zip and unpack) and change to that directory (type cd, space, and drag and drop the folder to the terminal, then press return)
  • Get gulp and bower
npm install -g gulp  
npm install -g bower  
  • Get all needed node.js modules and assets:
npm install  
bower install  
  • If you have Sublime Text, open the project file; otherwise look into the directory for *.scss and *.hbs files
  • If you already know Sass and Handlebars, that is very good; otherwise visit the ghost theme tutorial
  • In your terminal call the gulp livereloader
gulp livereload  
  • If everything worked you should see a preconfigured ghost instance in your browser if you visit http://localhost:2368/
  • Change something and see it refresh in your browser... it's magic!
  • If you produce an error, the terminal will give you an exact error message and the browser will not reload the broken file.

Installing Ghost via Docker

My blog is running on Ghost. Ghost is cool, but the thing is: most people do not know about node.js or how to set up a node.js application.

But you don't have to!

Virtualization or Containers

You could run a virtual machine with Xen or VirtualBox and just use an appliance that somebody already created for you. But this does not scale when you want to run more than a couple of instances on a server.

So how to set up a server without exactly knowing what to run (and how) but while not using a virtual machine? The answer to that could be Docker.

1. Setup Docker on Ubuntu (14.04+)

This, like all other steps in this guide, is straightforward:

$ sudo apt-get install docker.io

Docker is a very lightweight pseudo-virtualization framework built on the foundation of Linux cgroups and chroot jails; think of it as a kind of changeroot environment that is even stricter than a normal chroot jail. It restricts access to all resources that are not part of the jail. You will see the processes running when you look at the process list from outside the jail, and you may even be able to kill them, but from inside the container you will not be able to access anything on the outside.

So the overhead of the container running the ghost service will be minimal. It needs somewhat more resources than running the node server directly (shared libraries have to be loaded twice, for example, and a minimal OS image is installed too), but a good server will be able to run quite a few instances without breaking down.

2. Get the ghost docker image

The next step is downloading the official docker image for the ghost service:

$ sudo docker pull dockerfile/ghost

This will take a while because the ghost docker image is a layered image. Docker creates a differential image for each action that is run while provisioning the container, so you can see what every single command did to the container and roll back a few steps if something went wrong or you have to update a single component in the history.

If you want an overview of what exactly happened to an image over time, run the history command on it:

$ sudo docker history dockerfile/ghost
IMAGE               CREATED             CREATED BY                                      SIZE  
7fd9622ac59b        11 days ago         /bin/sh -c #(nop) EXPOSE map[2368/tcp:{}]       0 B  
e052bd394625        11 days ago         /bin/sh -c #(nop) CMD [bash /ghost-start]       0 B  
8951c214a253        11 days ago         /bin/sh -c #(nop) WORKDIR /ghost                0 B  
083fc6513232        11 days ago         /bin/sh -c #(nop) VOLUME ["/data", "/ghost-ov   0 B  
50b7f9b06075        11 days ago         /bin/sh -c #(nop) ENV NODE_ENV=production       0 B  
34de0ad45f77        11 days ago         /bin/sh -c #(nop) ADD file:27b2fabfe632ee15b9   880 B  
abe6497bce46        11 days ago         /bin/sh -c cd /tmp &&   wget https://ghost.or   71.38 MB  
65f8a4200da9        11 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B  
5c85a5ac1a37        11 days ago         /bin/sh -c #(nop) WORKDIR /data                 0 B  
7a5fa70ca2f3        11 days ago         /bin/sh -c cd /tmp &&   wget http://nodejs.or   17.73 MB  
1d73c42b2c8b        12 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B  
b80296c7dcea        12 days ago         /bin/sh -c #(nop) WORKDIR /data                 0 B  
b90d7c4116a7        12 days ago         /bin/sh -c apt-get update &&   apt-get instal   56.41 MB  
036f41962925        12 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B  
6ca8ad8beff9        12 days ago         /bin/sh -c #(nop) WORKDIR /root                 0 B  
caa6e240bc5e        12 days ago         /bin/sh -c #(nop) ENV HOME=/root                0 B  
95d3002f2745        12 days ago         /bin/sh -c #(nop) ADD dir:a0224129e16f61bf5ca   80.57 kB  
2cfc7dfeba2d        12 days ago         /bin/sh -c #(nop) ADD file:20736e4136fba11501   532 B  
e14a4e231fad        12 days ago         /bin/sh -c #(nop) ADD file:ea96348b2288189f68   1.106 kB  
4c325cfdc6d8        12 days ago         /bin/sh -c sed -i 's/# \(.*multiverse$\)/\1/g   221.9 MB  
9cbaf023786c        12 days ago         /bin/sh -c #(nop) CMD [/bin/bash]               0 B  
03db2b23cf03        12 days ago         /bin/sh -c apt-get update && apt-get dist-upg   0 B  
8f321fc43180        12 days ago         /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/   1.895 kB  
6a459d727ebb        12 days ago         /bin/sh -c rm -rf /var/lib/apt/lists/*          0 B  
2dcbbf65536c        12 days ago         /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic   194.5 kB  
97fd97495e49        12 days ago         /bin/sh -c #(nop) ADD file:84c5e0e741a0235ef8   192.6 MB  
511136ea3c5a        16 months ago                                                       0 B  

As you can see, the official docker image for ghost builds on a rather old version of Ubuntu, about 16 months old (it's 12.04 LTS; the old version is used because most docker images are currently based on it, so if you run multiple different images you'll only spend those 192.6 MB once, as the base layer can be recycled).

3. Setup a user to run ghost from

Of course you could run the ghost docker container from your default user on the system, but I prefer to separate services by user, so I created a new user on the system:

$ sudo adduser ghost

You'll have to put that user into the docker group so it is able to run docker commands:

$ sudo usermod -aG docker ghost

4. Download latest ghost and themes

Everything we do from this step on will be run as the new ghost user, so just switch over:

$ sudo su - ghost

Fetch the latest ghost release zip to get our hands on the example configuration file and the default casper theme:

$ wget https://ghost.org/zip/ghost-latest.zip
$ mkdir ghost-latest
$ cd ghost-latest
$ unzip ../ghost-latest.zip

Now we copy the important parts over to the data directory that the ghost instance will use:

$ cd
$ mkdir mysite.com
$ cd mysite.com
$ cp -r ../ghost-latest/content .
$ cp ../ghost-latest/config.example.js config.js

Now we have a working directory structure and an example configuration file that we can adapt to our preferred settings.

5. Configuring ghost

The simplest thing we can do is edit the example config we just copied and add our domain name to it (config->production->url):

var path = require('path'),  
    config;

config = {  
    production: {
        url: 'http://mysite.com',
        mail: {},
        database: {
            client: 'sqlite3',
            connection: {
                filename: path.join(__dirname, '/content/data/ghost.db')
            },
            debug: false
        },

        server: {
            host: '0.0.0.0',
            port: '2368'
        }
    }
};

module.exports = config;

Be sure to set the config->production->server->host setting to 0.0.0.0, otherwise you will not be able to access the server from outside later on.

6. Running ghost in docker

We pulled the docker image in step 2, now let's use it:

$ docker run -d -p 2300:2368 -v $HOME/mysite.com:/ghost-override dockerfile/ghost

If everything worked, we can now access the ghost instance at http://localhost:2300 or, better yet, http://mysite.com:2300

If something went wrong, we can inspect the log of the ghost instance by running the docker logs <id> command. For that we have to find out the <id> part:

$ docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                              NAMES  
e425352a4b49        dockerfile/ghost:latest   bash /ghost-start   2 minutes ago        Up 2 minutes         2300/tcp, 0.0.0.0:2300->2368/tcp   mad_galileo  

You can use the container ID or the name of the container for all commands that will accept an ID.

$ docker logs -f mad_galileo

If you specify the -f flag, as in the example, the log viewer works like tail -f (it follows the log as new entries are appended); if you omit -f it just dumps what has been logged so far and exits.

7. Setting up a reverse proxy to route multiple domains

Normally, if only one instance runs on the server, we could just change the port 2300 in the docker command to 80 and be done. But the whole point of this setup is to have multiple instances running on the same server, sharing port 80 for different domains.

As we cannot bind more than one docker container to a single port, we have to set up a reverse proxy that listens on port 80 and routes to the different ghost instances for us.

I chose varnish for that as it is easy to configure.
For installation of varnish we change back to a user that can use sudo.

7a. Install varnish

Varnish is in the standard ubuntu repositories, so installation is just one apt-get away:

$ sudo apt-get install varnish

The default installation of varnish listens on a somewhat arcane port so it does not interfere with apache. As we want varnish to do the routing, we reconfigure it to listen on port 80 and move apache out of the way.

To do that:

7b. Move apache out of the way

Just change the listening port of apache to some other port not currently in use and make it listen only on 127.0.0.1, as we don't need to reach apache directly from the outside. You'll find the configuration for that in /etc/apache2/ports.conf:

NameVirtualHost 127.0.0.1:8080  
Listen 127.0.0.1:8080

<IfModule mod_ssl.c>  
    NameVirtualHost *:443
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>  
    NameVirtualHost *:443
    Listen 443
</IfModule>  

Also we have to change all the VirtualHost statements in /etc/apache2/sites-available/* to reflect this change:

<VirtualHost 127.0.0.1:8080>  

Do not reload the apache config yet, as your sites would go offline, and we don't want to interrupt anybody, do we?

7c. Setup varnish to run on port 80

To make varnish listen on port 80 instead of 6081, we have to change /etc/default/varnish:

DAEMON_OPTS="-a :80 \  
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

7d. Setup routing in varnish to make apache available again

We don't want to cut off the apache daemon, so that "normal" websites keep working alongside our new ghost instances. To make that possible we set up a default route in /etc/varnish/default.vcl:

backend apache {  
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {  
}

If you now reload the apache config and afterwards restart varnish, all sites should be accessible as they were before you made the change:

$ sudo service apache2 restart
$ sudo service varnish restart

If everything is working so far we are ready to add the ghost instances:

7e. Setup routing to ghost instances

As we did for apache, we just add a few rules to /etc/varnish/default.vcl and restart varnish afterwards:

backend ghost_mysite_com {  
        .host = "127.0.0.1";
        .port = "2300";
}

sub vcl_recv {  
        if (req.http.host ~ "mysite.com") {
                set req.backend = ghost_mysite_com;
                return(pass);
        }
}

If you don't want the varnish server to act as a cache (which it is very good at), you can use return(pipe) instead to disable that.

Be sure the apache backend is always the first backend defined, because that is the default varnish falls back to if no rule in vcl_recv matches and redirects to another backend.

8. Make it permanent

If your server reboots, the currently running docker containers will not be started again by default, so we add the command to run them to /etc/rc.local, which is executed automatically on boot:

su ghost -c 'docker run -d -p 2300:2368 -v /home/ghost/mysite.com:/ghost-override dockerfile/ghost'  

Just append that line before the exit 0 statement at the end.

9. Make it a farm

Now what if you want to run multiple instances?

  • Add a new data directory in /home/ghost (see steps 4 and 5)
  • Run a new docker instance on a different port (step 6 but change the 2300)
  • Add a rule to /etc/varnish/default.vcl for that instance (step 7e, change the 2300 again)
  • Add a line to /etc/rc.local, use the changed command line from step 6 (or change the 2300 again ;) ). A complete example for a second instance follows below.
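Put together, a hypothetical second site othersite.com on port 2301 would boil down to another line in /etc/rc.local:

su ghost -c 'docker run -d -p 2301:2368 -v /home/ghost/othersite.com:/ghost-override dockerfile/ghost'

and another backend plus rule in /etc/varnish/default.vcl:

backend ghost_othersite_com {
        .host = "127.0.0.1";
        .port = "2301";
}

sub vcl_recv {
        # ... existing rules ...
        if (req.http.host ~ "othersite.com") {
                set req.backend = ghost_othersite_com;
                return(pass);
        }
}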