A 5-post collection

RTMP Streaming Server

We all know Twitch and YouTube streaming, and some may know Hitbox. But what if you wanted to implement your own streaming server?

So what are the actual benefits compared to just using the mentioned services?

  • Private streams, no one can see them, no one can find them
  • Record everything without additional cost
  • Define which qualities the stream will be transcoded to
  • Embed streams on your own websites
  • May run "offline" just in your home for surveillance uses.

But there are disadvantages too:

  • Needs to run on a rented or owned box (so it will probably cost something)
  • Machine with the server needs to be powerful
  • Probably does not scale to many viewers

So my intended purposes were watching my 3D printer from work (or even from my phone when not at home), to be able to shut it down if it produces garbage, and helping some friends that stream to Twitch. So what? They stream to Twitch and need their own streaming server, why? That's easy: they play games together and wanted to show both screens side by side. So one of them bounces their stream to my server and the other one imports that stream in XSplit to send it to Twitch. You could probably do that with two Twitch streams, but then the latency is bad. Over the setup I provide, the latency is about 2-4 seconds; via Twitch it is more like 30 seconds.

What I tried

The first thing I tried was to search the APT repositories on Ubuntu for streaming servers, I found two:

  • crtmpserver
  • red5-server

While trying to set up crtmpserver I gave up after trying to find documentation for it (all websites and the Trac were down).

Red5 is written in Java, so it fell out of consideration because I don't know my way around Java and Maven.

So there I was, searching for alternatives. I found one that is really straightforward, but it seemed the Git repo had been abandoned some time ago. Then I looked whether there were any forks in active development. Oh boy, there were many, about 1600; GitHub just gave up displaying the network graph and I had to search for the most active and stable fork myself.

Github Screenshot

Then I found one:

Thanks, Sergey :)

How to get it running

The project is an extension module to nginx. I know my way around nginx configuration so you can feel my joy!

The bad thing is that you have to recompile the complete nginx server, but as I only wanted to use it for streaming and run the Ubuntu default nginx for my web needs, that was no problem. Both nginx instances can coexist on the same machine. (There are three now: one for web, one for GitLab and one for streaming.)

So let's build the thing:

# create a user
addgroup --system rtmp
adduser --system --home /srv/stream --group rtmp rtmp

# prepare build dir
mkdir -p src
cd src

# use stable nginx, the module does not compile with mainline
git clone

# build new nginx
tar xvzf nginx-1.10.1.tar.gz
cd nginx-1.10.1
./configure \
    --prefix=/opt/nginx-stream \
    --conf-path=/etc/nginx-stream/nginx.conf \
    --pid-path=/run/nginx-stream/nginx.pid \
    --lock-path=/run/nginx-stream/nginx.lock \
    --http-log-path=/var/log/nginx-stream/nginx.log \
    --error-log-path=/var/log/nginx-stream/error.log \
    --user=rtmp \
    --group=rtmp \
    --with-threads \
    --with-ipv6 \
    --with-http_ssl_module \
    --with-http_flv_module \
    --with-http_mp4_module \
    --with-http_gzip_static_module \
    --with-stream \
    --with-stream_ssl_module \
    --add-module=../nginx-rtmp-module # path assumes the module was cloned next to the nginx source
make
sudo make install


We replace the complete configuration that is installed in /etc/nginx-stream/nginx.conf:
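A stripped-down sketch of what such a configuration can look like (the application names, paths and ports here are my own illustrative choices, not the exact config from this post; the rtmp directives come from the nginx-rtmp-module documentation):

```shell
# Sketch of /etc/nginx-stream/nginx.conf, written to the current directory for illustration
cat > nginx.conf <<'EOF'
worker_processes 1;
events { worker_connections 128; }

rtmp {
    server {
        listen 1935;
        application live {
            live on;                       # accept live streams
            hls on;                        # also package them as HLS
            hls_path /srv/stream/hls;
        }
        application vod {
            play /srv/stream/recordings;   # serve recorded files on demand
        }
    }
}

http {
    server {
        listen 8080;
        location /hls  { root /srv/stream; }
        location /stat {
            rtmp_stat all;                 # the XML status page
            rtmp_stat_stylesheet style.xsl;
        }
        location /style.xsl { root /srv/stream/html; }
    }
}
EOF
```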

Attention: This allows everyone to publish streams to your server, call control endpoints and see the status page. If you want authentication you'll have to configure it; see the nginx documentation for details. I omitted the authentication stuff because it is another source of error when testing the setup.

We use the path /srv/stream/ for the user's home directory and for storage of needed html and media files (when recording).

To display the status page in a browser we have to give it an XSLT script to convert the XML the streaming server emits into something a browser can handle. We put the file into /srv/stream/html/style.xsl:

I know it is not the most elegant XSLT (neither is the HTML/CSS it generates) but it works for me.

Test-Run it

Just call sudo /opt/nginx-stream/sbin/nginx to start the server. You may stop the server by calling sudo pkill -F /run/nginx-stream/nginx.pid

Of course you may write a configuration or start-script for your preferred init system now, but for testing purposes the two commands above should do it.
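If you go the systemd route, a minimal unit could look like the following (unit name, PID file path and the install helper are my guesses based on the build flags above, not a tested unit; the file is written to the current directory here, it would go to /etc/systemd/system/):

```shell
cat > nginx-stream.service <<'EOF'
[Unit]
Description=nginx RTMP streaming server
After=network.target

[Service]
Type=forking
PIDFile=/run/nginx-stream/nginx.pid
# make sure the runtime directory exists before starting
ExecStartPre=/usr/bin/install -d -o rtmp -g rtmp /run/nginx-stream
ExecStart=/opt/nginx-stream/sbin/nginx
ExecReload=/opt/nginx-stream/sbin/nginx -s reload
ExecStop=/opt/nginx-stream/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
EOF
```

After copying it into place, systemctl daemon-reload followed by systemctl enable --now nginx-stream would start it on boot.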

Publish a stream

To test the system I used OBS Studio as it is available for all major OSes.

OBS screenshot of streaming settings

Now click on Start streaming:

OBS start streaming

When everything went right no error message should pop up and the status bar should change to display the current bandwidth usage.

To check if everything is right on the server, open the status page in your browser; it should look like this:

Stat page screenshot

As you can see there are some links you may click. The first one (Drop) cancels the stream (but OBS won't notice and will keep sending data to your server, which is unfortunate), the second (Start Recording) switches on recording mode. When you click the record link the response will be the path that will be recorded to.

Subscribe to a stream

We have two applications defined:

  • Live stream
  • Video on demand

How to watch a live stream

Via RTMP, use the following URL (in VLC for example):<stream key>

Via HLS (HTML5 browser variant), fetch the playlist here:<stream key>.m3u8
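With placeholder values filled in, the two URL shapes work out to something like this (server name, stream key, the application name live and the HLS location are illustrative assumptions; use whatever your nginx config defines):

```shell
# Hypothetical server and stream key, for illustration only
server="stream.example.com"
key="test"

rtmp_url="rtmp://${server}/live/${key}"        # for VLC or other RTMP players
hls_url="http://${server}/hls/${key}.m3u8"     # for HTML5/HLS players

echo "${rtmp_url}"
echo "${hls_url}"
```

In VLC, Media -> Open Network Stream with the RTMP URL should start playback after a couple of seconds.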

How to watch a recorded stream

Currently you may only watch a recorded stream via RTMP; an HLS variant would require preparing a segmented playlist beforehand, and the nginx module does not do that for permanent storage. (Watch the blog for a future article about transcoding/HLS.)

When you click the record link (to either start or stop recording) the file name it records to is displayed; just use the last part (without the path) in the RTMP VOD URL to play that file:<filename>

Wrap up

It's not that complicated to run a simple streaming server, and the version I described here does not even use a lot of CPU as it does no transcoding. All you need is bandwidth and a streaming source that is configured to send a stream everyone can use.

Watch the blog for another article on transcoding and how to really trash your CPU performance on a streaming server ;)

IKEv2 VPN with StrongSWAN

Wow, this was harder than I thought.

I just wanted to get a modern VPN on all my devices without the hassle of installing third-party VPN clients on all of them (hello OpenVPN o/). The protocol of choice seems to be IKEv2, as all devices that I own support it and it is more secure than the old PPTP or L2TP protocols the devices support natively.

But let's just jump directly into it.

IPsec config

On the server we will be using StrongSWAN. All configuration is for Ubuntu 15.10 but should work on any distribution that ships StrongSWAN, as the configuration has not really changed in the last few years.

Install StrongSWAN

At first we need to install StrongSWAN (all steps from here on should be done as the root user, switch to root by issuing sudo su - and typing your password):

apt install strongswan strongswan-plugin-af-alg strongswan-plugin-agent strongswan-plugin-certexpire strongswan-plugin-coupling strongswan-plugin-curl strongswan-plugin-dhcp strongswan-plugin-duplicheck strongswan-plugin-eap-aka strongswan-plugin-eap-aka-3gpp2 strongswan-plugin-eap-dynamic strongswan-plugin-eap-gtc strongswan-plugin-eap-mschapv2 strongswan-plugin-eap-peap strongswan-plugin-eap-radius strongswan-plugin-eap-tls strongswan-plugin-eap-ttls strongswan-plugin-error-notify strongswan-plugin-farp strongswan-plugin-fips-prf strongswan-plugin-gcrypt strongswan-plugin-gmp strongswan-plugin-ipseckey strongswan-plugin-kernel-libipsec strongswan-plugin-ldap strongswan-plugin-led strongswan-plugin-load-tester strongswan-plugin-lookip strongswan-plugin-ntru strongswan-plugin-pgp strongswan-plugin-pkcs11 strongswan-plugin-pubkey strongswan-plugin-radattr strongswan-plugin-sshkey strongswan-plugin-systime-fix strongswan-plugin-whitelist strongswan-plugin-xauth-eap strongswan-plugin-xauth-generic strongswan-plugin-xauth-noauth strongswan-plugin-xauth-pam strongswan-pt-tls-client

This seems a bit excessive, but I just installed every plugin I could find for StrongSWAN as I am lazy.


The next step is to get rid of the default configuration and supply our own:

The best bet here is to move the default config in /etc/ipsec.conf out of the way (or delete it, as it does not contain anything of value) and paste the config above into it.

You will have to modify some values:

  • should be the hostname of the box you connect to.
  • rightsourceip should be a private IPv4 network and a subnet of the IPv6 subnet of your server (if your server got a /64 probably add another address part and use a /112 here)
  • rightdns is the DNS server that will be sent to the client; I just used Google's free DNS servers here.

If you only want to use IPv4 just remove the v6 addresses.

Packet forwarding

To allow the connected VPN clients to actually talk to each other you'll have to enable packet forwarding. If you don't do that, the clients will only be able to speak with the server.

Create a new file in /etc/sysctl.d named 99-vpn.conf:
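The file only needs to flip the forwarding switches for IPv4 and IPv6; a minimal version could look like this (written to the current directory for illustration, the real path is /etc/sysctl.d/99-vpn.conf):

```shell
cat > 99-vpn.conf <<'EOF'
# enable packet forwarding for the VPN clients
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
EOF
```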

Reload the settings with sysctl --system.

If you want to give the VPN clients Internet access you'll have to enable NAT for the interfaces and routing for IPv6. I just added these lines to /etc/rc.local; you probably want to use your distribution's default facility for iptables rules though:
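A sketch of what such rc.local lines could look like (the VPN subnets and the outgoing interface eth0 are placeholders; use the values from your rightsourceip setting). The file is written to the current directory here for illustration:

```shell
cat > rc.local <<'EOF'
#!/bin/sh -e
# NAT the IPv4 VPN subnet (placeholder) out through the public interface
iptables -t nat -A POSTROUTING -s 10.0.8.0/24 -o eth0 -j MASQUERADE
# allow forwarding for the IPv6 VPN subnet (placeholder; no NAT needed for public space)
ip6tables -A FORWARD -s 2001:db8::/112 -j ACCEPT
ip6tables -A FORWARD -d 2001:db8::/112 -j ACCEPT
exit 0
EOF
```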

Don't forget to actually run the script afterwards to enable the rules without rebooting. (D'oh!)

Generating certificates

To be on the real secure side and to make device provisioning as easy as possible we use X.509 certificates to connect to the VPN.
There are 3 sets of certificates:

  • The root CA
  • The VPN server certificate
  • The client certificates

Switch to /etc/ipsec.d and run all the following in that directory.

Root CA

For a CA we need a key first (we pick a 4096 bit long RSA key here):

ipsec pki --gen --type rsa --size 4096 --outform der > private/strongswanKey.der
chmod 600 private/strongswanKey.der

So let's create a root CA:

ipsec pki --self --ca --lifetime 3650 --in private/strongswanKey.der --type rsa --dn "C=DE, O=Dunkelstern, CN=Dunkelstern VPN Root CA" --outform der > cacerts/strongswanCert.der

So what's all that stuff?

  • First we tell the ipsec tool to create a self-signed CA with roughly 10 years of lifetime
  • Use the key we generated
  • Tell the pki tool some settings: The country (DE), the Organisation (Dunkelstern) and the Common Name (Dunkelstern VPN Root CA)

You should probably move all the root CA private files (the key!) off the machine after you're done with them and put them on a disk into a safe or something.

VPN Certificates

So we have a CA, but we definitely do not want to use it directly for the VPN server, so we create a derived certificate that has the root CA as parent. First we need a key again:

ipsec pki --gen --type rsa --size 4096 --outform der > private/vpnHostKey.der
chmod 600 private/vpnHostKey.der

And now comes the interesting part:

export vpn_host=""
export vpn_ipv4=""
export vpn_ipv6="::1"

ipsec pki --pub --in private/vpnHostKey.der --type rsa | ipsec pki --issue --lifetime 730 --cacert cacerts/strongswanCert.der --cakey private/strongswanKey.der --dn "C=DE, O=Dunkelstern, CN=$vpn_host" --san $vpn_ipv4 --san @$vpn_ipv4 --san $vpn_ipv6 --san @$vpn_ipv6 --flag serverAuth --flag ikeIntermediate --outform der > certs/vpnHostCert.der

You'll have to replace some values here:

  • vpn_host the hostname of the VPN server
  • vpn_ipv4 the public IPv4 address
  • vpn_ipv6 the public IPv6 address

Now it is really time to move the CA root private key ;)

Client certificates

To create client certificates I made a small script as you'll probably do this often:

Usage is something like kopernikus which will drop a p12 file in /etc/ipsec.d/p12/ with the name you supplied. The p12 file is encrypted with a pass-phrase you'll have to supply.
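A sketch of what such a script might contain (saved to a file here rather than executed; the DN values, the host part of the identity and the file layout are my assumptions based on the CA commands above, not the author's actual script):

```shell
cat > mkclient.sh <<'EOF'
#!/bin/sh -e
# usage: ./mkclient.sh <name>   (run inside /etc/ipsec.d)
name="$1"

# generate a client key
ipsec pki --gen --type rsa --size 4096 --outform der > "private/${name}Key.der"
chmod 600 "private/${name}Key.der"

# issue a certificate signed by the root CA
ipsec pki --pub --in "private/${name}Key.der" --type rsa | \
  ipsec pki --issue --lifetime 730 \
    --cacert cacerts/strongswanCert.der --cakey private/strongswanKey.der \
    --dn "C=DE, O=Dunkelstern, CN=${name}@example.com" \
    --san "${name}@example.com" \
    --outform der > "certs/${name}Cert.der"

# convert key and certs to PEM and bundle everything into an encrypted p12
openssl rsa -inform der -in "private/${name}Key.der" -out "private/${name}Key.pem"
openssl x509 -inform der -in "certs/${name}Cert.der" -out "certs/${name}Cert.pem"
openssl x509 -inform der -in cacerts/strongswanCert.der -out cacerts/strongswanCert.pem
mkdir -p p12
openssl pkcs12 -export \
  -inkey "private/${name}Key.pem" -in "certs/${name}Cert.pem" \
  -name "${name}" -certfile cacerts/strongswanCert.pem \
  -out "p12/${name}.p12"   # prompts for the export pass-phrase
EOF
chmod +x mkclient.sh
```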


iOS

To get a working VPN config onto an iOS device you'll have to use a *.mobileconfig configuration profile, as the VPN GUI of the iPhone and iPad has a bug that prevents valid connections as of iOS 9.3.

Create mobile config profile

Fetch the Apple Configurator 2 from the App Store on a Mac (it's free, but sadly there is no configurator for Windows).

After starting the configurator choose File->New Profile from the menu and fill in the generic info as you want:

Apple Configurator generic info

For the next step you'll need the p12 file and the vpnHostCert.der file to add to the certificate store:

Apple Configurator certificate store

And the last step: Configure the VPN

Apple Configurator VPN

Make sure you select the certificate auth method and the correct certificate. The Remote Identifier is what's in leftid in the ipsec.conf, the Local Identifier is what's in the Certificate's common name (machine@host).

Attention: Set the Encryption algorithm for IKE SA Params and Child SA Params to something sensible, do not use the 3DES default. 3DES is inherently unsafe!

When you're ready, save the config somewhere. But there's another step to get it running: another bug workaround!

Open the generated file in a text editor, search for OverridePrimary and set it to zero!

Install on iOS device

The easiest way to install the configuration profile is just sending it to yourself as an email and then tap the attachment and allow it to install the VPN. If your device is enrolled in MDM (Mobile device management) you can send the profile over the air.


To connect to the VPN go into Settings->General->VPN and turn on the switch. All traffic will now be sent through the VPN. If the switch turns off immediately you either forgot to set OverridePrimary to zero or you chose an encryption that the server does not understand. Look into the server logfile for more information.


Mac OS X

Just create a config file like you do for an iOS device and double-click it. It will open in the System Preferences Profiles pane (which is not visible until you import a profile).

OSX System Prefs

I had to click the plus button and add the profile again as the preference pane did open but it did not automatically import the profile.

If you want to have an icon in the menu bar, switch to the Network pref-pane and tick the checkbox to show that icon:

Show in menu bar


Windows

(UPDATE: Wait, stop right here! Read the follow-up: The right way to setup a VPN on Windows)

Oh my... Microsoft! To get it running on Windows you will have to jump over some obstacles. I am no Windows guy; perhaps there's an easier way to do it, please mail me.

Import Certificates

Run the Management Console from the Win+R box with mmc:

Management Console

Now add the Certificates snap-in (File -> Add/Remove Snap in...):

Certificates Snap in

Switch to Console Root -> Certificates -> Personal -> Certificates and import the p12 file by clicking through the import wizard at Action -> All tasks -> Import.

Now you should have two new entries:

  • machinename@host
  • XY VPN Root CA

Move the Root CA to Trusted Root Certificate Authorities -> Certificates to trust it.

Setup VPN

First open the Network and Sharing Center (best done by right-clicking on the network icon in the task bar)

Network and Sharing Center

Now set up a new Network connection:

Network connection Workplace

VPN, not dialup

VPN step 1

Now go back to the Network and Sharing Center and click on Change Adapter Settings on the left. Select your VPN connection and open the properties window.

VPN Properties

Switch to the Security pane and set the VPN type to IKEv2 and the authentication to Use machine certificates.


On the Networking pane open the properties dialog for both Internet Protocol versions and set up the DNS servers as windows does not automatically take those from the VPN connection

DNS here

The connection is now ready to be activated, but there are some bugs hidden.

Public network? Why?

If you double click to connect now (which throws you over to the modern control panel)

Connect finally

You may notice that the network is classified as a public network. If you don't want it to be public follow the next steps. Skip them if public is ok with you.

First open the policy editor by running gpedit.msc from the Win+R window. Now navigate to Local Computer Policy -> Computer Configuration -> Windows Settings -> Security Settings -> Network List Manager Policies (Why is everything so wordy?), right click the VPN connection and set the Network Location

Network Location type

Getting IPv6 to work

If you look at the VPN connection status you may notice that IPv4 Connectivity says Internet while IPv6 Connectivity tells you No network access. This is because Windows does not set up a default route through the VPN tunnel for IPv6 but depends on the router on the other end to respond to router solicitations or to send router advertisements. This could be done on the server, but no one except Windows needs it, and the error can be fixed by running a single command on the Windows box:

route -6 add ::0/0 2a01:4f8:190:2012:3::2

Where the IP on the end is the IP of your tunnel endpoint (look it up in the properties).

So how can we tell Windows to run that command for us? Let's just say what we are about to do now is a bit hacky:

  1. Write a small netsh script to run
  2. Find the "VPN connection established"-Event in the event log
  3. Attach a scheduled task to that event type to run the script.

Hacky enough? Ok let's go.

Create a new text-file with the following content:

interface ipv6
add route ::0/0 "VPN Connection" 2a01:4f8:190:2012:3::2

Replace VPN Connection with the name of your connection and the IP address with the IP address you get from the VPN server. Move that text file somewhere where you will not delete it by accident.

Now open the event viewer (Win+R run eventvwr) and navigate to
Event Viewer (Local) -> Applications and Service Logs -> Microsoft -> Windows -> NetworkProfile -> Operational (wow!)

Event viewer path

Now find the event that signifies that the VPN connection was established. It usually has the Event ID 10000 and we need an entry with the State: Connected flag set.

Log entry

Finally we attach a task to that event, which will be called every time the VPN connects.


My script was called route.nsh and I dropped it into C:\ directly.

Attention: Do not finish the wizard without telling it to open the task properties at the end, we have something to do there!

Set the Run with highest privileges checkbox.

High privilege

... turn off the energy save mode and set the task to run only if the VPN connection is available:

On battery too

Finished! If you now disconnect and then reconnect the VPN IPv6 will work through the tunnel!


coming soon.

xhyve: lightweight vm for Mac OS X

xhyve is a port of bhyve, the FreeBSD hypervisor (a qemu equivalent), to the Mac OS X Hypervisor framework. This means it has zero dependencies that are not already installed on every Mac that runs at least OS X Yosemite (10.10). The cool thing is that Mac OS X retains full control of the system at all times, as no third-party kernel driver hijacks the CPU while a VM is running, so the power management that OS X provides is always in charge of everything. No more battery-draining VMs \o/.

xhyve logo

xhyve is Open Source

This is really cool as everyone is able to hack it, and so did I. The code is, like every bit of low-level C code I have seen in my life, a bit complex to read and not heavily commented, but it is well structured, so you can easily find what you want to modify.

The project is quite young so don't expect miracles. It has for example no graphics emulation. Running Ubuntu or FreeBSD is reduced to a serial tty and networking. If you want to run a virtual server on your Mac for development purposes it's quite perfect though.

There was one downer that got me: a virtual disk of, say, 30 GB has a backing file that is exactly 30 GB big even if you only store 400 MB on it. That's bad for everyone running on an SSD, where space is still limited.

Introducing: Sparse HDD-Images for xhyve

Because the VM code is pretty small (the compiled xhyve executable is about 230 KB) I thought it might be possible for me to change this one last thing that prevented me from using xhyve on my MacBook. It turns out it is really easy to hack the virtual block device subsystem. All disk access code is neatly contained in one file, blockif.c, cleanly separated from the virtio-block and ahci drivers.

So what I went out to do was three things:

  • Split the big disk image file into multiple segments (as for why read on)
  • Make the disk image segments only store blocks that have actual content in them (vs. storing only zeroes)
  • Make xhyve create the backing image files if they do not exist.

Splitting the disk image into segments

You may ask why. This is mostly an optimization for maintaining speed and aiding debugging, but it turned out to have the following advantages:

  • Some file systems only allow files up to a maximum size (prime example: FAT32 caps files at 4 GB)
  • Sparse image lookup tables can be filled with 32 bit values instead of defaulting to 64 bit (which saves 50% space in the lookup tables)
  • Debugging is easier as you can hexdump those smaller files on the terminal instead of loading a multi-gigabyte file into the hex editor of your choice
  • Fragmentation of sparse images is reduced somewhat (probably not an issue for SSD backed files)
  • Growing disks is easy: just append a segment
  • Shrinking disks should be possible with help of the guest operating system, if it is able to clear the end of the disk of any data you could just delete a segment.

So splitting was implemented and was rather easy to devise: just divide the disk read offset by the segment size to get the segment index, and use a modulo operation to get the in-segment address. There's one catch: I had to revert from using preadv and pwritev to regular reads and writes. Usually you really want those v functions as they allow executing multiple read and write operations in one system call, thus being atomic. But these functions only work on one file descriptor, and our reads may span multiple segments and thus multiple file descriptors.
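The addressing math from the paragraph above in a nutshell (segment size and offset are example numbers, not xhyve defaults):

```shell
# example: byte offset 3.5 GiB into the virtual disk, 1 GiB segments
offset=$((7 * 512 * 1024 * 1024))   # 3.5 GiB
split=$((1024 * 1024 * 1024))       # 1 GiB segment size

segment=$((offset / split))         # index of the segment file
in_segment=$((offset % split))      # byte offset inside that segment
echo "$segment $in_segment"
```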

To make the thing easier and configurable I introduced two additional parameters for the block device configuration:

  • size the size of the backing file for the virtual disk
  • split the segment size. size should be a multiple of split to avoid wasting space.

You may use suffixes like k, m, g like on the RAM settings to avoid calculating the real byte sizes in your head ;)

Be aware: You may convert a disk from a plain to a split image either by using dd to split the image file (exact commands are left as an exercise to the reader), or by setting split to the old size of the image and size to a multiple of split, effectively growing the disk to a multiple of its old size. New segments will be created automatically on the next VM start.
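One possible take on that dd exercise, demonstrated on a tiny dummy image so the numbers stay manageable (a real conversion would use your actual image and 1 GiB segments):

```shell
# create a 4 MiB dummy "disk image"
dd if=/dev/zero of=hdd.img bs=1024 count=4096 2>/dev/null

# split it into four 1 MiB segments named hdd.img.0000 ... hdd.img.0003
seg=$((1024 * 1024))
for i in 0 1 2 3; do
  dd if=hdd.img of="$(printf 'hdd.img.%04d' "$i")" \
     bs="$seg" count=1 skip="$i" 2>/dev/null
done
```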

Example config
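Based on the full command line shown further down in this post, a split (but not yet sparse) disk presumably looks something like this (slot number and path mirror the later example):

```
-s 4,virtio-blk,test/hdd/hdd.img,size=20G,split=1G
```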

Implementing sparse images

So the last step for making xhyve usable to me: Don't waste my disk space.

I think there are multiple methods for implementing efficient sparse images, I went for the following:

  • Only save sectors that contain actual data and not only zeroes
  • Minimum allocation size is one sector
  • Maintain a lookup table that references where each sector is saved in the image
  • Deallocation of a sector (e.g. overwriting with zeroes) is only handled by a shrink disk tool offline

So how does such a lookup table look?

A sparse disk lookup table is just an array of 32-bit unsigned integers, one for each sector. If you want to read sector 45 you just take the value at array position 45, multiply it by the sector size and seek to that address in the image segment. Simple, isn't it?
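As a worked example (the lookup value and sector size are arbitrary illustration numbers):

```shell
# reading virtual sector 45: suppose the lookup table says its data is
# stored as the 7th sector of the segment file
lut_entry=7
sector_size=4096

file_offset=$((lut_entry * sector_size))   # where to seek in the segment file
echo "$file_offset"
```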

In the current implementation the lookup table is written to a separate file with the extension .lut; all writes to this file are synchronous. The other backing files are initially created as zero-length files, and when the guest OS starts writing data, each new sector is appended to the respective segment file and a new offset is written to the lookup table.

The lookup table starts as an array full of UINT32_MAX values (0xffffffff) as this is the marker used to describe that this sector is not yet in the image and thus should be returned as a series of zero values. If a read finds an entry other than that marker the corresponding data is read from the segment file.

All lookup tables for all segment files are appended to the .lut file, so it contains multiple tables, not just one. The upside is that a 32-bit table entry can address 2^32 sectors, so the maximum segment size is 4 G times the sector size. If you use an SSD as backing storage you should probably configure your sector size to 4 KB, as that is the native sector size of most SSDs and will yield additional performance. That results in a maximum segment size of 16 TiB, and I never heard of a Mac that has this much storage. (If yours has please send me a photo)

Writes of new sectors (those appended to the segment file) are synchronous to avoid two sectors with the same address. Other writes are as the user configured them on the command line.

To enable sparse disk images just add sparse as a parameter to your configuration.

Sparse config example

Be aware: You'll have to recreate your disk image to profit from this setting; sparse disks are not compatible with plain disks.


I used this configuration:

-s 4,virtio-blk,test/hdd/hdd.img,sectorsize=4096,size=20G,split=1G,sparse

So this is how it looks on disk:

$ ls -lah
total 2785264
drwxr-xr-x  23 dark  staff   782B Jan 16 01:27 .
drwxr-xr-x   9 dark  staff   306B Jan 16 01:27 ..
-rw-rw----   1 dark  staff   531M Jan 16 02:50 hdd.img.0000
-rw-rw----   1 dark  staff    12K Jan 16 01:18 hdd.img.0001
-rw-rw----   1 dark  staff   5.9M Jan 16 02:50 hdd.img.0002
-rw-rw----   1 dark  staff    24K Jan 16 01:18 hdd.img.0003
-rw-rw----   1 dark  staff   110M Jan 16 02:48 hdd.img.0004
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0005
-rw-rw----   1 dark  staff   172M Jan 16 02:50 hdd.img.0006
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0007
-rw-rw----   1 dark  staff   207M Jan 16 02:50 hdd.img.0008
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0009
-rw-rw----   1 dark  staff    15M Jan 16 02:48 hdd.img.0010
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0011
-rw-rw----   1 dark  staff    46M Jan 16 01:24 hdd.img.0012
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0013
-rw-rw----   1 dark  staff   151M Jan 16 02:50 hdd.img.0014
-rw-rw----   1 dark  staff    12K Jan 16 01:18 hdd.img.0015
-rw-rw----   1 dark  staff    97M Jan 16 02:50 hdd.img.0016
-rw-rw----   1 dark  staff   5.3M Jan 16 02:48 hdd.img.0017
-rw-rw----   1 dark  staff     0B Jan 16 01:11 hdd.img.0018
-rw-rw----   1 dark  staff   4.0K Jan 16 01:15 hdd.img.0019
-rw-rw----   1 dark  staff    20M Jan 16 02:48 hdd.img.lut

The dump you see here is of an image where I just installed Ubuntu Server 15.10.
Instead of wasting 20 GB of space this one only needs about 1.3 GB. Speed is about the same as before (with an SSD as backing storage) but may suffer severely on a spinning-rust disk, as there are way more seeks when the sectors become fragmented.

Where to get it

Currently you will have to compile it yourself: just fetch the sparse-disk-image branch and execute make.

Post mortem analysis of Swift server code

Just a very quick idea of how you could handle server-side crashes of a Swift binary. Swift itself has no stack unwinding functions that you could use for debugging purposes, but lldb has.

So what if the currently crashing program would attach lldb to itself and create stack traces before vanishing into nirvana?

import Darwin

private func signalHandler(signal: Int32) {
    // need my pid for telling lldb to attach to parent
    let pid = getpid()

    // create command file
    let filename = "/tmp/backtrace.\(pid)"
    var fp = fopen(filename, "w")
    if fp == nil {
        print("Could not open command file")
        return
    }

    // attach to pid
    var cmd = "process attach --pid \(pid)\n"
    fputs(cmd, fp)

    // backtrace for all threads
    cmd = "bt all\n"
    fputs(cmd, fp)

//    // save core dump
//    cmd = "process save-core coredump\n"
//    fputs(cmd, fp)

    // kill the process
    cmd = "process kill\n"
    fputs(cmd, fp)

    // delete the command file
    cmd = "script import os\nscript os.unlink(\"\(filename)\")\n"
    fputs(cmd, fp)

    // quit lldb
    cmd = "quit\n"
    fputs(cmd, fp)
    fclose(fp)

    // add signal type to backtrace.log header
    fp = fopen("backtrace.log", "w")
    if fp == nil {
        print("Could not open log file")
        return
    }
    fputs("Signal \(signal) caught, executing lldb for backtrace\n", fp)
    fclose(fp)

    // run lldb with the command file, appending its output to backtrace.log
    let command = "/Library/Developer/Toolchains/swift-latest.xctoolchain/usr/bin/lldb --file \"\(Process.arguments[0])\" --source \"\(filename)\" >>backtrace.log"
    system(command)
    exit(1)
}

// Install signal handler
signal(SIGILL, signalHandler)
signal(SIGTRAP, signalHandler)
signal(SIGABRT, signalHandler)
signal(SIGSEGV, signalHandler)
signal(SIGBUS, signalHandler)
signal(SIGFPE, signalHandler)

// Now crash by force-unwrapping a nil value
print("Hello, World!")
var forcedUnwrap: Int! = nil
print(forcedUnwrap + 1)


This code traps all fatal error signals and calls lldb with a small command file that looks like this and is generated by the crashing program:

process attach --pid <my_pid>
bt all
process kill
script import os
script os.unlink("<command_file>")
quit

So it attaches lldb to the PID, fetches a backtrace for all threads, kills the parent process, deletes the command file and quits. Log output is diverted to backtrace.log and contains all lldb output.

Additionally you could include process save-core coredump to write a core dump to the current directory, which can be loaded for further inspection. But beware: a core dump for our simple program up there will be around 500 MB and takes about 30 seconds to write to disk (SSD).

You can load the core dump like this:

lldb -f <binary> -c coredump

Now you can inspect memory and variables as if the program had crashed while the debugger was attached.

The cool thing is that this code does not disable the Xcode-internal debugger, so you still get the usual EXC_BAD_INSTRUCTION when running the code in Xcode.

Installing Ghost via Docker

My blog is running on Ghost. Ghost is cool, but the thing is: most people do not know about node.js or how to set up a node.js application.

But you don't have to!

Virtualization or Containers

You could run a virtual machine with Xen or VirtualBox and just use an appliance that somebody has already created for you. But this does not scale when you want to run more than a couple of instances on a server.

So how to set up a server without exactly knowing what to run (and how) but while not using a virtual machine? The answer to that could be Docker.

1. Setup Docker on Ubuntu (14.04+)

This, like all other steps in this guide, is straightforward:

$ sudo apt-get install

Docker is a very lightweight pseudo-virtualization framework building on the foundation of Linux cgroups and chroot jails. Think of it as a kind of changeroot environment that is even stricter than a normal chroot jail: it restricts access to all resources that are not part of the jail. You will see the contained processes when looking at the process list from outside the jail and may even be able to kill them, but from inside the container you will not be able to access anything on the outside.

So the overhead of the container running the Ghost service will be minimal. It needs somewhat more resources than running the node server directly (shared libraries have to be loaded twice, for example, and a minimal OS image is installed too), but a good server will be able to run quite a few instances without breaking down.

2. Get the ghost docker image

The next step is downloading the official docker image for the ghost service:

$ sudo docker pull dockerfile/ghost

This will take a while because the ghost docker image is a layered image. Docker creates a differential image for each action that is run while provisioning the container, so you can see what every single command did to the container and roll back a few steps if something went wrong or a single component in the history has to be updated.
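This layering is exactly what a Dockerfile describes. As a minimal sketch (the instructions and file names here are illustrative, not the actual ghost Dockerfile):

```dockerfile
# Each instruction below becomes one differential layer.

# base layer, shared between all images built on the same ubuntu
FROM ubuntu:12.04

# one layer containing the installed packages
RUN apt-get update && apt-get install -y wget

# one tiny layer containing just this file
ADD ghost-start /ghost-start

# metadata-only layers, 0 B in the history output
EXPOSE 2368
CMD ["bash", "/ghost-start"]
```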

If you want an overview of what exactly happened to an image over time, run the history command on it:

$ sudo docker history dockerfile/ghost
IMAGE               CREATED             CREATED BY                                      SIZE
7fd9622ac59b        11 days ago         /bin/sh -c #(nop) EXPOSE map[2368/tcp:{}]       0 B
e052bd394625        11 days ago         /bin/sh -c #(nop) CMD [bash /ghost-start]       0 B
8951c214a253        11 days ago         /bin/sh -c #(nop) WORKDIR /ghost                0 B
083fc6513232        11 days ago         /bin/sh -c #(nop) VOLUME ["/data", "/ghost-ov   0 B
50b7f9b06075        11 days ago         /bin/sh -c #(nop) ENV NODE_ENV=production       0 B
34de0ad45f77        11 days ago         /bin/sh -c #(nop) ADD file:27b2fabfe632ee15b9   880 B
abe6497bce46        11 days ago         /bin/sh -c cd /tmp &&   wget https://ghost.or   71.38 MB
65f8a4200da9        11 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B
5c85a5ac1a37        11 days ago         /bin/sh -c #(nop) WORKDIR /data                 0 B
7a5fa70ca2f3        11 days ago         /bin/sh -c cd /tmp &&   wget http://nodejs.or   17.73 MB
1d73c42b2c8b        12 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B
b80296c7dcea        12 days ago         /bin/sh -c #(nop) WORKDIR /data                 0 B
b90d7c4116a7        12 days ago         /bin/sh -c apt-get update &&   apt-get instal   56.41 MB
036f41962925        12 days ago         /bin/sh -c #(nop) CMD [bash]                    0 B
6ca8ad8beff9        12 days ago         /bin/sh -c #(nop) WORKDIR /root                 0 B
caa6e240bc5e        12 days ago         /bin/sh -c #(nop) ENV HOME=/root                0 B
95d3002f2745        12 days ago         /bin/sh -c #(nop) ADD dir:a0224129e16f61bf5ca   80.57 kB
2cfc7dfeba2d        12 days ago         /bin/sh -c #(nop) ADD file:20736e4136fba11501   532 B
e14a4e231fad        12 days ago         /bin/sh -c #(nop) ADD file:ea96348b2288189f68   1.106 kB
4c325cfdc6d8        12 days ago         /bin/sh -c sed -i 's/# \(.*multiverse$\)/\1/g   221.9 MB
9cbaf023786c        12 days ago         /bin/sh -c #(nop) CMD [/bin/bash]               0 B
03db2b23cf03        12 days ago         /bin/sh -c apt-get update && apt-get dist-upg   0 B
8f321fc43180        12 days ago         /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$/   1.895 kB
6a459d727ebb        12 days ago         /bin/sh -c rm -rf /var/lib/apt/lists/*          0 B
2dcbbf65536c        12 days ago         /bin/sh -c echo '#!/bin/sh' > /usr/sbin/polic   194.5 kB
97fd97495e49        12 days ago         /bin/sh -c #(nop) ADD file:84c5e0e741a0235ef8   192.6 MB
511136ea3c5a        16 months ago                                                       0 B

As you can see, the official docker image for ghost builds on a rather old version of Ubuntu, about 16 months old (it's 12.04 LTS; that old version is currently used because most docker images are based on it, so if you run multiple different images you'll only spend those 192.6 MB once, as the base layer can be recycled).

3. Setup a user to run ghost from

Of course you could run the ghost docker container as your default user, but I prefer to separate services by user, so I created a new user on the system:

$ sudo adduser ghost

You'll have to put that user into the docker group so it is able to run docker commands:

$ sudo usermod -aG docker ghost

4. Download latest ghost and themes

Everything from this step on we will run as the new ghost user, so just switch over:

$ sudo su - ghost

Fetch the latest ghost release zip to get our hands on the example configuration file and the default casper theme:

$ wget
$ mkdir ghost-latest
$ cd ghost-latest
$ unzip ../

Now we copy the important parts to our data directory that the ghost instance will use:

$ cd
$ mkdir
$ cd
$ cp -r ../ghost-latest/content .
$ cp ../ghost-latest/config.example.js config.js

Now we have a working directory structure and an example configuration file that we can adapt to our preferred settings.

5. Configuring ghost

The simplest thing we could do is edit the example config we just copied and add our domain name to it (config->production->url):

var path = require('path'),
    config;

config = {
    production: {
        url: '',
        mail: {},
        database: {
            client: 'sqlite3',
            connection: {
                filename: path.join(__dirname, '/content/data/ghost.db')
            },
            debug: false
        },

        server: {
            host: '',
            port: '2368'
        }
    }
};

module.exports = config;

Be sure to set the config->production->server->host setting so that the server can be reached from outside the container later on.
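As a sketch of that server section: the host value '0.0.0.0' is my assumption for binding all interfaces inside the container (so docker's port mapping can reach it), and the url is a placeholder domain; neither value is from the original post.

```javascript
// Sketch only: '0.0.0.0' and the example.com url are assumed values,
// adapt them to your own setup.
var config = {
    production: {
        url: 'http://blog.example.com',  // placeholder domain
        server: {
            host: '0.0.0.0',  // bind all interfaces inside the container
            port: '2368'      // the port docker maps to 2300 on the host
        }
    }
};
module.exports = config;
```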

6. Running ghost in docker

We pulled the docker image in step 2, now let's use it:

$ docker run -d -p 2300:2368 -v $HOME/ dockerfile/ghost

If everything worked we can now access the ghost instance on http://localhost:2300.

If something went wrong we can inspect the logfile of the ghost instance by running the docker logs <id> command. For that we have to find out the <id> part:

$ docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS                              NAMES
e425352a4b49        dockerfile/ghost:latest   bash /ghost-start   2 minutes ago        Up 2 minutes         2300/tcp,>2368/tcp   mad_galileo

You can use the container ID or the name of the container for all commands that will accept an ID.

$ docker logs -f mad_galileo

If you specify the -f flag, as in the example, the log viewer works like tail -f (it follows the log as new entries are appended). If you omit the -f, it just dumps what has been logged so far and exits.

7. Setting up a reverse proxy to route multiple domains

Normally, if only one instance runs on the server, we could just change the port 2300 in the docker command to 80 and be done. But the whole point of this setup was to have multiple instances running on the same server, sharing port 80 for different domains.

As we cannot bind more than one docker container to a single port, we just have to set up a reverse proxy that listens on port 80 and does the routing to the different ghost instances for us.

I chose varnish for that as it is easy to configure.
For installation of varnish we change back to a user that can use sudo.

7a. Install varnish

Varnish is in the standard ubuntu repositories, so installation is just one apt-get away:

$ sudo apt-get install varnish

The default installation of varnish will listen on a somewhat arcane port so it does not interfere with apache. As we want varnish to do the routing, we reconfigure it to listen on port 80 and move apache out of the way.

To do that:

7b. Move apache out of the way

Just change the listening port of apache to some other port not currently in use and make it listen only on as we don't have to access the direct apache port from the outside. You'll find the configuration for that in /etc/apache2/ports.conf:


<IfModule mod_ssl.c>
    NameVirtualHost *:443
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>
    NameVirtualHost *:443
    Listen 443
</IfModule>

We also have to change all the VirtualHost statements in /etc/apache2/sites-available/* to reflect this change:
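As a sketch, assuming apache was moved to port 8080 on the loopback interface, a typical site definition would change like this (the site name and paths are placeholders):

```apache
# before the change a site looked like this:
#   <VirtualHost *:80>
# after moving apache to 127.0.0.1:8080 (use whatever port you
# picked in ports.conf) it becomes:
<VirtualHost 127.0.0.1:8080>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>
```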


Do not reload the apache config yet, as your sites would go offline, and we don't want to interrupt anybody, do we?

7c. Setup varnish to run on port 80

So to make varnish listen on port 80 instead of 6081, we have to change /etc/default/varnish:

DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

7d. Setup routing in varnish to make apache available again

We don't want to cut the connection to the apache daemon, so that "normal" websites keep working alongside our new ghost instances. To make that possible we set up a default backend in /etc/varnish/default.vcl:

backend apache {
    .host = "";
    .port = "8080";
}

sub vcl_recv {
}

If you now reload the apache config and afterwards restart the varnish server, all sites should be accessible as they were before you made the change:

$ sudo service apache2 restart
$ sudo service varnish restart

If everything is working so far we are ready to add the ghost instances:

7e. Setup routing to ghost instances

As we did for apache, we just add a few rules to /etc/varnish/default.vcl and restart varnish afterwards:

backend ghost_mysite_com {
        .host = "";
        .port = "2300";
}

sub vcl_recv {
        if ( ~ "") {
                set req.backend = ghost_mysite_com;
        }
}
If you don't want the varnish server to act as a cache (which it is very good at), you could use return (pipe); to disable that.

Be sure the apache backend is always the first backend defined, because that is the default varnish falls back to if no rule in vcl_recv matches and routes the request to another backend.
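Putting both parts together, a complete /etc/varnish/default.vcl could look like the following sketch; the IP addresses and the domain are placeholders for whatever your setup uses:

```vcl
# apache must come first: the first backend defined is varnish's default
backend apache {
    .host = "127.0.0.1";    # placeholder, wherever apache listens now
    .port = "8080";
}

backend ghost_mysite_com {
    .host = "127.0.0.1";    # placeholder, the host running the container
    .port = "2300";
}

sub vcl_recv {
    # route one domain to its ghost instance (domain is a placeholder)
    if (req.http.host ~ "mysite.com") {
        set req.backend = ghost_mysite_com;
    }
    # everything else falls through to the default backend, apache
}
```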

8. Make it permanent

If your server reboots, the currently running docker containers will not be started again by default, so we add the command to run them to /etc/rc.local to execute it on boot automatically:

su ghost -c 'docker run -d -p 2300:2368 -v /home/ghost/ dockerfile/ghost'

Just append that line before the exit 0 statement at the end.

9. Make it a farm

Now what if you want to run multiple instances?

  • Add a new data directory in /home/ghost (see steps 4 and 5)
  • Run a new docker instance on a different port (step 6 but change the 2300)
  • Add a rule to /etc/varnish/default.vcl for that instance (step 7e, change the 2300 again)
  • Add a line to /etc/rc.local, use the changed command line from step 6 (or change the 2300 again ;) )
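The steps above can be sketched as a few lines of shell that print the commands for one more instance. The port 2301, the per-port data directory layout, and the /ghost-override mount path are my assumptions (the image's volume path is truncated in the history output above), so adapt them to your setup:

```shell
# Sketch: print the commands needed for one additional ghost instance.
# PORT, DATA and the /ghost-override mount path are assumed values.
PORT=2301                          # next free host port for the new instance
DATA="/home/ghost/${PORT}"         # one data directory per instance

RUN_CMD="docker run -d -p ${PORT}:2368 -v ${DATA}:/ghost-override dockerfile/ghost"

echo "$RUN_CMD"                    # run this once as the ghost user
echo "su ghost -c '${RUN_CMD}'"    # append this line to /etc/rc.local
```

The matching varnish rule (step 7e) then just points a new backend at port 2301.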