Ruan Bekker's Blog

From a Curious mind to Posts on Github

Setup APM Server on Ubuntu for Your Elastic Stack to Get Insights in Your Application Performance Metrics

In this post we will set up the Elastic Stack with Elasticsearch, Kibana and APM. The APM Server (Application Performance Monitoring) receives the metric data from the application side and pushes it to the apm indices on Elasticsearch.

This will be a two-post series on APM.

What is APM

From their website APM is described as: “Elastic APM is an application performance monitoring system built on the Elastic Stack. It allows you to monitor software services and applications in real time, collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, etc.”

You get metrics like average and p99 response times, and you also get insight when errors occur; it even lets you look at the stacktrace, pinpointing the line of code where the error occurred.

APM Agents:

The APM Agents are loaded inside your application; application metrics are then pushed to the APM Server (which we will set up in this post), which in turn pushes them to Elasticsearch, where they are consumed by Kibana.
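
As a rough idea of what the application side looks like, here is a hedged sketch for the Python agent (covered properly in the next post); the package extra and environment variable names are taken from Elastic's Python agent docs, and the service name is made up:

$ pip install "elastic-apm[flask]"                       # Elastic's Python agent with Flask support
$ export ELASTIC_APM_SERVICE_NAME=my-flask-app           # hypothetical service name
$ export ELASTIC_APM_SERVER_URL=http://localhost:8200    # the APM Server we set up in this post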

At the time of writing, APM Agents are available for the following languages and frameworks:

  • Node.js
  • Django
  • Flask
  • Ruby on Rails
  • Rack
  • RUM
  • Golang
  • Java

Setup the Elastic Stack

One thing to note: every service in your Elastic Stack needs to run the same version. In this post we will set up Elasticsearch, APM and Kibana, all running version 6.4.3.

Setup the Pre-Requirements:

Elasticsearch depends on Java, so we will go ahead and set up the repositories:

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ apt-get install apt-transport-https -y
$ echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
$ apt update && apt upgrade -y
$ apt install openjdk-8-jdk -y

Verify that Java is installed:

$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1ubuntu0.16.04.1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

Setup Kernel parameters for Elasticsearch:

$ sysctl -w vm.max_map_count=262144
$ echo 'vm.max_map_count=262144' >> /etc/sysctl.conf

Setup Elasticsearch:

Search for the available versions (if you already have Elasticsearch, either upgrade, or install APM Server on the same version as Elasticsearch/Kibana):

$ apt-cache madison elasticsearch
elasticsearch |      6.4.3 | https://artifacts.elastic.co/packages/6.x/apt stable/main amd64 Packages
elasticsearch |      6.4.2 | https://artifacts.elastic.co/packages/6.x/apt stable/main amd64 Packages

Install Elasticsearch:

$ apt-get install elasticsearch=6.4.3 -y

Configure Elasticsearch to lock the memory on startup:

$ sed -i 's/#bootstrap.memory_lock: true/bootstrap.memory_lock: true/g' /etc/elasticsearch/elasticsearch.yml
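
A hedged side note: with bootstrap.memory_lock enabled on a systemd host, the service usually also needs permission to lock memory, something along these lines (an assumption, adjust to your environment; the daemon-reload in the next step will pick it up):

$ mkdir -p /etc/systemd/system/elasticsearch.service.d
$ printf '[Service]\nLimitMEMLOCK=infinity\n' > /etc/systemd/system/elasticsearch.service.d/override.conf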

Enable Elasticsearch on startup and start the service:

$ systemctl daemon-reload
$ systemctl enable elasticsearch.service
$ systemctl start elasticsearch.service

Install Kibana:

Install Kibana version 6.4.3:

$ apt install kibana=6.4.3 -y

For demonstration purposes, I will configure Kibana to listen on all interfaces on port 5601. Note that this makes it accessible to everyone; you can follow this blogpost to set up an Nginx reverse proxy that upstreams to localhost on port 5601.

In this demonstration we are using Elasticsearch locally; if you have a remote cluster, adjust the configuration where needed.

$ sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/'g /etc/kibana/kibana.yml
$ sed -i 's/#elasticsearch.url: "http:\/\/localhost:9200"/elasticsearch.url: "http:\/\/localhost:9200"/'g /etc/kibana/kibana.yml

Enable Kibana on startup and start the service:

$ systemctl enable kibana.service
$ systemctl restart kibana.service

Install the APM Server

Install APM Server version 6.4.3:

$ apt install apm-server=6.4.3 -y

Since we have everything running locally, the configuration can be kept as is, but if you need to configure the Elasticsearch or Kibana hosts, it can be done via /etc/apm-server/apm-server.yml.

Then, once Kibana and Elasticsearch are started, load the mapping templates, enable and start the service:

$ apm-server setup
$ systemctl enable apm-server.service
$ systemctl restart apm-server.service

Ensure all the services are running with netstat -tulpn; ports 9200, 9300, 5601 and 8200 should be listening.
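
A few quick checks (a rough sketch, assuming everything runs on the same host):

$ curl -s 'http://localhost:9200/_cluster/health?pretty'                      # Elasticsearch cluster health
$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601/api/status   # Kibana status endpoint, expect 200
$ ss -tlpn | grep -E ':(9200|9300|5601|8200)'                                 # listening ports, APM Server on 8200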

Access Your Elastic Stack

Access Kibana on your routable endpoint on port 5601 and you should see something like this:

Configuring an Application to Push Metrics to APM

In the next post I will setup a Python Flask Application on APM

Benchmark Website Response Times With CURL

We can gain insights when making requests to websites such as:

  • Lookup time
  • Connect time
  • AppCon time
  • Redirect time
  • PreXfer time
  • StartXfer time

We will make a request to a website that has caching enabled, the first hit will be a MISS:

$ curl -s -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nAppCon time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null https://user-images.githubusercontent.com/567298/53351889-85572000-392a-11e9-9720-464e9318206e.jpg

Lookup time:  1.524465
Connect time: 1.707561
AppCon time:  0.000000
Redirect time:    0.000000
PreXfer time: 1.707656
StartXfer time:   1.897660

Total time:   2.451824

The next hit will be a HIT:

$ curl -s -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nAppCon time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null https://user-images.githubusercontent.com/567298/53351889-85572000-392a-11e9-9720-464e9318206e.jpg

Lookup time:  0.004441
Connect time: 0.188065
AppCon time:  0.000000
Redirect time:    0.000000
PreXfer time: 0.188160
StartXfer time:   0.381344

Total time:   0.926420
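
As a side note, curl can also read the -w format from a file with @filename, which keeps the one-liner readable (a small sketch; curl-format.txt is just a name I picked, and example.com stands in for your own URL):

$ cat > curl-format.txt << 'EOF'
Lookup time:    %{time_namelookup}
Connect time:   %{time_connect}
StartXfer time: %{time_starttransfer}
Total time:     %{time_total}
EOF
$ curl -s -w "@curl-format.txt" -o /dev/null https://example.com/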


How to Bootstrap Nodes With Python Using Ansible

As Ansible depends on Python, we can bootstrap our nodes with Python using an Ansible playbook.

Inventory

The nodes we want to bootstrap:

inventory.ini
[new]
node-1
node-2
node-3

[new:vars]
ansible_python_interpreter=/usr/bin/python3

Playbook

Our playbook with what we want to do:

bootstrap-python.yml
---
- hosts: all
  gather_facts: False

  tasks:
  - name: install python
    raw: test -e /usr/bin/python || ( apt update && apt install python -y )

Deploy

Deploy with Ansible:

$ ansible-playbook -i inventory.ini bootstrap-python.yml

PLAY [all] ***********************************************************************************************************************************************************************************************

TASK [install python] ************************************************************************************************************************************************************************************
changed: [node-1]
changed: [node-2]
changed: [node-3]

PLAY RECAP ***********************************************************************************************************************************************************************************************
node-1                     : ok=2    changed=2    unreachable=0    failed=0
node-2                     : ok=2    changed=2    unreachable=0    failed=0
node-3                     : ok=2    changed=2    unreachable=0    failed=0
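
To confirm that the nodes can now be managed with regular Ansible modules, a quick ping against the same inventory should do (a hedged check):

$ ansible -i inventory.ini new -m ping    # each node should come back with "pong"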

This is it for this post, all posts for this tutorial will be posted under #ansible-tutorial

How to Install Packages on Remote Systems With Ansible

We will use Ansible to deploy packages to remote systems. In this case all the remote systems are running Debian, therefore we will be using the APT package manager.

Pre-Requisites:

Ensure that you have installed Ansible and set up the SSH config for your remote systems; how to do that can be found in the post: setting up ansible

Our Inventory

The inventory file that describes our hosts:

inventory.ini
[scaleway]
cluster-node-1
cluster-node-2

[hetzner]
docker-node-1
docker-node-2
docker-node-3
glusterfs-node-1
glusterfs-node-2
elasticsearch-node-1
elasticsearch-node-2

[scaleway:vars]
ansible_python_interpreter=/usr/bin/python3
location=france

[hetzner:vars]
ansible_python_interpreter=/usr/bin/python3
location=germany

Playbook

Our playbook, where we define that we want to deploy packages using apt to all hosts:

packages.yml
---
- hosts: all
  tasks:
  - name: Install Packages
    apt: name="{{ item }}" state=latest update_cache=yes
    with_items:
      - ntp
      - python
      - tcpdump
      - wget
      - openssl
      - curl

Deploy

Running the playbook to deploy the packages to the remote servers:

$ ansible-playbook -i inventory.ini packages.yml

PLAY [all] ***********************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************************
ok: [glusterfs-node-2]
ok: [glusterfs-node-1]
ok: [docker-node-1]
ok: [docker-node-2]
ok: [docker-node-3]
ok: [elasticsearch-node-1]
ok: [elasticsearch-node-2]
ok: [cluster-node-1]
ok: [cluster-node-2]

TASK [Install Packages] **********************************************************************************************************************************************************************************
changed: [docker-node-1] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])
changed: [docker-node-2] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])
changed: [docker-node-3] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])
changed: [elasticsearch-node-1] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])
changed: [glusterfs-node-1] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])
changed: [glusterfs-node-2] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])
changed: [elasticsearch-node-2] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])
ok: [cluster-node-1] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])
ok: [cluster-node-2] => (item=[u'ntp', u'python', u'tcpdump', u'wget', u'openssl', u'curl'])

PLAY RECAP ***********************************************************************************************************************************************************************************************
docker-node-1              : ok=2    changed=1    unreachable=0    failed=0
docker-node-2              : ok=2    changed=1    unreachable=0    failed=0
docker-node-3              : ok=2    changed=1    unreachable=0    failed=0
elasticsearch-node-1       : ok=2    changed=1    unreachable=0    failed=0
elasticsearch-node-2       : ok=2    changed=1    unreachable=0    failed=0
glusterfs-node-1           : ok=2    changed=1    unreachable=0    failed=0
glusterfs-node-2           : ok=2    changed=1    unreachable=0    failed=0
cluster-node-1             : ok=2    changed=0    unreachable=0    failed=0
cluster-node-2             : ok=2    changed=0    unreachable=0    failed=0
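
To double-check the result, an ad-hoc command against the same inventory can list the package state (a hedged check; dpkg is available since these are Debian hosts):

$ ansible -i inventory.ini all -m shell -a 'dpkg -l ntp python tcpdump wget openssl curl'
# every package should be listed with an "ii" (installed) status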

This is it for this post, all posts for this tutorial will be posted under #ansible-tutorials

Query 24 Hours Worth of Data Using BatchGet on Amazon DynamoDB Using Scan and Filter Without a GSI

I’m testing how to query data in DynamoDB where the query will always be the retrieval of yesterday’s data, without using a Global Secondary Index.

This is done just to see what other ways you can use to query data based on a specific timeframe.

Use-Case:

Data from DynamoDB needs to be batch processed (daily, for the last 24 hours) into an external datasource. Data will be written into DynamoDB, and the HK (uuid) and RK (timestamp) will be duplicated to a daily table. Only the uuid and timestamp are duplicated to the daily table, and only data for that day will be written into that datestamp-formatted table name.

Let’s say data for 2018-10-30 needs to be written into our external data source: we will do a scan on table tbl-test_20181030, and from our response we will have a list of HashKeys (uuid) which we will use to do a BatchGetItem on our base table, tbl-test_base, which essentially grabs all the data for that day.

If deeper filtering needs to be done on that day, the FilterExpression can be used to do a deeper filtering which leads to grabbing only the filtered down data from the base table.

Note: The base table might have millions of items, so a Scan operation on the Base table would be really expensive, as it reads all the items in the table.

Once the data has been processed, the daily or metadata table can be removed.
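
Removing the daily table afterwards is a single call, for example with the AWS CLI (a sketch, assuming the same table name, region and profile used below):

$ aws dynamodb delete-table --table-name tbl-test_20181030 --region eu-west-1 --profile dev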

DynamoDB Table Design

The base table: tbl-test_base will have:

  • HashKey: uuid (string)
  • RangeKey: timestamp (number)
  • Attributes: city, stream, transaction_date, name, metric_uri
  • Item will look like:
{
  u'uuid': u'fb4ddeb9-3b5e-47b3-bbab-1aa1d8e8f47b',
  u'timestamp': 1540891276,
  u'city': u'sydney',
  u'stream': u'NONE',
  u'transaction_date': u'2018-10-30 11:21:16',
  u'metric_uri': u'some-dummy-metric-uri',
  u'name': u'frank'
}

The daily table tbl-test_20181030 will look like:

  • HashKey: uuid
  • Attributes: timestamp
  • Item will look like:
{
  u'uuid': u'fb4ddeb9-3b5e-47b3-bbab-1aa1d8e8f47b',
  u'timestamp': 1540891276
}

Demonstration using Python

Creating the Metadata table:

import boto3, time, uuid, random

session = boto3.Session(region_name='eu-west-1', profile_name='dev')
resource = session.resource('dynamodb')
client = session.client('dynamodb')

def create_table():
    table_name = "tbl-test_{0}".format(time.strftime("%Y%m%d"))
    response = resource.create_table(
        TableName=table_name,
        KeySchema=[{
            'AttributeName': 'uuid',
            'KeyType': 'HASH'
        }],
        AttributeDefinitions=[{
            'AttributeName': 'uuid',
            'AttributeType': 'S'
        }],
        ProvisionedThroughput={
            'ReadCapacityUnits': 1,
            'WriteCapacityUnits': 1
        }
    )

    resource.Table(table_name).wait_until_exists()

    arn = client.describe_table(TableName=table_name)['Table']['TableArn']
    client.tag_resource(
        ResourceArn=arn,
        Tags=[
            {'Key': 'Name','Value': 'dynamo_table'},
            {'Key': 'Environment','Value': 'Dev'},
            {'Key': 'CreatedBy','Value': 'Ruan'}
        ]
    )

    return resource.Table(table_name).table_status

print(create_table())

Write 400 Items to DynamoDB:

import boto3, time, uuid, random

session = boto3.Session(region_name='eu-west-1', profile_name='dev')
resource = session.resource('dynamodb')
client = session.client('dynamodb')

base_table = 'tbl-test_base'
meta_table = 'tbl-test_{0}'.format(time.strftime("%Y%m%d"))

people = ['james', 'john', 'frank', 'paul', 'nathan', 'kevin']
cities = ['ireland', 'cape town', 'pretoria', 'paris', 'amsterdam', 'auckland', 'sydney']

def write_dynamo(uuid, timestamp):
    resource.Table(base_table).put_item(
        Item={
            'uuid': uuid,
            'timestamp': timestamp,
            'metric_uri': 'some-dummy-metric-uri',
            'transaction_date': time.strftime("%Y-%m-%d %H:%M:%S"),
            'name': random.choice(people),
            'stream': 'NONE',
            'city': random.choice(cities)
        }
    )

    resource.Table(meta_table).put_item(
        Item={
            'uuid': uuid,
            'timestamp': timestamp
        }
    )

    return 'Written'

for x in xrange(400):
    time.sleep(1)
    write_dynamo(str(uuid.uuid4()), int(time.time()))
    print(x)

Now we get the data for 20181030, but we also filter on the timestamp attribute, only including items with a timestamp greater than 1540841144 in epoch time (which gives us about 254 items).

BatchGetItem supports up to 100 items per call, so we will limit the scan to 100 items per call, then paginate using ExclusiveStartKey with the value of the LastEvaluatedKey that we get back in our response:

import boto3,time
from boto3.dynamodb.conditions import Key

base_table = 'tbl-test_base'
meta_table = 'tbl-test_20181030'

session = boto3.Session(region_name='eu-west-1', profile_name='dev')
resource = session.resource('dynamodb')
table = resource.Table(meta_table)
filtering_expression = Key('timestamp').gt(1540841144)

response = table.scan(FilterExpression=filtering_expression, Limit=100)

finished=False
while finished != True:
    if 'LastEvaluatedKey' in response.keys():
        print("Getting {} Items".format(response['Count']))
        items = resource.batch_get_item(RequestItems={base_table: {'Keys': response['Items']}})
        print(items['Responses'][base_table])
        time.sleep(2)
        response = table.scan(FilterExpression=filtering_expression, Limit=100, ExclusiveStartKey=response['LastEvaluatedKey'])
    else:
        print("Getting {} Items".format(response['Count']))
        items = resource.batch_get_item(RequestItems={base_table: {'Keys': response['Items']}})
        print(items['Responses'][base_table])
        finished=True

Running it:

$ python dynamodb-batch-get.py
Getting 100 Items
[{u'city': u'pretoria', u'uuid': u'e8bc0d1c-2b57-4de2-b0e1-35ef1fe0edf1', u'stream': u'NONE', u'timestamp': Decimal('1540846990'), u'transaction_date': u'2018-10-29 23:03:10', u'metric_uri': u'some-dummy-metric-uri', u'name': u'frank'}, {u'city': u'amsterdam', u'uuid':
...
Getting 100 Items
[{u'city': u'sydney', u'uuid': u'5bc51ce9-2809-46c9-a3f2-ff8180086d92', u'stream': u'NONE', u'timestamp': Decimal('1540848599'), u'transaction_date': u'2018-10-29 23:29:59', u'metric_uri': u'some-dummy-metric-uri', u'name': u'frank'}
...
Getting 54 Items
[{u'city': u'cape town', u'uuid': u'5e069f34-0e97-4a49-9ca9-da2213edb689'...

Verifying that each call only scans 100 at a time:

>>> response = table.scan(FilterExpression=filtering_expression, Limit=100)
>>> response.keys()
[u'Count', u'Items', u'LastEvaluatedKey', u'ScannedCount', 'ResponseMetadata']
>>> response.get('LastEvaluatedKey')
{u'uuid': u'e8c52a55-ca9e-4718-83d2-1b44a90f43e6'}
>>> response.get('Count')
100
>>> response.get('ScannedCount')
100

Other Thoughts:

Querying data is a lot easier using a Global Secondary Index where you could similarly have the metric_uri as the HashKey and transaction_date as the RangeKey:

>>> response = table.query(
    IndexName='metric_uri-transaction_date-index',
    KeyConditionExpression=Key('metric_uri').eq('some-dummy-metric-uri') & Key('transaction_date').begins_with('2018-10-30')
)
>>> response['Count']
400

Also note that, depending on how you set up your GSI, in most cases it is an exact duplicate in storage of your base table, so it could potentially double the costs.

Using Python Flask and JavaScript for Client Side Filtering Through Returned Data

This post covers two sections: using Python Flask and JavaScript to filter returned data, where you could have a table that represents 100 items and want a search box that filters down your results as you type.

The other section will be used as a demo, solving a problem with Amazon CloudWatch Logs. I’m a massive AWS fanatic, but when it comes to CloudWatch Logs, I’m not so big a fan of that specific service, especially when you use Docker Swarm for AWS and have your log driver set to CloudWatch Logs.

The Problem I have with CloudWatch Logs

When you point to your CloudWatch LogGroups, you can search for your streams (in my case, searching for a specific swarm service), but you can’t sort by date, like this:

This makes it really tedious to find your logs quickly.

Python Flask to the Rescue

We will create a Python Flask application that retrieves data about all your Docker Swarm services and the container IDs running on each node. For this demonstration I have hard-coded the services and container IDs, but in a real environment you can use the Docker API or some logic that retrieves it from a datastore which a process populates.

The Application Code will do the following:

  • returns a list of your swarm services (mock data in the code)
  • when you select a service, it will get a list of the container ids, run through a for loop using Jinja templates, and display them in table format
  • when you select the containerId, it will populate the containerId to the cloudwatch logs filter, giving you the exact logstream which you are looking for
  • this will do a redirect to the AWS Console, and you will see the data in the sorted time of interest

  • app.py

from flask import Flask, render_template

app = Flask(__name__)

# faking datasets that can be returned from a api or database
swarm_services = ['my-web-service', 'my-api-service']
swarm_tasks = {
    "my-web-service": {
        "container_names": [
            "my-web-service.1.alfjshoehfosfn",
            "my-web-service.2.fuebchduehakjdu"
        ]
    },
    "my-api-service": {
        "container_names": [
            "my-api-service.1.oprudhyuythvbzx",
            "my-api-service.2.sjduebansifotuf"
        ]
    }
}

def get_container_name(app_name):
    data = []
    response = swarm_tasks[app_name]
    for container in response['container_names']:
        data.append(container)
    return render_template('index.html', app_name=app_name, number=len(data), data=data)

@app.route('/')
def list():
    return render_template('list.html', number=len(swarm_services), apps=swarm_services, aws_region='eu-west-1', cloudwatch_log_stream='docker-swarm-lg')

@app.route('/describe/<string:app_name>')
def get_app(app_name):
    app = get_container_name(app_name)
    return app

if __name__ == '__main__':
    app.run()
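
To try the routes locally, something like the following should work (a sketch, assuming the index.html and list.html templates referenced below are present in a templates/ directory):

$ pip install flask
$ python app.py                                           # serves on 127.0.0.1:5000 by default
$ curl -s http://127.0.0.1:5000/                          # in another terminal: lists the swarm services
$ curl -s http://127.0.0.1:5000/describe/my-web-service   # lists that service's container ids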

The index.html:

The list.html :

Filtering the Data

So at this moment all your data will be returned when a listing is done. If you have lots of information this can be overwhelming, and you will need to search for the service of interest. Using HTML and JavaScript, you can filter through the results:

The JavaScript Function: assets/js/filter.js

function SearchAndFilterThingy() {
  var input, filter, table, tr, td, x;
  input = document.getElementById("UserInput");
  filter = input.value.toUpperCase();
  table = document.getElementById("ServicesTable");
  tr = table.getElementsByTagName("tr");

  for (x = 0; x < tr.length; x++) {
    td = tr[x].getElementsByTagName("td")[0];
    if (td) {
      if (td.innerHTML.toUpperCase().indexOf(filter) > -1) {
        tr[x].style.display = "";
      }
      else {
        tr[x].style.display = "none";
      }
    }
  }
}

Screenshot

Once you search for a specific keyword for the service you are looking for, the output should look more or less like the following:

Building Ghost Version 2 Blog for the RaspberryPi

In this post we will setup Ghost 2.0.3 for the Raspberry Pi on Docker Swarm

Dockerfile

Our dockerfile:

FROM rbekker87/armhf-node:8.11

RUN apk add --no-cache 'su-exec>=0.2' && apk --update add bash gcc g++ make python && npm install sqlite3 --build-from-source

ENV NODE_ENV production
ENV GHOST_CLI_VERSION 1.9.1
ENV GHOST_VERSION 2.0.3
ENV GHOST_INSTALL /var/lib/ghost
ENV GHOST_CONTENT /var/lib/ghost/content

RUN npm install -g "ghost-cli@$GHOST_CLI_VERSION"

RUN set -ex; \
        mkdir -p "$GHOST_INSTALL" \
        && adduser -s /bin/sh -D node \
        && chown node:node "$GHOST_INSTALL" \
        && su-exec node ghost install "$GHOST_VERSION" --db sqlite3 --no-prompt --no-stack --no-setup --dir "$GHOST_INSTALL" \
        && cd "$GHOST_INSTALL" \
        && su-exec node ghost config --ip 0.0.0.0 --port 2368 --no-prompt --db sqlite3 --url http://localhost:2368 --dbpath "$GHOST_CONTENT/data/ghost.db" \
        && su-exec node ghost config paths.contentPath "$GHOST_CONTENT" \
        && su-exec node ln -s config.production.json "$GHOST_INSTALL/config.development.json" \
        && readlink -f "$GHOST_INSTALL/config.development.json" \
        && mv "$GHOST_CONTENT" "$GHOST_INSTALL/content.orig" \
        && mkdir -p "$GHOST_CONTENT" && chown node:node "$GHOST_CONTENT" \
        && "$GHOST_INSTALL/current/node_modules/knex-migrator/bin/knex-migrator" --version

ENV PATH $PATH:$GHOST_INSTALL/current/node_modules/knex-migrator/bin

WORKDIR $GHOST_INSTALL

COPY docker-entrypoint.sh /usr/local/bin
RUN chmod +x /usr/local/bin/docker-entrypoint.sh

ENTRYPOINT ["docker-entrypoint.sh"]

CMD ["node", "current/index.js"]

Our Boot Script

Our entrypoint script docker-entrypoint.sh:

#!/bin/bash
set -e

if [[ "$*" == node*current/index.js* ]] && [ "$(id -u)" = '0' ];
  then
    chown -R node "$GHOST_CONTENT"
    exec su-exec node "$BASH_SOURCE" "$@"
fi

if [[ "$*" == node*current/index.js* ]];
  then
    baseDir="$GHOST_INSTALL/content.orig"
    for src in "$baseDir"/*/ "$baseDir"/themes/*;
      do
        src="${src%/}"
        target="$GHOST_CONTENT/${src#$baseDir/}"
        mkdir -p "$(dirname "$target")"
        if [ ! -e "$target" ];
          then
            tar -cC "$(dirname "$src")" "$(basename "$src")" | tar -xC "$(dirname "$target")"
        fi
      done

    knex-migrator-migrate --init --mgpath "$GHOST_INSTALL/current"
fi

prod() {
cat > /var/lib/ghost/config.development.json << EOF
{
  "url": "http://${SERVER_URL:-localhost}:${SERVER_PORT:-2368}",
  "server": {
    "port": ${SERVER_PORT:-2368},
    "host": "0.0.0.0"
  },
  "database": {
    "client": "sqlite3",
    "connection": {
      "filename": "/var/lib/ghost/content/data/ghost.db"
    }
  },
  "mail": {
    "transport": "SMTP",
    "from": "${FROM_NAME:-MyBlog} <${FROM_EMAIL:-ghost-blog@localhost}>",
    "options": {
      "service": "Mailgun",
      "host": "${SMTP_HOST:-localhost}",
      "port": ${SMTP_PORT:-25},
      "auth": {
        "user": "${SMTP_AUTH_USERNAME:-root}",
        "pass": "${SMTP_AUTH_PASSWORD:-password}"
      }
    }
  },
  "logging": {
    "transports": [
      "file",
      "stdout"
    ]
  },
  "process": "systemd",
  "paths": {
    "contentPath": "/var/lib/ghost/content"
  }
}
EOF
}

dev() {
cat > /var/lib/ghost/config.development.json << EOF
{
  "url": "http://${SERVER_URL:-localhost}:${SERVER_PORT:-2368}",
  "server": {
    "port": ${SERVER_PORT:-2368},
    "host": "0.0.0.0"
  },
  "database": {
    "client": "sqlite3",
    "connection": {
      "filename": "/var/lib/ghost/content/data/ghost.db"
    }
  },
  "mail": {
    "transport": "Direct"
  },
  "logging": {
    "transports": [
      "file",
      "stdout"
    ]
  },
  "process": "systemd",
  "paths": {
    "contentPath": "/var/lib/ghost/content"
  }
}
EOF
}

test(){
cat > /var/lib/ghost/config.development.json << EOF
{
  "url": "http://localhost:2368",
  "server": {
    "port": 2368,
    "host": "0.0.0.0"
  },
  "database": {
    "client": "sqlite3",
    "connection": {
      "filename": "/var/lib/ghost/content/data/ghost.db"
    }
  },
  "mail": {
    "transport": "Direct"
  },
  "logging": {
    "transports": [
      "file",
      "stdout"
    ]
  },
  "process": "systemd",
  "paths": {
    "contentPath": "/var/lib/ghost/content"
  }
}
EOF
}

if  [ "${ENV_TYPE}" = "PROD" ]
  then prod

elif [ "${ENV_TYPE}" = "DEV" ]
  then dev
  else test

fi

exec "$@"

The entrypoint script takes a couple of environment variables; as you can see, if they are not defined, defaults will be used.

Configurable Environment Variables:

  - ENV_TYPE=PROD
  - SERVER_PORT=2368
  - SERVER_URL=myblog.pistack.co.za
  - FROM_NAME=MyName
  - SMTP_HOST=mail.mydomain.co.za
  - SMTP_PORT=587
  - SMTP_AUTH_USERNAME=me@mydomain.co.za
  - SMTP_AUTH_PASSWORD=secret

Building our Ghost Image

I have a public image available if you don’t want to build and push, but for building:

$ docker build -t your-name/repo:tag .

Deploy Ghost with Traefik

Our ghost-compose.yml with Traefik will look like the following. Note that I mounted the source path to the container’s path; the source path is running on a replicated GlusterFS volume, which can be set up by following this post.

Also, for this demonstration I was using the domain pistack.co.za; you will need to use the domain of your choice.

version: "3.4"

services:
  ghost:
    image: rbekker87/armhf-ghost:2.0.3
    networks:
      - appnet
    volumes:
      - type: bind
        source: /mnt/volumes/myblog/content/data
        target: /var/www/ghost/content/data
    environment:
      - ENV_TYPE=PROD
      - SERVER_PORT=2368
      - SERVER_URL=myblog.pistack.co.za
      - FROM_NAME=MyName
      - SMTP_HOST=mail.mydomain.co.za
      - SMTP_PORT=587
      - SMTP_AUTH_USERNAME=me@mydomain.co.za
      - SMTP_AUTH_PASSWORD=secret
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.backend=ghost"
        - "traefik.backend.loadbalancer.swarm=true"
        - "traefik.docker.network=appnet"
        - "traefik.port=2368"
        - "traefik.frontend.passHostHeader=true"
        - "traefik.frontend.rule=Host:myblog.pistack.co.za"
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == worker]

networks:
  appnet:
    external: true

Deploy the stack:

$ docker stack deploy -c ghost-compose.yml web

Once the service is up, you will be able to reach your blog on the provided traefik.frontend.rule. If you don’t have traefik running, you can follow this post to get traefik up and running.
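
A quick sanity check once the stack is deployed (a sketch; the hostname is the one from the compose example):

$ docker stack services web
$ curl -s -o /dev/null -w '%{http_code}\n' http://myblog.pistack.co.za/   # expect a 2xx/3xx once the replicas are up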

Resources:

Build a Traefik Proxy Image for Your Raspberry Pi on Docker Swarm

In this post we will setup a Docker Image for Traefik Proxy on the ARM Architecture, specifically on the Raspberry Pi, which we will deploy to our Raspberry Pi Docker Swarm.

Then we will build and push our image to a registry, then setup traefik and also setup a web application that sits behind our Traefik Proxy.

What is Traefik

Traefik is a modern load balancer and reverse proxy built for microservices.

Dockerfile

We will be running Traefik on Alpine 3.8:

FROM rbekker87/armhf-alpine:3.8

ENV TRAEFIK_VERSION 1.7.0-rc3
ENV ARCH arm

ADD https://github.com/containous/traefik/releases/download/v${TRAEFIK_VERSION}/traefik_linux-${ARCH} /traefik

RUN apk add --no-cache ca-certificates \
    && chmod +x /traefik \
    && rm -rf /var/cache/apk/*

EXPOSE 80 8080 443

ENTRYPOINT ["/traefik"]

Build and Push

Build and Push your image to your registry of choice:

$ docker build -t your-user/repo:tag .
$ docker push your-user/repo:tag

If you do not want to push to a registry, I have a public image available at https://hub.docker.com/r/rbekker87/armhf-traefik/, the image itself is rbekker87/armhf-traefik:1.7.0-rc3

Deploy Traefik to the Swarm

From our traefik-compose.yml you will notice that I have set our network as external, so the network should exist prior to deploying the stack.

Let’s create the overlay network:

$ docker network create --driver overlay appnet

Below, the traefik-compose.yml, note that I’m using pistack.co.za as my domain:

version: "3.4"

services:
  traefik:
    image: rbekker87/armhf-traefik:1.7.0-rc3
    command:
      - "--api"
      - "--docker"
      - "--docker.swarmmode"
      - "--docker.domain=pistack.co.za"
      - "--docker.watch"
      - "--logLevel=DEBUG"
      - "--web"
    networks:
      - appnet
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
      - 8080:8080
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]

networks:
  appnet:
    external: true

Deploy the stack:

$ docker stack deploy -c traefik-compose.yml proxy

List the stacks:

$ docker stack ls
NAME                SERVICES
proxy               1

Check if the services in your stack are running. Since our deploy mode is global, there will be a replica running on each node, and in my swarm I’ve got 3 nodes:

$ docker stack services proxy
ID                  NAME                MODE                REPLICAS            IMAGE                    PORTS
16x31j7o0f0r        proxy_traefik       global              3/3                 rbekker87/armhf-traefik:1.7.0-rc3   *:80->80/tcp,*:8080->8080/tcp
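
To dig a bit deeper you can tail the Traefik logs, and since the API is enabled on port 8080, a health call should return a small JSON document (an assumption based on Traefik 1.x's API; replace the placeholder with one of your manager nodes):

$ docker service logs proxy_traefik
$ curl -s http://<manager-ip>:8080/health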

Deploy a Web Service hooked up to Traefik

Pre-Requirement:

To register subdomains on the fly, set the DNS for your domain to the following records (I’m using pistack.co.za in this example; a quick resolution check follows the list):

  • pistack.co.za A x.x.x.x
  • *.pistack.co.za A x.x.x.x
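
To confirm the wildcard record resolves (x.x.x.x being your swarm's public IP):

$ dig +short pistack.co.za
$ dig +short whoami.pistack.co.za   # any subdomain should return the same address via the wildcard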

Next, we will deploy a web app that will be associated with our Traefik service; we will inform Traefik of our web app’s FQDN and the port that should be registered with the proxy.

Our app-compose.yml file for our webapp:

version: "3.4"

services:
  whoami:
    image: rbekker87/golang-whoami:alpine-amrhf
    networks:
      - appnet
    deploy:
      replicas: 3
      labels:
        - "traefik.backend=whoami"
        - "traefik.port=80"
        - "traefik.frontend.rule=Host:whoami.pistack.co.za"
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == worker]
    healthcheck:
      test: nc -vz 127.0.0.1 80 || exit 1
      interval: 60s
      timeout: 3s
      retries: 3

networks:
  appnet:
    external: true

In the above compose file, you will notice that traefik.backend is set to our service name, and traefik.port is the container port that the proxy will forward requests to; since the proxy and the whoami container are on the same network, they will be able to communicate with each other. Then we also have our frontend rule, which is the endpoint on which we will reach our application.

Deploy the stack:

$ docker stack deploy -c whoami.yml web
Creating service web_whoami

List the tasks running in our web stack:

$ docker stack services web
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
31ylfcfb7uyw        web_whoami          replicated          3/3                 rbekker87/golang-whoami:alpine-amrhf

Once all the replicas are running, move along to test the application.
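
If DNS has not propagated yet, you can also test through the proxy directly by passing the Host header (a sketch; <manager-ip> is a placeholder for one of your swarm nodes):

$ curl -H "Host: whoami.pistack.co.za" http://<manager-ip>/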

Testing our Application:

I have 3 replicas each running on their own container, so each container will respond with its own hostname:

$ docker service ps web_whoami
ID                  NAME                IMAGE                                  NODE                DESIRED STATE       CURRENT STATE            ERROR                              PORTS
ivn8fgfosvgd        web_whoami.1        rbekker87/golang-whoami:alpine-amrhf   rpi-01              Running             Running 26 minutes ago
rze6u6z56aop        web_whoami.2        rbekker87/golang-whoami:alpine-amrhf   rpi-02              Running             Running 26 minutes ago
6fjua869r498        web_whoami.3        rbekker87/golang-whoami:alpine-amrhf   rpi-04              Running             Running 23 minutes ago

Making our 1st GET request:

$ curl http://whoami.pistack.co.za/
Hostname: 43f5f0a6682f
IP: 127.0.0.1
IP: 10.0.0.138
IP: 10.0.0.218
IP: 172.18.0.4
GET / HTTP/1.1
Host: whoami.pistack.co.za
User-Agent: curl/7.38.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 165.73.96.95, 10.255.0.2
X-Forwarded-Host: whoami.pistack.co.za
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 31b37f9714d3
X-Real-Ip: 10.255.0.2

Our 2nd GET Request:

$ curl http://whoami.pistack.co.za/
Hostname: d1c17a476414
IP: 127.0.0.1
IP: 10.0.0.138
IP: 10.0.0.71
IP: 172.19.0.5
GET / HTTP/1.1
Host: whoami.pistack.co.za
User-Agent: curl/7.38.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 165.73.96.95, 10.255.0.2
X-Forwarded-Host: whoami.pistack.co.za
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 02b0ff6eab73
X-Real-Ip: 10.255.0.2

And our 3rd GET Request:

$ curl http://whoami.pistack.co.za/
Hostname: 17c817a1813b
IP: 172.18.0.6
IP: 127.0.0.1
IP: 10.0.0.138
IP: 10.0.0.73
GET / HTTP/1.1
Host: whoami.pistack.co.za
User-Agent: curl/7.38.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 165.73.96.95, 10.255.0.2
X-Forwarded-Host: whoami.pistack.co.za
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 31b37f9714d3
X-Real-Ip: 10.255.0.2

Hope this was useful.

Resources:

Building a Raspberry Pi Nginx Image With Caching on Alpine for Docker Swarm

In this guide, we will be creating an Nginx reverse proxy with the ability to cache static content, using an Alpine image.

We will then push the image to GitLab’s private registry, and then run the service on Docker Swarm.

Create the backend service:

We will upstream to our blog running Ghost, which you can deploy using:

$ docker service create --name blog --network docknet rbekker87/armhf-ghost:2.0.3

Current File Structure:

Our file structure for the assets we need to build the reverse proxy:

$ find .
./conf.d
./conf.d/blog.conf
./Dockerfile
./nginx.conf
  • Dockerfile
FROM hypriot/rpi-alpine-scratch
MAINTAINER Ruan Bekker

RUN apk update && \
    apk add nginx && \
    rm -rf /etc/nginx/nginx.conf && \
    chown -R nginx:nginx /var/lib/nginx && \
    rm -rf /var/cache/apk/*

ADD nginx.conf /etc/nginx/
ADD conf.d/blog.conf /etc/nginx/conf.d/

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
  • nginx.conf
user nginx;
worker_processes 1;

events {
    worker_connections 1024;
    }

error_log  /var/log/nginx/nginx_error.log warn;

http {

    sendfile        on;
    tcp_nodelay         on;

    gzip              on;
    gzip_http_version 1.0;
    gzip_proxied      any;
    gzip_min_length   500;
    gzip_disable      "MSIE [1-6]\.";
    gzip_types        text/plain text/xml text/css
                      text/comma-separated-values
                      text/javascript
                      application/x-javascript
                      application/atom+xml;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    error_log   /var/log/nginx/error.log;

    proxy_cache_path /var/cache/nginx/ levels=1:2 keys_zone=nginx_cache:5m max_size=128m inactive=60m;

    keepalive_timeout  60;
    server_tokens      off;

    include /etc/nginx/conf.d/*.conf;

}

Hostname resolution to our Ghost blog service: in our swarm we have a service called blog which is attached to the docknet network, so DNS resolution will resolve to the VIP of the service, as seen below:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                    PORTS
nq42a6jfwx3d        blog                replicated          1/1                 rbekker87/armhf-ghost:2.0.3
  • conf.d/blog.conf
upstream ghost_blog {
    server blog:2368;
    }

server {
    listen 80;
    server_name blog.yourdomain.com;

    access_log  /var/log/nginx/blog_access.log  main;
    error_log   /var/log/nginx/blog_error.log;

    location / {

        proxy_cache                 nginx_cache;
        add_header                  X-Proxy-Cache $upstream_cache_status;
        proxy_ignore_headers        Cache-Control;
        proxy_cache_valid any       10m;
        proxy_cache_use_stale       error timeout http_500 http_502 http_503 http_504;

        proxy_pass                  http://ghost_blog;
        proxy_redirect              off;

        proxy_set_header            Host $host;
        proxy_set_header            X-Real-IP $remote_addr;
        proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header            X-Forwarded-Host $server_name;
    }
}
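
Since the config adds the X-Proxy-Cache header, caching can be verified from the client side once the proxy is up (a sketch using the example hostname from the config):

$ curl -s -o /dev/null -D - http://blog.yourdomain.com/ | grep -i x-proxy-cache   # first request: expect MISS
$ curl -s -o /dev/null -D - http://blog.yourdomain.com/ | grep -i x-proxy-cache   # repeat within 10 minutes: expect HIT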

Building the Image and Pushing to Gitlab

I’m using Gitlab in this demonstration, but you can use the registry of your choice:

$ docker login registry.gitlab.com
$ docker build -t registry.gitlab.com/user/docker/arm-nginx:caching .
$ docker tag registry.gitlab.com/user/docker/arm-nginx:caching registry.gitlab.com/user/docker/arm-nginx:caching
$ docker push registry.gitlab.com/user/docker/arm-nginx:caching

Deploy

Create the Nginx Reverse Proxy Service on Docker Swarm:

$ docker service create --name nginx_proxy \
--network docknet \
--publish 80:80 \
--replicas 1 \
--with-registry-auth registry.gitlab.com/user/docker/arm-nginx:caching

Listing our Services:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                    PORTS
je7x21l7egoh        nginx_proxy         replicated          1/1                 registry.gitlab.com/user/docker/arm-nginx:caching   *:80->80/tcp
nq42a6jfwx3d        blog                replicated          1/1                 rbekker87/armhf-ghost:2.0.3

Once you access your proxy on port 80, you should see your Ghost Blog Homepage like below:

Have a look at the benchmark performance when using Nginx with caching enabled

Resources:

Nginx Caching Performance for Static Content on Docker Swarm With RaspberryPi

The Environment:

I had my Ghost Blog listening on port 2368 and exposing port 80 on Docker so that the port translation directs port 80 traffic to port 2368 on Ghost directly.

Alex responded on my tweet and introduced Nginx Caching:

With this approach the benchmarking results were not so great in terms of requests per second, and as this hostname will only be used for a blog, it’s a great idea to cache the content. This was achieved with the help of Alex’s blog: blog.alexellis.io/save-and-boost-with-nginx/

How Nginx was Configured:

I have a blogpost on how I setup Nginx on an Alpine Image, where I setup caching and proxy-pass the connections through to my ghost blog.

Benchmarking: Before Nginx with Caching was Implemented:

When doing an Apache benchmark, I got 9.31 requests per second, performing the test on my LAN:

$ ab -n 500 -c 10 http://rbkr.ddns.net/

This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking rbkr.ddns.net (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests


Server Software:
Server Hostname:        blog.pistack.co.za
Server Port:            80

Document Path:          /
Document Length:        5470 bytes

Concurrency Level:      10
Time taken for tests:   53.725 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      2863000 bytes
HTML transferred:       2735000 bytes
Requests per second:    9.31 [#/sec] (mean)
Time per request:       1074.501 [ms] (mean)
Time per request:       107.450 [ms] (mean, across all concurrent requests)
Transfer rate:          52.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   0.5      2       6
Processing:   685 1068  68.7   1057    1306
Waiting:      683 1067  68.6   1056    1306
Total:        689 1070  68.7   1058    1312

Percentage of the requests served within a certain time (ms)
  50%   1058
  66%   1088
  75%   1102
  80%   1110
  90%   1163
  95%   1218
  98%   1240
  99%   1247
 100%   1312 (longest request)

Benchmarking: After Nginx Caching was Implemented:

After Nginx Caching was Implemented, I got 1067.73 requests per second using apache benchmark over a LAN connection! Absolutely awesome!

$ ab -n 500 -c 10 http://blog.pistack.co.za/
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking blog.pistack.co.za (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests


Server Software:        nginx
Server Hostname:        blog.pistack.co.za
Server Port:            80

Document Path:          /
Document Length:        5470 bytes

Concurrency Level:      10
Time taken for tests:   0.468 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      2880500 bytes
HTML transferred:       2735000 bytes
Requests per second:    1067.73 [#/sec] (mean)
Time per request:       9.366 [ms] (mean)
Time per request:       0.937 [ms] (mean, across all concurrent requests)
Transfer rate:          6007.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        3    4   1.4      4      10
Processing:     3    5   1.6      4      10
Waiting:        2    4   1.6      4      10
Total:          6    9   2.7      8      17

Percentage of the requests served within a certain time (ms)
  50%      8
  66%      8
  75%      9
  80%      9
  90%     15
  95%     15
  98%     15
  99%     16
 100%     17 (longest request)

Resources:

Thanks to Alex Ellis for the suggestion on this, and definitely have a look at blog.alexellis.io as he has some epic content on his blog!