Ruan Bekker's Blog

From a Curious mind to Posts on Github

Create Secrets With Vaults Transits Secret Engine

Vault’s transit secrets engine handles cryptographic functions on data-in-transit. Vault doesn’t store the data sent to the secrets engine, so it can also be viewed as encryption as a service.

In this tutorial we will demonstrate how to use Vault’s Transit Secret Engine.

Enable the Transit Engine:

Enable the transit secrets engine using the /sys/mounts endpoint:

$ curl --header "X-Vault-Token: $VAULT_TOKEN" -XPOST -d '{"type": "transit", "description": "encs encryption"}' http://127.0.0.1:8200/v1/sys/mounts/transit

Create the Key Ring:

Create an encryption key ring named fookey using the transit/keys endpoint:
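The key is created with a simple POST against the same endpoint (a minimal sketch; an empty request body creates a default aes256-gcm96 key):

$ curl --header "X-Vault-Token: $VAULT_TOKEN" -XPOST http://127.0.0.1:8200/v1/transit/keys/fookey

Reading the key back returns its metadata: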

$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" -XGET http://127.0.0.1:8200/v1/transit/keys/fookey | jq
{
  "request_id": "8375227a-4a9f-a108-0b89-84c448419e80",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "allow_plaintext_backup": false,
    "deletion_allowed": false,
    "derived": false,
    "exportable": false,
    "keys": {
      "1": 1554654295
    },
    "latest_version": 1,
    "min_available_version": 0,
    "min_decryption_version": 1,
    "min_encryption_version": 0,
    "name": "fookey",
    "supports_decryption": true,
    "supports_derivation": true,
    "supports_encryption": true,
    "supports_signing": false,
    "type": "aes256-gcm96"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Encoding

The transit engine expects plaintext to be base64 encoded, so encode your string first:

$ base64 <<< "hello world"
aGVsbG8gd29ybGQK

Encrypt

To encrypt your secret, use the transit/encrypt endpoint:

$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" --request POST  --data '{"plaintext": "aGVsbG8gd29ybGQK"}' http://127.0.0.1:8200/v1/transit/encrypt/fookey | jq
{
  "request_id": "ab00ba0f-9e45-0aca-e3c1-7765fd83fc3c",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "ciphertext": "vault:v1:Yo4U6xXFM2FoBOaUrw0w3EpSlJS6gmsa4HP1xKtjrk0+xSqi5Rvjvg=="
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Decrypt:

Use the transit/decrypt endpoint to decrypt the ciphertext:

$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" --request POST  --data '{"ciphertext": "vault:v1:Yo4U6xXFM2FoBOaUrw0w3EpSlJS6gmsa4HP1xKtjrk0+xSqi5Rvjvg=="}' http://127.0.0.1:8200/v1/transit/decrypt/fookey | jq
{
  "request_id": "3d9743a0-2daf-823c-f413-8c8a90753479",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "plaintext": "aGVsbG8gd29ybGQK"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Decoding

Decode the response:

$ base64 --decode <<< "aGVsbG8gd29ybGQK"
hello world
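You can also chain the decrypt and decode steps together with jq (a sketch reusing the ciphertext from above):

$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" --request POST --data '{"ciphertext": "vault:v1:Yo4U6xXFM2FoBOaUrw0w3EpSlJS6gmsa4HP1xKtjrk0+xSqi5Rvjvg=="}' http://127.0.0.1:8200/v1/transit/decrypt/fookey | jq -r '.data.plaintext' | base64 --decode
hello world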

Use the Vault API to Provision App Keys and Create KV Pairs

In this tutorial we will use the Vault API to create a user and allow that user to write and read key/value pairs from a given path.

Credentials / Authentication

Export the Vault root token:

$ export ROOT_TOKEN="$(cat ~/.vault-token)"
$ export VAULT_TOKEN=${ROOT_TOKEN}

Check the vault status:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/sys/health | jq
{
  "initialized": true,
  "sealed": false,
  "standby": false,
  "performance_standby": false,
  "replication_performance_mode": "disabled",
  "replication_dr_mode": "disabled",
  "server_time_utc": 1554652468,
  "version": "1.1.0",
  "cluster_name": "vault-cluster-bfb00cd7",
  "cluster_id": "dc1dc9a6-xx-xx-xx-a2870f475e7a"
}

Do a lookup for the root user:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/auth/token/lookup-self | jq
{
  "request_id": "69a19f66-5bad-3af2-81a5-81ca24e50b02",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "accessor": "A7Xkik1ebWpUfzqNrvADmQ08",
    "creation_time": 1554651149,
    "creation_ttl": 0,
    "display_name": "root",
    "entity_id": "",
    "expire_time": null,
    "explicit_max_ttl": 0,
    "id": "s.po8HkMdCnnAerlCAeHGGGszQ",
    "meta": null,
    "num_uses": 0,
    "orphan": true,
    "path": "auth/token/root",
    "policies": [
      "root"
    ],
    "ttl": 0,
    "type": "service"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the Roles

Enable the AppRole auth method and list the enabled auth methods to verify:

$ curl -s -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"type": "approle"}' http://127.0.0.1:8200/v1/sys/auth/approle | jq
$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/sys/auth | jq
{
  "token/": {
    "accessor": "auth_token_31f2381e",
    "config": {
      "default_lease_ttl": 0,
      "force_no_cache": false,
      "max_lease_ttl": 0,
      "token_type": "default-service"
    },
    "description": "token based credentials",
    "local": false,
    "options": null,
    "seal_wrap": false,
    "type": "token"
  },
  "approle/": {
    "accessor": "auth_approle_d542dcad",
    "config": {
      "default_lease_ttl": 0,
      "force_no_cache": false,
      "max_lease_ttl": 0,
      "token_type": "default-service"
    },
    "description": "",
    "local": false,
    "options": {},
    "seal_wrap": false,
    "type": "approle"
  },
  "request_id": "20554948-b8e0-4254-f21d-f9ad25f1e5d5",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "approle/": {
      "accessor": "auth_approle_d542dcad",
      "config": {
        "default_lease_ttl": 0,
        "force_no_cache": false,
        "max_lease_ttl": 0,
        "token_type": "default-service"
      },
      "description": "",
      "local": false,
      "options": {},
      "seal_wrap": false,
      "type": "approle"
    },
    "token/": {
      "accessor": "auth_token_31f2381e",
      "config": {
        "default_lease_ttl": 0,
        "force_no_cache": false,
        "max_lease_ttl": 0,
        "token_type": "default-service"
      },
      "description": "token based credentials",
      "local": false,
      "options": null,
      "seal_wrap": false,
      "type": "token"
    }
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the test policy:

$ curl -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"policy": "{\"name\": \"test\", \"path\": {\"secret/*\": {\"policy\": \"write\"}}}"}' http://127.0.0.1:8200/v1/sys/policy/test
$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/sys/policy/test | jq
{
  "name": "test",
  "rules": "{\"name\": \"test\", \"path\": {\"secret/*\": {\"policy\": \"write\"}}}",
  "request_id": "e4f55dc0-575f-ead9-48f6-43154153889a",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "name": "test",
    "rules": "{\"name\": \"test\", \"path\": {\"secret/*\": {\"policy\": \"write\"}}}"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the app role with the test policy attached, then list the roles:

$ curl -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"policies": "test"}' http://127.0.0.1:8200/v1/auth/approle/role/app
$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" 'http://127.0.0.1:8200/v1/auth/approle/role?list=true' | jq .
{
  "request_id": "e645cad9-9010-4299-0e6b-0baf6d9194b8",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "keys": [
      "app"
    ]
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Enable the kv store:

$ curl -H "X-Vault-Token: ${VAULT_TOKEN}" -XPOST --data '{"type": "kv", "description": "my key value store", "config": {"force_no_cache": true}}' http://127.0.0.1:8200/v1/sys/mounts/secret

Create the User Credentials

Get the role_id:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/auth/approle/role/app/role-id | jq
{
  "request_id": "e803a1bf-a492-dad7-68db-bb1506752e03",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "role_id": "3e365c72-7aad-f4e4-521c-d7cf0dd83c0f"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the secret_id:

$ curl -s -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/auth/approle/role/app/secret-id | jq
{
  "request_id": "b56d20c0-ff8a-a1fe-4d5f-42e57b625b83",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "secret_id": "5eecfe29-d6e1-50e6-7a70-04c6bea42b76",
    "secret_id_accessor": "2fa80586-32b9-1c6f-fe1d-7c547e5403e5"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the token with the role_id and secret_id:

$ curl -s -XPOST -d '{"role_id": "3e365c72-7aad-f4e4-521c-d7cf0dd83c0f","secret_id": "5eecfe29-d6e1-50e6-7a70-04c6bea42b76"}' http://127.0.0.1:8200/v1/auth/approle/login | jq
{
  "request_id": "82470940-ef09-bcbb-f7a0-bdf085b4f47b",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "s.7EtwtRGsZWOtkqcMvj3UMLP0",
    "accessor": "2TPL1vg5IZXgVF6Xf1RRzbmL",
    "policies": [
      "default",
      "test"
    ],
    "token_policies": [
      "default",
      "test"
    ],
    "metadata": {
      "role_name": "app"
    },
    "lease_duration": 2764800,
    "renewable": true,
    "entity_id": "d5051b01-b7ce-626c-a9f4-e1663f8c23e8",
    "token_type": "service",
    "orphan": true
  }
}

Create KV Pairs with New User

Export the received token so that subsequent requests authenticate as the new user:

$ export APP_TOKEN=s.7EtwtRGsZWOtkqcMvj3UMLP0
$ export VAULT_TOKEN=$APP_TOKEN

Verify that you can look up your own token info:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/auth/token/lookup-self | jq
{
  "request_id": "2e69cd68-8668-3159-6440-c430cb61d2e6",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "accessor": "2TPL1vg5IZXgVF6Xf1RRzbmL",
    "creation_time": 1554651882,
    "creation_ttl": 2764800,
    "display_name": "approle",
    "entity_id": "d5051b01-b7ce-626c-a9f4-e1663f8c23e8",
    "expire_time": "2019-05-09T15:44:42.1013993Z",
    "explicit_max_ttl": 0,
    "id": "s.7EtwtRGsZWOtkqcMvj3UMLP0",
    "issue_time": "2019-04-07T15:44:42.1013788Z",
    "meta": {
      "role_name": "app"
    },
    "num_uses": 0,
    "orphan": true,
    "path": "auth/approle/login",
    "policies": [
      "default",
      "test"
    ],
    "renewable": true,
    "ttl": 2764556,
    "type": "service"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create a KV pair:

$ curl -s -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"app_password": "secret123"}' http://127.0.0.1:8200/v1/secret/app01/app_password

Read the secret from KV pair:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/secret/app01/app_password | jq
{
  "request_id": "70d5f16d-2abb-fcfd-063f-0e21d9cef8fd",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "app_password": "secret123"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Try to write outside the allowed path:

$ curl -s -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"app_password": "secret123"}' http://127.0.0.1:8200/v1/secrets/app01/app_password
{"errors":["1 error occurred:\n\t* permission denied\n\n"]}

Persist Vault Data With Amazon S3 as a Storage Backend

In a previous post we set up the Vault server on Docker, using a file backend to persist our data.

In this tutorial we will configure Vault to use Amazon S3 as the storage backend to persist our data.

Provision S3 Bucket

Create the S3 Bucket where our data will reside:

$ aws s3 mb --region=eu-west-1 s3://somename-vault-backend

Vault Config

Create the vault config, where we will provide details about our storage backend and configuration for the vault server:

$ vim volumes/config/s3vault.json

Populate the config file with the following details; you will just need to provide your own credentials:

{
  "backend": {
    "s3": {
      "region": "eu-west-1",
      "access_key": "ACCESS_KEY",
      "secret_key": "SECRET_KEY",
      "bucket": "somename-vault-backend"
    }
  },
  "listener": {
    "tcp":{
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  },
  "ui": true
}

Docker Compose

As we are using Docker to deploy our Vault server, this is our docker-compose.yml:

$ cat > docker-compose.yml << EOF
version: '2'
services:
  vault:
    image: vault
    container_name: vault
    ports:
      - "8200:8200"
    restart: always
    volumes:
      - ./volumes/logs:/vault/logs
      - ./volumes/file:/vault/file
      - ./volumes/config:/vault/config
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config=/vault/config/s3vault.json
EOF

Deploy the vault server:

$ docker-compose up

Go ahead and create some secrets, then deploy the docker container on another host to test out the data persistence.
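For example, something along these lines (a rough sketch, assuming the vault CLI is installed locally and pointed at the new server):

$ export VAULT_ADDR='http://127.0.0.1:8200'
$ vault operator init
$ vault operator unseal   # repeat until the unseal threshold is reached
$ vault login <your-root-token>
$ vault secrets enable -version=1 -path=secret kv
$ vault kv put secret/demo message=hello

If you then bring the container up on another host with the same S3 config, unseal it and run vault kv get secret/demo, the secret should still be there.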

Setup Prometheus and Node Exporter on Ubuntu for Epic Monitoring

image

Prometheus is one of those awesome open source monitoring services that I simply cannot live without. Prometheus is a time series database that collects metrics from services using its exporters functionality. Prometheus has its own query language called PromQL and makes graphing epic visualizations with services such as Grafana a breeze.

What are we doing today

We will install the prometheus service and set up node_exporter, which exposes node related metrics such as cpu, memory and io. The exporter configuration on prometheus scrapes the node_exporter endpoint and stores the samples in prometheus's time series database, which can then be used by services such as Grafana to visualize the data.

Other exporters are also available, such as haproxy_exporter, blackbox_exporter etc. There is also the pushgateway, which you push data to, and your scrape configuration then scrapes the data from the pushgateway endpoint. In a later tutorial, we will set up the pushgateway as well.
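As a quick preview of the pushgateway flow, pushing a metric is just an HTTP request (a minimal sketch, assuming a pushgateway listening on localhost:9091):

$ echo "some_metric 42" | curl --data-binary @- http://localhost:9091/metrics/job/demo_job

Prometheus then scrapes the pushgateway's /metrics endpoint like any other target.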

Install Prometheus

First, let’s provision our dedicated system users for prometheus and node exporter:

$ useradd --no-create-home --shell /bin/false prometheus
$ useradd --no-create-home --shell /bin/false node_exporter

Create the directories for its system files:

$ mkdir /etc/prometheus
$ mkdir /var/lib/prometheus

Apply the permissions:

$ chown prometheus:prometheus /etc/prometheus
$ chown prometheus:prometheus /var/lib/prometheus

Next, update your system:

$ apt update && apt upgrade -y

Let's install prometheus. Head over to https://prometheus.io/download/ to find the latest version, then download and install it:

$ wget https://github.com/prometheus/prometheus/releases/download/v2.8.0/prometheus-2.8.0.linux-amd64.tar.gz
$ tar -xf prometheus-2.8.0.linux-amd64.tar.gz
$ cp prometheus-2.8.0.linux-amd64/prometheus /usr/local/bin/
$ cp prometheus-2.8.0.linux-amd64/promtool /usr/local/bin/
$ chown prometheus:prometheus /usr/local/bin/prometheus
$ chown prometheus:prometheus /usr/local/bin/promtool
$ cp -r prometheus-2.8.0.linux-amd64/consoles /etc/prometheus/
$ cp -r prometheus-2.8.0.linux-amd64/console_libraries /etc/prometheus/
$ chown -R prometheus:prometheus /etc/prometheus/consoles
$ chown -R prometheus:prometheus /etc/prometheus/console_libraries
$ rm -rf prometheus-2.8.0.linux-amd64*

Configure Prometheus

We need to tell prometheus to scrape itself in order to get prometheus performance data. Edit the prometheus configuration:

$ vim /etc/prometheus/prometheus.yml

And add a scrape config: set the scrape interval, the job name (which will appear as a label on your metrics) and the endpoint it needs to scrape:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

Apply permissions to the configuration file:

$ chown prometheus:prometheus /etc/prometheus/prometheus.yml

Next, we need to define a systemd unit file so we can control the daemon using systemd:

$ vim /etc/systemd/system/prometheus.service

The config:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

Since we created a new systemd unit file, we need to reload the systemd daemon, then start the service:

$ systemctl daemon-reload
$ systemctl start prometheus

Let’s look at the status to see if everything works as expected:

$ systemctl status prometheus
prometheus.service - Prometheus
   Loaded: loaded (/etc/systemd/system/prometheus.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-03-26 11:59:10 UTC; 6s ago
 Main PID: 16374 (prometheus)
    Tasks: 9 (limit: 4704)
   CGroup: /system.slice/prometheus.service
           └─16374 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates=/etc/prometheus/consoles --web.console.libraries=

...
Mar 26 11:59:10 ip-172-31-41-126 prometheus[16374]: level=info ts=2019-03-26T11:59:10.893770598Z caller=main.go:655 msg="TSDB started"

Seems legit! Enable the service on startup:

$ systemctl enable prometheus
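Prometheus should now be listening on port 9090; a quick sanity check is to hit its own metrics endpoint:

$ curl -s http://localhost:9090/metrics | head -5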

Install Node Exporter

Now that we have prometheus up and running, we can start adding exporters to publish data into our prometheus time series database. As mentioned before, with node exporter we will allow prometheus to scrape the node exporter endpoint to consume metrics about the node:

You will find the latest version on their website, which I referenced earlier in this post.

$ wget https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz
$ tar -xf node_exporter-0.17.0.linux-amd64.tar.gz
$ cp node_exporter-0.17.0.linux-amd64/node_exporter /usr/local/bin
$ chown node_exporter:node_exporter /usr/local/bin/node_exporter
$ rm -rf node_exporter-0.17.0.linux-amd64*

Create the systemd unit file:

$ vim /etc/systemd/system/node_exporter.service

Apply this configuration:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

Reload the systemd daemon and start node exporter:

$ systemctl daemon-reload
$ systemctl start node_exporter

Look at the status:

$ systemctl status node_exporter
node_exporter.service - Node Exporter
   Loaded: loaded (/etc/systemd/system/node_exporter.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-03-26 12:01:39 UTC; 5s ago
 Main PID: 16474 (node_exporter)
    Tasks: 4 (limit: 4704)
   CGroup: /system.slice/node_exporter.service
           └─16474 /usr/local/bin/node_exporter

...
Mar 26 12:01:39 ip-172-31-41-126 node_exporter[16474]: time="2019-03-26T12:01:39Z" level=info msg="Listening on :9100" source="node_exporter.go:111"

If everything looks good, enable the service on boot:

$ systemctl enable node_exporter
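You can verify that node exporter is serving metrics on port 9100:

$ curl -s http://localhost:9100/metrics | grep ^node_ | head -5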

Configure Node Exporter

Now that we have node exporter running, we need to tell prometheus how to scrape node exporter, so that the node related metrics can end up in prometheus. Edit the prometheus config:

$ vim /etc/prometheus/prometheus.yml

I'm providing the full config; the new part is the last section, where you can see the job name is node_exporter:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']
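Before restarting, you can optionally validate the configuration with promtool, which we copied to /usr/local/bin earlier:

$ promtool check config /etc/prometheus/prometheus.yml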

Once the config is saved, restart prometheus and have a look at the status to see if everything is working as expected:

$ systemctl restart prometheus
$ systemctl status prometheus

Nginx Reverse Proxy

Let's add a layer of security and front our setup with an nginx reverse proxy, so that we don't have to access prometheus on high ports and we have the option to enable basic http authentication. Install nginx:

$ apt install nginx apache2-utils -y

Create the authentication file:

$ htpasswd -c /etc/nginx/.htpasswd admin

Create the nginx site configuration; this will tell nginx to accept connections on port 80 and reverse proxy them to localhost port 9090, if authenticated:

$ rm /etc/nginx/sites-enabled/default
$ vim /etc/nginx/sites-enabled/prometheus.conf

And this is the config:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;


    location / {
            auth_basic "Prometheus Auth";
            auth_basic_user_file /etc/nginx/.htpasswd;
            proxy_pass http://localhost:9090;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
}

Reload nginx configuration:

$ systemctl reload nginx

Access the Beauty of Prometheus Land!

Once you have authenticated, head over to Status, where you will see info such as your targets; these will be the endpoints that prometheus is scraping:

image

From the main screen, let’s dive into some queries using PromQL. Also see my Prometheus Cheatsheet.

For the first query, we want to see the available memory of this node in bytes (node_memory_MemAvailable_bytes):

image

Now since the value is in bytes, let’s convert the value to MB, (node_memory_MemAvailable_bytes/1024/1024)

image

Let’s say we want to see the average memory available in 5 minute buckets:

image
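The query in the screenshot is not reproduced here, but in PromQL it would look something like this (a sketch using avg_over_time; add the same /1024/1024 division if you want MB):

avg_over_time(node_memory_MemAvailable_bytes[5m])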

That’s a few basic ones, but feel free to checkout my Prometheus Cheatsheet for other examples. I update them as I find more queries.

Thanks

Hope this was informative. I am planning to publish a post on visualizing prometheus data with Grafana (which is EPIC!) and installing Pushgateway for custom integrations.

How to Fix the Following Signatures Couldn't Be Verified Because the Public Key Is Not Available With Apt

I was trying to install grafana on ubuntu when I was faced with the "the following signatures couldn't be verified because the public key is not available" error, as seen below:

$ sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
Hit:1 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:3 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:4 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:5 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports/universe Sources [2068 B]
Get:6 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [3492 B]
Get:7 https://packages.grafana.com/oss/deb stable InRelease [12.1 kB]
Err:7 https://packages.grafana.com/oss/deb stable InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8C8C34C524098CB6
Reading package lists... Done

In order to continue, we need to import the trusted key:

$ apt-key adv --keyserver keyserver.ubuntu.com --recv-keys  8C8C34C524098CB6
Executing: /tmp/apt-key-gpghome.9xlwQh2M06/gpg.1.sh --keyserver keyserver.ubuntu.com --recv-keys 8C8C34C524098CB6
gpg: key 8C8C34C524098CB6: public key "Grafana <info@grafana.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1

Now that the key has been imported, we can update and continue:

$ apt update
Hit:1 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease
Get:5 https://packages.grafana.com/oss/deb stable InRelease [12.1 kB]
Get:6 https://packages.grafana.com/oss/deb stable/main amd64 Packages [10.8 kB]
Fetched 22.9 kB in 1s (32.7 kB/s)
Reading package lists... Done

Setup Hashicorp Vault Server on Docker and a Getting Started CLI Guide

Vault is one of Hashicorp’s awesome services, which enables you to centrally store, access and distribute dynamic secrets such as tokens, passwords, certificates and encryption keys.

What will we be doing today

We will set up a Vault server on Docker and walk through a getting started guide with the Vault CLI to initialize the vault, and to create, use and manage secrets.

Setting up the Vault Server

Create the directory structure:

$ touch docker-compose.yml
$ mkdir -p volumes/{config,file,logs}

Populate the vault config, vault.json. (As you can see the storage backend is a local file; in the next couple of posts I will show how to persist the data to Amazon S3.)

$ cat > volumes/config/vault.json << EOF
{
  "backend": {
    "file": {
      "path": "/vault/file"
    }
  },
  "listener": {
    "tcp":{
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  },
  "ui": true
}
EOF

Populate the docker-compose.yml:

$ cat > docker-compose.yml << EOF
version: '2'
services:
  vault:
    image: vault
    container_name: vault
    ports:
      - "8200:8200"
    restart: always
    volumes:
      - ./volumes/logs:/vault/logs
      - ./volumes/file:/vault/file
      - ./volumes/config:/vault/config
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config=/vault/config/vault.json
EOF

Start the Vault Server:

$ docker-compose up

The UI is available at http://localhost:8200/ui and the API at http://localhost:8200.
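A quick way to confirm the API is reachable (a sketch; at this point the health endpoint will report the vault as not yet initialized):

$ curl -s http://127.0.0.1:8200/v1/sys/health | jq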

Interacting with the Vault CLI

I will demonstrate how to use the Vault CLI to interact with Vault. Let's start by installing the vault CLI tools. I am using a Mac, so I will be using brew:

$ brew install vault

Set environment variables:

$ export VAULT_ADDR='http://127.0.0.1:8200'

Initialize the Vault Cluster:

Initialize the new vault cluster with 6 key shares and a key threshold of 3:

$ vault operator init -key-shares=6 -key-threshold=3
Unseal Key 1: RntjR...DQv
Unseal Key 2: 7E1bG...0LL+
Unseal Key 3: AEuhl...A1NO
Unseal Key 4: bZU76...FMGl
Unseal Key 5: DmEjY...n7Hk
Unseal Key 6: pC4pK...XbKb

Initial Root Token: s.F0JGq..98s2U

Vault initialized with 6 key shares and a key threshold of 3. Please
securely distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.

Vault does not store the generated master key. Without at least 3 keys to
reconstruct the master key, Vault will remain permanently sealed!

It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.

In order to unseal the vault cluster, we need to supply it with 3 key shares:

$ vault operator unseal RntjR...DQv
$ vault operator unseal bZU76...FMGl
$ vault operator unseal pC4pK...XbKb

Ensure the vault is unsealed:

$ vault status -format=json
{
  "type": "shamir",
  "initialized": true,
  "sealed": false,
  "t": 3,
  "n": 5,
  "progress": 0,
  "nonce": "",
  "version": "1.1.0",
  "migration": false,
  "cluster_name": "vault-cluster-dca2b572",
  "cluster_id": "469c2f1d-xx-xx-xx-03bfc497c883",
  "recovery_seal": false
}

Authenticate against the vault:

$ vault login s.tdlEqsfzGbePVlke5hTpr9Um
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

When using the CLI, your auth token will be saved locally at ~/.vault-token.

Enable the secret kv engine:

$ vault secrets enable -version=1 -path=secret kv

Create and Read Secrets

Write a secret to the path enabled above:

$ vault kv put secret/my-app/password password=123

List your secrets:

$ vault kv list secret/
Keys
----
my-app/

Read the secret (defaults in table format):

$ vault kv get secret/my-app/password
Key                 Value
---                 -----
refresh_interval    768h
password            123

Read the secret in json format:

$ vault kv get --format=json secret/my-app/password
{
  "request_id": "0249c878-7432-9555-835a-89b275fca32o",
  "lease_id": "",
  "lease_duration": 2764800,
  "renewable": false,
  "data": {
    "password": "123"
  },
  "warnings": null
}

Read only the password value in the secret:

$ vault kv get -field=password secret/my-app/password
123

Key with Multiple Secrets

Create a key with multiple secrets:

$ vault kv put secret/reminders/app db_username=db.ruanbekker.com username=root password=secret

Read all the secrets:

$ vault kv get --format=json secret/reminders/app
{
  "request_id": "0144c878-7532-9555-835a-8cb275fca3dd",
  "lease_id": "",
  "lease_duration": 2764800,
  "renewable": false,
  "data": {
    "db_username": "db.ruanbekker.com",
    "password": "secret",
    "username": "root"
  },
  "warnings": null
}

Read only the username field in the key:

$ vault kv get -field=username secret/reminders/app
root

Delete the secret:

$ vault kv delete secret/reminders

Versioning

Create a key and set the metadata to max of 5 versions:

$ vault kv metadata put -max-versions=5 secret/fooapp/appname

Get the metadata of the key:

$ vault kv metadata get secret/fooapp/appname
======= Metadata =======
Key                Value
---                -----
cas_required       false
created_time       2019-04-07T12:35:54.355411Z
current_version    0
max_versions       5
oldest_version     0
updated_time       2019-04-07T12:35:54.355411Z

Write a secret appname to our key: secret/fooapp/appname:

$ vault kv put secret/fooapp/appname appname=app1
Key              Value
---              -----
created_time     2019-04-07T12:36:41.7577102Z
deletion_time    n/a
destroyed        false
version          1

Overwrite the key with a couple of requests:

$ vault kv put secret/fooapp/appname appname=app2
$ vault kv put secret/fooapp/appname appname=app3

Read the current value:

$ vault kv get -field=appname secret/fooapp/appname
app3

Get the version=2 value of this key:

$ vault kv get -field=appname -version=2 secret/fooapp/appname
app2

Thanks

Thanks for reading, hope this was informative. Have a look at Hashicorp’s Vault Documentation for more information on the project. I will post more posts on Vault under the #vault category.

Using Concourse CI to Deploy to Docker Swarm

In this tutorial we will use Concourse to Deploy our application to Docker Swarm.

The Flow

  • Our application code resides on Github
  • The pipeline triggers when a commit is pushed to the master branch
  • The pipeline will automatically deploy to the staging environment
  • The pipeline requires a manual trigger to deploy to prod
  • Note: Staging and prod are on the same swarm for demonstration purposes

The code for this tutorial is available on my github repository

Application Structure

The application structure for our code looks like this:

Pipeline Walkthrough

Our ci/pipeline.yml

resources:
  - name: main-repo
    type: git
    source:
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      branch: master
      private_key: ((github_private_key))

  - name: main-repo-staging
    type: git
    source:
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      branch: master
      private_key: ((github_private_key))
      paths:
        - config/staging/*

  - name: main-repo-prod
    type: git
    source:
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      branch: master
      private_key: ((github_private_key))
      paths:
        - config/prod/*

  - name: slack-alert
    type: slack-notification
    source:
      url: ((slack_notification_url))

  - name: version-staging
    type: semver
    source:
      driver: git
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      private_key: ((github_private_key))
      file: version-staging
      branch: version-staging

  - name: version-prod
    type: semver
    source:
      driver: git
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      private_key: ((github_private_key))
      file: version-prod
      branch: version-prod

resource_types:
  - name: slack-notification
    type: docker-image
    source:
      repository: cfcommunity/slack-notification-resource
      tag: v1.3.0

jobs:
  - name: bump-staging-version
    plan:
    - get: main-repo-staging
      trigger: true
    - get: version-staging
    - put: version-staging
      params:
        bump: major

  - name: bump-prod-version
    plan:
    - get: main-repo-prod
      trigger: true
    - get: version-prod
    - put: version-prod
      params:
        bump: major

  - name: deploy-staging
    plan:
    - get: main-repo-staging
    - get: main-repo
    - get: version-staging
      passed:
      - bump-staging-version
      trigger: true
    - task: deploy-staging
      params:
        DOCKER_SWARM_HOSTNAME: ((docker_swarm_staging_host))
        DOCKER_SWARM_KEY: ((docker_swarm_key))
        DOCKER_HUB_USER: ((docker_hub_user))
        DOCKER_HUB_PASSWORD: ((docker_hub_password))
        SERVICE_NAME: app-staging
        SWARM: staging
        ENVIRONMENT: staging
        AWS_ACCESS_KEY_ID: ((aws_access_key_id))
        AWS_SECRET_ACCESS_KEY: ((aws_secret_access_key))
        AWS_DEFAULT_REGION: ((aws_region))
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: rbekker87/build-tools
            tag: latest
            username: ((docker_hub_user))
            password: ((docker_hub_password))
        inputs:
        - name: main-repo-staging
        - name: main-repo
        - name: version-staging
        run:
          path: /bin/sh
          args:
            - -c
            - |
              ./main-repo/ci/scripts/deploy.sh
      on_failure:
        put: slack-alert
        params:
          channel: '#system_events'
          username: 'concourse'
          icon_emoji: ':concourse:'
          silent: true
          text: |
            *$BUILD_PIPELINE_NAME/$BUILD_JOB_NAME* ($BUILD_NAME) FAILED :rage: - TestApp Deploy to staging-swarm failed
            http://ci.example.local/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      on_success:
        put: slack-alert
        params:
          channel: '#system_events'
          username: 'concourse'
          icon_emoji: ':concourse:'
          silent: true
          text: |
            *$BUILD_PIPELINE_NAME/$BUILD_JOB_NAME* ($BUILD_NAME) SUCCESS :aww_yeah: - TestApp Deploy to staging-swarm succeeded
            http://ci.example.local/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME

  - name: deploy-prod
    plan:
    - get: main-repo-prod
    - get: main-repo
    - get: version-prod
      passed:
      - bump-prod-version
    - task: deploy-prod
      params:
        DOCKER_SWARM_HOSTNAME: ((docker_swarm_prod_host))
        DOCKER_SWARM_KEY: ((docker_swarm_key))
        DOCKER_HUB_USER: ((docker_hub_user))
        DOCKER_HUB_PASSWORD: ((docker_hub_password))
        SERVICE_NAME: app-prod
        SWARM: prod
        ENVIRONMENT: production
        AWS_ACCESS_KEY_ID: ((aws_access_key_id))
        AWS_SECRET_ACCESS_KEY: ((aws_secret_access_key))
        AWS_DEFAULT_REGION: ((aws_region))
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: rbekker87/build-tools
            tag: latest
            username: ((docker_hub_user))
            password: ((docker_hub_password))
        inputs:
        - name: main-repo-prod
        - name: main-repo
        - name: version-prod
        run:
          path: /bin/sh
          args:
            - -c
            - |
              ./main-repo/ci/scripts/deploy.sh
      on_failure:
        put: slack-alert
        params:
          channel: '#system_events'
          username: 'concourse'
          icon_emoji: ':concourse:'
          silent: true
          text: |
            *$BUILD_PIPELINE_NAME/$BUILD_JOB_NAME* ($BUILD_NAME) FAILED :rage: - TestApp Deploy to prod-swarm failed
            http://ci.example.local/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      on_success:
        put: slack-alert
        params:
          channel: '#system_events'
          username: 'concourse'
          icon_emoji: ':concourse:'
          silent: true
          text: |
            *$BUILD_PIPELINE_NAME/$BUILD_JOB_NAME* ($BUILD_NAME) SUCCESS :aww_yeah: - TestApp Deploy to prod-swarm succeeded
            http://ci.example.local/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME

Our ci/credentials.yml, which holds all our secret info and remains local:

username: yourdockerusername
password: yourdockerpassword
docker_swarm_prod_host: 10.20.30.40
...

The first step of our deploy invokes a shell script that establishes an ssh tunnel to the docker host, forwarding the remote docker socket to a local tcp port, and then points DOCKER_HOST at the tunneled port, ci/scripts/deploy.sh:

#!/usr/bin/env sh

export DOCKER_HOST="localhost:2376"

echo "${DOCKER_SWARM_KEY}" | sed -e 's/\(KEY-----\)\s/\1\n/g; s/\s\(-----END\)/\n\1/g' | sed -e '2s/\s\+/\n/g' > key.pem
chmod 600 key.pem

screen -S \
  sshtunnel -m -d sh -c \
  "ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -i ./key.pem -NL localhost:2376:/var/run/docker.sock root@$DOCKER_SWARM_HOSTNAME"

sleep 5
docker login -u "${DOCKER_HUB_USER}" -p "${DOCKER_HUB_PASSWORD}"
docker stack deploy --prune -c ./main-repo/ci/docker/docker-compose.${ENVIRONMENT}.yml $SERVICE_NAME --with-registry-auth

if [ $? != "0" ]
  then
    echo "deploy failure for: $SERVICE_NAME"
    screen -S sshtunnel -X quit
    exit 1
  else
    set -x
    echo "deploy success for: $SERVICE_NAME"
    screen -S sshtunnel -X quit
fi

The deploy script references the docker-compose files, first our ci/docker/docker-compose.staging.yml:

version: "3.4"

services:
  web:
    image: ruanbekker/web-center-name
    environment:
      - APP_ENVIRONMENT=Staging
    ports:
      - 81:5000
    networks:
      - web_net
    deploy:
      mode: replicated
      replicas: 2

networks:
  web_net: {}

Also, our docker-compose for production, ci/docker/docker-compose.production.yml:

version: "3.4"

services:
  web:
    image: ruanbekker/web-center-name
    environment:
      - APP_ENVIRONMENT=Production
    ports:
      - 80:5000
    networks:
      - web_net
    deploy:
      mode: replicated
      replicas: 10

networks:
  web_net: {}

Set the Pipeline in Concourse

Create 2 branches in your github repository for versioning: version-staging and version-prod.
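One way to create those empty version branches is with orphan branches (a sketch; the semver resource just needs the branches to exist on the remote):

$ git checkout --orphan version-staging
$ git rm -rf .
$ git commit --allow-empty -m "init version branch"
$ git push origin version-staging
$ # repeat the same steps for version-prod

Then logon to concourse and save the target: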

$ fly -t ci login -n main -c http://<concourse-ip>

Set the pipeline, pointing it at the pipeline config and the local variables definition, and give the pipeline a name:

$ fly -t ci sp -n main -c ci/pipeline.yml -p <pipeline-name> -l ci/<variables>.yml
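With the files used in this post, that would look something like the following (assuming the variables file is the ci/credentials.yml shown earlier and the pipeline is named swarm-demo, matching the unpause command below):

$ fly -t ci sp -n main -c ci/pipeline.yml -p swarm-demo -l ci/credentials.yml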

You will find that the pipeline will look like below and that it will be in a paused state:

Unpause the pipeline:

$ fly -t ci up -p swarm-demo

The pipeline should kick off automatically due to the trigger that is set to true:

Staging is deployed automatically; prod is a manual trigger:

Testing our Application

For demonstration purposes we have deployed staging on port 81 and production on port 80.

Testing Staging on http://:81/

Testing Production on http://:80/

Using MongoDB Inside Drone CI Services for Unit Testing

Another nice thing about Drone CI is the "Services" configuration within your pipeline. At times your unit or integration testing steps might be dependent on a database such as MongoDB, MySQL etc.

Drone allows you to spin up an ephemeral database service such as MongoDB using a Docker container as the first step within your pipeline, defined in the services section. This step will always run first.

The service container will be reachable via the configured container name as its hostname. Keep in mind that if you run multiple parallel jobs, the service container will only be reachable from the build where the mongodb container is running.

What are we doing today

We will set up a really basic (and a bit useless) pipeline that will spin up a mongodb service container, use a step to write random data to mongodb and a step that reads data from mongodb.

For demonstration purposes the data is really random; the focus is on the services section.

All the source code for this demonstration is available on my github repository

Our Drone Pipeline

First we define our service, mongodb. Once the mongodb service is running, we have our build step, a step that runs db.version() against our database, a step that writes data into our mongodb database, a step that reads the data back from mongodb, and a last step that runs a shell command printing the date.

Our .drone.yml pipeline definition:

---
kind: pipeline
name: mongotests

services:
- name: mongo
  image: mongo:4
  command: [ --smallfiles ]
  ports:
  - 27017

steps:
- name: build-step
  image: alpine
  commands:
  - echo "this should be a step that does something"

- name: mongodb-return-version
  image: mongo:4
  commands:
  - date
  - mongo --host mongo --eval "db.version()"

- name: mongodb-test-writes
  image: mongo:4
  commands:
  - date
  - sh scripts/write_mongo.sh

- name: mongodb-test-reads
  image: mongo:4
  commands:
  - date
  - sh scripts/read_mongo.sh

- name: last-step
  image: alpine
  commands:
  - echo "completed at $(date)"

Our scripts referenced in our steps:

The first is the script that writes random data into mongodb, scripts/write_mongo.sh:

#!/bin/sh
set -ex
echo "start writing"
mongo mongo:27017/mydb scripts/write.js
echo "done writing"

We are referencing a scripts/write.js file, which generates 1000 documents with randomized data to write to mongodb:

var txs = []
for (var x = 0; x < 1000 ; x++) {
 var transaction_types = ["credit card", "cash", "account"];
 var store_names = ["edgards", "cna", "makro", "picknpay", "checkers"];
 var random_transaction_type = Math.floor(Math.random() * (2 - 0 + 1)) + 0;
 var random_store_name = Math.floor(Math.random() * (4 - 0 + 1)) + 0;
 txs.push({
   transaction: 'tx_' + x,
   transaction_price: Math.round(Math.random()*1000),
   transaction_type: transaction_types[random_transaction_type],
   store_name: store_names[random_store_name]
   });
}
db.mycollection.insert(txs)

Our script that will read data from mongodb, scripts/read_mongo.sh:

#!/bin/sh
set -ex
echo "start reading"
mongo mongo:27017/mydb <<EOF
db.mycollection.find().count();
db.mycollection.find({transaction_price: { \$gt: 990}}).forEach( printjson );
EOF
echo "done reading"

The README.md includes the build status badge:

## project-name ![](https://cloud.drone.io/api/badges/<user-name>/<project-name>/status.svg?branch=master)

Once your source code is in github, enable the repository on drone and push to github to trigger the build.

Demo and Screenshots

After pushing to github to trigger the build and heading over to drone, I can see that mongodb is running and that the step executing db.version() against mongodb has completed:

Next, our step that writes the random data into mongodb executes:

After the data has been written to mongodb, our next step will read the number of documents from mongodb, and also run a query for transaction prices greater than 990:

Once that has completed, we will have a shell command returning the time when the last step completed:

Queries Failing via Beeline Due to Anonymous User

Beeline Error: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask (state=08S01,code=1)

Issue:

Some time ago, I assisted a customer who was trying to do a select count(*) via beeline, which failed with:

[hadoop@ip-10-10-9-226 ~]$ beeline -u jdbc:hive2://nn-emr.sysint.dxone.local:10000/default --silent=true --outputformat=csv2 -e "select count(*) from basetables_rms.rms_site"
19/04/26 06:41:15 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask (state=08S01,code=1)

When reproducing this I found a jira: https://issues.apache.org/jira/browse/HIVE-14631 which relates to the same issue; the workaround was to switch the execution engine to mapreduce. By doing that it worked, but I wanted a better resolution for the customer.

Debugging:

When enabling debugging, I found that the error is related to permissions:

$ beeline  -u jdbc:hive2://172.31.31.247:10000/default --silent=false --outputformat=csv2 -e "select count(*) from testdb.users"
Connecting to jdbc:hive2://172.31.31.247:10000/default
Connected to: Apache Hive (version 2.1.1-amzn-0)
Driver: Hive JDBC (version 2.1.1-amzn-0)
19/04/26 10:24:01 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
...
ERROR : Failed to execute tez graph.
org.apache.hadoop.security.AccessControlException: Permission denied: user=anonymous, access=WRITE, inode="/user/anonymous":hdfs:hadoop:drwxr-xr-x

So it seems that when the client (anonymous) tries to copy the hive execution jar to its home path in HDFS, in this case /user/anonymous/.hiveJars/, it fails due to permissions.

Resolution:

By passing the hadoop user, I was able to get the expected results:

$ beeline -n hadoop -u jdbc:hive2://172.31.31.247:10000/default --silent=false --outputformat=csv2 -e "select count(*) from testdb.users"
INFO  : Completed executing command(queryId=hive_20190426103246_33253d86-3ebc-462f-a5a1-f01877dd00a8); Time taken: 17.08 seconds
INFO  : OK
c0
1
1 row selected (17.282 seconds)

Listing the mentioned jar:

$ hdfs dfs -ls /user/hadoop/.hiveJars/
Found 1 items
-rw-r--r--   1 hadoop hadoop   32447131 2019-04-26 09:51 /user/hadoop/.hiveJars/hive-exec-2.1.1-amzn-0-ac46be4721493d9e62fd1b132ecee3d20fd283680edbc0cfa9809c656a493469.jar

Hope this might help someone facing the same issue.

Using Drone CI to Build a Jekyll Site and Deploy to Docker Swarm

image

CICD Pipelines! <3

In this post I will show you how to set up a CI/CD pipeline using drone to build a jekyll site and deploy to docker swarm.

Environment Overview

Jekyll’s Codebase: Our code will be hosted on Github (I will demonstrate how to set it up from scratch)

Secret Store: Our secrets such as the ssh key, swarm host address etc will be stored in drone's secrets manager

Docker Swarm: Docker Swarm has Traefik as an HTTP load balancer

Drone Server and Agent: If you don't have drone, you can set up the drone server and agent on docker or have a look at cloud.drone.io

Workflow:

* Whenever a push to master is received on github, the pipeline will be triggered
* The content from our github repository will be cloned to the agent on a container
* Jekyll will build and the output will be transferred to docker swarm using rsync
* The docker-compose.yml will be transferred to the docker swarm host using scp
* A docker stack deploy is run via ssh

Install Jekyll Locally

Install Jekyll locally, as we will use it to create the initial site. I am using a Mac, so I will be using brew. For other operating systems, have a look at this post.

I will be demonstrating with a weightloss blog as an example.

Install jekyll:

$ brew install jekyll

Go ahead and create a new site which will host the data for your jekyll site:

$ jekyll new blog-weightloss

Create a Github Repository

First we need to create an empty github repository, in my example it was github.com/ruanbekker/blog-weightloss.git. Once you have created the repo, change into the directory created by the jekyll new command:

$ cd blog-weightloss

Now initialize git, set the remote, add the jekyll data and push to github:

$ git init
$ git remote add origin git@github.com:ruanbekker/blog-weightloss.git # <== change to your repository
$ git add .
$ git commit -m "first commit"
$ git push origin master

You should see your data on your github repository.

Create Secrets on Drone

Logon to the Drone UI, sync repositories, activate the new repository and head over to settings where you will find the secrets section.

Add the following secrets:

Secret Name: swarm_host
Secret Value: ip address of your swarm

Secret Name: swarm_key
Secret Value: contents of your private ssh key

Secret Name: swarm_user
Secret Value: the user that is allowed to ssh

You should see the following:

image

Add the Drone Config

Drone looks for a .drone.yml file in the root directory for instructions on how to do its tasks. Let's go ahead and declare our pipeline:

$ vim .drone.yml

And populate the drone config:

pipeline:
  jekyll-build:
    image: jekyll/jekyll:latest
    commands:
      - touch Gemfile.lock
      - chmod a+w Gemfile.lock
      - chown -R jekyll:jekyll /drone
      - gem update --system
      - gem install bundler
      - bundle install
      - bundle exec jekyll build

  transfer-build:
    image: drillster/drone-rsync
    hosts:
      from_secret: swarm_host
    key:
      from_secret: swarm_key
    user:
      from_secret: swarm_user
    source: ./*
    target: ~/my-weightloss-blog.com
    recursive: true
    delete: true
    when:
      branch: [master]
      event: [push]

  transfer-compose:
    image: appleboy/drone-scp
    host:
      from_secret: swarm_host
    username:
      from_secret: swarm_user
    key:
      from_secret: swarm_key
    target: /root/my-weightloss-blog.com
    source:
      - docker-compose.yml
    when:
      branch: [master]
      event: [push]

  deploy-jekyll-to-swarm:
    image: appleboy/drone-ssh
    host:
      from_secret: swarm_host
    username:
      from_secret: swarm_user
    key:
      from_secret: swarm_key
    port: 22
    script:
      - docker stack deploy --prune -c /root/my-weightloss-blog.com/docker-compose.yml apps
    when:
      branch: [master]
      event: [push]

Notifications?

If you want to be notified about your builds, you can add a slack notification step as the last step.

To do that, create a new webhook integration (you can follow this post for a step by step guide). After you have the webhook, go to secrets and create a slack_webhook secret.

Then apply the notification step as shown below:

  notify-via-slack:
    image: plugins/slack
    webhook:
      from_secret: slack_webhook
    channel: system_events
    template: >
      
        [DRONE CI]: ** : /
        ( -  | )

      
        [DRONE CI]: ** : /
        ( -  | )
      

Based on the status, you should get a notification similar to this:

image

Add the Docker Compose

Next we need to declare our docker compose file which is needed to deploy our jekyll service to the swarm:

$ vim docker-compose.yml

And populate this info (just change the values for your own environment/settings):

version: '3.5'

services:
  myweightlossblog:
    image: ruanbekker/jekyll:contrast
    command: jekyll serve --watch --force_polling --verbose
    networks:
      - appnet
    volumes:
      - /root/my-weightloss-blog.com:/srv/jekyll
    deploy:
      mode: replicated
      replicas: 1
      labels:
        - "traefik.backend.loadbalancer.sticky=false"
        - "traefik.backend.loadbalancer.swarm=true"
        - "traefik.backend=myweightlossblog"
        - "traefik.docker.network=appnet"
        - "traefik.entrypoints=https"
        - "traefik.frontend.passHostHeader=true"
        - "traefik.frontend.rule=Host:www.my-weightloss-blog.com,my-weightloss-blog.com"
        - "traefik.port=4000"
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == manager
networks:
  appnet:
    external: true

Push to Github

Now we need to push our .drone.yml and docker-compose.yml to github. Since the repository is activated on drone, any push to master will trigger the pipeline, so after this push we should go to drone to look at our pipeline running.

Add the untracked files and push to github:

$ git add .drone.yml
$ git add docker-compose.yml
$ git commit -m "add drone and docker config"
$ git push origin master

As you head over to your drone ui, you should see your pipeline output which will look more or less like this (just look how pretty it is! :D )

image

Test Jekyll

If your deployment has completed you should be able to access your application on the configured domain. A screenshot of my response when accessing Jekyll:

image
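You can also check it from the command line (assuming the domain from the compose labels resolves to your swarm's Traefik entrypoint):

$ curl -I https://www.my-weightloss-blog.com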

Absolutely Amazingness! I really love drone!