Ruan Bekker's Blog

From a Curious mind to Posts on Github

Install Grafana to Visualize Your Metrics From Datasources Such as Prometheus on Linux

Grafana is an open source dashboarding service that allows you to monitor, analyze and graph metrics from datasources such as Prometheus, InfluxDB, Elasticsearch, AWS CloudWatch, and many more.

Not only is Grafana amazing, it's super pretty!

An example of how a dashboard might look: (dashboard screenshot)

What are we doing today

In this tutorial we will set up Grafana on Linux. If you have not set up Prometheus yet, follow this blogpost to install Prometheus.

Install Grafana

I will be demonstrating how to install Grafana on Debian; if you are running another operating system, head over to the Grafana documentation for other supported operating systems.

Get the gpg key:

$ curl https://packages.grafana.com/gpg.key | sudo apt-key add -

Import the public keys:

$ apt-key adv --keyserver keyserver.ubuntu.com --recv-keys  8C8C34C524098CB6 

Add the latest stable packages to your repository:

$ add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"

Install a prerequisite package:

$ apt install apt-transport-https -y

Update the repository index and install grafana:

$ apt update && sudo apt install grafana -y

Once grafana is installed, start the service:

$ service grafana-server start

Then enable the service on boot:

$ update-rc.d grafana-server defaults

If you want to control the service via systemd:

$ systemctl daemon-reload
$ systemctl start grafana-server
$ systemctl status grafana-server

Optional: Nginx Reverse Proxy

If you want to front your Grafana instance with an nginx reverse proxy:

$ cat /etc/nginx/sites-enabled/grafana
server {
    listen 80;
    server_name grafana.domain.com;

    location / {
        proxy_pass http://127.0.0.1:3000/;
        proxy_redirect http://127.0.0.1:3000/ /;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Then restart nginx:

$ systemctl restart nginx

Access Grafana

If you are accessing Grafana directly, browse to http://your-grafana-ip:3000/ and log in with the default username admin and password admin.
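
If you'd rather script the initial setup, Grafana also exposes an HTTP API for adding datasources. A minimal sketch, assuming Prometheus runs on localhost:9090 and the default admin credentials are still in place:

$ curl -s -XPOST http://admin:admin@localhost:3000/api/datasources -H 'Content-Type: application/json' -d '{"name": "Prometheus", "type": "prometheus", "url": "http://localhost:9090", "access": "proxy"}'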

Dashboarding Tutorials

Have a look at this screencast where the Grafana team shows you how to build dashboards:

Also have a look at their public repository of dashboards

For more tutorials on prometheus and metrics have a look at #prometheus

Install Pushgateway to Expose Metrics to Prometheus

In most cases when we want to scrape a node for metrics, we install node-exporter on the host and configure Prometheus to scrape the configured node to consume the metric data. But in certain cases we want to push custom metrics to Prometheus, and in such cases we can make use of Pushgateway.

Pushgateway allows you to push custom metrics to its endpoint; we then configure Prometheus to scrape Pushgateway and ingest the exposed metrics into its time series database.

Pre-Requirements

If you have not set up Prometheus, head over to this blogpost to set up Prometheus on Linux.

What will we do?

In this tutorial, we will set up Pushgateway on Linux, push some custom metrics to it, and configure Prometheus to scrape those metrics from Pushgateway.

Install Pushgateway

Get the latest version of pushgateway from prometheus.io, then download and extract:

$ wget https://github.com/prometheus/pushgateway/releases/download/v0.8.0/pushgateway-0.8.0.linux-amd64.tar.gz
$ tar -xvf pushgateway-0.8.0.linux-amd64.tar.gz

Create the pushgateway user:

$ useradd --no-create-home --shell /bin/false pushgateway

Move the binary into place and update ownership to the user that we created:

$ cp pushgateway-0.8.0.linux-amd64/pushgateway /usr/local/bin/pushgateway
$ chown pushgateway:pushgateway /usr/local/bin/pushgateway

Create the systemd unit file:

$ cat > /etc/systemd/system/pushgateway.service << EOF
[Unit]
Description=Pushgateway
Wants=network-online.target
After=network-online.target

[Service]
User=pushgateway
Group=pushgateway
Type=simple
ExecStart=/usr/local/bin/pushgateway \
    --web.listen-address=":9091" \
    --web.telemetry-path="/metrics" \
    --persistence.file="/tmp/metric.store" \
    --persistence.interval=5m \
    --log.level="info" \
    --log.format="logger:stdout?json=true"

[Install]
WantedBy=multi-user.target
EOF

Reload systemd and restart the pushgateway service:

$ systemctl daemon-reload
$ systemctl restart pushgateway
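
To have pushgateway start on boot, enable the unit as well:

$ systemctl enable pushgateway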

Ensure that pushgateway has been started:

$ systemctl status pushgateway
  pushgateway.service - Pushgateway
   Loaded: loaded (/etc/systemd/system/pushgateway.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-05-07 09:05:57 UTC; 2min 33s ago
 Main PID: 6974 (pushgateway)
    Tasks: 6 (limit: 4704)
   CGroup: /system.slice/pushgateway.service
           └─6974 /usr/local/bin/pushgateway --web.listen-address=:9091 --web.telemetry-path=/metrics --persistence.file=/tmp/metric.store --persistence.interval=5m --log.level=info --log.format=logger:st

May 07 09:05:57 ip-172-31-41-126 systemd[1]: Started Pushgateway.

Configure Prometheus

Now we want to configure Prometheus to scrape Pushgateway for metrics; the scraped metrics will then be ingested into Prometheus's time series database.

At the moment I have Prometheus, node-exporter and Pushgateway on the same node, so I will provide my complete Prometheus configuration. If you are just looking for the Pushgateway config, it is the last job in the file:

$ cat /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']

  - job_name: 'pushgateway'
    honor_labels: true
    static_configs:
      - targets: ['localhost:9091']

Restart prometheus:

$ systemctl restart prometheus

Push metrics to pushgateway

First we will look at a bash example to push metrics to pushgateway:

$ echo "cpu_utilization 20.25" | curl --data-binary @- http://localhost:9091/metrics/job/my_custom_metrics/instance/10.20.0.1:9000/provider/hetzner

Have a look at pushgateway’s metrics endpoint:

$ curl -L http://localhost:9091/metrics/
# TYPE cpu_utilization untyped
cpu_utilization{instance="10.20.0.1:9000",job="my_custom_metrics",provider="hetzner"} 20.25
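
Pushed series stay on the pushgateway until you delete them (the persistence file keeps them across restarts), so remember to clean up groups you no longer need. Deleting the group we pushed above looks like this:

$ curl -X DELETE http://localhost:9091/metrics/job/my_custom_metrics/instance/10.20.0.1:9000/provider/hetzner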

Let’s look at a python example on how we can push metrics to pushgateway:

import requests

job_name='my_custom_metrics'
instance_name='10.20.0.1:9000'
provider='hetzner'
payload_key='cpu_utilization'
payload_value='21.90'

# push the metric to the pushgateway's grouping key: job/instance/provider
response = requests.post(
    'http://localhost:9091/metrics/job/{j}/instance/{i}/provider/{p}'.format(j=job_name, i=instance_name, p=provider),
    data='{k} {v}\n'.format(k=payload_key, v=payload_value)
)
print(response.status_code)

With this method, you can push any custom metrics (from bash, a lambda function, etc.) to Pushgateway and allow Prometheus to consume that data into its time series database.
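
To verify that the pushed metric made it all the way into Prometheus, you can use Prometheus's query API (assuming Prometheus has scraped the pushgateway job at least once):

$ curl -s 'http://localhost:9090/api/v1/query?query=cpu_utilization'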

Resources:

See #prometheus for more posts on Prometheus

Running a HA MySQL Galera Cluster on Docker Swarm

In this post we will set up a highly available MySQL Galera cluster on Docker Swarm.

About

The service is based on the docker-mariadb-cluster repository and is designed without any persistent data attached to the service, relying on the "nodes" to replicate the data.

Note: although this proof of concept works, I always recommend using a remote MySQL database outside your cluster, such as RDS.

Since we don’t persist any data on the mysql cluster, I have associated a dbclient service that will run continious backups, which we will persist the path where the backups reside to disk.

Deploy the MySQL Cluster

The docker-compose.yml that we will use looks like this:

version: '3.5'
services:
  dbclient:
    image: alpine
    environment:
      - BACKUP_ENABLED=1
      - BACKUP_INTERVAL=3600
      - BACKUP_PATH=/data
      - BACKUP_FILENAME=db_backup
    networks:
      - dbnet
    entrypoint: |
      sh -c 'sh -s << EOF
      apk add --no-cache mysql-client
      while true
        do
          if [ $$BACKUP_ENABLED = 1 ]
            then
              sleep $$BACKUP_INTERVAL
              mkdir -p $$BACKUP_PATH/$$(date +%F)
              echo "$$(date +%FT%H.%m) - Making Backup to : $$BACKUP_PATH/$$(date +%F)/$$BACKUP_FILENAME-$$(date +%FT%H.%m).sql.gz"
              mysqldump -u root -ppassword -h dblb --all-databases | gzip > $$BACKUP_PATH/$$(date +%F)/$$BACKUP_FILENAME-$$(date +%FT%H.%m).sql.gz
              find $$BACKUP_PATH -mtime +7 -delete
          fi
        done
      EOF'
    volumes:
      - vol_dbclient:/data
    deploy:
      mode: replicated
      replicas: 1

  dbcluster:
    image: toughiq/mariadb-cluster
    networks:
      - dbnet
    environment:
      - DB_SERVICE_NAME=dbcluster
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=mydb
      - MYSQL_USER=mydbuser
      - MYSQL_PASSWORD=mydbpass
    deploy:
      mode: replicated
      replicas: 1

  dblb:
    image: toughiq/maxscale
    networks:
      - dbnet
    ports:
      - 3306:3306
    environment:
      - DB_SERVICE_NAME=dbcluster
      - ENABLE_ROOT_USER=1
    deploy:
      mode: replicated
      replicas: 1

volumes:
  vol_dbclient:
    driver: local

networks:
  dbnet:
    name: dbnet
    driver: overlay

The dbclient is configured to be on the same network as the cluster so it can reach the MySQL service. The default behavior is that it makes a backup every hour (3600 seconds) to the /data/{date}/ path.

Deploy the stack:

$ docker stack deploy -c docker-compose.yml galera
Creating network dbnet
Creating service galera_dbcluster
Creating service galera_dblb
Creating service galera_dbclient

Have a look to see if all the services are running:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                            PORTS
jm7p70qre72u        galera_dbclient     replicated          1/1                 alpine:latest
p8kcr5y7szte        galera_dbcluster    replicated          1/1                 toughiq/mariadb-cluster:latest
1hu3oxhujgfm        galera_dblb         replicated          1/1                 toughiq/maxscale:latest          :3306->3306/tcp

The Backup Client

As mentioned, the backup client backs up to the /data/ path:

$ docker exec -it $(docker ps -f name=galera_dbclient -q) find /data/
/data/
/data/2019-05-10
/data/2019-05-10/db_backup-2019-05-10T10.05.sql.gz
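
You don't have to wait for the interval: since the dbclient container has mysql-client installed, you can trigger an ad-hoc backup by hand, for example:

$ docker exec -it $(docker ps -f name=galera_dbclient -q) sh -c 'mysqldump -u root -ppassword -h dblb --all-databases | gzip > /data/manual-backup.sql.gz'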

Let’s go ahead and populate some data into our mysql database:

$ docker exec -it $(docker ps -f name=galera_dbclient -q) mysql -uroot -ppassword -h dblb
MySQL [(none)]> create table mydb.foo (name varchar(10));
MySQL [(none)]> insert into mydb.foo values('ruan');
MySQL [(none)]> exit

Scale the Cluster

At the moment we only have 1 replica for our MySQL cluster; let's go ahead and scale the cluster to 3 replicas:

$ docker service scale galera_dbcluster=3
galera_dbcluster scaled to 3
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged

Verify that the service has been scaled:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                            PORTS
jm7p70qre72u        galera_dbclient     replicated          1/1                 alpine:latest
p8kcr5y7szte        galera_dbcluster    replicated          3/3                 toughiq/mariadb-cluster:latest
1hu3oxhujgfm        galera_dblb         replicated          1/1                 toughiq/maxscale:latest          :3306->3306/tcp

Test, by reading from the database:

$ docker exec -it $(docker ps -f name=galera_dbclient -q) mysql -uroot -ppassword -h dblb -e'select * from mydb.foo;'
+------+
| name |
+------+
| ruan |
+------+

Simulate a Node Failure:

Simulate a node failure by killing one of the mysql containers:

$ docker kill 9e336032ab52

Verify that one container is missing from our service:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                            PORTS
p8kcr5y7szte        galera_dbcluster    replicated          2/3                 toughiq/mariadb-cluster:latest

While the replacement container is provisioning and we have 2 out of 3 running containers, read the data 3 times to test that the round-robin queries don't hit the affected container (the dblb won't route traffic to it):

$ docker exec -it $(docker ps -f name=galera_dbclient -q) mysql -uroot -ppassword -h dblb -e'select * from mydb.foo;'
+------+
| name |
+------+
| ruan |
+------+

$ docker exec -it $(docker ps -f name=galera_dbclient -q) mysql -uroot -ppassword -h dblb -e'select * from mydb.foo;'
+------+
| name |
+------+
| ruan |
+------+

$ docker exec -it $(docker ps -f name=galera_dbclient -q) mysql -uroot -ppassword -h dblb -e'select * from mydb.foo;'
+------+
| name |
+------+
| ruan |
+------+

Verify that the 3rd container has checked in:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                            PORTS
p8kcr5y7szte        galera_dbcluster    replicated          3/3                 toughiq/mariadb-cluster:latest

How to Restore?

I’m deleting the database to simulate the scenario where we need to restore:

$ docker exec -it $(docker ps -f name=galera_dbclient -q) sh
> mysql -uroot -ppassword -h dblb -e'drop database mydb;'

Ensure the db is not present:

> mysql -uroot -ppassword -h dblb -e'select * from mydb.foo;'
ERROR 1146 (42S02) at line 1: Table 'mydb.foo' doesn't exist

Find the archive and extract:

> find /data/
/data/
/data/2019-05-10
/data/2019-05-10/db_backup-2019-05-10T10.05.sql.gz

> gunzip /data/2019-05-10/db_backup-2019-05-10T10.05.sql.gz

Restore the backed up database to MySQL:

> mysql -uroot -ppassword -h dblb < /data/2019-05-10/db_backup-2019-05-10T10.05.sql

Test that we can read our data:

> mysql -uroot -ppassword -h dblb -e'select * from mydb.foo;'
+------+
| name |
+------+
| ruan |
+------+

Create Secrets With Vault's Transit Secrets Engine

Vault’s transit secrets engine handles cryptographic functions on data-in-transit. Vault doesn’t store the data sent to the secrets engine, so it can also be viewed as encryption as a service.

In this tutorial we will demonstrate how to use Vault’s Transit Secret Engine.

Enable the Transit Engine:

Enable transit secret engine using the /sys/mounts endpoint:

$ curl --header "X-Vault-Token: $VAULT_TOKEN" -XPOST -d '{"type": "transit", "description": "encs encryption"}' http://127.0.0.1:8200/v1/sys/mounts/transit

Create the Key Ring:

Create an encryption key ring named fookey using the transit/keys endpoint:
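
The create call is a simple POST (no request body is needed for the default aes256-gcm96 key type):

$ curl --header "X-Vault-Token: $VAULT_TOKEN" -XPOST http://127.0.0.1:8200/v1/transit/keys/fookey

Read back the key's configuration to verify: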

$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" -XGET http://127.0.0.1:8200/v1/transit/keys/fookey | jq
{
  "request_id": "8375227a-4a9f-a108-0b89-84c448419e80",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "allow_plaintext_backup": false,
    "deletion_allowed": false,
    "derived": false,
    "exportable": false,
    "keys": {
      "1": 1554654295
    },
    "latest_version": 1,
    "min_available_version": 0,
    "min_decryption_version": 1,
    "min_encryption_version": 0,
    "name": "fookey",
    "supports_decryption": true,
    "supports_derivation": true,
    "supports_encryption": true,
    "supports_signing": false,
    "type": "aes256-gcm96"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Encoding

Encode your string:

$ base64 <<< "hello world"
aGVsbG8gd29ybGQK

Encrypt

To encrypt your secret, use the transit/encrypt endpoint:

$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" --request POST  --data '{"plaintext": "aGVsbG8gd29ybGQK"}' http://127.0.0.1:8200/v1/transit/encrypt/fookey | jq
{
  "request_id": "ab00ba0f-9e45-0aca-e3c1-7765fd83fc3c",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "ciphertext": "vault:v1:Yo4U6xXFM2FoBOaUrw0w3EpSlJS6gmsa4HP1xKtjrk0+xSqi5Rvjvg=="
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Decrypt:

Use the transit/decrypt endpoint to decrypt the ciphertext:

$ curl -s --header "X-Vault-Token: $VAULT_TOKEN" --request POST  --data '{"ciphertext": "vault:v1:Yo4U6xXFM2FoBOaUrw0w3EpSlJS6gmsa4HP1xKtjrk0+xSqi5Rvjvg=="}' http://127.0.0.1:8200/v1/transit/decrypt/fookey | jq
{
  "request_id": "3d9743a0-2daf-823c-f413-8c8a90753479",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "plaintext": "aGVsbG8gd29ybGQK"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Decoding

Decode the response:

$ base64 --decode <<< "aGVsbG8gd29ybGQK"
hello world
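
A nice property of the transit engine is key rotation: the key version is embedded in the vault:v1: prefix of the ciphertext, so you can rotate fookey without breaking existing ciphertext, and rewrap old ciphertext under the newest key version:

$ curl --header "X-Vault-Token: $VAULT_TOKEN" -XPOST http://127.0.0.1:8200/v1/transit/keys/fookey/rotate
$ curl --header "X-Vault-Token: $VAULT_TOKEN" -XPOST --data '{"ciphertext": "vault:v1:Yo4U6xXFM2FoBOaUrw0w3EpSlJS6gmsa4HP1xKtjrk0+xSqi5Rvjvg=="}' http://127.0.0.1:8200/v1/transit/rewrap/fookey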

Use the Vault API to Provision App Keys and Create KV Pairs

In this tutorial we will use the Vault API to create a user and allow that user to write/read key/value pairs from a given path.

Credentials / Authentication

Export the Vault root token:

$ export ROOT_TOKEN="$(cat ~/.vault-token)"
$ export VAULT_TOKEN=${ROOT_TOKEN}

Check the vault status:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/sys/health | jq
{
  "initialized": true,
  "sealed": false,
  "standby": false,
  "performance_standby": false,
  "replication_performance_mode": "disabled",
  "replication_dr_mode": "disabled",
  "server_time_utc": 1554652468,
  "version": "1.1.0",
  "cluster_name": "vault-cluster-bfb00cd7",
  "cluster_id": "dc1dc9a6-xx-xx-xx-a2870f475e7a"
}

Do a lookup for the root user:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/auth/token/lookup-self | jq
{
  "request_id": "69a19f66-5bad-3af2-81a5-81ca24e50b02",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "accessor": "A7Xkik1ebWpUfzqNrvADmQ08",
    "creation_time": 1554651149,
    "creation_ttl": 0,
    "display_name": "root",
    "entity_id": "",
    "expire_time": null,
    "explicit_max_ttl": 0,
    "id": "s.po8HkMdCnnAerlCAeHGGGszQ",
    "meta": null,
    "num_uses": 0,
    "orphan": true,
    "path": "auth/token/root",
    "policies": [
      "root"
    ],
    "ttl": 0,
    "type": "service"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the Roles

First enable the AppRole auth method, then list the enabled auth methods to verify:

$ curl -s -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"type": "approle"}' http://127.0.0.1:8200/v1/sys/auth/approle | jq
$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/sys/auth | jq
{
  "token/": {
    "accessor": "auth_token_31f2381e",
    "config": {
      "default_lease_ttl": 0,
      "force_no_cache": false,
      "max_lease_ttl": 0,
      "token_type": "default-service"
    },
    "description": "token based credentials",
    "local": false,
    "options": null,
    "seal_wrap": false,
    "type": "token"
  },
  "approle/": {
    "accessor": "auth_approle_d542dcad",
    "config": {
      "default_lease_ttl": 0,
      "force_no_cache": false,
      "max_lease_ttl": 0,
      "token_type": "default-service"
    },
    "description": "",
    "local": false,
    "options": {},
    "seal_wrap": false,
    "type": "approle"
  },
  "request_id": "20554948-b8e0-4254-f21d-f9ad25f1e5d5",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "approle/": {
      "accessor": "auth_approle_d542dcad",
      "config": {
        "default_lease_ttl": 0,
        "force_no_cache": false,
        "max_lease_ttl": 0,
        "token_type": "default-service"
      },
      "description": "",
      "local": false,
      "options": {},
      "seal_wrap": false,
      "type": "approle"
    },
    "token/": {
      "accessor": "auth_token_31f2381e",
      "config": {
        "default_lease_ttl": 0,
        "force_no_cache": false,
        "max_lease_ttl": 0,
        "token_type": "default-service"
      },
      "description": "token based credentials",
      "local": false,
      "options": null,
      "seal_wrap": false,
      "type": "token"
    }
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the test policy:

$ curl -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"policy": "{\"name\": \"test\", \"path\": {\"secret/*\": {\"policy\": \"write\"}}}"}' http://127.0.0.1:8200/v1/sys/policy/test
$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/sys/policy/test | jq
{
  "name": "test",
  "rules": "{\"name\": \"test\", \"path\": {\"secret/*\": {\"policy\": \"write\"}}}",
  "request_id": "e4f55dc0-575f-ead9-48f6-43154153889a",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "name": "test",
    "rules": "{\"name\": \"test\", \"path\": {\"secret/*\": {\"policy\": \"write\"}}}"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create an AppRole named app and attach the test policy to it:

$ curl -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"policies": "test"}' http://127.0.0.1:8200/v1/auth/approle/role/app
$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" 'http://127.0.0.1:8200/v1/auth/approle/role?list=true' | jq .
{
  "request_id": "e645cad9-9010-4299-0e6b-0baf6d9194b8",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "keys": [
      "app"
    ]
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Enable the kv store:

$ curl -H "X-Vault-Token: ${VAULT_TOKEN}" -XPOST --data '{"type": "kv", "description": "my key value store", "config": {"force_no_cache": true}}' http://127.0.0.1:8200/v1/sys/mounts/secret

Create the User Credentials

Get the role_id:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/auth/approle/role/app/role-id | jq
{
  "request_id": "e803a1bf-a492-dad7-68db-bb1506752e03",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "role_id": "3e365c72-7aad-f4e4-521c-d7cf0dd83c0f"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the secret_id:

$ curl -s -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/auth/approle/role/app/secret-id | jq
{
  "request_id": "b56d20c0-ff8a-a1fe-4d5f-42e57b625b83",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "secret_id": "5eecfe29-d6e1-50e6-7a70-04c6bea42b76",
    "secret_id_accessor": "2fa80586-32b9-1c6f-fe1d-7c547e5403e5"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create the token with the role_id and secret_id:

$ curl -s -XPOST -d '{"role_id": "3e365c72-7aad-f4e4-521c-d7cf0dd83c0f","secret_id": "5eecfe29-d6e1-50e6-7a70-04c6bea42b76"}' http://127.0.0.1:8200/v1/auth/approle/login | jq
{
  "request_id": "82470940-ef09-bcbb-f7a0-bdf085b4f47b",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": null,
  "wrap_info": null,
  "warnings": null,
  "auth": {
    "client_token": "s.7EtwtRGsZWOtkqcMvj3UMLP0",
    "accessor": "2TPL1vg5IZXgVF6Xf1RRzbmL",
    "policies": [
      "default",
      "test"
    ],
    "token_policies": [
      "default",
      "test"
    ],
    "metadata": {
      "role_name": "app"
    },
    "lease_duration": 2764800,
    "renewable": true,
    "entity_id": "d5051b01-b7ce-626c-a9f4-e1663f8c23e8",
    "token_type": "service",
    "orphan": true
  }
}

Create KV Pairs with New User

Export the received token so subsequent requests authenticate as the new user:

$ export APP_TOKEN=s.7EtwtRGsZWOtkqcMvj3UMLP0
$ export VAULT_TOKEN=$APP_TOKEN

Verify that you can look up your own token info:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/auth/token/lookup-self | jq
{
  "request_id": "2e69cd68-8668-3159-6440-c430cb61d2e6",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 0,
  "data": {
    "accessor": "2TPL1vg5IZXgVF6Xf1RRzbmL",
    "creation_time": 1554651882,
    "creation_ttl": 2764800,
    "display_name": "approle",
    "entity_id": "d5051b01-b7ce-626c-a9f4-e1663f8c23e8",
    "expire_time": "2019-05-09T15:44:42.1013993Z",
    "explicit_max_ttl": 0,
    "id": "s.7EtwtRGsZWOtkqcMvj3UMLP0",
    "issue_time": "2019-04-07T15:44:42.1013788Z",
    "meta": {
      "role_name": "app"
    },
    "num_uses": 0,
    "orphan": true,
    "path": "auth/approle/login",
    "policies": [
      "default",
      "test"
    ],
    "renewable": true,
    "ttl": 2764556,
    "type": "service"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Create a KV pair:

$ curl -s -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"app_password": "secret123"}' http://127.0.0.1:8200/v1/secret/app01/app_password

Read the secret from KV pair:

$ curl -s -XGET -H "X-Vault-Token: ${VAULT_TOKEN}" http://127.0.0.1:8200/v1/secret/app01/app_password | jq
{
  "request_id": "70d5f16d-2abb-fcfd-063f-0e21d9cef8fd",
  "lease_id": "",
  "renewable": false,
  "lease_duration": 2764800,
  "data": {
    "app_password": "secret123"
  },
  "wrap_info": null,
  "warnings": null,
  "auth": null
}

Try to write outside the allowed path:

$ curl -s -XPOST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"app_password": "secret123"}' http://127.0.0.1:8200/v1/secrets/app01/app_password
{"errors":["1 error occurred:\n\t* permission denied\n\n"]}

Persist Vault Data With Amazon S3 as a Storage Backend

In a previous post we set up the Vault server on Docker, using a file backend to persist our data.

In this tutorial we will configure Vault to use Amazon S3 as a storage backend to persist our data.

Provision S3 Bucket

Create the S3 Bucket where our data will reside:

$ aws s3 mb --region=eu-west-1 s3://somename-vault-backend

Vault Config

Create the vault config, where we will provide details about our storage backend and configuration for the vault server:

$ vim volumes/config/s3vault.json

Populate the config file with the following details; you will just need to provide your own credentials:

{
  "backend": {
    "s3": {
      "region": "eu-west-1",
      "access_key": "ACCESS_KEY",
      "secret_key": "SECRET_KEY",
      "bucket": "somename-vault-backend"
    }
  },
  "listener": {
    "tcp":{
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  },
  "ui": true
}

Docker Compose

As we are using Docker to deploy our Vault server, this is our docker-compose.yml:

$ cat > docker-compose.yml << EOF
version: '2'
services:
  vault:
    image: vault
    container_name: vault
    ports:
      - "8200:8200"
    restart: always
    volumes:
      - ./volumes/logs:/vault/logs
      - ./volumes/file:/vault/file
      - ./volumes/config:/vault/config
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config=/vault/config/s3vault.json
EOF

Deploy the vault server:

$ docker-compose up

Go ahead and create some secrets, then deploy the docker container on another host to test out the data persistence.
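
Keep in mind that pointing Vault at a new, empty storage backend means you are starting a brand new Vault: you will need to initialize and unseal it again before it can service requests, for example:

$ export VAULT_ADDR='http://127.0.0.1:8200'
$ vault operator init
$ vault operator unseal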

Setup Prometheus and Node Exporter on Ubuntu for Epic Monitoring

Prometheus is one of those awesome open source monitoring services that I simply cannot live without. Prometheus is a time series database that collects metrics from services using its exporter functionality. Prometheus has its own query language called PromQL, and makes graphing epic visualizations with services such as Grafana a breeze.

What are we doing today

We will install the Prometheus service and set up node_exporter to expose node-related metrics such as CPU, memory, IO, etc. Prometheus's scrape configuration will scrape the node_exporter endpoint, and the data gets ingested into Prometheus's time series database, which can then be used by services such as Grafana to visualize the data.

Other exporters are also available, such as haproxy_exporter, blackbox_exporter, etc. There's also Pushgateway, which you push data to, and your scrape configuration scrapes the data from the Pushgateway endpoint. In a later tutorial, we will set up Pushgateway as well.

Install Prometheus

First, let’s provision our dedicated system users for prometheus and node exporter:

$ useradd --no-create-home --shell /bin/false prometheus
$ useradd --no-create-home --shell /bin/false node_exporter

Create the directories for it’s system files:

$ mkdir /etc/prometheus
$ mkdir /var/lib/prometheus

Apply the permissions:

$ chown prometheus:prometheus /etc/prometheus
$ chown prometheus:prometheus /var/lib/prometheus

Next, update your system:

$ apt update && apt upgrade -y

Let’s install prometheus, head over to https://prometheus.io/download/ and get the latest version of prometheus:

$ wget https://github.com/prometheus/prometheus/releases/download/v2.8.0/prometheus-2.8.0.linux-amd64.tar.gz
$ tar -xf prometheus-2.8.0.linux-amd64.tar.gz
$ cp prometheus-2.8.0.linux-amd64/prometheus /usr/local/bin/
$ cp prometheus-2.8.0.linux-amd64/promtool /usr/local/bin/
$ chown prometheus:prometheus /usr/local/bin/prometheus
$ chown prometheus:prometheus /usr/local/bin/promtool
$ cp -r prometheus-2.8.0.linux-amd64/consoles /etc/prometheus/
$ cp -r prometheus-2.8.0.linux-amd64/console_libraries /etc/prometheus/
$ chown -R prometheus:prometheus /etc/prometheus/consoles
$ chown -R prometheus:prometheus /etc/prometheus/console_libraries
$ rm -rf prometheus-2.8.0.linux-amd64*

Configure Prometheus

We need to tell Prometheus to scrape itself in order to get Prometheus performance data. Edit the Prometheus configuration:

$ vim /etc/prometheus/prometheus.yml

And add a scrape config: set the scrape interval, the job name (which will appear in your metrics) and the endpoint which it needs to scrape:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

Apply permissions to the configured file:

$ chown prometheus:prometheus /etc/prometheus/prometheus.yml
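
Since we copied promtool to /usr/local/bin earlier, we can use it to validate the configuration before starting the service:

$ promtool check config /etc/prometheus/prometheus.yml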

Next, we need to define a systemd unit file so we can control the daemon using systemd:

$ vim /etc/systemd/system/prometheus.service

The config:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target

Since we created a new systemd unit file, we need to reload the systemd daemon, then start the service:

$ systemctl daemon-reload
$ systemctl start prometheus

Let’s look at the status to see if everything works as expected:

$ systemctl status prometheus
prometheus.service - Prometheus
   Loaded: loaded (/etc/systemd/system/prometheus.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-03-26 11:59:10 UTC; 6s ago
 Main PID: 16374 (prometheus)
    Tasks: 9 (limit: 4704)
   CGroup: /system.slice/prometheus.service
           └─16374 /usr/local/bin/prometheus --config.file /etc/prometheus/prometheus.yml --storage.tsdb.path /var/lib/prometheus/ --web.console.templates=/etc/prometheus/consoles --web.console.libraries=

...
Mar 26 11:59:10 ip-172-31-41-126 prometheus[16374]: level=info ts=2019-03-26T11:59:10.893770598Z caller=main.go:655 msg="TSDB started"

Seems legit! Enable the service on startup:

$ systemctl enable prometheus

Install Node Exporter

Now that we have Prometheus up and running, we can start adding exporters to publish data into our Prometheus time series database. As mentioned before, with node exporter we will allow Prometheus to scrape the node exporter endpoint to consume metrics about the node.

You will find the latest version from their website, which I have added at the top of this post.

$ wget https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz
$ tar -xf node_exporter-0.17.0.linux-amd64.tar.gz
$ cp node_exporter-0.17.0.linux-amd64/node_exporter /usr/local/bin
$ chown node_exporter:node_exporter /usr/local/bin/node_exporter
$ rm -rf node_exporter-0.17.0.linux-amd64*

Create the systemd unit file:

$ vim /etc/systemd/system/node_exporter.service

Apply this configuration:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target

Reload the systemd daemon and start node exporter:

$ systemctl daemon-reload
$ systemctl start node_exporter

Look at the status:

$ systemctl status node_exporter
  node_exporter.service - Node Exporter
   Loaded: loaded (/etc/systemd/system/node_exporter.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2019-03-26 12:01:39 UTC; 5s ago
 Main PID: 16474 (node_exporter)
    Tasks: 4 (limit: 4704)
   CGroup: /system.slice/node_exporter.service
           └─16474 /usr/local/bin/node_exporter

...
Mar 26 12:01:39 ip-172-31-41-126 node_exporter[16474]: time="2019-03-26T12:01:39Z" level=info msg="Listening on :9100" source="node_exporter.go:111"

If everything looks good, enable the service on boot:

$ systemctl enable node_exporter

Configure Node Exporter

Now that we have node exporter running, we need to tell Prometheus how to scrape node exporter, so that the node-related metrics end up in Prometheus. Edit the Prometheus config:

$ vim /etc/prometheus/prometheus.yml

I’m providing the full config, but the config is the last section, where you can see the jobname is node_exporter:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']

Once the config is saved, restart prometheus and have a look at the status if everything is going as expected:

$ systemctl restart prometheus
$ systemctl status prometheus

Nginx Reverse Proxy

Let’s add a layer of security and front our setup with a nginx reverse proxy, so that we don’t have to access prometheus on high ports and we have the option to enable basic http authentication. Install nginx:

$ apt install nginx apache2-utils -y

Create the authentication file:

$ htpasswd -c /etc/nginx/.htpasswd admin

Create the nginx site configuration. This will tell nginx to route connections on port 80 to the reverse-proxied localhost port 9090, if authenticated:

$ rm /etc/nginx/sites-enabled/default
$ vim /etc/nginx/sites-enabled/prometheus.conf

And this is the config:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;


    location / {
            auth_basic "Prometheus Auth";
            auth_basic_user_file /etc/nginx/.htpasswd;
            proxy_pass http://localhost:9090;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
}
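
Before reloading, validate the configuration:

$ nginx -t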

Reload nginx configuration:

$ systemctl reload nginx

Access the Beauty of Prometheus Land!

Once you have authenticated, head over to Status; there you will see info such as your targets, which are the endpoints that Prometheus is scraping.

From the main screen, let’s dive into some queries using PromQL. Also see my Prometheus Cheatsheet.

For the first query, we want to see the available memory of this node in bytes: node_memory_MemAvailable_bytes

Now since the value is in bytes, let's convert the value to MB: node_memory_MemAvailable_bytes/1024/1024

Let’s say we want to see the average memory available in 5 minute buckets:

That’s a few basic ones, but feel free to checkout my Prometheus Cheatsheet for other examples. I update them as I find more queries.

Thanks

Hope this was informative. I am planning to publish a post on visualizing prometheus data with Grafana (which is EPIC!) and installing Pushgateway for custom integrations.

How to Fix the Following Signatures Couldn't Be Verified Because the Public Key Is Not Available With Apt

I was trying to install Grafana on Ubuntu when I was faced with the "the following signatures couldn't be verified because the public key is not available" error, as seen below:

$ sudo add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
Hit:1 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:3 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:4 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:5 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports/universe Sources [2068 B]
Get:6 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [3492 B]
Get:7 https://packages.grafana.com/oss/deb stable InRelease [12.1 kB]
Err:7 https://packages.grafana.com/oss/deb stable InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8C8C34C524098CB6
Reading package lists... Done

In order to continue, we need to import the trusted key:

$ apt-key adv --keyserver keyserver.ubuntu.com --recv-keys  8C8C34C524098CB6
Executing: /tmp/apt-key-gpghome.9xlwQh2M06/gpg.1.sh --keyserver keyserver.ubuntu.com --recv-keys 8C8C34C524098CB6
gpg: key 8C8C34C524098CB6: public key "Grafana <info@grafana.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1

Now that the key has been imported, we can update and continue:

$ apt update
Hit:1 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:3 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease
Get:5 https://packages.grafana.com/oss/deb stable InRelease [12.1 kB]
Get:6 https://packages.grafana.com/oss/deb stable/main amd64 Packages [10.8 kB]
Fetched 22.9 kB in 1s (32.7 kB/s)
Reading package lists... Done

Setup Hashicorp Vault Server on Docker and a Getting Started CLI Guide

Vault is one of Hashicorp’s awesome services, which enables you to centrally store, access and distribute dynamic secrets such as tokens, passwords, certificates and encryption keys.

What will we be doing today

We will set up a Vault server on Docker and demonstrate a getting started guide with the Vault CLI to initialize the Vault, and to create, use and manage secrets.

Setting up the Vault Server

Create the directory structure:

$ touch docker-compose.yml
$ mkdir -p volumes/{config,file,logs}

Populate the vault config vault.json. (As you can see the storage backend is a local file; in the next couple of posts I will show how to persist the data to Amazon S3.)

$ cat > volumes/config/vault.json << EOF
{
  "backend": {
    "file": {
      "path": "/vault/file"
    }
  },
  "listener": {
    "tcp":{
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  },
  "ui": true
}
EOF

Populate the docker-compose.yml:

$ cat > docker-compose.yml << EOF
version: '2'
services:
  vault:
    image: vault
    container_name: vault
    ports:
      - "8200:8200"
    restart: always
    volumes:
      - ./volumes/logs:/vault/logs
      - ./volumes/file:/vault/file
      - ./volumes/config:/vault/config
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config=/vault/config/vault.json
EOF

Start the Vault Server:

$ docker-compose up

The UI is available at http://localhost:8200/ui and the API at http://localhost:8200.

Interacting with the Vault CLI

I will demonstrate how to use the Vault CLI to interact with Vault. Let's start by installing the Vault CLI tools; I am using a Mac, so I will be using brew:

$ brew install vault

Set environment variables:

$ export VAULT_ADDR='http://127.0.0.1:8200'

Initialize the Vault Cluster:

Initialize new vault cluster with 6 key shares:

$ vault operator init -key-shares=6 -key-threshold=3
Unseal Key 1: RntjR...DQv
Unseal Key 2: 7E1bG...0LL+
Unseal Key 3: AEuhl...A1NO
Unseal Key 4: bZU76...FMGl
Unseal Key 5: DmEjY...n7Hk
Unseal Key 6: pC4pK...XbKb

Initial Root Token: s.F0JGq..98s2U

Vault initialized with 6 key shares and a key threshold of 3. Please
securely distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.

Vault does not store the generated master key. Without at least 3 keys to
reconstruct the master key, Vault will remain permanently sealed!

It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.

In order to unseal the vault cluster, we need to supply it with 3 key shares:

$ vault operator unseal RntjR...DQv
$ vault operator unseal bZU76...FMGl
$ vault operator unseal pC4pK...XbKb

Ensure the vault is unsealed:

$ vault status -format=json
{
  "type": "shamir",
  "initialized": true,
  "sealed": false,
  "t": 3,
  "n": 5,
  "progress": 0,
  "nonce": "",
  "version": "1.1.0",
  "migration": false,
  "cluster_name": "vault-cluster-dca2b572",
  "cluster_id": "469c2f1d-xx-xx-xx-03bfc497c883",
  "recovery_seal": false
}

Authenticate against the vault:

$ vault login s.tdlEqsfzGbePVlke5hTpr9Um
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Using the CLI, your auth token will be saved locally at ~/.vault-token.

Enable the secret kv engine:

$ vault secrets enable -version=1 -path=secret kv

Create and Read Secrets

Write a secret to the path enabled above:

$ vault kv put secret/my-app/password password=123

List your secrets:

$ vault kv list secret/
Keys
----
my-app/

Read the secret (defaults in table format):

$ vault kv get secret/my-app/password
Key                 Value
---                 -----
refresh_interval    768h
password            123

Read the secret in json format:

$ vault kv get --format=json secret/my-app/password
{
  "request_id": "0249c878-7432-9555-835a-89b275fca32o",
  "lease_id": "",
  "lease_duration": 2764800,
  "renewable": false,
  "data": {
    "password": "123"
  },
  "warnings": null
}

Read only the password value in the secret:

$ vault kv get -field=password secret/my-app/password
123

Key with Multiple Secrets

Create a key with multiple secrets:

$ vault kv put secret/reminders/app db_username=db.ruanbekker.com username=root password=secret

Read all the secrets:

$ vault kv get --format=json secret/reminders/app
{
  "request_id": "0144c878-7532-9555-835a-8cb275fca3dd",
  "lease_id": "",
  "lease_duration": 2764800,
  "renewable": false,
  "data": {
    "db_username": "db.ruanbekker.com",
    "password": "secret",
    "username": "root"
  },
  "warnings": null
}

Read only the username field in the key:

$ vault kv get -field=username secret/reminders/app
root

Delete the secret:

$ vault kv delete secret/reminders/app

Versioning

Create a key and set the metadata to a max of 5 versions (note that versioning requires version 2 of the kv secrets engine):

$ vault kv metadata put -max-versions=5 secret/fooapp/appname

Get the metadata of the key:

$ vault kv metadata get secret/fooapp/appname
======= Metadata =======
Key                Value
---                -----
cas_required       false
created_time       2019-04-07T12:35:54.355411Z
current_version    0
max_versions       5
oldest_version     0
updated_time       2019-04-07T12:35:54.355411Z

Write a secret appname to our key: secret/fooapp/appname:

$ vault kv put secret/fooapp/appname appname=app1
Key              Value
---              -----
created_time     2019-04-07T12:36:41.7577102Z
deletion_time    n/a
destroyed        false
version          1

Overwrite the key with a couple of requests:

$ vault kv put secret/fooapp/appname appname=app2
$ vault kv put secret/fooapp/appname appname=app3

Read the current value:

$ vault kv get -field=appname secret/fooapp/appname
app3

Get the version=2 value of this key:

$ vault kv get -field=appname -version=2 secret/fooapp/appname
app2
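
Since old versions are kept around, kv version 2 also lets you delete and undelete specific versions, which is handy for rollbacks:

$ vault kv delete -versions=2 secret/fooapp/appname
$ vault kv undelete -versions=2 secret/fooapp/appname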

Thanks

Thanks for reading, hope this was informative. Have a look at Hashicorp’s Vault Documentation for more information on the project. I will post more posts on Vault under the #vault category.

Using Concourse CI to Deploy to Docker Swarm

In this tutorial we will use Concourse to Deploy our application to Docker Swarm.

The Flow

  • Our application code resides on Github
  • The pipeline triggers when a commit is pushed to the master branch
  • The pipeline will automatically deploy to the staging environment
  • The pipeline requires a manual trigger to deploy to prod
  • Note: Staging and Prod on the same swarm for demonstration

The code for this tutorial is available on my github repository

Application Structure

The application structure for our code looks like this:

Pipeline Walkthrough

Our ci/pipeline.yml

resources:
  - name: main-repo
    type: git
    source:
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      branch: master
      private_key: ((github_private_key))

  - name: main-repo-staging
    type: git
    source:
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      branch: master
      private_key: ((github_private_key))
      paths:
        - config/staging/*

  - name: main-repo-prod
    type: git
    source:
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      branch: master
      private_key: ((github_private_key))
      paths:
        - config/prod/*

  - name: slack-alert
    type: slack-notification
    source:
      url: ((slack_notification_url))

  - name: version-staging
    type: semver
    source:
      driver: git
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      private_key: ((github_private_key))
      file: version-staging
      branch: version-staging

  - name: version-prod
    type: semver
    source:
      driver: git
      uri: git@github.com:ruanbekker/concourse-swarm-app-demo.git
      private_key: ((github_private_key))
      file: version-prod
      branch: version-prod

resource_types:
  - name: slack-notification
    type: docker-image
    source:
      repository: cfcommunity/slack-notification-resource
      tag: v1.3.0

jobs:
  - name: bump-staging-version
    plan:
    - get: main-repo-staging
      trigger: true
    - get: version-staging
    - put: version-staging
      params:
        bump: major

  - name: bump-prod-version
    plan:
    - get: main-repo-prod
      trigger: true
    - get: version-prod
    - put: version-prod
      params:
        bump: major

  - name: deploy-staging
    plan:
    - get: main-repo-staging
    - get: main-repo
    - get: version-staging
      passed:
      - bump-staging-version
      trigger: true
    - task: deploy-staging
      params:
        DOCKER_SWARM_HOSTNAME: ((docker_swarm_staging_host))
        DOCKER_SWARM_KEY: ((docker_swarm_key))
        DOCKER_HUB_USER: ((docker_hub_user))
        DOCKER_HUB_PASSWORD: ((docker_hub_password))
        SERVICE_NAME: app-staging
        SWARM: staging
        ENVIRONMENT: staging
        AWS_ACCESS_KEY_ID: ((aws_access_key_id))
        AWS_SECRET_ACCESS_KEY: ((aws_secret_access_key))
        AWS_DEFAULT_REGION: ((aws_region))
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: rbekker87/build-tools
            tag: latest
            username: ((docker_hub_user))
            password: ((docker_hub_password))
        inputs:
        - name: main-repo-staging
        - name: main-repo
        - name: version-staging
        run:
          path: /bin/sh
          args:
            - -c
            - |
              ./main-repo/ci/scripts/deploy.sh
      on_failure:
        put: slack-alert
        params:
          channel: '#system_events'
          username: 'concourse'
          icon_emoji: ':concourse:'
          silent: true
          text: |
            *$BUILD_PIPELINE_NAME/$BUILD_JOB_NAME* ($BUILD_NAME) FAILED :rage: - TestApp Deploy to staging-swarm failed
            http://ci.example.local/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      on_success:
        put: slack-alert
        params:
          channel: '#system_events'
          username: 'concourse'
          icon_emoji: ':concourse:'
          silent: true
          text: |
            *$BUILD_PIPELINE_NAME/$BUILD_JOB_NAME* ($BUILD_NAME) SUCCESS :aww_yeah: - TestApp Deploy to staging-swarm succeeded
            http://ci.example.local/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME

  - name: deploy-prod
    plan:
    - get: main-repo-prod
    - get: main-repo
    - get: version-prod
      passed:
      - bump-prod-version
    - task: deploy-prod
      params:
        DOCKER_SWARM_HOSTNAME: ((docker_swarm_prod_host))
        DOCKER_SWARM_KEY: ((docker_swarm_key))
        DOCKER_HUB_USER: ((docker_hub_user))
        DOCKER_HUB_PASSWORD: ((docker_hub_password))
        SERVICE_NAME: app-prod
        SWARM: prod
        ENVIRONMENT: production
        AWS_ACCESS_KEY_ID: ((aws_access_key_id))
        AWS_SECRET_ACCESS_KEY: ((aws_secret_access_key))
        AWS_DEFAULT_REGION: ((aws_region))
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: rbekker87/build-tools
            tag: latest
            username: ((docker_hub_user))
            password: ((docker_hub_password))
        inputs:
        - name: main-repo-prod
        - name: main-repo
        - name: version-prod
        run:
          path: /bin/sh
          args:
            - -c
            - |
              ./main-repo/ci/scripts/deploy.sh
      on_failure:
        put: slack-alert
        params:
          channel: '#system_events'
          username: 'concourse'
          icon_emoji: ':concourse:'
          silent: true
          text: |
            *$BUILD_PIPELINE_NAME/$BUILD_JOB_NAME* ($BUILD_NAME) FAILED :rage: - TestApp Deploy to prod-swarm failed
            http://ci.example.local/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      on_success:
        put: slack-alert
        params:
          channel: '#system_events'
          username: 'concourse'
          icon_emoji: ':concourse:'
          silent: true
          text: |
            *$BUILD_PIPELINE_NAME/$BUILD_JOB_NAME* ($BUILD_NAME) SUCCESS :aww_yeah: - TestApp Deploy to prod-swarm succeeded
            http://ci.example.local/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME

Our ci/credentials.yml holds all our secret info and remains local:

username: yourdockerusername
password: yourdockerpassword
docker_swarm_prod_host: 10.20.30.40
...

The first step of our deploy invokes a shell script that establishes an SSH tunnel to the Docker host, forwarding the remote Docker socket to a local TCP port and pointing DOCKER_HOST at the tunneled port, ci/scripts/deploy.sh:

#!/usr/bin/env sh

# talk to the remote docker daemon via the local end of the ssh tunnel
export DOCKER_HOST="localhost:2376"

# reconstruct the ssh private key from the environment variable (the newlines were flattened)
echo "${DOCKER_SWARM_KEY}" | sed -e 's/\(KEY-----\)\s/\1\n/g; s/\s\(-----END\)/\n\1/g' | sed -e '2s/\s\+/\n/g' > key.pem
chmod 600 key.pem

# open the tunnel in a detached screen session: local port 2376 -> remote /var/run/docker.sock
screen -S \
  sshtunnel -m -d sh -c \
  "ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -i ./key.pem -NL localhost:2376:/var/run/docker.sock root@$DOCKER_SWARM_HOSTNAME"

sleep 5
docker login -u "${DOCKER_HUB_USER}" -p "${DOCKER_HUB_PASSWORD}"
docker stack deploy --prune -c ./main-repo/ci/docker/docker-compose.${ENVIRONMENT}.yml $SERVICE_NAME --with-registry-auth

# tear down the tunnel and report the deploy status
if [ $? != "0" ]
  then
    echo "deploy failure for: $SERVICE_NAME"
    screen -S sshtunnel -X quit
    exit 1
  else
    echo "deploy success for: $SERVICE_NAME"
    screen -S sshtunnel -X quit
fi

The deploy script references the docker-compose files, first our ci/docker/docker-compose.staging.yml:

version: "3.4"

services:
  web:
    image: ruanbekker/web-center-name
    environment:
      - APP_ENVIRONMENT=Staging
    ports:
      - 81:5000
    networks:
      - web_net
    deploy:
      mode: replicated
      replicas: 2

networks:
  web_net: {}

Also, our docker-compose for production, ci/docker/docker-compose.production.yml:

version: "3.4"

services:
  web:
    image: ruanbekker/web-center-name
    environment:
      - APP_ENVIRONMENT=Production
    ports:
      - 80:5000
    networks:
      - web_net
    deploy:
      mode: replicated
      replicas: 10

networks:
  web_net: {}

Set the Pipeline in Concourse

Create 2 branches in your github repository for versioning: version-staging and version-prod, then log on to Concourse and save the target:

$ fly -t ci login -n main -c http://<concourse-ip>

Set the pipeline, point it at the config and the local variables definition, and name the pipeline:

$ fly -t ci sp -n main -c ci/pipeline.yml -p <pipeline-name> -l ci/<variables>.yml

You will find that the pipeline will be created in a paused state.

Unpause the pipeline:

$ fly -t ci up -p swarm-demo

The staging deploy should kick off automatically, due to the trigger that is set to true.

The pipeline deploys to staging automatically; prod requires a manual trigger.
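
When you are happy with staging, the prod job can be triggered from the UI, or with fly (assuming the pipeline was named swarm-demo as above):

$ fly -t ci trigger-job -j swarm-demo/deploy-prod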

Testing our Application

For demonstration purposes we have deployed staging on port 81 and production on port 80.

Testing Staging on http://<swarm-host>:81/

Testing Production on http://<swarm-host>:80/