Ruan Bekker's Blog

From a Curious mind to Posts on Github

Running Loki Behind Nginx Reverse Proxy

In this tutorial I will demonstrate how to run Loki v2.0.0 behind an Nginx reverse proxy with basic HTTP authentication enabled on Nginx, and how to configure Nginx for websockets, which is required when you want to use tail in logcli via Nginx.

Assumptions

My environment consists of an AWS Application Load Balancer with a host entry and a target group associated with port 80 of my Nginx/Loki EC2 instance.

Health checks to my EC2 instance are performed against instance:80/ready

I have an S3 bucket and a DynamoDB table already running in my account, which Loki will use. But note that boltdb-shipper is production ready as of v2.0.0, which is awesome, because then you only require an object store such as S3 and no longer need DynamoDB.

More information on this topic can be found in the changelog.

What can you expect from this blogpost

We will go through the following topics:

  • Install Loki v2.0.0 and Nginx
  • Configure HTTP Basic Authentication to Loki’s API Endpoints
  • Bypass HTTP Basic Authentication to the /ready endpoint for our Load Balancer to perform healthchecks
  • Enable Nginx to upgrade websocket connections so that we can use logcli --tail
  • Test out access to Loki via our Nginx Reverse Proxy
  • Install and use LogCLI

Install Software

First we will install nginx and apache2-utils. In my use-case I will be using Ubuntu 20.04 as my operating system:

$ sudo apt update && sudo apt install nginx apache2-utils -y

Next we will install Loki v2.0.0, if you are upgrading from a previous version of Loki, I would recommend checking out the upgrade guide mentioned on their releases page.

Download the package:

$ curl -O -L "https://github.com/grafana/loki/releases/download/v2.0.0/loki-linux-amd64.zip"

Unzip the archive:

$ unzip loki-linux-amd64.zip

Move the binary to your $PATH:

$ sudo mv loki-linux-amd64 /usr/local/bin/loki

And ensure that the binary is executable:

$ sudo chmod a+x /usr/local/bin/loki
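
To quickly confirm the binary works, you can print its version (the exact output format may vary between builds):

$ loki --version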

Configuration

Create the user that will be responsible for running loki:

$ sudo useradd --no-create-home --shell /bin/false loki

Create the directory where we will place the loki configuration:

$ sudo mkdir /etc/loki

Create the loki configuration file:

$ cat /etc/loki/loki-config.yml
auth_enabled: false

server:
  http_listen_port: 3100
  http_listen_address: 127.0.0.1
  http_server_read_timeout: 1000s
  http_server_write_timeout: 1000s
  http_server_idle_timeout: 1000s
  log_level: info

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_encoding: snappy
  chunk_idle_period: 1h
  chunk_target_size: 1048576
  chunk_retain_period: 30s
  max_transfer_retries: 0

# https://grafana.com/docs/loki/latest/configuration/#schema_config
schema_config:
  configs:
    - from: 2020-05-15
      store: aws
      object_store: s3
      schema: v11
      index:
        prefix: loki-logging-index

storage_config:
  aws:
    http_config:
      idle_conn_timeout: 90s
      response_header_timeout: 0s
    s3: s3://myak:mysk@eu-west-1/loki-logs-datastore

    dynamodb:
      dynamodb_url: dynamodb://myak:mysk@eu-west-1

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  ingestion_rate_mb: 30
  ingestion_burst_size_mb: 60

# https://grafana.com/docs/loki/latest/operations/storage/retention/
# To avoid querying of data beyond the retention period, max_look_back_period config in chunk_store_config
# must be set to a value less than or equal to what is set in table_manager.retention_period
chunk_store_config:
  max_look_back_period: 720h

# https://grafana.com/docs/loki/latest/operations/storage/retention/
table_manager:
  retention_deletes_enabled: true
  retention_period: 720h
  chunk_tables_provisioning:
    inactive_read_throughput: 10
    inactive_write_throughput: 10
    provisioned_read_throughput: 50
    provisioned_write_throughput: 20
  index_tables_provisioning:
    inactive_read_throughput: 10
    inactive_write_throughput: 10
    provisioned_read_throughput: 50
    provisioned_write_throughput: 20

Apply permissions so that the loki user has access to its configuration:

$ sudo chown -R loki:loki /etc/loki
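
Before wiring this up with systemd, you can optionally run Loki once in the foreground as the loki user, to confirm that the configuration parses and the server starts; stop it with Ctrl+C when done:

$ sudo -u loki /usr/local/bin/loki -config.file /etc/loki/loki-config.yml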

Create a systemd unit file:

$ cat /etc/systemd/system/loki.service
[Unit]
Description=Loki
Wants=network-online.target
After=network-online.target

[Service]
User=loki
Group=loki
Type=simple
Restart=on-failure
ExecStart=/usr/local/bin/loki -config.file /etc/loki/loki-config.yml

[Install]
WantedBy=multi-user.target

Create the main nginx config:

$ cat /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
worker_rlimit_nofile 100000;

events {
  worker_connections 4000;
  use epoll;
  multi_accept on;
}

http {

  # basic settings
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  open_file_cache_valid 30s;
  open_file_cache_min_uses 2;
  open_file_cache_errors on;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # ssl settings
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  # websockets config
  map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
  }

  # logging settings
  access_log off;
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  # gzip settings
  gzip on;
  gzip_min_length 10240;
  gzip_comp_level 1;
  gzip_vary on;
  gzip_disable msie6;
  gzip_proxied expired no-cache no-store private auth;
  gzip_types
    text/css
    text/javascript
    text/xml
    text/plain
    text/x-component
    application/javascript
    application/x-javascript
    application/json
    application/xml
    application/rss+xml
    application/atom+xml
    font/truetype
    font/opentype
    application/vnd.ms-fontobject
    image/svg+xml;
  reset_timedout_connection on;
  client_body_timeout 10;
  send_timeout 2;
  keepalive_requests 100000;

  # virtual host configs
  include /etc/nginx/conf.d/loki.conf;
}

Create the virtual host config:

$ cat /etc/nginx/conf.d/loki.conf
upstream loki {
  server 127.0.0.1:3100;
  keepalive 15;
}

server {
  listen 80;
  server_name loki.localdns.xyz;

  auth_basic "loki auth";
  auth_basic_user_file /etc/nginx/passwords;

  location / {
    proxy_read_timeout 1800s;
    proxy_connect_timeout 1600s;
    proxy_pass http://loki;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_redirect off;
  }

  location /ready {
    proxy_pass http://loki;
    proxy_http_version 1.1;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_redirect off;
    auth_basic "off";
  }
}

As you’ve noticed, we are pointing auth_basic_user_file at /etc/nginx/passwords, so let’s create a user that we will use to authenticate against loki:

$ sudo htpasswd -c /etc/nginx/passwords lokiisawesome
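
Before starting Nginx, it’s good practice to validate the configuration syntax:

$ sudo nginx -t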

Enable and Start Services

Because we created a systemd unit file, we need to reload the systemd daemon:

$ sudo systemctl daemon-reload

Then enable nginx and loki on boot:

$ sudo systemctl enable nginx
$ sudo systemctl enable loki

Then start or restart both services:

$ sudo systemctl restart nginx
$ sudo systemctl restart loki

You should see that both ports, 80 and 3100, are listening:

$ sudo netstat -tulpn | grep -E '(3100|80)'
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      8949/nginx: master
tcp        0      0 127.0.0.1:3100          0.0.0.0:*               LISTEN      23498/loki

Test Access

You will notice that I have a /ready endpoint that I am proxy passing to Loki, which bypasses authentication. This has been set up for my AWS Application Load Balancer’s target group to perform health checks against.

We can verify if we are getting a 200 response code without passing authentication:

$ curl -i http://loki.localdns.xyz/ready
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 29 Oct 2020 09:15:52 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 6
Connection: keep-alive
X-Content-Type-Options: nosniff

ready

If we try to make a request to Loki’s labels API endpoint, you will notice that we receive a 401 Unauthorized response:

$ curl -i http://loki.localdns.xyz/loki/api/v1/labels
HTTP/1.1 401 Unauthorized
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 29 Oct 2020 09:16:52 GMT
Content-Type: text/html
Content-Length: 204
Connection: keep-alive
WWW-Authenticate: Basic realm="loki auth"

So let’s access the labels API endpoint by passing our basic auth credentials. To avoid leaving your password behind in your shell history, create a file and save your password in it:

$ vim /tmp/.pass
-> then enter your password and save the file <-

Read the content into a variable:

$ pass=$(cat /tmp/.pass)

Now make a request to Loki’s labels endpoint by passing authentication:

$ curl -i -u lokiisawesome:$pass http://loki.localdns.xyz/loki/api/v1/labels
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 29 Oct 2020 09:20:20 GMT
Content-Type: application/json; charset=UTF-8
Content-Length: 277
Connection: keep-alive

{"status":"success","data":["__name__","aws_account","cluster_name","container_name","environment","filename","job","service","team"]}

Then ensure that you remove the password file:

$ rm -rf /tmp/.pass

And unset your pass variable, to clean up your tracks:

$ unset pass

LogCLI

Now for my favorite part: using logcli to interact with Loki, and more specifically using --tail, which requires websockets. Nginx will now be able to upgrade those connections.

Install logcli; in my case I am using a Mac, so I will be using the darwin build:

$ wget https://github.com/grafana/loki/releases/download/v2.0.0/logcli-darwin-amd64.zip
$ unzip logcli-darwin-amd64.zip
$ mv logcli-darwin-amd64 /usr/local/bin/logcli

Set your environment variables for logcli:

$ export LOKI_ADDR=https://loki.yourdomain.com # im doing ssl termination on the aws alb
$ export LOKI_USERNAME=lokiisawesome
$ export LOKI_PASSWORD=$pass 

Now for that sweetness of tailing ALL THE LOGS!! :-D Let’s first discover the label that we want to select:

$ logcli labels --quiet container_name | grep deadman
ecs-deadmanswitch-4-deadmanswitch-01234567890abcdefghi

Then tail for the win!

$ logcli query --quiet --output raw --tail '{job="prod/dockerlogs", container_name=~"ecs-deadmanswitch.*"}'
time="2020-10-29T09:03:36Z" level=info msg="timerID: xxxxxxxxxxxxxxxxxxxx"
time="2020-10-29T09:03:36Z" level=info msg="POST - /ping/xxxxxxxxxxxxxxxxxxx"

Awesome right?

Thank You

Hope that you found this useful. Make sure to follow Grafana’s blog for more awesome content.

If you liked this content, please make sure to share, or come say hi on my website or twitter.

Upload Public SSH Keys Using Ansible

In this post I will demonstrate how you can use ansible to automate the task of adding one or more ssh public keys to multiple servers’ authorized_keys files.

This will be focused on a scenario where you have 5 new ssh keys that we want to copy to our bastion host’s authorized_keys file.

The User Accounts

We have our bastion server named bastion.mydomain.com where we would like to create the following accounts: john, bob, sarah, sam and adam, and also upload their personal ssh public keys to those accounts so that they can log on with their ssh private keys.

On my local directory, I have their ssh public keys as:

~/workspace/sshkeys/john.pub
~/workspace/sshkeys/bob.pub
~/workspace/sshkeys/sarah.pub
~/workspace/sshkeys/sam.pub
~/workspace/sshkeys/adam.pub

They will be referenced in our playbook as key: "{{ lookup('file', 'sshkeys/{{ item }}.pub') }}", but if they were hosted on GitHub we could reference them as key: https://github.com/{{ item }}.keys. More info on that can be found in the authorized_key module documentation.
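
As a quick aside, GitHub serves each user’s public keys as plain text, so you can preview what the module would fetch with curl (the username below is a placeholder):

$ curl -s https://github.com/<github-username>.keys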

The Target Server

Our inventory for the target server only includes one host, but we can add as many as we want. Our inventory will look like this:

$ cat inventory.ini
[bastion]
bastion-host ansible_host=34.x.x.x ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/ansible.pem ansible_python_interpreter=/usr/bin/python3
[bastion:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

Test if the target server is reachable as the user ubuntu, using our admin account’s ssh key ansible.pem:

$ ansible -i inventory.ini -m ping bastion
bastion | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Our Playbook

In this playbook, we reference the users that we want to create. The playbook loops through those users, creating them on the target server, and uses the same names to match the ssh public key files on our laptop:

$ cat playbook.yml
---
- hosts: bastion
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: create local user account on the target server
      user:
        name: '{{ item }}'
        comment: '{{ item }}'
        shell: /bin/bash
        append: yes
        groups: sudo
        generate_ssh_key: yes
        ssh_key_type: rsa
      with_items:
        - john
        - bob
        - sarah
        - sam
        - adam

    - name: upload ssh public key to users authorized keys file
      authorized_key:
        user: '{{ item }}'
        state: present
        manage_dir: yes
        key: ".pub') }}"
      with_items:
        - john
        - bob
        - sarah
        - sam
        - adam
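
Before deploying, you can optionally ask Ansible to syntax-check the playbook first:

$ ansible-playbook --syntax-check -i inventory.ini playbook.yml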

Deploy

Run the playbook:

$ ansible-playbook -i inventory.ini playbook.yml

PLAY [bastion]

TASK [Gathering Facts]
ok: [bastion-host]

TASK [create local user account on the target server]
changed: [bastion-host] => (item=john)
changed: [bastion-host] => (item=bob)
changed: [bastion-host] => (item=sarah)
changed: [bastion-host] => (item=sam)
changed: [bastion-host] => (item=adam)

TASK [upload ssh public key to users authorized keys file]
changed: [bastion-host] => (item=john)
changed: [bastion-host] => (item=bob)
changed: [bastion-host] => (item=sarah)
changed: [bastion-host] => (item=sam)
changed: [bastion-host] => (item=adam)

PLAY RECAP
bastion-host                   : ok=6    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Now when we ask one of the users, adam for example, to authenticate with:

$ ssh -i ~/.ssh/path_to_his_private_key.pem adam@bastion.mydomain.com

They should have access to the server.

Thank You

Thanks for reading. For more information on this module, check out the authorized_key module documentation.

Easy Ad-Hoc VPNs With Sshuttle

There’s a utility called sshuttle which allows you to VPN over an SSH connection, which is really handy when you quickly want to reach a private range that is accessible from a publicly reachable server such as a bastion host.

In this tutorial, I will demonstrate how to install sshuttle on a Mac (if you are using a different OS you can see their documentation), and then we will use the VPN connection to reach a “prod” and a “dev” environment.

SSH Config

We will declare 2 jump-boxes / bastion hosts in our ssh config:

  • dev-jump-host is a public server that has network access to our private endpoints in 172.31.0.0/16
  • prod-jump-host is a public server that has network access to our private endpoints in 172.31.0.0/16

In this case, the above example is 2 AWS accounts with the same CIDRs, and I wanted to demonstrate sshuttle for exactly this reason: if we had different CIDRs we could set up a dedicated VPN and route them respectively, but with overlapping ranges we simply bring up one tunnel at a time instead.

$ cat ~/.ssh/config
Host *
    Port 22
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    ServerAliveInterval 60
    ServerAliveCountMax 30

Host dev-jump-host
    HostName dev-bastion.mydomain.com
    User bastion
    IdentityFile ~/.ssh/id_rsa

Host prod-jump-host
    HostName prod-bastion.mydomain.com
    User bastion
    IdentityFile ~/.ssh/id_rsa

Install sshuttle

Install sshuttle for your operating system:

# macos
$ brew install sshuttle

# debian
$ apt install sshuttle

Usage

To set up a vpn tunnel to route connections to our prod account:

$ sshuttle -r prod-jump-host 172.31.0.0/16

Or to set up a vpn tunnel to route connections to our dev account:

$ sshuttle -r dev-jump-host 172.31.0.0/16

Once your chosen session is established, you can use a new terminal to access your private network, for example:

$ nc -vz 172.31.23.40 22

Bash Functions

We can wrap this into functions, so we can use vpn_dev or vpn_prod, which alias to the commands shown below:

$ cat ~/.functions
vpn_prod(){
  sshuttle -r prod-jump-host 172.31.0.0/16
}

vpn_dev(){
  sshuttle -r dev-jump-host 172.31.0.0/16
}

Now source that into your environment:

$ source ~/.functions

Then you should be able to use vpn_dev and vpn_prod from your terminal:

$ vpn_prod
[local sudo] Password:
Warning: Permanently added 'xx,xx' (ECDSA) to the list of known hosts.
client: Connected.

And in a new terminal we can connect to an RDS MySQL database sitting in a private network:

$ mysql -h my-prod-db.pvt.mydomain.com -u dbadmin -p$pass
mysql>

Sshuttle as a Service

You can create a systemd unit file to run a sshuttle vpn as a service. In this scenario I provided 2 different vpn routes, dev and prod, so you could create 2 separate systemd unit files, but in my case I will only create one for prod:

$ cat /etc/systemd/system/vpn_prod.service
[Unit]
Description=ShuttleProdVPN
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=root
Group=root
Type=simple
Restart=on-failure
RestartSec=10s
ExecStart=/usr/bin/sshuttle -r prod-jump-host 172.31.0.0/16

[Install]
WantedBy=multi-user.target

Reload the systemd daemon:

$ sudo systemctl daemon-reload

Enable and start the service:

$ sudo systemctl enable vpn_prod
$ sudo systemctl start vpn_prod
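
Then you can confirm that the tunnel service came up healthy:

$ sudo systemctl status vpn_prod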

Thank You

Thanks for reading.

Use a SSH Jump Host With Ansible

In this post we will demonstrate how to use an SSH bastion or jump host with Ansible to reach the target server.

In some scenarios, the target server might be in a private range which is only accessible via a bastion host, and the same applies to Ansible, since Ansible uses SSH to reach the target servers.

SSH Config

Our bastion host is configured as bastion and the config under ~/.ssh/config looks like this:

Host *
    Port 22
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    ServerAliveInterval 60
    ServerAliveCountMax 30

Host bastion
    HostName bastion.mydomain.com
    User bastion
    IdentityFile ~/.ssh/id_rsa

To verify that our config is working, you should be able to use:

$ ssh bastion

Using a Bastion with Ansible

In order to reach our target server we need to use the bastion, so to test the SSH connection we can use this SSH one-liner. Our target server has an IP address of 172.31.81.94, expects us to provide the ansible.pem private key, and we need to authenticate with the ubuntu user:

$ ssh -o ProxyCommand="ssh -W %h:%p -q bastion" -i ~/.ssh/ansible.pem ubuntu@172.31.81.94
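
As a side note, on OpenSSH 7.3 and newer the -J (ProxyJump) flag is a shorter equivalent of the ProxyCommand approach above:

$ ssh -J bastion -i ~/.ssh/ansible.pem ubuntu@172.31.81.94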

If we can reach our server, it’s time to include it in our playbook.

In our inventory:

$ cat inventory.ini
[deployment]
server-a ansible_host=172.31.81.94 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/ansible.pem
[deployment:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand="ssh -W %h:%p -q bastion"'

And our playbook which will use the ping module:

$ cat playbook.yml
- name: Test Ping
  hosts: deployment
  tasks:
  - action: ping

Test it out:

$ ansible-playbook -i inventory.ini playbook.yml

PLAY [Test Ping] ***********************************************************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************************************
ok: [server-a]

TASK [ping] ****************************************************************************************************************************************************************
ok: [server-a]

PLAY RECAP *****************************************************************************************************************************************************************
server-a                   : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Basic Ping Role With Ansible in a Playbook

This is a short post on how to create a basic role that references the ping module in Ansible.

Directory Structure

This is our directory structure:

$ tree .
.
├── inventory.ini
├── playbooks
│   └── myplaybook.yml
└── roles
    └── ping
        └── tasks
            └── main.yml

4 directories, 3 files

Create the directories:

$ mkdir -p playbooks
$ mkdir -p roles/ping/tasks

Our inventory.ini includes the hosts that we will be using. In this case I am defining a group named rpifleet with all the hosts nested under that group, and I’m using the user pi and my private ssh key ~/.ssh/id_rsa:

$ cat inventory.ini
[rpifleet]
rpi-01 ansible_host=rpi-01.local ansible_user=pi ansible_ssh_private_key_file=~/.ssh/id_rsa ansible_python_interpreter=/usr/bin/python3
rpi-02 ansible_host=rpi-02.local ansible_user=pi ansible_ssh_private_key_file=~/.ssh/id_rsa ansible_python_interpreter=/usr/bin/python3
[rpifleet:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'

Next, our role is a basic role that references the ping module, and from our main playbook we will reference the role that we are defining:

$ cat roles/ping/tasks/main.yml
---
- name: Test Ping
  action: ping

Now that we have defined our ping role, we need to include it into our playbook:

$ cat playbooks/myplaybook.yml
---
- name: ping raspberry pi fleet
  hosts: rpifleet
  roles:
    - { role: ../roles/ping }

You will see that, because my playbooks directory is non-default, I defined the relative path to the role directory.

Install Ansible

Next we need to install ansible:

$ pip install ansible
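
You can confirm the installation with:

$ ansible --version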

Run the Ansible Playbook

Now run the playbook which will ping the nodes using ssh. Using the ping module is useful when testing the connection to your nodes:

$ ansible-playbook -i inventory.ini playbooks/myplaybook.yml

PLAY [ping raspberry pi fleet] *****************************************************

TASK [Gathering Facts] *************************************************************
ok: [rpi-02]
ok: [rpi-01]

TASK [../roles/ping : Test Ping] ***************************************************
ok: [rpi-02]
ok: [rpi-01]

PLAY RECAP *************************************************************************
rpi-01                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
rpi-02                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
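
As a side note, the same connectivity check can be run ad-hoc, without a playbook or role:

$ ansible -i inventory.ini -m ping rpifleet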

Thank You

Thanks for reading.

Using the Libvirt Provisioner With Terraform for KVM


In this post we will use the libvirt provider with Terraform to deploy a KVM Virtual Machine on a Remote KVM Host using SSH and use Ansible to deploy Nginx on our VM.

In my previous post I demonstrated how I provisioned my KVM Host and created a dedicated user for Terraform to authenticate to our KVM host to provision VMs.

Once you have KVM installed and your SSH access is sorted, we can start by installing our dependencies.

Install our Dependencies

First we will install Terraform:

$ wget https://releases.hashicorp.com/terraform/0.13.3/terraform_0.13.3_linux_amd64.zip
$ unzip terraform_0.13.3_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/terraform

Then we will install Ansible:

$ virtualenv -p python3 .venv
$ source .venv/bin/activate
$ pip install ansible

Now in order to use the libvirt provider, we need to install it where we will run our Terraform deployment:

$ cd /tmp/
$ mkdir -p ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64
$ wget https://github.com/dmacvicar/terraform-provider-libvirt/releases/download/v0.6.2/terraform-provider-libvirt-0.6.2+git.1585292411.8cbe9ad0.Ubuntu_18.04.amd64.tar.gz
$ tar -xvf terraform-provider-libvirt-0.6.2+git.1585292411.8cbe9ad0.Ubuntu_18.04.amd64.tar.gz
$ mv ./terraform-provider-libvirt  ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/

Our ssh config for our KVM host in ~/.ssh/config:

Host *
    Port 22
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Host ams-kvm-remote-host
    HostName ams-kvm.mydomain.com
    User deploys
    IdentityFile ~/.ssh/deploys.pem

Terraform all the things

Create a workspace directory for our demonstration:

$ mkdir -p ~/workspace/terraform-kvm-example/
$ cd ~/workspace/terraform-kvm-example/

First let’s create our providers.tf:

terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.2"
    }
  }
}

Then our variables.tf; just double check where you need to change values to suit your environment:

variable "libvirt_disk_path" {
  description = "path for libvirt pool"
  default     = "/opt/kvm/pool1"
}

variable "ubuntu_18_img_url" {
  description = "ubuntu 18.04 image"
  default     = "http://cloud-images.ubuntu.com/releases/bionic/release-20191008/ubuntu-18.04-server-cloudimg-amd64.img"
}

variable "vm_hostname" {
  description = "vm hostname"
  default     = "terraform-kvm-ansible"
}

variable "ssh_username" {
  description = "the ssh user to use"
  default     = "ubuntu"
}

variable "ssh_private_key" {
  description = "the private key to use"
  default     = "~/.ssh/id_rsa"
}

Create the main.tf. You will notice that we are using ssh to connect to KVM, and because the private range of our VMs is not routable via the internet, I’m using a bastion host to reach them.

The bastion host (ssh config from the pre-requirements section) is the KVM host, and you will see that ansible is also using that host as a jump box to get to the VM. I am also using cloud-init to bootstrap the node with SSH, etc.

The reason why I’m using remote-exec before the ansible deployment is to ensure that we can establish an SSH connection before Ansible starts.

provider "libvirt" {
  uri = "qemu+ssh://deploys@ams-kvm-remote-host/system"
}

resource "libvirt_pool" "ubuntu" {
  name = "ubuntu"
  type = "dir"
  path = var.libvirt_disk_path
}

resource "libvirt_volume" "ubuntu-qcow2" {
  name = "ubuntu-qcow2"
  pool = libvirt_pool.ubuntu.name
  source = var.ubuntu_18_img_url
  format = "qcow2"
}

data "template_file" "user_data" {
  template = file("${path.module}/config/cloud_init.yml")
}

data "template_file" "network_config" {
  template = file("${path.module}/config/network_config.yml")
}

resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  user_data      = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
  pool           = libvirt_pool.ubuntu.name
}

resource "libvirt_domain" "domain-ubuntu" {
  name   = var.vm_hostname
  memory = "512"
  vcpu   = 1

  cloudinit = libvirt_cloudinit_disk.commoninit.id

  network_interface {
    network_name   = "default"
    wait_for_lease = true
    hostname       = var.vm_hostname
  }

  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  disk {
    volume_id = libvirt_volume.ubuntu-qcow2.id
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }

  provisioner "remote-exec" {
    inline = [
      "echo 'Hello World'"
    ]

    connection {
      type                = "ssh"
      user                = var.ssh_username
      host                = libvirt_domain.domain-ubuntu.network_interface[0].addresses[0]
      private_key         = file(var.ssh_private_key)
      bastion_host        = "ams-kvm-remote-host"
      bastion_user        = "deploys"
      bastion_private_key = file("~/.ssh/deploys.pem")
      timeout             = "2m"
    }
  }

  provisioner "local-exec" {
    command = <<EOT
      echo "[nginx]" > nginx.ini
      echo "${libvirt_domain.domain-ubuntu.network_interface[0].addresses[0]}" >> nginx.ini
      echo "[nginx:vars]" >> nginx.ini
      echo "ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyCommand=\"ssh -W %h:%p -q ams-kvm-remote-host\"'" >> nginx.ini
      ansible-playbook -u ${var.ssh_username} --private-key ${var.ssh_private_key} -i nginx.ini ansible/playbook.yml
      EOT
  }
}

As I’ve mentioned, I’m using cloud-init, so let’s set up the network config and cloud-init under the config/ directory:

$ mkdir config

And our config/cloud_init.yml; just make sure that you configure your public ssh key for ssh access in the config:

#cloud-config
# vim: syntax=yaml
# examples:
# https://cloudinit.readthedocs.io/en/latest/topics/examples.html
bootcmd:
  - echo 192.168.0.1 gw.homedns.xyz >> /etc/hosts
runcmd:
 - [ ls, -l, / ]
 - [ sh, -xc, "echo $(date) ': hello world!'" ]
ssh_pwauth: true
disable_root: false
chpasswd:
  list: |
     root:password
  expire: false
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa AAAA ...your-public-ssh-key-goes-here... user@host
final_message: "The system is finally up, after $UPTIME seconds"

And our network config, in config/network_config.yml:

version: 2
ethernets:
  ens3:
    dhcp4: true

Now we will create our Ansible playbook to deploy nginx to our VM; create the ansible directory:

$ mkdir ansible

Then create the ansible/playbook.yml:

---
# https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_module.html
# https://docs.ansible.com/ansible/latest/collections/ansible/builtin/systemd_module.html#examples
- hosts: nginx
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: latest
        update_cache: yes

    - name: Enable service nginx and ensure it is not masked
      systemd:
        name: nginx
        enabled: yes
        masked: no

    - name: ensure nginx is started
      systemd:
        state: started
        name: nginx

This is optional, but I’m using an ansible.cfg file to define my defaults:

[defaults]
host_key_checking = False
ansible_port = 22
ansible_user = ubuntu
ansible_ssh_private_key_file = ~/.ssh/id_rsa
ansible_python_interpreter = /usr/bin/python3

And lastly, our outputs.tf, which will display the IP address of our VM:

output "ip" {
  value = libvirt_domain.domain-ubuntu.network_interface[0].addresses[0]
}

output "url" {
  value = "http://${libvirt_domain.domain-ubuntu.network_interface[0].addresses[0]}"
}

Deploy our Terraform Deployment

It’s time to deploy a KVM instance with Terraform and deploy Nginx to our VM with Ansible using the local-exec provisioner.

Initialize terraform to download all the plugins:

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/template...
- Finding dmacvicar/libvirt versions matching "0.6.2"...
- Installing hashicorp/template v2.1.2...
- Installed hashicorp/template v2.1.2 (signed by HashiCorp)
- Installing dmacvicar/libvirt v0.6.2...
- Installed dmacvicar/libvirt v0.6.2 (unauthenticated)

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, we recommend adding version constraints in a required_providers block
in your configuration, with the constraint strings suggested below.

* hashicorp/template: version = "~> 2.1.2"

Terraform has been successfully initialized!
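
Optionally, you can also run terraform validate to catch configuration errors before planning:

$ terraform validate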

Run a plan to see what will be done:

$ terraform plan

...
Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip  = (known after apply)
  + url = (known after apply)
...

And run an apply to execute our deployment:

$ terraform apply -auto-approve
...
libvirt_domain.domain-ubuntu (local-exec): PLAY RECAP *********************************************************************
libvirt_domain.domain-ubuntu (local-exec): 192.168.122.213            : ok=4    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
libvirt_domain.domain-ubuntu: Creation complete after 2m24s [id=c96def6e-0361-441c-9e1f-5ba5f3fa5aec]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

ip = 192.168.122.213
url = http://192.168.122.213

You can always get the output afterwards using show or output:

$ terraform show -json | jq -r '.values.outputs.ip.value'
192.168.122.213

$ terraform output -json ip | jq -r '.'
192.168.122.213

Test our VM

Hop onto the KVM host, and test out nginx:

$ curl -I http://192.168.122.213
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Thu, 08 Oct 2020 00:37:43 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Thu, 08 Oct 2020 00:33:04 GMT
Connection: keep-alive
ETag: "5f7e5e40-264"
Accept-Ranges: bytes


Thank You


Thanks for reading, check out my website or follow me at @ruanbekker on Twitter.

Setup a KVM Host for Virtualization on OneProvider

I’ve been on the hunt for a hobby dedicated server for a terraform project, where I’m intending to use the libvirt provider, and I found one awesome hosting company that offers amazingly great prices.

At oneprovider.com, they offer dedicated servers for great prices in a huge number of locations, so I decided to give them a go and ordered a dedicated server in Amsterdam, Netherlands.


I went for 4GB DDR3 RAM, an Atom C2350 2-core CPU, a 128GB SSD and 1Gbps unmetered bandwidth for $7.30 a month, which is super cheap and more than enough for a hobby project.


I’ve been using them for the last week and I’m super impressed.

What are we doing

As part of my Terraform project I would like to experiment with the libvirt provider to provision KVM instances, and for that I need a dedicated server with KVM installed. In this guide we will install KVM and create a dedicated user that we will use with Terraform.

Install KVM

Once your server is provisioned, SSH to your dedicated server and install cpu-checker so that we can verify that the host supports KVM:

$ apt update && apt upgrade -y
$ apt install cpu-checker -y

Test using kvm-ok:

$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

On a client pc, generate the SSH key that we will use to authenticate with on our KVM host:

$ ssh-keygen -t rsa -C deploys -f ~/.ssh/deploys.pem

Back on the server, create the user and prepare the ssh directory:

$ useradd -m -s /bin/bash deploys
$ mkdir /home/deploys/.ssh
$ touch /home/deploys/.ssh/authorized_keys

On the client PC where you generated your SSH key, copy the public key:

$ cat ~/.ssh/deploys.pem.pub | pbcopy

Paste your public key into the server’s authorized_keys file:

$ vim /home/deploys/.ssh/authorized_keys
# paste the public key contents and save

Apply the correct ownership and permissions:

$ chown -R deploys:deploys /home/deploys
$ chmod 755 /home/deploys/.ssh
$ chmod 644 /home/deploys/.ssh/authorized_keys

Install KVM on the host:

$ apt install bridge-utils qemu-kvm libvirt-bin virtinst -y

Add our dedicated user to the libvirt group:

$ usermod -aG libvirt deploys
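
Verify the group membership:

$ id deploys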

Create the directory where we will store our VMs’ disks:

$ mkdir -p /opt/kvm

And apply ownership permissions for our user and group:

$ chown -R deploys:libvirt /opt/kvm

I ran into a permission denied issue using terraform with the dedicated user, and to resolve it I had to ensure that the security_driver is set to none in /etc/libvirt/qemu.conf:

$ vim /etc/libvirt/qemu.conf

and update the following:

security_driver = "none"

Then restart libvirtd:

$ sudo systemctl restart libvirtd 

Test KVM

Switch to the deploys user:

$ sudo su - deploys

And list domains using virsh:

$ virsh list
 Id    Name                           State
----------------------------------------------------
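
From your client machine you can also verify that remote access works over SSH, which is exactly how Terraform will connect later (your-kvm-host below is a placeholder for your server’s address):

$ virsh -c qemu+ssh://deploys@your-kvm-host/system list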

Thank You

That’s it, now we have a KVM host that allows us to provision VMs. In the next post we will install terraform and the libvirt provider for terraform to provision a VM, and use ansible to deploy software to our VM.


Thanks for reading, check out my website or follow me at @ruanbekker on Twitter.

Using the Local-exec Provisioner With Terraform

This is a basic example of how to use the local-exec provisioner in terraform, and I will use it to write an environment variable’s value to disk.

Installing Terraform

Get the latest version of terraform; for this post, I will be using the latest version at the time of writing:

$ wget https://releases.hashicorp.com/terraform/0.13.3/terraform_0.13.3_linux_amd64.zip
$ unzip terraform_0.13.3_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/terraform

Ensure that it’s working:

$ terraform -version
Terraform v0.13.3

Terraform local-exec

The local-exec provisioner allows us to run a command locally, so to test that, we will write the environment variable owner=ruan to disk.

First setup our main.tf:

resource "null_resource" "this" {
  provisioner "local-exec" {
    command = "echo ${var.owner} > file_${null_resource.this.id}.txt"
  }
}

As you can see, our local-exec provisioner is issuing the echo command to write the owner variable’s value to a file on disk, and the file name is file_ plus the null resource’s id.

As we are referencing a variable, we need to define the variable, I will define it in variables.tf:

variable "owner" {}

As you can see, I am not defining the value, as I will define the value at runtime.
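
As an aside, instead of passing -var at runtime, Terraform can also pick the value up from an environment variable prefixed with TF_VAR_, which fits this post’s theme nicely:

$ export TF_VAR_owner=ruan
$ terraform apply -auto-approve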

Initialize

When we initialize terraform, terraform builds up a dependency tree from all the .tf files and downloads any dependencies it requires:

$ terraform init

Apply

Run our deployment and pass our variable at runtime:

$ terraform apply -var 'owner=ruan' -auto-approve

null_resource.this: Creating...
null_resource.this: Provisioning with 'local-exec'...
null_resource.this (local-exec): Executing: ["/bin/sh" "-c" "echo ruan > file_4397943546484635522.txt"]
null_resource.this: Creation complete after 0s [id=4397943546484635522]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

View the written file:

$ cat file_4397943546484635522.txt
ruan

If we wanted to define the variable’s default value in the variables.tf file, it would look like this:

variable "owner" {
  description = "the owner of this project"
  default     = "ruan"
}

The github repository for this code is located at:

Setup a NFS Server With Docker

In this tutorial we will set up an NFS server using Docker for our development environment.

Host Storage Path

In this example we will be using the host path /data/nfs-storage, which will host the storage for our NFS server and which we will mount into the container:

$ mkdir -p /data/nfs-storage

NFS Server

Create the NFS Server with docker:

$ docker run -itd --privileged \
  --restart unless-stopped \
  -e SHARED_DIRECTORY=/data \
  -v /data/nfs-storage:/data \
  -p 2049:2049 \
  itsthenetwork/nfs-server-alpine:12

We can do the same using docker-compose; this is our docker-compose.yml:

version: "2.1"
services:
  # https://hub.docker.com/r/itsthenetwork/nfs-server-alpine
  nfs:
    image: itsthenetwork/nfs-server-alpine:12
    container_name: nfs
    restart: unless-stopped
    privileged: true
    environment:
      - SHARED_DIRECTORY=/data
    volumes:
      - /data/nfs-storage:/data
    ports:
      - 2049:2049

To deploy using docker-compose:

$ docker-compose up -d
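
Once it’s up, you can check the container logs to confirm that the NFS server started and exported the share; the container name nfs comes from the compose file above:

$ docker logs nfs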

NFS Client

To use an NFS client to mount this to your filesystem, you can look at this blogpost.

In summary:

$ sudo apt install nfs-client -y
$ sudo mount -v -o vers=4,loud 192.168.0.4:/ /mnt

Verify that the mount is showing:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       109G   53G   51G  52% /
192.168.0.4:/   4.5T  2.2T  2.1T  51% /mnt

Now, create a test file on our NFS export:

$ touch /mnt/file.txt

Verify that the test file is on the local path:

$ ls /data/nfs-storage/
file.txt

If you want to load this into other clients’ /etc/fstab:

192.168.0.4:/   /mnt   nfs4    _netdev,auto  0  0
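
You can then test the fstab entry without rebooting, as mount -a attempts to mount everything listed in /etc/fstab that is not already mounted:

$ sudo mount -a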

NFS Docker Volume Plugin

You can use an NFS volume plugin for Docker or Docker Swarm for persistent container storage.

To use the NFS Volume plugin, we need to download docker-volume-netshare from their github releases page.

$ wget https://github.com/ContainX/docker-volume-netshare/releases/download/v0.36/docker-volume-netshare_0.36_amd64.deb
$ dpkg -i docker-volume-netshare_0.36_amd64.deb
$ service docker-volume-netshare start

Then your docker-compose.yml:

version: '3.7'

services:
  mysql:
    image: mariadb:10.1
    networks:
      - private
    environment:
      - MYSQL_ROOT_PASSWORD=${DATABASE_PASSWORD:-admin}
      - MYSQL_DATABASE=testdb
      - MYSQL_USER=${DATABASE_USER:-admin}
      - MYSQL_PASSWORD=${DATABASE_PASSWORD:-admin}
    volumes:
      - mysql_data.vol:/var/lib/mysql

volumes:
  mysql_data.vol:
    driver: nfs
    driver_opts:
      share: 192.168.69.1:/mysql_data_vol

Thank You

That’s it. Thanks for reading, follow me on Twitter and say hi! @ruanbekker


Using if Statements in Bash to Check if Environment Variables Exist

This is a quick post to demonstrate how to use if statements in bash to check that the required environment variables are set before we continue a script.

Let’s say we require FOO and BAR in our environment before we can continue; we can do this:

#!/usr/bin/env bash

if [ -z "${FOO}" ] || [ -z "${BAR}" ];
  then
    echo "required environment variables do not exist"
    exit 1
  else
    echo "required environment variables are set"
    # do things
    exit 0
fi

So now if FOO or BAR is not set in our environment, the script will exit with return code 1.

To test it, when we pass no environment variables:

$ chmod +x ./update.sh
$ ./update.sh
required environment variables do not exist

If we only pass one environment variable:

$ FOO=1 ./update.sh
required environment variables do not exist

And as the result we want: when we pass both required environment variables, we have success:

$ FOO=1 BAR=2 ./update.sh
required environment variables are set
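
As a side note, bash’s ${VAR:?message} parameter expansion is a terser built-in for the same requirement; this minimal sketch aborts with an error if FOO or BAR is unset or empty:

#!/usr/bin/env bash

# ${VAR:?msg} expands VAR, or exits the script with msg if VAR is unset or empty
: "${FOO:?FOO is required}"
: "${BAR:?BAR is required}"
echo "required environment variables are set"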