Ruan Bekker's Blog

From a Curious mind to Posts on Github

Graphing Covid-19 Stats With Grafana and Elasticsearch Using Python

coronavirus-covid19-grafana-metrics

I stumbled upon a GitHub repository that stores time-series data of coronavirus / covid19 statistics in JSON format, which gets updated daily.

I was curious to see the data about my country, and wanted to see how the metrics would look after our lockdown started, so I decided to consume that data with Python and the requests library, ingest the covid19 data into Elasticsearch and then visualize it with Grafana.

Sample of the Data

Let’s have a peek at the data to determine how we will use it to write to Elasticsearch. Let’s consume the data with python:

>>> import requests
>>> import json
>>> response = requests.get('https://pomber.github.io/covid19/timeseries.json').json()

Now let’s determine the data type:

>>> type(response)
<type 'dict'>

Now as it’s a dictionary, let’s look at they keys:

>>> response.keys()
[u'Canada', u'Sao Tome and Principe', u'Lithuania', u'Cambodia', u'Ethiopia',....

So let’s take a look how the data looks like if we do a lookup for Canada:

>>> type(response['Canada'])
<type 'list'>

As we can see it's a list; let's count how many items are in it:

>>> len(response['Canada'])
94

Now let’s peek at the data by accessing our first index of our list:

>>> response['Canada'][0]
{u'date': u'2020-1-22', u'confirmed': 0, u'recovered': 0, u'deaths': 0}

So our data will look like this:

{
  'Country Name': [
    {
      'date': '<string>',
      'confirmed': '<int>',
      'recovered': '<int>',
      'deaths': '<int>'
    },
    {
      'date': '<string>',
      'confirmed': '<int>',
      'recovered': '<int>',
      'deaths': '<int>'
    },
  ],
  'Another Country Name': [
    ...
  ]
}

Some issues we need to fix

As you can see, the date is displayed as 2020-1-22 instead of 2020-01-22. I want to make it consistent, as I will be ingesting the data with a timestamp key derived from the date in the returned data, so we first need to convert the date before we ingest the data.
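Before writing the full script, we can sanity check the date conversion on its own; `strptime` happily parses the non-padded dates, and `str()` renders them zero-padded:

```python
import datetime as dt

def convert_datestamp(day):
    # strptime accepts the non-padded '2020-1-22', and str() renders
    # the zero-padded timestamp we want for Elasticsearch
    return str(dt.datetime.strptime(day, '%Y-%m-%d'))

print(convert_datestamp('2020-1-22'))  # 2020-01-22 00:00:00
```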

The other consideration is that, if for some reason we need to ingest this data again, we don't want to end up with duplicates (the same document with different _id's). For that I decided to generate a hash value consisting of the date and the country, so whenever the script runs, it uses the same id for a given document and simply overwrites it, leaving us with no duplicates.
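In isolation, the deterministic document id looks like this; since the hash of country plus date never changes, re-running the ingest overwrites the existing document instead of duplicating it:

```python
import hashlib

def hash_function(country, date):
    # the same country + date always produces the same sha1 hex digest
    return hashlib.sha1((country + date).encode('utf-8')).hexdigest()

first = hash_function('Canada', '2020-1-22')
second = hash_function('Canada', '2020-1-22')
print(first == second)  # True: safe to re-ingest without duplicates
```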

So the idea is to ingest a document into elasticsearch like this:

doc = {
    "_id": "sha_hash_value",
    "day": "2020-01-22",
    "timestamp": "2020-01-22 00:00:00",
    "country": "CountryName",
    "confirmed": 0,
    "recovered": 0,
    "deaths": 0
}

How we will ingest the data

The first run will load all the data and ingest all the data up to the current day to elasticsearch. Once that is done, we will add code to our script to only ingest the most recent day’s data into elasticsearch, which we will control with a cronjob.

Create an index with a mapping to let Elasticsearch know the timestamp will be a date field:

$ curl -XPUT -H 'Content-Type: application/json' \
  -u username:pass 'https://es.domain.com/coronastats' -d \
  '{"mappings": {"foo1": {"properties": {"timestamp" : {"type" : "date","format" : "yyyy-MM-dd HH:mm:ss"}}}}}'

Once our index is created, create the python script that will load the data, loop through each country’s daily data and ingest it into elasticsearch:

#!/usr/bin/python
import requests
import datetime as dt
import json
import hashlib

url = 'https://pomber.github.io/covid19/timeseries.json'
elasticsearch_url = "https://es.domain.com"
elasticsearch_username = ""
elasticsearch_password = ""

api_response = requests.get(url).json()

def convert_datestamp(day):
    return str(dt.datetime.strptime(day, '%Y-%m-%d'))

def hash_function(country, date):
    string_to_hash = country + date
    hash_obj  = hashlib.sha1(string_to_hash.encode('utf-8'))
    hash_value = hash_obj.hexdigest()
    return hash_value

def map_es_doc(payload, country):
    doc = {
        "day": payload['date'],
        "timestamp": convert_datestamp(payload['date']),
        "country": country,
        "confirmed": payload['confirmed'],
        "recovered": payload['recovered'],
        "deaths": payload['deaths']
    }
    return doc

def ingest(doc_id, payload):
    response = requests.put(
        elasticsearch_url + '/coronastats/coronastats/' + doc_id,
        auth=(elasticsearch_username, elasticsearch_password),
        headers={'content-type': 'application/json'},
        json=payload
    )
    return response.status_code

for country in api_response.keys():
    try:
        for each_payload in api_response[country]:
            doc_id = hash_function(country, each_payload['date'])
            doc = map_es_doc(each_payload, country)
            response = ingest(doc_id, doc)
            print(response)
    except Exception as e:
        print(e)

Run the script to ingest all the data into elasticsearch. Next we will create the script that runs daily and only ingests the previous day's data, so that we don't ingest everything from scratch again.
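The daily filter boils down to comparing each record's date against yesterday's date; here is a small sketch of that comparison (using date objects rather than the string split the script uses, but equivalent):

```python
import datetime as dt

# yesterday's date, computed the same way as in the daily script
yesterdays_date = dt.date.today() - dt.timedelta(days=1)

def is_yesterday(day):
    # normalize the API's non-padded dates (e.g. '2020-4-24') and
    # compare date objects instead of strings
    return dt.datetime.strptime(day, '%Y-%m-%d').date() == yesterdays_date

print(is_yesterday(yesterdays_date.strftime('%Y-%m-%d')))  # True
print(is_yesterday('2020-1-22'))
```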

I will create this file in /opt/scripts/corona_covid19_ingest.py:

#!/usr/bin/python
import requests
import datetime as dt
import json
import hashlib

url = 'https://pomber.github.io/covid19/timeseries.json'
elasticsearch_url = "https://es.domain.com"
elasticsearch_username = ""
elasticsearch_password = ""

api_response = requests.get(url).json()

yesterdays_date = dt.date.today() - dt.timedelta(days=1)

def convert_datestamp(day):
    return str(dt.datetime.strptime(day, '%Y-%m-%d'))

def hash_function(country, date):
    string_to_hash = country + date
    hash_obj  = hashlib.sha1(string_to_hash.encode('utf-8'))
    hash_value = hash_obj.hexdigest()
    return hash_value

def map_es_doc(payload, country):
    doc = {
        "day": payload['date'],
        "timestamp": convert_datestamp(payload['date']),
        "country": country,
        "confirmed": payload['confirmed'],
        "recovered": payload['recovered'],
        "deaths": payload['deaths']
    }
    return doc

def ingest(doc_id, payload):
    response = requests.put(
        elasticsearch_url + '/coronastats/coronastats/' + doc_id,
        auth=(elasticsearch_username, elasticsearch_password),
        headers={'content-type': 'application/json'},
        json=payload
    )
    return response.status_code

for country in api_response.keys():
    try:
        for each_payload in api_response[country]:
            if convert_datestamp(each_payload['date']).split()[0] == str(yesterdays_date):
                print("ingesting latest data for {country}".format(country=country))
                doc_id = hash_function(country, each_payload['date'])
                doc = map_es_doc(each_payload, country)
                response = ingest(doc_id, doc)
                print(response)
    except Exception as e:
        print(e)

The only difference in this script is that it checks whether the date is equal to yesterday's date; if so, the document is prepared and ingested into elasticsearch. We will create a cronjob that runs this script every morning at 08:45.

First make the file executable:

$ chmod +x /opt/scripts/corona_covid19_ingest.py

Run crontab -e and add the following:

45 8 * * * /opt/scripts/corona_covid19_ingest.py

Visualize the Data with Grafana

We will create this dashboard:

corona-covid-19-dashboard

We need an elasticsearch datasource that points to the index we ingest our data into. Head over to datasources, add an elasticsearch datasource, set the index to coronastats, and set the time field to timestamp.

We want to make the dashboard dynamic with a "country" dropdown selector. For that, go to the dashboard settings, select Variables, and add a country variable:

covid19-dashboard-variables

First panel: “Reported Cases per Day”:

covid19-reported-cases

Second panel: “Confirmed Cases”:

covid19-confirmed-cases

Third panel: “Recovered Cases”:

covid19-recovered-cases

Now, if we select Italy, Spain and France as an example, we will see something like this:

covid19-country-stats

Thank You

Although it's pretty cool visualizing data, the situation we are in at the moment with coronavirus / covid19 is really scary, and we should all do our part to stay home, sanitize, and try not to spread the virus. Together we can all do great things by reducing the spread of this virus.

Stay safe everyone.

Nginx Metrics on Prometheus With the Nginx Log Exporter

In this post we will set up an nginx log exporter for Prometheus to get metrics from our nginx web server, such as the number of requests per method, status codes, processed bytes, etc. Then we will configure Prometheus to scrape our nginx metrics endpoint and also create a basic dashboard to visualize our data.

This tutorial assumes that you have Prometheus and Grafana up and running. If not, the embedded links will take you to the blog posts to set them up.

Nginx Webserver

Install nginx:

$ apt update
$ apt install nginx -y

Configure your nginx server's log format to match the nginx log exporter's expected format; we will name it custom:

  log_format custom   '$remote_addr - $remote_user [$time_local] '
                      '"$request" $status $body_bytes_sent '
                      '"$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

Edit your main nginx config:

$ vim /etc/nginx/nginx.conf

This is how my complete config looks:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
# remove the escape char if you are going to use this config
include /etc/nginx/modules-enabled/\*.conf;

events {
  worker_connections 768;
}

http {

  # basic config
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # ssl config
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2; 
  ssl_prefer_server_ciphers on;

  # logging config
  log_format custom   '$remote_addr - $remote_user [$time_local] '
                      '"$request" $status $body_bytes_sent '
                      '"$http_referer" "$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log custom;
  error_log /var/log/nginx/error.log;

  # gzip
  gzip on;

  # virtual host config
  include /etc/nginx/conf.d/myapp.conf;

}

I will delete the default host config:

$ rm -rf /etc/nginx/sites-enabled/default

And then create my /etc/nginx/conf.d/myapp.conf as referenced in my main config, with the following:

server {

  listen 80 default_server;
  # remove the escape char if you are going to use this config
  server_name \_;

  root /var/www/html;
  index index.html index.htm index.nginx-debian.html;

  location / {
    try_files $uri $uri/ =404;
  }

}

When you make a GET request to your server, you should see something like this in your access log:

10x.1x.2x.1x - - [25/Apr/2020:00:31:11 +0000] "GET / HTTP/1.1" 200 396 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1 Safari/605.1.15" "-"

Nginx Log Exporter

Head over to the prometheus-nginxlog-exporter releases page and get the latest version; at the time of writing it is v1.4.0:

$ wget https://github.com/martin-helmich/prometheus-nginxlog-exporter/releases/download/v1.4.0/prometheus-nginxlog-exporter

Make it executable and move it to your path:

$ chmod +x prometheus-nginxlog-exporter
$ mv prometheus-nginxlog-exporter /usr/bin/prometheus-nginxlog-exporter

Create the directory where we will place our config for our exporter:

$ mkdir /etc/prometheus

Create the config file:

$ vim /etc/prometheus/nginxlog_exporter.yml

You can follow the instructions from github.com/prometheus-nginxlog-exporter for more information on configuration, but I will be using the following config:

listen:
  port: 4040
  address: "0.0.0.0"

consul:
  enable: false

namespaces:
  - name: myapp
    format: "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" \"$http_x_forwarded_for\""
    source:
      files:
        - /var/log/nginx/access.log
    labels:
      service: "myapp"
      environment: "production"
      hostname: "myapp.example.com"
    histogram_buckets: [.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10]

Create the systemd unit file:

$ vim /etc/systemd/system/nginxlog_exporter.service

And my configuration that I will be using:

[Unit]
Description=Prometheus Log Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
ExecStart=/usr/bin/prometheus-nginxlog-exporter -config-file /etc/prometheus/nginxlog_exporter.yml

[Install]
WantedBy=multi-user.target

Reload systemd and enable the service on boot:

$ systemctl daemon-reload
$ systemctl enable nginxlog_exporter

Restart the service:

$ systemctl restart nginxlog_exporter

Ensure that the service is running:

$ systemctl status nginxlog_exporter

● nginxlog_exporter.service - Prometheus Log Exporter
   Loaded: loaded (/etc/systemd/system/nginxlog_exporter.service; disabled; vendor preset: enabled)
   Active: active (running) since Sat 2020-04-25 00:50:06 UTC; 5s ago
 Main PID: 4561 (prometheus-ngin)
    Tasks: 7 (limit: 2317)
   CGroup: /system.slice/nginxlog_exporter.service
           └─4561 /usr/bin/prometheus-nginxlog-exporter -config-file /etc/prometheus/nginxlog_exporter.yml

Apr 25 00:50:06 nginx-log-exporter systemd[1]: Started Prometheus Log Exporter.
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: loading configuration file /etc/prometheus/nginxlog_exporter.yml
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: using configuration {Listen:{Port:4040 Address:0.0.0.0} Consul:{Enable:false Address: Datacenter: Scheme: Toke
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: starting listener for namespace myapp
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: running HTTP server on address 0.0.0.0:4040
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: 2020/04/25 00:50:06 Seeked /var/log/nginx/access.log - &{Offset:0 Whence:2}

Test the exporter

Make a couple of requests against your webserver:

$ for each in {1..10}; do curl http://78.141.211.49 ; done

Prometheus will now scrape the exporter's HTTP endpoint (:4040/metrics) and store the returned values. But to get a feel for what the metrics look like, make a request to the metrics endpoint:

$ curl http://localhost:4040/metrics
...
# HELP myapp_http_response_count_total Amount of processed HTTP requests
# TYPE myapp_http_response_count_total counter
myapp_http_response_count_total{environment="production",hostname="myapp.example.com",method="GET",service="myapp",status="200"} 10
myapp_http_response_count_total{environment="production",hostname="myapp.example.com",method="POST",service="myapp",status="404"} 1
# HELP myapp_http_response_size_bytes Total amount of transferred bytes
# TYPE myapp_http_response_size_bytes counter
myapp_http_response_size_bytes{environment="production",hostname="myapp.example.com",method="GET",service="myapp",status="200"} 6120
myapp_http_response_size_bytes{environment="production",hostname="myapp.example.com",method="POST",service="myapp",status="404"} 152
# HELP myapp_parse_errors_total Total number of log file lines that could not be parsed
# TYPE myapp_parse_errors_total counter
myapp_parse_errors_total 0
...

As you can see we are getting metrics such as response count total, response size, errors, etc.
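To get a feel for the exposition format itself, here is a small sketch that parses counter lines like the ones above (sample lines hardcoded and label sets trimmed for brevity):

```python
# Minimal parse of Prometheus exposition-format counter lines,
# using a trimmed sample from the exporter output for illustration.
sample = """\
myapp_http_response_count_total{method="GET",status="200"} 10
myapp_http_response_count_total{method="POST",status="404"} 1
myapp_parse_errors_total 0"""

metrics = {}
for line in sample.splitlines():
    if line.startswith('#'):
        continue  # skip HELP/TYPE comment lines
    name_labels, value = line.rsplit(' ', 1)
    metrics[name_labels] = float(value)

# total processed requests across all label combinations
total = sum(v for k, v in metrics.items()
            if k.startswith('myapp_http_response_count_total'))
print(total)  # 11.0
```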

Configure Prometheus

Let’s configure prometheus to scrape this endpoint. Head over to your prometheus instance, and edit your prometheus config:

$ vim /etc/prometheus/prometheus.yml

Note that in my config I have 2 endpoints that I am scraping: the prometheus endpoint, which already exists, and the nginx endpoint that I will be adding. In full, this is how my config will look:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'nginx'
    scrape_interval: 15s
    static_configs:
      - targets: ['ip.of.nginx.exporter:4040']

Restart prometheus:

$ systemctl restart prometheus

To verify that the exporter is working as expected, head over to your prometheus ui on port 9090, and query up{} to see if your exporters are returning 1:

image

We can then query prometheus with myapp_http_response_count_total{service="myapp"} to see the response counts:

image

Dashboarding in Grafana

If you don’t have Grafana installed, you can look at my Grafana Installation post to get that up and running.

If you have not created the Prometheus datasource, on Grafana, head over to the configuration section on your left, select Datasources, add a Prometheus datasource and add the following (this is assuming grafana runs on the prometheus node - which is fine for testing):

image

Create a new dashboard and add a new panel:

image

Let’s query our data to show us HTTP Method and Status code per 30s: rate(myapp_http_response_count_total{service="myapp"}[$__interval])

image

Thank You

Hope you found this helpful, if you haven’t seen my other posts on Prometheus, have a look at the following:

IPSec Site to Site VPN With Dynamic IPs With Openswan

In this tutorial we will set up a site to site ipsec vpn with strongswan, and we will enable each server to discover the other vpn server via dynamic dns. We will also append roadwarrior support to our config so that you will be able to connect to your homelab from any mobile or laptop device from any remote location.

Some background

One of my friends and I decided to build a site to site vpn with strongswan so that our homelabs could reach each other over private networks.

One challenge is that neither of our internet providers supports static IP addressing, so each vpn server needs to know where to connect whenever the other side's IP address changes.

What we will be doing

We will set up the strongswan vpn on both servers and make the private LAN ranges reachable from both sides. As I have a domain hosted on Cloudflare, I will be using Cloudflare's API to update the A record of each side's DNS name whenever the IP changes.

Environment

On my side, which I will be referring to as Side-A:

Public DNS Name: side-a.example.com
Private Range: 192.168.0.0/24
VPN Server IP: 192.168.0.2

On my friend’s side, which I will be referring to as Side-B:

Public DNS Name: side-b.example.com
Private Range: 192.168.1.0/24
VPN Server IP: 192.168.1.2

Cloudflare Dynamic DNS

You don’t need to use Cloudflare, theres services such as dyndns.com, no-ip.com. But for this tutorial I will be using cloudflare to utilize my own domain.

I will be using the cloudflare-ddns-client

First we need to create a API Token, head over to your dashboard: dash.cloudflare.com, head over to “my profile”, select “API Tokens”, then allow “Read Zones” and “Edit DNS”, then select “Create Token”. Keep the returned token value in a safe place.

Install the prerequisites:

$ apt install python python-dev python-pip make curl build-essential -y

Get the source and install:

$ git clone https://github.com/LINKIWI/cloudflare-ddns-client.git
$ cd cloudflare-ddns-client
$ make install

We will now configure the cloudflare dynamic dns client. This needs to be done on both sides, but I will only demonstrate it for side-a:

$ cloudflare-ddns --configure
Use API token or API key to authenticate?
Choose [T]oken or [K]ey: T
Enter the API token you created at https://dash.cloudflare.com/profile/api-tokens.
Required permissions are READ Account.Access: Organizations, Identity Providers, and Groups; READ Zone.Zone; EDIT Zone.DNS
CloudFlare API token: [redacted]
Enter the domains for which you would like to automatically update the DNS records, delimited by a single comma.
Comma-delimited domains: side-a.example.com

Testing it out to ensure the A record can be updated:

$ cloudflare-ddns --update-now
Found external IPv4: "1.x.x.x"
Listing all zones.
Finding all DNS records.
Updating the A record (ID x) of (sub)domain side-a.example.com (ID x) to 1.x.x.x.
DNS record updated successfully!

We could run the command above in a cron, but I will use a bash script that only runs the update when the public IP has changed: /opt/scripts/detect_ip_change.sh:

#!/bin/bash
set -ex
MY_DDNS_HOST="side-a.example.com"

if [ $(dig ${MY_DDNS_HOST} +short) == $(curl -s icanhazip.com) ];
  then exit 0;
  else /usr/local/bin/cloudflare-ddns --update-now;
fi

Make the file executable: chmod +x /opt/scripts/detect_ip_change.sh then edit your cronjobs: crontab -e and add the script:

* * * * * /opt/scripts/detect_ip_change.sh

This will keep your DNS record updated; it needs to be done on both sides if you want to use dynamic dns.

Port Forwarding

We will need to forward UDP traffic from the router to the VPN server, on both sides:

Port: UDP/500 
Target: VPN-Server-IP:500

Port: UDP/4500
Target: VPN-Server-IP:4500

Create a Pre-Shared Key

Create a preshared key that will be used on both sides to authenticate:

$ openssl rand -base64 36
pgDU4eKZaQNL7GNRWJPvZbaSYFn2PAFjK9vDOvxAQ85p7qc4

This value will be used on both sides, which we will need later.
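If you don't have openssl at hand, an equivalent pre-shared key can be generated with Python's standard library (an equivalent sketch, not what the post used):

```python
import base64
import secrets

# 36 random bytes, base64-encoded: equivalent to `openssl rand -base64 36`
psk = base64.b64encode(secrets.token_bytes(36)).decode('ascii')
print(psk)  # e.g. a 48-character base64 string
```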

Install Strongswan on Side-A

Install strongswan and enable the service on boot:

$ apt install strongswan -y
$ systemctl enable strongswan

The left side will be the side we are configuring and the right side will be the remote side.

Create the config: /etc/ipsec.conf and provide the following config:

config setup
    charondebug="all"
    uniqueids=yes
    virtual_private=
    cachecrls=no

conn vpn-to-side-b
    type=tunnel
    authby=secret
    left=%defaultroute
    leftid=side-a.example.com
    leftsubnet=192.168.0.0/24
    right=%side-b.example.com
    rightid=side-b.example.com
    rightsubnet=192.168.1.0/24
    ike=aes256-sha2_256-modp1024!
    esp=aes256-sha2_256!
    keyingtries=0
    ikelifetime=1h
    lifetime=8h
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart
    auto=start

Create the secrets file: /etc/ipsec.secrets:

side-b.example.com : PSK "pgDU4eKZaQNL7GNRWJPvZbaSYFn2PAFjK9vDOvxAQ85p7qc4"

Append the following kernel parameters to /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0

Apply the settings:

$ sysctl -p

We now want to add a POSTROUTING and FORWARD rule using iptables:

$ iptables -t nat -A POSTROUTING -s 192.168.1.0/24  -d 192.168.0.0/24 -j MASQUERADE
$ iptables -A FORWARD -s 192.168.1.0/24 -d 192.168.0.0/24 -j ACCEPT

Now we need to route back:

$ ip route add 192.168.1.0/24 via 192.168.0.2 dev eth0

We want to persist the iptables rules and static route across reboots, so edit the /etc/rc.local file; if it doesn't exist, create it with the following content:

#!/bin/bash
iptables -t nat -A POSTROUTING -s 192.168.1.0/24  -d 192.168.0.0/24 -j MASQUERADE
iptables -A FORWARD -s 192.168.1.0/24 -d 192.168.0.0/24 -j ACCEPT
ip route add 192.168.1.0/24 via 192.168.0.2 dev eth0
exit 0

If you created the file, make sure to apply executable permissions:

$ chmod +x /etc/rc.local

Read the secrets and restart strongswan:

$ ipsec rereadsecrets
$ systemctl restart strongswan

Install Strongswan on Side-B

Install strongswan and enable the service on boot:

$ apt install strongswan -y
$ systemctl enable strongswan

The left side will be the side we are configuring and the right side will be the remote side.

Create the config: /etc/ipsec.conf and provide the following config:

config setup
    charondebug="all"
    uniqueids=yes
    virtual_private=
    cachecrls=no

conn vpn-to-side-a
    type=tunnel
    authby=secret
    left=%defaultroute
    leftid=side-b.example.com
    leftsubnet=192.168.1.0/24
    right=%side-a.example.com
    rightid=side-a.example.com
    rightsubnet=192.168.0.0/24
    ike=aes256-sha2_256-modp1024!
    esp=aes256-sha2_256!
    keyingtries=0
    ikelifetime=1h
    lifetime=8h
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart
    auto=start

Create the secrets file: /etc/ipsec.secrets:

side-a.example.com : PSK "pgDU4eKZaQNL7GNRWJPvZbaSYFn2PAFjK9vDOvxAQ85p7qc4"

Append the following kernel parameters to /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0

Apply the settings:

$ sysctl -p

We now want to add a POSTROUTING and FORWARD rule using iptables:

$ iptables -t nat -A POSTROUTING -s 192.168.0.0/24  -d 192.168.1.0/24 -j MASQUERADE
$ iptables -A FORWARD -s 192.168.0.0/24 -d 192.168.1.0/24 -j ACCEPT

Now we need to route back:

$ ip route add 192.168.0.0/24 via 192.168.1.2 dev eth0

We want to persist the iptables rules and static route across reboots, so edit the /etc/rc.local file; if it doesn't exist, create it with the following content:

#!/bin/bash
iptables -t nat -A POSTROUTING -s 192.168.0.0/24  -d 192.168.1.0/24 -j MASQUERADE
iptables -A FORWARD -s 192.168.0.0/24 -d 192.168.1.0/24 -j ACCEPT
ip route add 192.168.0.0/24 via 192.168.1.2 dev eth0
exit 0

If you created the file, make sure to apply executable permissions:

$ chmod +x /etc/rc.local

Read the secrets and restart strongswan:

$ ipsec rereadsecrets
$ systemctl restart strongswan

Verify Status

Verify that the ipsec tunnel is up on side-a:

$ ipsec statusall

Connections:
  vpn-to-side-b:  %any...side-b.example.com,0.0.0.0/0,::/0  IKEv1/2
  vpn-to-side-b:   local:  [side-a.example.com] uses pre-shared key authentication
  vpn-to-side-b:   remote: [side-b.example.com] uses pre-shared key authentication
  vpn-to-side-b:   child:  192.168.0.0/24 === 192.168.1.0/24 TUNNEL
Security Associations (1 up, 0 connecting):
  vpn-to-side-b[1]: ESTABLISHED 28 minutes ago, 192.168.0.2[side-a.example.com]...4x.x.x.214[side-b.example.com]
  vpn-to-side-b[1]: IKEv2 SPIs: 81996170df1c927d_i e8294946491ddf08_r, pre-shared key reauthentication in 2 hours
  vpn-to-side-b[1]: IKE proposal: AES_CBC_128/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/ECP_256
  vpn-to-side-b{2}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: cc4504be_i c294cb26_o
  vpn-to-side-b{2}:  AES_CBC_128/HMAC_SHA2_256_128, 0 bytes_i, 240 bytes_o (4 pkts, 7s ago), rekeying in 18 minutes
  vpn-to-side-b{2}:   192.168.0.0/24 === 192.168.1.0/24

Verify that the ipsec tunnel is up on side-b:

$ ipsec statusall

Connections:
 vpn-to-side-a:  %any...side-a.example.com,0.0.0.0/0,::/0  IKEv1/2
 vpn-to-side-a:   local:  [side-b.example.com] uses pre-shared key authentication
 vpn-to-side-a:   remote: [side-a.example.com] uses pre-shared key authentication
 vpn-to-side-a:   child:  192.168.1.0/24 === 192.168.0.0/24 TUNNEL
Security Associations (1 up, 0 connecting):
 vpn-to-side-a[2]: ESTABLISHED 20 minutes ago, 192.168.1.2[side-b.example.com]...14x.x.x.x[side-a.example.com]
 vpn-to-side-a[2]: IKEv2 SPIs: 81996170df1c927d_i e8294946491ddf08_r, pre-shared key reauthentication in 2 hours
 vpn-to-side-a[2]: IKE proposal: AES_CBC_128/HMAC_SHA2_256_128/PRF_HMAC_SHA2_256/ECP_256
 vpn-to-side-a{2}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: c294cb26_i cc4504be_o
 vpn-to-side-a{2}:  AES_CBC_128/HMAC_SHA2_256_128, 0 bytes_i, 0 bytes_o, rekeying in 26 minutes
 vpn-to-side-a{2}:   192.168.1.0/24 === 192.168.0.0/24

From side-a (192.168.0.2) ping the gateway on side-b (192.168.1.1):

$ ping -c2 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=62 time=11.9 ms

If you want to be able to reach the private range of the other side of the vpn from any device on your network, you should add a static route on your router to inform your default gateway where to route traffic to.

In this case, on side-a (192.168.0.0/24), we want to inform our default gateway to send traffic for 192.168.1.0/24 to the VPN server, as the VPN server knows to route that destination over the tunnel.

On side-a, on your router, add a static route:

Route: 192.168.1.0
Subnet: 255.255.255.0
Gateway: 192.168.0.2

On side-b, on your router, add a static route:

Route: 192.168.0.0
Subnet: 255.255.255.0
Gateway: 192.168.1.2

Optional: Roadwarrior VPN Clients

This step is optional, but since we can access each others homelabs, we thought it would be nice to be able to access the resources from mobile devices or laptops when we are on remote locations.

We made it so that each VPN owner connects to their own endpoint (for roadwarriors), so side-a (which is me) will connect to its own dns endpoint when away from home.

I will only demonstrate how to extend your config to add the ability for a roadwarrior vpn connection; append to /etc/ipsec.conf:

# ...
conn ikev2-vpn
    auto=add
    type=tunnel
    authby=secret
    left=%any
    leftid=side-a.roadwarrior
    leftsubnet=0.0.0.0/0
    right=%any
    rightid=%any
    rightsourceip=10.10.0.0/24
    rightdns=192.168.0.1,8.8.8.8

Append the secret in /etc/ipsec.secrets:

# ...
side-a.roadwarrior my-laptop : PSK "MySuperSecureSecret123"

Add the vpn IPs that we will assign to the roadwarrior clients to the routing table:

$ ip route add 10.10.0.0/24 via 192.168.0.2 dev eth0

If you only want the roadwarriors to be able to reach your network, you will only forward to the local network such as:

$ iptables -A FORWARD -s 10.10.0.0/24 -d 192.168.0.0/24 -j ACCEPT

But we will be forwarding traffic to all destinations:

$ iptables -A FORWARD -s 10.10.0.0/24 -d 0.0.0.0/0 -j ACCEPT
$ iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -d 0.0.0.0/0 -j MASQUERADE

Remember to append the routes to /etc/rc.local to persist across reboots.

Reread the secrets and restart strongswan:

$ ipsec rereadsecrets
$ systemctl restart strongswan

Now connect your VPN client; I will be using my laptop, with the following details:

VPN Type: IKEv2
Description: Home VPN
Server: side-a.example.com
Remote ID: side-a.roadwarrior
Local ID: my-laptop
User Authentication: None
Secret: MySuperSecureSecret123

Thank You

In this tutorial I demonstrated how to set up a site to site ipsec vpn between two sites whose internet connections have dynamic IPs, and how to append a roadwarrior config so that you can connect to your homelab from anywhere in the world.

Persistent Volumes With K3d Kubernetes

With k3d we can mount a host path into the node containers, and with persistent volumes we can set a hostPath for our persistent volumes. With k3d, all the nodes use the same volume mapping, which maps back to the host.

We will test data persistence by writing a file inside the container, killing the pod, then exec'ing into the new pod to check whether the data persisted.

The k3d Cluster

Create the directory on the host where we will persist the data:

> mkdir -p /tmp/k3dvol

Create the cluster:

> k3d create --name "k3d-cluster" --volume /tmp/k3dvol:/tmp/k3dvol --publish "80:80" --workers 2
> export KUBECONFIG="$(k3d get-kubeconfig --name='k3d-cluster')"

Our application will be a busybox container kept running with a ping command, with the persistent volume mapped to /data inside the pod.

Our app.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/k3dvol"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: echo
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
      - image: busybox
        name: echo
        volumeMounts:
          - mountPath: "/data"
            name: task-pv-storage
        command: ["ping", "127.0.0.1"]

Deploy the workload:

> kubectl apply -f app.yml
persistentvolume/task-pv-volume created
persistentvolumeclaim/task-pv-claim created
deployment.apps/echo created

View the persistent volumes:

> kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
task-pv-volume                             1Gi        RWO            Retain           Bound    default/task-pv-claim    manual                  6s

View the Persistent Volume Claims:

> kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim    Bound    task-pv-volume                             1Gi        RWO            manual         11s

View the pods:

> kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
echo-58fd7d9b6-x4rxj   1/1     Running   0          16s

Exec into the pod:

> kubectl exec -it echo-58fd7d9b6-x4rxj sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  58.4G     36.1G     19.3G  65% /
osxfs                   233.6G    139.7G     86.3G  62% /data
/dev/sda1                58.4G     36.1G     19.3G  65% /etc/hosts
/dev/sda1                58.4G     36.1G     19.3G  65% /dev/termination-log
/dev/sda1                58.4G     36.1G     19.3G  65% /etc/hostname
/dev/sda1                58.4G     36.1G     19.3G  65% /etc/resolv.conf

Write the hostname of the current pod to the persistent volume path:

/ # echo $(hostname)
echo-58fd7d9b6-x4rxj
/ # echo $(hostname) > /data/hostname.txt
/ # exit

Exit the pod and read the content from the host (workstation/laptop):

> cat /tmp/k3dvol/hostname.txt
echo-58fd7d9b6-x4rxj

Look at the host that the pod is running on:

> kubectl get nodes -o wide
NAME                       STATUS   ROLES    AGE   VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION     CONTAINER-RUNTIME
k3d-k3d-cluster-server     Ready    master   13m   v1.17.2+k3s1   192.168.32.2   <none>        Unknown    4.9.184-linuxkit   containerd://1.3.3-k3s1
k3d-k3d-cluster-worker-1   Ready    <none>   13m   v1.17.2+k3s1   192.168.32.4   <none>        Unknown    4.9.184-linuxkit   containerd://1.3.3-k3s1
k3d-k3d-cluster-worker-0   Ready    <none>   13m   v1.17.2+k3s1   192.168.32.3   <none>        Unknown    4.9.184-linuxkit   containerd://1.3.3-k3s1

Delete the pod:

> kubectl delete pod/echo-58fd7d9b6-x4rxj
pod "echo-58fd7d9b6-x4rxj" deleted

Wait until the pod is rescheduled again and verify if the pod is running on a different node:

> kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP          NODE                       NOMINATED NODE   READINESS GATES
echo-58fd7d9b6-fkvbs   1/1     Running   0          35s   10.42.2.9   k3d-k3d-cluster-worker-1   <none>           <none>
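
Instead of polling `kubectl get pods` manually, you can let kubectl block until the deployment is fully rolled out again. The deployment name comes from app.yml above; the timeout value is just an illustration:

```shell
# blocks until the echo deployment reports all replicas ready
> kubectl rollout status deployment/echo --timeout=120s
```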

Exec into the new pod:

> kubectl exec -it echo-58fd7d9b6-fkvbs sh

Verify that the data persisted:

/ # hostname
echo-58fd7d9b6-fkvbs

/ # cat /data/hostname.txt
echo-58fd7d9b6-x4rxj
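
When you are done, a quick teardown — the resource and cluster names are the ones used earlier in this post:

```shell
# remove the pv, pvc and deployment, then delete the cluster
> kubectl delete -f app.yml
> k3d delete --name "k3d-cluster"
```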

Asynchronous Function With OpenFaas

In this post we will explore how to use asynchronous functions in OpenFaas.

What are we doing

A synchronous request blocks the client until the operation completes, whereas an asynchronous request doesn't block the client, which is useful for long-running tasks; OpenFaas runs these function invocations in the background through NATS Streaming.

We will be building a Python Flask API server which will act as our webhook service. When we invoke our function by making an HTTP request, we also include a callback URL as a header, which will be the address where the queue worker will post its results.

Then we will make an HTTP request to the synchronous function, where we get the response directly, and an HTTP request to the asynchronous function, where we read the response from the webhook service's logs.

Deploy OpenFaas

Deploy OpenFaas on a k3d Kubernetes cluster if you want to follow along on your laptop. You can follow this post to deploy a kubernetes cluster and OpenFaas.

Webhook Service

Let's build the Python Flask webhook service. Our application code:

from flask import Flask, request
from logging.config import dictConfig

dictConfig({
    'version': 1,
    'formatters': {'default': {
        'format': '[%(asctime)s] %(levelname)s in %(module)s: %(message)s',
    }},
    'handlers': {'wsgi': {
        'class': 'logging.StreamHandler',
        'stream': 'ext://flask.logging.wsgi_errors_stream',
        'formatter': 'default'
    }},
    'root': {
        'level': 'INFO',
        'handlers': ['wsgi']
    }
})

app = Flask(__name__)

@app.route("/", methods=["POST", "GET"])
def main():
    response = {}

    if request.method == "GET":
        response["event"] = "GET"
        app.logger.info("Received Event: GET")

    elif request.method == "POST":
        response["event"] = request.get_data()
        app.logger.info("Received Event: {}".format(response))

    else:
        response["event"] = "OTHER"

    print("Received Event:")
    print(response)
    return "event: {} \n".format(response)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Our Dockerfile:

FROM python:3.7-alpine
RUN pip install flask
ADD app.py /app.py
EXPOSE 5000
CMD ["python", "/app.py"]

Building and Pushing to Docker Hub (or you can use my docker image):

$ docker build -t yourusername/python-flask-webhook:openfaas .
$ docker push yourusername/python-flask-webhook:openfaas

Create the deployment manifest webhook.yml for our webhook service:

$ cat > webhook.yml << EOF
apiVersion: v1
kind: Service
metadata:
  name: webhook-service
spec:
  selector:
    app: webhook
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      name: web
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webhook-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: webhook.localdns.xyz
    http:
      paths:
      - backend:
          serviceName: webhook-service
          servicePort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webhook
  name: webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook
  template:
    metadata:
      labels:
        app: webhook
    spec:
      containers:
      - name: webhook
        image: ruanbekker/python-flask-webhook:openfaas
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
          name: http
          protocol: TCP
EOF

Now deploy to kubernetes:

$ kubectl apply -f webhook.yml

After a minute or so, verify that you get a response when making an HTTP request:

$ curl http://webhook.localdns.xyz
event: {'event': 'GET'}

Deploy the OpenFaas Function

We will deploy a dockerfile-type function which returns the data that we feed it:

$ faas-cli new --lang dockerfile function-async-task
$ faas-cli up -f function-async-task.yml

Deploying: function-async-task.

Deployed. 202 Accepted.
URL: http://openfaas.localdns.xyz/function/function-async-task

List the functions:

$ faas-cli list
Function                       Invocations      Replicas
function-async-task            0               1

Describe the function:

$ faas-cli describe function-async-task
Name:                function-async-task
Status:              Ready
Replicas:            1
Available replicas:  1
Invocations:         0
Image:               ruanbekker/function-async-task:latest
Function process:
URL:                 http://openfaas.localdns.xyz/function/function-async-task
Async URL:           http://openfaas.localdns.xyz/async-function/function-async-task
Labels:              faas_function : function-async-task
Annotations:         prometheus.io.scrape : false

Testing

Test the synchronous function:

$ curl http://openfaas.localdns.xyz/function/function-async-task -d "test"
test

Test the asynchronous function. Remember, here we need to provide the callback URL that the queue worker will post the result to, which will be our webhook service:

$ curl -i -H "X-Callback-Url: http://webhook-service.default.svc.cluster.local:5000" http://openfaas.localdns.xyz/async-function/function-async-task -d "asyyyyync"
HTTP/1.1 202 Accepted
Content-Length: 0
Date: Mon, 17 Feb 2020 13:57:26 GMT
Vary: Accept-Encoding
X-Call-Id: d757c10f-4293-4daa-bf52-bbdc17b7dea3
X-Start-Time: 1581947846737501600

Check the logs of the webhook pod:

$ kubectl logs -f pod/$(kubectl get pods --selector=app=webhook --output=jsonpath="{.items..metadata.name}")
[2020-02-17 13:57:26,774] INFO in app: Received Event: {'event': b'asyyyyync'}
[2020-02-17 13:57:26,775] INFO in internal: 10.42.0.6 - - [17/Feb/2020 13:57:26] "POST / HTTP/1.1" 200 -

Check the logs of the queue worker:

$ kubectl logs -f deployment/queue-worker -n openfaas
[45] Received on [faas-request]: 'sequence:45 subject:"faas-request" data:"{\"Header\":{\"Accept\":[\"*/*\"],\"Accept-Encoding\":[\"gzip\"],\"Content-Length\":[\"9\"],\"Content-Type\":[\"application/x-www-form-urlencoded\"],\"User-Agent\":[\"curl/7.54.0\"],\"X-Call-Id\":[\"d757c10f-4293-4daa-bf52-bbdc17b7dea3\"],\"X-Callback-Url\":[\"http://webhook-service.default.svc.cluster.local:5000\"],\"X-Forwarded-For\":[\"10.42.0.0\"],\"X-Forwarded-Host\":[\"openfaas.localdns.xyz\"],\"X-Forwarded-Port\":[\"80\"],\"X-Forwarded-Proto\":[\"http\"],\"X-Forwarded-Server\":[\"traefik-6787cddb4b-87zss\"],\"X-Real-Ip\":[\"10.42.0.0\"],\"X-Start-Time\":[\"1581947846737501600\"]},\"Host\":\"openfaas.localdns.xyz\",\"Body\":\"YXN5eXl5eW5j\",\"Method\":\"POST\",\"Path\":\"\",\"QueryString\":\"\",\"Function\":\"openfaas-function-cat\",\"CallbackUrl\":{\"Scheme\":\"http\",\"Opaque\":\"\",\"User\":null,\"Host\":\"webhook-service.default.svc.cluster.local:5000\",\"Path\":\"\",\"RawPath\":\"\",\"ForceQuery\":false,\"RawQuery\":\"\",\"Fragment\":\"\"}}" timestamp:1581947846738308800 '
Invoking: openfaas-function-cat with 9 bytes, via: http://gateway.openfaas.svc.cluster.local:8080/function/openfaas-function-cat/
Invoked: openfaas-function-cat [200] in 0.029029s
Callback to: http://webhook-service.default.svc.cluster.local:5000
openfaas-function-cat returned 9 bytes
Posted result for openfaas-function-cat to callback-url: http://webhook-service.default.svc.cluster.local:5000, status: 200

Make 1000 Requests:

$ date > time.date
$ for x in {1..1000}; do
    curl -i -H "X-Callback-Url: http://webhook-service.default.svc.cluster.local:5000" http://openfaas.localdns.xyz/async-function/openfaas-function-cat -d "asyyyyync"
  done
$ date >> time.date

View the timestamps we wrote before and after our requests:

$ cat time.date
Mon Feb 17 16:03:16 SAST 2020
Mon Feb 17 16:03:48 SAST 2020

The last request was actioned at:

[2020-02-17 14:03:52,421] INFO in internal: 10.42.0.6 - - [17/Feb/2020 14:03:52] "POST / HTTP/1.1" 200 -
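
From the timestamps above, the gateway accepted 1000 async submissions in roughly 32 seconds (16:03:16 to 16:03:48 SAST), and the webhook saw the last result about four seconds after the final submission (its log is in UTC, i.e. 16:03:52 SAST). A quick back-of-the-envelope rate:

```shell
# 1000 requests accepted in ~32 seconds (integer division)
echo $(( 1000 / 32 ))   # requests per second accepted by the gateway
```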

Thank You

This was a basic example to demonstrate async functions using OpenFaas.


Traefik Ingress for OpenFaas on Kubernetes (K3d)

In this post we will deploy OpenFaas on Kubernetes locally using k3sup and k3d, then deploy a Traefik Ingress so that we can access the OpenFaas Gateway on HTTP over the standard port 80.

K3d is an amazing wrapper that deploys a k3s cluster on docker, and k3sup makes it very easy to provision OpenFaas to your Kubernetes cluster.

Deploy a Kubernetes Cluster

If you have not installed k3d, you can install k3d on mac with brew:

$ brew install k3d

We will deploy our cluster with 2 worker nodes and publish the host's port 80 to the container's port 80:

$ k3d create --name="demo" --workers="2" --publish="80:80"

Point the kubeconfig to the location that k3d generated:

$ export KUBECONFIG="$(k3d get-kubeconfig --name='demo')"

Deploy OpenFaas

First we need to get k3sup:

$ curl -sLfS https://get.k3sup.dev | sudo sh

Once k3sup is installed, deploy OpenFaas to your cluster:

$ k3sup app install openfaas

Give it a minute or so and check if everything is running:

$ kubectl get pods -n openfaas
NAMESPACE     NAME                                 READY   STATUS      RESTARTS   AGE
openfaas      alertmanager-546f66b6c6-qtb69        1/1     Running     0          5m
openfaas      basic-auth-plugin-79b9878b7b-7vlln   1/1     Running     0          4m59s
openfaas      faas-idler-db8cd9c7d-8xfpp           1/1     Running     2          4m57s
openfaas      gateway-7dcc6d694d-dmvqn             2/2     Running     0          4m56s
openfaas      nats-d6d574749-rt9vw                 1/1     Running     0          4m56s
openfaas      prometheus-d99669d9b-mfxc8           1/1     Running     0          4m53s
openfaas      queue-worker-75f44b56b9-mhhbv        1/1     Running     0          4m52s

Traefik Ingress

In my scenario, I am using openfaas.localdns.xyz which resolves to 127.0.0.1. Next we need to know which service to route the traffic to, which we can find with:

$ kubectl get svc/gateway -n openfaas
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
gateway   ClusterIP   10.43.174.57   <none>        8080/TCP   23m

Below is our ingress.yml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: openfaas-gateway-ingress
  namespace: openfaas
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: openfaas.localdns.xyz
    http:
      paths:
      - backend:
          serviceName: gateway
          servicePort: 8080

Apply the ingress:

$ kubectl apply -f ingress.yml
ingress.extensions/openfaas-gateway-ingress created

We can then verify that our ingress is visible:

$ kubectl get ingress -n openfaas
NAMESPACE   NAME                       HOSTS               ADDRESS      PORTS   AGE
openfaas    openfaas-gateway-ingress   openfaas.localdns.xyz   172.25.0.4   80      28s

OpenFaas CLI

Install the OpenFaas CLI:

$ curl -SLsf https://cli.openfaas.com | sudo sh

Set OPENFAAS_URL to our ingress endpoint and OPENFAAS_PREFIX to your Docker Hub username:

$ export OPENFAAS_URL=http://openfaas.localdns.xyz
$ export OPENFAAS_PREFIX=ruanbekker # change to your username

Get your credentials for the OpenFaas Gateway and login with the OpenFaas CLI:

$ PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
$ echo -n $PASSWORD | faas-cli login --username admin --password-stdin

Deploy a Function

Deploy the figlet function as an example:

$ faas-cli store deploy figlet

Deployed. 202 Accepted.
URL: http://openfaas.localdns.xyz/function/figlet

Invoke the function:

$ curl http://openfaas.localdns.xyz/function/figlet -d 'hello, world'
 _          _ _                             _     _
| |__   ___| | | ___    __      _____  _ __| | __| |
| '_ \ / _ \ | |/ _ \   \ \ /\ / / _ \| '__| |/ _` |
| | | |  __/ | | (_) |   \ V  V / (_) | |  | | (_| |
|_| |_|\___|_|_|\___( )   \_/\_/ \___/|_|  |_|\__,_|
                    |/

Delete the Cluster

Delete your k3d Kubernetes Cluster:

$ k3d delete --name demo

Thank You

Install OpenFaas on K3d Kubernetes

In this post we will deploy OpenFaas on kubernetes (k3d).

Kubernetes on k3d

k3d is a helper tool that provisions k3s, a lightweight kubernetes distribution, on docker. To deploy a kubernetes cluster with k3d, you can follow this blog post.

Deploy a 3 Node Kubernetes Cluster

Using k3d, let’s deploy a kubernetes cluster:

$ k3d create --name="demo" --workers="2" --publish="80:80"

Export the kubeconfig:

$ export KUBECONFIG="$(k3d get-kubeconfig --name='demo')"

Verify that you are able to communicate with your kubernetes cluster:

$ kubectl get nodes

Deploy OpenFaas

First we need to get k3sup:

$ curl -sLfS https://get.k3sup.dev | sudo sh

Once k3sup is installed, deploy openfaas to your cluster:

$ k3sup app install openfaas

Give it a minute or so and check if everything is running:

$ kubectl get pods -n openfaas
NAMESPACE     NAME                                 READY   STATUS      RESTARTS   AGE
openfaas      alertmanager-546f66b6c6-qtb69        1/1     Running     0          5m
openfaas      basic-auth-plugin-79b9878b7b-7vlln   1/1     Running     0          4m59s
openfaas      faas-idler-db8cd9c7d-8xfpp           1/1     Running     2          4m57s
openfaas      gateway-7dcc6d694d-dmvqn             2/2     Running     0          4m56s
openfaas      nats-d6d574749-rt9vw                 1/1     Running     0          4m56s
openfaas      prometheus-d99669d9b-mfxc8           1/1     Running     0          4m53s
openfaas      queue-worker-75f44b56b9-mhhbv        1/1     Running     0          4m52s

Install the openfaas-cli:

$ curl -SLsf https://cli.openfaas.com | sudo sh

In a screen session, forward port 8080 to the gateway service:

$ screen -S portfwd-process -m -d sh -c "kubectl port-forward -n openfaas svc/gateway 8080:8080"

Expose the gateway password as an environment variable:

$ PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)

Then login to the gateway:

$ echo -n $PASSWORD | faas-cli login --username admin --password-stdin

Deploy an OpenFaas Function

To list all the functions:

$ faas-cli store list

To deploy the figlet function:

$ faas-cli store deploy figlet

Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/figlet

List your deployed functions:

$ faas-cli list
Function                          Invocations     Replicas
figlet                            0                1

Invoke your function:

$ curl http://127.0.0.1:8080/function/figlet -d 'hello, world'
 _          _ _                             _     _
| |__   ___| | | ___    __      _____  _ __| | __| |
| '_ \ / _ \ | |/ _ \   \ \ /\ / / _ \| '__| |/ _` |
| | | |  __/ | | (_) |   \ V  V / (_) | |  | | (_| |
|_| |_|\___|_|_|\___( )   \_/\_/ \___/|_|  |_|\__,_|
                    |/

Delete your Cluster

When you are done, delete your kubernetes cluster:

$ k3d delete --name demo

Thank You

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


Lightweight Development Kubernetes Options: K3d

In this post we will cover a lightweight development kubernetes called, “k3d” which we will deploy on a mac.

What is k3d

k3d is a binary that provisions a k3s kubernetes cluster on docker.

Pre-Requirements

You will require docker and we will be using brew to install k3d on a mac.

Install k3d

Installing k3d is as easy as:

$ brew install k3d

Verify your installation:

$ k3d --version
k3d version v1.3.1

Deploy a 3 Node Cluster

Using k3d, we will deploy a 3 node k3s cluster:

$ k3d create --name="demo" --workers="2" --publish="80:80"

This will deploy a master and 2 worker nodes, and publish the host's port 80 to the container's port 80 (k3s ships with Traefik by default).

Set your kubeconfig:

$ export KUBECONFIG="$(k3d get-kubeconfig --name='demo')"

Test it out by listing your nodes:

$ kubectl get nodes
NAME                STATUS   ROLES    AGE    VERSION
k3d-demo-server     Ready    master   102s   v1.14.6-k3s.1
k3d-demo-worker-0   Ready    worker   102s   v1.14.6-k3s.1
k3d-demo-worker-1   Ready    worker   102s   v1.14.6-k3s.1

That was easy right?

Deploy a Sample App

We will deploy a simple golang web application that returns the container name upon an HTTP request. We will also make use of the traefik ingress for demonstration.

Our deployment manifest that I will save as app.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k3s-demo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k3d-demo
  template:
    metadata:
      labels:
        app: k3d-demo
    spec:
      containers:
      - name: k3d-demo
        image: ruanbekker/hostname:latest
---
apiVersion: v1
kind: Service
metadata:
  name: k3d-demo
  namespace: default
spec:
  ports:
  - name: http
    targetPort: 8000
    port: 80
  selector:
    app: k3d-demo
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k3d-demo
  annotations:
    kubernetes.io/ingress.class: "traefik"

spec:
  rules:
  - host: k3d-demo.example.org
    http:
      paths:
      - path: /
        backend:
          serviceName: k3d-demo
          servicePort: http

Deploy our application:

$ kubectl apply -f app.yml
deployment.extensions/k3s-demo created
service/k3d-demo created
ingress.extensions/k3d-demo created

Verify that the pods are running:

$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
k3s-demo-f76d866b9-dv5z9   1/1     Running   0          10s
k3s-demo-f76d866b9-qxltk   1/1     Running   0          10s

Make a http request:

$ curl -H "Host: k3d-demo.example.org" http://localhost
Hostname: k3d-demo-f76d866b9-qxltk

Deleting your Cluster

To delete your cluster:

$ k3d delete --name demo

Thank You

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


Run Localstack as a Service Container for AWS Mock Services on Drone CI

In this tutorial we will set up a basic pipeline in drone to make use of service containers; we will provision localstack so that we can use mock AWS services.

Once the localstack service is up, we will create a kinesis stream, put 100 records into it, read them back, and delete the stream.

Gitea and Drone Stack

If you don’t have the stack set up, have a look at this post where I go into detail on how to get that done.

Create the Drone Config

In gitea, I have created a new git repository and created my drone config as .drone.yml with this pipeline config:

---
kind: pipeline
type: docker
name: localstack

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-localstack
    image: busybox
    commands:
      - sleep 10

  - name: list-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis list-streams

  - name: create-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis create-stream --stream-name mystream --shard-count 1

  - name: describe-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis describe-stream --stream-name mystream

  - name: put-record-into-kinesis
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - for record in $$(seq 1 100); do aws --endpoint-url=http://localstack:4568 kinesis put-record --stream-name mystream --partition-key 123 --data testdata_$$record ; done

  - name: get-record-from-kinesis
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - SHARD_ITERATOR=$$(aws --endpoint-url=http://localstack:4568 kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name mystream --query 'ShardIterator' --output text)
      - SHARD_ITERATOR=$$(aws --endpoint-url=http://localstack:4568 kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name mystream --query 'ShardIterator' --output text)
      - for each in $$(aws --endpoint-url=http://localstack:4568 kinesis get-records --shard-iterator $$SHARD_ITERATOR | jq -cr '.Records[].Data'); do echo $$each | base64 -d ; echo "" ; done

  - name: delete-kinesis-stream
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis delete-stream --stream-name mystream

services:
  - name: localstack
    image: localstack/localstack
    privileged: true
    environment:
      DOCKER_HOST: unix:///var/run/docker.sock
    volumes:
      - name: docker-socket
        path: /var/run/docker.sock
      - name: localstack-vol
        path: /tmp/localstack
    ports:
      - 8080

volumes:
- name: localstack-vol
  temp: {}
- name: docker-socket
  host:
    path: /var/run/docker.sock

To explain what we are doing: we bring up localstack as a service container, then use the aws cli pointed at the localstack kinesis endpoint to create a kinesis stream, put 100 records into it, read them back, and delete the stream afterwards.
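
Kinesis returns each record's Data field base64-encoded, which is why the get-record step pipes it through base64 -d. The round-trip for one of the test records looks like this:

```shell
# kinesis record data is base64-encoded on the wire
echo -n 'testdata_1' | base64          # encode: dGVzdGRhdGFfMQ==
echo -n 'dGVzdGRhdGFfMQ==' | base64 -d # decode: testdata_1
```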

Trigger the Pipeline

Then I head to drone, activate my new git repository and mark the repository as “Trusted”. I committed a dummy file to trigger the pipeline, and it should look like this:

image

List Streams:

image

Put Records:

image

Delete Stream:

image

Run Kubernetes (K3s) as a Service Container on Drone CI

Drone services allow you to run a service container that is available for the duration of your build, which is great if you want an ephemeral service to test your applications against.

Today we will experiment with services on drone: we will deploy a k3s cluster (a kubernetes distribution built by rancher) as a drone service and interact with our cluster using kubectl.

I will be using multiple pipelines, where we will first deploy our “dev cluster”, when it’s up, we will use kubectl to interact with the cluster, once that is done, we will deploy our “staging cluster” and do the same.

This is very basic and we are not doing anything special, but this is a starting point and you can do pretty much whatever you want.

What is Drone

If you are not aware of Drone, Drone is a container-native continuous delivery platform written in Go, and you can check them out here: github.com/drone

Setup Gitea and Drone

If you don’t have the stack set up, have a look at this post where I go into detail on how to get that done.

Create your Git Repo

Go ahead and create a git repo, you can name it anything, then it should look something like this:

image

Create a drone configuration, .drone.yml; my pipeline looks like this:

---
kind: pipeline
type: docker
name: dev

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-k3s
    image: ruanbekker/build-tools
    commands:
      - sleep 30

  - name: prepare-k3s-kubeconfig
    image: alpine
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    detach: false
    commands:
      - sed -i -e "s/127.0.0.1/k3s/g" /k3s-kubeconfig/kubeconfig.yaml

  - name: test-kubernetes
    image: ruanbekker/kubectl
    volumes:
      - name: k3s-kubeconfig
        path: /tmp
    environment:
      KUBECONFIG: /tmp/kubeconfig.yaml
    commands:
      - kubectl get nodes -o wide

services:
  - name: k3s
    image: rancher/k3s:v0.9.1
    privileged: true
    command:
      - server
    environment:
      K3S_KUBECONFIG_OUTPUT: /k3s-kubeconfig/kubeconfig.yaml
      K3S_KUBECONFIG_MODE: 777
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    ports:
      - 6443

volumes:
- name: k3s-kubeconfig
  temp: {}

---
kind: pipeline
type: docker
name: staging

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-k3s
    image: ruanbekker/build-tools
    commands:
      - sleep 30

  - name: prepare-k3s-kubeconfig
    image: alpine
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    detach: false
    commands:
      - sed -i -e "s/127.0.0.1/k3s/g" /k3s-kubeconfig/kubeconfig.yaml

  - name: test-kubernetes
    image: ruanbekker/kubectl
    volumes:
      - name: k3s-kubeconfig
        path: /tmp
    environment:
      KUBECONFIG: /tmp/kubeconfig.yaml
    commands:
      - kubectl get nodes -o wide


services:
  - name: k3s
    image: rancher/k3s:v0.9.1
    privileged: true
    command:
      - server
    environment:
      K3S_KUBECONFIG_OUTPUT: /k3s-kubeconfig/kubeconfig.yaml
      K3S_KUBECONFIG_MODE: 777
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    ports:
      - 6443

volumes:
- name: k3s-kubeconfig
  temp: {}

depends_on:
- dev

In this pipeline you can see that the staging pipeline depends on dev, so the dev pipeline runs first, creating the k3s service container; the first step just sleeps for 30 seconds to allow it to boot.

Then I have defined a volume that persists for the duration of the build, which we use to dump our kubeconfig file and update the hostname of our kubernetes endpoint. Once that is done, our last step points the KUBECONFIG environment variable at that file and uses kubectl to interact with kubernetes.
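
The prepare-k3s-kubeconfig step is just a sed substitution; outside the pipeline, the rewrite looks like this (the file path and single-line file are illustrative — the real kubeconfig has more content around the server line):

```shell
# k3s writes the API server address as 127.0.0.1; inside the build
# network the service is reachable by its service name "k3s" instead
echo '    server: https://127.0.0.1:6443' > /tmp/kubeconfig.yaml
sed -i -e "s/127.0.0.1/k3s/g" /tmp/kubeconfig.yaml
cat /tmp/kubeconfig.yaml
```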

Once our dev pipeline has finished, our staging pipeline will start.

Activate the Repo in Drone

Head over to drone on port 80 and activate the newly created git repo (and make sure that you select “Trusted”), and you will see that the activity feed is empty:

image

Commit a dummy file to git and you should see your pipeline being triggered:

image

Once your pipeline has finished and everything succeeded, you should see the output of your nodes in your kubernetes service container:

image

As I mentioned earlier, we are not doing anything special but service containers allows us to do some awesome things.

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker
