Ruan Bekker's Blog

From a Curious mind to Posts on Github

Nginx Basic Authentication With Source IP Whitelisting

A quick post on how to set up HTTP Basic Authentication and whitelist IP-based sources so that they do not get prompted for authentication.

This could be useful for systems interacting with Nginx, so that they don’t have to provide authentication.

Dependencies:

Install nginx and the package required to create the auth file:

$ apt install nginx apache2-utils -y

Create the Password file:

$ htpasswd -c /etc/nginx/secrets admin

Configuration:

Create the site config:

$ rm -rf /etc/nginx/conf.d/*.conf
$ vim /etc/nginx/conf.d/default.conf
server {
    listen       80;
    server_name  localhost;

    location / {
        satisfy any;
        allow 127.0.0.1;
        deny all;

        auth_basic "restricted";
        auth_basic_user_file /etc/nginx/secrets;
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
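
Before reloading, you can validate the configuration syntax first:

$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful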

Reload the Changes:

$ nginx -s reload

Testing:

Testing from our Whitelisted location (localhost):

$ curl -i http://127.0.0.1
HTTP/1.1 200 OK

Testing from a remote location that is not whitelisted, first without and then with credentials:

$ curl -i http://localhost
HTTP/1.1 401 Unauthorized

$ curl -i http://admin:password@localhost
HTTP/1.1 200 OK

Populate Environment Variables From Docker Secrets With a Flask Demo App

In this post we will create a basic Python Flask web app on Docker Swarm, reading the Flask host and Flask port from environment variables which are populated from Docker Secrets by a small Python script.

Our Directory Setup:

This can be retrieved from github.com/ruanbekker/docker-swarm-apps/tool-secrets-env-exporter, but I will place the code here as well.

Dockerfile:
FROM alpine:edge
RUN apk add --no-cache python2 py2-pip && pip install flask
ADD exporter.py /exporter.py
ADD boot.sh /boot.sh
ADD app.py /app.py
CMD ["/bin/sh", "/boot.sh"]
exporter.py
import os
from glob import glob

for var in glob('/run/secrets/*'):
    k=var.split('/')[-1]
    v=open(var).read().rstrip('\n')
    os.environ[k] = v
    print("export {key}={value}".format(key=k,value=v))
app.py
import os
from flask import Flask

flask_host = str(os.environ['flask_host'])
flask_port = int(os.environ['flask_port'])

app = Flask(__name__)

@app.route('/')
def index():
    return 'ok\n'

if __name__ == '__main__':
    app.run(host=flask_host, port=flask_port)
boot.sh
#!/bin/sh
set -e
eval $(python /exporter.py)
python /app.py

Flow Information:

The exporter script reads all the secrets that are mounted in the container, formats each secret into a key/value pair, and exports them as environment variables to the current shell, from where they are read by the Flask application.
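
As a quick sanity check, you can mock a secret as a plain file and run the exporter locally (this assumes python is available on the host; in the service this all happens inside the container):

$ mkdir -p /run/secrets
$ echo 5001 > /run/secrets/flask_port
$ python exporter.py
export flask_port=5001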

Usage:

Create Docker Secrets:

$ echo 5001 | docker secret create flask_port -
$ echo 0.0.0.0 | docker secret create flask_host -

Build and Push the Image:

$ docker build -t registry.gitlab.com/<user>/<repo>/<image>:<tag> .
$ docker push registry.gitlab.com/<user>/<repo>/<image>:<tag>

Create the Service, and specify the secrets that we created earlier:

$ docker service create --name webapp \
--secret source=flask_host,target=flask_host \
--secret source=flask_port,target=flask_port \
registry.gitlab.com/<user>/<repo>/<image>:<tag>

Exec into the container and list the path where the secrets get populated:

$ ls /run/secrets/
flask_host  flask_port
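
The secrets are mounted as plain files inside the container, so you can inspect their values directly:

$ cat /run/secrets/flask_port
5001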

Run netstat to see that the app is listening on the port that came from the created secret:

$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5001            0.0.0.0:*               LISTEN      7/python

Do a GET request on the Flask Application:

$ curl http://0.0.0.0:5001/
ok
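
To clean up afterwards, remove the service first, as a secret can only be deleted once no running service references it:

$ docker service rm webapp
$ docker secret rm flask_host flask_port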

Send SMS Messages With Python and Twilio via Their API

This post will guide you through the steps on how to send SMS messages with Python and Twilio. We will use the talaikis.com API to get a random quote that we will include in the body of the SMS.

Sign up for a Trial Account:

Sign up for a trial account at Twilio, then create a number, which I will refer to as the sender number, and take note of your account id and token.

Create the Config:

Create the config that will hold the account id, token, sender number and recipient number:

config.py
secrets = {
    'account': 'xxxxxxxx',
    'token': 'xxxxxxx',
    'sender': '+1234567890',
    'receiver': '+0987654321'
}

Create the Client:

We will get a random quote via talaikis.com’s API which we will be using for the body of our text message, and then use twilio’s API to send the text message:

sms_client.py
from config import secrets
from twilio.rest import Client
import requests

twilio_acountid = secrets['account']
twilio_token = secrets['token']
twilio_receiver = secrets['receiver']
twilio_sender = secrets['sender']

quote_response = requests.get('https://talaikis.com/api/quotes/random').json()

client = Client(
    twilio_acountid,
    twilio_token
)

message = client.messages.create(
    to=twilio_receiver,
    from_=twilio_sender,
    body=quote_response['quote']
)
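
Install the dependencies from PyPI, then run the client:

$ pip install twilio requests
$ python sms_client.py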

Message Preview:

Within a couple of seconds, the message should arrive on the recipient's phone.

For more info, have a look at their docs: https://www.twilio.com/docs/

Golang: Reading From Files and Writing to Disk With Arguments

In our previous post we wrote a basic Golang app that reads the contents of a file and writes it back to disk, but in a static way, as we defined the source and destination filenames in the code.

Today we will use arguments to specify what the source and destination filenames should be, instead of hardcoding them.

Our Golang Application:

We will use if statements to determine whether the number of arguments provided is as expected; if not, a usage string is printed to stdout. Then we loop through the list of arguments to determine the values for our source and destination files.

Once it completes, it prints out the choice of filenames that was used:

app.go
package main

import (
    "io/ioutil"
    "os"
    "fmt"
)

var (
    input_filename string
    output_filename string
)

func main() {

    if len(os.Args) < 5 {
        fmt.Printf("Usage: (-i/--input) 'input_filename' (-o/--output) 'output_filename' \n")
        os.Exit(0)
    }

    for i, arg := range os.Args {
        if arg == "-i" || arg == "--input" {
            input_filename = os.Args[i+1]
        }
        if arg == "-o" || arg == "--output" {
            output_filename = os.Args[i+1]
        }
    }

    input_file_content, error := ioutil.ReadFile(input_filename)

    if error != nil {
        panic(error)
    }

    fmt.Println("File used for reading:", input_filename)

    ioutil.WriteFile(output_filename, input_file_content, 0644)
    fmt.Println("File used for writing:", output_filename)
}

Build your application:

$ go build app.go

Run your application with no additional arguments to determine the expected behaviour:

$ ./app
Usage: (-i/--input) 'input_filename' (-o/--output) 'output_filename'

It works as expected, now create a source file, then run the application:

$ echo $RANDOM > myfile.txt

Run the application, and in this run, we will set the destination file as newfile.txt:

$ ./app -i myfile.txt -o newfile.txt
File used for reading: myfile.txt
File used for writing: newfile.txt

Checking out the new file:

$ cat newfile.txt
8568
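
The long-form flags that we cater for in the argument loop work as well:

$ ./app --input myfile.txt --output another-file.txt
File used for reading: myfile.txt
File used for writing: another-file.txt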

Golang: Reading From Files and Writing to Disk With Golang


Today we will create a very basic application to read content from a file, and write the content from the file back to disk, but to another filename.

Basically, doing a copy of the file to another filename.

Golang Environment: Golang Docker Image

Dropping into a Golang Environment using Docker:

$ docker run -it golang:alpine sh

Our Golang Application

After we are in our container, let's write our app:

app.go
package main

import (
    "io/ioutil"
)

func main() {

    content, error := ioutil.ReadFile("source-data.txt")
    if error != nil {
        panic(error)
    }

    error = ioutil.WriteFile("destination-data.txt", content, 0644)
    if error != nil {
        panic(error)
    }
}

Building our application to a binary:

$ go build app.go

Creating our source-data.txt :

$ echo "foo" > source-data.txt

Running the Golang App:

When we run this app, it will read the content of source-data.txt and write it to destination-data.txt:

$ ./app

We can see that the file has been written to disk:

$ ls | grep data
destination-data.txt
source-data.txt

To make sure the data is the same, we can run an md5sum hash on both files:

$ md5sum source-data.txt
d3b07384d113edec49eaa6238ad5ff00  source-data.txt

$ md5sum destination-data.txt
d3b07384d113edec49eaa6238ad5ff00  destination-data.txt

Next:

This was a very static way of doing it, as you need to hardcode the filenames. In the next post I will show how to use arguments to make it more dynamic.

Setup a KVM Hypervisor on Ubuntu to Host Virtual Machines

Today we will set up a KVM (Kernel-based Virtual Machine) hypervisor, where we can host virtual machines. In order to do so, your host needs to support hardware virtualization.

What we will be doing today:

  • Check if your host supports Hardware Virtualization
  • Setup the KVM Hypervisor
  • Setup an Alpine VM

Check for Hardware Virtualization Support:

We will install the package required to do the check:

$ sudo apt update && sudo apt install cpu-checker -y

Once that is installed, run kvm-ok and if it's supported, your output should look something like this:

$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

Installing KVM

Update your System and get the Packages required to Setup KVM:

$ sudo apt update && sudo apt upgrade -y
$ sudo apt install bridge-utils qemu-kvm libvirt-bin virtinst -y

Add your user to the libvirtd group:

$ sudo usermod -aG libvirtd $USER

Check that the libvirtd service is running:

$ sudo systemctl is-active libvirtd
active

You will also find that a new bridge interface has been configured, called virbr0 in my case.
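
You can confirm the new bridge interface with:

$ ip addr show virbr0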

Provision the Alpine VM and Setup OpenSSH:

Get the ISO:

$ wget http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso
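
Optionally verify the download, as Alpine publishes a checksum file alongside each ISO:

$ wget http://dl-cdn.alpinelinux.org/alpine/v3.7/releases/x86_64/alpine-virt-3.7.0-x86_64.iso.sha256
$ sha256sum -c alpine-virt-3.7.0-x86_64.iso.sha256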

Provision the VM:

$ virt-install \
--name alpine1 \
--ram 256 \
--disk path=/var/lib/libvirt/images/alpine1.img,size=8 \
--vcpus 1 \
--os-type linux \
--os-variant generic \
--network bridge:virbr0,model=virtio \
--graphics none \
--console pty,target_type=serial \
--cdrom ./alpine-virt-3.7.0-x86_64.iso

After this, you will be dropped into the console:

Starting install...
Allocating 'alpine1.img'                                                                                                           |   8 GB  00:00:01
Creating domain...                                                                                                                 |    0 B  00:00:00
Connected to domain alpine1
Escape character is ^]

ISOLINUX 6.04 6.04-pre1  Copyright (C) 1994-2015 H. Peter Anvin et al
boot:

   OpenRC 0.24.1.a941ee4a0b is starting up Linux 4.9.65-1-virthardened (x86_64)

Welcome to Alpine Linux 3.7
Kernel 4.9.65-1-virthardened on an x86_64 (/dev/ttyS0)

localhost login:

Log in with the root user and no password, then set up the VM by running setup-alpine:

localhost login: root
Welcome to Alpine!

localhost:~# setup-alpine

After completing the prompts, reboot the VM by running reboot; you will then be dropped out of the console. Check the status of the guest:

$ virsh list
 Id    Name                           State
----------------------------------------------------
 2     alpine1                        running

As we can see our guest is running. Let's console to our guest and log in with the root user and the password that you provided during the setup phase:

$ virsh console 2
Connected to domain alpine1
Escape character is ^]

alpine1 login: root
Password:
Welcome to Alpine!

Setup OpenSSH so that we can SSH to our guest over the network:

$ apk update
$ apk add openssh

Configure SSH to accept root password logins. This is not advisable for production environments, where we would rather look at key-based authentication, but for testing this is okay.

$ sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
$ /etc/init.d/sshd restart

Get the IP Address:

$ ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:D0:48:0C
          inet addr:192.168.122.176  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fed0:480c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:55 errors:0 dropped:28 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4545 (4.4 KiB)  TX bytes:3345 (3.2 KiB)

Exit the guest by running exit, then press Ctrl + ] to leave the console session.

Now SSH to your Alpine VM:

$ ssh root@192.168.122.176
root@192.168.122.176's password:
Welcome to Alpine!

Some Useful Commands:

List Running VMs:

$ virsh list
 Id    Name                           State
----------------------------------------------------
 3     alpine1                        running

Shutdown a VM:

$ virsh shutdown alpine1
Domain alpine1 is being shutdown

List all VMs:

$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     alpine1                        shut off
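
Start a VM:

$ virsh start alpine1
Domain alpine1 started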

Delete a VM:

$ virsh shutdown alpine1 #or to force shutdown:
$ virsh destroy alpine1
$ virsh undefine alpine1

Any future KVM posts will be tagged under KVM and Alpine posts will be available under the Alpine tag.

Guide to Setup Docker Convoy Volume Driver for Docker Swarm With NFS

In this post we will setup Rancher’s Convoy Storage Plugin with NFS, to provide data persistence in Docker Swarm.

The Overview:

This essentially means that we will have an NFS volume; when the service gets created on Docker Swarm, the cluster creates these volumes with path mapping, so when a container gets spawned, restarted, scaled etc., the container that starts on the new node will be aware of the volume and will find the data it's expecting.

It's also good to note that our NFS server will be a single point of failure, so it's worth looking at a distributed volume like GlusterFS, XtreemFS, Ceph, etc.

  • NFS Server (10.8.133.83)
  • Rancher Convoy Plugin on Each Docker Node in the Swarm (10.8.133.83, 10.8.166.19, 10.8.142.195)

Setup NFS:

Setup the NFS Server

Update:

In order for the containers to be able to change permissions, you need to set (rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)

$ sudo apt-get install nfs-kernel-server nfs-common -y
$ mkdir /vol
$ chown -R nobody:nogroup /vol
$ echo '/vol 10.8.133.83(rw,sync,no_subtree_check) 10.8.166.19(rw,sync,no_subtree_check) 10.8.142.195(rw,sync,no_subtree_check)' >> /etc/exports
$ sudo systemctl restart nfs-kernel-server
$ sudo systemctl enable nfs-kernel-server

Setup the NFS Clients on each Docker Node:

$ sudo apt-get install nfs-common -y
$ mount 10.8.133.83:/vol /mnt
$ umount /mnt
$ df -h

If you can see that the volume is mounted, unmount it and add it to the fstab so that the volume gets mounted on boot:

$ sudo bash -c "echo '10.8.133.83:/vol /mnt nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0' >> /etc/fstab"
$ sudo mount -a

Install Rancher Convoy Plugin:

The Plugin needs to be installed on each docker node that will be part of the swarm:

$ cd /tmp
$ wget https://github.com/rancher/convoy/releases/download/v0.5.0/convoy.tar.gz
$ tar xzf convoy.tar.gz
$ sudo cp convoy/convoy convoy/convoy-pdata_tools /usr/local/bin/
$ sudo mkdir -p /etc/docker/plugins/
$ sudo bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'

Create the init script at /etc/init.d/convoy:

Thanks to deviantony

#!/bin/sh
### BEGIN INIT INFO
# Provides:
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

dir="/usr/local/bin"
cmd="convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/docker/volumes"
user="root"
name="convoy"

pid_file="/var/run/$name.pid"
stdout_log="/var/log/$name.log"
stderr_log="/var/log/$name.err"

get_pid() {
    cat "$pid_file"
}

is_running() {
    [ -f "$pid_file" ] && ps `get_pid` > /dev/null 2>&1
}

case "$1" in
    start)
    if is_running; then
        echo "Already started"
    else
        echo "Starting $name"
        cd "$dir"
        if [ -z "$user" ]; then
            sudo $cmd >> "$stdout_log" 2>> "$stderr_log" &
        else
            sudo -u "$user" $cmd >> "$stdout_log" 2>> "$stderr_log" &
        fi
        echo $! > "$pid_file"
        if ! is_running; then
            echo "Unable to start, see $stdout_log and $stderr_log"
            exit 1
        fi
    fi
    ;;
    stop)
    if is_running; then
        echo -n "Stopping $name.."
        kill `get_pid`
        for i in {1..10}
        do
            if ! is_running; then
                break
            fi

            echo -n "."
            sleep 1
        done
        echo

        if is_running; then
            echo "Not stopped; may still be shutting down or shutdown may have failed"
            exit 1
        else
            echo "Stopped"
            if [ -f "$pid_file" ]; then
                rm "$pid_file"
            fi
        fi
    else
        echo "Not running"
    fi
    ;;
    restart)
    $0 stop
    if is_running; then
        echo "Unable to stop, will not attempt to start"
        exit 1
    fi
    $0 start
    ;;
    status)
    if is_running; then
        echo "Running"
    else
        echo "Stopped"
        exit 1
    fi
    ;;
    *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac

exit 0

Make the script executable:

$ chmod +x /etc/init.d/convoy

Enable the service on boot:

$ sudo systemctl enable convoy

Start the service:

$ sudo /etc/init.d/convoy start

This should be done on all the nodes.
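
You can verify that the daemon is running with the init script's status action:

$ sudo /etc/init.d/convoy status
Running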

Externally Managed Convoy Volumes

One thing to note is that after you delete a volume, you will still need to delete the directory from the path where it's hosted, as the application does not do that by itself.

Creating the Volume Beforehand:

$ convoy create test1
test1

$ docker volume ls
DRIVER              VOLUME NAME
convoy              test1

$ cat /mnt/docker/volumes/config/vfs_volume_test1.json
{"Name":"test1","Size":0,"Path":"/mnt/docker/volumes/test1","MountPoint":"","PrepareForVM":false,"CreatedTime":"Mon Feb 05 13:07:05 +0000 2018","Snapshots":{}}

Viewing the volume from another node:

$ docker volume ls
DRIVER              VOLUME NAME
convoy              test1

Creating a Test Service:

Create a test service to test the data persistence, our docker-compose.yml:

version: '3.4'

volumes:
  test1:
    external: true

networks:
  appnet:
    external: true

services:
  test:
    image: alpine:edge
    command: sh -c "ping 127.0.0.1"
    volumes:
      - test1:/data
    networks:
      - appnet

Creating the Overlay Network and Deploying the Stack:

$ docker network create -d overlay appnet
$ docker stack deploy -c docker-compose.yml apps
Creating service apps_test

Write data to the volume in the container:

$ docker exec -it apps_test.1.iojo7fpw8jirqjs3iu8qr7qpe sh
/ # echo "ok" > /data/file.txt
/ # cat /data/file.txt
ok

Scale the service:

$ docker service scale apps_test=2
apps_test scaled to 2

Inspect to see if the new replica is on another node:

$ docker service ps apps_test
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE               ERROR                         PORTS
myrq2pc3z26z        apps_test.1         alpine:edge         scw-docker-1        Running             Running 45 seconds ago
ny8t97l2q00c         \_ apps_test.1     alpine:edge         scw-docker-1        Shutdown            Failed 51 seconds ago       "task: non-zero exit (137)"
iojo7fpw8jir         \_ apps_test.1     alpine:edge         scw-docker-1        Shutdown            Failed about a minute ago   "task: non-zero exit (137)"
tt0nuusvgeki        apps_test.2         alpine:edge         scw-docker-2        Running             Running 15 seconds ago

Logon to the new container and test if the data is persisted:

$ docker exec -it apps_test.2.tt0nuusvgekirw1c5myu720ga sh
/ # cat /data/file.txt
ok

Delete the stack, redeploy it, and have a look at the data we created earlier; you will notice the data is persisted:

$ docker stack rm apps
$ docker stack deploy -c docker-compose.yml apps
$ docker exec -it apps_test.1.la4w2sbuu8cmv6xamwxl7n0ip cat /data/file.txt
ok
$ docker stack rm apps

Create Volume via Compose:

You can also create the volume at service/stack creation level, so you don't need to create the volume beforehand. The compose file:

version: '3.4'

volumes:
  test2:
    driver: convoy
    driver_opts:
      size: 10

networks:
  appnet:
    external: true

services:
  test:
    image: alpine:edge
    command: sh -c "ping 127.0.0.1"
    volumes:
      - test2:/data
    networks:
      - appnet

Deploy the Stack:

$ docker stack deploy -c docker-compose-new.yml apps
Creating service apps_test

List the volumes and you will notice that the volume was created:

$ docker volume ls
DRIVER              VOLUME NAME
convoy              apps_test2
convoy              test1

Let's inspect the volume to see more details about it:

$ docker volume inspect apps_test2
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "convoy",
        "Labels": {
            "com.docker.stack.namespace": "apps"
        },
        "Mountpoint": "/mnt/docker/volumes/apps_test2",
        "Name": "apps_test2",
        "Options": {
            "size": "10"
        },
        "Scope": "local"
    }
]

As mentioned earlier, if you delete the volume, you need to delete the data directories as well:

$ docker volume rm test1
test1

$ ls /mnt/docker/volumes/
apps_test2  config  test1

$ rm -rf /mnt/docker/volumes/test1

More info about the project: https://github.com/rancher/convoy

Setup a NFS Server on Ubuntu

A quick post on how to set up an NFS server on Ubuntu and how to set up the client to interact with it.

Setup the Dependencies:

$ apt update && sudo apt upgrade -y
$ sudo apt-get install nfs-kernel-server nfs-common -y

Create the Directory for NFS and set permissions:

$ mkdir /vol
$ chown -R nobody:nogroup /vol

Allow the Clients:

In the exports file, we need to specify the clients that we would like to allow:

  • rw: Allows Client R/W Access to the Volume.
  • sync: This option forces NFS to write changes to disk before replying, which is more stable and consistent, but does reduce the speed of file operations.
  • no_subtree_check: This prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
$ echo '/vol 10.8.133.83(rw,sync,no_subtree_check) 10.8.166.19(rw,sync,no_subtree_check) 10.8.142.195(rw,sync,no_subtree_check)' >> /etc/exports
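
If the NFS server is already running, the new exports can also be applied without a restart:

$ sudo exportfs -ra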

Start the NFS Server:

Restart the service and enable the service on boot:

$ sudo systemctl restart nfs-kernel-server
$ sudo systemctl enable nfs-kernel-server

Client Side:

We will mount the NFS volume on our client's /mnt directory.

Install the dependencies:

$ sudo apt-get install nfs-common -y

Test if we can mount the volume, then unmount it, as we will set the config in our fstab:

$ sudo mount 10.8.133.83:/vol /mnt
$ sudo umount /mnt
$ df -h
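
You can also list the exports offered by the server (showmount ships with nfs-common):

$ showmount -e 10.8.133.83
Export list for 10.8.133.83:
/vol 10.8.133.83,10.8.166.19,10.8.142.195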

Set the config in your fstab, then mount it from there:

$ sudo bash -c "echo '10.8.133.83:/vol /mnt nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0' >> /etc/fstab"
$ sudo mount -a
$ df -h

Now you should be able to write to your NFS volume from your client.


Setup a Site to Site IPsec VPN With Strongswan and PreShared Key Authentication

Today we will set up a site-to-site IPsec VPN with Strongswan, configured with pre-shared key authentication.

After our tunnels are established, we will be able to reach the private IPs over the VPN tunnels.


Get the Dependencies:

Update your repository indexes and install strongswan:

$ apt update && sudo apt upgrade -y
$ apt install strongswan -y

Set the following kernel parameters:

$ cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1 
net.ipv4.conf.all.accept_redirects = 0 
net.ipv4.conf.all.send_redirects = 0
EOF

$ sysctl -p /etc/sysctl.conf
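
Confirm that forwarding is now enabled:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1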

Generate Preshared Key:

We will need a preshared key that both servers will use:

$ openssl rand -base64 64
87zRQqylaoeF5I8o4lRhwvmUzf+pYdDpsCOlesIeFA/2xrtxKXJTbCPZgqplnXgPX5uprL+aRgxD8ua7MmdWaQ

Details of our 2 Sites:

Site A:

Location: Paris, France
External IP: 51.15.139.201
Internal IP: 10.10.27.1/24

Site B:

Location: Amsterdam, Netherlands
External IP: 51.15.44.48
Internal IP: 10.9.141.1/24

Configure Site A:

We will setup our VPN Gateway in Site A (Paris), first to setup the /etc/ipsec.secrets file:

$ cat /etc/ipsec.secrets
# source      destination
51.15.139.201 51.15.44.48 : PSK "87zRQqylaoeF5I8o4lRhwvmUzf+pYdDpsCOlesIeFA/2xrtxKXJTbCPZgqplnXgPX5uprL+aRgxD8ua7MmdWaQ"

Now to setup our VPN configuration in /etc/ipsec.conf:

$ cat /etc/ipsec.conf
# basic configuration
config setup
        charondebug="all"
        uniqueids=yes
        strictcrlpolicy=no

# connection to amsterdam datacenter
conn paris-to-amsterdam
  authby=secret
  left=%defaultroute
  leftid=51.15.139.201
  leftsubnet=10.10.27.1/24
  right=51.15.44.48
  rightsubnet=10.9.141.1/24
  ike=aes256-sha2_256-modp1024!
  esp=aes256-sha2_256!
  keyingtries=0
  ikelifetime=1h
  lifetime=8h
  dpddelay=30
  dpdtimeout=120
  dpdaction=restart
  auto=start

Firewall Rules:

$ sudo iptables -t nat -A POSTROUTING -s 10.9.141.0/24 -d 10.10.27.0/24 -j MASQUERADE

Configure Site B:

We will setup our VPN Gateway in Site B (Amsterdam), setup the /etc/ipsec.secrets file:

$ cat /etc/ipsec.secrets
51.15.44.48 51.15.139.201 : PSK "87zRQqylaoeF5I8o4lRhwvmUzf+pYdDpsCOlesIeFA/2xrtxKXJTbCPZgqplnXgPX5uprL+aRgxD8ua7MmdWaQ"

Next to setup our VPN Configuration:

$ cat /etc/ipsec.conf
# basic configuration
config setup
        charondebug="all"
        uniqueids=yes
        strictcrlpolicy=no

# connection to paris datacenter
conn amsterdam-to-paris
  authby=secret
  left=%defaultroute
  leftid=51.15.44.48
  leftsubnet=10.9.141.1/24
  right=51.15.139.201
  rightsubnet=10.10.27.1/24
  ike=aes256-sha2_256-modp1024!
  esp=aes256-sha2_256!
  keyingtries=0
  ikelifetime=1h
  lifetime=8h
  dpddelay=30
  dpdtimeout=120
  dpdaction=restart
  auto=start

Firewall Rules:

$ sudo iptables -t nat -A POSTROUTING -s 10.10.27.0/24 -d 10.9.141.0/24 -j MASQUERADE

Start the VPN:

Start the VPN on both ends:

$ sudo ipsec restart

Get the status of the tunnel, in this case we are logged onto our Site A (Paris) Server:

$ sudo ipsec status
Security Associations (1 up, 0 connecting):
paris-to-amsterdam[2]: ESTABLISHED 14 minutes ago, 10.10.27.161[51.15.139.201]...51.15.44.48[51.15.44.48]
paris-to-amsterdam{1}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: c8c868ee_i c9d58dbd_o
paris-to-amsterdam{1}:   10.10.27.1/24 === 10.9.141.1/24

Test if we can see the remote end on its private range:

$ ping 10.9.141.97
PING 10.9.141.97 (10.9.141.97) 56(84) bytes of data.
64 bytes from 10.9.141.97: icmp_seq=1 ttl=64 time=14.6 ms

Set the service to start on boot:

$ sudo systemctl enable strongswan

Then your VPN should be setup correctly.

Other useful commands:

Start / Stop / Status:

$ sudo ipsec up connection-name
$ sudo ipsec down connection-name

$ sudo ipsec restart
$ sudo ipsec status
$ sudo ipsec statusall

Get the Policies and States of the IPsec Tunnel:

$ sudo ip xfrm state
$ sudo ip xfrm policy

Reload the secrets, while the service is running:

$ sudo ipsec rereadsecrets

Check if traffic flows through the tunnel:

$ sudo tcpdump esp

Adding more connections to your config:

If you have to add another site to your config, the ipsec.secrets will look like this:

$ cat /etc/ipsec.secrets
51.15.139.201 51.15.44.48 : PSK "87zRQqylaoeF5I8o4lRhwvmUzf+pYdDpsCOlesIeFA/2xrtxKXJTbCPZgqplnXgPX5uprL+aRgxD8ua7MmdWaQ"
51.15.139.201 51.15.87.41  : PSK "87zRQqylaoeF5I8o4lRhwvmUzf+pYdDpsCOlesIeFA/2xrtxKXJTbCPZgqplnXgPX5uprL+aRgxD8ua7MmdWaQ"

And the ipsec.conf:

$ cat /etc/ipsec.conf
# basic configuration
config setup
        charondebug="all"
        uniqueids=yes
        strictcrlpolicy=no

# connection to amsterdam datacenter
conn paris-to-amsterdam
  authby=secret
  left=%defaultroute
  leftid=51.15.139.201
  leftsubnet=10.10.27.1/24
  right=51.15.44.48
  rightsubnet=10.9.141.1/24
  ike=aes256-sha2_256-modp1024!
  esp=aes256-sha2_256!
  keyingtries=0
  ikelifetime=1h
  lifetime=8h
  dpddelay=30
  dpdtimeout=120
  dpdaction=restart
  auto=start

# connection to frankfurt datacenter
conn paris-to-frankfurt
  authby=secret
  left=%defaultroute
  leftid=51.15.139.201
  leftsubnet=10.10.27.1/24
  right=51.15.87.41
  rightsubnet=10.9.137.1/24
  ike=aes256-sha2_256-modp1024!
  esp=aes256-sha2_256!
  keyingtries=0
  ikelifetime=1h
  lifetime=8h
  dpddelay=30
  dpdtimeout=120
  dpdaction=restart
  auto=start

Just remember to apply the corresponding config on the Frankfurt VPN gateway. The status output will then look like the following:

$ sudo ipsec status
Security Associations (2 up, 0 connecting):
paris-to-frankfurt[2]: ESTABLISHED 102 seconds ago, 10.10.27.161[51.15.139.201]...51.15.87.41[51.15.87.41]
paris-to-frankfurt{1}:  INSTALLED, TUNNEL, reqid 2, ESP in UDP SPIs: cbc62a1f_i c95b8f78_o
paris-to-frankfurt{1}:   10.10.27.1/24 === 10.9.137.1/24
paris-to-amsterdam[1]: ESTABLISHED 102 seconds ago, 10.10.27.161[51.15.139.201]...51.15.44.48[51.15.44.48]
paris-to-amsterdam{2}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: c7b36756_i cc54053c_o
paris-to-amsterdam{2}:   10.10.27.1/24 === 10.9.141.1/24

Authenticate to Your AWS MySQL RDS Instance via IAM

On Amazon Web Services with RDS for MySQL or Aurora with MySQL compatibility, you can authenticate to your database instance or cluster using IAM database authentication. The benefit of using this authentication method is that you don't need to use a password when you connect to your database; you use an authentication token instead.

Update: Amazon supports IAM authentication for PostgreSQL as well.

  • More info from their docs

What we will be doing today:

We will do the following:

  • Create RDS MySQL Database
  • Create IAM Policy that allows a user to connect via a MySQL User
  • Create IAM User and associate IAM Policy
  • Configure the new user credentials in the awscli credential provider
  • Bash script to generate the auth token and authenticate to RDS via Token instead of password

Create the RDS Database:

In this tutorial I will spin up a db.t2.micro in eu-west-1 with IAMDatabaseAuthentication Enabled:

$ aws rds create-db-instance \
    --db-instance-identifier rbtest \
    --db-instance-class db.t2.micro \
    --engine MySQL \
    --allocated-storage 20 \
    --master-username dbadmin \
    --master-user-password mysuperpassword \
    --region eu-west-1 \
    --enable-iam-database-authentication
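
Instead of polling manually, the awscli can wait for the instance to become available:

$ aws rds wait db-instance-available --db-instance-identifier rbtest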

Once the instance is available, get your database endpoint:

$ aws rds describe-db-instances --db-instance-identifier rbtest | jq -r ".DBInstances[].Endpoint.Address"
rbtest.abcdefgh.eu-west-1.rds.amazonaws.com

If you need to have SSL Enabled, get the bundled certificate as described in the Using SSL with RDS docs.

$ wget -O /tmp/rds.pem https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

Create the Database Account:

Create the database account on the MySQL RDS instance as described in their docs. IAM handles the authentication via the AWSAuthenticationPlugin, therefore we do not need to set a password on the database.

Connect to the database:

$ mysql -u dbadmin -h rbtest.abcdefgh.eu-west-1.rds.amazonaws.com -p

Create the database user:

mysql> CREATE USER mydbaccount IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
mysql> FLUSH PRIVILEGES;

Creating the Databases and Granting Permissions:

While you are on the database, create 2 databases (db1 and db2) with some tables, which we will use for our user to have read only access to, and create one database (db3) which the user will not have access to:

mysql> create database db1;
mysql> create database db2;

mysql> use db1;
mysql> create table foo (name VARCHAR(20), age INT);
mysql> insert into foo values('ruan', 31);
mysql> insert into foo values('james', 32);

mysql> use db2;
mysql> create table foo (location VARCHAR(255));
mysql> insert into foo values('south africa');
mysql> insert into foo values('new zealand');
mysql> insert into foo values('australia');

mysql> grant select on db1.* to 'mydbaccount';
mysql> grant select on db2.* to 'mydbaccount';

mysql> create database db3;
mysql> use db3;
mysql> create table foo (passwords VARCHAR(255));
mysql> insert into foo values('superpassword');
mysql> insert into foo values('sekret');

mysql> flush privileges;

IAM Permissions to allow our user to authenticate to our RDS:

First, we create the user and configure the awscli tools. My default profile has administrative access, so we will create our db user in its own profile and configure the awscli tools with its new access key and secret key:

$ aws configure --profile dbuser
AWS Access Key ID [None]: xxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxx
Default region name [None]: eu-west-1
Default output format [None]: json

Now we need to create an IAM policy to allow our user to authenticate to our RDS instance via IAM, which we will associate with our user's account.

We need the AWS Account ID, the Database Identifier Resource ID, and the User Account that we created on MySQL.

To get the DB ResourceId:

$ aws rds describe-db-instances --db-instance-identifier rbtest | jq -r ".DBInstances[].DbiResourceId"
db-123456789ABCDEFGH

Create the IAM Policy and associate it with the new user account:

{
   "Version": "2012-10-17",
   "Statement": [
      {
           "Sid": "RDSIAMAUTH",
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:eu-west-1:123456789012:dbuser:db-123456789ABCDEFGH/mydbaccount"
         ]
      }
   ]
}
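
Associate the policy with the user as an inline policy; here I assume the policy document was saved as rds-iam-policy.json and that the IAM user is named dbuser:

$ aws iam put-user-policy --user-name dbuser --policy-name rds-iam-auth --policy-document file://rds-iam-policy.json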

The following bash script will get the authentication token, which will be used as the password. Note that the authentication token expires 15 minutes after creation. See the docs for more information.

#!/bin/bash
db_endpoint="rbtest.abcdefgh.eu-west-1.rds.amazonaws.com"
local_mysql_user="mydbaccount"
auth_token="$(aws --profile dbuser rds generate-db-auth-token --hostname ${db_endpoint} --port 3306 --username ${local_mysql_user} )"
mysql --host=${db_endpoint} --port=3306 --enable-cleartext-plugin --user=${local_mysql_user} --password=${auth_token}
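
Save the script as conn-mysql.sh and make it executable:

$ chmod +x conn-mysql.sh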

Testing it out:

Now that our policies are in place, the credentials from the credential provider have been set and our bash script is set up, let's connect to our database:

$ ./conn-mysql.sh

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| db2                |
+--------------------+
3 rows in set (0.16 sec)

mysql> select * from db2.foo;
+--------------+
| location     |
+--------------+
| south africa |
| new zealand  |
| australia    |
+--------------+

mysql> select * from db3.foo;
ERROR 1044 (42000): Access denied for user 'mydbaccount'@'*' to database 'db3'

mysql> create database test123;
ERROR 1044 (42000): Access denied for user 'mydbaccount'@'%' to database 'test123'

After changing the IAM policy to revoke access, the connection is denied:

$ ./conn-mysql.sh
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'mydbaccount'@'10.0.0.10' (using password: YES)

Creating a MySQL Client Wrapper Script:

Using bash we can create a wrapper script so we can connect to our database like the following:

$ mysql-iam prod rbtest.eu-west-1.amazonaws.com mydbaccount
mysql>

Here is the script:

#!/usr/bin/env bash

# Wrapper MySQL Client for IAM Based Authentication for MySQL and Amazon Aurora on RDS
# Read: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html
# Usage: [app] [aws_profile] [rds_endpoint] [rds_mysql_username]

command_exists() {
  type "$1" &> /dev/null ;
}

check_required_parameters() {
  aws_profile="$1"
  rds_hostname="$2"
  rds_username="$3"
  if ! [[ -n "$aws_profile" && -n "$rds_hostname" && -n "$rds_username" ]]
    then
      echo "Error: Missing Parameters"
      echo "Expected: $0 aws_profile_name rds_endpoint_name rds_db_username"
      echo "Usage: $0 prod dbname.eu-west-1.amazonaws.com dba"
      exit 1
  fi
}

get_auth_token() {
  aws_bin=$(which aws | head -1)
  auth_token="$($aws_bin --profile $aws_profile rds generate-db-auth-token --hostname $rds_hostname --port 3306 --username $rds_username )"
}

connect_to_rds() {
  mysql_bin=$(which mysql | head -1)
  ${mysql_bin} --host=${rds_hostname} --port=3306 --enable-cleartext-plugin --user=${rds_username} --password=${auth_token}
}

if [ "$1" == "help" ]
  then
    echo "Help"
    echo "Expected: $0 aws_profile_name rds_endpoint_name rds_db_username"
    echo "Usage: $0 prod dbname.eu-west-1.amazonaws.com dba_user"
    exit 0
fi

if command_exists aws && command_exists mysql
then
  check_required_parameters $1 $2 $3
  get_auth_token
  connect_to_rds
else
  echo "Error: Make sure aws-cli and mysql client is installed"
fi

For more information on this, have a look at the docs.