Ruan Bekker's Blog

From a Curious mind to Posts on Github

Docker Multistage Builds for Hugo


In this tutorial I will demonstrate how to keep your docker container images nice and slim with the use of multistage builds for a hugo documentation project.

Hugo is a static content generator, which essentially means that it renders your markdown files into HTML. Therefore we don't need to include all the content from our project repository, as we only need the static content (html, css, javascript) to reside on our final container image.

What are we doing today

We will use the DOKS Modern Documentation theme for Hugo as our project example, where we will build and run our documentation website on a docker container, but more importantly make use of multistage builds to optimize the size of our container image.

Our Build Strategy

Since hugo is a static content generator, we will use a node container image as our base. We will then build and generate the content using npm run build which will generate the static content to /src/public in our build stage.

Since we then have static content, we can utilize a second stage using a nginx container image with the purpose of a web server to host our static content. We will copy the static content from our build stage into our second stage and place it under our defined path in our nginx config.

This way we only include the required content on our final container image.

Building our Container Image

First clone the docs github repository and change to the directory:

git clone https://github.com/h-enk/doks
cd doks

Now create a Dockerfile in the root path with the following content:

FROM node:16.15.1 as build
WORKDIR /src
ADD . .
RUN npm install
RUN npm run build

FROM  nginx:alpine
LABEL demonstration.by="Ruan Bekker <@ruanbekker>"
COPY  nginx/config/nginx.conf /etc/nginx/nginx.conf
COPY  nginx/config/app.conf /etc/nginx/conf.d/app.conf
COPY  --from=build /src/public /usr/share/nginx/app

As we can see we are copying two nginx config files to our final image, which we will need to create.

Create the nginx config directory:

mkdir -p nginx/config

The content for our main nginx config nginx/config/nginx.conf:

user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    # timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout  25;
    send_timeout 10;

    # buffer size
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 4k;

    # gzip compression
    gzip  on;
    gzip_vary on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
    gzip_disable "MSIE [1-6]\.";

    include /etc/nginx/conf.d/app.conf;
}

In our main nginx config we are including a virtual host config app.conf, which we also need to create locally. The content of nginx/config/app.conf:

server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/app;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}

Now that we have our docker config in place, we can build our container image:

docker build -t ruanbekker/hashnode-docs-blogpost:latest .

Then we can review the size of our container image, which is only 27.4MB, pretty neat right?

docker images --filter reference=ruanbekker/hashnode-docs-blogpost

REPOSITORY                          TAG       IMAGE ID       CREATED          SIZE
ruanbekker/hashnode-docs-blogpost   latest    5b60f30f40e6   21 minutes ago   27.4MB
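To see how much the multistage approach saves us, we could also build only the first stage with --target and compare the sizes (the build-stage tag below is just an example):

docker build --target build -t ruanbekker/hashnode-docs-blogpost:build-stage .
docker images --filter reference=ruanbekker/hashnode-docs-blogpost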

Running our Container

Now that we’ve built our container image, we can run our documentation site, by specifying our host port on the left to map to our container port on the right in 80:80:

docker run -it -p 80:80 ruanbekker/hashnode-docs-blogpost:latest

Provided that nothing else was already listening on port 80 before you ran the previous command, when you head to http://localhost (if you are running this locally), you should see our documentation site up and running:

image
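You can also do a quick check from the terminal with curl, which should return a 200 response if nginx is serving our static content:

curl -I http://localhost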

Thank You

I have published this container image to ruanbekker/hashnode-docs-blogpost.

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Remote Builds With Docker Contexts


Often you want to save some battery life when you are doing docker builds, and leverage a remote host to do the intensive work. We can utilise docker contexts over ssh to do just that.

About

In this tutorial I will show you how to use a remote docker engine to do docker builds, so you still run the docker client locally, but the context of your build will be sent to a remote docker engine via ssh.

We will setup password-less ssh, configure our ssh config, create the remote docker context, then use the remote docker context.

image

Password-less SSH

I will be copying my public key to the remote host:

$ ssh-copy-id ruan@192.168.2.18

Setup my ssh config:

$ cat ~/.ssh/config
Host home-server
    Hostname 192.168.2.18
    User ruan
    IdentityFile ~/.ssh/id_rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Test:

$ ssh home-server whoami
ruan

Docker Context

On the target host (192.168.2.18) we can verify that docker is installed:

$ docker version
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:37 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:46 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

On the client (my laptop in this example), we will create a docker context called “home-server” and point it to our target host:

$ docker context create home-server --docker "host=ssh://home-server"
home-server
Successfully created context "home-server"

Now we can list our contexts:

docker context ls
NAME                TYPE                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                                  ORCHESTRATOR
default *           moby                Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://k3d-master.127.0.0.1.nip.io:6445 (default)   swarm
home-server         moby                                                          ssh://home-server

Using Contexts

We can verify if this works by listing our cached docker images locally and on our remote host:

$ docker --context=default images | wc -l
 16

And listing the remote images by specifying the context:

$ docker --context=home-server images | wc -l
 70

We can set the default context to our target host:

$ docker context use home-server
home-server
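If you want to switch back to your local docker engine at any point, you can simply select the default context again:

$ docker context use default
default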

Running Containers over Contexts

So running containers with remote contexts essentially becomes running containers on remote hosts. In the past, I had to set up an ssh tunnel, point the DOCKER_HOST environment variable to that endpoint, and then run containers on the remote host.

That's a thing of the past: we can just point our docker context at our remote host and run the container. If you haven't set the default context, you can specify it explicitly, so to run a docker container on a remote host with your docker client running locally:

$ docker --context=home-server run -it -p 8002:8080 ruanbekker/hostname
2022/07/14 05:44:04 Server listening on port 8080

Now from our client (laptop), we can test our container on our remote host:

$ curl http://192.168.2.18:8002
Hostname: 8605d292e2b4

The same approach can be used for remote docker builds: you keep your Dockerfile locally, but when you build, you point the context to the remote host, and your build context (the Dockerfile and the files referenced in it) is sent to the remote docker engine. This way you can save a lot of battery life, as the heavy computation is done on the remote docker engine.
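For example, to build an image on the remote engine while keeping the Dockerfile and build context local (the image tag here is just an example):

$ docker --context=home-server build -t ruanbekker/example-image:latest .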

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Create a RAID5 Array With Mdadm on Linux


In this tutorial we will setup a RAID5 array, which is striping across multiple drives with distributed parity, which is good for redundancy. We will be using Ubuntu as our Linux distribution, but the technique applies to other Linux distributions as well.

What are we trying to achieve

We will run a server with one root disk and six extra disks, where we will first create our raid5 array with three disks, then I will show you how to expand the array by adding the other three disks.

Things fail all the time, and it's not fun when hard drives break, therefore we want to do our best to prevent our applications from going down due to hardware failures. To achieve data redundancy, we want to use three hard drives, which we want to add into a raid configuration that will provide us with:

  • striping, which is the technique of segmenting logically sequential data, so that consecutive segments are stored on different physical storage devices.
  • distributed parity, where parity data is distributed across the physical disks (one parity block per stripe), which provides protection against a single physical disk failure; the minimum number of disks is three.
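In terms of usable capacity, a RAID5 array gives you (number of disks - 1) x disk size. With the three 10GB disks that we start with, that works out to (3 - 1) x 10GB = 20GB, and once we later grow the array to six disks it becomes (6 - 1) x 10GB = 50GB, which matches the sizes we will see on the md0 device.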

This is what a RAID5 array looks like (image from diskpart.com):

raid5

Hardware Overview

We will have a Linux server with one root disk and six extra disks:

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0    8G  0 disk
└─xvda1 202:1    0    8G  0 part /
xvdb    202:16   0   10G  0 disk
xvdc    202:32   0   10G  0 disk
xvdd    202:48   0   10G  0 disk
xvde    202:64   0   10G  0 disk
xvdf    202:80   0   10G  0 disk
xvdg    202:96   0   10G  0 disk

Dependencies

We require mdadm to create our raid configuration:

$ sudo apt update
$ sudo apt install mdadm -y

Format Disks

First we will format and partition the following disks: /dev/xvdb, /dev/xvdc and /dev/xvdd. I will demonstrate the process for one disk, but repeat it for the others as well:

$ fdisk /dev/xvdc

Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The old ext4 signature will be removed by a write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x26a2d2f6.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
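If you prefer to script the partitioning instead of stepping through fdisk for every disk, a non-interactive sketch with sfdisk (assuming an empty disk and a single partition of type fd, Linux raid autodetect) could look like this, repeated per disk:

$ echo ',,fd' | sudo sfdisk /dev/xvdc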

Create RAID5 Array

Using mdadm, create the /dev/md0 device, by specifying the raid level and the disks that we want to add to the array:

$ mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/xvdb1 /dev/xvdc1 /dev/xvdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Now that our device has been added, we can monitor the process:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 xvdd1[3] xvdc1[1] xvdb1[0]
      20951040 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 11.5% (1212732/10475520) finish=4.7min speed=32103K/sec

unused devices: <none>

As you can see, it's currently at 11.5%; give it some time to complete, and you should treat the following as the completed state:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 xvdd1[3] xvdc1[1] xvdb1[0]
      20951040 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

We can also inspect devices with mdadm:

$ mdadm -E /dev/xvd[b-d]1
/dev/xvdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ea997bce:a530519c:ae41022e:0f4306bf
           Name : ip-172-31-3-57:0  (local to host ip-172-31-3-57)
  Creation Time : Wed Jan 12 13:36:39 2022
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 20951040 (9.99 GiB 10.73 GB)
     Array Size : 20951040 (19.98 GiB 21.45 GB)
    Data Offset : 18432 sectors
   Super Offset : 8 sectors
   Unused Space : before=18280 sectors, after=0 sectors
          State : clean
    Device UUID : 8305a179:3ef96520:6c7b41dd:bdc7401f

    Update Time : Wed Jan 12 13:42:14 2022
  Bad Block Log : 512 entries available at offset 136 sectors
       Checksum : 1f9b4887 - correct
         Events : 18

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

To get information about your raid5 device:

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan 12 13:36:39 2022
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Jan 12 13:42:14 2022
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ip-172-31-3-57:0  (local to host ip-172-31-3-57)
              UUID : ea997bce:a530519c:ae41022e:0f4306bf
            Events : 18

    Number   Major   Minor   RaidDevice State
       0     202       17        0      active sync   /dev/xvdb1
       1     202       33        1      active sync   /dev/xvdc1
       3     202       49        2      active sync   /dev/xvdd1

Create Filesystems

We will use our /dev/md0 device and create an ext4 filesystem:

$ mkfs.ext4 /dev/md0
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 5237760 4k blocks and 1310720 inodes
Filesystem UUID: 579f045e-d270-4ff2-b36b-8dc506c27c5f
Superblock backups stored on blocks:
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
  4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

We can then verify that by looking at our block devices using lsblk:

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
xvda    202:0    0    8G  0 disk
└─xvda1 202:1    0    8G  0 part  /
xvdb    202:16   0   10G  0 disk
└─xvdb1 202:17   0   10G  0 part
  └─md0   9:0    0   20G  0 raid5
xvdc    202:32   0   10G  0 disk
└─xvdc1 202:33   0   10G  0 part
  └─md0   9:0    0   20G  0 raid5
xvdd    202:48   0   10G  0 disk
└─xvdd1 202:49   0   10G  0 part
  └─md0   9:0    0   20G  0 raid5
xvde    202:64   0   10G  0 disk
xvdf    202:80   0   10G  0 disk
xvdg    202:96   0   10G  0 disk

Now we can mount our device to /mnt:

$ mount /dev/md0 /mnt

We can verify that the device is mounted by using df:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.7G  1.5G  6.3G  19% /
/dev/md0         20G   45M   19G   1% /mnt

To persist the device across reboots, add it to the /etc/fstab file:

$ cat /etc/fstab
/dev/md0                /mnt     ext4   defaults                0 0
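If you prefer, you can reference the filesystem by its UUID instead of the device name, which is a bit more robust if the md device gets renumbered. The UUID below is the one reported by mkfs.ext4 earlier (yours will differ):

$ blkid /dev/md0
/dev/md0: UUID="579f045e-d270-4ff2-b36b-8dc506c27c5f" TYPE="ext4"

$ cat /etc/fstab
UUID=579f045e-d270-4ff2-b36b-8dc506c27c5f /mnt ext4 defaults 0 0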

Now our filesystem which is mounted at /mnt is ready to be used.

RAID Configuration (across reboots)

By default RAID doesn't have a config file, therefore we need to save it manually. If this step is not followed, the RAID device might not come back as md0 after a reboot, but as something else.

So we have to save the configuration so that it persists across reboots; on reboot it gets loaded by the kernel and the RAID array will be assembled again.

$ mdadm --detail --scan --verbose >> /etc/mdadm.conf

Note: Saving the configuration ensures the array is reassembled as /dev/md0 with the same RAID level across reboots.
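On Ubuntu specifically, the mdadm package reads its configuration from /etc/mdadm/mdadm.conf, and rebuilding the initramfs helps ensure the array is assembled early during boot; an Ubuntu flavoured equivalent (verify the path on your system) would be:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
$ sudo update-initramfs -u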

Adding Spare Devices

Earlier I mentioned that we have spare disks that we can use to expand our raid device. After they have been formatted we can add them as spare devices to our raid setup:

$ mdadm --add /dev/md0 /dev/xvde1 /dev/xvdf1 /dev/xvdg1
mdadm: added /dev/xvde1
mdadm: added /dev/xvdf1
mdadm: added /dev/xvdg1

Verify our change by viewing the detail of our device:

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan 12 13:36:39 2022
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Wed Jan 12 14:28:23 2022
             State : clean
    Active Devices : 3
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 3

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ip-172-31-3-57:0  (local to host ip-172-31-3-57)
              UUID : ea997bce:a530519c:ae41022e:0f4306bf
            Events : 27

    Number   Major   Minor   RaidDevice State
       0     202       17        0      active sync   /dev/xvdb1
       1     202       33        1      active sync   /dev/xvdc1
       3     202       49        2      active sync   /dev/xvdd1

       4     202       65        -      spare   /dev/xvde1
       5     202       81        -      spare   /dev/xvdf1
       6     202       97        -      spare   /dev/xvdg1

As you can see they are only spares at this moment; to use the spares for data storage, we need to grow our device:

$ mdadm --grow --raid-devices=6 /dev/md0

Verify:

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan 12 13:36:39 2022
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Wed Jan 12 15:15:31 2022
             State : clean, reshaping
    Active Devices : 6
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Reshape Status : 0% complete
     Delta Devices : 3, (3->6)

              Name : ip-172-31-3-57:0  (local to host ip-172-31-3-57)
              UUID : ea997bce:a530519c:ae41022e:0f4306bf
            Events : 36

    Number   Major   Minor   RaidDevice State
       0     202       17        0      active sync   /dev/xvdb1
       1     202       33        1      active sync   /dev/xvdc1
       3     202       49        2      active sync   /dev/xvdd1
       6     202       97        3      active sync   /dev/xvdg1
       5     202       81        4      active sync   /dev/xvdf1
       4     202       65        5      active sync   /dev/xvde1

Wait for the raid to reshape by viewing mdstat:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 xvdg1[6] xvdf1[5] xvde1[4] xvdd1[3] xvdc1[1] xvdb1[0]
      20951040 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  reshape =  0.7% (76772/10475520) finish=18.0min speed=9596K/sec

unused devices: <none>

Resizing our Filesystem

Once we have added the spares and grown our device, we need to run an integrity check, then we can resize the volume. But first, we need to unmount our filesystem:

$ umount /mnt

Run an integrity check:

$ e2fsck -f /dev/md0
e2fsck 1.45.5 (07-Jan-2020)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: 12/1310720 files (0.0% non-contiguous), 126323/5237760 blocks

Once that has passed, resize the file system:

$ resize2fs /dev/md0
resize2fs 1.45.5 (07-Jan-2020)
Resizing the filesystem on /dev/md0 to 13094400 (4k) blocks.
The filesystem on /dev/md0 is now 13094400 (4k) blocks long.

Then we remount our filesystem:

$ mount /dev/md0 /mnt

After the filesystem has been mounted, we can view the disk size and confirm that the size increased:

$ df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         50G   52M   47G   1% /mnt

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Install a Specific Python Version on Ubuntu


In this short tutorial, I will demonstrate how to install a specific version of Python on Ubuntu Linux.


Dependencies

Update the apt repositories:

$ sudo apt update

Then install the required dependencies:

$ sudo apt install libssl-dev openssl wget build-essential zlib1g-dev -y

Python Versions

Head over to the Python Downloads section and select the version of your choice; in my case I will be using Python 3.8.13. Once you have the download link, download it:

$ wget https://www.python.org/ftp/python/3.8.13/Python-3.8.13.tgz

Then extract the tarball:

$ tar -xvf Python-3.8.13.tgz

Once it completes, change to the directory:

$ cd Python-3.8.13

Installation

Configure the build and add the --enable-optimizations flag as an argument:

$ ./configure --enable-optimizations

Run make and make install:

$ make
$ sudo make install

Once it completes, you can symlink the python binary so that it's picked up by your PATH. If you have no other python versions installed, or want to use this one as the default, you can force overwriting the symlink:

$ sudo ln -fs /usr/local/bin/python3 /usr/bin/python3

Then we can test it by running:

$ python3 --version
Python 3.8.13
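As a side note, if you would rather not overwrite the default python3 symlink at all, make altinstall can be used instead of make install; it installs a versioned binary (python3.8 in this case) alongside the existing one:

$ sudo make altinstall
$ python3.8 --version
Python 3.8.13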

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

How to Persist Iptables Rules After Reboots


In this tutorial we will demonstrate how to persist iptables rules across reboots.

Rules Persistence

By default, when you create iptables rules they are active immediately, but as soon as you restart your server the rules will be gone. Therefore we need to persist these rules across reboots.

Dependencies

We require the iptables-persistent package, and since I will be installing it on a Debian based system, I will be using apt:

sudo apt update
sudo apt install iptables-persistent -y

Ensure that the service is enabled to start on boot:

sudo systemctl enable netfilter-persistent

Creating Iptables Rules

In this case I will allow port 80 on TCP from all sources:

sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT

To persist our current rules, we need to save them to /etc/iptables/rules.v4 with iptables-save:

sudo iptables-save > /etc/iptables/rules.v4

Now when we restart, our rules will be loaded and our previously defined rules will be active.
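After a reboot you can confirm that the rules were loaded again by listing them (and if you also use IPv6 rules, save those to /etc/iptables/rules.v6 with ip6tables-save in the same way):

sudo iptables -L INPUT -n --line-numbers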

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

How to Read and Write Json Data With Python

This is a short tutorial on how to use python to write and read JSON data to and from files.

Example

To write the following json data:

{"name": "ruan"}

To a file named /tmp/data.json, we will be using this code:

import json

data = {"name": "ruan"}

# write the dictionary as a json string to /tmp/data.json
with open('/tmp/data.json', 'w') as f:
    f.write(json.dumps(data))

When we execute that code, we will find the data inside that file:

$ cat /tmp/data.json
{"name": "ruan"}

And if we want to use python to read the data:

import json

# read the json string from the file and parse it into a dictionary
with open('/tmp/data.json', 'r') as f:
    print(json.loads(f.read()))

When we execute that code, we will see:

{'name': 'ruan'}
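As a variation, json.dump and json.load work directly on the file object, so the same example can be written without going through strings:

import json

data = {"name": "ruan"}

# write the dictionary to the file as json
with open('/tmp/data.json', 'w') as f:
    json.dump(data, f)

# read it back into a python dictionary
with open('/tmp/data.json', 'r') as f:
    print(json.load(f))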

Thank You

Thanks for reading, feel free to check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Setup Linkding Bookmarks Manager on Docker

Note: Originally posted on containers.fan

I've stumbled upon a great bookmarks manager service called Linkding. What I really like about it is that it allows you to save your bookmarks, assign tags to them so you can search for them later, it has Chrome and Firefox browser extensions, and it comes with an API.

Installing Linkding

We will be using Traefik to do SSL termination and host based routing. If you don't have Traefik running already, you can follow this post to get that set up:

You can follow the linkding documentation for more detailed information.

The docker-compose.yml that I will be using:

version: "3.8"

services:
  linkding:
    image: sissbruecker/linkding:latest
    container_name: linkding
    volumes:
      - ./data:/etc/linkding/data
    environment:
      - LD_DISABLE_BACKGROUND_TASKS=False
      - LD_DISABLE_URL_VALIDATION=False
    restart: unless-stopped
    cpus: 0.5
    mem_limit: 512m
    networks:
      - public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.linkding-app.rule=Host(`linkding.yourdomain.net`)"
      - "traefik.http.routers.linkding-app.entrypoints=https"
      - "traefik.http.routers.linkding-app.tls.certresolver=letsencrypt"
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

networks:
  public:
    name: public

Make sure to replace the FQDN with one of your choice, as I used linkding.yourdomain.net as an example.

Once everything is in place, boot the stack:

docker-compose up -d

Admin Account Registration

Once your linkding container has booted, you can create an admin user with the following command (ensure to replace where needed):

docker-compose exec linkding python manage.py createsuperuser --username=admin --email=root@localhost

Once you head over to the linkding url that you provided and you log on, you should be able to see something like this:

linkding

Creating Bookmarks

When you select “Add Bookmark” and you provide the URL, linkding will retrieve the title and the description and populate them for you, and you can provide the tags (separated by spaces):

linkding-bookmark

Browser Extensions

To add a browser extension, select “Settings”, then “Integrations”, then you will find the link to the browser extension for Chrome and Firefox:

linkding-browser-extension

After you install the browser extension and click on it for the first time, it will ask you to set the Linkding Base URL and API Authentication Token:

linkding-configuration

You can find that at the bottom of the “Integrations” section:

linkding-rest-api-access

REST API

You can follow the API Docs for more information; as an example, to search for bookmarks with the term “docker”:

curl -sL -H "Authorization: Token ${LINKDING_API_TOKEN}" "https://linkding.${DOMAIN}/api/bookmarks?q=docker" | python3 -m json.tool

In my case it returns a response like the following:

{
    "count": 1,
    "next": null,
    "previous": null,
    "results": [
        {
            "id": 6,
            "url": "https://www.docker.com/blog/deploying-web-applications-quicker-and-easier-with-caddy-2/",
            "title": "",
            "description": "",
            "website_title": "Deploying Web Applications Quicker and Easier with Caddy 2 - Docker",
            "website_description": "Deploying web apps can be tough, even with leading server technologies. Learn how you can use Caddy 2 and Docker simplify this process.",
            "is_archived": false,
            "tag_names": [
                "caddy",
                "docker"
            ],
            "date_added": "2022-05-31T19:11:53.739002Z",
            "date_modified": "2022-05-31T19:11:53.739016Z"
        }
    ]
}
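The API is not limited to searching; you can create bookmarks with it as well. A sketch of what that could look like (the endpoint and field names are based on the API docs mentioned above, so double check them against your linkding version):

curl -sL -X POST \
  -H "Authorization: Token ${LINKDING_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://containers.fan", "tag_names": ["docker", "blog"]}' \
  "https://linkding.${DOMAIN}/api/bookmarks/"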

Thank You

Thanks for reading, feel free to check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Python Flask Forms With Jinja Templating


In this tutorial, we will demonstrate how to use Python Flask and render_template to use Jinja Templating with our form. The example is just a UI that accepts a firstname, lastname and email address, and when we submit the form data, it renders it in a table.

Install Flask

Create a virtual environment, activate it and install Python Flask:

python3 -m pip install virtualenv
python3 -m virtualenv -p python3 .venv
source .venv/bin/activate
pip install flask

The Code

First we will create our application code in app.py:

from flask import Flask, render_template, request

app_version = '1.1.0'

app = Flask(__name__)

@app.route('/')
def root():
    return render_template('form.html')

@app.route('/result',methods = ['POST', 'GET'])
def result():
    if request.method == 'POST':
        result = request.form
        json_result = dict(result)
        print(json_result)
        return render_template("result.html", result=result, app_version=app_version)

if __name__ == '__main__':
    app.run(debug = True)

As you can see, our first route / will render the template in form.html. In our second route /result a couple of things are happening:

  • If we received a POST method, we will capture the form data
  • We are then casting it to a dictionary data type
  • Print the results out of our form data (for debugging)
  • Then we are passing the result object and the app_version variable to our template where it will be parsed.

When using render_template, all html files reside under the templates directory, so let's first create that directory:

mkdir templates

Then in your templates/base.html:
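A minimal sketch of what that base template could look like (my own example, it simply defines a Jinja block that the child templates will fill in):

<!DOCTYPE html>
<html>
  <head>
    <title>Flask Forms Example</title>
  </head>
  <body>
    {% block content %}{% endblock %}
  </body>
</html>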

In our templates/form.html we have our form template, and you can see we are referencing our base.html in our template to include the first bit:
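A sketch of the form template (assuming the field names firstname, lastname and email that app.py reads from the form data, posting to the /result route):

{% extends "base.html" %}
{% block content %}
<form action="/result" method="POST">
  <input type="text" name="firstname" placeholder="First Name">
  <input type="text" name="lastname" placeholder="Last Name">
  <input type="email" name="email" placeholder="Email">
  <button type="submit">Submit</button>
</form>
{% endblock %}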

Then our last template templates/result.html is used when we click on submit, when the form data is displayed in our table:
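And a sketch of the result template, looping over the submitted form fields and printing the app_version variable that we passed to render_template:

{% extends "base.html" %}
{% block content %}
<table>
  {% for key, value in result.items() %}
  <tr>
    <td>{{ key }}</td>
    <td>{{ value }}</td>
  </tr>
  {% endfor %}
</table>
<p>App version: {{ app_version }}</p>
{% endblock %}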

So our directory structure should look like this:

├── app.py
└── templates
    ├── base.html
    ├── form.html
    └── result.html

1 directory, 4 files

Then run the server:

python app.py

Screenshots

It should look like the following when you access http://localhost:5000/

python-flask-forms

After entering your form data, select “Submit”, then you should see the following:

python-flask-forms

So you can see that our request data was passed through to the template, as well as our app version variable.

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.


Prometheus Relabel Config Examples

This is a quick demonstration on how to use prometheus relabel configs, for scenarios where, for example, you want to use a part of your hostname and assign it to a prometheus label.

Prometheus Relabling

Using a standard prometheus config to scrape two targets:

  • ip-192-168-64-29.multipass:9100
  • ip-192-168-64-30.multipass:9100

global:
  scrape_interval:     15s
  evaluation_interval: 15s
  external_labels:
    cluster: 'local'

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 15s
    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'multipass-nodes'
    static_configs:
    - targets: ['ip-192-168-64-29.multipass:9100']
      labels:
        test: 1
    - targets: ['ip-192-168-64-30.multipass:9100']
      labels:
        test: 1

The Result:

image

When we want to relabel one of the source labels, in this case the prometheus internal label __address__, which is the given target including the port, we apply the regex (.+) to capture everything from the source label, and since there is only one capture group we set the replacement to ${1}-randomtext and apply that value to the given target_label, which in this case is randomlabel:

global:
  scrape_interval:     15s
  evaluation_interval: 15s
  external_labels:
    cluster: 'local'

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 15s
    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'multipass-nodes'
    static_configs:
    - targets: ['ip-192-168-64-29.multipass:9100']
      labels:
        test: 3
    - targets: ['ip-192-168-64-30.multipass:9100']
      labels:
        test: 3
    relabel_configs:
    - source_labels: [__address__]
      regex: '(.+)'
      replacement: '${1}-randomtext'
      target_label: randomlabel

The Result:

image

In this case we want to relabel the __address__ and apply the value to the instance label, but we want to exclude the :9100 from the __address__ label:

# Config: https://github.com/prometheus/prometheus/blob/release-2.36/config/testdata/conf.good.yml
global:
  scrape_interval:     15s
  evaluation_interval: 15s
  external_labels:
    cluster: 'local'

scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 15s
    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'multipass-nodes'
    static_configs:
    - targets: ['ip-192-168-64-29.multipass:9100']
      labels:
        test: 4
    - targets: ['ip-192-168-64-30.multipass:9100']
      labels:
        test: 4
    relabel_configs:
    - source_labels: [__address__]
      separator: ':'
      regex: '(.*):(.*)'
      replacement: '${1}'
      target_label: instance

The Result:

image

AWS EC2 SD Configs

On AWS EC2 you can make use of ec2_sd_configs, where you can use the values of your EC2 tags as prometheus label values.

In this scenario, on my EC2 instances I have 3 tags:

  • Key: PrometheusScrape, Value: Enabled
  • Key: Name, Value: pdn-server-1
  • Key: Environment, Value: dev

In our config, we only apply a node-exporter scrape config to instances which are tagged PrometheusScrape=Enabled, then we use the Name tag and assign its value to the instance label, and similarly we assign the Environment tag value to the environment prometheus label.

Because this prometheus instance resides in the same VPC, I am using __meta_ec2_private_ip, which is the private ip address of the EC2 instance, to build the address that prometheus needs to scrape for the node exporter metrics endpoint:

scrape_configs:
  - job_name: node-exporter
    scrape_interval: 15s
    ec2_sd_configs:
    - region: eu-west-1
      port: 9100
      filters:
        - name: tag:PrometheusScrape
          values:
            - Enabled
    relabel_configs:
    - source_labels: [__meta_ec2_private_ip]
      replacement: '${1}:9100'
      target_label: __address__
    - source_labels: [__meta_ec2_tag_Name]
      target_label: instance
    - source_labels: [__meta_ec2_tag_Environment]
      target_label: environment

You will need an EC2 Read Only instance role (or access keys in the configuration) in order for prometheus to read the EC2 tags on your account.
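As an illustration, a minimal IAM policy for this could look like the following (a sketch; ec2:DescribeInstances is the permission the EC2 service discovery needs to list your instances and their tags):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": "*"
    }
  ]
}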

See their documentation for more info.

Stack

The docker-compose used:

version: '3.8'

services:
  prometheus:
    image: prom/prometheus
    container_name: 'prometheus'
    user: root
    restart: unless-stopped
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention=14d'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.external-url=http://prometheus.127.0.0.1.nip.io'
    ports:
      - 9090:9090
    networks:
      - public
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

networks:
  public:
    name: public

volumes:
  prometheus-data: {}

References

Useful docs:

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Create a AWS Lambda Layer With Docker

In this tutorial we will be creating an AWS Lambda Python Layer that will include the Python Requests package, and we will compile the package with Docker and the LambCI image.

Getting Started

First we will create the directory where we will store the intermediate data:

$ mkdir lambda-layers
$ cd lambda-layers

Then we will create the directory structure, as you can see I will be using the python 3.8 runtime:

$ mkdir -p requests/python/lib/python3.8
$ cd requests

Write the dependencies to the requirements file:

$ echo "requests" > requirements.txt

Install the dependencies locally using docker, where we will be using the lambci/lambda:build-python3.8 image, mounting our current working directory to /var/task inside the container, and then running the command pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit inside the container, which will essentially dump the installed packages into our working directory:

$ docker run -v $PWD:/var/task \
   lambci/lambda:build-python3.8 \
   sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit"

Zip up the deployment package that we will push to AWS Lambda Layers:

$ zip -r package.zip python > /dev/null

Publish the layer using the aws cli tools, by specifying the deployment package, the compatible runtime and an identifier:

$ aws --profile dev lambda \
   publish-layer-version --layer-name python-requests \
   --description "Python Requests using 3.8 Runtime" \
   --zip-file fileb://package.zip \
   --compatible-runtimes "python3.8"

Then when you want to reference the layer on the function that you want to create, you can do it like this:

$ aws lambda create-function --function-name test-requests \
   --runtime python3.8 \
   --handler lambda_function.lambda_handler \
   --role "" --layers "arn:aws:lambda:eu-west-1:xxxxxxxxxxxx:layer:test-requests" \
   --code "S3Bucket=string,S3Key=string"
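Once the layer is attached, the requests package can be imported in your function code like any other module. A small hypothetical lambda_function.py to illustrate:

import requests

def lambda_handler(event, context):
    # requests is provided by the layer, not by the function's deployment package
    response = requests.get("https://api.github.com")
    return {"status_code": response.status_code}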

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Credit to oznetnerd.com.