Ruan Bekker's Blog

From a Curious mind to Posts on Github

Everything You Need to Know About Helm


Helm is one amazing piece of software that I use multiple times per day!

What is Helm?

You can think of helm as a package manager for kubernetes, but in fact it's much more than that.

Think about it in the following way:

  • Kubernetes Package Manager
  • Way to templatize your applications (this is the part I'm super excited about)
  • Easy way to install applications to your kubernetes cluster
  • Easy way to do upgrades to your applications
  • Websites such as artifacthub.io provide a nice interface to look up any application and how to install or upgrade it.

How does Helm work?

Helm uses your kubernetes config to connect to your kubernetes cluster. It utilises the config defined by the KUBECONFIG environment variable, which in most cases points to ~/.kube/config.
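
Helm also respects the standard kubernetes connection flags, so if you work with multiple clusters you can point it at a specific kubeconfig or context. A quick illustration (the context name here is just an example):

export KUBECONFIG=~/.kube/config
helm list --kube-context kind-cluster-1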

If you want to follow along, you can view the following blog post to provision a kubernetes cluster locally:

Once you have provisioned your kubernetes cluster locally, you can proceed to install helm. I will assume that you are using a Mac:

brew install helm

Once helm has been installed, you can test the installation by listing your helm releases:

helm list

Helm Charts

Helm uses a packaging format called charts, which is a collection of files that describes a related set of kubernetes resources. A single helm chart might be used to deploy something simple such as a deployment, or something complex that deploys a deployment, ingress, horizontal pod autoscaler, etc.

Using Helm to deploy applications

So let’s assume that we have our kubernetes cluster deployed, and now we are ready to deploy some applications to it, but we are unsure how to do that.

Let’s assume we want to install Nginx.

First we would navigate to artifacthub.io, which is a repository that holds a bunch of helm charts and the information on how to deploy helm charts to our cluster.

Then we would search for Nginx, which would ultimately land us on the Bitnami Nginx chart page.

On this page, we have super useful information such as how to use this helm chart, the default values, etc.

Now that we have identified the chart that we want to install, we can have a look at their readme, which will indicate how to install the chart:

$ helm repo add my-repo https://charts.bitnami.com/bitnami
$ helm install my-release my-repo/nginx

But before we do that, consider the workflow: after we add a repository, we can first look up information such as the available release versions before we install a release.

So the way I would do it is to first add the repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

Since we have added the repository, we can update it to ensure that we have the latest release versions:

$ helm repo update

Now that we have updated our local repositories, we want to find the release versions, and we can do that by listing the repository in question. For example, if we don’t know the application name, we can search by the repository name:

$ helm search repo bitnami/ --versions

In this case we will get an output of all the applications that are currently hosted by Bitnami.

If we know the repository and the release name, we can refine our search by using:

$ helm search repo bitnami/nginx --versions

In this case we get an output of all the Nginx release versions that are currently hosted by Bitnami.

Installing a Helm Release

Now that we have received a response from helm search repo, we can see that we have different release versions, for example:

NAME                             CHART VERSION   APP VERSION DESCRIPTION
bitnami/nginx                     13.2.22         1.23.3      NGINX Open Source is a web server that can be a...
bitnami/nginx                     13.2.21         1.23.3      NGINX Open Source is a web server that can be a...

Each helm chart ships with default values, which means that when we install the helm release it will use the default values defined by the chart.

We can override those defaults with a yaml configuration file, usually referred to as values.yaml, in which we define the values that we want to override.

To get the current default values, we can use helm show values, which will look like the following:

$ helm show values bitnami/nginx --version 13.2.22

That will output to standard out, but we can redirect the output to a file using the following:

$ helm show values bitnami/nginx --version 13.2.22 > nginx-values.yaml

Now that we have redirected the output to nginx-values.yaml, we can inspect the default values using cat nginx-values.yaml, edit the yaml file with any values we want to override, and save it once we are done.
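
As an illustration, a minimal nginx-values.yaml override could look like the following (these keys come from the bitnami/nginx chart's defaults, so treat the exact values as an example):

replicaCount: 2
service:
  type: ClusterIP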

Now that we have our override values, we can install a release to our kubernetes cluster.

Let’s assume we want to install nginx to our cluster under the name my-nginx and we want to deploy it to the namespace called web-servers:

$ helm upgrade --install my-nginx bitnami/nginx --values nginx-values.yaml --namespace web-servers --create-namespace --version 13.2.22

In the example above, we defined the following:

  • upgrade --install - meaning we are installing a release; if it already exists, do an upgrade
  • my-nginx - use the release name my-nginx
  • bitnami/nginx - use the chart named nginx from the bitnami repository
  • --values nginx-values.yaml - define the values file with the overrides
  • --namespace web-servers --create-namespace - define the namespace where the release will be installed to, and create the namespace if it does not exist
  • --version 13.2.22 - specify the version of the chart to be installed

Information about the release

We can view information about our release by running:

$ helm list -n web-servers

Creating your own helm charts

It’s very common to create your own helm charts when your services follow a common pattern, such as in a microservice architecture, where you only want to override specific values such as the container image, etc.

In this case we can create our own helm chart using:

$ mkdir ~/charts
$ cd ~/charts
$ helm create my-chart

This will create a scaffolded project with the required information that we need to create our own helm chart. If we look at a tree view, it will look like the following:

$ tree .
.
└── my-chart
    ├── Chart.yaml
    ├── charts
    ├── templates
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   ├── hpa.yaml
    │   ├── ingress.yaml
    │   ├── service.yaml
    │   ├── serviceaccount.yaml
    │   └── tests
    │       └── test-connection.yaml
    └── values.yaml

4 directories, 10 files

This example chart can already be used. To see what this chart will produce when run with helm, we can use the helm template command:

$ cd my-chart
$ helm template example . --values values.yaml

The output will be something like the following:

---
# Source: my-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-my-chart
  labels:
    helm.sh/chart: my-chart-0.1.0
    app.kubernetes.io/name: my-chart
    app.kubernetes.io/instance: example
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: my-chart
          image: "nginx:1.16.0"
          ...
---
...

In our example it will create a service account, service, deployment, etc.

As you can see the spec.template.spec.containers[].image is set to nginx:1.16.0, and to see how that was computed, we can have a look at templates/deployment.yaml:
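
The relevant snippet from the scaffolded templates/deployment.yaml looks roughly like this:

containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}

Note the | default .Chart.AppVersion part, which is why the tag falls back to the chart's appVersion of 1.16.0 while tag is empty.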

As you can see, in the image: section we have .Values.image.repository and .Values.image.tag, and those values are retrieved from the values.yaml file. When we look at the values.yaml file:

image:
  repository: nginx
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

If we want to override the image repository and image tag, we can update the values.yaml file to, let’s say:

image:
  repository: busybox
  tag: latest
  pullPolicy: IfNotPresent

When we run our helm template command again, we can see that the computed values changed to what we want:

$ helm template example . --values values.yaml
---
# Source: my-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-my-chart
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: my-chart
          image: "busybox:latest"
          imagePullPolicy: IfNotPresent
      ...

Another way is to use --set:

$ helm template example . --values values.yaml --set image.repository=ruanbekker/containers,image.tag=curl
spec:
  template:
    spec:
      containers:
        - name: my-chart
          image: "ruanbekker/containers:curl"
      ...

The template subcommand provides a great way to debug your charts. To learn more about helm charts, view their documentation.
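
Related to helm template, helm lint is another useful check that can be run from the chart directory; it examines the chart for possible issues:

$ helm lint .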

Publish your Helm Chart to ChartMuseum

ChartMuseum is an open-source Helm Chart Repository server written in Go.

I will run the chartmuseum demonstration locally on my workstation using Docker. To run the server:

$ docker run --rm -it \
  -p 8080:8080 \
  -e DEBUG=1 \
  -e STORAGE=local \
  -e STORAGE_LOCAL_ROOTDIR=/charts \
  -v $(pwd)/charts:/charts \
  ghcr.io/helm/chartmuseum:v0.14.0

Now that ChartMuseum is running, we will need to install a helm plugin called helm-push which helps us push charts to our chartmuseum repository:

$ helm plugin install https://github.com/chartmuseum/helm-push

We can verify if our plugin was installed:

$ helm plugin list
NAME      VERSION DESCRIPTION
cm-push   0.10.3  Push chart package to ChartMuseum

Now we add our chartmuseum helm chart repository, which we will call cm-local:

$ helm repo add cm-local http://localhost:8080/

We can list our helm repository:

$ helm repo list
NAME                  URL
cm-local              http://localhost:8080/

Now that our helm repository has been added, we can push our helm chart to it. Ensure that we are in our chart's directory, where the Chart.yaml file resides. We need this file as it holds metadata about our chart.

We can view the Chart.yaml:

apiVersion: v2
name: my-chart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"

Push the helm chart to chartmuseum:

$ helm cm-push . http://localhost:8080/ --version 0.0.1
Pushing my-chart-0.0.1.tgz to http://localhost:8080/...
Done.

Now we should update our repositories so that we can get the latest changes:

$ helm repo update

Now we can list the charts under our repository:

$ helm search repo cm-local/
NAME              CHART VERSION   APP VERSION DESCRIPTION
cm-local/my-chart 0.0.1           1.16.0      A Helm chart for Kubernetes

We can now get the values for our helm chart by running:

$ helm show values cm-local/my-chart

This returns the values yaml that we can use for our chart. If we want to write the values yaml to a file so that we can use it to deploy a release, we can do:

$ helm show values cm-local/my-chart > my-values.yaml

Now when we want to deploy a release, we can do:

$ helm upgrade --install my-release cm-local/my-chart --values my-values.yaml --namespace test --create-namespace --version 0.0.1

After the release was deployed, we can list the releases by running:

$ helm list

And to view the release history:

$ helm history my-release

Resources

Please find the following information with regards to Helm documentation:

  • helm docs
  • helm chart template guide

If you need a kubernetes cluster and you would like to run this locally, find the following documentation in order to do that:

  • using kind for local kubernetes clusters

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Getting Started With Wiremock

In this tutorial we will use docker to run an instance of wiremock to set up a mock API for us to test our APIs.

Wiremock

Wiremock is a tool for building mock APIs, which enables us to build stable development environments.

Docker and Wiremock

Run a wiremock instance with docker:

docker run -it --rm -p 8080:8080 --name wiremock wiremock/wiremock:2.34.0

Our wiremock instance will then be exposed locally on port 8080, against which we can make a request to create an API mapping:

curl -XPOST -H "Content-Type: application/json" \
  http://localhost:8080/__admin/mappings \
  -d '{"request": {"url": "/testapi", "method": "GET"}, "response": {"status": 200, "body": "{\"result\": \"ok\"}", "headers": {"Content-Type": "application/json"}}}'

The response should be something like this:

{
    "id" : "223a2c0a-8b43-42dc-8ba6-fe973da1e420",
    "request" : {
      "url" : "/testapi",
      "method" : "GET"
    },
    "response" : {
      "status" : 200,
      "body" : "{\"result\": \"ok\"}",
      "headers" : {
        "Content-Type" : "application/json"
      }
    },
    "uuid" : "223a2c0a-8b43-42dc-8ba6-fe973da1e420"
}

Test Wiremock

If we make a GET request against our API:

curl http://localhost:8080/testapi

Our response should be:

{
  "result": "ok"
}

Export Wiremock Mappings

We can export our mappings to a local file named stubs.json with:

curl -s http://localhost:8080/__admin/mappings --output stubs.json

Import Wiremock Mappings

We can import our mappings from our stubs.json file with:

curl -XPOST -v --data-binary @stubs.json http://localhost:8080/__admin/mappings/import
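
To confirm that the import worked, we can count the mappings on the server (this assumes you have jq installed to parse the response):

curl -s http://localhost:8080/__admin/mappings | jq '.meta.total'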

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Logging With Docker Promtail and Grafana Loki

grafana-loki-promtail

In this post we will use Grafana Promtail to collect all our logs and ship them to Grafana Loki.

About

We will be using Docker Compose and mount the docker socket to Grafana Promtail so that it is aware of all the docker events, and configure it so that only containers with the docker label logging=promtail are enabled for logging. Promtail will then scrape those logs and send them to Grafana Loki, where we will visualize them in Grafana.

Promtail

In our promtail configuration config/promtail.yaml:

# https://grafana.com/docs/loki/latest/clients/promtail/configuration/
# https://docs.docker.com/engine/api/v1.41/#operation/ContainerList
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: flog_scrape
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
        filters:
          - name: label
            values: ["logging=promtail"]
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'logstream'
      - source_labels: ['__meta_docker_container_label_logging_jobname']
        target_label: 'job'

You can see we are using the docker_sd_configs provider and filtering only docker containers with the docker label logging=promtail. For those targets we then relabel the container name onto a container label, map the container's log stream onto a logstream label, and use the container label logging_jobname to set the job label.

Grafana Config

We would like to auto-configure our datasources for Grafana, and in config/grafana-datasources.yml we have:

apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    version: 1
    editable: false
    isDefault: true

Docker Compose

Then lastly we have our docker-compose.yml that wires up all our containers:

version: '3.8'

services:
  nginx-app:
    container_name: nginx-app
    image: nginx
    labels:
      logging: "promtail"
      logging_jobname: "containerlogs"
    ports:
      - 8080:80
    networks:
      - app

  grafana:
    image: grafana/grafana:latest
    ports:
      - 3000:3000
    volumes:
      - ./config/grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yaml
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    networks:
      - app

  loki:
    image: grafana/loki:latest
    ports:
      - 3100:3100
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - app

  promtail:
    image:  grafana/promtail:latest
    container_name: promtail
    volumes:
      - ./config/promtail.yaml:/etc/promtail/docker-config.yaml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
    command: -config.file=/etc/promtail/docker-config.yaml
    depends_on:
      - loki
    networks:
      - app

networks:
  app:
    name: app

As you can see with our nginx container we define our labels:

  nginx-app:
    container_name: nginx-app
    image: nginx
    labels:
      logging: "promtail"
      logging_jobname: "containerlogs"

It uses logging: "promtail" to let promtail know that this container's logs should be scraped, and logging_jobname: "containerlogs" to assign containerlogs to the job label.
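
Once everything is running, these labels are what we will query on in Grafana. As a sketch, a LogQL stream selector using the labels from this setup would look like:

{job="containerlogs", container="nginx-app"}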

Start the stack

If you are following along, all this configuration is available in my github repository: https://github.com/ruanbekker/docker-promtail-loki.

Once you have everything in place you can start it with:

docker-compose up -d

Access nginx on http://localhost:8080

image

Then navigate to grafana on http://localhost:3000 and select explore on the left and select the container:

image

And you will see the logs:

image

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

KinD for Local Kubernetes Clusters

kubernetes-kind

In this tutorial we will demonstrate how to use KinD (Kubernetes in Docker) to provision local kubernetes clusters for local development.

Updated at: 2023-12-22

About

KinD uses container images to run as “nodes”, so spinning up and tearing down clusters becomes really easy, and running multiple clusters or different kubernetes versions is as easy as pointing to a different container image.

Configuration such as node count, ports, volumes and image versions can be controlled either via the command line or via a configuration file; more information on that can be found in their documentation.

Installation

Follow the docs for more information, but for mac:

brew install kind

To verify if kind was installed, you can run:

kind version

Create a Cluster

Create the cluster with command line arguments, such as cluster name, the container image:

kind create cluster --name cluster-1 --image kindest/node:v1.26.6

And the output will look something like this:

Creating cluster "cluster-1" ...
 ✓ Ensuring node image (kindest/node:v1.26.6) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-cluster-1"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster-1

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

Then you can interact with the cluster using:

kubectl get nodes --context kind-cluster-1

Then delete the cluster using:

kind delete cluster --name cluster-1

I highly recommend installing kubectx, which makes it easy to switch between kubernetes contexts.

Create a Cluster with Config

If you would like to define your cluster configuration as config, you can create a file default-config.yaml with the following for a 2 node cluster, specifying version v1.26.6:

---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.26.6@sha256:6e2d8b28a5b601defe327b98bd1c2d1930b49e5d8c512e1895099e4504007adb
- role: worker
  image: kindest/node:v1.26.6@sha256:6e2d8b28a5b601defe327b98bd1c2d1930b49e5d8c512e1895099e4504007adb

Then create the cluster and point it at the config:

kind create cluster --name kind-cluster --config default-config.yaml

Interact with the Cluster

View the cluster info:

kubectl cluster-info --context kind-kind-cluster

View cluster contexts:

kubectl config get-contexts

Use context:

kubectl config use-context kind-kind-cluster

View nodes:

kubectl get nodes -o wide

NAME                         STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
kind-cluster-control-plane   Ready    control-plane   2m11s   v1.26.6   172.20.0.5    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4
kind-cluster-worker          Ready    <none>          108s    v1.26.6   172.20.0.4    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4

Deploy Sample Application

We will create a deployment, a service, and port-forward to our service to access our application. You can also specify port configuration for your cluster so that you don’t need to port-forward, which you can find in their port mappings documentation.

I will be using the following commands to generate the manifests, but will also add them to this post:

kubectl create deployment hostname --namespace default --replicas 2 --image ruanbekker/containers:hostname --port 8080 --dry-run=client -o yaml > hostname-deployment.yaml
kubectl expose deployment hostname --namespace default --port=80 --target-port=8080 --name=hostname-http --dry-run=client -o yaml > hostname-service.yaml

The manifest:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: hostname
  name: hostname
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hostname
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hostname
    spec:
      containers:
      - image: ruanbekker/containers:hostname
        name: containers
        ports:
        - containerPort: 8080
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: hostname
  name: hostname-http
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hostname
status:
  loadBalancer: {}

Then apply them with:

kubectl apply -f <name-of-manifest>.yaml

Or if you used kubectl to create them:

kubectl apply -f hostname-deployment.yaml
kubectl apply -f hostname-service.yaml

You can then view your resources with:

kubectl get deployment,pod,service

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hostname   2/2     2            2           9m27s

NAME                            READY   STATUS    RESTARTS   AGE
pod/hostname-7ff58c5644-67vhq   1/1     Running   0          9m27s
pod/hostname-7ff58c5644-wjjbw   1/1     Running   0          9m27s

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/hostname-http   ClusterIP   10.96.218.58   <none>        80/TCP    5m48s
service/kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP   24m

Port forward to your service:

kubectl port-forward svc/hostname-http 8080:80

Then access your application:

curl http://localhost:8080/

Hostname: hostname-7ff58c5644-wjjbw

Delete Kind Cluster

View the clusters:

kind get clusters

Delete a cluster:

kind delete cluster --name kind-cluster

Additional Configs

If you want more configuration options, you can look at their documentation.

One more example that I like using is to define the port mappings:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.26.6@sha256:6e2d8b28a5b601defe327b98bd1c2d1930b49e5d8c512e1895099e4504007adb
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"

Extras

I highly recommend using kubectx to switch contexts and kubens to set the default namespace, and aliases:

alias k=kubectl
alias kx=kubectx
alias kns=kubens

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Ansible Playbook for Your Macbook Homebrew Packages

ansible-macbook-homebrew

In this tutorial I will demonstrate how to use Ansible for Homebrew Configuration Management. Using Ansible to manage your homebrew packages helps you keep a consistent list of packages on your macbook.

For me personally, when I get a new laptop it’s always a mission to install the same packages as I had before, and ansible solves that by having all our packages defined in configuration management.

Install Ansible

Install ansible with python and pip:

python3 -m pip install ansible==4.9.0

Ansible Configuration

Create the ansible.cfg configuration file:

[defaults]
inventory = inventory.ini
deprecation_warnings = False

Our inventory.ini will define the information about our target host, which will be localhost, as we are running ansible against our local machine, the macbook itself:

[localhost]
my.laptop  ansible_connection=local

[localhost:vars]
ansible_python_interpreter = /usr/bin/python3

Ansible Playbook

Our playbook homebrew.yaml will define the tasks to add the homebrew taps, cask packages and homebrew packages. You can change the packages as you desire, but these are the ones that I use:

- hosts: localhost
  name: Macbook Playbook
  gather_facts: False
  vars:
    TFENV_ARCH: amd64
  tasks:
    - name: Ensures taps are present via homebrew
      community.general.homebrew_tap:
        name: ""
        state: present
      with_items:
        - hashicorp/tap

    - name: Ensures packages are present via homebrew cask
      community.general.homebrew_cask:
        name: ""
        state: present
        install_options: 'appdir=/Applications'
      with_items:
        - visual-studio-code
        - multipass
        - spotify

    - name: Ensures packages are present via homebrew
      community.general.homebrew:
        name: ""
        path: "/Applications"
        state: present
      with_items:
        - openssl
        - readline
        - sqlite3
        - xz
        - zlib
        - jq
        - yq
        - wget
        - go
        - kubernetes-cli
        - fzf
        - sshuttle
        - hugo
        - helm
        - kind
        - awscli
        - gnupg
        - kubectx
        - helm
        - stern
        - terraform
        - tfenv
        - pyenv
        - jsonnet
      ignore_errors: yes
      tags:
        - packages

Deploy Playbook

Now you can run the playbook using:

ansible-playbook homebrew.yaml
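
If you first want to preview what would change without installing anything, you can use ansible's check mode, and since the package task is tagged, you can also limit a run to just the homebrew packages:

ansible-playbook homebrew.yaml --check
ansible-playbook homebrew.yaml --tags packages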

Source Code

The code can be found in my github repository: https://github.com/ruanbekker/ansible-macbook-setup

Thanks

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Docker Multistage Builds for Hugo

blog-ruanbekker-multistage-builds

In this tutorial I will demonstrate how to keep your docker container images nice and slim with the use of multistage builds for a hugo documentation project.

Hugo is a static site generator, which essentially means it generates HTML from your markdown files. Therefore we don’t need to include all the content from our project repository, as only the static content (html, css, javascript) needs to reside on our final container image.

What are we doing today

We will use the DOKS Modern Documentation theme for Hugo as our project example, where we will build and run our documentation website on a docker container, but more importantly make use of multistage builds to optimize the size of our container image.

Our Build Strategy

Since hugo is a static site generator, we will use a node container image as our base. We will then build and generate the content using npm run build, which writes the static content to /src/public in our build stage.

Since we then have static content, we can use a second stage based on an nginx container image acting as a web server to host our static content. We will copy the static content from our build stage into this second stage and place it under the path defined in our nginx config.

This way we only include the required content on our final container image.

Building our Container Image

First clone the docs github repository and change to the directory:

git clone https://github.com/h-enk/doks
cd doks

Now create a Dockerfile in the root path with the following content:

FROM node:16.15.1 as build
WORKDIR /src
ADD . .
RUN npm install
RUN npm run build

FROM  nginx:alpine
LABEL demonstration.by="Ruan Bekker <@ruanbekker>"
COPY  nginx/config/nginx.conf /etc/nginx/nginx.conf
COPY  nginx/config/app.conf /etc/nginx/conf.d/app.conf
COPY  --from=build /src/public /usr/share/nginx/app

As we can see we are copying two nginx config files to our final image, which we will need to create.

Create the nginx config directory:

mkdir -p nginx/config

The content for our main nginx config nginx/config/nginx.conf:

user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    # timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout  25;
    send_timeout 10;

    # buffer size
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 4k;

    # gzip compression
    gzip  on;
    gzip_vary on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
    gzip_disable "MSIE [1-6]\.";

    include /etc/nginx/conf.d/app.conf;
}

And in our main nginx config we are including a virtual host config app.conf, which we will create locally, and the content of nginx/config/app.conf:

server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/app;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}

Now that we have our docker config in place, we can build our container image:

docker build -t ruanbekker/hashnode-docs-blogpost:latest .

Then we can review the size of our container image, which is only 27.4MB in size. Pretty neat, right?

docker images --filter reference=ruanbekker/hashnode-docs-blogpost

REPOSITORY                          TAG       IMAGE ID       CREATED          SIZE
ruanbekker/hashnode-docs-blogpost   latest    5b60f30f40e6   21 minutes ago   27.4MB
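
To see how that size is distributed across the image layers, we can inspect the image with docker history:

docker history ruanbekker/hashnode-docs-blogpost:latest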

Running our Container

Now that we’ve built our container image, we can run our documentation site, by specifying our host port on the left to map to our container port on the right in 80:80:

docker run -it -p 80:80 ruanbekker/hashnode-docs-blogpost:latest

Provided nothing else was already listening on port 80 before running the previous command, when you head to http://localhost (if you are running this locally), you should see our documentation site up and running:

image

Thank You

I have published this container image to ruanbekker/hashnode-docs-blogpost.

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Remote Builds With Docker Contexts

using-docker-contexts

Often you want to save some battery life when doing docker builds by leveraging a remote host to do the intensive work, and we can utilise docker contexts over ssh to do just that.

About

In this tutorial I will show you how to use a remote docker engine to do docker builds, so you still run the docker client locally, but the context of your build will be sent to a remote docker engine via ssh.

We will set up password-less ssh, configure our ssh config, create the remote docker context, then use the remote docker context.

image

Password-less SSH

I will be copying my public key to the remote host:

$ ssh-copy-id ruan@192.168.2.18

Set up my ssh config:

$ cat ~/.ssh/config
Host home-server
    Hostname 192.168.2.18
    User ruan
    IdentityFile ~/.ssh/id_rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null

Test:

$ ssh home-server whoami
ruan

Docker Context

On the target host (192.168.2.18) we can verify that docker is installed:

$ docker version
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:37 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:46 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

On the client (my laptop in this example), we will create a docker context called “home-server” and point it to our target host:

$ docker context create home-server --docker "host=ssh://home-server"
home-server
Successfully created context "home-server"

Now we can list our contexts:

docker context ls
NAME                TYPE                DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT                                  ORCHESTRATOR
default *           moby                Current DOCKER_HOST based configuration   unix:///var/run/docker.sock   https://k3d-master.127.0.0.1.nip.io:6445 (default)   swarm
home-server         moby                                                          ssh://home-server

Using Contexts

We can verify if this works by listing our cached docker images locally and on our remote host:

$ docker --context=default images | wc -l
 16

And listing the remote images by specifying the context:

$ docker --context=home-server images | wc -l
 70

We can set the default context to our target host:

$ docker context use home-server
home-server

Running Containers over Contexts

So running containers with remote contexts essentially becomes running containers on remote hosts. In the past, I had to set up an ssh tunnel, point the docker host env var to that endpoint, then run containers on the remote host.

That's a thing of the past; we can just point our docker context to our remote host and run the container. If you haven’t set the default context, you can specify the context, so running a docker container on a remote host with your docker client locally:

$ docker --context=home-server run -it -p 8002:8080 ruanbekker/hostname
2022/07/14 05:44:04 Server listening on port 8080

Now from our client (laptop), we can test our container on our remote host:

$ curl http://192.168.2.18:8002
Hostname: 8605d292e2b4

The same approach can be used for remote docker builds: you keep your Dockerfile locally, but when you build, you point the context to the remote host, and your build context (the dockerfile and the files it references) will be sent to the remote docker engine. This way you can save a lot of battery life, as the computation is done on the remote docker engine.
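
As a sketch, a remote build looks exactly like a local one with the context flag added (the image name here is just an example):

$ docker --context=home-server build -t myapp:latest .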

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Create a RAID5 Array With Mdadm on Linux

setup-raid5-array-ubuntu-linux

In this tutorial we will set up a RAID5 array, which stripes data across multiple drives with distributed parity, which is good for redundancy. We will be using Ubuntu for our Linux Distribution, but the technique applies to other Linux Distributions as well.

What are we trying to achieve

We will run a server with one root disk and 6 extra disks, where we will first create our raid5 array with three disks, then I will show you how to expand your raid5 array by adding three other disks.

Things fail all the time, and it’s not fun when hard drives break, therefore we want to do our best to prevent our applications from going down due to hardware failures. To achieve data redundancy, we want to use three hard drives, which we want to add into a raid configuration that will provide us with:

  • striping, which is the technique of segmenting logically sequential data, so that consecutive segments are stored on different physical storage devices.
  • distributed parity, where parity data is distributed between the physical disks, with one parity block per stripe; this provides protection against one physical disk failure, and the minimum number of disks is three.

This is what a RAID5 array looks like (image from diskpart.com):

raid5

Hardware Overview

We will have a Linux server with one root disk and six extra disks:

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0    8G  0 disk
└─xvda1 202:1    0    8G  0 part /
xvdb    202:16   0   10G  0 disk
xvdc    202:32   0   10G  0 disk
xvdd    202:48   0   10G  0 disk
xvde    202:64   0   10G  0 disk
xvdf    202:80   0   10G  0 disk
xvdg    202:96   0   10G  0 disk

Dependencies

We require mdadm to create our raid configuration:

$ sudo apt update
$ sudo apt install mdadm -y

Format Disks

First we will format and partition the following disks: /dev/xvdb, /dev/xvdc, /dev/xvdd. I will demonstrate the process for one disk, but repeat it for the others as well:

$ fdisk /dev/xvdc

Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The old ext4 signature will be removed by a write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x26a2d2f6.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
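
If you would rather not repeat the interactive fdisk session for every disk, the same partitioning can be scripted; a sketch using parted, assuming the same three disks (double check the device names before running it):

for disk in /dev/xvdb /dev/xvdc /dev/xvdd; do
  sudo parted -s "$disk" mklabel msdos mkpart primary 0% 100% set 1 raid on
done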

Create RAID5 Array

Using mdadm, create the /dev/md0 device, by specifying the raid level and the disks that we want to add to the array:

$ mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/xvdb1 /dev/xvdc1 /dev/xvdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Now that our device has been added, we can monitor the process:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 xvdd1[3] xvdc1[1] xvdb1[0]
      20951040 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 11.5% (1212732/10475520) finish=4.7min speed=32103K/sec

unused devices: <none>

As you can see, it's currently at 11.5%; give it some time to complete. You should treat the following as a completed state:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 xvdd1[3] xvdc1[1] xvdb1[0]
      20951040 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

We can also inspect devices with mdadm:

$ mdadm -E /dev/xvd[b-d]1
/dev/xvdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : ea997bce:a530519c:ae41022e:0f4306bf
           Name : ip-172-31-3-57:0  (local to host ip-172-31-3-57)
  Creation Time : Wed Jan 12 13:36:39 2022
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 20951040 (9.99 GiB 10.73 GB)
     Array Size : 20951040 (19.98 GiB 21.45 GB)
    Data Offset : 18432 sectors
   Super Offset : 8 sectors
   Unused Space : before=18280 sectors, after=0 sectors
          State : clean
    Device UUID : 8305a179:3ef96520:6c7b41dd:bdc7401f

    Update Time : Wed Jan 12 13:42:14 2022
  Bad Block Log : 512 entries available at offset 136 sectors
       Checksum : 1f9b4887 - correct
         Events : 18

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)

To get information about your raid5 device:

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan 12 13:36:39 2022
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Wed Jan 12 13:42:14 2022
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ip-172-31-3-57:0  (local to host ip-172-31-3-57)
              UUID : ea997bce:a530519c:ae41022e:0f4306bf
            Events : 18

    Number   Major   Minor   RaidDevice State
       0     202       17        0      active sync   /dev/xvdb1
       1     202       33        1      active sync   /dev/xvdc1
       3     202       49        2      active sync   /dev/xvdd1

Create Filesystems

We will use our /dev/md0 device and create an ext4 filesystem:

$ mkfs.ext4 /dev/md0
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 5237760 4k blocks and 1310720 inodes
Filesystem UUID: 579f045e-d270-4ff2-b36b-8dc506c27c5f
Superblock backups stored on blocks:
  32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
  4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

We can then verify that by looking at our block devices using lsblk:

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
xvda    202:0    0    8G  0 disk
└─xvda1 202:1    0    8G  0 part  /
xvdb    202:16   0   10G  0 disk
└─xvdb1 202:17   0   10G  0 part
  └─md0   9:0    0   20G  0 raid5
xvdc    202:32   0   10G  0 disk
└─xvdc1 202:33   0   10G  0 part
  └─md0   9:0    0   20G  0 raid5
xvdd    202:48   0   10G  0 disk
└─xvdd1 202:49   0   10G  0 part
  └─md0   9:0    0   20G  0 raid5
xvde    202:64   0   10G  0 disk
xvdf    202:80   0   10G  0 disk
xvdg    202:96   0   10G  0 disk

Now we can mount our device to /mnt:

$ mount /dev/md0 /mnt

We can verify that the device is mounted by using df:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       7.7G  1.5G  6.3G  19% /
/dev/md0         20G   45M   19G   1% /mnt

To persist the device across reboots, add it to the /etc/fstab file:

$ cat /etc/fstab
/dev/md0                /mnt     ext4   defaults                0 0

Now our filesystem which is mounted at /mnt is ready to be used.

RAID Configuration (across reboots)

By default RAID doesn’t have a config file, so we need to save it manually. If this step is not followed, the RAID device may not come back as md0 after a reboot, but as something else.

We therefore have to save the configuration for it to persist across reboots; on reboot it gets loaded into the kernel and the RAID array will be assembled.

$ mdadm --detail --scan --verbose >> /etc/mdadm.conf

Note: Saving the configuration will keep the RAID array assembled as the md0 device across reboots.

Adding Spare Devices

Earlier I mentioned that we have spare disks that we can use to expand our raid device. After they have been formatted we can add them as spare devices to our raid setup:

$ mdadm --add /dev/md0 /dev/xvde1 /dev/xvdf1 /dev/xvdg1
mdadm: added /dev/xvde1
mdadm: added /dev/xvdf1
mdadm: added /dev/xvdg1

Verify our change by viewing the detail of our device:

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan 12 13:36:39 2022
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Wed Jan 12 14:28:23 2022
             State : clean
    Active Devices : 3
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 3

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ip-172-31-3-57:0  (local to host ip-172-31-3-57)
              UUID : ea997bce:a530519c:ae41022e:0f4306bf
            Events : 27

    Number   Major   Minor   RaidDevice State
       0     202       17        0      active sync   /dev/xvdb1
       1     202       33        1      active sync   /dev/xvdc1
       3     202       49        2      active sync   /dev/xvdd1

       4     202       65        -      spare   /dev/xvde1
       5     202       81        -      spare   /dev/xvdf1
       6     202       97        -      spare   /dev/xvdg1

As you can see they are only spares at the moment; we can use the spares for data storage by growing our device:

$ mdadm --grow --raid-devices=6 /dev/md0

Verify:

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan 12 13:36:39 2022
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Wed Jan 12 15:15:31 2022
             State : clean, reshaping
    Active Devices : 6
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Reshape Status : 0% complete
     Delta Devices : 3, (3->6)

              Name : ip-172-31-3-57:0  (local to host ip-172-31-3-57)
              UUID : ea997bce:a530519c:ae41022e:0f4306bf
            Events : 36

    Number   Major   Minor   RaidDevice State
       0     202       17        0      active sync   /dev/xvdb1
       1     202       33        1      active sync   /dev/xvdc1
       3     202       49        2      active sync   /dev/xvdd1
       6     202       97        3      active sync   /dev/xvdg1
       5     202       81        4      active sync   /dev/xvdf1
       4     202       65        5      active sync   /dev/xvde1

Wait for the raid to reshape, by viewing the mdstat:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 xvdg1[6] xvdf1[5] xvde1[4] xvdd1[3] xvdc1[1] xvdb1[0]
      20951040 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  reshape =  0.7% (76772/10475520) finish=18.0min speed=9596K/sec

unused devices: <none>

Resizing our Filesystem

Once we have added the spares and grown our device, we need to run an integrity check, and then we can resize the volume. But first, we need to unmount our filesystem:

$ umount /mnt

Run an integrity check:

$ e2fsck -f /dev/md0
e2fsck 1.45.5 (07-Jan-2020)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: 12/1310720 files (0.0% non-contiguous), 126323/5237760 blocks

Once that has passed, resize the file system:

$ resize2fs /dev/md0
resize2fs 1.45.5 (07-Jan-2020)
Resizing the filesystem on /dev/md0 to 13094400 (4k) blocks.
The filesystem on /dev/md0 is now 13094400 (4k) blocks long.

Then we remount our filesystem:

$ mount /dev/md0 /mnt

After the filesystem has been mounted, we can view the disk size and confirm that the size increased:

$ df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         50G   52M   47G   1% /mnt

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Install a Specific Python Version on Ubuntu

install-specific-python-version

In this short tutorial, I will demonstrate how to install a specific version of Python on Ubuntu Linux.


Dependencies

Update the apt repositories:

$ sudo apt update

Then install the required dependencies:

$ sudo apt install libssl-dev openssl wget build-essential zlib1g-dev -y

Python Versions

Head over to the Python Downloads section and select the version of your choice. In my case I will be using Python 3.8.13. Once you have the download link, download it:

$ wget https://www.python.org/ftp/python/3.8.13/Python-3.8.13.tgz

Then extract the tarball:

$ tar -xvf Python-3.8.13.tgz

Once it completes, change to the directory:

$ cd Python-3.8.13

Installation

Configure the build and add the --enable-optimizations flag as an argument:

$ ./configure --enable-optimizations

Run make and make install:

$ make
$ sudo make install

Once it completes, you can symlink the python binary so that it’s detected by your PATH. If you have no installed python versions, or want to use this one as the default, you can force overwriting the symlink:

$ sudo ln -fs /usr/local/bin/python3 /usr/bin/python3

Then we can test it by running:

$ python3 --version
Python 3.8.13

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

How to Persist Iptables Rules After Reboots

persist-iptables-after-reboot

In this tutorial we will demonstrate how to persist iptables rules across reboots.

Rules Persistence

By default, when you create iptables rules they are active, but as soon as you restart your server the rules will be gone. Therefore we need to persist these rules across reboots.

Dependencies

We require the package iptables-persistent. I will install it on a debian system, so I will be using apt:

sudo apt update
sudo apt install iptables-persistent -y

Ensure that the service is enabled to start on boot:

sudo systemctl enable netfilter-persistent

Creating Iptables Rules

In this case I will allow port 80 on TCP from all sources:

sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT

To persist our current rules, we need to save them to /etc/iptables/rules.v4 with iptables-save:

sudo iptables-save > /etc/iptables/rules.v4

Now when we restart, our rules will be loaded and our previous defined rules will be active.
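
After a reboot, you can verify that the rule came back by listing the INPUT chain:

sudo iptables -L INPUT -n --line-numbers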

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.