Ruan Bekker's Blog

From a Curious mind to Posts on Github

How to Run an AMD64 Linux VM on a Mac M1

This tutorial will show you how you can run 64-bit Ubuntu Linux virtual machines on an Apple Mac M1 (arm64 architecture) MacBook using UTM.

Installation

Head over to their documentation, download the UTM.dmg file and install it. Once it is installed and you have opened UTM, you should see this screen:

image

Creating a Virtual Machine

In my case I would like to run an Ubuntu VM, so head over to the Ubuntu Server download page and download the version of your choice. I will be downloading Ubuntu Server 22.04. Once you have your ISO image downloaded, you can head over to the next step, which is to “Create a New Virtual Machine”:

image

I will select “Emulate” as I want to run the amd64 architecture, then select “Linux”:

image

In the next step we want to select the Ubuntu ISO image that we downloaded, which we want to use to boot our VM from:

image

Browse and select the image that you downloaded; once you have selected it, it should show something like this:

image

Select continue, then set the architecture to x86_64. The system I kept on the defaults, and I set the memory to 2048MB and the cores to 2, but that is just my preference:

image

The next screen is to configure storage; as this is for testing, I am setting mine to 8GB:

image

The next screen is shared directories, which is purely optional. I have created a directory for this:

mkdir ~/utm

I’ve then defined this as the shared directory, but this depends on whether you need shared directories from your local workstation.

The next screen is a summary of your choices, and you can name your VM here:

image

Once you are happy select save, and you should see something like this:

image

You can then select the play button to start your VM.

The console should appear and you can select the option to install or try Ubuntu:

image

This will start the installation process of a Linux Server:

image

Here you can select the options that you would like; I would just recommend ensuring that you select Install OpenSSH Server so that you can connect to your VM via SSH.

Once you get to this screen:

image

The installation process is busy and you will have to wait a couple of minutes for it to complete. Once you see the following screen, the installation is complete:

image

On the right-hand side select the circle, then select CD/DVD, select the Ubuntu ISO and select eject:

image

Starting your VM

Then power off the guest and power it on again. You should get a console login, where you can proceed to log in and view the IP address.
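On Ubuntu you can, for example, list the interfaces and their IP addresses with:

ip -br addr show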

SSH to your VM

Now from your terminal you should be able to ssh to the VM:
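For example, using the user that you created during the installation and the IP address that the VM received (both values here are illustrative):

ssh ruan@192.168.64.5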

We can also verify that we are running a 64-bit VM by running uname --processor:
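If we are emulating amd64, the output should be x86_64:

uname --processor
x86_64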

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Running a Multi-Broker Kafka Cluster on Docker

In this post we will run a Kafka cluster with 3 kafka brokers on docker compose, using a producer to send messages to our topics and a consumer that will receive the messages from the topics, both of which we will develop in python, and we will explore the kafka-ui.

What is Kafka?

Kafka is a distributed event store and stream processing platform. Kafka is used to build real-time streaming data pipelines and real-time streaming applications.

This is a fantastic resource if you want to understand the components in more detail: apache-kafka-architecture-what-you-need-to-know

But on a high level, these are the components of a typical Kafka setup:

  1. Zookeeper: Kafka relies on Zookeeper for leadership election of Kafka Brokers and Topic Partitions.
  2. Broker: A Kafka server that receives messages from producers, assigns them offsets and commits the messages to disk storage. An offset is used for data consistency in the event of failure, so that consumers know from where to continue consuming.
  3. Topic: A topic can be thought of as a category to organize messages. Producers write messages to topics; consumers read from those topics.
  4. Partitions: A topic is split into multiple partitions. This improves scalability through parallelism (not just one broker). Kafka also replicates partitions across brokers for fault tolerance.

For more in-depth information about kafka and its components, I encourage you to visit the post mentioned above.
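To make partitions and replication a bit more concrete: once the stack below is running, you could create a topic with 3 partitions and a replication factor of 3 using the kafka-topics tool that ships with the cp-kafka image (the topic name here is just an example):

docker exec -it broker-1 kafka-topics --create \
  --bootstrap-server broker-1:29091 \
  --topic demo-topic \
  --partitions 3 \
  --replication-factor 3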

Launch Kafka

This is the docker-compose.yaml that we will be using to run a kafka cluster with 3 broker containers, 1 zookeeper container, 1 producer, 1 consumer and a kafka-ui.

All the source code is available in my quick-starts github repository.

version: "3.9"

services:
  zookeeper:
    platform: linux/amd64
    image: confluentinc/cp-zookeeper:${CONFLUENT_PLATFORM_VERSION:-7.4.0}
    container_name: zookeeper
    restart: unless-stopped
    ports:
      - '32181:32181'
      - '2888:2888'
      - '3888:3888'
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeper:2888:3888
    healthcheck:
      test: echo stat | nc localhost 32181
      interval: 10s
      timeout: 10s
      retries: 3
    networks:
      - kafka
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    depends_on:
      - broker-1
      - broker-2
      - broker-3
    environment:
      KAFKA_CLUSTERS_0_NAME: broker-1
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: broker-1:29091
      KAFKA_CLUSTERS_0_METRICS_PORT: 19101
      KAFKA_CLUSTERS_1_NAME: broker-2
      KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS: broker-2:29092
      KAFKA_CLUSTERS_1_METRICS_PORT: 19102
      KAFKA_CLUSTERS_2_NAME: broker-3
      KAFKA_CLUSTERS_2_BOOTSTRAPSERVERS: broker-3:29093
      KAFKA_CLUSTERS_2_METRICS_PORT: 19103
      DYNAMIC_CONFIG_ENABLED: 'true'
    networks:
      - kafka
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

  broker-1:
    platform: linux/amd64
    image: confluentinc/cp-kafka:${CONFLUENT_PLATFORM_VERSION:-7.4.0}
    container_name: broker-1
    restart: unless-stopped
    ports:
      - '9091:9091'
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://broker-1:29091,EXTERNAL://localhost:9091
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_JMX_PORT: 19101
      KAFKA_JMX_HOSTNAME: localhost
    healthcheck:
      test: nc -vz localhost 9091
      interval: 10s
      timeout: 10s
      retries: 3
    networks:
      - kafka
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

  broker-2:
    platform: linux/amd64
    image: confluentinc/cp-kafka:${CONFLUENT_PLATFORM_VERSION:-7.4.0}
    container_name: broker-2
    restart: unless-stopped
    ports:
      - '9092:9092'
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://broker-2:29092,EXTERNAL://localhost:9092
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_JMX_PORT: 19102
      KAFKA_JMX_HOSTNAME: localhost
    healthcheck:
      test: nc -vz localhost 9092
      interval: 10s
      timeout: 10s
      retries: 3
    networks:
      - kafka
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

  broker-3:
    platform: linux/amd64
    image: confluentinc/cp-kafka:${CONFLUENT_PLATFORM_VERSION:-7.4.0}
    container_name: broker-3
    restart: unless-stopped
    ports:
      - '9093:9093'
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://broker-3:29093,EXTERNAL://localhost:9093
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_JMX_PORT: 19103
      KAFKA_JMX_HOSTNAME: localhost
    healthcheck:
      test: nc -vz localhost 9093
      interval: 10s
      timeout: 10s
      retries: 3
    networks:
      - kafka
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

  producer:
    platform: linux/amd64
    container_name: producer
    image: ruanbekker/kafka-producer-consumer:2023-05-17
    # source: https://github.com/ruanbekker/quick-starts/tree/main/docker/kafka/python-client
    restart: always
    environment:
      - ACTION=producer
      - BOOTSTRAP_SERVERS=broker-1:29091,broker-2:29092,broker-3:29093
      - TOPIC=my-topic
      - PYTHONUNBUFFERED=1 # https://github.com/docker/compose/issues/4837#issuecomment-302765592
    networks:
      - kafka
    depends_on:
      - zookeeper
      - broker-1
      - broker-2
      - broker-3
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

  consumer:
    platform: linux/amd64
    container_name: consumer
    image: ruanbekker/kafka-producer-consumer:2023-05-17
    # source: https://github.com/ruanbekker/quick-starts/tree/main/docker/kafka/python-client
    restart: always
    environment:
      - ACTION=consumer
      - BOOTSTRAP_SERVERS=broker-1:29091,broker-2:29092,broker-3:29093
      - TOPIC=my-topic
      - CONSUMER_GROUP=cg-group-id
      - PYTHONUNBUFFERED=1 # https://github.com/docker/compose/issues/4837#issuecomment-302765592
    networks:
      - kafka
    depends_on:
      - zookeeper
      - broker-1
      - broker-2
      - broker-3
      - producer
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

networks:
  kafka:
    name: kafka

Note: This docker-compose yaml can be found in my kafka quick-starts repository.

In our compose file we defined our core stack:

  • 1 Zookeeper Container
  • 3 Kafka Broker Containers
  • 1 Kafka UI

Then we have our clients:

  • 1 Producer Container
  • 1 Consumer Container

We can boot the stack with:

docker-compose up -d

You can verify that the brokers are passing their health checks with:

docker-compose ps

NAME                IMAGE                                           COMMAND                  SERVICE             CREATED             STATUS                   PORTS
broker-1            confluentinc/cp-kafka:7.4.0                     "/etc/confluent/dock…"   broker-1            5 minutes ago       Up 4 minutes (healthy)   0.0.0.0:9091->9091/tcp, :::9091->9091/tcp, 9092/tcp
broker-2            confluentinc/cp-kafka:7.4.0                     "/etc/confluent/dock…"   broker-2            5 minutes ago       Up 4 minutes (healthy)   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp
broker-3            confluentinc/cp-kafka:7.4.0                     "/etc/confluent/dock…"   broker-3            5 minutes ago       Up 4 minutes (healthy)   9092/tcp, 0.0.0.0:9093->9093/tcp, :::9093->9093/tcp
consumer            ruanbekker/kafka-producer-consumer:2023-05-17   "sh /src/run.sh $ACT…"   consumer            5 minutes ago       Up 4 minutes
kafka-ui            provectuslabs/kafka-ui:latest                   "/bin/sh -c 'java --…"   kafka-ui            5 minutes ago       Up 4 minutes             0.0.0.0:8080->8080/tcp, :::8080->8080/tcp
producer            ruanbekker/kafka-producer-consumer:2023-05-17   "sh /src/run.sh $ACT…"   producer            5 minutes ago       Up 4 minutes
zookeeper           confluentinc/cp-zookeeper:7.4.0                 "/etc/confluent/dock…"   zookeeper           5 minutes ago       Up 5 minutes (healthy)   0.0.0.0:2888->2888/tcp, :::2888->2888/tcp, 0.0.0.0:3888->3888/tcp, :::3888->3888/tcp, 2181/tcp, 0.0.0.0:32181->32181/tcp, :::32181->32181/tcp

Producers and Consumers

The producer generates random data and sends it to a topic, and the consumer listens on the same topic and reads the messages from it.
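The full client code lives in the quick-starts repository referenced above; as a rough sketch (not the exact code from the image), a producer like this can be built with the kafka-python library:

import json
from kafka import KafkaProducer

# connect to the three brokers on their internal listeners
producer = KafkaProducer(
    bootstrap_servers=["broker-1:29091", "broker-2:29092", "broker-3:29093"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# publish a json message to the topic that the consumer reads from
producer.send("my-topic", {"sequence_id": 1, "message": "hello"})
producer.flush()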

To view the output of what the producer is doing, you can tail the logs:

docker logs -f producer

setting up producer, checking if brokers are available
brokers not available yet
brokers are available and ready to produce messages
message sent to kafka with squence id of 1
message sent to kafka with squence id of 2
message sent to kafka with squence id of 3

And to view the output of what the consumer is doing, you can tail the logs:

docker logs -f consumer

starting consumer, checks if brokers are availabe
brokers not availbe yet
brokers are available and ready to consume messages
{'sequence_id': 10, 'user_id': '20520', 'transaction_id': '4026fd10-2aca-4d2e-8bd2-8ef0201af2dd', 'product_id': '17974', 'address': '71741 Lopez Throughway | South John | BT', 'signup_at': '2023-05-11 06:54:52', 'platform_id': 'Tablet', 'message': 'transaction made by userid 119740995334901'}
{'sequence_id': 11, 'user_id': '78172', 'transaction_id': '4089cee1-0a58-4d9b-9489-97b6bc4b768f', 'product_id': '21477', 'address': '735 Jasmine Village Apt. 009 | South Deniseland | BN', 'signup_at': '2023-05-17 09:54:10', 'platform_id': 'Tablet', 'message': 'transaction made by userid 159204336307945'}

Kafka UI

The Kafka UI will be available on http://localhost:8080

Where we can view lots of information, but in the below screenshot we can see our topics:

image

And when we look at my-topic, we can see an overview dashboard of our topic information:

image

We can also look at the messages in our topic, and search for specific messages:

image

And we can also look at the current consumers:

image

Resources

My Quick-Starts Github Repository:

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Manage Helm Releases With Terraform

helm-releases-with-terraform

In this post we will use terraform to deploy a helm release to kubernetes.

Kubernetes

For this demonstration I will be using kind to deploy a local Kubernetes cluster to the operating system that I am running this on, which will be Ubuntu Linux. For a more in-depth tutorial on Kind, you can see my post on Kind for Local Kubernetes Clusters.

Installing the Pre-Requirements

We will be installing terraform, docker, kind and kubectl on Linux.

Install terraform:

wget https://releases.hashicorp.com/terraform/1.3.0/terraform_1.3.0_linux_amd64.zip
unzip terraform_1.3.0_linux_amd64.zip
rm terraform_1.3.0_linux_amd64.zip
mv terraform /usr/bin/terraform

Verify that terraform has been installed:

terraform -version

Which in my case returns:

Terraform v1.3.0
on linux_amd64

Install Docker on Linux (be careful when piping curl to bash - only run scripts from sources that you trust):

curl https://get.docker.com | bash

Then running docker ps should return:

CONTAINER ID   IMAGE        COMMAND         CREATED          STATUS          PORTS       NAMES

Install kind on Linux:

apt update
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Then verify that kind was installed; kind --version should return:

kind version 0.17.0

Create a kubernetes cluster using kind:

kind create cluster --name rbkr --image kindest/node:v1.24.0

Now install kubectl:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

Then to verify that kubectl was installed:

kubectl version --client

Which in my case returns:

Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:58:16Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7

Now we can test if kubectl can communicate with the kubernetes api server:

kubectl get nodes

In my case it returns:

NAME                 STATUS   ROLES           AGE     VERSION
rbkr-control-plane   Ready    control-plane   6m20s   v1.24.0

Terraform

Now that our pre-requirements are sorted we can configure terraform to communicate with kubernetes. For that to happen, we need to consult the terraform kubernetes provider’s documentation.

As per their documentation they provide us with this snippet:

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.18.0"
    }
  }
}

provider "kubernetes" {
  # Configuration options
}

And from their main page, they give us a couple of options to configure the provider, and the easiest is probably to read the ~/.kube/config configuration file.
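That option would look something like the following, where config_path (and optionally config_context) are documented provider arguments:

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "kind-rbkr"
}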

But in cases where you have multiple configurations in your kube config file, this might not be ideal, and I like to be precise, so I will extract the client certificate, client key and cluster ca certificate and endpoint from our ~/.kube/config file.

If we run cat ~/.kube/config we will see something like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRU......FURS0tLS0tCg==
    server: https://127.0.0.1:40305
  name: kind-rbkr
contexts:
- context:
    cluster: kind-rbkr
    user: kind-rbkr
  name: kind-rbkr
current-context: kind-rbkr
kind: Config
preferences: {}
users:
- name: kind-rbkr
  user:
    client-certificate-data: LS0tLS1CRX......FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUejhKWUk2N2.....S0tCg==

First we will create a directory for our certificates:

mkdir ~/certs

I have truncated my kube config for readability, but for our first file certs/client-cert.pem we will copy the value of client-certificate-data:, which will look something like this:

cat certs/client-cert.pem
LS0tLS1CRX......FURS0tLS0tCg==

Then we will copy the contents of client-key-data: into certs/client-key.pem and then lastly the content of certificate-authority-data: into certs/cluster-ca-cert.pem.
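Instead of copying and pasting, you could also extract these values with kubectl and jsonpath (a sketch, assuming the kind-rbkr context from above):

kubectl config view --raw -o jsonpath='{.users[?(@.name=="kind-rbkr")].user.client-certificate-data}' > certs/client-cert.pem
kubectl config view --raw -o jsonpath='{.users[?(@.name=="kind-rbkr")].user.client-key-data}' > certs/client-key.pem
kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="kind-rbkr")].cluster.certificate-authority-data}' > certs/cluster-ca-cert.pem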

So then we should have the following files inside our certs/ directory:

tree certs/
certs/
├── client-cert.pem
├── client-key.pem
└── cluster-ca-cert.pem

0 directories, 3 files

Now make them read only:

chmod 400 ~/certs/*

Now that we have that we can start writing our terraform configuration. In providers.tf:

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.18.0"
    }
  }
}

provider "kubernetes" {
  host                   = "https://127.0.0.1:40305"
  client_certificate     = base64decode(file("~/certs/client-cert.pem"))
  client_key             = base64decode(file("~/certs/client-key.pem"))
  cluster_ca_certificate = base64decode(file("~/certs/cluster-ca-cert.pem"))
}

Your host might look different to mine, but you can find your host endpoint in ~/.kube/config.

For a simple test we can list all our namespaces to ensure that our configuration is working. In a file called namespaces.tf, we can populate the following:

data "kubernetes_all_namespaces" "allns" {}

output "all-ns" {
  value = data.kubernetes_all_namespaces.allns.namespaces
}

Now we need to initialize terraform so that it can download the providers:

terraform init

Then we can run a plan which will reveal our namespaces:

terraform plan

data.kubernetes_all_namespaces.allns: Reading...
data.kubernetes_all_namespaces.allns: Read complete after 0s [id=a0ff7e83ffd7b2d9953abcac9f14370e842bdc8f126db1b65a18fd09faa3347b]

Changes to Outputs:
  + all-ns = [
      + "default",
      + "kube-node-lease",
      + "kube-public",
      + "kube-system",
      + "local-path-storage",
    ]

We can now remove our namespaces.tf as our test worked:

rm namespaces.tf

Helm Releases with Terraform

We will need two things: the terraform helm release provider documentation, and the documentation of the helm chart that we are interested in.

In my previous post I wrote about Everything you need to know about Helm and I used the Bitnami Nginx Helm Chart, so we will use that one again.

As we are working with helm releases, we need to configure the helm provider. I will just extend the configuration from my previous provider config in providers.tf:

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.18.0"
    }
    helm = {
      source = "hashicorp/helm"
      version = "2.9.0"
    }
  }
}

provider "kubernetes" {
  host                   = "https://127.0.0.1:40305"
  client_certificate     = base64decode(file("~/certs/client-cert.pem"))
  client_key             = base64decode(file("~/certs/client-key.pem"))
  cluster_ca_certificate = base64decode(file("~/certs/cluster-ca-cert.pem"))
}

provider "helm" {
  kubernetes {
    host                   = "https://127.0.0.1:40305"
    client_certificate     = base64decode(file("~/certs/client-cert.pem"))
    client_key             = base64decode(file("~/certs/client-key.pem"))
    cluster_ca_certificate = base64decode(file("~/certs/cluster-ca-cert.pem"))
  }
}

We will create three terraform files:

touch {main,outputs,variables}.tf

And our values yaml will go in helm-chart/nginx/values.yaml, so create the directory:

mkdir -p helm-chart/nginx

Then you can copy the values file from https://artifacthub.io/packages/helm/bitnami/nginx?modal=values into helm-chart/nginx/values.yaml.

In our main.tf I will use two ways to override values in our values.yaml: set and templatefile. The templatefile function is useful when we want to inject a value into our values file, for example a value that we retrieved from a data source. In my example I'm just using a variable.

We will have the following:

resource "helm_release" "nginx" {
  name             = var.release_name
  version          = var.chart_version
  namespace        = var.namespace
  create_namespace = var.create_namespace
  chart            = var.chart_name
  repository       = var.chart_repository_url
  dependency_update = true
  reuse_values      = true
  force_update      = true
  atomic              = var.atomic

  set {
    name  = "image.tag"
    value = "1.23.3-debian-11-r3"
  }

  set {
    name  = "service.type"
    value = "ClusterIP"
  }

  values = [
    templatefile("${path.module}/helm-chart/nginx/values.yaml", {
      NAME_OVERRIDE   = var.release_name
    }
  )]

}

As you can see, we are referencing a NAME_OVERRIDE in our values.yaml. I have cleaned up the values file to the following:

nameOverride: "${NAME_OVERRIDE}"

## ref: https://hub.docker.com/r/bitnami/nginx/tags/
image:
  registry: docker.io
  repository: bitnami/nginx
  tag: 1.23.3-debian-11-r3

The NAME_OVERRIDE must be in a ${} format.

In our variables.tf we will have the following:

variable "release_name" {
  type        = string
  default     = "nginx"
  description = "The name of our release."
}

variable "chart_repository_url" {
  type        = string
  default     = "https://charts.bitnami.com/bitnami"
  description = "The chart repository url."
}

variable "chart_name" {
  type        = string
  default     = "nginx"
  description = "The name of of our chart that we want to install from the repository."
}

variable "chart_version" {
  type        = string
  default     = "13.2.20"
  description = "The version of our chart."
}

variable "namespace" {
  type        = string
  default     = "apps"
  description = "The namespace where our release should be deployed into."
}

variable "create_namespace" {
  type        = bool
  default     = true
  description = "If it should create the namespace if it doesn't exist."
}

variable "atomic" {
  type        = bool
  default     = false
  description = "If it should wait until release is deployed."
}

And lastly our outputs.tf:

output "metadata" {
  value = helm_release.nginx.metadata
}

Now that we have all our configuration ready, we can initialize terraform:

terraform init

Then we can run a plan to see what terraform wants to deploy:

terraform plan

The plan output shows the following:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # helm_release.nginx will be created
  + resource "helm_release" "nginx" {
      + atomic                     = false
      + chart                      = "nginx"
      + cleanup_on_fail            = false
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "nginx"
      + namespace                  = "apps"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                nameOverride: "nginx"

                ## ref: https://hub.docker.com/r/bitnami/nginx/tags/
                image:
                  registry: docker.io
                  repository: bitnami/nginx
                  tag: 1.23.3-debian-11-r3
            EOT,
        ]
      + verify                     = false
      + version                    = "13.2.20"
      + wait                       = false
      + wait_for_jobs              = false

      + set {
          + name  = "image.tag"
          + value = "1.23.3-debian-11-r3"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + metadata = (known after apply)

Once we are happy with our plan, we can run an apply:

terraform apply

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + metadata = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

helm_release.nginx: Creating...
helm_release.nginx: Still creating... [10s elapsed]

metadata = tolist([
  {
    "app_version" = "1.23.3"
    "chart" = "nginx"
    "name" = "nginx"
    "namespace" = "apps"
    "revision" = 1
    "values" = "{\"image\":{\"registry\":\"docker.io\",\"repository\":\"bitnami/nginx\",\"tag\":\"1.23.3-debian-11-r3\"},\"nameOverride\":\"nginx\"}"
    "version" = "13.2.20"
  },
])

Then we can verify if the pod is running:

kubectl get pods -n apps
NAME                    READY   STATUS    RESTARTS   AGE
nginx-59bdc6465-xdbfh   1/1     Running   0          2m35s

Importing Helm Releases into Terraform State

If you have an existing helm release that was deployed with helm and you want to transfer the ownership to terraform, you first need to write the terraform code, then import the resources into terraform state using:

terraform import helm_release.nginx apps/nginx

Where the last argument is <namespace>/<release-name>. Once that is imported you can run terraform plan and apply.

If you want to discover all the resources managed by helm you can use:

kubectl get all -A -l app.kubernetes.io/managed-by=Helm

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Persisting Terraform Remote State in Gitlab

terraform-state-gitlab

In this tutorial we will demonstrate how to persist your terraform state in gitlab's managed terraform state, using the terraform http backend.

For detailed information about this, consult their documentation.

What are we doing?

We will create a terraform pipeline where the plan step runs automatically and the apply step is a manual step.

During these steps, and across different pipeline runs, we need to persist our terraform state remotely so that new pipelines can read from our state what we last stored.

Gitlab offers a remote backend for our terraform state which we can use, and we will use a basic example of using the random resource.

Prerequisites

If you don’t see the “Infrastructure” menu on your left, you need to enable it at “Settings”, “General”, “Visibility”, “Project features”, “Permissions” and under “Operations”, turn on the toggle.

For more information on this see their documentation

Authentication

For this demonstration I created a token which is only scoped to this one project. For this we need to create a token under “Settings”, “Access Tokens”:

image

Select the api under scope:

image

Store the token name and token value as TF_USERNAME and TF_PASSWORD as a CICD variable under “Settings”, “CI/CD”, “Variables”.

Terraform Code

We will use a basic random_uuid resource for this demonstration, our main.tf:

resource "random_uuid" "uuid" {}

output "uuid" {
  value       = random_uuid.uuid.result
  sensitive   = false
}

In our providers.tf, you will notice that the backend "http" {} block is what is required for our gitlab remote state:

terraform {
  required_providers {
    random = {
      source = "hashicorp/random"
      version = "3.4.3"
    }
  }
  backend "http" {}
  required_version = "~> 1.3.6"
}

provider "random" {}

Push that up to gitlab for now.

Gitlab Pipeline

Our .gitlab-ci.yml consists of a plan step and an apply step; the apply step is manual, as we first want to review our plan before we apply.

Our pipeline will only run on the default branch, which in my case is main:

image:
  name: hashicorp/terraform:1.3.6
  entrypoint: [""]

cache:
  paths:
    - .terraform

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    - when: never

variables:
  TF_ADDRESS: "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/terraform/state/default-terraform.tfstate"

stages:
  - plan
  - apply

.terraform_init: &terraform_init
  - terraform init
      -backend-config=address=${TF_ADDRESS}
      -backend-config=lock_address=${TF_ADDRESS}/lock
      -backend-config=unlock_address=${TF_ADDRESS}/lock
      -backend-config=username=${TF_USERNAME}
      -backend-config=password=${TF_PASSWORD}
      -backend-config=lock_method=POST
      -backend-config=unlock_method=DELETE
      -backend-config=retry_wait_min=5

terraform:plan:
  stage: plan
  artifacts:
    paths:
      - '**/*.tfplan'
      - '**/.terraform.lock.hcl'
  before_script:
    - *terraform_init
  script:
    - terraform validate
    - terraform plan -input=false -out default.tfplan

terraform:apply:
  stage: apply
  artifacts:
    paths:
      - '**/*.tfplan'
      - '**/.terraform.lock.hcl'
  before_script:
    - *terraform_init
  script:
    - terraform apply -input=false -auto-approve default.tfplan
  when: manual

Where the magic happens is in the terraform init step; that is where we initialize the terraform state in gitlab, and as you can see we use the TF_ADDRESS variable to define the path of our state. In this case our state file will be named default-terraform.tfstate.

If you are deploying multiple environments, you can use something like ${ENVIRONMENT}-terraform.tfstate.
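For example, a sketch where ENVIRONMENT is a variable that you would define yourself (per branch or per pipeline):

variables:
  ENVIRONMENT: staging
  TF_ADDRESS: "https://gitlab.com/api/v4/projects/${CI_PROJECT_ID}/terraform/state/${ENVIRONMENT}-terraform.tfstate"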

When we run our pipeline, we can look at our plan step:

image

Once we are happy with this we can run the manual step and do the apply step, then our pipeline should look like this:

image

When we inspect our terraform state in the infrastructure menu, we can see the state file was created:

image

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Everything You Need to Know About Helm

image

Helm is one amazing piece of software that I use multiple times per day!

What is Helm?

You can think of helm as a package manager for kubernetes, but in fact it's much more than that.

Think about it in the following way:

  • Kubernetes Package Manager
  • A way to templatize your applications (this is the part I'm super excited about)
  • An easy way to install applications on your kubernetes cluster
  • An easy way to upgrade your applications
  • Websites such as artifacthub.io provide a nice interface to look up any application and how to install or upgrade that application.

How does Helm work?

Helm uses your kubernetes config to connect to your kubernetes cluster. In most cases it utilises the config defined by the KUBECONFIG environment variable, which in most cases points to ~/.kube/config.

If you want to follow along, you can view the following blog post to provision a kubernetes cluster locally:

Once you have provisioned your kubernetes cluster locally, you can proceed to install helm. I will make the assumption that you are using a Mac:

brew install helm

Once helm has been installed, you can test the installation by listing any helm releases, by running:

helm list

Helm Charts

Helm uses a packaging format called charts, which is a collection of files that describes a related set of kubernetes resources. A single helm chart might be used to deploy something simple, such as a deployment, or something complex that deploys a deployment, ingress, horizontal pod autoscaler, etc.

Using Helm to deploy applications

So let’s assume that we have our kubernetes cluster deployed, and now we are ready to deploy some applications to kubernetes, but we are unsure how we would do that.

Let’s assume we want to install Nginx.

First we would navigate to artifacthub.io, which is a repository that holds a bunch of helm charts and the information on how to deploy helm charts to our cluster.

Then we would search for Nginx, which would ultimately let us land on the Bitnami Nginx chart page:

On this view, we have super useful information such as how to use this helm chart, the default values, etc.

Now that we have identified the chart that we want to install, we can have a look at their readme, which will indicate how to install the chart:

$ helm repo add my-repo https://charts.bitnami.com/bitnami
$ helm install my-release my-repo/nginx

But before we do that, think about the workflow: we add a repository, and before we install a release, we can first look up information such as the available release versions.

So the way I would do it, is to first add the repository:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

Then since we have added the repository, we can update our repository to ensure that we have the latest release versions:

$ helm repo update

Now that we have updated our local repositories, we want to find the release versions, and we can do that by searching the repository in question. For example, if we don’t know the application name, we can search by the repository name:

$ helm search repo bitnami/ --versions

In this case we will get an output of all the applications that are currently hosted by Bitnami.

If we know the repository and the release name, we can extend our search by using:

$ helm search repo bitnami/nginx --versions

In this case we get an output of all the Nginx release versions that are currently hosted by Bitnami.

Installing a Helm Release

Now that we have received a response from helm search repo, we can see that we have different release versions, as example:

NAME                             CHART VERSION   APP VERSION DESCRIPTION
bitnami/nginx                     13.2.22         1.23.3      NGINX Open Source is a web server that can be a...
bitnami/nginx                     13.2.21         1.23.3      NGINX Open Source is a web server that can be a...

Each helm chart has default values, which means that when we install the helm release it will use the default values defined by the helm chart.

We can override the default values with a yaml configuration file, usually referred to as values.yaml, in which we define the values that we want to override.

To get the current default values, we can use helm show values, which will look like the following:

$ helm show values bitnami/nginx --version 13.2.22

That will output to standard out, but we can redirect the output to a file using the following:

$ helm show values bitnami/nginx --version 13.2.22 > nginx-values.yaml

Now that we have redirected the output to nginx-values.yaml, we can inspect the default values using cat nginx-values.yaml, edit the yaml file to change any values that we want to override, and save it once we are done.

Now that we have our override values, we can install a release to our kubernetes cluster.

Let’s assume we want to install nginx to our cluster under the name my-nginx and we want to deploy it to the namespace called web-servers:

$ helm upgrade --install my-nginx bitnami/nginx --values nginx-values.yaml --namespace web-servers --create-namespace --version 13.2.22

In the example above, we defined the following:

  • upgrade --install - meaning we are installing a release; if the release already exists, do an upgrade
  • my-nginx - use the release name my-nginx
  • bitnami/nginx - use the chart named nginx from the bitnami repository
  • --values nginx-values.yaml - define the values file with the overrides
  • --namespace web-servers --create-namespace - define the namespace where the release will be installed to, and create the namespace if not exists
  • --version 13.2.22 - specify the version of the chart to be installed

Information about the release

We can view information about our release by running:

$ helm list -n web-servers

Creating your own helm charts

It’s very common to create your own helm charts when you follow a common pattern in a microservice architecture or something else, where you only want to override specific values such as the container image, etc.

In this case we can create our own helm chart using:

$ mkdir ~/charts
$ cd ~/charts
$ helm create my-chart

This will create a scaffolding project with the required information that we need to create our own helm chart. If we look at a tree view, it will look like the following:

$ tree .
.
└── my-chart
    ├── Chart.yaml
    ├── charts
    ├── templates
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   ├── hpa.yaml
    │   ├── ingress.yaml
    │   ├── service.yaml
    │   ├── serviceaccount.yaml
    │   └── tests
    │       └── test-connection.yaml
    └── values.yaml

4 directories, 10 files

This example chart can already be used; to see what this chart will produce when rendering it with helm, we can use the helm template command:

$ cd my-chart
$ helm template example . --values values.yaml

The output will be something like the following:

---
# Source: my-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-my-chart
  labels:
    helm.sh/chart: my-chart-0.1.0
    app.kubernetes.io/name: my-chart
    app.kubernetes.io/instance: example
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: my-chart
          image: "nginx:1.16.0"
          ...
---
...

In our example it will create a service account, service, deployment, etc.

As you can see the spec.template.spec.containers[].image is set to nginx:1.16.0, and to see how that was computed, we can have a look at templates/deployment.yaml:
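The image line in the generated deployment template looks something like this (the scaffolding may differ slightly between helm versions):

image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"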

As you can see, in the image: section we have .Values.image.repository and .Values.image.tag, and those values are retrieved from the values.yaml file. When we look at the values.yaml file:

image:
  repository: nginx
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

If we want to override the image repository and image tag, we can update the values.yaml file to, let's say:

image:
  repository: busybox
  tag: latest
  pullPolicy: IfNotPresent

When we run our helm template command again, we can see that the computed values changed to what we want:

$ helm template example . --values values.yaml
---
# Source: my-chart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-my-chart
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: my-chart
          image: "busybox:latest"
          imagePullPolicy: IfNotPresent
      ...

Another way is to use --set:

$ helm template example . --values values.yaml --set image.repository=ruanbekker/containers,image.tag=curl
spec:
  template:
    spec:
      containers:
        - name: my-chart
          image: "ruanbekker/containers:curl"
      ...

The template subcommand provides a great way to debug your charts. To learn more about helm charts, view their documentation.
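Another useful command while developing charts is helm lint, which, when run from the chart directory, checks the chart for possible issues:

$ helm lint .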

Publish your Helm Chart to ChartMuseum

ChartMuseum is an open-source Helm Chart Repository server written in Go.

This chartmuseum demonstration will be run locally on my workstation using Docker. To run the server:

$ docker run --rm -it \
  -p 8080:8080 \
  -e DEBUG=1 \
  -e STORAGE=local \
  -e STORAGE_LOCAL_ROOTDIR=/charts \
  -v $(pwd)/charts:/charts \
  ghcr.io/helm/chartmuseum:v0.14.0

Now that ChartMuseum is running, we will need to install a helm plugin called helm-push which helps us push charts to our chartmuseum repository:

$ helm plugin install https://github.com/chartmuseum/helm-push

We can verify if our plugin was installed:

$ helm plugin list
NAME      VERSION DESCRIPTION
cm-push   0.10.3  Push chart package to ChartMuseum

Now we add our chartmuseum helm chart repository, which we will call cm-local:

$ helm repo add cm-local http://localhost:8080/

We can list our helm repository:

$ helm repo list
NAME                  URL
cm-local              http://localhost:8080/

Now that our helm repository has been added, we can push our helm chart to it. Ensure that you are in your chart's directory, with the Chart.yaml file in the current directory; we need this file as it holds the metadata about our chart.

We can view the Chart.yaml:

apiVersion: v2
name: my-chart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"

Push the helm chart to chartmuseum:

$ helm cm-push . http://localhost:8080/ --version 0.0.1
Pushing my-chart-0.0.1.tgz to http://localhost:8080/...
Done.

Now we should update our repositories so that we can get the latest changes:

$ helm repo update

Now we can list the charts under our repository:

$ helm search repo cm-local/
NAME              CHART VERSION   APP VERSION DESCRIPTION
cm-local/my-chart 0.0.1           1.16.0      A Helm chart for Kubernetes

We can now get the values for our helm chart by running:

$ helm show values cm-local/my-chart

This returns the values yaml that we can use for our chart. So, if we want to output the values yaml to a file that we can use to deploy a release, we can do:

$ helm show values cm-local/my-chart > my-values.yaml

Now when we want to deploy a release, we can do:

$ helm upgrade --install my-release cm-local/my-chart --values my-values.yaml --namespace test --create-namespace --version 0.0.1

After the release was deployed, we can list the releases by running:

$ helm list

And to view the release history:

$ helm history my-release

Resources

Please find the following information with regards to Helm documentation: - helm docs - helm chart template guide

If you need a kubernetes cluster and you would like to run this locally, find the following documentation in order to do that: - using kind for local kubernetes clusters

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Getting Started With Wiremock

In this tutorial we will use docker to run an instance of wiremock to set up a mock API for us to test our APIs.

Wiremock

Wiremock is a tool for building mock APIs, which enables us to build stable development environments.

Docker and Wiremock

Run a wiremock instance with docker:

docker run -it --rm -p 8080:8080 --name wiremock wiremock/wiremock:2.34.0

Our wiremock instance will then be exposed locally on port 8080, against which we can make a request to create an API mapping:

curl -XPOST -H "Content-Type: application/json" \
  http://localhost:8080/__admin/mappings \
  -d '{"request": {"url": "/testapi", "method": "GET"}, "response": {"status": 200, "body": "{\"result\": \"ok\"}", "headers": {"Content-Type": "application/json"}}}'

The response should be something like this:

{
    "id" : "223a2c0a-8b43-42dc-8ba6-fe973da1e420",
    "request" : {
      "url" : "/testapi",
      "method" : "GET"
    },
    "response" : {
      "status" : 200,
      "body" : "{\"result\": \"ok\"}",
      "headers" : {
        "Content-Type" : "application/json"
      }
    },
    "uuid" : "223a2c0a-8b43-42dc-8ba6-fe973da1e420"
}
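Wiremock also supports richer request matching than just the URL and method; as a sketch (the endpoint and JSONPath expression here are illustrative), a stub that only matches POST requests whose JSON body contains an order_id field could look like this:

curl -XPOST -H "Content-Type: application/json" \
  http://localhost:8080/__admin/mappings \
  -d '{"request": {"url": "/orders", "method": "POST", "bodyPatterns": [{"matchesJsonPath": "$.order_id"}]}, "response": {"status": 201}}'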

Test Wiremock

If we make a GET request against our API:

curl http://localhost:8080/testapi

Our response should be:

{
  "result": "ok"
}

Export Wiremock Mappings

We can export our mappings to a local file named stubs.json with:

curl -s http://localhost:8080/__admin/mappings --output stubs.json

Import Wiremock Mappings

We can import our mappings from our stubs.json file with:

curl -XPOST -v --data-binary @stubs.json http://localhost:8080/__admin/mappings/import

Resources

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Logging With Docker Promtail and Grafana Loki

grafana-loki-promtail

In this post we will use Grafana Promtail to collect all our logs and ship it to Grafana Loki.

About

We will be using Docker Compose and mount the docker socket into the Grafana Promtail container so that it is aware of all the docker events, and configure it so that only containers with the docker label logging=promtail are enabled for logging. Promtail will then scrape those logs and send them to Grafana Loki, where we will visualize them in Grafana.

Promtail

In our promtail configuration config/promtail.yaml:

# https://grafana.com/docs/loki/latest/clients/promtail/configuration/
# https://docs.docker.com/engine/api/v1.41/#operation/ContainerList
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: flog_scrape
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
        filters:
          - name: label
            values: ["logging=promtail"]
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'logstream'
      - source_labels: ['__meta_docker_container_label_logging_jobname']
        target_label: 'job'

You can see we are using the docker_sd_configs provider and filtering only docker containers with the docker label logging=promtail. Once we have those logs, the relabel_configs add the container name as a label, and we also use docker labels like log_stream and logging_jobname to add further labels to our logs.

Grafana Config

We would like to auto-configure our datasources for Grafana, and in config/grafana-datasources.yml we have:

apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    version: 1
    editable: false
    isDefault: true

Docker Compose

Then lastly we have our docker-compose.yml that wires up all our containers:

version: '3.8'

services:
  nginx-app:
    container_name: nginx-app
    image: nginx
    labels:
      logging: "promtail"
      logging_jobname: "containerlogs"
    ports:
      - 8080:80
    networks:
      - app

  grafana:
    image: grafana/grafana:latest
    ports:
      - 3000:3000
    volumes:
      - ./config/grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yaml
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
      - GF_AUTH_DISABLE_LOGIN_FORM=true
    networks:
      - app

  loki:
    image: grafana/loki:latest
    ports:
      - 3100:3100
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - app

  promtail:
    image:  grafana/promtail:latest
    container_name: promtail
    volumes:
      - ./config/promtail.yaml:/etc/promtail/docker-config.yaml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
    command: -config.file=/etc/promtail/docker-config.yaml
    depends_on:
      - loki
    networks:
      - app

networks:
  app:
    name: app

As you can see with our nginx container we define our labels:

  nginx-app:
    container_name: nginx-app
    image: nginx
    labels:
      logging: "promtail"
      logging_jobname: "containerlogs"

Here logging: "promtail" lets promtail know that this container's logs should be scraped, and logging_jobname: "containerlogs" assigns containerlogs to the job label.

Start the stack

If you are following along, all this configuration is available in my github repository: https://github.com/ruanbekker/docker-promtail-loki.

Once you have everything in place you can start it with:

docker-compose up -d

Access nginx on http://localhost:8080

image

Then navigate to grafana on http://localhost:3000 and select explore on the left and select the container:

image

And you will see the logs:

image
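You can also query Loki directly from the Explore view using LogQL; for example, with the labels that we configured in promtail:

{container="nginx-app", job="containerlogs"}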

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

KinD for Local Kubernetes Clusters

kubernetes-kind

In this tutorial we will demonstrate how to use KinD (Kubernetes in Docker) to provision local kubernetes clusters for local development.

About

KinD uses container images to run as “nodes”, so spinning up and tearing down clusters becomes really easy, and running multiple clusters or different versions is as easy as pointing to a different container image.

Configuration such as node count, ports, volumes and image versions can either be controlled via the command line or via a configuration file; more information on that can be found in their documentation:

Installation

Follow the docs for more information, but for mac:

brew install kind

To verify if kind was installed, you can run:

kind version

Create a Cluster

Create the cluster with command line arguments, such as cluster name, the container image:

kind create cluster --name cluster-1 --image kindest/node:v1.24.0

And the output will look something like this:

Creating cluster "cluster-1" ...
 ✓ Ensuring node image (kindest/node:v1.24.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-cluster-1"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster-1

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

I highly recommend installing kubectx, which makes it easy to switch between kubernetes contexts.

Create a Cluster with Config

If you would like to define your cluster configuration as config, you can create a file default-config.yaml with the following for a 2-node cluster, specifying version 1.24.0:

---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.24.0@sha256:0866296e693efe1fed79d5e6c7af8df71fc73ae45e3679af05342239cdc5bc8e
- role: worker
  image: kindest/node:v1.24.0@sha256:0866296e693efe1fed79d5e6c7af8df71fc73ae45e3679af05342239cdc5bc8e

Then create the cluster and point the config:

kind create cluster --name kind-cluster --config default-config.yaml

Interact with the Cluster

View the cluster info:

kubectl cluster-info --context kind-kind-cluster

View cluster contexts:

kubectl config get-contexts

Use context:

kubectl config use-context kind-kind-cluster

View nodes:

kubectl get nodes -o wide

NAME                         STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
kind-cluster-control-plane   Ready    control-plane   2m11s   v1.24.0   172.20.0.5    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4
kind-cluster-worker          Ready    <none>          108s    v1.24.0   172.20.0.4    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4

Deploy Sample Application

We will create a deployment and a service, and port-forward to our service to access our application. You can also specify port configuration for your cluster so that you don’t need to port-forward, which you can find in their port mappings documentation.
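As a sketch, such a port mapping is defined in the kind cluster config like this (the ports here are just examples):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 8080
    protocol: TCP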

I will be using the following commands to generate the manifests, but will also add them to this post:

kubectl create deployment hostname --namespace default --replicas 2 --image ruanbekker/containers:hostname --port 8080 --dry-run=client -o yaml > hostname-deployment.yaml
kubectl expose deployment hostname --namespace default --port=80 --target-port=8080 --name=hostname-http --dry-run=client -o yaml > hostname-service.yaml

The manifest:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: hostname
  name: hostname
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hostname
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hostname
    spec:
      containers:
      - image: ruanbekker/containers:hostname
        name: containers
        ports:
        - containerPort: 8080
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: hostname
  name: hostname-http
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hostname
status:
  loadBalancer: {}

Then apply them with:

kubectl apply -f <name-of-manifest>.yaml

Or, if you used the kubectl commands above to generate them:

kubectl apply -f hostname-deployment.yaml
kubectl apply -f hostname-service.yaml
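
If you want to wait until the deployment is ready before carrying on, you can watch the rollout:

kubectl rollout status deployment/hostname --namespace default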

You can then view your resources with:

kubectl get deployment,pod,service

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hostname   2/2     2            2           9m27s

NAME                            READY   STATUS    RESTARTS   AGE
pod/hostname-7ff58c5644-67vhq   1/1     Running   0          9m27s
pod/hostname-7ff58c5644-wjjbw   1/1     Running   0          9m27s

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/hostname-http   ClusterIP   10.96.218.58   <none>        80/TCP    5m48s
service/kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP   24m

Port forward to your service:

kubectl port-forward svc/hostname-http 8080:80

Then access your application:

curl http://localhost:8080/

Hostname: hostname-7ff58c5644-wjjbw

Delete Kind Cluster

View the clusters:

kind get clusters

Delete a cluster:

kind delete cluster --name kind-cluster
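
Or, to remove every kind cluster on your machine in one go:

kind delete clusters --all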

Extras

I highly recommend using kubectx to switch contexts and kubens to set the default namespace, along with a few aliases:

alias k=kubectl
alias kx=kubectx
alias kns=kubens
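
If you use the k alias, you can also wire up shell completion for it; a minimal sketch for bash (zsh users would use kubectl completion zsh instead):

source <(kubectl completion bash)
complete -o default -F __start_kubectl k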

Thank You

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Ansible Playbook for Your Macbook Homebrew Packages

ansible-macbook-homebrew

In this tutorial I will demonstrate how to use Ansible for Homebrew configuration management. Using Ansible to manage your Homebrew packages helps you maintain a consistent list of packages on your MacBook.

For me personally, when I get a new laptop it’s always a mission to get the same packages installed as before, and Ansible solves that by keeping all our packages defined in configuration management.

Install Ansible

Install Ansible with Python and pip:

python3 -m pip install ansible==4.9.0
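
You can verify the installation with ansible --version. The playbook below uses modules from the community.general collection, which ships with the full ansible package; if you only installed ansible-core, you can add the collection with:

ansible-galaxy collection install community.general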

Ansible Configuration

Create the ansible.cfg configuration file:

[defaults]
inventory = inventory.ini
deprecation_warnings = False

Our inventory.ini defines the information about our target host, which will be localhost, as we are running Ansible against our local machine, our MacBook:

[localhost]
my.laptop  ansible_connection=local

[localhost:vars]
ansible_python_interpreter = /usr/bin/python3
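
With the config and inventory in place, a quick way to test that Ansible can reach the target is the ping module:

ansible all -m ping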

Ansible Playbook

Our playbook homebrew.yaml defines the tasks to add the Homebrew taps, cask packages and Homebrew packages. You can change the packages as you desire, but these are the ones that I use:

- hosts: localhost
  name: Macbook Playbook
  gather_facts: False
  vars:
    TFENV_ARCH: amd64
  tasks:
    - name: Ensures taps are present via homebrew
      community.general.homebrew_tap:
        name: ""
        state: present
      with_items:
        - hashicorp/tap

    - name: Ensures packages are present via homebrew cask
      community.general.homebrew_cask:
        name: ""
        state: present
        install_options: 'appdir=/Applications'
      with_items:
        - visual-studio-code
        - multipass
        - spotify

    - name: Ensures packages are present via homebrew
      community.general.homebrew:
        name: ""
        path: "/Applications"
        state: present
      with_items:
        - openssl
        - readline
        - sqlite3
        - xz
        - zlib
        - jq
        - yq
        - wget
        - go
        - kubernetes-cli
        - fzf
        - sshuttle
        - hugo
        - helm
        - kind
        - awscli
        - gnupg
        - kubectx
        - stern
        - terraform
        - tfenv
        - pyenv
        - jsonnet
      ignore_errors: yes
      tags:
        - packages

Deploy Playbook

Now you can run the playbook using:

ansible-playbook homebrew.yaml
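
If you first want to see what the playbook would change without installing anything, you can do a dry run with check mode:

ansible-playbook homebrew.yaml --check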

Source Code

The code can be found in my GitHub repository: https://github.com/ruanbekker/ansible-macbook-setup

Thanks

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.

Docker Multistage Builds for Hugo

blog-ruanbekker-multistage-builds

In this tutorial I will demonstrate how to keep your docker container images nice and slim with the use of multistage builds for a hugo documentation project.

Hugo is a static content generator, which essentially means that it renders your markdown files into HTML. Therefore we don’t need to include all the content from our project repository in our final container image, as we only need the generated static content (HTML, CSS, JavaScript).

What are we doing today

We will use the DOKS Modern Documentation theme for Hugo as our example project, where we will build and run our documentation website in a docker container, but more importantly make use of multistage builds to optimize the size of our container image.

Our Build Strategy

Since hugo is a static content generator, we will use a node container image as our base. In our build stage we will then build and generate the content using npm run build, which writes the static content to /src/public.

Since we then only need static content, we can use a second stage based on an nginx container image to act as the web server hosting that content. We will copy the static content from our build stage into the second stage and place it under the path defined in our nginx config.

This way we only include the required content on our final container image.

Building our Container Image

First clone the doks GitHub repository and change into the directory:

git clone https://github.com/h-enk/doks
cd doks

Now create a Dockerfile in the root path with the following content:

FROM node:16.15.1 as build
WORKDIR /src
ADD . .
RUN npm install
RUN npm run build

FROM  nginx:alpine
LABEL demonstration.by="Ruan Bekker <@ruanbekker>"
COPY  nginx/config/nginx.conf /etc/nginx/nginx.conf
COPY  nginx/config/app.conf /etc/nginx/conf.d/app.conf
COPY  --from=build /src/public /usr/share/nginx/app

As we can see we are copying two nginx config files to our final image, which we will need to create.
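
One small aside: since the build stage runs ADD . . on the whole repository, a .dockerignore file can keep local artifacts out of the build context and speed up builds. A minimal sketch, to be adjusted for your repository:

node_modules
public
.git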

Create the nginx config directory:

mkdir -p nginx/config

The content for our main nginx config nginx/config/nginx.conf:

user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;

    # timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout  25;
    send_timeout 10;

    # buffer size
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 4k;

    # gzip compression
    gzip  on;
    gzip_vary on;
    gzip_min_length 10240;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
    gzip_disable "MSIE [1-6]\.";

    include /etc/nginx/conf.d/app.conf;
}

And in our main nginx config we include a virtual host config, app.conf, which we also need to create locally. The content of nginx/config/app.conf:

server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/app;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

}

Now that we have our docker config in place, we can build our container image:

docker build -t ruanbekker/hashnode-docs-blogpost:latest .

Then we can review the size of our container image, which is only 27.4MB, pretty neat right?

docker images --filter reference=ruanbekker/hashnode-docs-blogpost

REPOSITORY                          TAG       IMAGE ID       CREATED          SIZE
ruanbekker/hashnode-docs-blogpost   latest    5b60f30f40e6   21 minutes ago   27.4MB
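
If you are curious where those megabytes come from, you can inspect the layers of the final image; the node build stage is not part of it:

docker history ruanbekker/hashnode-docs-blogpost:latest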

Running our Container

Now that we’ve built our container image, we can run our documentation site, mapping the host port (left) to the container port (right) in 80:80:

docker run -it -p 80:80 ruanbekker/hashnode-docs-blogpost:latest
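
If you would rather run it in the background, you can start the container detached and give it a name (docs is just an example name):

docker run -d --name docs -p 80:80 ruanbekker/hashnode-docs-blogpost:latest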

Provided nothing else was already listening on port 80 before you ran the previous command, when you head to http://localhost (if you are running this locally), you should see our documentation site up and running:

image

Thank You

I have published this container image to ruanbekker/hashnode-docs-blogpost.

Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.