Ruan Bekker's Blog

From a Curious mind to Posts on Github

Persistent Volumes With K3d Kubernetes

With k3d we can mount a path from the host into the node containers, and with persistent volumes we can set a hostPath for our persistent volumes. With k3d, all the nodes use the same volume mapping, which maps back to the host.

We will test data persistence by writing a file inside a container, killing the pod, then exec-ing into the new pod to check whether the data persisted.

The k3d Cluster

Create the directory on the host where we will persist the data:

> mkdir -p /tmp/k3dvol

Create the cluster:

> k3d create --name "k3d-cluster" --volume /tmp/k3dvol:/tmp/k3dvol --publish "80:80" --workers 2
> export KUBECONFIG="$(k3d get-kubeconfig --name='k3d-cluster')"
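Before deploying anything, it is worth confirming that kubectl can reach the cluster and that the master and worker nodes registered (a quick check, assuming kubectl is installed on your workstation):

> kubectl get nodes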

Our application will be a busybox container kept running with a ping command, with the persistent volume mounted to /data inside the pod.

Our app.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/k3dvol"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: echo
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
      - image: busybox
        name: echo
        volumeMounts:
          - mountPath: "/data"
            name: task-pv-storage
        command: ["ping", "127.0.0.1"]

Deploy the workload:

> kubectl apply -f app.yml
persistentvolume/task-pv-volume created
persistentvolumeclaim/task-pv-claim created
deployment.apps/echo created

View the persistent volumes:

> kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
task-pv-volume                             1Gi        RWO            Retain           Bound    default/task-pv-claim    manual                  6s

View the Persistent Volume Claims:

> kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
task-pv-claim    Bound    task-pv-volume                             1Gi        RWO            manual         11s

View the pods:

> kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
echo-58fd7d9b6-x4rxj   1/1     Running   0          16s

Exec into the pod:

> kubectl exec -it echo-58fd7d9b6-x4rxj sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  58.4G     36.1G     19.3G  65% /
osxfs                   233.6G    139.7G     86.3G  62% /data
/dev/sda1                58.4G     36.1G     19.3G  65% /etc/hosts
/dev/sda1                58.4G     36.1G     19.3G  65% /dev/termination-log
/dev/sda1                58.4G     36.1G     19.3G  65% /etc/hostname
/dev/sda1                58.4G     36.1G     19.3G  65% /etc/resolv.conf

Write the hostname of the current pod to the persistent volume path:

/ # echo $(hostname)
echo-58fd7d9b6-x4rxj
/ # echo $(hostname) > /data/hostname.txt
/ # exit

Exit the pod and read the content from the host (workstation/laptop):

> cat /tmp/k3dvol/hostname.txt
echo-58fd7d9b6-x4rxj

Look at the nodes in the cluster:

> kubectl get nodes -o wide
NAME                       STATUS   ROLES    AGE   VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION     CONTAINER-RUNTIME
k3d-k3d-cluster-server     Ready    master   13m   v1.17.2+k3s1   192.168.32.2   <none>        Unknown    4.9.184-linuxkit   containerd://1.3.3-k3s1
k3d-k3d-cluster-worker-1   Ready    <none>   13m   v1.17.2+k3s1   192.168.32.4   <none>        Unknown    4.9.184-linuxkit   containerd://1.3.3-k3s1
k3d-k3d-cluster-worker-0   Ready    <none>   13m   v1.17.2+k3s1   192.168.32.3   <none>        Unknown    4.9.184-linuxkit   containerd://1.3.3-k3s1

Delete the pod:

> kubectl delete pod/echo-58fd7d9b6-x4rxj
pod "echo-58fd7d9b6-x4rxj" deleted

Wait until the pod is rescheduled and verify whether it is running on a different node:

> kubectl get pods -o wide
NAME                   READY   STATUS    RESTARTS   AGE   IP          NODE                       NOMINATED NODE   READINESS GATES
echo-58fd7d9b6-fkvbs   1/1     Running   0          35s   10.42.2.9   k3d-k3d-cluster-worker-1   <none>           <none>

Exec into the new pod:

> kubectl exec -it echo-58fd7d9b6-fkvbs sh

Verify that the data persisted:

/ # hostname
echo-58fd7d9b6-fkvbs

/ # cat /data/hostname.txt
echo-58fd7d9b6-x4rxj
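When you are done with this test, the cluster can be removed in the same way as in the other k3d posts later in this blog, and the hostPath data stays behind on your workstation unless you remove it as well (a sketch):

> k3d delete --name "k3d-cluster"
> rm -rf /tmp/k3dvol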

Asynchronous Function With OpenFaas

In this post we will explore how to use asynchronous functions in OpenFaas.

What are we doing

A synchronous request blocks the client until the operation completes, whereas an asynchronous request doesn't block the client, which is nice for long-running tasks or for function invocations that run in the background through NATS Streaming.

We will be building a Python Flask API server which will act as our webhook service. When we invoke our function with a HTTP request, we also include a callback URL as a header, which is the address where the queue worker will post its results.

Then we will make a HTTP request to the synchronous function, where we get the response directly from the function, and a HTTP request to the asynchronous function, where we see the response in the webhook service's logs.

Deploy OpenFaas

Deploy OpenFaas on a k3d Kubernetes cluster if you want to follow along on your laptop. You can follow this post to deploy a kubernetes cluster and install OpenFaas:

Webhook Service

Let's build the Python Flask webhook service. Our application code:

from flask import Flask, request
from logging.config import dictConfig

dictConfig({
    'version': 1,
    'formatters': {'default': {
        'format': '[%(asctime)s] %(levelname)s in %(module)s: %(message)s',
    }},
    'handlers': {'wsgi': {
        'class': 'logging.StreamHandler',
        'stream': 'ext://flask.logging.wsgi_errors_stream',
        'formatter': 'default'
    }},
    'root': {
        'level': 'INFO',
        'handlers': ['wsgi']
    }
})

app = Flask(__name__)

@app.route("/", methods=["POST", "GET"])
def main():
    response = {}

    if request.method == "GET":
        response["event"] = "GET"
        app.logger.info("Received Event: GET")

    elif request.method == "POST":
        response["event"] = request.get_data()
        app.logger.info("Received Event: {}".format(response))

    else:
        response["event"] = "OTHER"

    print("Received Event:")
    print(response)
    return "event: {} \n".format(response)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
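If you want to test the webhook service locally before containerizing it, you can run it directly with Python and post a test event to it (a rough sketch, assuming Python 3 and Flask are installed on your workstation; a POST should return something like event: {'event': b'test'}):

$ pip install flask
$ python app.py &
$ curl -XPOST http://localhost:5000/ -d "test"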

Our Dockerfile:

FROM python:3.7-alpine
RUN pip install flask
ADD app.py /app.py
EXPOSE 5000
CMD ["python", "/app.py"]

Building and Pushing to Docker Hub (or you can use my docker image):

$ docker build -t yourusername/python-flask-webhook:openfaas .
$ docker push yourusername/python-flask-webhook:openfaas

Create the deployment manifest webhook.yml for our webhook service:

$ cat > webhook.yml << EOF
apiVersion: v1
kind: Service
metadata:
  name: webhook-service
spec:
  selector:
    app: webhook
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      name: web
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webhook-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: webhook.localdns.xyz
    http:
      paths:
      - backend:
          serviceName: webhook-service
          servicePort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: webhook
  name: webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook
  template:
    metadata:
      labels:
        app: webhook
    spec:
      containers:
      - name: webhook
        image: ruanbekker/python-flask-webhook:openfaas
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
          name: http
          protocol: TCP
EOF

Now deploy to kubernetes:

$ kubectl apply -f webhook.yml
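Before testing over HTTP, you can optionally confirm that the pod, service and ingress defined in webhook.yml were created (a quick sketch):

$ kubectl get pods -l app=webhook
$ kubectl get svc webhook-service
$ kubectl get ingress webhook-ingress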

After a minute or so, verify that you get a response when making a http request:

$ curl http://webhook.localdns.xyz
event: {'event': 'GET'}

Deploy the OpenFaas Function

We will deploy a dockerfile type function which will return the data that we feed it:

$ faas-cli new --lang dockerfile function-async-task
$ faas-cli up -f function-async-task.yml

Deploying: function-async-task.

Deployed. 202 Accepted.
URL: http://openfaas.localdns.xyz/function/function-async-task

List the functions:

$ faas-cli list
Function                       Invocations      Replicas
function-async-task            0               1

Describe the function:

$ faas-cli describe function-async-task
Name:                function-async-task
Status:              Ready
Replicas:            1
Available replicas:  1
Invocations:         0
Image:               ruanbekker/function-async-task:latest
Function process:
URL:                 http://openfaas.localdns.xyz/function/function-async-task
Async URL:           http://openfaas.localdns.xyz/async-function/function-async-task
Labels:              faas_function : function-async-task
Annotations:         prometheus.io.scrape : false

Testing

Test the synchronous function:

$ curl http://openfaas.localdns.xyz/function/function-async-task -d "test"
test

Test the asynchronous function. Remember that here we need to provide the callback URL that the queue worker will post the result to, which is our webhook service:

$ curl -i -H "X-Callback-Url: http://webhook-service.default.svc.cluster.local:5000" http://openfaas.localdns.xyz/async-function/function-async-task -d "asyyyyync"
HTTP/1.1 202 Accepted
Content-Length: 0
Date: Mon, 17 Feb 2020 13:57:26 GMT
Vary: Accept-Encoding
X-Call-Id: d757c10f-4293-4daa-bf52-bbdc17b7dea3
X-Start-Time: 1581947846737501600

Check the logs of the webhook pod:

$ kubectl logs -f pod/$(kubectl get pods --selector=app=webhook --output=jsonpath="{.items..metadata.name}")
[2020-02-17 13:57:26,774] INFO in app: Received Event: {'event': b'asyyyyync'}
[2020-02-17 13:57:26,775] INFO in internal: 10.42.0.6 - - [17/Feb/2020 13:57:26] "POST / HTTP/1.1" 200 -

Check the logs of the queue worker:

$ kubectl logs -f deployment/queue-worker -n openfaas
[45] Received on [faas-request]: 'sequence:45 subject:"faas-request" data:"{\"Header\":{\"Accept\":[\"*/*\"],\"Accept-Encoding\":[\"gzip\"],\"Content-Length\":[\"9\"],\"Content-Type\":[\"application/x-www-form-urlencoded\"],\"User-Agent\":[\"curl/7.54.0\"],\"X-Call-Id\":[\"d757c10f-4293-4daa-bf52-bbdc17b7dea3\"],\"X-Callback-Url\":[\"http://webhook-service.default.svc.cluster.local:5000\"],\"X-Forwarded-For\":[\"10.42.0.0\"],\"X-Forwarded-Host\":[\"openfaas.localdns.xyz\"],\"X-Forwarded-Port\":[\"80\"],\"X-Forwarded-Proto\":[\"http\"],\"X-Forwarded-Server\":[\"traefik-6787cddb4b-87zss\"],\"X-Real-Ip\":[\"10.42.0.0\"],\"X-Start-Time\":[\"1581947846737501600\"]},\"Host\":\"openfaas.localdns.xyz\",\"Body\":\"YXN5eXl5eW5j\",\"Method\":\"POST\",\"Path\":\"\",\"QueryString\":\"\",\"Function\":\"openfaas-function-cat\",\"CallbackUrl\":{\"Scheme\":\"http\",\"Opaque\":\"\",\"User\":null,\"Host\":\"webhook-service.default.svc.cluster.local:5000\",\"Path\":\"\",\"RawPath\":\"\",\"ForceQuery\":false,\"RawQuery\":\"\",\"Fragment\":\"\"}}" timestamp:1581947846738308800 '
Invoking: openfaas-function-cat with 9 bytes, via: http://gateway.openfaas.svc.cluster.local:8080/function/openfaas-function-cat/
Invoked: openfaas-function-cat [200] in 0.029029s
Callback to: http://webhook-service.default.svc.cluster.local:5000
openfaas-function-cat returned 9 bytes
Posted result for openfaas-function-cat to callback-url: http://webhook-service.default.svc.cluster.local:5000, status: 200

Make 1000 Requests:

$ date > time.date
  for x in {1..1000}
    do
      curl -i -H "X-Callback-Url: http://webhook-service.default.svc.cluster.local:5000" http://openfaas.localdns.xyz/async-function/openfaas-function-cat -d "asyyyyync"
    done
  date >> time.date

View the file with the timestamps that we wrote before and after our requests:

$ cat time.date
Mon Feb 17 16:03:16 SAST 2020
Mon Feb 17 16:03:48 SAST 2020

The last request was processed at:

[2020-02-17 14:03:52,421] INFO in internal: 10.42.0.6 - - [17/Feb/2020 14:03:52] "POST / HTTP/1.1" 200 -
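Since every successful callback shows up as a POST entry in the webhook service's logs, a rough way to check that all 1000 asynchronous invocations were delivered is to count those entries (a sketch, reusing the pod lookup from earlier):

$ kubectl logs pod/$(kubectl get pods --selector=app=webhook --output=jsonpath="{.items..metadata.name}") | grep -c 'POST / HTTP/1.1'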

Thank You

This was a basic example to demonstrate asynchronous functions using OpenFaas.

OpenFaas Documentation:

Traefik Ingress for OpenFaas on Kubernetes (K3d)

In this post we will deploy OpenFaas on Kubernetes locally using k3sup and k3d, then deploy a Traefik Ingress so that we can access the OpenFaas Gateway on HTTP over the standard port 80.

K3d is an amazing wrapper that deploys a k3s cluster on docker, and k3sup makes it very easy to provision OpenFaas to your Kubernetes cluster.

Deploy a Kubernetes Cluster

If you have not installed k3d, you can install k3d on mac with brew:

$ brew install k3d

We will deploy our cluster with 2 worker nodes and publish the host's port 80 to the containers' port 80:

$ k3d create --name="demo" --workers="2" --publish="80:80"

Point the kubeconfig to the location that k3d generated:

$ export KUBECONFIG="$(k3d get-kubeconfig --name='demo')"

Deploy OpenFaas

First we need to get k3sup:

$ curl -sLfS https://get.k3sup.dev | sudo sh

Once k3sup is installed, deploy OpenFaas to your cluster:

$ k3sup app install openfaas

Give it a minute or so and check if everything is running:

$ kubectl get pods -n openfaas
NAMESPACE     NAME                                 READY   STATUS      RESTARTS   AGE
openfaas      alertmanager-546f66b6c6-qtb69        1/1     Running     0          5m
openfaas      basic-auth-plugin-79b9878b7b-7vlln   1/1     Running     0          4m59s
openfaas      faas-idler-db8cd9c7d-8xfpp           1/1     Running     2          4m57s
openfaas      gateway-7dcc6d694d-dmvqn             2/2     Running     0          4m56s
openfaas      nats-d6d574749-rt9vw                 1/1     Running     0          4m56s
openfaas      prometheus-d99669d9b-mfxc8           1/1     Running     0          4m53s
openfaas      queue-worker-75f44b56b9-mhhbv        1/1     Running     0          4m52s

Traefik Ingress

In my scenario, I am using openfaas.localdns.xyz which resolves to 127.0.0.1. Next we need to know which service to route the traffic to, and we can find that with:

$ kubectl get svc/gateway -n openfaas
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
gateway   ClusterIP   10.43.174.57   <none>        8080/TCP   23m

Below is our ingress.yml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: openfaas-gateway-ingress
  namespace: openfaas
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: openfaas.localdns.xyz
    http:
      paths:
      - backend:
          serviceName: gateway
          servicePort: 8080

Apply the ingress:

$ kubectl apply -f ingress.yml
ingress.extensions/openfaas-gateway-ingress created

We can then verify that our ingress is visible:

$ kubectl get ingress -n openfaas
NAMESPACE   NAME                       HOSTS               ADDRESS      PORTS   AGE
openfaas    openfaas-gateway-ingress   openfaas.co.local   172.25.0.4   80      28s
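At this point traefik should be routing requests for openfaas.localdns.xyz to the gateway service. A simple way to confirm the routing from your workstation (a sketch; anything other than a 404 from traefik suggests the ingress is wired up):

$ curl -i http://openfaas.localdns.xyz/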

OpenFaas CLI

Install the OpenFaas CLI:

$ curl -SLsf https://cli.openfaas.com | sudo sh

Export OPENFAAS_URL as our ingress endpoint and OPENFAAS_PREFIX as your dockerhub username:

$ export OPENFAAS_URL=http://openfaas.localdns.xyz
$ export OPENFAAS_PREFIX=ruanbekker # change to your username

Get your credentials for the OpenFaas Gateway and login with the OpenFaas CLI:

$ PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
$ echo -n $PASSWORD | faas-cli login --username admin --password-stdin
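If the login succeeded, the CLI should now be able to talk to the gateway through our ingress (a quick sanity check; the list will be empty until we deploy a function):

$ faas-cli list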

Deploy a Function

Deploy the figlet function as an example:

$ faas-cli store deploy figlet

Deployed. 202 Accepted.
URL: http://openfaas.localdns.xyz/function/figlet

Invoke the function:

$ curl http://openfaas.localdns.xyz/function/figlet -d 'hello, world'
 _          _ _                             _     _
| |__   ___| | | ___    __      _____  _ __| | __| |
| '_ \ / _ \ | |/ _ \   \ \ /\ / / _ \| '__| |/ _` |
| | | |  __/ | | (_) |   \ V  V / (_) | |  | | (_| |
|_| |_|\___|_|_|\___( )   \_/\_/ \___/|_|  |_|\__,_|
                    |/

Delete the Cluster

Delete your k3d Kubernetes Cluster:

$ k3d delete --name demo

Thank You

Install OpenFaas on K3d Kubernetes

In this post we will deploy OpenFaas on kubernetes (k3d).

Kubernetes on k3d

k3d is a helper tool that provisions a kubernetes distribution, called k3s, on docker. To deploy a kubernetes cluster with k3d, you can follow this blog post.

Deploy a 3 Node Kubernetes Cluster

Using k3d, let’s deploy a kubernetes cluster:

$ k3d create --name="demo" --workers="2" --publish="80:80"

Export the kubeconfig:

$ export KUBECONFIG="$(k3d get-kubeconfig --name='demo')"

Verify that you are able to communicate with your kubernetes cluster:

$ kubectl get nodes

Deploy OpenFaas

First we need to get k3sup:

$ curl -sLfS https://get.k3sup.dev | sudo sh

Once k3sup is installed, deploy openfaas to your cluster:

$ k3sup app install openfaas

Give it a minute or so and check if everything is running:

$ kubectl get pods -n openfaas
NAMESPACE     NAME                                 READY   STATUS      RESTARTS   AGE
openfaas      alertmanager-546f66b6c6-qtb69        1/1     Running     0          5m
openfaas      basic-auth-plugin-79b9878b7b-7vlln   1/1     Running     0          4m59s
openfaas      faas-idler-db8cd9c7d-8xfpp           1/1     Running     2          4m57s
openfaas      gateway-7dcc6d694d-dmvqn             2/2     Running     0          4m56s
openfaas      nats-d6d574749-rt9vw                 1/1     Running     0          4m56s
openfaas      prometheus-d99669d9b-mfxc8           1/1     Running     0          4m53s
openfaas      queue-worker-75f44b56b9-mhhbv        1/1     Running     0          4m52s

Install the openfaas-cli:

$ curl -SLsf https://cli.openfaas.com | sudo sh

In a screen session, forward port 8080 to the gateway service:

$ screen -S portfwd-process -m -d sh -c "kubectl port-forward -n openfaas svc/gateway 8080:8080"
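If you prefer not to use screen, the same port-forward can simply be run in the background of your current shell (an alternative sketch):

$ kubectl port-forward -n openfaas svc/gateway 8080:8080 &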

Expose the gateway password as an environment variable:

$ PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)

Then login to the gateway:

$ echo -n $PASSWORD | faas-cli login --username admin --password-stdin

Deploy an OpenFaas Function

To list all the functions available in the store:

$ faas-cli store list

To deploy the figlet function:

$ faas-cli store deploy figlet

Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/figlet

List your deployed functions:

$ faas-cli list
Function                          Invocations     Replicas
figlet                            0                1

Invoke your function:

$ curl http://127.0.0.1:8080/function/figlet -d 'hello, world'
 _          _ _                             _     _
| |__   ___| | | ___    __      _____  _ __| | __| |
| '_ \ / _ \ | |/ _ \   \ \ /\ / / _ \| '__| |/ _` |
| | | |  __/ | | (_) |   \ V  V / (_) | |  | | (_| |
|_| |_|\___|_|_|\___( )   \_/\_/ \___/|_|  |_|\__,_|
                    |/

Delete your Cluster

When you are done, delete your kubernetes cluster:

$ k3d delete --name demo

Thank You

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


Lightweight Development Kubernetes Options: K3d

In this post we will cover a lightweight development kubernetes option called "k3d", which we will deploy on a mac.

What is k3d

k3d is a binary that provisions a k3s kubernetes cluster on docker.

Pre-Requirements

You will require docker and we will be using brew to install k3d on a mac.

Install k3d

Installing k3d is as easy as:

$ brew install k3d

Verify your installation:

$ k3d --version
k3d version v1.3.1

Deploy a 3 Node Cluster

Using k3d, we will deploy a 3 node k3s cluster:

$ k3d create --name="demo" --workers="2" --publish="80:80"

This will deploy a master and 2 worker nodes, and we will also publish our host port 80 to the containers' port 80 (k3s ships with traefik by default).
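Because k3d runs each node as a docker container, you can also see the server and worker containers directly with docker (a quick sketch):

$ docker ps | grep k3d-demo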

Set your kubeconfig:

$ export KUBECONFIG="$(k3d get-kubeconfig --name='demo')"

Test it out by listing your nodes:

$ kubectl get nodes
NAME                STATUS   ROLES    AGE    VERSION
k3d-demo-server     Ready    master   102s   v1.14.6-k3s.1
k3d-demo-worker-0   Ready    worker   102s   v1.14.6-k3s.1
k3d-demo-worker-1   Ready    worker   102s   v1.14.6-k3s.1

That was easy, right?

Deploy a Sample App

We will deploy a simple golang web application that will return the container name upon a http request. We will also make use of the traefik ingress for demonstration.

Our deployment manifest, which I will save as app.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: k3s-demo
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k3d-demo
  template:
    metadata:
      labels:
        app: k3d-demo
    spec:
      containers:
      - name: k3d-demo
        image: ruanbekker/hostname:latest
---
apiVersion: v1
kind: Service
metadata:
  name: k3d-demo
  namespace: default
spec:
  ports:
  - name: http
    targetPort: 8000
    port: 80
  selector:
    app: k3d-demo
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k3d-demo
  annotations:
    kubernetes.io/ingress.class: "traefik"

spec:
  rules:
  - host: k3d-demo.example.org
    http:
      paths:
      - path: /
        backend:
          serviceName: k3d-demo
          servicePort: http

Deploy our application:

$ kubectl apply -f app.yml
deployment.extensions/k3s-demo created
service/k3d-demo created
ingress.extensions/k3d-demo created

Verify that the pods are running:

$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
k3s-demo-f76d866b9-dv5z9   1/1     Running   0          10s
k3s-demo-f76d866b9-qxltk   1/1     Running   0          10s

Make a http request:

$ curl -H "Host: k3d-demo.example.org" http://localhost
Hostname: k3d-demo-f76d866b9-qxltk

Deleting your Cluster

To delete your cluster:

$ k3d delete --name demo

Thank You

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


Run Localstack as a Service Container for AWS Mock Services on Drone CI

In this tutorial we will set up a basic pipeline in drone to make use of service containers: we will provision localstack so that we can use mock AWS services.

Once the localstack service is up, we will create a kinesis stream, put 100 records into the stream, read them from the stream, and finally delete the stream.
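If you want to try these kinesis calls by hand before wiring them into a pipeline, you can run localstack locally with docker and point the aws cli at it, the same way the pipeline below does (a sketch; the port and dummy credentials mirror the pipeline configuration):

$ docker run -d --name localstack -p 4568:4568 localstack/localstack
$ export AWS_ACCESS_KEY_ID=123 AWS_SECRET_ACCESS_KEY=xyz AWS_DEFAULT_REGION=eu-west-1
$ aws --endpoint-url=http://localhost:4568 kinesis list-streams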

Gitea and Drone Stack

If you don’t have the stack set up, have a look at this post where I go into detail on how to get it running.

Create the Drone Config

In gitea, I have created a new git repository and created my drone config as .drone.yml with this pipeline config:

---
kind: pipeline
type: docker
name: localstack

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-localstack
    image: busybox
    commands:
      - sleep 10

  - name: list-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis list-streams

  - name: create-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis create-stream --stream-name mystream --shard-count 1

  - name: describe-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis describe-stream --stream-name mystream

  - name: put-record-into-kinesis
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - for record in $$(seq 1 100); do aws --endpoint-url=http://localstack:4568 kinesis put-record --stream-name mystream --partition-key 123 --data testdata_$$record ; done

  - name: get-record-from-kinesis
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - SHARD_ITERATOR=$$(aws --endpoint-url=http://localstack:4568 kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name mystream --query 'ShardIterator' --output text)
      - for each in $$(aws --endpoint-url=http://localstack:4568 kinesis get-records --shard-iterator $$SHARD_ITERATOR | jq -cr '.Records[].Data'); do echo $$each | base64 -d ; echo "" ; done

  - name: delete-kinesis-stream
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis delete-stream --stream-name mystream

services:
  - name: localstack
    image: localstack/localstack
    privileged: true
    environment:
      DOCKER_HOST: unix:///var/run/docker.sock
    volumes:
      - name: docker-socket
        path: /var/run/docker.sock
      - name: localstack-vol
        path: /tmp/localstack
    ports:
      - 8080

volumes:
- name: localstack-vol
  temp: {}
- name: docker-socket
  host:
    path: /var/run/docker.sock

To explain what we are doing: we bring up localstack as a service container, then use the aws cli tools pointed at the localstack kinesis endpoint to create a kinesis stream, put 100 records into the stream, read them back, and delete the stream afterwards.

Trigger the Pipeline

Then I head over to drone, activate my new git repository and mark the repository as "Trusted". I committed a dummy file to trigger the pipeline, and it should look like this:

image

List Streams:

image

Put Records:

image

Delete Stream:

image

Run Kubernetes (K3s) as a Service Container on Drone CI

Drone services allow you to run a service container that is available for the duration of your build, which is great if you want an ephemeral service to test your applications against.

Today we will experiment with services on drone by deploying a k3s cluster (a kubernetes distribution built by rancher) as a drone service and interacting with our cluster using kubectl.

I will be using multiple pipelines: we first deploy our "dev cluster", and when it's up, we use kubectl to interact with it; once that is done, we deploy our "staging cluster" and do the same.

This is very basic and we are not doing anything special, but this is a starting point and you can do pretty much whatever you want.

What is Drone

If you are not aware of Drone, it is a container-native continuous delivery platform built in Go, and you can check them out here: github.com/drone

Setup Gitea and Drone

If you don’t have the stack set up, have a look at this post where I go into detail on how to get it running.

Create your Git Repo

Go ahead and create a git repo (you can name it anything), and it should look something like this:

image

Create a drone configuration, .drone.yml; my pipeline will look like this:

---
kind: pipeline
type: docker
name: dev

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-k3s
    image: ruanbekker/build-tools
    commands:
      - sleep 30

  - name: prepare-k3s-kubeconfig
    image: alpine
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    detach: false
    commands:
      - sed -i -e "s/127.0.0.1/k3s/g" /k3s-kubeconfig/kubeconfig.yaml

  - name: test-kubernetes
    image: ruanbekker/kubectl
    volumes:
      - name: k3s-kubeconfig
        path: /tmp
    environment:
      KUBECONFIG: /tmp/kubeconfig.yaml
    commands:
      - kubectl get nodes -o wide

services:
  - name: k3s
    image: rancher/k3s:v0.9.1
    privileged: true
    command:
      - server
    environment:
      K3S_KUBECONFIG_OUTPUT: /k3s-kubeconfig/kubeconfig.yaml
      K3S_KUBECONFIG_MODE: 777
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    ports:
      - 6443

volumes:
- name: k3s-kubeconfig
  temp: {}

---
kind: pipeline
type: docker
name: staging

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-k3s
    image: ruanbekker/build-tools
    commands:
      - sleep 30

  - name: prepare-k3s-kubeconfig
    image: alpine
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    detach: false
    commands:
      - sed -i -e "s/127.0.0.1/k3s/g" /k3s-kubeconfig/kubeconfig.yaml

  - name: test-kubernetes
    image: ruanbekker/kubectl
    volumes:
      - name: k3s-kubeconfig
        path: /tmp
    environment:
      KUBECONFIG: /tmp/kubeconfig.yaml
    commands:
      - kubectl get nodes -o wide


services:
  - name: k3s
    image: rancher/k3s:v0.9.1
    privileged: true
    command:
      - server
    environment:
      K3S_KUBECONFIG_OUTPUT: /k3s-kubeconfig/kubeconfig.yaml
      K3S_KUBECONFIG_MODE: 777
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    ports:
      - 6443

volumes:
- name: k3s-kubeconfig
  temp: {}

depends_on:
- dev

In this pipeline you can see that the staging pipeline depends on dev, so the dev pipeline starts by creating the k3s service container; once it's up, I use a step that just sleeps for 30 seconds to allow it to boot.

Then I have defined a volume that persists for the duration of the build, which we use to dump our kubeconfig file and update the hostname of our kubernetes endpoint. Once that is done, our last step sets that file in the environment and uses kubectl to interact with kubernetes.

Once our dev pipeline has finished, our staging pipeline will start.

Activate the Repo in Drone

Head over to drone on port 80 and activate the newly created git repo (make sure that you select "Trusted"), and you will see that the activity feed is empty:

image

Commit a dummy file to git and you should see your pipeline being triggered:

image

Once your pipeline has finished and everything succeeded, you should see the output of your nodes in your kubernetes service container:

image

As I mentioned earlier, we are not doing anything special, but service containers allow us to do some awesome things.

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


Setup Gitea and Drone on Docker 2020 Edition

This post will show how to set up gitea and drone on a docker host with docker-compose. The drone example in this tutorial will be very basic, but in future posts I will focus more on pipeline examples using drone.

I will also use this post to link back to for those who need to set up the stack first.

Deploy Gitea and Drone

Get the docker-compose.yml:

$ wget -O docker-compose.yml https://gist.githubusercontent.com/ruanbekker/27d2cb2e3f4194ee5cfe2bcdc9c4bf52/raw/25590a23e87190a871d70fd57ab461ce303cd286/2020.02.04-gitea-drone_docker-compose.yml

Verify the environment variables and adjust the defaults if you want to change something. If you want your git clone ssh url, as well as the url for gitea, to point to a dns name, then change the following to your dns:

  gitea:
    ...
    environment:
      - ROOT_URL=http://gi.myresolvable.dns:3000
      - SSH_DOMAIN=git.myresolvable.dns

Then deploy:

$ docker-compose up -d
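Once the stack is deployed, you can check that the containers came up (a quick check from the directory containing the compose file):

$ docker-compose ps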

Access your Stack

The default port for Gitea in this setup is port 3000:

image

Initial configuration will be pre-populated from our environment variables:

image

From the additional settings section, create your admin user (this user is referenced in our docker-compose as well).

image

Because I am using gitea as my hostname, you will be redirected to http://gitea:3000/user/login. If you don't have a host entry set up for that, it will fail, but you can just replace it with your server's IP in the request url and it will take you to the login screen. After logging on, you should see this screen:

image

Access drone on port 80 and you will be directed to the login screen:

image

Use the same credentials that you have used to sign up with gitea, and after logging on, you should see this:

image

If ever your login does not work, just delete the drone access token on gitea (gitea:3000/user/settings/applications).

Create a Git Repository

On gitea, create a new git repository:

image

You should now see your git repository:

image

Create a new file .drone.yml with the following content:

kind: pipeline
name: hello-world
type: docker

steps:
  - name: say-hello
    image: busybox
    commands:
      - echo hello-world

It should look like this:

image

Configure Drone

Commit the file in your git repository and head over to drone (which should be available on port 80) and select "Sync"; after a couple of seconds you should see the git repository:

image

Select "Activate" and "Activate Repository"; on the next screen select "Trusted", verify that the configuration file name is the same as the one we created, then select save:

image

Trigger the Build

If you click on "Activity Feed" you should see an empty feed. Head back to git and commit a dummy file to trigger the build. I will create a file named trigger with the value 1 as my dummy file.

After committing the file, you will see on drone that the build started:

image

When we select the build, you can see we have a clone step and the step that we defined to echo “hello-world”:

image

Thank You

This was a basic introduction to gitea and drone, but I will use this post in conjunction with more gitea examples in the future.

Setup Thanos on Docker: A Highly Available Prometheus

Today we will look at Thanos, an open source, highly available prometheus setup with long term storage capabilities, which we will run on docker to simplify the setup.

Note that running this proof of concept does not make it highly available, as we will run everything on one host, but it will give you a feel for what Thanos is about. In a future post, I will set up Thanos in a multi-node environment.

Prometheus

If you are not familiar with Prometheus, then have a look at their documentation, but in short, prometheus is an open source monitoring system and time series database developed by soundcloud.

Prometheus includes a rich, multidimensional data model, a concise and powerful query language called PromQL, an efficient embedded time series database, and over 150 integrations with third-party systems.

Thanos

Thanos is a highly available prometheus setup with long term storage capabilities.

Thanos allows you to ship your data to S3/Minio for long term storage, so you could, for example, only store your "live" data on prometheus for 2 weeks, while everything older than that gets sent to object storage such as amazon s3 or minio. This prevents your prometheus instance from being flooded with data or running out of storage space. The nice thing is that when you query for data older than 2 weeks, it will fetch the data from object storage.

Thanos has a global query view, which essentially means you can query your prometheus metrics from one endpoint backed by multiple prometheus servers or cluster.

You can still use the same tools such as Grafana as it utilizes the same Prometheus Query API.

Thanos provides downsampling and compaction, so that you can downsample your historical data for a massive query speedup when querying large time ranges.

Thanos Components

Thanos is a clustered system of components which can be categorized as follows:

  • Metric sources

    • Thanos provides two components that act as data sources: Prometheus Sidecar and Rule Nodes
    • Sidecar implements gRPC service on top of Prometheus
    • Rule Node directly implements it on top of the Prometheus storage engine it is running
    • Data sources that persist their data for long term storage, do so via the Prometheus 2.0 storage engine
    • Storage engine periodically produces immutable blocks of data for a fixed time range
    • A block's top-level directory includes chunks, index and meta.json files
    • Chunk files hold a few hundred MB worth of chunks each
    • The index file holds all information needed to lookup specific series by their labels and the positions of their chunks.
    • The meta.json file holds metadata about block like stats, time range, and compaction level
  • Stores

    • A Store Node acts as a Gateway to block data that is stored in an object storage bucket
    • It implements the same gRPC API as Data Sources to provide access to all metric data found in the bucket
    • Continuously synchronizes which blocks exist in the bucket and translates requests for metric data into object storage requests
    • Implements various strategies to minimize the number of requests to the object storage
    • Prometheus 2.0 storage layout is optimized for minimal read amplification
    • At this time of writing, only index data is cached
    • Stores and Data Sources are the same, store nodes and data sources expose the same gRPC Store API
    • Store API allows to look up data by a set of label matchers and a time range
    • It then returns compressed chunks of samples as they are found in the block data
    • So it’s purely a data retrieval API and does not provide complex query execution
  • Query Layer

    • Queriers are stateless and horizontally scalable instances that implement PromQL on top of the Store APIs exposed in the cluster
    • Queriers participate in the cluster to be able to resiliently discover all data sources and store nodes
    • Rule nodes in return can discover query nodes to evaluate recording and alerting rules
    • Based on the metadata of store and source nodes, they attempt to minimize the request fanout to fetch data for a particular query
    • The only scalable component of Thanos is the query nodes, as none of the other Thanos components provide sharding
    • Scaling of storage capacity is ensured by relying on an external object storage system
    • Store, rule, and compactor nodes are all expected to scale significantly within a single instance or high availability pair

The information from above was retrieved from their website, feel free to check them out if you want to read more on the concepts of thanos.

The Architecture Overview of Thanos looks like this:

What are we doing today

We will set up a Thanos cluster with Minio, Node-Exporter and Grafana on Docker. Our Thanos setup will consist of 3 prometheus containers, each running with a sidecar container, a store container, 2 query containers, and the remote-write and receive containers which node-exporter will use to ship its metrics to.

The minio container will be used as our long-term storage and the mc container will be used to initialize the storage bucket which is used by thanos.

Deploy the Cluster

Below is the docker-compose.yml and the script to generate the configs for thanos:

Once you have saved the compose as docker-compose.yml and the script as configs.sh you can create the configs:

$ bash configs.sh

The script above creates the data directory and places all the configs that thanos will use in there. Next, deploy the thanos cluster:

$ docker-compose -f docker-compose.yml up

It should look something like this:

$ docker-compose -f docker-compose.yml up
Starting node-exporter ... done
Starting minio         ... done
Starting grafana        ... done
Starting prometheus0    ... done
Starting prometheus1     ... done
Starting thanos-receive  ... done
Starting thanos-store    ... done
Starting prometheus2     ... done
Starting mc             ... done
Starting thanos-sidecar0 ... done
Starting thanos-sidecar1     ... done
Starting thanos-sidecar2     ... done
Starting thanos-remote-write ... done
Starting thanos-query1       ... done
Starting thanos-query0       ... done
Attaching to node-exporter, minio, grafana, mc, prometheus0, prometheus1, thanos-store, prometheus2, thanos-receive, thanos-sidecar0, thanos-sidecar1, thanos-sidecar2, thanos-remote-write, thanos-query0, thanos-query1

Access the Query UI, which looks identical to the Prometheus UI, at http://localhost:10904/graph

It will look more or less like this:

image

When we access minio at http://localhost:9000/minio:

And under the thanos bucket you will see the objects being persisted:

image

When we access grafana at http://localhost:3000/:

Select datasources, add a prometheus datasource and select the endpoint: http://query0:10904, which should look like this:

image

When we create a dashboard, you can test a query with thanos_sidecar_prometheus_up and it should look something like this:

image
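Since the Thanos querier exposes the standard Prometheus HTTP API, you can also run the same query from the command line against the published port (a sketch, assuming the same localhost port mapping used for the Query UI above):

$ curl -s 'http://localhost:10904/api/v1/query?query=thanos_sidecar_prometheus_up'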

Save Output to Local File With Ansible

This playbook demonstrates how you can redirect shell output to a local file.

Inventory

Our inventory.ini file:

[localhost]
localhost

The Script

Our script: /tmp/foo

#!/usr/bin/env bash
echo "foo"
echo "bar"

Apply executable permissions:

$ chmod +x /tmp/foo

Playbook

Our playbook: debug.yml

---
- hosts: localhost
  tasks:
    - shell: /tmp/foo
      register: foo_result
      ignore_errors: True
    - local_action: copy content={{ foo_result.stdout }} dest=file

Running

Running the Ansible Playbook:

$ ansible-playbook -i inventory.ini debug.yml

PLAY [localhost] ********************************************************************************************************************************************************************

TASK [shell] ************************************************************************************************************************************************************************
changed: [localhost]

TASK [copy] *************************************************************************************************************************************************************************
changed: [localhost -> localhost]

PLAY RECAP **************************************************************************************************************************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0

View the locally saved file:

$ cat file
foo
bar

Read More

For more content on Ansible, check out my Ansible category.