Ruan Bekker's Blog

From a Curious mind to Posts on Github

Run Localstack as a Service Container for AWS Mock Services on Drone CI

In this tutorial we will set up a basic pipeline in Drone that makes use of service containers: we will run LocalStack as a service so that we have mock AWS services to test against.

Once the LocalStack service is up, we will create a Kinesis stream, put 100 records into the stream, read them back from the stream and then delete the stream.

Gitea and Drone Stack

If you don’t have the stack set up yet, have a look at this post where I go into detail on how to get it up and running.

Create the Drone Config

In Gitea, I created a new git repository and added my Drone config as .drone.yml with this pipeline config:

---
kind: pipeline
type: docker
name: localstack

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-localstack
    image: busybox
    commands:
      - sleep 10

  - name: list-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis list-streams

  - name: create-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis create-stream --stream-name mystream --shard-count 1

  - name: describe-kinesis-streams
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis describe-stream --stream-name mystream

  - name: put-record-into-kinesis
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - for record in $$(seq 1 100); do aws --endpoint-url=http://localstack:4568 kinesis put-record --stream-name mystream --partition-key 123 --data testdata_$$record ; done

  - name: get-record-from-kinesis
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - SHARD_ITERATOR=$$(aws --endpoint-url=http://localstack:4568 kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name mystream --query 'ShardIterator' --output text)
      - for each in $$(aws --endpoint-url=http://localstack:4568 kinesis get-records --shard-iterator $$SHARD_ITERATOR | jq -cr '.Records[].Data'); do echo $$each | base64 -d ; echo "" ; done

  - name: delete-kinesis-stream
    image: ruanbekker/awscli
    environment:
      AWS_ACCESS_KEY_ID: 123
      AWS_SECRET_ACCESS_KEY: xyz
      AWS_DEFAULT_REGION: eu-west-1
    commands:
      - aws --endpoint-url=http://localstack:4568 kinesis delete-stream --stream-name mystream

services:
  - name: localstack
    image: localstack/localstack
    privileged: true
    environment:
      DOCKER_HOST: unix:///var/run/docker.sock
    volumes:
      - name: docker-socket
        path: /var/run/docker.sock
      - name: localstack-vol
        path: /tmp/localstack
    ports:
      - 8080

volumes:
- name: localstack-vol
  temp: {}
- name: docker-socket
  host:
    path: /var/run/docker.sock

To explain what we are doing: we bring up LocalStack as a service container, then use the AWS CLI pointed at the LocalStack Kinesis endpoint to create a Kinesis stream, put 100 records into it, read the records back and finally delete the stream.
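If you want to try the same flow locally before wiring it into a pipeline, a rough equivalent looks like this (assuming you have Docker and the AWS CLI installed, and the same older LocalStack image used above, where Kinesis listens on port 4568):

# run localstack locally and point the aws cli at it (illustrative only, not part of the pipeline)
$ docker run -d --name localstack -p 4568:4568 localstack/localstack

# dummy credentials are fine, localstack does not validate them
$ export AWS_ACCESS_KEY_ID=123 AWS_SECRET_ACCESS_KEY=xyz AWS_DEFAULT_REGION=eu-west-1

$ aws --endpoint-url=http://localhost:4568 kinesis create-stream --stream-name mystream --shard-count 1
$ aws --endpoint-url=http://localhost:4568 kinesis list-streams
$ aws --endpoint-url=http://localhost:4568 kinesis delete-stream --stream-name mystream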

Trigger the Pipeline

Then I head over to Drone, activate my new git repository and mark the repository as “Trusted”. I committed a dummy file to trigger the pipeline, and it should look like this:

image

List Streams:

image

Put Records:

image

Delete Stream:

image

Run Kubernetes (K3s) as a Service Container on Drone CI

Drone services allow you to run a service container that is available for the duration of your build, which is great if you want an ephemeral service to test your applications against.

Today we will experiment with services on Drone: we will deploy a k3s cluster (a lightweight Kubernetes distribution built by Rancher) as a Drone service and interact with it using kubectl.

I will be using multiple pipelines: we first deploy our “dev cluster”, and once it’s up we use kubectl to interact with it; when that is done, we deploy our “staging cluster” and do the same.

This is very basic and we are not doing anything special, but this is a starting point and you can do pretty much whatever you want.

What is Drone

If you are not familiar with Drone: Drone is a container-native continuous delivery platform written in Go, and you can check it out here: github.com/drone

Setup Gitea and Drone

If you don’t have the stack set up yet, have a look at this post where I go into detail on how to get it up and running.

Create your Git Repo

Go ahead and create a git repo, you can name it anything, then it should look something like this:

image

Create a Drone configuration named .drone.yml; my pipeline looks like this:

---
kind: pipeline
type: docker
name: dev

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-k3s
    image: ruanbekker/build-tools
    commands:
      - sleep 30

  - name: prepare-k3s-kubeconfig
    image: alpine
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    detach: false
    commands:
      - sed -i -e "s/127.0.0.1/k3s/g" /k3s-kubeconfig/kubeconfig.yaml

  - name: test-kubernetes
    image: ruanbekker/kubectl
    volumes:
      - name: k3s-kubeconfig
        path: /tmp
    environment:
      KUBECONFIG: /tmp/kubeconfig.yaml
    commands:
      - kubectl get nodes -o wide

services:
  - name: k3s
    image: rancher/k3s:v0.9.1
    privileged: true
    command:
      - server
    environment:
      K3S_KUBECONFIG_OUTPUT: /k3s-kubeconfig/kubeconfig.yaml
      K3S_KUBECONFIG_MODE: 777
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    ports:
      - 6443

volumes:
- name: k3s-kubeconfig
  temp: {}

---
kind: pipeline
type: docker
name: staging

platform:
  os: linux
  arch: amd64

steps:
  - name: wait-for-k3s
    image: ruanbekker/build-tools
    commands:
      - sleep 30

  - name: prepare-k3s-kubeconfig
    image: alpine
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    detach: false
    commands:
      - sed -i -e "s/127.0.0.1/k3s/g" /k3s-kubeconfig/kubeconfig.yaml

  - name: test-kubernetes
    image: ruanbekker/kubectl
    volumes:
      - name: k3s-kubeconfig
        path: /tmp
    environment:
      KUBECONFIG: /tmp/kubeconfig.yaml
    commands:
      - kubectl get nodes -o wide


services:
  - name: k3s
    image: rancher/k3s:v0.9.1
    privileged: true
    command:
      - server
    environment:
      K3S_KUBECONFIG_OUTPUT: /k3s-kubeconfig/kubeconfig.yaml
      K3S_KUBECONFIG_MODE: 777
    volumes:
      - name: k3s-kubeconfig
        path: /k3s-kubeconfig
    ports:
      - 6443

volumes:
- name: k3s-kubeconfig
  temp: {}

depends_on:
- dev

In this config you can see that the staging pipeline depends on dev, so the dev pipeline starts first by creating the k3s service container. Since the service needs a moment to boot, the first step simply sleeps for 30 seconds.

Then I have defined a volume that persists for the duration of the build, which we use to dump our kubeconfig file and rewrite the hostname of our Kubernetes endpoint (the sed step replaces 127.0.0.1 with the k3s service name). Once that is done, the last step points the KUBECONFIG environment variable at that file and uses kubectl to interact with Kubernetes.
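By hand, those two steps boil down to something like this (using the paths from the pipeline above):

# rewrite the api server address in the kubeconfig so it points at the k3s service hostname
$ sed -i -e "s/127.0.0.1/k3s/g" /k3s-kubeconfig/kubeconfig.yaml

# tell kubectl which kubeconfig to use, then talk to the cluster
$ export KUBECONFIG=/tmp/kubeconfig.yaml
$ kubectl get nodes -o wide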

Once our dev pipeline has finished, our staging pipeline will start.

Activate the Repo in Drone

Head over to Drone on port 80 and activate the newly created git repo (make sure that you select “Trusted”), and you will see that the activity feed is empty:

image

Commit a dummy file to git and you should see your pipeline being triggered:

image

Once your pipeline has finished and everything succeeded, you should see the output of your nodes in your kubernetes service container:

image

As I mentioned earlier, we are not doing anything special here, but service containers allow us to do some awesome things.

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


Setup Gitea and Drone on Docker 2020 Edition

This post will show how to set up Gitea and Drone on a Docker host with docker-compose. The Drone example in this tutorial will be very basic, but in future posts I will focus more on pipeline examples using Drone.

I will also use this post to link back to for anyone who needs to set up the stack first.

Deploy Gitea and Drone

Get the docker-compose.yml:

$ wget -O docker-compose.yml https://gist.githubusercontent.com/ruanbekker/27d2cb2e3f4194ee5cfe2bcdc9c4bf52/raw/25590a23e87190a871d70fd57ab461ce303cd286/2020.02.04-gitea-drone_docker-compose.yml

Verify the environment variables and adjust the defaults if you want to change anything. If you want your git clone SSH URL, as well as the Gitea URL, to point to a DNS name, change the following to your DNS name:

  gitea:
    ...
    environment:
      - ROOT_URL=http://gi.myresolvable.dns:3000
      - SSH_DOMAIN=git.myresolvable.dns

Then deploy:

$ docker-compose up -d

Access your Stack

The default port for Gitea in this setup is port 3000:

image

Initial configuration will be pre-populated from our environment variables:

image

From the additional settings section, create your admin user (this user is referenced in our docker-compose as well):

image

Because I am using gitea as my hostname, you will be redirected to http://gitea:3000/user/login. If you don’t have a host entry set up for that hostname the request will fail, but you can simply replace the hostname with your server’s IP in the request URL and it will take you to the login screen. After logging on, you should see this screen:

image

Access Drone on port 80 and you will be directed to the login screen:

image

Use the same credentials that you have used to sign up with gitea, and after logging on, you should see this:

image

If your login ever stops working, just delete the Drone access token on Gitea (gitea:3000/user/settings/applications).

Create a Git Repository

On gitea, create a new git repository:

image

You should now see your git repository:

image

Create a new file .drone.yml with the following content:

kind: pipeline
name: hello-world
type: docker

steps:
  - name: say-hello
    image: busybox
    commands:
      - echo hello-world

It should look like this:

image

Configure Drone

Commit the file to your git repository and head over to Drone (which should be available on port 80) and select “Sync”; after a couple of seconds you should see the git repository:

image

Select “Activate” and “Activate Repository”, then on the next screen select “Trusted”, verify that the configuration file name matches the one we created, and select Save:

image

Trigger the Build

If you click on “Activity Feed” you should see an empty feed. Head back to git and commit a dummy file to trigger the build. I will create a file named trigger with the value 1 as my dummy file.
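On the command line that looks something like this (assuming your default branch is master):

$ echo "1" > trigger
$ git add trigger
$ git commit -m "trigger the build"
$ git push origin master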

After committing the file, you will see on drone that the build started:

image

When we select the build, you can see we have a clone step and the step that we defined to echo “hello-world”:

image

Thank You

This was a basic introduction to Gitea and Drone, but I will use this post in conjunction with more Gitea examples in the future.

Setup Thanos on Docker: A Highly Available Prometheus

Today we will look at Thanos, an open source, highly available Prometheus setup with long-term storage capabilities, which we will run on Docker to simplify the setup.

Note that running this proof of concept does not make it highly available, as we will run everything on one host, but it will give you a feel for what Thanos is about. In a future post, I will set up Thanos in a multi-node environment.

Prometheus

If you are not familiar with Prometheus, have a look at their documentation, but in short: Prometheus is an open source monitoring system and time series database originally developed at SoundCloud.

Prometheus includes a rich, multidimensional data model, a concise and powerful query language called PromQL, an efficient embedded time series database, and over 150 integrations with third-party systems.

Thanos

Thanos is a highly available Prometheus setup with long-term storage capabilities.

Thanos allows you to ship your data to S3/Minio for long-term storage, so you could, for example, keep only the last 2 weeks of “live” data on Prometheus while everything older than that gets sent to object storage such as Amazon S3 or Minio. This keeps your Prometheus instance from being flooded with data and prevents you from running out of storage space. The nice thing is that when you query for data older than 2 weeks, it is fetched from object storage.
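Under the hood, the Thanos components that talk to object storage (the sidecar, store and compactor) are pointed at a bucket via a small object storage config. An S3/Minio-style example, with a placeholder bucket name, endpoint and credentials, looks roughly like this:

type: S3
config:
  bucket: thanos
  endpoint: minio:9000
  access_key: <your-access-key>
  secret_key: <your-secret-key>
  insecure: true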

Thanos has a global query view, which essentially means you can query your Prometheus metrics from one endpoint backed by multiple Prometheus servers or clusters.
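That global view comes from the query component fanning out to every Store API it knows about. As a rough sketch, a querier can be pointed at multiple sidecars and a store gateway like this (the addresses and ports are placeholders):

$ thanos query \
    --http-address 0.0.0.0:10904 \
    --store sidecar0:10901 \
    --store sidecar1:10901 \
    --store store:10901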

You can still use the same tools, such as Grafana, as it exposes the same Prometheus query API.

Thanos provides downsampling and compaction, so that you can downsample your historical data for a massive query speedup when querying large time ranges.
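Downsampling and compaction are handled by the compactor, which runs against the same bucket. A minimal invocation looks roughly like this, where bucket.yml is an object storage config like the one shown above:

$ thanos compact \
    --data-dir /tmp/thanos-compact \
    --objstore.config-file bucket.yml \
    --wait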

Thanos Components

Thanos is a clustered system of components which can be categorized as follows:

  • Metric sources

    • Thanos provides two components that act as data sources: Prometheus Sidecar and Rule Nodes
    • Sidecar implements gRPC service on top of Prometheus
    • Rule Nodes directly implement it on top of the Prometheus storage engine they are running
    • Data sources that persist their data for long term storage do so via the Prometheus 2.0 storage engine
    • The storage engine periodically produces immutable blocks of data for a fixed time range
    • A block's top-level directory includes chunks, index and meta.json files
    • Chunk files hold a few hundred MB worth of chunks each
    • The index file holds all information needed to look up specific series by their labels and the positions of their chunks
    • The meta.json file holds metadata about the block, like stats, time range, and compaction level
  • Stores

    • A Store Node acts as a Gateway to block data that is stored in an object storage bucket
    • It implements the same gRPC API as Data Sources to provide access to all metric data found in the bucket
    • Continuously synchronizes which blocks exist in the bucket and translates requests for metric data into object storage requests
    • Implements various strategies to minimize the number of requests to the object storage
    • Prometheus 2.0 storage layout is optimized for minimal read amplification
    • At this time of writing, only index data is cached
    • Stores and data sources are equivalent in that store nodes and data sources expose the same gRPC Store API
    • The Store API allows looking up data by a set of label matchers and a time range
    • It then returns compressed chunks of samples as they are found in the block data
    • So it’s purely a data retrieval API and does not provide complex query execution
  • Query Layer

    • Queriers are stateless and horizontally scalable instances that implement PromQL on top of the Store APIs exposed in the cluster
    • Queriers participate in the cluster to be able to resiliently discover all data sources and store nodes
    • Rule nodes in return can discover query nodes to evaluate recording and alerting rules
    • Based on the metadata of store and source nodes, they attempt to minimize the request fanout to fetch data for a particular query
    • The only scalable components of Thanos are the query nodes, as none of the Thanos components provide sharding
    • Scaling of storage capacity is ensured by relying on an external object storage system
    • Store, rule, and compactor nodes are all expected to scale significantly within a single instance or high availability pair

The information above was retrieved from their website; feel free to check it out if you want to read more on the concepts of Thanos.

The Architecture Overview of Thanos looks like this:

What are we doing today

We will set up a Thanos cluster with Minio, Node Exporter and Grafana on Docker. Our Thanos setup will consist of 3 Prometheus containers, each running with a sidecar container, a store container, 2 query containers, plus the remote-write and receive containers which Node Exporter will use to ship its metrics to.

The minio container will be used as our long-term storage and the mc container will be used to initialize the storage bucket which is used by thanos.

Deploy the Cluster

Below is the docker-compose.yml and the script to generate the configs for thanos:

Once you have saved the compose as docker-compose.yml and the script as configs.sh you can create the configs:

$ bash configs.sh

The script above creates the data directory and places all the configs that Thanos will use in there. Next, deploy the Thanos cluster:

$ docker-compose -f docker-compose.yml up

It should look something like this:

$ docker-compose -f docker-compose.yml up
Starting node-exporter ... done
Starting minio         ... done
Starting grafana        ... done
Starting prometheus0    ... done
Starting prometheus1     ... done
Starting thanos-receive  ... done
Starting thanos-store    ... done
Starting prometheus2     ... done
Starting mc             ... done
Starting thanos-sidecar0 ... done
Starting thanos-sidecar1     ... done
Starting thanos-sidecar2     ... done
Starting thanos-remote-write ... done
Starting thanos-query1       ... done
Starting thanos-query0       ... done
Attaching to node-exporter, minio, grafana, mc, prometheus0, prometheus1, thanos-store, prometheus2, thanos-receive, thanos-sidecar0, thanos-sidecar1, thanos-sidecar2, thanos-remote-write, thanos-query0, thanos-query1

Access the Query UI at http://localhost:10904/graph; it looks identical to the Prometheus UI.

It will look more or less like this:

image

When we access Minio at http://localhost:9000/minio, under the thanos bucket you will see the objects being persisted:

image

When we access Grafana at http://localhost:3000/, select Data Sources, add a Prometheus datasource and set the endpoint to http://query0:10904, which should look like this:

image

When we create a dashboard, you can test a query with thanos_sidecar_prometheus_up and it should look something like this:

image
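Since the query component speaks the same HTTP API as Prometheus, you can also query it directly without Grafana, for example:

$ curl -s 'http://localhost:10904/api/v1/query?query=thanos_sidecar_prometheus_up'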

Save Output to Local File With Ansible

This playbook demonstrates how you can redirect shell output to a local file.

Inventory

Our inventory.ini file:

[localhost]
localhost

The Script

Our script: /tmp/foo

#!/usr/bin/env bash
echo "foo"
echo "bar"

Apply executable permissions:

$ chmod +x /tmp/foo

Playbook

Our playbook: debug.yml

---
- hosts: localhost
  tasks:
    - shell: /tmp/foo
      register: foo_result
      ignore_errors: True
    - local_action: copy content="{{ foo_result.stdout }}" dest=file
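As a side note, the same result can be achieved with the copy module and delegate_to, which reads a bit more explicitly (a sketch, not part of the original playbook):

---
- hosts: localhost
  tasks:
    - shell: /tmp/foo
      register: foo_result
      ignore_errors: True
    - name: save the captured output locally
      copy:
        content: "{{ foo_result.stdout }}"
        dest: file
      delegate_to: localhost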

Running

Running the Ansible Playbook:

$ ansible-playbook -i inventory.ini debug.yml

PLAY [localhost] ********************************************************************************************************************************************************************

TASK [shell] ************************************************************************************************************************************************************************
changed: [localhost]

TASK [copy] *************************************************************************************************************************************************************************
changed: [localhost -> localhost]

PLAY RECAP **************************************************************************************************************************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0

View the local saved file:

$ cat file
foo
bar

Read More

For more content on Ansible check out my Ansible category

Environment Variables With Ansible

This is a quick post on how to use environment variables in Ansible.

Inventory

Our inventory.ini file looks like this:

[localhost]
localhost

Across Tasks

You can set environment variables across tasks, and let your tasks inherit the variables:

- hosts: localhost
  vars:
    var_mysecret: secret123

  tasks:
    - name: echo my env var
      environment:
        MYNAME: ""
      shell: "echo hello $MYNAME > /tmp/bla.txt"
      args:
        creates: /tmp/bla.txt

When we run the task:

$ ansible-playbook -i inventory.ini -u ruan task.yml

Check the output:

$ cat /tmp/bla.txt
hello secret123
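If you want the variable to be inherited by every task in the play without repeating it, you can also set environment at the play level, something like this (bla3.txt is just an illustrative output file):

- hosts: localhost
  vars:
    var_mysecret: secret123
  environment:
    MYNAME: "{{ var_mysecret }}"

  tasks:
    - name: first task sees the variable
      shell: "echo hello $MYNAME > /tmp/bla3.txt"
      args:
        creates: /tmp/bla3.txt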

Environment Variables Per Task

You can set environment variables per task:

- hosts: dev
  tasks:
    - name: echo my env var
      environment:
        MYNAME: "RUAN"
      shell: "echo hello $MYNAME > /tmp/bla2.txt"
      args:
        creates: /tmp/bla2.txt

Running the task:

$ ansible-playbook -i inventory.ini -u ruan task.yml

Checking the output:

$ cat /tmp/bla2.txt
hello RUAN

Read More

Read more on environment variables in ansible in their documentation

Setup a WireGuard VPN Server on Linux

Installation

I will be installing my WireGuard VPN server on an Ubuntu 18 server; for other distributions you can have a look at their docs.

$ sudo add-apt-repository ppa:wireguard/wireguard
$ sudo apt update
$ sudo apt install wireguard -y

Configuration

On the server, create the keys directory where we will save our keys:

$ mkdir -p /etc/wireguard/keys

Create the private and public key:

$ wg genkey | tee privatekey | wg pubkey > publickey

Generate the pre-shared key:

$ wg genpsk > client.psk

On the client, create the keys directory:

$ mkdir -p ~/wireguard/keys

Create the private and public keys:

$ cd ~/wireguard/keys
$ wg genkey | tee privatekey | wg pubkey > publickey

Populate the server config:

$ cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <output-of-server.privatekey>
Address = 192.168.199.1/32
ListenPort = 8999
PostUp = sysctl -w net.ipv4.ip_forward=1; iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE

[Peer]
PublicKey = <output-of-client.publickey>
PresharedKey = <output-of-client.psk>
AllowedIPs = 192.168.199.2/32

Populate the client config:

$ cat ~/wireguard/wg0.conf
[Interface]
PrivateKey = <output-of-client.privatekey>
Address = 192.168.199.2/24
DNS = 1.1.1.1

[Peer]
PublicKey = <output-of-server.publickey>
PresharedKey = <output-of-client.psk>
Endpoint = <server-public-ip>:8999
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
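To be clear on which key goes where: each side's [Interface] PrivateKey is its own private key, each [Peer] PublicKey is the other side's public key, and the pre-shared key is identical on both ends. Assuming the keys were generated inside the key directories created above, that means roughly:

# on the server
$ cat /etc/wireguard/keys/privatekey   # -> server [Interface] PrivateKey
$ cat /etc/wireguard/keys/publickey    # -> client [Peer] PublicKey
$ cat /etc/wireguard/keys/client.psk   # -> PresharedKey on both sides

# on the client
$ cat ~/wireguard/keys/privatekey      # -> client [Interface] PrivateKey
$ cat ~/wireguard/keys/publickey       # -> server [Peer] PublicKey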

Start the Server

On the server, enable and start the service:

$ systemctl enable wg-quick@wg0.service
$ wg-quick up wg0

On the client, connect to the VPN:

$ wg-quick up ~/wireguard/wg0.conf

Verify the status:

$ wg show
interface: wg0
  public key: +Giwk8Y5KS5wx9mw0nEIdQODI+DsR+3TcbMxjJqfZys=
  private key: (hidden)
  listening port: 8999

peer: Q8LGMj6CeCYQJp+sTu74mLMRoPFAprV8PsnS0cu9fDI=
  preshared key: (hidden)
  endpoint: 102.132.208.80:57800
  allowed ips: 192.168.199.2/32
  latest handshake: 22 seconds ago
  transfer: 292.00 KiB received, 322.15 KiB sent

Check if you can ping the private IP address of the VPN peer:

$ ping 192.168.199.2
PING 192.168.199.2 (192.168.199.2): 56 data bytes
64 bytes from 192.168.199.2: icmp_seq=0 ttl=63 time=304.844 ms

Managing Background Processes With Screen

image

This is a quick post on how to create, manage and delete background processes with screen.

About

Screen allows you to run processes in a different session, so when you exit your terminal the process will still be running.

Install

Install screen on your operating system of choice; for Debian-based systems it will be:

$ sudo apt install screen -y

Working with Screen

To create a screen session you can just run screen, or you can pass the -S argument to give the session a name:

$ screen -S my-screen-session

You will now be dropped into a screen session; run a ping:

$ ping 8.8.8.8

To let the ping process keep running in the background, use the key sequence to detach from the screen session:

Ctrl + a, then press d

To view the screen session:

$ screen -ls
There is a screen on:
  45916.my-screen-session (Detached)
1 Socket in /var/folders/jr/dld7mjhn0sx6881xs_0s7rtc0000gn/T/.screen.

To resume the screen session, pass the screen id or screen name as an argument:

$ screen -r my-screen-session
64 bytes from 8.8.8.8: icmp_seq=297 ttl=55 time=7.845 ms
64 bytes from 8.8.8.8: icmp_seq=298 ttl=55 time=6.339 ms

Scripting

To start a process in a detached screen session as a one-liner, which is useful for scripting, you can do it like this:

$ screen -S ping-process -m -d sh -c "ping 8.8.8.8"

Listing the screen session:

$ screen -ls
There is a screen on:
  46051.ping-process  (Detached)

Terminating the screen session:

$ screen -S ping-process -X quit

Thank You

Let me know what you think. If you liked my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


2019: My Personal Highlights for the Year

image

2019 was a great year! I met some awesome people from Civo, Traefik, Rancher, OpenFaas, Docker, Elastic, AWS and the list goes on.

Thank you to every one of you that helped me during this year, to the ones who inspired me, and for all the great motivation, support and shout-outs! There are so many people to thank, and even if you are not mentioned here: if you ever interacted with me, helped me or supported me, thank you to each and every one of you!

Below is a list of some of my personal highlights:

Number of Blogposts per Website:

Learnings

  • You cannot be good at everything
  • You need to switch off every now and then
  • Work / Life balance is important
  • A hobby other than work does wonders to help switch off every now and then

Contributions on Github

Contributions for 2019:

image

Most Starred Github Repository:

image

Most Starred Gist:

image

Analytics

Some analytics for my blog posts:

blog.ruanbekker.com

Analytics for blog.ruanbekker.com:

image

Top 10 Most Viewed Pages:

image

Most Viewed by Country:

image

sysadmins.co.za

Analytics for sysadmins.co.za:

image

Top 10 Most Viewed Pages:

image

Most Viewed by Country:

image

Authors on Blogposts:

A list of places where I blog:

Proud Moments

Some of my proud moments on Twitter:

2019.06.11 - Scaleway Tweet on Kapsule

2019.06.11 - Traefik Tweet on Kubernetes

2019.07.13 - Mention from OpenFaas on VSCode Demo

2019.07.14 - Elasticsearch Tweet from Devconnected

2019.08.14 - Rancher’s Tweet on my Rpi K3s Blogpost

2019.08.19 - Civo Learn Guide

2019.10.09 - Civo Marketplace MongoDB

2019.10.23 - Civo Marketplace Jenkins

2019.11.05 - Traefik Swag

2019.11.14 - Mentions on Civo Blog for KUBE100

Some proud moments from mentions on blog posts:

2019.08.06 - VPNCloud Peer to Peer Docs

image

2019.08.06 - MarkHeath Blog Post Mention

image

2019.08.08 - Civo Docker Swarm Blogpost

2019.08.13 - Raspberry Pi Post (teamserverless)

2019.10.11 - Serverless Email - Migration OpenFaas Blog post:

Certifications:

MongoDB Basics:

image

MongoDB Cluster Administration:

image

Thank You

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


Setting the Correct Service Name in Datadog Logging for Docker Swarm

For some reason, when logging to Datadog from your applications running on Docker Swarm, the service names in Datadog appear as the name of the Docker image. The application talks to the Datadog agent, which runs in global mode on Swarm.

Setting DATADOG_SERVICE_NAME or DD_SERVICE_NAME as environment variables on the swarm service has zero effect, as the service name keeps showing up as the Docker image name, for example:

08496333-01C4-4492-807E-FAC40826AFDE

If we inspect the tags, we can see that the Docker image shows up as the source and maps through as the Datadog service name. As you can see, the swarm service name is what we actually want as the service name (not alpine):

783C6D52-62B2-4F2B-A6D4-28150CC58005
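For context, a swarm service like this could have been deployed along the following lines (an illustrative example with a made-up service name; as mentioned, the DD_SERVICE_NAME environment variable does not change what shows up in Datadog):

$ docker service create \
    --name my-swarm-service \
    --env DD_SERVICE_NAME=my-swarm-service \
    alpine:latest ping 127.0.0.1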

One way to fix this is to set up a pipeline processor. Head over to Logs -> Configuration:

93CEE277-55A6-4DE1-8AE6-A02C64B0ACAD

Select “Pipelines” and add a new pipeline, select the filter source:alpine to limit the results to the alpine image, and give it a name:

0BF3D6A6-9646-442D-A494-8DF489C5217F

Next, add a new processor, set the type to remapper, select the tag group “swarm_service”, set the target attribute to service, and name the processor:

C02092F4-0EEC-4AF9-9E2A-F7A126560CD8

Add a new processor:

5C2F7FB9-8948-4588-A283-86E94BC07513

Select a service remapper, set the attribute to service and name the processor:

852904AE-9395-4B4B-B1F4-54427D88C970

Now when you go back to logs, you will find that the service name is being set to the correct service name in datadog:

0F11DDC4-E99C-4A2F-B6AB-7409B4E7546C

When you inspect one of the logs, you will see that the service attribute is set on the log:

4B098970-6345-40B9-9F90-411D8FE6A9E6