Ruan Bekker's Blog

From a Curious mind to Posts on Github

Run Docker Containers With Terraform

In this post I will demonstrate how to use the terraform docker_container resource from the docker provider to run two docker containers, traefik and nginx, and use the random provider to generate a random URL for us.

Pre-Requisites

You will require terraform and docker to be installed.
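You can quickly verify that both are available on your machine:

terraform version
docker version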

Project Structure

The source code for this post is available on my github repository, and the project structure will look like the following:

image

Our providers.tf:

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.15.0"
    }
    random = {
      version = "~> 3.0"
    }
  }
}

provider "docker" {
  host = "unix:///var/run/docker.sock"
}

provider "random" {}

Our variables.tf:

variable "domain" {
  type    = string
  default = "localdns.xyz"
}

Our outputs.tf:

output "nginx_container_name" {
  value = docker_container.nginx.name
}

output "traefik_container_name" {
  value = docker_container.traefik.name
}

output "traefik_url" {
  value = "http://traefik.${var.domain}/"
}

output "nginx_url" {
  value = "http://www.${random_string.nginx.result}.${var.domain}/"
}

Our main.tf:

resource "random_string" "nginx" {
  length  = 8
  upper   = false
  special = false
}

resource "docker_image" "nginx" {
  name = "nginx:stable-alpine"
}

resource "docker_image" "traefik" {
  name = "traefik:1.7.14"
}

resource "docker_network" "nginx" {
  name   = "docknet"
  driver = "bridge"
}

resource "docker_container" "traefik" {
  name  = "traefik"
  image = docker_image.traefik.name

  networks_advanced {
    name    = docker_network.nginx.name
    aliases = ["docknet"]
  }

  restart = "unless-stopped"
  destroy_grace_seconds = 30
  must_run = true
  memory = 256

  volumes {
    host_path      = "/var/run/docker.sock"
    container_path = "/var/run/docker.sock"
  }

  command = [
    "--api",
    "--docker",
    "--docker.watch",
    "--entrypoints=Name:http Address::80",
    "--logLevel=INFO"
  ]

  ports {
    internal = 80
    external = 80
    ip       = "0.0.0.0"
  }

  labels {
    label = "traefik.enable"
    value = true
  }

  labels {
    label = "traefik.docker.network"
    value = "docknet"
  }

  labels {
    label = "traefik.frontend.rule"
    value = "Host:traefik.${var.domain}"
  }

  labels {
    label = "traefik.port"
    value = 8080
  }

}

resource "docker_container" "nginx" {
  name  = "nginx"
  image = docker_image.nginx.name

  networks_advanced {
    name    = docker_network.nginx.name
    aliases = ["docknet"]
  }

  restart = "unless-stopped"
  destroy_grace_seconds = 30
  must_run = true
  memory = 256

  volumes {
    host_path      = "/Users/ruan/personal/terraform-playground/docker-containers/html"
    container_path = "/usr/share/nginx/html"
  }

  volumes {
    host_path      = "/Users/ruan/personal/terraform-playground/docker-containers/configs/nginx.conf"
    container_path = "/etc/nginx/nginx.conf"
  }

  volumes {
    host_path      = "/Users/ruan/personal/terraform-playground/docker-containers/configs/app.conf"
    container_path = "/etc/nginx/conf.d/app.conf"
  }

  env = [
    "PUID=501",
    "PGID=20"
  ]

  labels {
    label = "traefik.enable"
    value = true
  }

  labels {
    label = "traefik.docker.network"
    value = "docknet"
  }

  labels {
    label = "traefik.frontend.rule"
    value = "Host:www.${random_string.nginx.result}.${var.domain}"
  }

  labels {
    label = "traefik.port"
    value = 80
  }

  depends_on = [
    docker_container.traefik,
    random_string.nginx
  ]

}

Our html/index.html:

<!doctype html>
<html lang="en">
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1">

        <title>Welcome</title>

        <!-- Fonts -->
        <link href="https://fonts.googleapis.com/css?family=Nunito:200,600" rel="stylesheet">

        <!-- Styles -->
        <style>
            html, body {
                background-color: #fff;
                color: #636b6f;
                font-family: 'Nunito', sans-serif;
                font-weight: 200;
                height: 100vh;
                margin: 0;
            }

            .full-height {
                height: 100vh;
            }

            .flex-center {
                align-items: center;
                display: flex;
                justify-content: center;
            }

            .position-ref {
                position: relative;
            }

            .top-right {
                position: absolute;
                right: 10px;
                top: 18px;
            }

            .content {
                text-align: center;
            }

            .title {
                font-size: 84px;
            }

            .links > a {
                color: #636b6f;
                padding: 0 25px;
                font-size: 13px;
                font-weight: 600;
                letter-spacing: .1rem;
                text-decoration: none;
                text-transform: uppercase;
            }

            .m-b-md {
                margin-bottom: 30px;
            }
        </style>
    </head>
    <body>
        <div class="flex-center position-ref full-height">
            <div class="content">
                <div class="title m-b-md">
                    Welcome
                </div>

                <div class="links">
                    <a href="https://ruan.dev" target="_blank">About Me</a>
                </div>
            </div>
        </div>
    </body>
</html>

Our configs/nginx.conf:

user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    keepalive_timeout  65;

    include /etc/nginx/conf.d/app.conf;
}

And lastly, our configs/app.conf:

server {
  listen 80;
  server_name _;

  location / {
    root   /usr/share/nginx/html;
    index  index.html;
  }

  location /healthz {
    return 200 'up';
  }
}

Deployment

Once everything is in place, or if you would rather clone my repository, you can do that by running:

git clone https://github.com/ruanbekker/terraform-docker-container-example
cd terraform-docker-container-example

Then we can initialize terraform by fetching the required plugins:

terraform init

Once that completes we can run a plan:

terraform plan

And that should output something more or less like:

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
  + create

Terraform will perform the following actions:

  # docker_container.nginx will be created
  + resource "docker_container" "nginx" {
      + attach                = false
      + bridge                = (known after apply)
      + command               = (known after apply)
      + container_logs        = (known after apply)
      + destroy_grace_seconds = 30
      + entrypoint            = (known after apply)
      + env                   = [
          + "PGID=20",
          + "PUID=501",
        ]
      + exit_code             = (known after apply)
      + gateway               = (known after apply)
      + hostname              = (known after apply)
      + id                    = (known after apply)
      + image                 = "nginx:stable-alpine"
      + init                  = (known after apply)
      + ip_address            = (known after apply)
      + ip_prefix_length      = (known after apply)
      + ipc_mode              = (known after apply)
      + log_driver            = "json-file"
      + logs                  = false
      + memory                = 256
      + must_run              = true
      + name                  = "nginx"
      + network_data          = (known after apply)
      + read_only             = false
      + remove_volumes        = true
      + restart               = "unless-stopped"
      + rm                    = false
      + security_opts         = (known after apply)
      + shm_size              = (known after apply)
      + start                 = true
      + stdin_open            = false
      + tty                   = false

      + healthcheck {
          + interval     = (known after apply)
          + retries      = (known after apply)
          + start_period = (known after apply)
          + test         = (known after apply)
          + timeout      = (known after apply)
        }

      + labels {
          + label = "traefik.docker.network"
          + value = "docknet"
        }
      + labels {
          + label = "traefik.enable"
          + value = "true"
        }
      + labels {
          + label = "traefik.frontend.rule"
          + value = (known after apply)
        }
      + labels {
          + label = "traefik.port"
          + value = "80"
        }

      + networks_advanced {
          + aliases = [
              + "docknet",
            ]
          + name    = "docknet"
        }

      + volumes {
          + container_path = "/etc/nginx/conf.d/app.conf"
          + host_path      = "/Users/ruan/personal/terraform-playground/docker-containers/configs/app.conf"
        }
      + volumes {
          + container_path = "/etc/nginx/nginx.conf"
          + host_path      = "/Users/ruan/personal/terraform-playground/docker-containers/configs/nginx.conf"
        }
      + volumes {
          + container_path = "/usr/share/nginx/html"
          + host_path      = "/Users/ruan/personal/terraform-playground/docker-containers/html"
        }
    }

  # docker_container.traefik will be created
  + resource "docker_container" "traefik" {
      + attach                = false
      + bridge                = (known after apply)
      + command               = [
          + "--api",
          + "--docker",
          + "--docker.watch",
          + "--entrypoints=Name:http Address::80",
          + "--logLevel=INFO",
        ]
      + container_logs        = (known after apply)
      + destroy_grace_seconds = 30
      + entrypoint            = (known after apply)
      + env                   = (known after apply)
      + exit_code             = (known after apply)
      + gateway               = (known after apply)
      + hostname              = (known after apply)
      + id                    = (known after apply)
      + image                 = "traefik:1.7.14"
      + init                  = (known after apply)
      + ip_address            = (known after apply)
      + ip_prefix_length      = (known after apply)
      + ipc_mode              = (known after apply)
      + log_driver            = "json-file"
      + logs                  = false
      + memory                = 256
      + must_run              = true
      + name                  = "traefik"
      + network_data          = (known after apply)
      + read_only             = false
      + remove_volumes        = true
      + restart               = "unless-stopped"
      + rm                    = false
      + security_opts         = (known after apply)
      + shm_size              = (known after apply)
      + start                 = true
      + stdin_open            = false
      + tty                   = false

      + healthcheck {
          + interval     = (known after apply)
          + retries      = (known after apply)
          + start_period = (known after apply)
          + test         = (known after apply)
          + timeout      = (known after apply)
        }

      + labels {
          + label = "traefik.docker.network"
          + value = "docknet"
        }
      + labels {
          + label = "traefik.enable"
          + value = "true"
        }
      + labels {
          + label = "traefik.frontend.rule"
          + value = "Host:traefik.localdns.xyz"
        }
      + labels {
          + label = "traefik.port"
          + value = "8080"
        }

      + networks_advanced {
          + aliases = [
              + "docknet",
            ]
          + name    = "docknet"
        }

      + ports {
          + external = 80
          + internal = 80
          + ip       = "0.0.0.0"
          + protocol = "tcp"
        }

      + volumes {
          + container_path = "/var/run/docker.sock"
          + host_path      = "/var/run/docker.sock"
        }
    }

  # docker_image.nginx will be created
  + resource "docker_image" "nginx" {
      + id          = (known after apply)
      + latest      = (known after apply)
      + name        = "nginx:stable-alpine"
      + output      = (known after apply)
      + repo_digest = (known after apply)
    }

  # docker_image.traefik will be created
  + resource "docker_image" "traefik" {
      + id          = (known after apply)
      + latest      = (known after apply)
      + name        = "traefik:1.7.14"
      + output      = (known after apply)
      + repo_digest = (known after apply)
    }

  # docker_network.nginx will be created
  + resource "docker_network" "nginx" {
      + driver      = "bridge"
      + id          = (known after apply)
      + internal    = (known after apply)
      + ipam_driver = "default"
      + name        = "docknet"
      + options     = (known after apply)
      + scope       = (known after apply)

      + ipam_config {
          + aux_address = (known after apply)
          + gateway     = (known after apply)
          + ip_range    = (known after apply)
          + subnet      = (known after apply)
        }
    }

  # random_string.nginx will be created
  + resource "random_string" "nginx" {
      + id          = (known after apply)
      + length      = 8
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

Plan: 6 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + nginx_container_name   = "nginx"
  + nginx_url              = (known after apply)
  + traefik_container_name = "traefik"
  + traefik_url            = "http://traefik.localdns.xyz/"

From this we can see that it will create 2 containers, traefik and then nginx, map the configs and html into place, and also set the traefik hostnames in the labels for our respective containers so that we can reach them via the specific host headers.

Then we can deploy our containers:

terraform apply -auto-approve

This will provide us with the output detail defined in our outputs.tf:

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

Outputs:

nginx_container_name = "nginx"
nginx_url = "http://www.5igjdfq9.localdns.xyz/"
traefik_container_name = "traefik"
traefik_url = "http://traefik.localdns.xyz/"

Access our Containers

We can access our Traefik Dashboard on http://traefik.localdns.xyz and should look something like this:

image

And when we access our Nginx container on http://www.5igjdfq9.localdns.xyz it should look more or less like this:

image

Running a docker ps will show our running containers:

docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS                PORTS                NAMES
e45158ae8cba   nginx:stable-alpine    "/docker-entrypoint   3 minutes ago   Up 3 minutes          80/tcp               nginx
ebdbe42a0fcb   traefik:1.7.14         "/traefik --api       3 minutes ago   Up 3 minutes          0.0.0.0:80->80/tcp   traefik
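If the localdns.xyz hostnames do not resolve to your docker host, you can still verify the routing by passing the Host header directly to Traefik on port 80 (a quick check, using the hostnames from the outputs above):

curl -H 'Host: traefik.localdns.xyz' http://localhost/
curl -H 'Host: www.5igjdfq9.localdns.xyz' http://localhost/healthz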

Cleanup

We can delete our containers by running:

terraform destroy -auto-approve

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Setup Traefik Version 2 on Docker

In this tutorial we will be setting up Traefik v2 as our reverse proxy with ports 80 and 443 enabled, and then hook up an example application behind the load balancer and route incoming requests via host headers.

What is Traefik

Traefik is a modern HTTP reverse proxy and load balancer that makes deploying microservices super easy by making use of docker labels to route your traffic based on host headers, path prefixes etc. Please check out their website to find out more about them.

Use Case

In our example we want traffic to http://app.selfhosted.co.za to hit our proxy on port 80, then we want traefik to redirect port 80 to port 443 on the proxy, which is configured with letsencrypt, and reverse proxy the connection to our application.

The application is being configured via docker labels, which we will get into later.

Our Environment

I will be using the domain selfhosted.co.za, so if you are following along, you can just replace this domain with yours.

For this demonstration I have spun up a VM at Civo as you can see below:

image

From the provided public IP address, we will be creating a DNS A record for our domain, and then create a wildcard entry to CNAME to our initial dns name:

image
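In zone file terms the records look more or less like this (using a placeholder IP; adjust it to your server's public IP):

selfhosted.co.za.      300  IN  A      203.0.113.10
*.selfhosted.co.za.    300  IN  CNAME  selfhosted.co.za.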

You might not want to point all the subdomains to that entry, but to simplify things, every application that needs to be routed via traefik can then be managed at the traefik config level, since my dns already points to the public ip where traefik is running.

So if I spin up a new container, let's say bitwarden, I can just set bitwarden.selfhosted.co.za in the labels of that container, and because the dns already points to traefik, traefik will route the connection to the correct container.
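As a hypothetical illustration (bitwarden is not part of this setup), the gist of wiring such a container onto the proxy is just a router rule label on that container, something like:

  labels:
    - 'traefik.enable=true'
    - 'traefik.http.routers.bitwarden.rule=Host(`bitwarden.selfhosted.co.za`)'

together with the entrypoint, tls and service labels shown later in this post.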

Pre-Requisites

In order to follow along you will need docker and docker-compose to be installed, which can be validated using:

docker -v
Docker version 20.10.7, build f0df350

docker-compose -v
docker-compose version 1.28.6, build 5db8d86f

Traefik on Docker

We will have one docker-compose.yml file which has the proxy and the example application. Be sure to change the following to suit your environment:

  • traefik.http.routers.api.rule=Host(`...`)
  • --certificatesResolvers.letsencrypt.acme.email=youremail@yourdomain.net

The compose:

---
version: '3.8'

services:
  traefik:
    image: traefik:2.4
    container_name: traefik
    restart: unless-stopped
    volumes:
      - ./traefik/acme.json:/acme.json
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - docknet
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.api.rule=Host(`traefik.selfhosted.co.za`)'
      - 'traefik.http.routers.api.entrypoints=https'
      - 'traefik.http.routers.api.service=api@internal'
      - 'traefik.http.routers.api.tls=true'
      - 'traefik.http.routers.api.tls.certresolver=letsencrypt'
    ports:
      - 80:80
      - 443:443
    command:
      - '--api'
      - '--providers.docker=true'
      - '--providers.docker.exposedByDefault=false'
      - '--entrypoints.http=true'
      - '--entrypoints.http.address=:80'
      - '--entrypoints.http.http.redirections.entrypoint.to=https'
      - '--entrypoints.http.http.redirections.entrypoint.scheme=https'
      - '--entrypoints.https=true'
      - '--entrypoints.https.address=:443'
      - '--certificatesResolvers.letsencrypt.acme.email=youremail@yourdomain.net'
      - '--certificatesResolvers.letsencrypt.acme.storage=acme.json'
      - '--certificatesResolvers.letsencrypt.acme.httpChallenge.entryPoint=http'
      - '--log=true'
      - '--log.level=INFO'
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

  webapp:
    image: traefik/whoami
    container_name: webapp
    restart: unless-stopped
    networks:
      - docknet
    labels:
      - 'traefik.enable=true'
      - 'traefik.http.routers.webapp.rule=Host(`app.selfhosted.co.za`)'
      - 'traefik.http.routers.webapp.entrypoints=https'
      - 'traefik.http.routers.webapp.tls=true'
      - 'traefik.http.routers.webapp.tls.certresolver=letsencrypt'
      - 'traefik.http.routers.webapp.service=webappservice'
      - 'traefik.http.services.webappservice.loadbalancer.server.port=80'
    logging:
      driver: "json-file"
      options:
        max-size: "1m"

networks:
  docknet:
    name: docknet

Prepare the ./traefik/acme.json file:

mkdir traefik
touch traefik/acme.json
chmod 600 traefik/acme.json

As you can see, in order to wire an application onto the proxy we need the following labels:

  - 'traefik.enable=true'
  - 'traefik.http.routers.webapp.rule=Host(`app.selfhosted.co.za`)'
  - 'traefik.http.routers.webapp.entrypoints=https'
  - 'traefik.http.routers.webapp.tls=true'
  - 'traefik.http.routers.webapp.tls.certresolver=letsencrypt'
  - 'traefik.http.routers.webapp.service=webappservice'
  - 'traefik.http.services.webappservice.loadbalancer.server.port=80'

Now boot our stack using docker-compose:

docker-compose up -d

You can follow the logs to ensure everything works as expected:

docker-compose logs -f
Attaching to webapp, traefik
traefik    | time="2021-07-11T11:02:22Z" level=info msg="Configuration loaded from flags."
traefik    | time="2021-07-11T11:02:22Z" level=info msg="Starting provider aggregator.ProviderAggregator {}"
traefik    | time="2021-07-11T11:02:22Z" level=info msg="Starting provider *traefik.Provider {}"
traefik    | time="2021-07-11T11:02:22Z" level=info msg="Starting provider *docker.Provider {\"watch\":true,\"endpoint\":\"unix:///var/run/docker.so                                              ck\",\"defaultRule\":\"Host(``)\",\"swarmModeRefreshSeconds\":\"15s\"}"
traefik    | time="2021-07-11T11:02:22Z" level=info msg="Starting provider *acme.ChallengeTLSALPN {\"Timeout\":4000000000}"
traefik    | time="2021-07-11T11:02:22Z" level=info msg="Starting provider *acme.Provider {\"email\":\"youremail@domain.com\",\"caServer\":\"https://                                              acme-v02.api.letsencrypt.org/directory\",\"storage\":\"acme.json\",\"keyType\":\"RSA4096\",\"httpChallenge\":{\"entryPoint\":\"http\"},\"ResolverNam                                              e\":\"letsencrypt\",\"store\":{},\"TLSChallengeProvider\":{\"Timeout\":4000000000},\"HTTPChallengeProvider\":{}}"
traefik    | time="2021-07-11T11:02:22Z" level=info msg="Testing certificate renew..." providerName=letsencrypt.acme
traefik    | time="2021-07-11T11:02:24Z" level=info msg=Register... providerName=letsencrypt.acme
webapp     | Starting up on port 80

The certificate process might take anything from 5-30s in my experience.

Test the Application

Now that our webapp container is running, make an http request using curl against the configured host rule, which is app.selfhosted.co.za, over http so that we can validate that traefik redirects to https:

curl -IL http://app.selfhosted.co.za:80

HTTP/1.1 308 Permanent Redirect
Location: https://app.selfhosted.co.za/
Date: Sun, 11 Jul 2021 11:05:47 GMT
Content-Length: 18
Content-Type: text/plain; charset=utf-8

HTTP/2 200
content-type: text/plain; charset=utf-8
date: Sun, 11 Jul 2021 11:05:47 GMT
content-length: 343

If we access our webapp service in our web browser, we will see the following:

image

We can also check that the certificate is valid:

image

We can also access the traefik dashboard using the configured domain, in this case traefik.selfhosted.co.za, and you should see the pretty traefik dashboard:

image

Future Posts

In future posts I will be using this post as the base setup for getting traefik up and running, and future posts that use traefik will be tagged under #traefik.

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Install Nodejs on Linux Using NVM

In this post we will install Nodejs using Node Version Manager (nvm), which allows you to install and use different versions of node via the command line.

For more information on NVM, check out their github repository.

Install

I will be using a debian based linux distribution, so first I will update my package manager’s indexes:

$ apt update

Then I will install NVM using the instructions from their repository (always ensure that you are aware of what you are installing when you curl, pipe, bash):

$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash

Verify

You can now log out and log back in for your path to be updated, or you can follow the instructions on your terminal to source your session so that your path to nvm is updated:

$ export NVM_DIR="$HOME/.nvm"
$ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
$ [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"

Then you can verify if nvm is in your path:

$ command -v nvm
nvm

Installing a Node Version

Before we install a specific version of nodejs, let’s first look at the LTS versions from the Fermium release:

$ nvm ls-remote --lts=fermium
       v14.15.0   (LTS: Fermium)
       v14.15.1   (LTS: Fermium)
       v14.15.2   (LTS: Fermium)
       v14.15.3   (LTS: Fermium)
       v14.15.4   (LTS: Fermium)
       v14.15.5   (LTS: Fermium)
       v14.16.0   (LTS: Fermium)
       v14.16.1   (LTS: Fermium)
       v14.17.0   (LTS: Fermium)
       v14.17.1   (LTS: Fermium)
       v14.17.2   (LTS: Fermium)
       v14.17.3   (LTS: Fermium)
       v14.17.4   (LTS: Fermium)
       v14.17.5   (LTS: Fermium)
       v14.17.6   (LTS: Fermium)
       v14.18.0   (Latest LTS: Fermium)

So I want to install v14.8.0:

$ nvm install 14.8.0

I also would like to make it my default version of node:

$ nvm alias default node
default -> node (-> v14.8.0)

Verify Installation

Now we can verify if npm is installed:

$ npm -v
6.14.7

as well as node:

$ node -v
v14.8.0

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Setup TLS and Basic Authentication on Node Exporter for Prometheus

I had a public VPS server that I wanted to scrape node-exporter metrics from, but my Prometheus instance was behind a dynamic IP address, so allowing only my prometheus instance to scrape my Node Exporter instance was a bit difficult, since the IP kept changing and I had to update my iptables firewall rules.

In this tutorial I will show you how to set up TLS and Basic Authentication on Node Exporter, and how to configure prometheus to pass the authentication to successfully scrape the node exporter metrics endpoint.

Install Node Exporter

On the node-exporter host, set the environment variables for the version, user and directory path where node exporter will be installed:

$ NODE_EXPORTER_VERSION="1.1.2"
$ NODE_EXPORTER_USER="node_exporter"
$ BIN_DIRECTORY="/usr/local/bin"
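If the node_exporter user does not exist yet on this host, create it first (an assumption about your environment; a system user without a login shell is sufficient):

$ useradd --system --no-create-home --shell /usr/sbin/nologin ${NODE_EXPORTER_USER}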

Download the node-exporter binary and put it in place:

$ wget https://github.com/prometheus/node_exporter/releases/download/v${NODE_EXPORTER_VERSION}/node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64.tar.gz
$ tar -xf node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64.tar.gz
$ cp node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64/node_exporter ${BIN_DIRECTORY}/
$ chown ${NODE_EXPORTER_USER}:${NODE_EXPORTER_USER} ${BIN_DIRECTORY}/node_exporter
$ rm -rf node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64*
$ mkdir /etc/node-exporter

Configuration

Create a self-signed cert for node-exporter:

$ openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout node_exporter.key -out node_exporter.crt -subj "/C=ZA/ST=CT/L=SA/O=VPN/CN=localhost" -addext "subjectAltName = DNS:localhost"

Move the certs into the directory we created:

$ mv node_exporter.* /etc/node-exporter/

Install htpasswd so that we can generate a password hash with bcrypt; this will prompt you for the password that we are setting for the prometheus user:

$ apt install apache2-utils
$ htpasswd -nBC 10 "" | tr -d ':\n'; echo

Now populate the config for node-exporter:

$ cat /etc/node-exporter/config.yml
tls_server_config:
  cert_file: node_exporter.crt
  key_file: node_exporter.key
basic_auth_users:
  prometheus: <the-output-value-of-htpasswd>

Change the ownership of the node exporter directory:

$ chown -R ${NODE_EXPORTER_USER}:${NODE_EXPORTER_USER} /etc/node-exporter

Then create the systemd unit file:

$ cat > /etc/systemd/system/node_exporter.service << EOF
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=${NODE_EXPORTER_USER}
Group=${NODE_EXPORTER_USER}
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=${BIN_DIRECTORY}/node_exporter --web.config=/etc/node-exporter/config.yml
[Install]
WantedBy=multi-user.target
EOF

Reload systemd and start node-exporter:

$ systemctl daemon-reload
$ systemctl enable node_exporter
$ systemctl restart node_exporter

Prometheus Config

Copy the /etc/node-exporter/node_exporter.crt from the node-exporter node to the prometheus node, then add the following to the /etc/prometheus/prometheus.yml config:

scrape_configs:
  - job_name: 'node-exporter-tls'
    scheme: https
    basic_auth:
      username: prometheus
      password: <the-plain-text-password>
    tls_config:
      ca_file: node_exporter.crt
      insecure_skip_verify: true
    static_configs:
    - targets: ['node-exporter-ip:9100']
      labels:
        instance: friendly-instance-name

After you restart prometheus, you should see the metrics in prometheus' tsdb of the node exporter target that we are scraping.
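For example, assuming prometheus runs under systemd and listens on its default port, you can restart it and then confirm the target is up with a quick query against the HTTP API:

$ systemctl restart prometheus
$ curl -s 'http://localhost:9090/api/v1/query?query=up{job="node-exporter-tls"}'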

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Install Concourse CI v7.4 on Ubuntu Linux

Concourse is a Pipeline Based Continuous Integration system written in Go.

Resources:

Older Version

An older version is available:

What is Concourse CI:

Concourse CI is a Continuous Integration Platform. Concourse enables you to construct pipelines with a yaml configuration that consists of 3 core concepts: tasks, resources, and jobs that compose them. For more information about this have a look at their docs.

What will we be doing today

We will set up a Concourse CI Server v7.4.0 (web and worker) on Ubuntu 20.04 and run the traditional Hello, World pipeline.

Setup the Server:

Concourse needs a PostgreSQL server:

$ apt update && apt upgrade -y
$ apt install postgresql postgresql-contrib -y
$ systemctl enable postgresql

Create the Database and User for Concourse on Postgres:

$ sudo -u postgres createuser concourse
$ sudo -u postgres createdb --owner=concourse atc

Download the Concourse Binary:

$ export CONCOURSE_VERSION=7.4.0
$ wget https://github.com/concourse/concourse/releases/download/v${CONCOURSE_VERSION}/concourse-${CONCOURSE_VERSION}-linux-amd64.tgz
$ tar -xvf concourse-${CONCOURSE_VERSION}-linux-amd64.tgz -C /usr/local/
$ rm -rf concourse-*-linux-amd64.tgz

Create the Encryption Keys:

$ mkdir /etc/concourse
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/tsa_host_key -m pem
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/worker_key -m pem
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/session_signing_key -m pem
$ cp /etc/concourse/worker_key.pub /etc/concourse/authorized_worker_keys

Set the IP Address:

$ export IP_ADDRESS=$(ifconfig $(route -n | grep '0.0.0.0' | head -1 | rev | awk '{print $1}' | rev) | grep -w 'inet' | awk '{print $2}')

Concourse Web Process Configuration:

$ cat > /etc/concourse/web_environment << EOF
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/concourse/bin
CONCOURSE_ADD_LOCAL_USER=ruan:$(openssl rand -hex 14)
CONCOURSE_SESSION_SIGNING_KEY=/etc/concourse/session_signing_key
CONCOURSE_TSA_HOST_KEY=/etc/concourse/tsa_host_key
CONCOURSE_TSA_AUTHORIZED_KEYS=/etc/concourse/authorized_worker_keys
CONCOURSE_POSTGRES_HOST=127.0.0.1
CONCOURSE_POSTGRES_USER=concourse
CONCOURSE_POSTGRES_PASSWORD=concourse
CONCOURSE_POSTGRES_DATABASE=atc
CONCOURSE_MAIN_TEAM_LOCAL_USER=ruan
CONCOURSE_EXTERNAL_URL=http://$IP_ADDRESS:8080
EOF

Concourse Worker Process Configuration:

cat > /etc/concourse/worker_environment << EOF
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/concourse/bin
CONCOURSE_WORK_DIR=/var/lib/concourse
CONCOURSE_TSA_HOST=127.0.0.1:2222
CONCOURSE_TSA_PUBLIC_KEY=/etc/concourse/tsa_host_key.pub
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/etc/concourse/worker_key
CONCOURSE_GARDEN_DNS_SERVER=8.8.8.8
EOF

Create a Concourse user:

$ mkdir /var/lib/concourse
$ sudo adduser --system --group concourse
$ sudo chown -R concourse:concourse /etc/concourse /var/lib/concourse
$ sudo chmod 600 /etc/concourse/*_environment

Create SystemD Unit Files, first for the Web Service:

$ cat > /etc/systemd/system/concourse-web.service << EOF
[Unit]
Description=Concourse CI web process (ATC and TSA)
After=postgresql.service

[Service]
User=concourse
Restart=on-failure
EnvironmentFile=/etc/concourse/web_environment
ExecStart=/usr/local/concourse/bin/concourse web

[Install]
WantedBy=multi-user.target
EOF

Then the SystemD Unit File for the Worker Service:

$ cat > /etc/systemd/system/concourse-worker.service << EOF
[Unit]
Description=Concourse CI worker process
After=concourse-web.service

[Service]
User=root
Restart=on-failure
EnvironmentFile=/etc/concourse/worker_environment
ExecStart=/usr/local/concourse/bin/concourse worker

[Install]
WantedBy=multi-user.target
EOF

Create a postgres password for the concourse user:

$ cd /home/concourse/
$ sudo -u concourse psql atc
atc=> ALTER USER concourse WITH PASSWORD 'concourse';
atc=> \q

Start and Enable the Services:

$ systemctl start concourse-web concourse-worker
$ systemctl enable concourse-web concourse-worker postgresql
$ systemctl status concourse-web concourse-worker

$ systemctl is-active concourse-worker concourse-web
active
active

The listening ports should more or less look like the following:

$ netstat -tulpn

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:7777          0.0.0.0:*               LISTEN      4530/concourse
tcp        0      0 127.0.0.1:7788          0.0.0.0:*               LISTEN      4530/concourse
tcp        0      0 127.0.0.1:8079          0.0.0.0:*               LISTEN      4525/concourse
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1283/sshd
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      4047/postgres
tcp6       0      0 :::36159                :::*                    LISTEN      4525/concourse
tcp6       0      0 :::46829                :::*                    LISTEN      4525/concourse
tcp6       0      0 :::2222                 :::*                    LISTEN      4525/concourse
tcp6       0      0 :::8080                 :::*                    LISTEN      4525/concourse
tcp6       0      0 :::22                   :::*                    LISTEN      1283/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*                           918/dhclient
udp        0      0 0.0.0.0:42165           0.0.0.0:*                           4530/concourse

You can check the logs like this:

$ sudo journalctl -fu concourse-web
$ sudo journalctl -fu concourse-worker

Make a request using the API:

$ curl http://${IP_ADDRESS}:8080/api/v1/info
{"version":"7.4.0","worker_version":"2.3","feature_flags":{"across_step":false,"build_rerun":false,"cache_streamed_volumes":false,"global_resources":false,"pipeline_instances":false,"redact_secrets":false,"resource_causality":false},"external_url":"http://x.x.x.x:8080"}

Client Side:

I will be using the Fly cli from a Mac, so first we need to download the fly-cli for Mac:

$ export CONCOURSE_VERSION=7.4.0
$ wget https://github.com/concourse/concourse/releases/download/v${CONCOURSE_VERSION}/fly-${CONCOURSE_VERSION}-darwin-amd64.tgz
$ tar -xvf fly-${CONCOURSE_VERSION}-darwin-amd64.tgz
$ sudo mv fly /usr/local/bin/fly
$ rm -rf fly-${CONCOURSE_VERSION}-darwin-amd64.tgz

Next, we need to set up our Concourse target by authenticating against our Concourse endpoint. Let's set up our target with the name ci, and make sure to replace the ip address with the ip of your concourse server:

$ fly -t ci login -c http://${IP_ADDRESS}:8080
logging in to team 'main'

navigate to the following URL in your browser:

  http://${IP_ADDRESS}:8080/login?fly_port=42181

or enter token manually (input hidden):
target saved

Lets list our targets:

$ fly targets
name  url                        team  expiry
ci    http://x.x.x.x:8080        main  Wed, 08 Nov 2021 15:32:59 UTC

Listing Registered Workers:

$ fly -t ci workers
name              containers  platform  tags  team  state    version
x.x.x.x           0           linux     none  none  running  1.2

Listing Active Containers:

$ fly -t ci containers
handle                                worker            pipeline     job            build #  build id  type   name                  attempt

Hello World Pipeline:

Let’s create a basic pipeline that will print out Hello, World!:

Our hello-world.yml

jobs:
- name: my-job
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: alpine
          tag: edge
      run:
        path: /bin/sh
        args:
        - -c
        - |
          echo "============="
          echo "Hello, World!"
          echo "============="

Applying the configuration to our pipeline:

$ fly -t ci set-pipeline -p yeeehaa -c hello-world.yml
jobs:
  job my-job has been added:
    name: my-job
    plan:
    - task: say-hello
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: alpine
            tag: edge
        run:
          path: /bin/sh
          args:
          - -c
          - |
            echo "============="
            echo "Hello, World!"
            echo "============="

apply configuration? [yN]: y
pipeline created!
you can view your pipeline here: http://x.x.x.x:8080/teams/main/pipelines/yeeehaa

the pipeline is currently paused. to unpause, either:
  - run the unpause-pipeline command
  - click play next to the pipeline in the web ui

We can browse to the WebUI to unpause the pipeline, but since I like to do everything on cli as far as possible, I will unpause the pipeline via cli:

$ fly -t ci unpause-pipeline -p yeeehaa
unpaused 'yeeehaa'

Now our pipeline is unpaused, but since we did not specify any triggers, we need to trigger the pipeline manually. You can do this via the WebUI: select your pipeline, which in this case is named yeeehaa, then select the job, which will be my-job, and hit the + sign, which will trigger the pipeline.

I will be using the cli:

$ fly -t ci trigger-job --job yeeehaa/my-job
started yeeehaa/my-job #1

Via the WebUI on http://x.x.x.x:8080/teams/main/pipelines/yeeehaa/jobs/my-job/builds/1 you should see the Hello, World! output, or via the cli, we also have the option to see the output, so let’s trigger it again, but this time passing the --watch flag:

$ fly -t ci trigger-job --job yeeehaa/my-job --watch
started yeeehaa/my-job #2

initializing
running /bin/sh -c echo "============="
echo "Hello, World!"
echo "============="

=============
Hello, World!
=============
succeeded

Listing our Workers and Containers again:

$ fly -t ci workers
name              containers  platform  tags  team  state    version
x.x.x.x            2           linux     none  none  running  1.2

$ fly -t ci containers
handle                                worker            pipeline     job         build #  build id  type   name           attempt
46282555-64cd-5h1b-67b8-316486h58eb8  x.x.x.x           yeeehaa      my-job      2        729       task   say-hello      n/a

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

A Tour With Vagrant and Virtualbox on Mac

vagrant

Vagrant, yet another amazing product from Hashicorp.

Vagrant makes it really easy to provision virtual servers for local development (though it is not limited to that), which they refer to as “boxes”, enabling developers to run their jobs/tasks/applications in a really easy and fast way. Vagrant utilizes a declarative configuration model, so you can describe which OS you want, bootstrap it with installation instructions as soon as it boots, etc.

What are we doing today?

When completing this tutorial, you will have Vagrant and Virtualbox installed on your Mac and should be able to launch an Ubuntu Virtual Server locally with Vagrant, using the Virtualbox provider which will be responsible for running our VMs.

We will also look at different configuration options to configure the VM, bootstrap software, and use the shell, docker and ansible provisioners.

For this demonstration, I am using Mac OSX, but you can run this on Mac, Windows or Linux. First we will use Homebrew to install Virtualbox and then Vagrant, then we will provision an Ubuntu box. I will also show how to inject shell commands into your Vagrantfile so that you can provision software to your VM, and how to forward traffic to a web server from the host to the guest.

If you are looking for a Linux version instead of Mac, you can look at this post: Use Vagrant to Setup a Local Development Environment on Linux

Pre-Requisites

I will be installing Vagrant and Virtualbox with Homebrew. If you do not have homebrew installed, you can install it with:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once homebrew is installed, it’s a good thing to update the indexes:

$ brew update

Virtualbox

Install VirtualBox using homebrew:

$ brew install --cask virtualbox

Vagrant

Install Vagrant using homebrew:

$ brew install --cask vagrant

Install the virtualbox guest additions plugin for vagrant:

$ vagrant plugin install vagrant-vbguest

If you would like a vagrant manager utility to help you manage your vagrant boxes, you can install vagrant-manager using homebrew:

$ brew install --cask vagrant-manager

Create your first Vagrant Box

From app.vagrantup.com/boxes/search you can search for any box, such as ubuntu, centos, alpine etc., and for this demonstration I am going with ubuntu/focal64.

I am creating a new directory for my devbox:

$ mkdir devbox
$ cd devbox

Then initialize the Vagrantfile by running:

$ vagrant init ubuntu/focal64

A Vagrantfile has been created in the current working directory:

$ cat Vagrantfile | grep -v "#"

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
end

Boot the VM:

$ vagrant up

The box should now be in a started state, and we can verify that by running:

$ vagrant status
Current machine states:

default                   running (virtualbox)

We can now SSH to our VM by running:

$ vagrant ssh
vagrant@ubuntu-focal:~$

Installing Software with Vagrant

First let’s destroy the VM that we created:

$ vagrant destroy --force

Then edit the Vagrantfile and add the commands that we want to be executed when the VM boots, in our case, installing Nginx:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
     apt update
     apt install nginx -y
  SHELL
end

You will also notice that we are forwarding port 8080 from our host to port 80 on the VM so that we can access the webserver on port 8080 from our laptop. Then boot the VM:

$ vagrant up

Once the VM has booted and installed our software, we should be able to access the index document served by Nginx on our VM:

$ curl -I http://localhost:8080/

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sat, 14 Aug 2021 18:11:59 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Sat, 14 Aug 2021 18:11:10 GMT
Connection: keep-alive
ETag: "6118073e-264"
Accept-Ranges: bytes

Shared Folders

Let’s say you want to map your local directory to your VM, in a scenario where you want to store your index.html on your laptop and map it to the VM, we can use config.vm.synced_folder.

On our laptop, create an html directory where we will store our index.html:

$ mkdir html

Now create the content in the index.html under the html directory:

$ echo "Hello, World" > html/index.html

Now we need to make vagrant aware of the folder that we are mapping to the VM, so we need to edit the Vagrantfile and it will now look like this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
     apt update
     apt install nginx -y
  SHELL
  config.vm.synced_folder "html", "/var/www/html"
end

To reload the VM with our changes, we use vagrant provision to update our VM when changes to provisioners are made, and vagrant reload when we have config changes such as config.vm.network; but to restart the VM and force provisioners to run, we can use the following:

Thanks @joshva_jebaraj

$ vagrant reload --provision

Once the VM is up, we can verify the changes:

$ curl http://localhost:8080/
Hello, World

Now we can edit our content locally which is synced to our VM.
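For example, a change on the host is immediately visible through the VM's webserver:

$ echo "Hello from the synced folder" > html/index.html
$ curl http://localhost:8080/
Hello from the synced folder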

Setting Hostname and Configure Memory

We can also configure the hostname of our VM and configure the amount of memory that we want to allocate to our VM using:

  • config.vm.hostname
  • vb.memory

An example of that will look like the following:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
     apt update
     apt install nginx -y
  SHELL
  config.vm.synced_folder "html", "/var/www/html"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"
  end
end


In this example our VM’s hostname is mydevbox and we assigned 1024MB of memory to our VM.

Provisioners: Shell

We can also use the shell provisioner to run scripts that live in our local directory on our laptop inside the VM.

First we need to create the script in our local directory:

$ cat bootstrap.sh
#!/usr/bin/env bash
set -x
echo "my hostname is $(hostname)"

Then in our Vagrantfile we inform vagrant to execute the shell script:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.provision :shell, :path => "bootstrap.sh"
end

Since my VM is already running, I will be doing a reload:

$ vagrant reload --provision
...
==> default: Running provisioner: shell...
    default: Running: /var/folders/04/r10yvb8d5dgfvd167jz5z23w0000gn/T/vagrant-shell20210814-70233-1p9dump.sh
    default: ++ hostname
    default: my hostname is mydevbox
    default: + echo 'my hostname is mydevbox'

As you can see, the shell script from our local directory was executed on our VM; you can use this method to automate installations as well.

Provisioners: Docker

Vagrant offers a docker provisioner, and for this example we will be hosting a mysql server using a docker container in our VM.

Our Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.network "forwarded_port", guest: 3306, host: 3306
  config.vm.provision "docker" do |d|
    d.run "mysql", image: "mysql:8.0",
      args: "-p 3306:3306 -e MYSQL_ROOT_PASSWORD=password"
  end
end

Since I don’t have port 3306 listening locally, I have mapped port 3306 from my laptop to port 3306 on my VM. I am using the mysql:8.0 container image from docker hub and passing the arguments which are specific to the container.

The convenient thing about the docker provisioner is that it will install docker onto the VM for you.

Once the config has been set in your Vagrantfile, do a reload:

$ vagrant reload --provision
...
    default: /vagrant => /Users/ruanbekker/workspace/vagrant/devbox
==> default: Running provisioner: docker...
    default: Installing Docker onto machine...
==> default: Starting Docker containers...
==> default: -- Container: mysql

From our laptop we should be able to communicate with our mysql server:

$ nc -vz localhost 3306
found 0 associations
found 1 connections:
     1:   flags=82<CONNECTED,PREFERRED>
  outif lo0
  src 127.0.0.1 port 58745
  dst 127.0.0.1 port 3306
  rank info not available
  TCP aux info available

Connection to localhost port 3306 [tcp/mysql] succeeded!

We can also SSH to our VM and verify if the container is running:

$ vagrant ssh

And then list the containers:

$  docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED         STATUS         PORTS                                                  NAMES
30a843a486ae   mysql:8.0   "docker-entrypoint.sh    2 minutes ago   Up 2 minutes   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysql

Provisioners: Ansible

We can also execute Ansible playbooks on our VM using the Ansible Provisioner.

Something to note is that we use ansible to execute the playbook on the host, and ansible_local to execute the playbook on the VM.
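If you would rather not install ansible on your host at all, a minimal ansible_local variant of the Vagrantfile would look more or less like this (a sketch; the rest of this post uses the ansible provisioner on the host):

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  # ansible_local installs ansible on the guest and runs the playbook there
  config.vm.provision :ansible_local do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end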

First we will create our project structure for ansible, so that we have the following in place:

.
Vagrantfile
provisioning/playbook.yml
provisioning/group_vars/all

Create the provisioning directory:

$ mkdir provisioning

Then the content for our provisioning/playbook.yml playbook:

---
- hosts: all
  become: yes
  tasks:
    - name: ensure ntpd is at the latest version
      apt:
        pkg: ntp
        state: "{{ desired_state }}"
      notify:
      - restart ntpd
  handlers:
    - name: restart ntpd
      service:
        name: ntp
        state: restarted

Our provisioning/group_vars/all file that will contain the variables for the all group:

desired_state: "latest"

In our Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end

When using ansible with vagrant, the inventory is auto-generated when the inventory is not specified. Vagrant will store the inventory on the host at .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
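The generated inventory typically looks something like this (the port and key path will differ on your machine):

# Generated by Vagrant
default ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='.vagrant/machines/default/virtualbox/private_key'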

To execute playbooks with ansible, we need ansible installed on our host machine. For this demonstration I will be using virtualenv and then install ansible using pip:

$ python3 -m pip install virtualenv
$ virtualenv -p $(which python3) .venv
$ source .venv/bin/activate
$ pip install ansible

Now that we have ansible installed, reload the VM to execute the playbook on our VM:

$ vagrant reload --provision
...
==> default: Running provisioner: ansible...
    default: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [ensure ntpd is at the latest version] ************************************
ok: [default]

PLAY RECAP *********************************************************************
default                    : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Pretty neat right?

Tear Down

To destroy the VM:

$ vagrant destroy --force

Resources

For more information on vagrant, check out their documentation:

On provisioning documentation:

I have a couple of example Vagrantfiles available on my github repository:

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

How to Specify Wallet Name in Bitcoin Core Walletnotify

With bitcoin-core, you get a configuration option called walletnotify which allows you to invoke a command whenever you receive a payment, a payment gets its first confirmation, or you send a payment.

You can specify %s as an argument, which will be replaced with the transaction id.

Bitcoind WalletNotify TransactionID Example

To see what walletnotify does, in my bitcoin.conf I had a basic script to write an entry every time I receive a payment:

$ cat ~/.bitcoin/bitcoin.conf
...
walletnotify=/bin/notify.sh %s

And in my /bin/notify.sh script I have this:

#!/usr/bin/env bash
transaction_id=$1

# writing to log
echo "[$(date +%FT%T)] event for txid $transaction_id" >> /var/log/bitcoin-notify.log

I have executable permissions for the script:

$ chmod +x /bin/notify.sh

When a payment was made, my logfile showed the following:

[2021-08-04T12:21:43] event for txid xxxxxx5d92f729ed77xxxxxx2cbccedxxxxa7a03a801xxxxxxx33a41c1xxxxxd2 

Capturing the wallet name in walletnotify

In bitcoin-core we have wallets, and in a wallet we have one or more bitcoin addresses, as can be seen below for wallets:

$ curl -s -u "bitcoin:${bpass}" -d '{"jsonrpc": "1.0", "id": "curl", "method": "listwallets", "params": []}' -H 'content-type: text/plain;' http://127.0.0.1:18332/
{"result":["rpi01-main", "rpi01-secondary"],"error":null,"id":"curl"}

and to get the addresses for that wallet:

$ curl -s -u "bitcoin:${bpass}" -d '{"jsonrpc": "1.0", "id": "curl", "method": "getaddressesbylabel", "params": [""]}' -H 'content-type: text/plain;' http://127.0.0.1:18332/wallet/rpi01-main
{"result":{"txxxxxmefmcpq98xxxxxxx80gvug2fe97xxxxxx8yv":{"purpose":"receive"}},"error":null,"id":"curl"}

I had to figure out how to capture the wallet name as well as the transaction id, and I thought it was not possible until I stumbled upon a post which mentioned that from bitcoind 0.20:

The -walletnotify configuration parameter will now replace any %w in its argument with the name of the wallet generating the notification.

This was merged by the following PR: - https://github.com/bitcoin/bitcoin/pull/13339

So first, verify that bitcoind is newer than the version mentioned:

1
2
$ /usr/local/bin/bitcoind -version
Bitcoin Core version v0.21.1

Updated the walletnotify config in bitcoin.conf to include %w:

1
2
$ cat /home/bitcoin/.bitcoin/bitcoin.conf | grep wallet
walletnotify=/bin/notify.sh %s %w
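
With both placeholders in place, bitcoind will effectively invoke the script along the lines of the following whenever a wallet transaction occurs (placeholder values shown):

/bin/notify.sh <txid> <wallet-name>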

Then in the notify.sh script:

1
2
3
4
5
#!/usr/bin/env bash
transaction_id=$1
wallet_name=$2

echo "[$(date +%FT%T)] $transaction_id $wallet_name" >> /var/log/bitcoin-notify.log

And then restart bitcoind:

1
$ sudo systemctl restart bitcoind

When a transaction occurred, I could see the transaction id with the corresponding wallet name:

1
2
$ tail -f /var/log/bitcoin-notify.log
[2021-08-04T12:31:20] fxxxxxxxxxxxxxxxxxxxxxxx2cbcced28ea26fhkxxxxhjn01f33a41c12f8xxx8 rpi01-main

Thanks

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

AWS EC2 Linux - Warning: Setlocale: LC_CTYPE: Cannot Change Locale UTF-8

On Amazon Linux EC2 Instances, I noticed the following warning when SSHing onto them:

1
-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

To resolve, add the following to the /etc/environment file:

1
2
3
$ cat /etc/environment
LANG=en_US.utf-8
LC_ALL=en_US.utf-8

Log out and log back in, and the warning should be resolved.
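
To verify, you can check the active locale settings, which should now reflect what was set in /etc/environment:

$ locale | grep -E 'LANG|LC_ALL'
LANG=en_US.utf-8
LC_ALL=en_US.utf-8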

Task Runner With YAML Config Written in Go

Task (aka Taskfile) is a task runner written in Go, similar to GNU Make, but in my opinion a lot easier to use as you specify your tasks in YAML.

What to expect

In this post we will go through a quick demonstration of Task: how to install it, as well as a couple of basic examples to get you up and running.

Install

For Mac, installing task:

1
$ brew install go-task/tap/go-task

For Linux, installing task:

1
$ sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin

Or a manual installation, using ARM as an example:

1
2
3
4
5
6
$ pushd /tmp
$ wget https://github.com/go-task/task/releases/download/v3.7.0/task_linux_arm.tar.gz
$ tar -xvf task_linux_arm.tar.gz
$ sudo mv task /usr/local/bin/task
$ sudo chmod +x /usr/local/bin/task
$ popd

Verify that task is installed:

1
2
$ task --version
Task version: v3.7.0
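
If you already have a Go toolchain installed (Go 1.16 or newer), installing it with go install should also work:

$ go install github.com/go-task/task/v3/cmd/task@latest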

For more information check the installation page: - https://taskfile.dev/#/installation

Usage

Task uses a default config file, Taskfile.yml, in the current working directory, where you define what your tasks should do.

To generate a Taskfile.yml with example config, task gives us a --init flag.

For a basic hello-world example, our task helloworld will echo out hello, world!. To generate the sample config, run:

1
$ task --init

Then update the config to the following:

1
2
3
4
5
6
7
version: '3'

tasks:
  helloworld:
    desc: prints out hello world message
    cmds:
      - echo "hello, world!"

To demonstrate what the config means:

  • tasks: refers to the list of tasks
  • helloworld: is the task name
  • desc: describes the task, useful for listing tasks
  • cmds: the commands that the task will execute

To list all our tasks for our taskfile:

1
2
3
$ task --list
task: Available tasks for this project:
* helloworld:     prints out hello world message

We call it by running task with the task name as argument:

1
2
3
$ task helloworld
task: [helloworld] echo "hello, world!"
hello, world!

We can also reduce the output verbosity using silent:

1
2
3
4
5
6
7
8
version: '3'

tasks:
  helloworld:
    desc: prints out hello world message
    cmds:
      - echo "hello, world!"
    silent: true

Which will result in:

1
2
$ task helloworld
hello, world!

For an example using environment variables, we can use them in two ways:

  • per task
  • globally, across all tasks

For using environment variables per task:

1
2
3
4
5
6
7
8
version: '3'

tasks:
  helloworld:
    cmds:
      - echo "hello, $WORD!"
    env:
      WORD: world

Results in:

1
2
3
$ task helloworld
task: [helloworld] echo "hello, $WORD!"
hello, world!

For using environment variables globally across all tasks:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
version: '3'

env:
  WORD: world

tasks:
  helloworld:
    cmds:
      - echo "hello, $WORD!"
    env:
      GREETING: hello

  byeworld:
    cmds:
      - echo "$GREETING, $WORD!"
    env:
      GREETING: bye

Running our first task:

1
2
3
$ task helloworld
task: [helloworld] echo "hello, $WORD!"
hello, world!

And running our second task:

1
2
3
$ task byeworld
task: [byeworld] echo "$GREETING, $WORD!"
bye, world!

To store your environment variables in a .env file, you can specify it as follows in your Taskfile.yml:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
version: '3'

dotenv: ['.env']

tasks:
  helloworld:
    cmds:
      - echo "hello, $WORD!"
    env:
      GREETING: hello

  byeworld:
    cmds:
      - echo "$GREETING, $WORD!"
    env:
      GREETING: bye

And in your .env:

1
WORD=world

Then you should see your environment variables referenced from the .env file:

1
2
3
$ task helloworld
task: [helloworld] echo "hello, $WORD!"
hello, world!

We can also reference config values using vars, which are templated into the command using Go template syntax:

1
2
3
4
5
6
7
8
9
10
version: '3'

vars:
  GREETING: Hello, World!

tasks:
  default:
    desc: prints out a message
    cmds:
      - echo "{{.GREETING}}"

In this case our task name is default, therefore we can simply run task without any arguments, as default will be the default task:

1
2
3
$ task
task: [default] echo "Hello, World!"
Hello, World!
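
To make the distinction clear: vars are substituted into the command using Go template syntax ({{.VAR}}) before it runs, while env entries are exported as normal shell environment variables. A small sketch of my own combining both:

version: '3'

vars:
  GREETING: Hello

env:
  WORD: world

tasks:
  greet:
    desc: demonstrates vars vs env
    cmds:
      - echo "{{.GREETING}}, $WORD!"

Running task greet should then print Hello, world!.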

To run the helloworld and byeworld tasks with one command, you can specify dependencies: if we define a task with no commands, only dependencies, it will call those tasks and execute them:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
version: '3'

env:
  WORD: world

tasks:
  helloworld:
    cmds:
      - echo "hello, $WORD!"
    env:
      GREETING: hello

  byeworld:
    cmds:
      - echo "$GREETING, $WORD!"
    env:
      GREETING: bye

  all:
    deps: [helloworld, byeworld]

So when we run the all task:

1
2
3
4
5
$ task all
task: [helloworld] echo "hello, $WORD!"
hello, world!
task: [byeworld] echo "$GREETING, $WORD!"
bye, world!

For more usage examples, have a look at their documentation: - https://taskfile.dev/#/usage

Thanks

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Basic Logging With Python

I’m trying to force myself to move away from the print() function, which I’m pretty much using all the time to cater for logging, and to use the logging package instead.

This is a basic example of using logging in a python app:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s %(message)s",
    handlers=[
        logging.StreamHandler()
    ]
)

messagestring = {'info': 'info message', 'warn': 'this is a warning', 'err': 'this is a error'}

logger = logging.getLogger('thisapp')
logger.info('message: {}'.format(messagestring['info']))
logger.warning('message: {}'.format(messagestring['warn']))
logger.error('message: {}'.format(messagestring['err']))

When running this example, this is the output that you will see:

1
2
3
4
$ python app.py
2021-07-19 13:07:43,647 [INFO] thisapp message: info message
2021-07-19 13:07:43,647 [WARNING] thisapp message: this is a warning
2021-07-19 13:07:43,647 [ERROR] thisapp message: this is a error
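
If you also want the log lines written to a file, you can add a FileHandler next to the StreamHandler. A minimal sketch, where app.log is just an example filename:

import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s %(message)s",
    handlers=[
        logging.StreamHandler(),        # log to the console as before
        logging.FileHandler("app.log")  # and also append to a file
    ]
)

logger = logging.getLogger('thisapp')
logger.info('this message goes to both the console and app.log')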

For more info on this package, see its documentation: - https://docs.python.org/3/library/logging.html

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.