Ruan Bekker's Blog

From a Curious mind to Posts on Github

HTTPS Termination Using LetsEncrypt With Traefik on Docker Swarm

We will set up HTTPS termination on Traefik for our Java web application running on Payara Micro, which will sit behind our Traefik proxy. In this guide, I will be using GitLab's private registry to push my images to.

Traefik Dockerfile:

Our Traefik Dockerfile:

Traefik Dockerfile
FROM traefik
ADD traefik.toml .
EXPOSE 80
EXPOSE 8080
EXPOSE 443

traefik.toml

Our Traefik config: traefik.toml

traefik.toml
defaultEntryPoints = ["http", "https"]

[web]
address = ":8080"

[entryPoints]

[entryPoints.http]
address = ":80"

[entryPoints.https]
address = ":443"

[entryPoints.https.tls]

[acme]
email = "recipient@domain.com"
storage = "acme.json"
entryPoint = "https"
onDemand = false
OnHostRule = true

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "apps.domain.com"
watch = true
exposedbydefault = false
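
Note that storage = "acme.json" keeps the LetsEncrypt data inside the running container, so certificates will be requested again whenever the container is replaced. A minimal sketch for persisting it on the host instead, assuming a host path of /opt/traefik (Traefik requires acme.json to have 600 permissions):

$ touch /opt/traefik/acme.json
$ chmod 600 /opt/traefik/acme.json

You can then bind-mount the file into the service by adding --mount type=bind,source=/opt/traefik/acme.json,target=/acme.json to the service definition below.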

Build the Image:

Log in to GitLab's registry, then build and push the image:

$ docker login registry.gitlab.com
$ docker build -t registry.gitlab.com/<user>/<repo>/traefik:latest .
$ docker push registry.gitlab.com/<user>/<repo>/traefik:latest

Traefik:

Create the Traefik Proxy Service:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
$ docker service create \
--name traefik \
--constraint 'node.role==manager' \
--publish 80:80 \
--publish 443:443 \
--publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network appnet \
--with-registry-auth \
registry.gitlab.com/<user>/<repo>/traefik:latest \
--docker \
--docker.swarmmode \
--docker.domain=apps.domain.com \
--docker.watch \
--logLevel=DEBUG \
--web
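
Once the service has converged, verify that Traefik is running and that the web UI we published on port 8080 responds. The /api/providers path is the Traefik 1.x web API, so treat this as a sketch and adjust the host if you are not on the manager node:

$ docker service ps traefik
$ curl -s http://localhost:8080/api/providers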

Java Web Application:

Our Java web application's Dockerfile:

Dockerfile
FROM payara/micro
COPY app.war /opt/payara/deployments/app.war

Build and Push the Image to our GitLab Registry:

$ docker build -t registry.gitlab.com/<user>/<repo>/java_web:latest .
$ docker push registry.gitlab.com/<user>/<repo>/java_web:latest

Create the Java web application service on Docker Swarm, specifying our host as well as a PathPrefix, so that the Traefik proxy accepts requests for the hostname and anything under /app/:

$ docker service create \
--name java_web \
--label 'traefik.port=8080' \
--label traefik.frontend.rule="Host:apps.domain.com; PathPrefix: /app/" \
--network appnet \
--with-registry-auth registry.gitlab.com/<user>/<repo>/java_web:latest

Now we should be able to access our Web Application on https://apps.domain.com/app/
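
A quick test from the command line, assuming DNS for apps.domain.com points at your Swarm. The first request may take a moment while LetsEncrypt issues the certificate, since OnHostRule is enabled:

$ curl -I https://apps.domain.com/app/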

Run Kibana on Docker Swarm With Traefik

We will create a Kibana Service on Docker Swarm, that will sit behind a Traefik Reverse Proxy.

Create the Overlay Network:

$ docker network create --driver overlay appnet

Create the Traefik Service:

docker service create \
--name traefik \
--constraint 'node.role==manager' \
--publish 80:80 \
--publish 443:443 \
--publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network appnet \
traefik:camembert \
--docker \
--docker.swarmmode \
--docker.domain=apps.domain.com \
--docker.watch \
--logLevel=DEBUG \
--web

Set DNS:

Set a wildcard record *.apps.domain.com to resolve to apps.domain.com, where apps.domain.com resolves to your Swarm node addresses.

Create Kibana:

Create a Kibana service and set KIBANA_ELASTICSEARCH_URL to your external Elasticsearch endpoint; take note that Kibana connects to Elasticsearch on port 9200 by default.

$ docker service create \
--name kibana \
--label 'traefik.port=5601' \
--network appnet \
--env KIBANA_ELASTICSEARCH_URL=elasticsearch.domain.com \
bitnami/kibana

Access Kibana:

Your Kibana endpoint will be available at: http://kibana.apps.domain.com
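
As a quick check, recent Kibana versions expose a status endpoint that should respond once the service is up (treat the path as an assumption for your Kibana version):

$ curl -s http://kibana.apps.domain.com/api/status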

Using Python to Write Data to a MySQL Database

In our previous post, we used Python to read data from MySQL. In this post we will use the random library to write random data into MySQL.

We will define lists with the categorized data, and then use a for loop to write the data into our MySQL database:

Create The Database:

Using Python to Create the Database:

>>> import MySQLdb as pdb
>>> from config import credentials as secrets
>>> db_host = secrets['mysql']['host']
>>> db_username = secrets['mysql']['username']
>>> db_password = secrets['mysql']['password']
>>> conn = pdb.connect(host=db_host, user=db_username, passwd=db_password)
>>> cursor = conn.cursor()
>>> cursor.execute("CREATE DATABASE testdb1")
1L
>>> cursor.execute("CREATE TABLE testdb1.myusers(name VARCHAR(50), surname VARCHAR(50), countries VARCHAR(50), job VARCHAR(20), os VARCHAR(20), car VARCHAR(20))")
0L

Now to list our databases:

>>> cursor.execute("show databases")
12L

>>> dbs = cursor.fetchall()
>>> for x in dbs:
...     print(x)
...

('information_schema',)
('mysql',)
('performance_schema',)
('testdb1',)

Python Code to Write to MySQL

We will create a mysql_write.py file with the following contents, defining the random data that we will write to our MySQL database. The config module can be found in this post.

mysql_write.py
import MySQLdb as pdb
from config import credentials as secrets
import random

db_host = secrets['mysql']['host']
db_username = secrets['mysql']['username']
db_password = secrets['mysql']['password']

# the categorized data that we will choose from at random
# (values taken from the sample output below)
names = ['James', 'Jennifer', 'Michelle', 'Peter', 'Samantha', 'Frank']
surnames = ['James', 'Smith', 'Jacobs', 'Anderson', 'Phillips']
countries = ['New York', 'Italy', 'England', 'Sweden']
job = ['Waiter', 'Scientist', 'Police Officer', 'Doctor', 'IT', 'Banker']
os = ['Mac', 'Windows', 'Linux']
car = ['Volkswagen', 'Audi', 'Ford', 'Toyota', 'BMW', 'Mazda', 'Mercedez-Benz']

# connect to the testdb1 database that we created above
conn = pdb.connect(host=db_host, user=db_username, passwd=db_password, db='testdb1')
cursor = conn.cursor()

# write 10 random records, using a parameterized query
for x in range(10):
    a = random.choice(names)
    b = random.choice(surnames)
    c = random.choice(countries)
    d = random.choice(job)
    e = random.choice(os)
    f = random.choice(car)

    cursor.execute(
        "INSERT INTO myusers VALUES (%s, %s, %s, %s, %s, %s)",
        (a, b, c, d, e, f)
    )

conn.commit()
conn.close()

After running the file: python mysql_write.py we should have 10 records in our database.

Reading the Data From MySQL

To verify that the data is in our MySQL database, let's log on to MySQL:

$ mysql -u root -p
mysql> select * from testdb1.myusers;
+----------+----------+-----------+----------------+---------+---------------+
| name     | surname  | countries | job            | os      | car           |
+----------+----------+-----------+----------------+---------+---------------+
| James    | James    | New York  | Waiter         | Mac     | Volkswagen    |
| Jennifer | Smith    | New York  | Scientist      | Windows | Audi          |
| Michelle | Jacobs   | Italy     | Police Officer | Mac     | Ford          |
| Michelle | Anderson | Italy     | Waiter         | Windows | Ford          |
| Jennifer | Smith    | England   | Doctor         | Windows | Toyota        |
| Peter    | Jacobs   | England   | IT             | Windows | BMW           |
| Samantha | James    | England   | Doctor         | Mac     | Mazda         |
| Frank    | Phillips | England   | IT             | Mac     | BMW           |
| Samantha | James    | England   | Banker         | Linux   | Mercedez-Benz |
| Peter    | Anderson | Sweden    | Doctor         | Windows | BMW           |
+----------+----------+-----------+----------------+---------+---------------+

Next, let's use Python to do the same. Create a file mysql_read.py with the following content:

import MySQLdb as pdb
from config import credentials as secrets

db_host = secrets['mysql']['host']
db_username = secrets['mysql']['username']
db_password = secrets['mysql']['password']
db_name = secrets['mysql']['database']

conn = pdb.connect(host=db_host, user=db_username, passwd=db_password, db=db_name)
cursor = conn.cursor()

cursor.execute("select * from myusers")
read = cursor.fetchall()

for x in read:
    print(x)

conn.close()

Running the Python file, to read the data:

$ python mysql_read.py

('James', 'James', 'New York', 'Waiter', 'Mac', 'Volkswagen')
('Jennifer', 'Smith', 'New York', 'Scientist', 'Windows', 'Audi')
('Michelle', 'Jacobs', 'Italy', 'Police Officer', 'Mac', 'Ford')
('Michelle', 'Anderson', 'Italy', 'Waiter', 'Windows', 'Ford')
('Jennifer', 'Smith', 'England', 'Doctor', 'Windows', 'Toyota')
('Peter', 'Jacobs', 'England', 'IT', 'Windows', 'BMW')
('Samantha', 'James', 'England', 'Doctor', 'Mac', 'Mazda')
('Frank', 'Phillips', 'England', 'IT', 'Mac', 'BMW')
('Samantha', 'James', 'England', 'Banker', 'Linux', 'Mercedez-Benz')
('Peter', 'Anderson', 'Sweden', 'Doctor', 'Windows', 'BMW')

Using Python to Read Data From a MySQL Database

I wanted to use Python to read some data from MySQL and stumbled upon a couple of great resources; I noted some of my output below.

Install Dependencies:

$ apt install python-dev libmysqlclient-dev python-setuptools gcc
$ easy_install pip
$ pip install MySQL-python
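
A quick import test confirms that the bindings were installed correctly:

$ python -c 'import MySQLdb; print(MySQLdb.__version__)'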

Download Some Sample Data:

Download the world dataset for MySQL:

$ wget http://downloads.mysql.com/docs/world.sql.zip
$ unzip world.sql.zip

Create Database:

Create the Database in MySQL for the dataset that we downloaded:

$ mysql -u root -p -e'CREATE DATABASE world;'

Import Data:

Import the data into the world database:

$ mysql -u root -p world < world.sql
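
If the import succeeded, the world database should now contain the dataset's tables, which we can quickly verify:

$ mysql -u root -p world -e 'SHOW TABLES;'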

Create the MySQL Credentials File:

Create a config.py file and populate the credentials in a dictionary:

credentials = {
  'mysql': {
      'host': 'localhost',
      'username': 'root',
      'password': 'password',
      'database': 'world'
  }
}

Run Queries from Python:

Enter the Python interpreter and run some queries:

>>> import MySQLdb as pdb
>>> from config import credentials as secrets

# assignments 
>>> db_host = secrets['mysql']['host']
>>> db_username = secrets['mysql']['username']
>>> db_password = secrets['mysql']['password']
>>> db_name = secrets['mysql']['database']

# create a connection to the database
>>> conn = pdb.connect(host=db_host, user=db_username, passwd=db_password, db=db_name)

# create a cursor object for the queries we will be using
>>> cursor = conn.cursor()

# execute the query
>>> cursor.execute('select continent, name from country where continent = "Africa" limit 5')
5L

# fetch the results by assigning it to the results object:
>>> results = cursor.fetchall()

# loop and print results:
>>> for x in results:
...     print(x)
...
('Africa', 'Angola')
('Africa', 'Burundi')
('Africa', 'Benin')
('Africa', 'Burkina Faso')
('Africa', 'Botswana')

# close the connection
>>> conn.close()

Graphing Results to Plotly:

A great blog post shows how to use this data to graph the results to Plotly.

Using Minio's Python SDK to Interact With a Minio S3 Bucket

In our previous post, we set up a Minio server, which is a self-hosted alternative to Amazon's S3 service.

We will go through some basic examples on working with the Python SDK, to interact with Minio.

Installing the Minio Python Library:

Ensure that Python and pip are installed, then install the Minio Python library:

$ virtualenv -p /usr/local/bin/python2.7 .venv
$ source .venv/bin/activate
(.venv)$ pip install minio

Create a Bucket:

Enter the Python interpreter and create an S3 bucket on your Minio server:

>>> from minio import Minio
>>> client = Minio('10.0.0.2:9000', access_key='ASDASDASD', secret_key='ASDASDASD', secure=False)
>>> client.make_bucket('pythonbucket', location='us-west-1')
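
To confirm that the bucket was created, the client also exposes a bucket_exists method:

>>> client.bucket_exists('pythonbucket')
True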

List Buckets:

I have also created a bucket from my previous post, so we should have 2 buckets:

>>> buckets = client.list_buckets()
>>> for bucket in buckets:
...     print(bucket.name)
...
news3bucket
pythonbucket

Put Objects to your Bucket:

Write a string to a file, then upload the file to two different destination objects. The arguments are: bucket name, destination object name, source file.

>>> data = open('file.txt', 'w')
>>> data.write('This is some text' + '\n')
>>> data.close()

>>> client.fput_object('pythonbucket', 'bucket/contents/file.txt', 'file.txt')
'6b8c327f0fc6f470c030a5b6c71154c5'
>>> client.fput_object('pythonbucket', 'bucket/contents/file2.txt', 'file.txt')
'6b8c327f0fc6f470c030a5b6c71154c5'

List Objects in your Bucket:

List the objects in your bucket:

>>> objects = client.list_objects('pythonbucket', prefix='bucket/contents/', recursive=True)
>>> for obj in objects:
...     print(obj.object_name, obj.size)
...
('bucket/contents/file.txt', 18)
('bucket/contents/file2.txt', 18)

Remove Objects from your Bucket:

Remove the objects from your bucket, then list the bucket again to verify that they are removed:

>>> client.remove_object('pythonbucket', 'bucket/contents/file.txt')
>>> client.remove_object('pythonbucket', 'bucket/contents/file2.txt')

>>> objects = client.list_objects('pythonbucket', prefix='bucket/contents/', recursive=True)
>>> for obj in objects:
...     print(obj.object_name, obj.size)
...
>>>

Remove the Bucket:

Remove the Bucket that we created:

>>> client.remove_bucket('pythonbucket')
>>> exit()

Resources:

Minio has some great documentation, with more information on their SDK.

Run Your Self-Hosted S3 Service With Minio on Docker Swarm

Minio is a distributed object storage server built for cloud applications, which is similar to Amazon’s S3 Service.

Today, we will create the server on Docker Swarm. As I don't currently have an external data store like GlusterFS or NFS, I will host the data on the manager node and set a constraint for the service so that it can only run on the manager node.

Prepare the Data Directory:

I will only rely on the manager node for my data, so on my manager node:

$ mkdir -p /mnt/data

Create the Service:

If you have a replicated Gluster volume or NFS mounted throughout your Docker Swarm, you can create the directory path for it, and then update your --mount source path to your external data store. In my case, I will just point it to my manager node's /mnt/data path, as I have set up the service to only run on the one manager node in my swarm:

$ docker service create \
--name minio \
--network appnet \
--replicas 1 \
--publish 9000:9000 \
--constraint 'node.role==manager' \
-e "MINIO_ACCESS_KEY=AKIAASDKJASDL" \
-e "MINIO_SECRET_KEY=AKIAASDKJASDL" \
--mount "type=bind,source=/mnt/data,target=/data" \
minio/minio server /data
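
Once created, verify that the service is running and was scheduled on the manager node:

$ docker service ps minio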

Install the AWS CLI Tools:

We will use the awscli tools to interact with our Minio Server:

$ pip install awscli

Configure the Client:

Configure the awscli client with the access details that we passed in our docker service:

$ aws configure --profile minio
AWS Access Key ID []: AKIAASDKJASDL
AWS Secret Access Key []: ASLDKJASDLKJASDLKJ
Default region name []: us-west-1
Default output format []: json

Create the Bucket:

Create a new bucket, in this case news3bucket:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 mb s3://news3bucket
make_bucket: news3bucket

List Buckets:

List our endpoint, to see the buckets on our server:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 ls /
2017-09-08 15:01:40 news3bucket

Upload an Object to your Bucket:

We will upload an image awsddb-1.png to our new bucket:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 cp awsddb-1.png s3://news3bucket/
upload: ./awsddb-1.png to s3://news3bucket/awsddb-1.png

List Bucket:

List your bucket, to see the uploaded object:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 ls s3://news3bucket
2017-09-08 15:03:11      19851 awsddb-1.png

Download Object:

Download the image from your Bucket, and set the local file to file.png:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 cp s3://news3bucket/awsddb-1.png file.png
download: s3://news3bucket/awsddb-1.png to ./file.png

Web Access:

You can also access Minio’s Web Interface on the port that you have exposed, in my case: http://MINIO-IP:9000/minio/
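
You can also generate a time-limited download link for an object with presign, using the same profile and endpoint (the default expiry is one hour):

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 presign s3://news3bucket/awsddb-1.png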

How to Create a Local Docker Swarm Cluster With Docker in Docker on Your Workstation

Creating a Docker Swarm cluster locally on your workstation, using Docker in Docker (dind), for testing purposes.

Create The Nodes:

Create the Docker containers that will act as our Docker nodes:

$ docker run --privileged --name docker-node1 -v /Users/ruan/docker/did/vols/node1:/var/lib/docker -d docker:dind --storage-driver=vfs
$ docker run --privileged --name docker-node2 -v /Users/ruan/docker/did/vols/node2:/var/lib/docker -d docker:dind --storage-driver=vfs
$ docker run --privileged --name docker-node3 -v /Users/ruan/docker/did/vols/node3:/var/lib/docker -d docker:dind --storage-driver=vfs
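
The three node containers should now be running, which you can confirm with:

$ docker ps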

Initialize the Swarm:

Log onto the manager node:

$ docker exec -it docker-node1 sh

Initialize the Swarm:

$ docker swarm init --advertise-addr eth0
Swarm initialized: current node (17ydtkqdwxzwea2riadxj4zbw) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4goolm8dvwictc7d39aonpcv6ca1pfj31q7irjga17o2srzf6f-b4k3hln6ogvjgmnbs1qxnjvj9 172.17.0.2:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Join the Worker Nodes to the Swarm:

$ docker exec -it docker-node2 sh
/ # docker swarm join --token SWMTKN-1-4mvb68vefr3dogxr6omu3uq04r4gddftdbmfomxo9pefks9siu-3t7ua7k2xigl9rwgp4dwzcxm0 172.17.0.2:2377
This node joined a swarm as a worker.
$ docker exec -it docker-node3 sh
/ # docker swarm join --token SWMTKN-1-4mvb68vefr3dogxr6omu3uq04r4gddftdbmfomxo9pefks9siu-3t7ua7k2xigl9rwgp4dwzcxm0 172.17.0.2:2377
This node joined a swarm as a worker.

List the Nodes:

Log onto the Manager node and list the nodes:

$ docker exec -it docker-node1 sh
/ # docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
1hnq4b4w87w6trobwye5ap4sh *   5657c28bf618        Ready               Active              Leader
wglbb029g1kczttiaf5r6iavi     b2924bb8e555        Ready               Active
xxr9kdqy49u2tx61w31ife90j     6622a06a1b3c        Ready               Active

Traefik:

Creating an HTTP reverse proxy using Traefik:

$ docker network create --driver overlay traefik-net
$ docker service create \
--name traefik \
--constraint 'node.role==manager' \
--publish 80:80 \
--publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik:camembert \
--docker \
--docker.swarmmode \
--docker.domain=ruanbekker.internal \
--docker.watch \
--logLevel=DEBUG \
--web

Getting Started With Chef: Creating a Website With Apache

From my previous post we got started with installing the Chef Development Kit and using the file resource type.

In this post we will create a recipe that will:

  • Update the APT cache
  • Install the apache2 package
  • Enable and start Apache2 on boot
  • Create an index.html for our website

Creating a Web Server:

We will create our webserver.rb recipe, and our first section will consist of the following:

  • Ensuring our APT cache is up to date
  • The frequency property indicates 24 hours (86,400 seconds)
  • The :periodic action indicates that the update occurs periodically
  • Optional: the :update action will update the APT cache on each run
  • Installing the apache2 package (no action is specified, so it defaults to :install)
apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

Running this recipe at this moment will provide the following output:

$ chef-client --local-mode webserver.rb
..
Converging 2 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic
    - update new lists of packages
    * directory[/var/lib/apt/periodic] action create (up to date)
    * directory[/etc/apt/apt.conf.d] action create (up to date)
    * file[/etc/apt/apt.conf.d/15update-stamp] action create_if_missing
      - create new file /etc/apt/apt.conf.d/15update-stamp
      - update content in file /etc/apt/apt.conf.d/15update-stamp from none to 174cdb
      --- /etc/apt/apt.conf.d/15update-stamp    2017-09-04 16:53:31.604488306 +0000
      +++ /etc/apt/apt.conf.d/.chef-15update-stamp20170904-5727-1p2g8zw 2017-09-04 16:53:31.604488306 +0000
      @@ -1 +1,2 @@
      +APT::Update::Post-Invoke-Success {"touch /var/lib/apt/periodic/update-success-stamp 2>/dev/null || true";};
    * execute[apt-get -q update] action run
      - execute apt-get -q update

Next, we will set apache2 to start on boot and start the service:

apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

service 'apache2' do
  supports status: true
  action [:enable, :start]
end

Running our chef client, will produce the following output:

$ chef-client --local-mode webserver.rb
Converging 3 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic (up to date)
  * apt_package[apache2] action install (up to date)
  * service[apache2] action enable (up to date)
  * service[apache2] action start
    - start service service[apache2]

Verifying that our apache2 service is started:

$ /etc/init.d/apache2 status
 * apache2 is running

Next, using the file resource, we will replace the /var/www/html/index.html landing page with the one that we will specify in our recipe:

apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

service 'apache2' do
  supports status: true
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content '<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>'
end

And our full webserver.rb recipe will look like the following:

# update cache periodically every 24 hours
apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

# install apache2 (:install is the default action)
package 'apache2'

# enable apache2 on boot and start apache2
service 'apache2' do
  supports status: true
  action [:enable, :start]
end

# create a custom html page
file '/var/www/html/index.html' do
  content '<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>'
end

Running our Chef Client against our Recipe:

For the previous snippets we took it section by section; here we will run the whole recipe:

$ chef-client --local-mode webserver.rb
...
Converging 4 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic (up to date)
  * apt_package[apache2] action install (up to date)
  * service[apache2] action enable (up to date)
  * service[apache2] action start (up to date)
  * file[/var/www/html/index.html] action create
    - update content in file /var/www/html/index.html from 538f31 to 9d1dca
    --- /var/www/html/index.html        2017-09-04 16:53:55.134043652 +0000
    +++ /var/www/html/.chef-index20170904-7451-3kt1p7.html      2017-09-04 17:00:16.306831840 +0000

Testing our Website:

And finally, testing our website:

$ curl -XGET http://localhost/
<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>

Getting Started With Chef: Working With Files

Chef gives you infrastructure as code, automation, and configuration management: having a service that can do all of that, and especially having something in place that knows what the desired state of your configurations and applications should be, is definitely a plus.

I stumbled upon learn.chef.io, which is a great resource, as I am learning Chef at this moment.

The Components of Chef consists of:

  • Chef Workstation (ChefDK enables you to use the tools locally to test before pushing your code to the Chef Server)
  • Chef Server (Central Repository for your Cookbooks and info of every node Chef Manages)
  • Chef Client (a Node that is Managed by the Chef Server)

In this post we will install the Chef Development Kit, and work with the chef-client in local-mode to create, update and delete files using the file resource type.

Getting Started with Chef: Installation:

Installing the Chef Development Kit:

$ sudo apt-get update && apt-get upgrade -y
$ sudo apt-get install curl git -y
$ curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P chefdk -c stable -v 2.0.28

Configure a Resource:

Using chef-client in local mode, we will use the file resource to create a recipe that creates our motd file:

hello.rb
file '/tmp/motd' do
  content 'hello world'
end

Running chef client against our recipe in local-mode:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - create new file /tmp/motd
    - update content in file /tmp/motd from none to b94d27
    --- /tmp/motd       2017-09-04 16:18:19.265699403 +0000
    +++ /tmp/.chef-motd20170904-4500-54fh8w     2017-09-04 16:18:19.265699403 +0000
    @@ -1 +1,2 @@
    +hello world

Verify the Content:

$ cat /tmp/motd
hello world

Running the command again will do nothing, as the content is in its desired state:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create (up to date)

Changing our recipe by replacing the word world with chef, we will find that the content of our file will be updated:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - update content in file /tmp/motd from b94d27 to c38c60
    --- /tmp/motd       2017-09-04 16:18:19.265699403 +0000
    +++ /tmp/.chef-motd20170904-4903-wuigr      2017-09-04 16:23:21.379649145 +0000
    @@ -1,2 +1,2 @@
    -hello world
    +hello chef

Let’s overwrite the content of our motd file manually:

$ echo 'hello robots' > /tmp/motd

Running Chef Client against our recipe again, allows Chef to restore our content to the desired state that is specified in our recipe:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - update content in file /tmp/motd from 548078 to c38c60
    --- /tmp/motd       2017-09-04 16:24:29.308286834 +0000
    +++ /tmp/.chef-motd20170904-5103-z16ssa     2017-09-04 16:24:42.528021632 +0000
    @@ -1,2 +1,2 @@
    -hello robots
    +hello chef

Deleting a file from our recipe:

destroy.rb
file '/tmp/motd' do
  action :delete
end

Now, running the Chef client against this recipe will remove our file:

$ chef-client --local-mode destroy.rb
Recipe: @recipe_files::/root/chef-repo/destroy.rb
  * file[/tmp/motd] action delete
    - delete file /tmp/motd

Splitting Characters With Python to Determine Name Surname and Email Address

I had a bunch of email addresses in a specific format that I could strip characters from, to build up a username, name, and surname from the email address, which I could then use for dynamic reporting.

Using Split in Python

Here I will define emailaddress as a string, then use Python's split() function to get the values that I want:

>>> emailaddress = "ruan.bekker@domain.com"
>>> emailaddress.split("@", 1)
['ruan.bekker', 'domain.com']
>>> username = emailaddress.split("@", 1)[0]
>>> username
'ruan.bekker'
>>> username.split(".", 1)
['ruan', 'bekker']
>>> name = username.split(".", 1)[0].capitalize()
>>> surname = username.split(".", 1)[1].capitalize()
>>> name
'Ruan'
>>> surname
'Bekker'
>>> username
'ruan.bekker'
>>> emailaddress
'ruan.bekker@domain.com'

Print The Values in Question:

Now that we have defined our values, let's print them:

>>> print("Name: {0}, Surname: {1}, UserName: {2}, Email Address: {3}".format(name, surname, username, emailaddress))
Name: Ruan, Surname: Bekker, UserName: ruan.bekker, Email Address: ruan.bekker@domain.com

From here you can, for example, build an email function and pass these values to it to get a specific job done.
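
As a minimal sketch, the same logic can be wrapped in a function (the parse_email name and the returned dictionary are my own illustration):

>>> def parse_email(emailaddress):
...     # split 'name.surname@domain' into its parts
...     username = emailaddress.split("@", 1)[0]
...     name, surname = [part.capitalize() for part in username.split(".", 1)]
...     return {'name': name, 'surname': surname, 'username': username, 'email': emailaddress}
...
>>> parse_email('james.bond@domain.com')['name']
'James'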

Update: Capitalize from One String

Today, I had to capitalize the name and surname that was linked to one variable:

>>> user = 'james.bond'
>>> username = ' '.join(map(str, [x.capitalize() for x in user.split(".")]))
>>> print(username)
James Bond