Ruan Bekker's Blog

From a Curious mind to Posts on Github

Using Minio's Python SDK to Interact With a Minio S3 Bucket

In our previous post, we set up a Minio Server, which is a self-hosted alternative to Amazon's S3 service.

We will go through some basic examples of using the Python SDK to interact with Minio.

Installing the Minio Python Library:

Ensure that Python and pip are installed, then install the Minio Python library:

$ virtualenv -p /usr/local/bin/python2.7 .venv
$ source .venv/bin/activate
(.venv)$ pip install minio

Create a Bucket:

Enter the Python interpreter and create an S3 bucket on your Minio Server:

>>> from minio import Minio
>>> client = Minio('10.0.0.2:9000', access_key='ASDASDASD', secret_key='ASDASDASD', secure=False)
>>> client.make_bucket('pythonbucket', location='us-west-1')
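
If you run make_bucket against a bucket name that already exists, the SDK will raise an error, so it can help to check for the bucket first. A minimal sketch, assuming the client from above and the SDK's bucket_exists method:

>>> if not client.bucket_exists('pythonbucket'):
...     client.make_bucket('pythonbucket', location='us-west-1')
... else:
...     print('bucket already exists')
...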

List Buckets:

I have also created a bucket from my previous post, so we should have 2 buckets:

>>> buckets = client.list_buckets()
>>> for bucket in buckets:
...     print(bucket.name)
...
news3bucket
pythonbucket

Put Objects to your Bucket:

Write a string to a file, then upload the file to 2 different destination objects. The arguments are: bucket name, destination object name, source file.

>>> data = open('file.txt', 'w')
>>> data.write('This is some text' + '\n')
>>> data.close()

>>> client.fput_object('pythonbucket', 'bucket/contents/file.txt', 'file.txt')
'6b8c327f0fc6f470c030a5b6c71154c5'
>>> client.fput_object('pythonbucket', 'bucket/contents/file2.txt', 'file.txt')
'6b8c327f0fc6f470c030a5b6c71154c5'
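
fput_object uploads from a file on disk; the SDK also provides put_object, which takes a file-like object and a length, which is handy when the data is already in memory. A rough sketch, assuming the put_object signature of bucket name, object name, stream and length:

>>> import io
>>> payload = b'This is some text\n'
>>> client.put_object('pythonbucket', 'bucket/contents/file3.txt', io.BytesIO(payload), len(payload))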

List Objects in your Bucket:

List the objects in your bucket:

>>> objects = client.list_objects('pythonbucket', prefix='bucket/contents/', recursive=True)
>>> for obj in objects:
...     print(obj.object_name, obj.size)
...
('bucket/contents/file.txt', 18)
('bucket/contents/file2.txt', 18)

Remove Objects from your Bucket:

Remove the objects from your bucket, then list the bucket to verify that they are removed:

>>> client.remove_object('pythonbucket', 'bucket/contents/file.txt')
>>> client.remove_object('pythonbucket', 'bucket/contents/file2.txt')

>>> objects = client.list_objects('pythonbucket', prefix='bucket/contents/', recursive=True)
>>> for obj in objects:
...     print(obj.object_name, obj.size)
...
>>>

Remove the Bucket:

Remove the Bucket that we created:

>>> client.remove_bucket('pythonbucket')
>>> exit()

Resources:

Minio has some great documentation; see their Python SDK documentation for more information.

Run Your Self-Hosted S3 Service With Minio on Docker Swarm

Minio is a distributed object storage server built for cloud applications, which is similar to Amazon’s S3 Service.

Today we will create the service on Docker Swarm. As I don't currently have an external data store like GlusterFS or NFS, I will host the data on the manager node and set a constraint for the service so that it can only run on the manager node.

Prepare the Data Directory:

I will only rely on the manager node for my data, so on my manager node:

$ mkdir -p /mnt/data

Create the Service:

If you have a replicated Gluster volume or NFS which is mounted throughout your Docker Swarm, you can create the directory path for it and then update your --mount source path to your external data store. In my case, I will just point it to my manager node's /mnt/data path, as I have set up the service to only run on the one manager node in my swarm:

$ docker service create \
--name minio \
--network appnet \
--replicas 1 \
--publish 9000:9000 \
--constraint 'node.role==manager' \
-e "MINIO_ACCESS_KEY=AKIAASDKJASDL" \
-e "MINIO_SECRET_KEY=AKIAASDKJASDL" \
--mount "type=bind,source=/mnt/data,target=/data" \
minio/minio server /data
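
Once the service has been created, it is worth checking that the task was actually scheduled on the manager node before carrying on, for example:

$ docker service ps minio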

Install the AWS CLI Tools:

We will use the awscli tools to interact with our Minio Server:

$ pip install awscli

Configure the Client:

Configure the awscli client with the access details that we passed in our docker service:

$ aws configure --profile minio
AWS Access Key ID []: AKIAASDKJASDL
AWS Secret Access Key []: ASLDKJASDLKJASDLKJ
Default region name []: us-west-1
Default output format []: json
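
Minio expects AWS Signature Version 4; if your awscli profile defaults to an older signature version you may see signature mismatch errors. As an optional tweak (assuming a reasonably recent awscli), you can pin it for this profile:

$ aws configure set profile.minio.s3.signature_version s3v4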

Create the Bucket:

Create a new bucket, in this case news3bucket:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 mb s3://news3bucket
make_bucket: news3bucket

List Buckets:

List our endpoint, to see the buckets on our server:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 ls /
2017-09-08 15:01:40 news3bucket

Upload an Object to your Bucket:

We will upload an image awsddb-1.png to our new bucket:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 cp awsddb-1.png s3://news3bucket/
upload: ./awsddb-1.png to s3://news3bucket/awsddb-1.png

List Bucket:

List your bucket, to see the uploaded object:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 ls s3://news3bucket
2017-09-08 15:03:11      19851 awsddb-1.png

Download Object:

Download the image from your Bucket, and set the local file to file.png:

$ aws --profile minio --endpoint-url http://MINIO-IP:9000 s3 cp s3://news3bucket/awsddb-1.png file.png
download: s3://news3bucket/awsddb-1.png to ./file.png

Web Access:

You can also access Minio’s Web Interface on the port that you have exposed, in my case: http://MINIO-IP:9000/minio/


How to Create a Local Docker Swarm Cluster With Docker in Docker on Your Workstation

Creating a Docker Swarm cluster locally on your workstation, using Docker in Docker (DinD), for testing purposes.


Create The Nodes:

Create the Docker containers that will act as our Docker nodes:

$ docker run --privileged --name docker-node1 -v /Users/ruan/docker/did/vols/node1:/var/lib/docker -d docker:dind --storage-driver=vfs
$ docker run --privileged --name docker-node2 -v /Users/ruan/docker/did/vols/node2:/var/lib/docker -d docker:dind --storage-driver=vfs
$ docker run --privileged --name docker-node3 -v /Users/ruan/docker/did/vols/node3:/var/lib/docker -d docker:dind --storage-driver=vfs

Initialize the Swarm:

Log onto the manager node:

$ docker exec -it docker-node1 sh

Initialize the Swarm:

$ docker swarm init --advertise-addr eth0
Swarm initialized: current node (17ydtkqdwxzwea2riadxj4zbw) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4goolm8dvwictc7d39aonpcv6ca1pfj31q7irjga17o2srzf6f-b4k3hln6ogvjgmnbs1qxnjvj9 172.17.0.2:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Join the Worker Nodes to the Swarm:

$ docker exec -it docker-node2 sh
/ # docker swarm join --token SWMTKN-1-4mvb68vefr3dogxr6omu3uq04r4gddftdbmfomxo9pefks9siu-3t7ua7k2xigl9rwgp4dwzcxm0 172.17.0.2:2377
This node joined a swarm as a worker.
$ docker exec -it docker-node3 sh
/ # docker swarm join --token SWMTKN-1-4mvb68vefr3dogxr6omu3uq04r4gddftdbmfomxo9pefks9siu-3t7ua7k2xigl9rwgp4dwzcxm0 172.17.0.2:2377
This node joined a swarm as a worker.

List the Nodes:

Log onto the Manager node and list the nodes:

$ docker exec -it docker-node1 sh
/ # docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
1hnq4b4w87w6trobwye5ap4sh *   5657c28bf618        Ready               Active              Leader
wglbb029g1kczttiaf5r6iavi     b2924bb8e555        Ready               Active
xxr9kdqy49u2tx61w31ife90j     6622a06a1b3c        Ready               Active

Traefik:

Creating an HTTP reverse proxy using Traefik:

$ docker network create --driver overlay traefik-net
$ docker service create \
--name traefik \
--constraint 'node.role==manager' \
--publish 80:80 \
--publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik:camembert --docker --docker.swarmmode --docker.domain=ruanbekker.internal --docker.watch --logLevel=DEBUG --web
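
To see Traefik picking up swarm services, you could deploy a test service on the same overlay network with labels that Traefik watches for. This is only a sketch: the emilevauge/whoami image and the Traefik v1 label names are assumptions, adjust them for your setup:

$ docker service create \
--name whoami \
--network traefik-net \
--label traefik.port=80 \
--label traefik.frontend.rule=Host:whoami.ruanbekker.internal \
emilevauge/whoami

$ curl -H "Host: whoami.ruanbekker.internal" http://localhost/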

Getting Started With Chef: Creating a Website With Apache

From my previous post we got started with installing the Chef Development Kit and using the file resource type.

In this post we will create a recipe that will:

  • Update the APT cache
  • Install the Apache2 package
  • Enable and start Apache2 on boot
  • Create an index.html for our website

Creating a Web Server:

We will create our webserver.rb recipe, and our first section will consist of the following:

  • Ensure our APT cache is up to date
  • The frequency property indicates 24 hours (86,400 seconds)
  • The :periodic action indicates that the update occurs periodically
  • Optional: the :update action will update the apt cache on each run
  • Install the apache2 package (no action is specified, so it defaults to :install)
apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

Running this recipe at this moment will provide the following output:

$ chef-client --local-mode webserver.rb
..
Converging 2 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic
    - update new lists of packages
    * directory[/var/lib/apt/periodic] action create (up to date)
    * directory[/etc/apt/apt.conf.d] action create (up to date)
    * file[/etc/apt/apt.conf.d/15update-stamp] action create_if_missing
      - create new file /etc/apt/apt.conf.d/15update-stamp
      - update content in file /etc/apt/apt.conf.d/15update-stamp from none to 174cdb
      --- /etc/apt/apt.conf.d/15update-stamp    2017-09-04 16:53:31.604488306 +0000
      +++ /etc/apt/apt.conf.d/.chef-15update-stamp20170904-5727-1p2g8zw 2017-09-04 16:53:31.604488306 +0000
      @@ -1 +1,2 @@
      +APT::Update::Post-Invoke-Success {"touch /var/lib/apt/periodic/update-success-stamp 2>/dev/null || true";};
    * execute[apt-get -q update] action run
      - execute apt-get -q update

Next, we will set apache2 to start on boot and start the service:

apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

service 'apache2' do
  supports status: true
  action [:enable, :start]
end

Running our chef-client will produce the following output:

$ chef-client --local-mode webserver.rb
Converging 3 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic (up to date)
  * apt_package[apache2] action install (up to date)
  * service[apache2] action enable (up to date)
  * service[apache2] action start
    - start service service[apache2]

Verifying that our apache2 service is started:

$ /etc/init.d/apache2 status
 * apache2 is running

Next, using the file resource, we will replace the /var/www/html/index.html landing page with the one that we specify in our recipe:

apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

service 'apache2' do
  supports status: true
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content '<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>'
end

And our full webserver.rb recipe will look like the following:

# update cache periodically every 24 hours
apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

# install apache2 (:install is the default action)
package 'apache2'

# enable apache2 on boot and start apache2
service 'apache2' do
  supports status: true
  action [:enable, :start]
end

# create a custom html page
file '/var/www/html/index.html' do
  content '<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>'
end

Running our Chef Client against our Recipe:

In the previous snippets we went through it section by section; here we will run the whole recipe:

$ chef-client --local-mode webserver.rb
...
Converging 4 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic (up to date)
  * apt_package[apache2] action install (up to date)
  * service[apache2] action enable (up to date)
  * service[apache2] action start (up to date)
  * file[/var/www/html/index.html] action create
    - update content in file /var/www/html/index.html from 538f31 to 9d1dca
    --- /var/www/html/index.html        2017-09-04 16:53:55.134043652 +0000
    +++ /var/www/html/.chef-index20170904-7451-3kt1p7.html      2017-09-04 17:00:16.306831840 +0000

Testing our Website:

And finally, testing our website:

$ curl -XGET http://localhost/
<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>
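
As a side note, if you want to preview what a recipe would change without actually converging the node, chef-client has a dry-run style option that you can combine with local mode:

$ chef-client --local-mode --why-run webserver.rb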


Getting Started With Chef: Working With Files

Chef: Infrastructure as Code, automation, and configuration management. Having a service that can do all of that, and especially having something in place that knows what the desired state of your configurations and applications should be, is definitely a plus.

I stumbled upon learn.chef.io, which is a great resource, as I am learning Chef at the moment.

The Components of Chef consists of:

  • Chef Workstation (ChefDK enables you to use the tools locally to test before pushing your code to the Chef Server)
  • Chef Server (Central Repository for your Cookbooks and info of every node Chef Manages)
  • Chef Client (a Node that is Managed by the Chef Server)

In this post we will install the Chef Development Kit, and work with the chef-client in local-mode to create, update and delete files using the file resource type.

Getting Started with Chef: Installation:

Installing the Chef Development Kit:

$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo apt-get install curl git -y
$ curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P chefdk -c stable -v 2.0.28

Configure a Resource:

Using chef-client in local mode, we will use the file resource to create a recipe that creates our motd file:

hello.rb
file '/tmp/motd' do
  content 'hello world'
end

Running chef client against our recipe in local-mode:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - create new file /tmp/motd
    - update content in file /tmp/motd from none to b94d27
    --- /tmp/motd       2017-09-04 16:18:19.265699403 +0000
    +++ /tmp/.chef-motd20170904-4500-54fh8w     2017-09-04 16:18:19.265699403 +0000
    @@ -1 +1,2 @@
    +hello world

Verify the Content:

$ cat /tmp/motd
hello world

Running the command again will do nothing, as the content is in its desired state:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create (up to date)

If we change our recipe by replacing the word world with chef, we will find that the content of our file gets updated:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - update content in file /tmp/motd from b94d27 to c38c60
    --- /tmp/motd       2017-09-04 16:18:19.265699403 +0000
    +++ /tmp/.chef-motd20170904-4903-wuigr      2017-09-04 16:23:21.379649145 +0000
    @@ -1,2 +1,2 @@
    -hello world
    +hello chef

Let’s overwrite the content of our motd file manually:

$ echo 'hello robots' > /tmp/motd

Running Chef Client against our recipe again, allows Chef to restore our content to the desired state that is specified in our recipe:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - update content in file /tmp/motd from 548078 to c38c60
    --- /tmp/motd       2017-09-04 16:24:29.308286834 +0000
    +++ /tmp/.chef-motd20170904-5103-z16ssa     2017-09-04 16:24:42.528021632 +0000
    @@ -1,2 +1,2 @@
    -hello robots
    +hello chef

Deleting a file from our recipe:

destroy.rb
file '/tmp/motd' do
  action :delete
end

Now running chef-client against this recipe will remove our file:

$ chef-client --local-mode destroy.rb
Recipe: @recipe_files::/root/chef-repo/destroy.rb
  * file[/tmp/motd] action delete
    - delete file /tmp/motd


Splitting Characters With Python to Determine Name Surname and Email Address

I had a bunch of email addresses in a specific format that I could strip characters from, to build up a username, name, and surname from the email address, which I could then use for dynamic reporting.

Using Split in Python

Here I will set the value of emailaddress to a string, then use Python's split() function to get the values that I want:

>>> emailaddress = "ruan.bekker@domain.com"
>>> emailaddress.split("@", 1)
['ruan.bekker', 'domain.com']
>>> username = emailaddress.split("@", 1)[0]
>>> username
'ruan.bekker'
>>> username.split(".", 1)
['ruan', 'bekker']
>>> name = username.split(".", 1)[0].capitalize()
>>> surname = username.split(".", 1)[1].capitalize()
>>> name
'Ruan'
>>> surname
'Bekker'
>>> username
'ruan.bekker'
>>> emailaddress
'ruan.bekker@domain.com'

Print The Values in Question:

Now that we have defined our values, let's print them:

>>> print("Name: {0}, Surname: {1}, UserName: {2}, Email Address: {3}".format(name, surname, username, emailaddress))
Name: Ruan, Surname: Bekker, UserName: ruan.bekker, Email Address: ruan.bekker@domain.com

From here you can, for example, build up a small helper function that you pass these values to in order to get a specific job done.
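
As a rough sketch of what such a helper could look like (the function name parse_email is just an example):

>>> def parse_email(emailaddress):
...     username = emailaddress.split("@", 1)[0]
...     name, surname = [x.capitalize() for x in username.split(".", 1)]
...     return name, surname, username
...
>>> parse_email("ruan.bekker@domain.com")
('Ruan', 'Bekker', 'ruan.bekker')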

Update: Capitalize from One String

Today, I had to capitalize the name and surname that were contained in one variable:

>>> user = 'james.bond'
>>> username = ' '.join(map(str, [x.capitalize() for x in user.split(".")]))
>>> print(username)
James Bond

Setup a 3 Node MongoDB Replica Set on Ubuntu 16

Today we will set up a 3 node replica set for MongoDB on Ubuntu 16. A replica set is a form of data replication, so that your data resides on more than one node for data durability. We will set up the 1st node as the primary, the 2nd as a secondary, and the 3rd node will act as an arbiter.

The arbiter node can be thought of as a voting-only node; it is put in place to help prevent a split-brain scenario.


Installing MongoDB on our 3 Nodes:

In our case, using Ubuntu 16.04, we set up the MongoDB repository and install MongoDB from it:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
$ echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
$ sudo apt update
$ sudo apt install -y mongodb-org

Preparing our Directories:

$ mkdir -p /srv/mongodb/rs0-0 /srv/mongodb/rs0-1 /srv/mongodb/rs0-2
$ mkdir -p /var/log/mongodb/rs0-0 /var/log/mongodb/rs0-1 /var/log/mongodb/rs0-2

Populating our MongoDB Configuration:

  • MongoDB Prefers XFS File Systems when using WiredTiger.
$ cat > /etc/mongod.conf << EOF
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: false
  mmapv1:
    smallFiles: true

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

net:
  port: 27017
  bindIp: 0.0.0.0

replication:
  replSetName: rs0

security:
  authorization: enabled
EOF

Enable MongoDB On Startup and Start MongoDB:

$ systemctl enable mongod
$ systemctl restart mongod

Setup MongoDB Replica Sets:

In our setup we will have 3 nodes (mongodb-1, mongodb-2, mongodb-3). From our primary node, connect to MongoDB and initialize our replica set:

$ mongo
MongoDB shell version v3.4.7
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.7
> rs.initiate()
{
        "info2" : "no configuration specified. Using a default configuration for the set",
        "me" : "mysql-1:27017",
        "ok" : 1
}

Next, add our 2 other MongoDB nodes; remember that mongodb-3 is our arbiter node:

rs0:SECONDARY> rs.add("mongodb-2")
{ "ok" : 1 }
rs0:PRIMARY> rs.add("mongodb-3", true)
{ "ok" : 1 }

Verify the Replica Set Status:

rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2017-08-27T13:17:42.469Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(0, 0),
                        "t" : NumberLong(-1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1503839853, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1503839722, 1),
                        "t" : NumberLong(-1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mysql-1:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 422,
                        "optime" : {
                                "ts" : Timestamp(1503839853, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T13:17:33Z"),
                        "electionTime" : Timestamp(1503839723, 1),
                        "electionDate" : ISODate("2017-08-27T13:15:23Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "mongodb-2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 28,
                        "optime" : {
                                "ts" : Timestamp(1503839853, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                        "optimeDate" : ISODate("2017-08-27T13:17:33Z"),
                        "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T13:17:41.707Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T13:17:40.699Z"),
                        "pingMs" : NumberLong(4),
                        "syncingTo" : "mysql-1:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "mongodb-3:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 8,
                        "lastHeartbeat" : ISODate("2017-08-27T13:17:41.721Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T13:17:38.749Z"),
                        "pingMs" : NumberLong(2),
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}
rs0:PRIMARY> exit
bye

Setup Auth:

Set up authentication on our MongoDB database; we will create the user adminuser and set the password to secret:

rs0:PRIMARY> use admin
switched to db admin

rs0:PRIMARY> db.createUser({user: "adminuser", pwd: "secret", roles:[{role: "root", db: "admin"}]})
Successfully added user: {
        "user" : "adminuser",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}
rs0:PRIMARY> exit

Restart MongoDB:

$ systemctl restart mongod

Connect and Authenticate against MongoDB:

Connect to your MongoDB Cluster with auth:

$ mongo --host mongodb.example.com --port 27017 -u <username> -p --authenticationDatabase admin
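
From application code you would typically connect to the replica set as a whole rather than to a single node. A minimal sketch using pymongo (assuming pymongo is installed and the hostnames resolve):

from pymongo import MongoClient

# connect to the replica set, authenticating against the admin database
client = MongoClient('mongodb://adminuser:secret@mongodb-1:27017,mongodb-2:27017/?replicaSet=rs0&authSource=admin')
print(client.admin.command('replSetGetStatus')['set'])  # prints: rs0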

Setup HAProxy Load Balancer for MySQL Galera With IP Whitelisting and Backup Servers

Today we will set up an HAProxy service for our 3 node MySQL Galera Cluster.

Our Setup:

  • 3 Node Galera MySQL Cluster
  • 3 HAProxy Services (Each HAProxy Service Running on the MySQL Nodes)
  • MySQL Listens on Port 3307
  • HAProxy Listens on Port 3306 and Proxies through to 3307

I have set up HAProxy on the same nodes as the MySQL servers for my use case, but you can also set up HAProxy on a node outside the MySQL hosts.

So essentially our MySQL Galera Cluster is a Multi Master Setup, but for now we will only accept connections from Node-A, and have Node-B and Node-C as Backup servers. Should Node-A go down, HAProxy will route connections to Node-B, and if Node-B also goes down, connections will be routed to Node-C.

If the primary node, Node-A, recovers, connections will be routed back to Node-A.

Security:

We use iptables to allow traffic between the nodes on port TCP/3307, and allow all traffic to port TCP/3306, as HAProxy will handle the IP-based access control:

Iptables for Each Node
$ iptables -I INPUT -s {Node-A} -p tcp --dport 3307 -j ACCEPT
$ iptables -I INPUT -s {Node-B} -p tcp --dport 3307 -j ACCEPT
$ iptables -I INPUT -s {Node-C} -p tcp --dport 3307 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 3306 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 3307 -j DROP

HAProxy:

Installing HAProxy on Ubuntu:

Install HAProxy
$ sudo apt update
$ sudo apt install haproxy -y

Configure HAProxy with a port 3306 listener, specify the source addresses that should be authorized to communicate with MySQL, and then specify the servers of our MySQL Galera Cluster to proxy connections to, with 2 of them as backup servers:

/etc/haproxy/haproxy.cfg
global
  log         127.0.0.1 local2
  chroot      /var/lib/haproxy
  pidfile     /var/run/haproxy.pid
  maxconn     1020
  user        haproxy
  group       haproxy
  daemon

  stats socket /var/lib/haproxy/stats.sock mode 600 level admin
  stats timeout 2m

defaults
  mode    tcp
  log     global
  option  dontlognull
  option  redispatch
  retries                   3
  timeout queue             45s
  timeout connect           5s
  timeout client            1m
  timeout server            1m
  timeout check             10s
  maxconn                   1020

listen stats
  bind    *:80
  mode    http
  stats   enable
  stats   show-legends
  stats   refresh           5s
  stats   uri               /
  stats   realm             Haproxy\ Statistics
  stats   auth              admin:secret
  stats   admin             if TRUE

listen galera-lb
  bind    *:3306
  mode    tcp
  acl     network_allowed src 10.10.1.0/24 10.32.15.2/32
  tcp-request               content accept if network_allowed
  tcp-request               content reject
  default_backend           galera-cluster

backend galera-cluster
  balance roundrobin
  server  scw-mysql-1 10.0.0.2:3307  check
  server  scw-mysql-2 10.0.0.3:3307  check backup
  server  scw-mysql-3 10.0.0.4:3307  check backup

Start HAProxy:

Start HAProxy Service
$ sudo systemctl enable haproxy
$ sudo systemctl restart haproxy

Authorize HAProxy Hostnames to Connect to MySQL:

In this case we need to allow the HAProxy hostnames to connect to MySQL:

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'secrets' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
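
To confirm that connections are flowing through HAProxy to the primary node, you can connect to the HAProxy listener on port 3306 and check which backend you land on (a quick test, substitute your HAProxy address):

$ mysql -h {HAProxy-IP} -P 3306 -u root -p -e "SELECT @@hostname;"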


Secure Your Access to Kibana 5 and Elasticsearch 5 With Nginx for AWS

As of now, AWS does not offer VPC support for Elasticsearch, which makes it a bit difficult to authorize private IP ranges.

One workaround is to set up an Nginx reverse proxy on AWS within your private VPC, associate an EIP with your Nginx EC2 instance, and then authorize that EIP in your Elasticsearch IP access policy.


Our Setup:

In this setup, we will have an internal ELB (Elastic Load Balancer) with 1 or more Nginx EC2 instances behind it, and we will set up Nginx to reverse proxy our connections through to our Elasticsearch endpoint.

We will also set up basic HTTP authentication for our / (Elasticsearch) endpoint and our /kibana endpoint. We will keep the authentication separate, so that the credentials for ES and Kibana are not the same, but depending on your use case you can have both endpoints reference the same credential file.

Install Nginx

Depending on your Linux distribution, the package manager may differ; I am using Amazon Linux:

Install Nginx
$ sudo yum update -y
$ sudo yum install nginx httpd-tools -y

Configure Nginx:

Remove the default configuration and replace the nginx.conf with the following:

Remove Default Nginx Config
$ sudo rm -r /etc/nginx/nginx.conf

Main Nginx Configuration:

/etc/nginx/nginx.conf
user nginx;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {

  # Basic Settings
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  server_names_hash_bucket_size 128;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # Logging Settings
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  # Gzip Settings
  gzip on;
  gzip_disable "msie6";

  # Elasticsearch Config
  include /etc/nginx/conf.d/elasticsearch.conf;
}

The Reverse Proxy Configuration:

/etc/nginx/conf.d/elasticsearch.conf
server {

  listen 80;
  server_name elk.mydomain.com;

  # error logging
  error_log /var/log/nginx/elasticsearch_error.log;

  # authentication: server wide
  #auth_basic "Auth";
  #auth_basic_user_file /etc/nginx/.secrets;

  location / {

    # authentication: elasticsearch
    auth_basic "Elasticsearch Auth";
    auth_basic_user_file /etc/nginx/.secrets_elasticsearch;

    proxy_http_version 1.1;
    proxy_set_header Host https://search.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP {NGINX-EIP};
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search.eu-west-1.es.amazonaws.com/;
    proxy_redirect https://search.eu-west-1.es.amazonaws.com/ http://{NGINX-EIP}/;

  }

  location /kibana {

    # authentication: kibana
    auth_basic "Kibana Auth";
    auth_basic_user_file /etc/nginx/.secrets_kibana;

    proxy_http_version 1.1;
    proxy_set_header Host https://search.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP {NGINX-EIP};
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search.eu-west-1.es.amazonaws.com/_plugin/kibana/;
    proxy_redirect https://search.eu-west-1.es.amazonaws.com/_plugin/kibana/ http://{NGINX-EIP}/kibana/;

  }

  # elb checks
  location /status {
    root /usr/share/nginx/html/;
  }

}

Setup Authentication:

Setup the authentication for elasticsearch and kibana:

Create Auth for Kibana and Elasticsearch
$ sudo htpasswd -c /etc/nginx/.secrets_elasticsearch admin
$ sudo htpasswd -c /etc/nginx/.secrets_kibana admin

Restart Nginx and Enable on Startup

Restart the nginx process and enable the process on boot:

Restart Nginx
$ sudo /etc/init.d/nginx restart
$ sudo chkconfig nginx on

Configure ELB:

Create a new internal ELB, set the backend instances on port 80, and point the health check to /status/index.html, as this location block does not require authentication and our ELB will be able to get a 200 response if all is good. Next, you can configure your Route 53 hosted zone, elk.mydomain.com, to map to your ELB.
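
Since the /status location serves static content from /usr/share/nginx/html/, make sure something exists there for the ELB health check to fetch, for example (paths assumed from the config above):

$ sudo mkdir -p /usr/share/nginx/html/status
$ echo "ok" | sudo tee /usr/share/nginx/html/status/index.html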

End Result

Now you should be able to access Elasticsearch on http://elk.mydomain.com/ and Kibana on http://elk.mydomain.com/kibana after authenticating.

Reference Credentials Outside Your Main Application in Python

In this post I will show one way of referencing credentials from your application in Python, without setting them in your application's code. We will create a separate Python file which will hold our credentials, and then reference them from our main application.

Our Main Application

This app will print our username, just for the sake of this example:

app.py
from config import credentials as secrets

my_username = secrets['APP1']['username']
my_password = secrets['APP1']['password']

print("Hello, your username is: {username}".format(username=my_username))

Our Credentials File

Then we have our file which will hold our credentials:

config.py
credentials = {
        'APP1': {
            'username': 'foo',
            'password': 'bar'
            }
        }

That is at least one way of doing it; you could also use environment variables via the os module, as sketched below.
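
A minimal sketch of the environment variable approach (the variable names APP1_USERNAME and APP1_PASSWORD are just examples):

import os

# read the credentials from the environment instead of a config file
my_username = os.environ.get('APP1_USERNAME')
my_password = os.environ.get('APP1_PASSWORD')

print("Hello, your username is: {username}".format(username=my_username))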
