Ruan Bekker's Blog

From a Curious mind to Posts on Github

Reference Credentials Outside Your Main Application in Python

In this post I will show one way of referencing credentials from your application in Python, without setting them in your application's code. We will create a separate Python file to hold our credentials, and then call them from our main application.

Our Main Application

This app will print our username, just for the sake of this example:

app.py
from config import credentials as secrets

my_username = secrets['APP1']['username']
my_password = secrets['APP1']['password']

print("Hello, your username is: {username}".format(username=my_username))

Our Credentials File

Then we have our file which will hold our credentials:

config.py
credentials = {
        'APP1': {
            'username': 'foo',
            'password': 'bar'
            }
        }

That is at least one way of doing it; you could also use environment variables via the os module, which is described here
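
The environment-variable alternative mentioned above could look something like the following sketch. The variable names APP1_USERNAME and APP1_PASSWORD are hypothetical, chosen just for this example; normally you would export them in your shell before running the app:

```python
import os

# Normally these would be exported in your shell, e.g.:
#   $ export APP1_USERNAME=foo
# setdefault is used here only so the example is self-contained.
os.environ.setdefault("APP1_USERNAME", "foo")

# Read the credentials from the environment instead of config.py
my_username = os.environ.get("APP1_USERNAME")
my_password = os.environ.get("APP1_PASSWORD")  # returns None if not set

print("Hello, your username is: {username}".format(username=my_username))
```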


Change IAM Username With AWS CLI

You may find yourself in a position where you need to rename one or more IAM usernames, and one way of doing this is with the AWS CLI tools.

The benefit of this is that the user's access keys remain the same, and any policies attached to the user stay in place after the username is renamed.

The only thing that changes is, of course, the username that the user logs onto the AWS Management Console with:

Details of our User:

We will change the IAM user peter to peter.franklin. Peter's current access key is AKIA123456ABCDEF1234, which is configured under the profile name peter.

Let's first get the details of our user before changing it:

$ aws --profile admin iam get-user --user-name peter
{
    "User": {
        "UserName": "peter",
        "PasswordLastUsed": "2017-08-28T13:17:22Z",
        "CreateDate": "2017-08-28T13:11:25Z",
        "UserId": "ABCDEFGHIJKLMNOPQRST",
        "Path": "/",
        "Arn": "arn:aws:iam::123456789012:user/peter"
    }
}

Rename the IAM User

Update user peter to peter.franklin:

Rename the IAM User
$ aws --profile admin iam update-user --user-name peter --new-user-name peter.franklin

Describe peter’s new username:

$ aws --profile admin iam get-user --user-name peter.franklin
{
    "User": {
        "UserName": "peter.franklin",
        "PasswordLastUsed": "2017-08-28T13:23:18Z",
        "CreateDate": "2017-08-28T13:11:25Z",
        "UserId": "ABCDEFGHIJKLNMOPQRST",
        "Path": "/",
        "Arn": "arn:aws:iam::123456789012:user/peter.franklin"
    }
}

Verify that access keys are the same:

$ aws --profile admin iam list-access-keys --user-name peter.franklin
{
    "AccessKeyMetadata": [
        {
            "UserName": "peter.franklin",
            "Status": "Active",
            "CreateDate": "2017-08-28T13:11:27Z",
            "AccessKeyId": "AKIA123456ABCDEF1234"
        }
    ]
}

At this moment we can see that Peter's AccessKeyId is still the same, which means he does not have to update his credentials on his end.

Some Useful CLI Commands:

Get only the Access Key for a User:

$ aws --profile admin iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId'
AKIA123456ABCDEF1234
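
If you prefer Python over jq for parsing CLI output, the same extraction can be done with the standard json module. A small sketch, using the list-access-keys response shown above as sample input:

```python
import json

# Sample response in the shape returned by `aws iam list-access-keys`
response = json.loads("""
{
    "AccessKeyMetadata": [
        {
            "UserName": "peter.franklin",
            "Status": "Active",
            "CreateDate": "2017-08-28T13:11:27Z",
            "AccessKeyId": "AKIA123456ABCDEF1234"
        }
    ]
}
""")

# Equivalent of jq's '.[][].AccessKeyId': collect the key id of each entry
access_keys = [key["AccessKeyId"] for key in response["AccessKeyMetadata"]]
print(access_keys[0])
```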

Determine when the AccessKey was last used, and for which Service:

For auditing, or to verify whether an AccessKeyId is being used, we can call get-access-key-last-used, which gives us the last time the key was used as well as the service it was used for.

Let Peter create a DynamoDB Table:

$ aws --profile peter dynamodb \
create-table --table-name test01 \
--attribute-definitions "AttributeName=username,AttributeType=S" \
--key-schema "AttributeName=username,KeyType=HASH" \
--provisioned-throughput "ReadCapacityUnits=1,WriteCapacityUnits=1"
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:eu-west-1:123456789012:table/test01",
        "AttributeDefinitions": [
            {
                "AttributeName": "username",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 1,
            "ReadCapacityUnits": 1
        },
        "TableSizeBytes": 0,
        "TableName": "test01",
        "TableStatus": "CREATING",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "username"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1503928537.671
    }
}

Get Detail on LastUsedDate:

$ aws --profile admin iam get-access-key-last-used --access-key-id $(aws --profile admin iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId') | jq -r '.[]'
peter.franklin
{
  "Region": "eu-west-1",
  "ServiceName": "dynamodb",
  "LastUsedDate": "2017-08-28T13:55:00Z"
}

Only getting the LastUsedDate of the AccessKeyId:

$ aws --profile admin iam get-access-key-last-used --access-key-id $(aws --profile admin iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId') | jq '.AccessKeyLastUsed.LastUsedDate'
"2017-08-28T13:55:00Z"


Using the Python API for MongoDB Using PyMongo


Requirements:

You will need to install the pymongo driver using pip:

Install Pymongo
$ pip install pymongo

A configuration file with your access credentials, which I like to keep outside my code:

config.py
credentials = {
    "mongodb": {
        "HOSTNAME": "host.domain.com",
        "USERNAME": "username",
        "PASSWORD": "password"
    }
}

Connecting to MongoDB:

From the python interpreter, connect to MongoDB:

>>> from pymongo import MongoClient
>>> from config import credentials as secrets
>>> mongo_host = secrets['mongodb']['HOSTNAME']
>>> mongo_username = secrets['mongodb']['USERNAME']
>>> mongo_password = secrets['mongodb']['PASSWORD']
>>> mongodb_client = MongoClient('mongodb://%s:%s@%s:27017/admin?authMechanism=SCRAM-SHA-1' % (mongo_username, mongo_password, mongo_host))
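
One caveat with building the connection URI via string formatting: if the username or password contains characters such as @ or : the URI becomes invalid, so the credentials should be percent-escaped first. A sketch using urllib's quote_plus (the password here is a made-up example):

```python
try:
    from urllib.parse import quote_plus  # Python 3
except ImportError:
    from urllib import quote_plus        # Python 2

mongo_host = "host.domain.com"
mongo_username = "username"
mongo_password = "p@ss:word"  # contains characters that are illegal in a raw URI

# Escape only the credential parts, then build the URI as before;
# the result can be passed to MongoClient(uri)
uri = "mongodb://%s:%s@%s:27017/admin?authMechanism=SCRAM-SHA-1" % (
    quote_plus(mongo_username), quote_plus(mongo_password), mongo_host
)
print(uri)
```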

Find the Database that you are connected to:

>>> mongodb_client.get_database().name
u'admin'

Find all the databases that are currently on your MongoDB server:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local

Create a Database, Collection and Write a Document into your Database:

Let’s create a database, in my case ruan-test, with a collection named mycollection, and then write one document into it:

>>> newdb = mongodb_client['ruan-test']
>>> newdb_collection = newdb['mycollection']
>>> doc = {"name": "frank", "surname": "jeffreys", "tags": ["person", "name"]}
>>> doc_id = newdb_collection.insert_one(doc).inserted_id
>>> print(doc_id)
59a319ec1f15a5088ba3a339

Note: you can also connect to your collection like the following

>>> newdb_collection = mongodb_client['ruan-test']['mycollection']

We have inserted one item into our database, which we can verify with count():

>>> newdb_collection.find().count()
1

As we have the value of the item's _id, we can use it to retrieve the document from our collection:

>>> newdb_collection.find_one({"_id": doc_id})
{u'_id': ObjectId('59a319ec1f15a5088ba3a339'), u'surname': u'jeffreys', u'name': u'frank', u'tags': [u'person', u'name']}

As we only have one item in our database, we can also use find_one() which will give us the exact same data:

>>> newdb_collection.find_one()
{u'_id': ObjectId('59a319ec1f15a5088ba3a339'), u'surname': u'jeffreys', u'name': u'frank', u'tags': [u'person', u'name']}

We can write some more data to our database, but this time, let's write to a different collection:

>>> newdb_collection2 = newdb['new-collection-2']
>>> item = newdb_collection2.insert_one({"name": "ruby", "surname": "james"}).inserted_id
>>> item2 = newdb_collection2.insert_one({"name": "phillip", "surname": "james"}).inserted_id

As we captured the items' _id values, we can view them:

>>> print(item)
59a31acf1f15a5088ba3a33b
>>> print(item2)
59a31a8a1f15a5088ba3a33a

Query Data from MongoDB:

We can then query for this data:

>>> newdb_collection2.find_one({"name": "ruby"})
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

>>> newdb_collection2.find_one({"_id": item})
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

Also scan for all items in the collection:

>>> scan = newdb_collection2.find({})
>>> for x in scan:
...     print(x)
...
{u'_id': ObjectId('59a31a8a1f15a5088ba3a33a'), u'surname': u'james', u'name': u'phillip'}
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

>>> newdb_collection2.find().count()
2

We can now verify that we have 2 collections in our database:

>>> newdb.collection_names()
[u'new-collection-2', u'mycollection']

Connecting to an existing Database:

Let’s connect to an existing database on our MongoDB Server:

>>> flaskdb = mongodb_client['flask_reminders']

List the collections:

>>> flaskdb.collection_names()
[u'reminders', u'usersessions']

Count the number of items in our reminders Collection:

>>> flaskdb.reminders.find().count()
624

Find a Random Item:

>>> flaskdb.reminders.find_one()
{u'category': u'Python', u'description': u'Chatbot with SQLite', u'link': u'http://rodic.fr/blog/python-chatbot-1/', u'date': u'2017-01-03', u'_id': ObjectId('586bb6dd0269103671afce32'), u'type': u'Discovered Service'}

Find One Item, with a Specific Value, for example the value AWS for our Category key:

>>> flaskdb.reminders.find_one({"category": "AWS"})
{u'category': u'AWS', u'description': u'Elasticsearch Documentation Access Policies', u'link': u'http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies', u'date': u'2017-02-13', u'_id': ObjectId('58a1d45202691070616947c3'), u'type': u'Documentation'}

Find All Items, with a specific value:

>>> data = flaskdb.reminders.find({"category": "Python"})
>>> for x in data:
...     print(x)
...
{u'category': u'Python', u'description': u'Chatbot with SQLite', u'link': u'http://rodic.fr/blog/python-chatbot-1/', u'date': u'2017-01-03', u'_id': ObjectId('586bb6dd0269103671afce32'), u'type': u'Discovered Service'}
{u'category': u'Python', u'description': u'Boto: Kinesis List', u'link': u'https://gitlab.com/rbekker87/code-examples/blob/master/kinesis/firehose/python/firehose.list.py', u'date': u'2017-01-05', u'_id': ObjectId('586dde1e0269103671afce36'), u'type': u'Stuff Done'}

Deleting Databases:

Cleaning up: we will delete the database that we created. When a database is deleted, the collections within it are removed as well.

First list the databases:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local
ruan-test

Then drop the database:

>>> mongodb_client.drop_database("ruan-test")

Then verify that the database was removed:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local


Setup a Local MongoDB Development 3 Member Replica Set

Setup a Development Environment of a MongoDB Replica Set consisting of 3 mongod MongoDB Instances.

This is aimed purely at a testing or development environment, as security is disabled and, for this post, all 3 instances will run on the same node.


Installation:

I am using Ubuntu 16.04; for other distributions, have a look at MongoDB's Installation Page

MongoDB Installation
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
$ echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
$ sudo apt update
$ sudo apt install -y mongodb-org

Prepare Directories:

Prepare the data directories, and as I am planning to use the --fork option, I need to specify the --logpath, so I will create the log directories as well:

Create the Directory Paths
$ mkdir -p /srv/mongodb/rs0-0 /srv/mongodb/rs0-1 /srv/mongodb/rs0-2
$ mkdir -p /var/log/mongodb/rs0-0 /var/log/mongodb/rs0-1 /var/log/mongodb/rs0-2

Run 3 MongoDB Instances:

Create 3 MongoDB instances, each listening on its own port.

From MongoDB’s Documentation:

“The --smallfiles and --oplogSize settings reduce the disk space that each mongod instance uses”

$ mongod --port 27017 --dbpath /srv/mongodb/rs0-0 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-0/server.log --fork
$ mongod --port 27018 --dbpath /srv/mongodb/rs0-1 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-1/server.log --fork
$ mongod --port 27019 --dbpath /srv/mongodb/rs0-2 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-2/server.log --fork
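
As an alternative to the long command lines, each instance can also read its options from a YAML config file and be started with `mongod -f <file>`. A sketch for the first instance, using the same paths and port as above (the file path is just an example):

```yaml
# /etc/mongod-rs0-0.conf -- sketch of the first instance's options
net:
  port: 27017
storage:
  dbPath: /srv/mongodb/rs0-0
  mmapv1:
    smallFiles: true
replication:
  replSetName: rs0
  oplogSizeMB: 128
systemLog:
  destination: file
  path: /var/log/mongodb/rs0-0/server.log
processManagement:
  fork: true
```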

Confirm:

Confirm that the processes are listening on the ports that we defined:

$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      1100/mongod
tcp        0      0 0.0.0.0:27018           0.0.0.0:*               LISTEN      1127/mongod
tcp        0      0 0.0.0.0:27019           0.0.0.0:*               LISTEN      1154/mongod

Connect to the first MongoDB Instance:

Connect to our first MongoDB Instance, where we will setup the replica set:

$ mongo --port 27017
>

Create the Replica Set Configuration Object:

> rsconf = {
             _id: "rs0",
             members: [
                        {
                         _id: 0,
                         host: "10.78.1.24:27017"
                        }
                      ]
           }

Initiate the replica set configuration:

> rs.initiate( rsconf )
{ "ok" : 1 }

Display the Replica Configuration with rs.conf():

rs0:SECONDARY> rs.conf()
{
        "_id" : "rs0",
        "version" : 1,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.78.1.24:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : 60000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("59a2339f5ff27709a1645d28")
        }
}

Add the other two mongodb instances to the replica set using rs.add():

rs0:PRIMARY> rs.add("10.78.1.24:27018")
{ "ok" : 1 }

rs0:PRIMARY> rs.add("10.78.1.24:27019")
{ "ok" : 1 }

View the status of our MongoDB Replica Set with rs.status():

rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2017-08-27T02:52:08.106Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "10.78.1.24:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 890,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1503802272, 1),
                        "electionDate" : ISODate("2017-08-27T02:51:12Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "10.78.1.24:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 16,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "optimeDurableDate" : ISODate("2017-08-27T02:51:56Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T02:52:06.638Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T02:52:07.638Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "10.78.1.24:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "10.78.1.24:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 11,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "optimeDurableDate" : ISODate("2017-08-27T02:51:56Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T02:52:06.638Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T02:52:07.241Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}

Write some Data to MongoDB:

Create a Database named mydb:

rs0:PRIMARY> use mydb
switched to db mydb

Create a Collection, named mycol1:

rs0:PRIMARY> db.createCollection("mycol1")
{ "ok" : 1 }

rs0:PRIMARY> show collections
mycol1

Write 2 documents with:

  • Name: James, Home Address: Country => South Africa, City => Cape Town
  • Name: Frank, Home Address: Country => Ireland, City => Dublin
Write some Data
rs0:PRIMARY> db.mycol1.insert({"name": "james", "home address": {"country": "south africa", "city": "cape town"}})
WriteResult({ "nInserted" : 1 })

rs0:PRIMARY> db.mycol1.insert({"name": "frank", "home address": {"country": "ireland", "city": "dublin"}})
WriteResult({ "nInserted" : 1 })

Count all Documents in our Database:

Counting
rs0:PRIMARY> db.mycol1.find().count()
2

Scan through all documents, and show them with pretty print:

Pretty Print
rs0:PRIMARY> db.mycol1.find().pretty()
{
        "_id" : ObjectId("59a23d26c0c3824694f79ff6"),
        "name" : "james",
        "home address" : {
                "country" : "south africa",
                "city" : "cape town"
        }
}
{
        "_id" : ObjectId("59a23dbdc0c3824694f79ff7"),
        "name" : "frank",
        "home address" : {
                "country" : "ireland",
                "city" : "dublin"
        }
}

Find Information about Frank:

Franks Info
rs0:PRIMARY> db.mycol1.find({"name": "frank"})
{ "_id" : ObjectId("59a23dbdc0c3824694f79ff7"), "name" : "frank", "home address" : { "country" : "ireland", "city" : "dublin" } }

Delete the database, but first confirm which database you are logged on to, then delete it using dropDatabase():

Drop Database
rs0:PRIMARY> db
mydb

rs0:PRIMARY> db.dropDatabase()
{ "dropped" : "mydb", "ok" : 1 }

rs0:PRIMARY> exit
bye

Building an Alpine Nginx PHP-FPM Image on Docker for PHP Applications

A post on building an Alpine-based image that will serve PHP pages, using Nginx and PHP-FPM5.

I have a lot of modules enabled, which might not be necessary, but in my case I wanted a couple of them enabled for testing.

One of the Requirements:

One of the requirements was SMTP support from the container, as I am using the Startbootstrap Freelancer Theme, which I configured to relay mail from the contact form to one of my external relay hosts.

Our Directory Structure:

The data we will be working with consists of our Dockerfile, our website files, the nginx config, and a wrapper script that controls the nginx and php-fpm5 processes:

Directory Structure
.
|-- Dockerfile
|-- README.md
|-- html
|   |-- css
|   |-- fonts
|   |-- img
|   |-- index.html
|   |-- js
|   `-- mail
|       `-- contact.php
|-- nginx.conf
|-- start_nginx.sh
|-- start_php-fpm5.sh
`-- wrapper.sh

Going into Some Detail:

First, our Dockerfile, which as you will see starts the image from Alpine:

Dockerfile
FROM alpine:edge

RUN apk update \
    && apk add nginx \
    && adduser -D -u 1000 -g 'www' www \
    && mkdir /www \
    && chown -R www:www /var/lib/nginx \
    && chown -R www:www /www \
    && rm -rf /etc/nginx/nginx.conf

ENV PHP_FPM_USER="www"
ENV PHP_FPM_GROUP="www"
ENV PHP_FPM_LISTEN_MODE="0660"
ENV PHP_MEMORY_LIMIT="512M"
ENV PHP_MAX_UPLOAD="50M"
ENV PHP_MAX_FILE_UPLOAD="200"
ENV PHP_MAX_POST="100M"
ENV PHP_DISPLAY_ERRORS="On"
ENV PHP_DISPLAY_STARTUP_ERRORS="On"
ENV PHP_ERROR_REPORTING="E_COMPILE_ERROR\|E_RECOVERABLE_ERROR\|E_ERROR\|E_CORE_ERROR"
ENV PHP_CGI_FIX_PATHINFO=0
ENV TIMEZONE="Africa/Johannesburg"

RUN apk add curl \
    ssmtp \
    tzdata \
    php5-fpm \
    php5-mcrypt \
    php5-soap \
    php5-openssl \
    php5-gmp \
    php5-pdo_odbc \
    php5-json \
    php5-dom \
    php5-pdo \
    php5-zip \
    php5-mysql \
    php5-mysqli \
    php5-sqlite3 \
    php5-pdo_pgsql \
    php5-bcmath \
    php5-gd \
    php5-odbc \
    php5-pdo_mysql \
    php5-pdo_sqlite \
    php5-gettext \
    php5-xmlreader \
    php5-xmlrpc \
    php5-bz2 \
    php5-iconv \
    php5-pdo_dblib \
    php5-curl \
    php5-ctype

RUN sed -i "s|;listen.owner\s*=\s*nobody|listen.owner = ${PHP_FPM_USER}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|;listen.group\s*=\s*nobody|listen.group = ${PHP_FPM_GROUP}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|;listen.mode\s*=\s*0660|listen.mode = ${PHP_FPM_LISTEN_MODE}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|user\s*=\s*nobody|user = ${PHP_FPM_USER}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|group\s*=\s*nobody|group = ${PHP_FPM_GROUP}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|;log_level\s*=\s*notice|log_level = notice|g" /etc/php5/php-fpm.conf \
    && sed -i 's/include\ \=\ \/etc\/php5\/fpm.d\/\*.conf/\;include\ \=\ \/etc\/php5\/fpm.d\/\*.conf/g' /etc/php5/php-fpm.conf

RUN sed -i "s|display_errors\s*=\s*Off|display_errors = ${PHP_DISPLAY_ERRORS}|i" /etc/php5/php.ini \
    && sed -i "s|display_startup_errors\s*=\s*Off|display_startup_errors = ${PHP_DISPLAY_STARTUP_ERRORS}|i" /etc/php5/php.ini \
    && sed -i "s|error_reporting\s*=\s*E_ALL & ~E_DEPRECATED & ~E_STRICT|error_reporting = ${PHP_ERROR_REPORTING}|i" /etc/php5/php.ini \
    && sed -i "s|;*memory_limit =.*|memory_limit = ${PHP_MEMORY_LIMIT}|i" /etc/php5/php.ini \
    && sed -i "s|;*upload_max_filesize =.*|upload_max_filesize = ${PHP_MAX_UPLOAD}|i" /etc/php5/php.ini \
    && sed -i "s|;*max_file_uploads =.*|max_file_uploads = ${PHP_MAX_FILE_UPLOAD}|i" /etc/php5/php.ini \
    && sed -i "s|;*post_max_size =.*|post_max_size = ${PHP_MAX_POST}|i" /etc/php5/php.ini \
    && sed -i "s|;*cgi.fix_pathinfo=.*|cgi.fix_pathinfo= ${PHP_CGI_FIX_PATHINFO}|i" /etc/php5/php.ini \
    && sed -i 's/smtp_port\ =\ 25/smtp_port\ =\ 81/g' /etc/php5/php.ini \
    && sed -i 's/SMTP\ =\ localhost/SMTP\ =\ mail.bekkersolutions.com/g' /etc/php5/php.ini \
    && sed -i 's/;sendmail_path\ =/sendmail_path\ =\ \/usr\/sbin\/sendmail\ -t/g' /etc/php5/php.ini

RUN rm -rf /etc/localtime \
    && ln -s /usr/share/zoneinfo/${TIMEZONE} /etc/localtime \
    && echo "${TIMEZONE}" > /etc/timezone \
    && sed -i "s|;*date.timezone =.*|date.timezone = ${TIMEZONE}|i" /etc/php5/php.ini \
    && echo 'sendmail_path = "/usr/sbin/ssmtp -t "' > /etc/php5/conf.d/mail.ini \
    && sed -i 's/mailhub=mail/mailhub=mail.domain.com\:81/g' /etc/ssmtp/ssmtp.conf

COPY nginx.conf /etc/nginx/nginx.conf
COPY index.php /www/index.php
COPY test.html /www/test.html
COPY start_nginx.sh /start_nginx.sh
COPY start_php-fpm5.sh /start_php-fpm5.sh
COPY wrapper.sh /wrapper.sh

RUN chmod +x /start_nginx.sh /start_php-fpm5.sh /wrapper.sh

CMD ["/wrapper.sh"]

Next, our nginx.conf configuration file:

nginx.conf
user                            www;
worker_processes                1;

error_log                       /var/log/nginx/error.log warn;
pid                             /var/run/nginx.pid;

events {
    worker_connections          1024;
}

http {
    include                     /etc/nginx/mime.types;
    default_type                application/octet-stream;
    sendfile                    on;
    access_log                  /var/log/nginx/access.log;
    keepalive_timeout           3000;

    server {
        listen                  80;
        root                    /www;
        index                   index.html index.htm index.php;
        server_name             _;
        client_max_body_size    32m;
        error_page              500 502 503 504  /50x.html;

        location = /50x.html {
              root              /var/lib/nginx/html;
        }

        location ~ \.php$ {
              fastcgi_pass      127.0.0.1:9000;
              fastcgi_index     index.php;
              include           fastcgi.conf;
        }
    }
}

Then our html directory, which contains our website's data. For a simple example, I will create a sample index.php page:

html/index.php
<?php
$word = "foo";
echo "The word is: $word\n";
?>

Then our wrapper.sh script, which starts the php-fpm5 and nginx processes and then monitors them. If one of the processes exits, the wrapper script returns an exit code, which causes the container to exit when something is wrong with a service:

The PHP-FPM script:

start_php-fpm5.sh
#!/bin/sh
/usr/bin/php-fpm5

The Nginx Script:

start_nginx.sh
#!/bin/sh
/usr/sbin/nginx -c /etc/nginx/nginx.conf

The Wrapper Script:

wrapper.sh
#!/bin/sh

/start_php-fpm5.sh -D
status=$?
if [ $status -ne 0 ]; then
  echo "php-fpm5 Failed: $status"
  exit $status
  else echo "Starting PHP-FPM: OK"
fi

sleep 2

/start_nginx.sh -D
status=$?
if [ $status -ne 0 ]; then
  echo "Nginx Failed: $status"
  exit $status
  else echo "Starting Nginx: OK"
fi

sleep 2

while /bin/true; do
  ps aux | grep 'php-fpm: master process' | grep -q -v grep
  PHP_FPM_STATUS=$?
  echo "Checking PHP-FPM, Status Code: $PHP_FPM_STATUS"
  sleep 2

  ps aux | grep 'nginx: master process' | grep -q -v grep
  NGINX_STATUS=$?
  echo "Checking NGINX, Status Code: $NGINX_STATUS"
  sleep 2

  if [ $PHP_FPM_STATUS -ne 0 ];
    then
      echo "$(date +%F_%T) FATAL: PHP-FPM Raised a Status Code of $PHP_FPM_STATUS and exited"
      exit -1

   elif [ $NGINX_STATUS -ne 0 ];
     then
       echo "$(date +%F_%T) FATAL: NGINX Raised a Status Code of $NGINX_STATUS and exited"
       exit -1

   else
     sleep 2
        echo "$(date +%F_%T) - HealthCheck: NGINX and PHP-FPM: OK"
  fi
  sleep 60
done
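
The wrapper's start-then-watch pattern can also be sketched generically in Python. This is not the script used in the image, just an illustration of the same idea: poll a set of child processes, and when one dies, stop the rest and report which one failed. The "short" and "long" children below are hypothetical stand-ins for php-fpm and nginx:

```python
import subprocess
import sys
import time

def supervise(procs, interval=0.5):
    """Poll the given {name: Popen} children; when one exits, stop the
    others and return (name, returncode) of the one that died first."""
    while True:
        for name, proc in procs.items():
            rc = proc.poll()
            if rc is not None:  # this child has exited
                for other_name, other in procs.items():
                    if other_name != name and other.poll() is None:
                        other.terminate()
                return name, rc
        time.sleep(interval)

# Stand-ins for the two services: one exits immediately, one keeps running
children = {
    "short": subprocess.Popen([sys.executable, "-c", "pass"]),
    "long": subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"]),
}
name, rc = supervise(children, interval=0.1)
children["long"].wait()  # reap the terminated child
print("%s exited with code %s" % (name, rc))
```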

Building the Image:

I am primarily using Docker Swarm, so I am building the image and pushing it to a private registry:

Build and Push the Image
$ docker build -t registry.gitlab.com/<user>/<repo>/alpine:php5 .
$ docker push registry.gitlab.com/<user>/<repo>/alpine:php5

Create the PHP Service:

Create a Docker Service
$ docker service create \
--name php-app \
--network appnet \
--replicas 3 \
--with-registry-auth registry.gitlab.com/<user>/<repo>/alpine:php5

Or, to run a container from the image directly on the host:

Run a Container from the Image
$ docker run -itd --name php-app -p 80:80 registry.gitlab.com/<user>/<repo>/alpine:php5

Test the Web App:

Make a GET Request
$ curl -XGET http://127.0.0.1:80/
The word is: foo

Create a Lightweight Webserver (Service) With Lighttpd on Alpine Running on Docker Swarm

In this post we will create a Docker service that hosts a static HTML website. We are using the alpine:edge image with the lighttpd package as our webserver.

The Directory Structure:

Our working directory consists of:

Directory Tree
$ tree
.
|-- Dockerfile
`-- htdocs
    `-- index.html

1 directory, 2 files

First, our Dockerfile:

Dockerfile
FROM alpine:edge

RUN apk update \
    && apk add lighttpd \
    && rm -rf /var/cache/apk/*

ADD htdocs /var/www/localhost/htdocs

CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]

Then our htdocs/index.html, which is based on Bootstrap:

index.html
<!DOCTYPE html>
<html lang="en">

  <head>

    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <meta name="description" content="">
    <meta name="author" content="">

    <title>Bare - Start Bootstrap Template</title>

    <!-- Bootstrap core CSS -->
    <link href="http://obj-cache.cloud.ruanbekker.com/static/css/bootstrap.min.css" rel="stylesheet">

    <!-- Custom styles for this template -->
    <style>
      body {
        padding-top: 54px;
      }
      @media (min-width: 992px) {
        body {
          padding-top: 56px;
        }
      }

    </style>

  </head>

  <body>

    <!-- Navigation -->
    <nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
      <div class="container">
        <a class="navbar-brand" href="#">Start Bootstrap</a>
        <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarResponsive" aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
          <span class="navbar-toggler-icon"></span>
        </button>
        <div class="collapse navbar-collapse" id="navbarResponsive">
          <ul class="navbar-nav ml-auto">
            <li class="nav-item active">
              <a class="nav-link" href="#">Home
                <span class="sr-only">(current)</span>
              </a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="https://startbootstrap.com/template-overviews/bare/">About</a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="#">Services</a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="#">Contact</a>
            </li>
          </ul>
        </div>
      </div>
    </nav>

    <!-- Page Content -->
    <div class="container">

      <div class="row">
        <div class="col-lg-12 text-center">
          <h1 class="mt-5">A Bootstrap 4 Starter Template</h1>
          <p class="lead">Complete with pre-defined file paths and responsive navigation!</p>
          <ul class="list-unstyled">
            <li>Bootstrap 4.0.0-beta</li>
            <li>jQuery 3.2.1</li>
          </ul>
        </div>
      </div>
    </div>

    <!-- Bootstrap core JavaScript -->
    <script src="http://obj-cache.cloud.ruanbekker.com/static/js/jquery.min.js"></script>
    <script src="http://obj-cache.cloud.ruanbekker.com/static/js/popper.min.js"></script>
    <script src="http://obj-cache.cloud.ruanbekker.com/static/js/bootstrap.min.js"></script>

  </body>

</html>

Creating the Service:

First we need to build the image. For my personal projects I like to use GitLab’s private registry, but there are many to choose from:

Build the Image
$ docker login registry.gitlab.com
$ docker build -t registry.gitlab.com/<user>/<repo>/lighttpd:bootstrap .
$ docker push registry.gitlab.com/<user>/<repo>/lighttpd:bootstrap

There are many ways to create this service; for example, it could run as a backend application with nginx or Traefik proxying requests to it. But in this case we have nothing else listening on port 80, so we will create the service and publish port 80 from the host:

Create the Service
$ docker service create \
--name web-bootstrap \
--replicas 1 \
--network appnet \
--publish 80:80 \
--with-registry-auth registry.gitlab.com/<user>/<repo>/lighttpd:bootstrap

Accessing your Website:

As this service will serve as our website, it should look more or less like the following:

Structured Search With Elasticsearch

Structured Search with Elasticsearch:

In this post we will ingest some dummy data into elasticsearch, then we will perform some queries to get the following info:

  • Student Names
  • Student Ages
  • Include / Exclude
  • Marks greater than
  • Finding Students with Specific marks, etc.

Create the Mapping for our Index:

We define the mapping for our data explicitly, as we won't use the dynamic mapping that Elasticsearch applies by default:

$ curl -XPUT http://127.0.0.1:9200/school -d '
{
  "mappings": {
    "students": {
      "properties": {
        "name": {"type": "string"},
        "marks": {"type": "short"},
        "gender": {"type": "string"},
        "age": {"type": "short"}
      }
    }
  }
}'
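
For longer mappings it can help to keep the JSON in a file and validate it locally before sending the PUT. A minimal sketch, assuming python3 is available for the validation step:

```shell
# Keep the index mapping in a file so it can be reviewed and reused:
cat > mapping.json <<'EOF'
{
  "mappings": {
    "students": {
      "properties": {
        "name": {"type": "string"},
        "marks": {"type": "short"},
        "gender": {"type": "string"},
        "age": {"type": "short"}
      }
    }
  }
}
EOF

# Catch JSON typos locally before sending it to the cluster:
python3 -m json.tool < mapping.json > /dev/null && echo "mapping.json is valid JSON"

# Then create the index from the file:
# curl -XPUT http://127.0.0.1:9200/school -d @mapping.json
```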

Ingesting the Data into Elasticsearch:

You can either index the documents one at a time:

curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "james", "marks": 60, "gender": "male", "age": 14} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "simon", "marks": 70, "gender": "male", "age": 15} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "samantha", "marks": 70, "gender": "female", "age": 14} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "john", "marks": 60, "gender": "male", "age": 14} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "michelle", "marks": 30, "gender": "female", "age": 14} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "max", "marks": 75, "gender": "female", "age": 15} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "frank", "marks": 79, "gender": "male", "age": 15} '

or using the Bulk API:

Save the following as bulk.json:

{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYob6VgXGdBeaBa1c" } }
{"name": "james", "marks": 60, "gender": "male", "age": 14}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYqU3VgXGdBeaBa1d" } }
{"name": "simon", "marks": 70, "gender": "male", "age": 15}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYrzFVgXGdBeaBa1v" } }
{"name": "samantha", "marks": 70, "gender": "female", "age": 14}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYtuUVgXGdBeaBa2I" } }
{"name": "john", "marks": 60, "gender": "male", "age": 14}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYvMOVgXGdBeaBa2K" } }
{"name": "michelle", "marks": 30, "gender": "female", "age": 14}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYwwnVgXGdBeaBa2j" } }
{"name": "max", "marks": 75, "gender": "female", "age": 15}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYyXYVgXGdBeaBa29" } }
{"name": "frank", "marks": 79, "gender": "male", "age": 15}

Then use the Bulk API to Ingest into Elasticsearch:

$ curl -s -XPOST http://127.0.0.1:9200/_bulk --data-binary @bulk.json

Then you should have 7 documents ingested into Elasticsearch:

$ curl -XGET http://10.4.156.13:9200/_cat/indices/school?v
health status index  pri rep docs.count docs.deleted store.size pri.store.size
yellow open   school   5   1          7            0     19.4kb         19.4kb

Note that the index health is yellow: I am running Elasticsearch on a single node, so the replica shards remain unassigned.

Query Student Names:

Let’s search for the Student with the Name Max:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '{"query": {"term" : {"name" : "max"}}}'
{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cRqaLVgXGdBeaBaWD",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    } ]
  }
}

Search Student Ages:

Search for all students with the age of 15:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '{"query": {"term" : {"age" : 15}}}'
{
  "took" : 14,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 3,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cRWPPVgXGdBeaBaVF",
      "_score" : 1.0,
      "_source" : {
        "name" : "simon",
        "marks" : 70,
        "gender" : "male",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cRqaLVgXGdBeaBaWD",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cRxayVgXGdBeaBaWg",
      "_score" : 0.30685282,
      "_source" : {
        "name" : "frank",
        "marks" : 79,
        "gender" : "male",
        "age" : 15
      }
    } ]
  }
}

Query, Include but also Exclude:

Query everyone that is 14 years old or male, except the student called john:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
              "should" : [
                 { "term" : {"age" : 14}},
                 { "term" : {"gender" : "male"}}
              ],
              "must_not" : {
                 "term" : {"name" : "john"}
              }
           }
         }
      }
   }
}'
"hits" : {
  "total" : 5,
  "max_score" : 1.0,
  "hits" : [ {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYob6VgXGdBeaBa1c",
    "_score" : 1.0,
    "_source" : {
      "name" : "james",
      "marks" : 60,
      "gender" : "male",
      "age" : 14
    }
  }, {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYvMOVgXGdBeaBa2K",
    "_score" : 1.0,
    "_source" : {
      "name" : "michelle",
      "marks" : 30,
      "gender" : "female",
      "age" : 14
    }
  }, {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYyXYVgXGdBeaBa29",
    "_score" : 1.0,
    "_source" : {
      "name" : "frank",
      "marks" : 79,
      "gender" : "male",
      "age" : 15
    }
  }, {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYqU3VgXGdBeaBa1d",
    "_score" : 1.0,
    "_source" : {
      "name" : "simon",
      "marks" : 70,
      "gender" : "male",
      "age" : 15
    }
  }, {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYrzFVgXGdBeaBa1v",
    "_score" : 1.0,
    "_source" : {
      "name" : "samantha",
      "marks" : 70,
      "gender" : "female",
      "age" : 14
    }
  } ]
}
}

Query for Age, Gender with High Grades:

Show everyone that is 14 years old or male, but only with marks of 70 and up:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
              "should" : [
                 { "term" : {"age" : 14}},
                 { "term" : {"gender" : "male"}}
              ],
              "must_not" : {
                 "range" : {"marks": {"lt": 70, "gte": 0}}
              }
           }
         }
      }
   }
}'
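
To sanity-check which students this filter should match, the same bool logic can be replicated over the raw records with awk. This is a purely local sketch, not an Elasticsearch feature:

```shell
# Students as "name marks gender age"; replicate the bool filter locally:
# (age == 14 OR gender == "male") AND NOT (marks >= 0 AND marks < 70)
awk '($4 == 14 || $3 == "male") && !($2 >= 0 && $2 < 70) {print $1}' \
  > /tmp/filter_check.txt <<'EOF'
james 60 male 14
simon 70 male 15
samantha 70 female 14
john 60 male 14
michelle 30 female 14
max 75 female 15
frank 79 male 15
EOF
cat /tmp/filter_check.txt
```

For this dataset the filter matches simon, samantha and frank: each is 14 years old or male, and none of them has marks below 70.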

Everyone that got 70 and more:

Show me all the students that have marks of 70 and above:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
            "must" : [
              { "range" : {"marks" : {"lt": 100, "gte": 70}}}
            ]
          }
        }              
     }
   }
}'
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 4,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYwwnVgXGdBeaBa2j",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYyXYVgXGdBeaBa29",
      "_score" : 1.0,
      "_source" : {
        "name" : "frank",
        "marks" : 79,
        "gender" : "male",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYqU3VgXGdBeaBa1d",
      "_score" : 1.0,
      "_source" : {
        "name" : "simon",
        "marks" : 70,
        "gender" : "male",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYrzFVgXGdBeaBa1v",
      "_score" : 1.0,
      "_source" : {
        "name" : "samantha",
        "marks" : 70,
        "gender" : "female",
        "age" : 14
      }
    } ]
  }
}

Or you can do it like this:

Query Range only with gt:

Show me everyone that got more than 70:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
              "must" : [
                { "range" : {"marks" : {"gt": 70}}}
            ]
          }
        }              
     }
   }
}'
{
  "took" : 7,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYwwnVgXGdBeaBa2j",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYyXYVgXGdBeaBa29",
      "_score" : 1.0,
      "_source" : {
        "name" : "frank",
        "marks" : 79,
        "gender" : "male",
        "age" : 15
      }
    } ]
  }
}

Gender Specific, with Marks of 70 and Above:

Show me the female students with marks of 70 and above:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
            "must" : [
              { "range" : {"marks" : {"lt": 100, "gte": 70}}}
            ],
            "must_not": [
              {"term": {"gender": "male"}}
            ]
          }
        }              
     }
   }
}'
{
  "took" : 13,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYwwnVgXGdBeaBa2j",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYrzFVgXGdBeaBa1v",
      "_score" : 1.0,
      "_source" : {
        "name" : "samantha",
        "marks" : 70,
        "gender" : "female",
        "age" : 14
      }
    } ]
  }
}

Grade Specific:

Show me the students whose marks are exactly 30 or 75:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "terms" : {
              "marks": [30, 75]
            }
         }
      }
   }
}'
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYwwnVgXGdBeaBa2j",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYvMOVgXGdBeaBa2K",
      "_score" : 1.0,
      "_source" : {
        "name" : "michelle",
        "marks" : 30,
        "gender" : "female",
        "age" : 14
      }
    } ]
  }
}

For more information on this, have a look at Elasticsearch: Structured Search

Modern Reverse Proxy With Traefik on Docker Swarm

Traefik is a modern load balancer and reverse proxy built for micro services.

We will build 4 WebServices with Traefik, where we will go through the following scenarios:

  • Hostname Based Routing (with paths and without)
  • Path Based Routing

Pre-Requisites:

From your DNS Provider add wildcard entries to the Docker Swarm Public IPs:

  • apps.domain.com -> A record to each Docker Swarm node
  • *.apps.domain.com -> apps.domain.com

This will allow us to create web applications on the fly.

Static Website with Traefik:

Create Traefik Proxy:

docker service create \
--name traefik \
--constraint 'node.role==manager' \
--publish 80:80 \
--publish 443:443 \
--publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network appnet \
traefik:camembert \
--docker --docker.swarmmode  \
--docker.domain=apps.domain.com \
--docker.watch \
--logLevel=DEBUG \
--web

Build a WebService with 2 Endpoints:

Our Website will have:

  • /
  • /test/

Our Dockerfile:

FROM alpine:edge

RUN apk update \
    && apk add lighttpd

ADD htdocs /var/www/localhost/htdocs

CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]

Our htdocs directory:

find ./htdocs/
./htdocs/
./htdocs/index.html
./htdocs/test
./htdocs/test/index.html
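
If you are following along, the structure above can be scaffolded in one go; the page bodies here are minimal placeholders that match the curl responses in this post:

```shell
# Scaffold the two-endpoint test site for the lighttpd image:
mkdir -p htdocs/test
printf '<html>\nRoot Page\n</html>\n' > htdocs/index.html
printf '<html>\nTest Page\n</html>\n' > htdocs/test/index.html
find ./htdocs/ -type f
```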

Build and push the image to a registry of your choice:

docker login registry.gitlab.com
docker build -t registry.gitlab.com/<user>/<repo>/lighttpd:test .
docker push registry.gitlab.com/<user>/<repo>/lighttpd:test

Create the 1st Service, with No Hostname or Path Rule Specified:

This service allows us to view both the / and /test/ paths, and also lets us use the service name as a subdomain of the domain specified in the traefik service:

docker service create --name web1 --label 'traefik.port=80'  --network appnet --with-registry-auth registry.gitlab.com/<user>/<repo>/lighttpd:test

Testing the service:

$ curl http://web1.apps.domain.com/
<html>
Root Page
</html>
$ curl http://web2.apps.domain.com/test/
<html>
Test Page
</html>

and

$ curl http://apps.domain.com/test/
<html>
Test Page
</html>

but

$ curl http://foo.apps.domain.com/test/
404 page not found

Create the 2nd Service, with Path Based Routing Only:

This service will only allow us to view the /test/ endpoint:

$ docker service create --name website2 --label 'traefik.port=80' --label traefik.frontend.rule="Path: /test/" --network appnet --with-registry-auth registry.gitlab.com/<user>/<repo>/lighttpd:test

Testing the Service:

$ curl http://web1.apps.domain.com/
404 page not found
$ curl http://web2.apps.domain.com/test/
<html>
Test Page
</html>

Hostname Based and Path Based Routing:

$ docker service create \
--name web3 \
--label 'traefik.port=80' \
--label traefik.frontend.rule="Host:apps.domain.com; Path: /test/" \
--network appnet \
--with-registry-auth registry.gitlab.com/<user>/<repo>/lighttpd:test

Test the / endpoint, which should not work:

$ curl  http://apps.domain.com/
404 page not found

and the /test/ endpoint:

$ curl  http://apps.domain.com/test/
<html>
Test Page
</html>

Also, any other FQDN that is specified will not work as it does not match the traefik.frontend.rule:

$ curl  http://web3.apps.domain.com/
404 page not found

Strictly Hostname Based Routing and not specifying any paths:

$ docker service create \
--name web4 \
--label 'traefik.port=80' \
--label traefik.frontend.rule="Host:apps.domain.com" \
--network appnet \
--with-registry-auth registry.gitlab.com/<user>/<repo>/lighttpd:test

Testing the Service:

$ curl http://apps.domain.com/
<html>
Root Page
</html>
$ curl http://apps.domain.com/test/
<html>
Test Page
</html>

Anything other than that will result in a 404 response.

Create a ZFS Raidz1 Volume Pool on Ubuntu 16

Setting up ZFS Volume Pool on Ubuntu 16.04

Installation

$ sudo apt-get install zfsutils-linux -y

Creating the ZFS Storage Pool

We will create a RAIDZ1 volume, which is similar to RAID5 with single parity, so we can lose one physical disk before the pool fails.

Let’s first have a look at our disks that we have on our server:

$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part /
xvdf    202:80   0 100G  0 disk
xvdg    202:96   0 100G  0 disk

So we will create the volume from /dev/xvdf and /dev/xvdg, and we will name our pool storage-pool:

$ zpool create storage-pool raidz1 xvdf xvdg -f

Listing Pools

$ zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
storage-pool   199G   125K   199G         -     0%     0%  1.00x  ONLINE  -

We can also list the volume with zfs:

$ zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
storage-pool    125K  199G   19K    /storage-pool

Mounting the Volume:

You will find that the volume is already mounted:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.7G  1.1G  6.7G  14% /
storage-pool    199G  125K  198G   1% /storage-pool

Resources:

See how Brett Kelly from 45 Drives tried to break a Storage Cluster with GlusterFS and ZFS:

Great ZFS Performance Comparison:

Setup MongoDB Client on CentOS 6

I have a bastion host that is still running CentOS 6; the EPEL repos provide mongodb-shell version 2.x, but mLab requires version 3.x.

Setup the Repositories

Create the repository:

$ cat > /etc/yum.repos.d/mongodb.repo << EOF
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/\$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
EOF
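
Note the backslash in `\$releasever` above: in an unquoted heredoc the shell expands `$variables` before writing the file, so the backslash is needed to keep the literal `$releasever` that yum substitutes at install time. A quick local demonstration of the difference, using a hypothetical demo variable:

```shell
# Unquoted heredocs expand $vars; a backslash keeps them literal.
demo_value="7Server"
cat > /tmp/demo.repo << EOF
expanded=$demo_value
literal=\$demo_value
EOF
cat /tmp/demo.repo
```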

Update the repository index:

$ sudo yum update -y

Install MongoDB-Shell

Install the MongoDB Shell Client:

$ sudo yum install mongodb-shell -y

Update: thanks to Rick. When you use CentOS 7, you can install the shell client as instructed below:

$ sudo yum install mongodb-org-shell -y

Connect to your Remote MongoDB Instance:

$ mongo remotedb.mlab.com:27017/<dbname> -u <user> -p <pass>