Ruan Bekker's Blog

From a Curious mind to Posts on Github

Setup a 3 Node MongoDB Replica Set on Ubuntu 16

Today we will set up a 3 node replica set for MongoDB on Ubuntu 16.04. A replica set is a form of data replication, so that your data resides on more than one node for durability. We will set up the first node as the primary, the second as a secondary, and the third node will act as an arbiter.

The arbiter can be thought of as a tie-breaking voter: it holds no data, but takes part in elections to prevent split brain.

Installing MongoDB on our 3 Nodes:

In our case, on Ubuntu 16.04, we set up the MongoDB repository and install MongoDB from it:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
$ echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
$ sudo apt update
$ sudo apt install -y mongodb-org

Preparing our Directories:

$ mkdir -p /srv/mongodb/rs0-0 /srv/mongodb/rs0-1 /srv/mongodb/rs0-2
$ mkdir -p /var/log/mongodb/rs0-0 /var/log/mongodb/rs0-1 /var/log/mongodb/rs0-2

Populating our MongoDB Configuration:

  • MongoDB prefers XFS file systems when using the WiredTiger storage engine.
$ cat > /etc/mongod.conf << EOF
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: false
  mmapv1:
    smallFiles: true

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

net:
  port: 27017
  bindIp: 0.0.0.0

replication:
  replSetName: rs0

security:
  authorization: enabled
EOF
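Note: with security.authorization enabled on a replica set, MongoDB also requires internal authentication between the members, typically via a shared key file. A minimal sketch (the /etc/mongod.key path is my assumption) is to generate the key once, copy it to all 3 nodes, and reference it as keyFile under the security section of /etc/mongod.conf:

$ openssl rand -base64 756 | sudo tee /etc/mongod.key > /dev/null
$ sudo chown mongodb:mongodb /etc/mongod.key && sudo chmod 400 /etc/mongod.key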

Enable MongoDB On Startup and Start MongoDB:

$ systemctl enable mongod
$ systemctl restart mongod

Setup MongoDB Replica Sets:

In our setup we will have 3 nodes: mongodb-1, mongodb-2 and mongodb-3. From our primary node, connect to MongoDB and initialize our replica set:

$ mongo
MongoDB shell version v3.4.7
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.7
> rs.initiate()
{
        "info2" : "no configuration specified. Using a default configuration for the set",
        "me" : "mysql-1:27017",
        "ok" : 1
}

Next, add our 2 other MongoDB nodes. Remember that mongodb-3 is our arbiter node; the second argument true to rs.add() adds the member as an arbiter:

rs0:SECONDARY> rs.add("mongodb-2")
{ "ok" : 1 }
rs0:PRIMARY> rs.add("mongodb-3", true)
{ "ok" : 1 }

Verify the Replica Set Status:

rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2017-08-27T13:17:42.469Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(0, 0),
                        "t" : NumberLong(-1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1503839853, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1503839722, 1),
                        "t" : NumberLong(-1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mysql-1:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 422,
                        "optime" : {
                                "ts" : Timestamp(1503839853, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T13:17:33Z"),
                        "electionTime" : Timestamp(1503839723, 1),
                        "electionDate" : ISODate("2017-08-27T13:15:23Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "mongodb-2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 28,
                        "optime" : {
                                "ts" : Timestamp(1503839853, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                        "optimeDate" : ISODate("2017-08-27T13:17:33Z"),
                        "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T13:17:41.707Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T13:17:40.699Z"),
                        "pingMs" : NumberLong(4),
                        "syncingTo" : "mysql-1:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "mongodb-3:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 8,
                        "lastHeartbeat" : ISODate("2017-08-27T13:17:41.721Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T13:17:38.749Z"),
                        "pingMs" : NumberLong(2),
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}
rs0:PRIMARY> exit
bye

Setup Auth:

Set up authentication on our MongoDB database. We will create the user adminuser with the password secret:

rs0:PRIMARY> use admin
switched to db admin

rs0:PRIMARY> db.createUser({user: "adminuser", pwd: "secret", roles:[{role: "root", db: "admin"}]})
Successfully added user: {
        "user" : "adminuser",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}
rs0:PRIMARY> exit

Restart MongoDB:

$ systemctl restart mongod

Connect and Authenticate against MongoDB:

Connect to your MongoDB Cluster with auth:

$ mongo --host mongodb.example.com --port 27017 -u <username> -p --authenticationDatabase admin
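Most drivers (and recent versions of the mongo shell) also accept a replica set connection string listing the data-bearing members together with the replicaSet option. A sketch using this post's hostnames; the arbiter holds no data, so it is not listed:

$ mongo "mongodb://adminuser:secret@mongodb-1:27017,mongodb-2:27017/admin?replicaSet=rs0"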

Setup HAProxy Load Balancer for MySQL Galera With IP Whitelisting and Backup Servers

Today we will set up an HAProxy service for our 3 node MySQL Galera cluster.

Our Setup:

  • 3 Node Galera MySQL Cluster
  • 3 HAProxy Services (Each HAProxy Service Running on the MySQL Nodes)
  • MySQL Listens on Port 3307
  • HAProxy Listens on Port 3306 and Proxies through to 3307

I have set up HAProxy on the same nodes as the MySQL servers for my use case, but you can also run HAProxy on a node outside the MySQL hosts.

Our MySQL Galera cluster is essentially a multi-master setup, but for now we will only accept connections through Node-A, and keep Node-B and Node-C as backup servers. Should Node-A go down, HAProxy will route connections to Node-B, and if Node-B also goes down, connections will be routed to Node-C.

If the primary node, Node-A, recovers, connections will be routed back to it.

Security:

We use iptables to allow traffic between the nodes on port TCP/3307, and to allow all traffic to port TCP/3306, as HAProxy will handle the IP-based access control:

Iptables for Each Node
$ iptables -I INPUT -s {Node-A} -p tcp --dport 3307 -j ACCEPT
$ iptables -I INPUT -s {Node-B} -p tcp --dport 3307 -j ACCEPT
$ iptables -I INPUT -s {Node-C} -p tcp --dport 3307 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 3306 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 3307 -j DROP
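These rules will not survive a reboot on their own. One way to persist them on Ubuntu, assuming the iptables-persistent package, is:

$ sudo apt install iptables-persistent -y
$ sudo netfilter-persistent save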

HAProxy:

Installing HAProxy on Ubuntu:

Install HAProxy
$ sudo apt update
$ sudo apt install haproxy -y

Configure HAProxy with a listener on port 3306, specify the source addresses that are authorized to communicate with MySQL, and then specify the servers of our MySQL Galera cluster to proxy the connections to, marking 2 of them as backup servers:

/etc/haproxy/haproxy.cfg
global
  log         127.0.0.1 local2
  chroot      /var/lib/haproxy
  pidfile     /var/run/haproxy.pid
  maxconn     1020
  user        haproxy
  group       haproxy
  daemon

  stats socket /var/lib/haproxy/stats.sock mode 600 level admin
  stats timeout 2m

defaults
  mode    tcp
  log     global
  option  dontlognull
  option  redispatch
  retries                   3
  timeout queue             45s
  timeout connect           5s
  timeout client            1m
  timeout server            1m
  timeout check             10s
  maxconn                   1020

listen stats
  bind    *:80
  mode    http
  stats   enable
  stats   show-legends
  stats   refresh           5s
  stats   uri               /
  stats   realm             Haproxy\ Statistics
  stats   auth              admin:secret
  stats   admin             if TRUE

listen galera-lb
  bind    *:3306
  mode    tcp
  acl     network_allowed src 10.10.1.0/24 10.32.15.2/32
  tcp-request               content accept if network_allowed
  tcp-request               content reject
  default_backend           galera-cluster

backend galera-cluster
  balance roundrobin
  server  scw-mysql-1 10.0.0.2:3307  check
  server  scw-mysql-2 10.0.0.3:3307  check backup
  server  scw-mysql-3 10.0.0.4:3307  check backup
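Before starting the service, you can let HAProxy validate the configuration file without starting up:

$ haproxy -c -f /etc/haproxy/haproxy.cfg
Configuration file is valid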

Start HAProxy:

Start HAProxy Service
$ sudo systemctl enable haproxy
$ sudo systemctl restart haproxy

Authorize HAProxy Hostnames to Connect to MySQL:

In this case we need to allow the HAProxy hosts to connect to MySQL:

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'secrets' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
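To confirm that connections are proxied through, connect with the mysql client to port 3306 on the HAProxy node from one of the whitelisted sources (haproxy-host below is a placeholder) and check the Galera cluster size:

$ mysql -h haproxy-host -P 3306 -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"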

Secure Your Access to Kibana 5 and Elasticsearch 5 With Nginx for AWS

At the time of writing, AWS does not offer VPC support for Elasticsearch, which makes it a bit difficult to authorize private IP ranges.

One workaround is to set up an Nginx reverse proxy on AWS within your private VPC, associate an EIP with your Nginx EC2 instance, and then authorize that EIP in your Elasticsearch IP access policy.

Our Setup:

In this setup we will have an internal ELB (Elastic Load Balancer) with 1 or more Nginx EC2 instances behind it, and we will configure Nginx to reverse proxy our connections through to our Elasticsearch endpoint.

We will also set up basic HTTP authentication for our / (Elasticsearch) endpoint and our /kibana endpoint. We will keep the authentication separate from each other, so that the credentials for ES and Kibana are not the same, but depending on your use case you can point both endpoints at the same credential file.

Install Nginx

Depending on your Linux distribution the package manager may differ; I am using Amazon Linux:

Install Nginx
$ sudo yum update -y
$ sudo yum install nginx httpd-tools -y

Configure Nginx:

Remove the default configuration and replace the nginx.conf with the following:

Remove Default Nginx Config
$ sudo rm -r /etc/nginx/nginx.conf

Main Nginx Configuration:

/etc/nginx/nginx.conf
user nginx;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {

  # Basic Settings
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  server_names_hash_bucket_size 128;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # Logging Settings
  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  # Gzip Settings
  gzip on;
  gzip_disable "msie6";

  # Elasticsearch Config
  include /etc/nginx/conf.d/elasticsearch.conf;
}

The Reverse Proxy Configuration:

/etc/nginx/conf.d/elasticsearch.conf
server {

  listen 80;
  server_name elk.mydomain.com;

  # error logging
  error_log /var/log/nginx/elasticsearch_error.log;

  # authentication: server wide
  #auth_basic "Auth";
  #auth_basic_user_file /etc/nginx/.secrets;

  location / {

    # authentication: elasticsearch
    auth_basic "Elasticsearch Auth";
    auth_basic_user_file /etc/nginx/.secrets_elasticsearch;

    proxy_http_version 1.1;
    proxy_set_header Host search.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP {NGINX-EIP};
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search.eu-west-1.es.amazonaws.com/;
    proxy_redirect https://search.eu-west-1.es.amazonaws.com/ http://{NGINX-EIP}/;

  }

  location /kibana {

    # authentication: kibana
    auth_basic "Kibana Auth";
    auth_basic_user_file /etc/nginx/.secrets_kibana;

    proxy_http_version 1.1;
    proxy_set_header Host search.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP {NGINX-EIP};
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search.eu-west-1.es.amazonaws.com/_plugin/kibana/;
    proxy_redirect https://search.eu-west-1.es.amazonaws.com/_plugin/kibana/ http://{NGINX-EIP}/kibana/;

  }

  # elb checks
  location /status {
    root /usr/share/nginx/html/;
  }

}

Setup Authentication:

Setup the authentication for elasticsearch and kibana:

Create Auth for Kibana and Elasticsearch
$ sudo htpasswd -c /etc/nginx/.secrets_elasticsearch admin
$ sudo htpasswd -c /etc/nginx/.secrets_kibana admin
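Note that the -c flag creates (or overwrites) the password file, so only pass it for the first user you add to a file. Before restarting, you can also let nginx validate the configuration:

$ sudo nginx -t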

Restart Nginx and Enable on Startup

Restart the nginx process and enable the process on boot:

Restart Nginx
$ sudo /etc/init.d/nginx restart
$ sudo chkconfig nginx on

Configure ELB:

Create a new internal ELB, set the backend instances on port 80, and point the health check at /status/index.html, as this location block does not require authentication, so the ELB can get a 200 response if all is good. Next you can configure your Route 53 hosted zone, elk.mydomain.com, to map to your ELB.

End Result

Now you should be able to access Elasticsearch on http://elk.mydomain.com/ and Kibana on http://elk.mydomain.com/kibana after authenticating.
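You can also verify both endpoints from the command line, for example:

$ curl -u admin http://elk.mydomain.com/
$ curl -u admin http://elk.mydomain.com/kibana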

Reference Credentials Outside Your Main Application in Python

In this post I will show one way of referencing credentials from your application in Python, without setting them in your application's code. We will create a separate Python file which will hold our credentials, and then import them from our main application.

Our Main Application

This app will print our username, just for the sake of this example:

app.py
from config import credentials as secrets

my_username = secrets['APP1']['username']
my_password = secrets['APP1']['password']

print("Hello, your username is: {username}".format(username=my_username))

Our Credentials File

Then we have our file which will hold our credentials:

config.py
credentials = {
    'APP1': {
        'username': 'foo',
        'password': 'bar'
    }
}

That is at least one way of doing it; you could also use environment variables via the os module, which is described here.
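As a quick sketch of the environment variable approach (the APP1_USERNAME variable name is just for illustration):

$ export APP1_USERNAME=foo
$ python -c 'import os; print("Hello, your username is: {0}".format(os.environ["APP1_USERNAME"]))'
Hello, your username is: foo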

Change IAM Username With AWS CLI

You may find yourself in a position where you need to rename more than one IAM Username, and one way of doing this is using the AWS CLI tools to rename the username.

The benefit of this is that the user's access keys remain the same, and any policies associated with the user stay attached after the username is renamed.

The only thing that changes is, of course, the username that the user will use when logging onto the AWS Management Console.

Details of our User:

We will change the IAM user peter to peter.franklin. Peter's access key is AKIA123456ABCDEF1234, which is configured locally under the profile name peter.

Let's first get the details of our user before changing it:

$ aws --profile admin iam get-user --user-name peter
{
    "User": {
        "UserName": "peter",
        "PasswordLastUsed": "2017-08-28T13:17:22Z",
        "CreateDate": "2017-08-28T13:11:25Z",
        "UserId": "ABCDEFGHIJKLMNOPQRST",
        "Path": "/",
        "Arn": "arn:aws:iam::123456789012:user/peter"
    }
}

Rename the IAM User

Update user peter to peter.franklin:

Rename the IAM User
$ aws --profile aws iam update-user --user-name peter --new-user-name peter.franklin

Describe peter’s new username:

$ aws --profile aws iam get-user --user-name peter.franklin
{
    "User": {
        "UserName": "peter.franklin",
        "PasswordLastUsed": "2017-08-28T13:23:18Z",
        "CreateDate": "2017-08-28T13:11:25Z",
        "UserId": "ABCDEFGHIJKLNMOPQRST",
        "Path": "/",
        "Arn": "arn:aws:iam::123456789012:user/peter.franklin"
    }
}

Verify that access keys are the same:

$ aws --profile aws iam list-access-keys --user-name peter.franklin
{
    "AccessKeyMetadata": [
        {
            "UserName": "peter.franklin",
            "Status": "Active",
            "CreateDate": "2017-08-28T13:11:27Z",
            "AccessKeyId": "AKIA123456ABCDEF1234"
        }
    ]
}

At this moment we can see that Peter's AccessKeyId is still the same, which means he does not have to update his credentials on his side.

Some Useful CLI Commands:

Get only the Access Key for a User:

$ aws --profile admin iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId'
AKIA123456ABCDEF1234
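The same can be done without jq, using the AWS CLI's built-in --query option:

$ aws --profile admin iam list-access-keys --user-name peter.franklin --query 'AccessKeyMetadata[].AccessKeyId' --output text
AKIA123456ABCDEF1234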

Determine when the AccessKey was last used, and for which Service:

For auditing, or to verify whether an AccessKeyId is still being used, we can call get-access-key-last-used, which returns the last time the key was used and the service it was last used against.

Let Peter create a DynamoDB Table:

$ aws --profile peter dynamodb \
create-table --table-name test01 \
--attribute-definitions "AttributeName=username,AttributeType=S" \
--key-schema "AttributeName=username,KeyType=HASH" \
--provisioned-throughput "ReadCapacityUnits=1,WriteCapacityUnits=1"
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:eu-west-1:123456789012:table/test01",
        "AttributeDefinitions": [
            {
                "AttributeName": "username",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 1,
            "ReadCapacityUnits": 1
        },
        "TableSizeBytes": 0,
        "TableName": "test01",
        "TableStatus": "CREATING",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "username"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1503928537.671
    }
}

Get Detail on LastUsedDate:

$ aws --profile admin iam get-access-key-last-used --access-key-id $(aws --profile aws iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId') | jq -r '.[]'
peter.franklin
{
  "Region": "eu-west-1",
  "ServiceName": "dynamodb",
  "LastUsedDate": "2017-08-28T13:55:00Z"
}

Only getting the LastUsedDate of the AccessKeyId:

$ aws --profile admin iam get-access-key-last-used --access-key-id $(aws --profile aws iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId') | jq '.AccessKeyLastUsed.LastUsedDate'
"2017-08-28T13:55:00Z"

Using the Python API for MongoDB Using PyMongo

Requirements:

You will need to install the pymongo driver using pip:

Install Pymongo
$ pip install pymongo

A configuration file with your access credentials, which I like to use outside my code:

config.py
credentials = {
    "mongodb": {
        "HOSTNAME": "host.domain.com",
        "USERNAME": "username",
        "PASSWORD": "password"
    }
}

Connecting to MongoDB:

From the python interpreter, connect to MongoDB:

>>> from pymongo import MongoClient
>>> from config import credentials as secrets
>>> mongo_host = secrets['mongodb']['HOSTNAME']
>>> mongo_username = secrets['mongodb']['USERNAME']
>>> mongo_password = secrets['mongodb']['PASSWORD']
>>> mongodb_client = MongoClient('mongodb://%s:%s@%s:27017/admin?authMechanism=SCRAM-SHA-1' % (mongo_username, mongo_password, mongo_host))

Find the Database that you are connected to:

>>> mongodb_client.get_database().name
u'admin'

Find all the databases that currently exist on your MongoDB server:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local

Create a Database, Collection and Write a Document into your Database:

Let's create a database, in my case it will be ruan-test, with a collection named mycollection, and then write one document into it:

>>> newdb = mongodb_client['ruan-test']
>>> newdb_collection = newdb['mycollection']
>>> doc = {"name": "frank", "surname": "jeffreys", "tags": ["person", "name"]}
>>> doc_id = newdb_collection.insert_one(doc).inserted_id
>>> print(doc_id)
59a319ec1f15a5088ba3a339

Note: you can also connect to your collection like the following

>>> newdb_collection = mongodb_client['ruan-test']['mycollection']

We have inserted one item into our database, which we can verify with count():

>>> newdb_collection.find().count()
1

As we have the value of the item's _id, we can use it to find the document in our collection:

>>> newdb_collection.find_one({"_id": doc_id})
{u'_id': ObjectId('59a319ec1f15a5088ba3a339'), u'surname': u'jeffreys', u'name': u'frank', u'tags': [u'person', u'name']}

As we only have one item in our collection, we can also use find_one() without a filter, which returns the exact same data:

>>> newdb_collection.find_one()
{u'_id': ObjectId('59a319ec1f15a5088ba3a339'), u'surname': u'jeffreys', u'name': u'frank', u'tags': [u'person', u'name']}

We can write some more data to our database, but this time let's write to a different collection:

>>> newdb_collection2 = newdb['mycollection-2']
>>> item = newdb_collection2.insert_one({"name": "ruby", "surname": "james"}).inserted_id
>>> item2 = newdb_collection2.insert_one({"name": "phillip", "surname": "james"}).inserted_id

As we captured each item's _id, we can view them:

>>> print(item)
59a31acf1f15a5088ba3a33b
>>> print(item2)
59a31a8a1f15a5088ba3a33a

Query Data from MongoDB:

We can then query for this data:

>>> newdb_collection2.find_one({"name": "ruby"})
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

>>> newdb_collection2.find_one({"_id": item})
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

Also scan for all items in the collection:

>>> scan = newdb_collection2.find({})
>>> for x in scan:
...     print(x)
...
{u'_id': ObjectId('59a31a8a1f15a5088ba3a33a'), u'surname': u'james', u'name': u'phillip'}
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

>>> newdb_collection2.find().count()
2

We can now verify that we have 2 collections in our database:

>>> newdb.collection_names()
[u'mycollection-2', u'mycollection']

Connecting to an existing Database:

Let’s connect to an existing database on our MongoDB Server:

>>> flaskdb = mongodb_client['flask_reminders']

List the collections:

>>> flaskdb.collection_names()
[u'reminders', u'usersessions']

Count the number of items in our reminders Collection:

>>> flaskdb.reminders.find().count()
624

Find a Random Item:

>>> flaskdb.reminders.find_one()
{u'category': u'Python', u'description': u'Chatbot with SQLite', u'link': u'http://rodic.fr/blog/python-chatbot-1/', u'date': u'2017-01-03', u'_id': ObjectId('586bb6dd0269103671afce32'), u'type': u'Discovered Service'}

Find one item with a specific value, for example the value AWS for our category key:

>>> flaskdb.reminders.find_one({"category": "AWS"})
{u'category': u'AWS', u'description': u'Elasticsearch Documentation Access Policies', u'link': u'http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies', u'date': u'2017-02-13', u'_id': ObjectId('58a1d45202691070616947c3'), u'type': u'Documentation'}

Find All Items, with a specific value:

>>> data = flaskdb.reminders.find({"category": "Python"})
>>> for x in data:
...     print(x)
...
{u'category': u'Python', u'description': u'Chatbot with SQLite', u'link': u'http://rodic.fr/blog/python-chatbot-1/', u'date': u'2017-01-03', u'_id': ObjectId('586bb6dd0269103671afce32'), u'type': u'Discovered Service'}
{u'category': u'Python', u'description': u'Boto: Kinesis List', u'link': u'https://gitlab.com/rbekker87/code-examples/blob/master/kinesis/firehose/python/firehose.list.py', u'date': u'2017-01-05', u'_id': ObjectId('586dde1e0269103671afce36'), u'type': u'Stuff Done'}

Deleting Databases:

Cleaning up: we delete the database that we created. When a database is deleted, the collections within it are also removed.

First list the databases:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local
ruan-test

Then delete the database that you want to delete:

>>> mongodb_client.drop_database("ruan-test")

Then verify that the database was removed:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local

Setup a Local MongoDB Development 3 Member Replica Set

Setup a Development Environment of a MongoDB Replica Set consisting of 3 mongod MongoDB Instances.

This is purely aimed at a testing or development environment, as a key point is that security is disabled, and for this post all 3 instances will be running on the same node.

Installation:

I am using Ubuntu 16.04; for other distributions, have a look at MongoDB's Installation Page

MongoDB Installation
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
$ echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
$ sudo apt update
$ sudo apt install -y mongodb-org

Prepare Directories:

Prepare the data directories, and as I am planning to use the --fork option, I need to specify the --logpath, so I will create the log directories as well:

Create the Directory Paths
$ mkdir -p /srv/mongodb/rs0-0 /srv/mongodb/rs0-1 /srv/mongodb/rs0-2
$ mkdir -p /var/log/mongodb/rs0-0 /var/log/mongodb/rs0-1 /var/log/mongodb/rs0-2

Run 3 MongoDB Instances:

Create 3 MongoDB instances, each listening on its own unique port.

From MongoDB’s Documentation:

"The --smallfiles and --oplogSize settings reduce the disk space that each mongod instance uses"

$ mongod --port 27017 --dbpath /srv/mongodb/rs0-0 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-0/server.log --fork
$ mongod --port 27018 --dbpath /srv/mongodb/rs0-1 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-1/server.log --fork
$ mongod --port 27019 --dbpath /srv/mongodb/rs0-2 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-2/server.log --fork

Confirm:

Confirm that the processes are listening on the ports that we defined:

$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      1100/mongod
tcp        0      0 0.0.0.0:27018           0.0.0.0:*               LISTEN      1127/mongod
tcp        0      0 0.0.0.0:27019           0.0.0.0:*               LISTEN      1154/mongod

Connect to the first MongoDB Instance:

Connect to our first MongoDB Instance, where we will setup the replica set:

$ mongo --port 27017
>

Create the Replica Set Configuration Object:

> rsconf = {
             _id: "rs0",
             members: [
                        {
                         _id: 0,
                         host: "10.78.1.24:27017"
                        }
                      ]
           }

Initiate the replica set configuration:

> rs.initiate( rsconf )
{ "ok" : 1 }

Display the Replica Configuration with rs.conf():

rs0:SECONDARY> rs.conf()
{
        "_id" : "rs0",
        "version" : 1,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.78.1.24:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : 60000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("59a2339f5ff27709a1645d28")
        }
}

Add the other two mongodb instances to the replica set using rs.add():

rs0:PRIMARY> rs.add("10.78.1.24:27018")
{ "ok" : 1 }

rs0:PRIMARY> rs.add("10.78.1.24:27019")
{ "ok" : 1 }

View the status of our MongoDB Replica Set with rs.status():

rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2017-08-27T02:52:08.106Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "10.78.1.24:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 890,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1503802272, 1),
                        "electionDate" : ISODate("2017-08-27T02:51:12Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "10.78.1.24:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 16,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "optimeDurableDate" : ISODate("2017-08-27T02:51:56Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T02:52:06.638Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T02:52:07.638Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "10.78.1.24:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "10.78.1.24:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 11,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "optimeDurableDate" : ISODate("2017-08-27T02:51:56Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T02:52:06.638Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T02:52:07.241Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}

Write some Data to MongoDB:

Create a Database named mydb:

rs0:PRIMARY> use mydb
switched to db mydb

Create a Collection, named mycol1:

rs0:PRIMARY> db.createCollection("mycol1")
{ "ok" : 1 }

rs0:PRIMARY> show collections
mycol1

Write 2 documents with:

  • Name: James, Home Address: Country => South Africa, City => Cape Town
  • Name: Frank, Home Address: Country => Ireland, City => Dublin
Write some Data
rs0:PRIMARY> db.mycol1.insert({"name": "james", "home address": {"country": "south africa", "city": "cape town"}})
WriteResult({ "nInserted" : 1 })

rs0:PRIMARY> db.mycol1.insert({"name": "frank", "home address": {"country": "ireland", "city": "dublin"}})
WriteResult({ "nInserted" : 1 })

Count all Documents in our Database:

Counting
rs0:PRIMARY> db.mycol1.find().count()
2

Scan through all documents, and show them in pretty print:

Pretty Print
rs0:PRIMARY> db.mycol1.find().pretty()
{
        "_id" : ObjectId("59a23d26c0c3824694f79ff6"),
        "name" : "james",
        "home address" : {
                "country" : "south africa",
                "city" : "cape town"
        }
}
{
        "_id" : ObjectId("59a23dbdc0c3824694f79ff7"),
        "name" : "frank",
        "home address" : {
                "country" : "ireland",
                "city" : "dublin"
        }
}

Find Information about Frank:

Frank's Info
rs0:PRIMARY> db.mycol1.find({"name": "frank"})
{ "_id" : ObjectId("59a23dbdc0c3824694f79ff7"), "name" : "frank", "home address" : { "country" : "ireland", "city" : "dublin" } }

Delete the database, but first confirm which database you are logged on to, then delete it using dropDatabase():

Drop Database
rs0:PRIMARY> db
mydb

rs0:PRIMARY> db.dropDatabase()
{ "dropped" : "mydb", "ok" : 1 }

rs0:PRIMARY> exit
bye
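When you are done, you can clean up the three forked mongod instances by shutting each one down via its dbpath:

$ mongod --dbpath /srv/mongodb/rs0-0 --shutdown
$ mongod --dbpath /srv/mongodb/rs0-1 --shutdown
$ mongod --dbpath /srv/mongodb/rs0-2 --shutdown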

Building an Alpine Nginx PHP-FPM Image on Docker for PHP Applications

A post on building an Alpine-based image that will serve PHP pages, using Nginx and PHP-FPM5.

I have a lot of modules enabled, which might not be necessary, but in my case I wanted a couple of them enabled for testing.

One of the Requirements:

One of the requirements was that I needed SMTP support from the container, as I am using the Start Bootstrap Freelancer theme, which I configured to relay mail from the contact form to one of my external relay hosts.

Our Directory Structure:

The data we will be working with consists of our Dockerfile, our website files, the nginx config, and a wrapper script that controls the nginx and php-fpm5 processes:

Directory Structure
.
|-- Dockerfile
|-- README.md
|-- html
|   |-- css
|   |-- fonts
|   |-- img
|   |-- index.html
|   |-- js
|   `-- mail
|       `-- contact.php
|-- nginx.conf
|-- start_nginx.sh
|-- start_php-fpm5.sh
`-- wrapper.sh

Going into Some Detail:

First, our Dockerfile, in which you will see the image is based on Alpine:

Dockerfile
FROM alpine:edge

RUN apk update \
    && apk add nginx \
    && adduser -D -u 1000 -g 'www' www \
    && mkdir /www \
    && chown -R www:www /var/lib/nginx \
    && chown -R www:www /www \
    && rm -rf /etc/nginx/nginx.conf

ENV PHP_FPM_USER="www"
ENV PHP_FPM_GROUP="www"
ENV PHP_FPM_LISTEN_MODE="0660"
ENV PHP_MEMORY_LIMIT="512M"
ENV PHP_MAX_UPLOAD="50M"
ENV PHP_MAX_FILE_UPLOAD="200"
ENV PHP_MAX_POST="100M"
ENV PHP_DISPLAY_ERRORS="On"
ENV PHP_DISPLAY_STARTUP_ERRORS="On"
ENV PHP_ERROR_REPORTING="E_COMPILE_ERROR\|E_RECOVERABLE_ERROR\|E_ERROR\|E_CORE_ERROR"
ENV PHP_CGI_FIX_PATHINFO=0
ENV TIMEZONE="Africa/Johannesburg"

RUN apk add curl \
    ssmtp \
    tzdata \
    php5-fpm \
    php5-mcrypt \
    php5-soap \
    php5-openssl \
    php5-gmp \
    php5-pdo_odbc \
    php5-json \
    php5-dom \
    php5-pdo \
    php5-zip \
    php5-mysql \
    php5-mysqli \
    php5-sqlite3 \
    php5-pdo_pgsql \
    php5-bcmath \
    php5-gd \
    php5-odbc \
    php5-pdo_mysql \
    php5-pdo_sqlite \
    php5-gettext \
    php5-xmlreader \
    php5-xmlrpc \
    php5-bz2 \
    php5-iconv \
    php5-pdo_dblib \
    php5-curl \
    php5-ctype

RUN sed -i "s|;listen.owner\s*=\s*nobody|listen.owner = ${PHP_FPM_USER}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|;listen.group\s*=\s*nobody|listen.group = ${PHP_FPM_GROUP}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|;listen.mode\s*=\s*0660|listen.mode = ${PHP_FPM_LISTEN_MODE}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|user\s*=\s*nobody|user = ${PHP_FPM_USER}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|group\s*=\s*nobody|group = ${PHP_FPM_GROUP}|g" /etc/php5/php-fpm.conf \
    && sed -i "s|;log_level\s*=\s*notice|log_level = notice|g" /etc/php5/php-fpm.conf \
    && sed -i 's/include\ \=\ \/etc\/php5\/fpm.d\/\*.conf/\;include\ \=\ \/etc\/php5\/fpm.d\/\*.conf/g' /etc/php5/php-fpm.conf

RUN sed -i "s|display_errors\s*=\s*Off|display_errors = ${PHP_DISPLAY_ERRORS}|i" /etc/php5/php.ini \
    && sed -i "s|display_startup_errors\s*=\s*Off|display_startup_errors = ${PHP_DISPLAY_STARTUP_ERRORS}|i" /etc/php5/php.ini \
    && sed -i "s|error_reporting\s*=\s*E_ALL & ~E_DEPRECATED & ~E_STRICT|error_reporting = ${PHP_ERROR_REPORTING}|i" /etc/php5/php.ini \
    && sed -i "s|;*memory_limit =.*|memory_limit = ${PHP_MEMORY_LIMIT}|i" /etc/php5/php.ini \
    && sed -i "s|;*upload_max_filesize =.*|upload_max_filesize = ${PHP_MAX_UPLOAD}|i" /etc/php5/php.ini \
    && sed -i "s|;*max_file_uploads =.*|max_file_uploads = ${PHP_MAX_FILE_UPLOAD}|i" /etc/php5/php.ini \
    && sed -i "s|;*post_max_size =.*|post_max_size = ${PHP_MAX_POST}|i" /etc/php5/php.ini \
    && sed -i "s|;*cgi.fix_pathinfo=.*|cgi.fix_pathinfo= ${PHP_CGI_FIX_PATHINFO}|i" /etc/php5/php.ini
    && sed -i 's/smtp_port\ =\ 25/smtp_port\ =\ 81/g' /etc/php5/php.ini \
    && sed -i 's/SMTP\ =\ localhost/SMTP\ =\ mail.bekkersolutions.com/g' /etc/php5/php.ini \
    && sed -i 's/;sendmail_path\ =/sendmail_path\ =\ \/usr\/sbin\/sendmail\ -t/g' /etc/php5/php.ini

RUN rm -rf /etc/localtime \
    && ln -s /usr/share/zoneinfo/${TIMEZONE} /etc/localtime \
    && echo "${TIMEZONE}" > /etc/timezone \
    && sed -i "s|;*date.timezone =.*|date.timezone = ${TIMEZONE}|i" /etc/php5/php.ini \ 
    && echo 'sendmail_path = "/usr/sbin/ssmtp -t "' > /etc/php5/conf.d/mail.ini \
    && sed -i 's/mailhub=mail/mailhub=mail.domain.com\:81/g' /etc/ssmtp/ssmtp.conf

COPY nginx.conf /etc/nginx/nginx.conf
COPY index.php /www/index.php
COPY test.html /www/test.html
COPY start_nginx.sh /start_nginx.sh
COPY start_php-fpm5.sh /start_php-fpm5.sh
COPY wrapper.sh /wrapper.sh

RUN chmod +x /start_nginx.sh /start_php-fpm5.sh /wrapper.sh

CMD ["/wrapper.sh"]

Next, our nginx.conf configuration file:

nginx.conf
user                            www;
worker_processes                1;

error_log                       /var/log/nginx/error.log warn;
pid                             /var/run/nginx.pid;

events {
    worker_connections          1024;
}

http {
    include                     /etc/nginx/mime.types;
    default_type                application/octet-stream;
    sendfile                    on;
    access_log                  /var/log/nginx/access.log;
    keepalive_timeout           3000;

    server {
        listen                  80;
        root                    /www;
        index                   index.html index.htm index.php;
        server_name             _;
        client_max_body_size    32m;
        error_page              500 502 503 504  /50x.html;

        location = /50x.html {
              root              /var/lib/nginx/html;
        }

        location ~ \.php$ {
              fastcgi_pass      127.0.0.1:9000;
              fastcgi_index     index.php;
              include           fastcgi.conf;
        }
    }
}

Then our html directory, which will contain our website's data. For a simple example, I will create a sample index.php page:

html/index.php
<?php
$word = "foo";
echo "The word is: $word\n";
?>

Next, our wrapper.sh script, which starts the php-fpm5 and nginx processes and then monitors them. If one of the processes exits, the wrapper script returns a non-zero exit code, which causes the container to exit if there is anything wrong with the service:

The PHP-FPM script:

start_php-fpm5.sh
#!/bin/sh
/usr/bin/php-fpm5

The Nginx Script:

start_nginx.sh
#!/bin/sh
/usr/sbin/nginx -c /etc/nginx/nginx.conf

The Wrapper Script:

wrapper.sh
#!/bin/sh

/start_php-fpm5.sh -D
status=$?
if [ $status -ne 0 ]; then
  echo "php-fpm5 Failed: $status"
  exit $status
  else echo "Starting PHP-FPM: OK"
fi

sleep 2

/start_nginx.sh -D
status=$?
if [ $status -ne 0 ]; then
  echo "Nginx Failed: $status"
  exit $status
  else echo "Starting Nginx: OK"
fi

sleep 2

while /bin/true; do
  ps aux | grep 'php-fpm: master process' | grep -q -v grep
  PHP_FPM_STATUS=$?
  echo "Checking PHP-FPM, Status Code: $PHP_FPM_STATUS"
  sleep 2

  ps aux | grep 'nginx: master process' | grep -q -v grep
  NGINX_STATUS=$?
  echo "Checking NGINX, Status Code: $NGINX_STATUS"
  sleep 2

  if [ $PHP_FPM_STATUS -ne 0 ]; then
    echo "$(date +%F_%T) FATAL: PHP-FPM Raised a Status Code of $PHP_FPM_STATUS and exited"
    exit 1
  elif [ $NGINX_STATUS -ne 0 ]; then
    echo "$(date +%F_%T) FATAL: NGINX Raised a Status Code of $NGINX_STATUS and exited"
    exit 1
  else
    sleep 2
    echo "$(date +%F_%T) - HealthCheck: NGINX and PHP-FPM: OK"
  fi
  sleep 60
done

Building the Image:

I am primarily using Docker Swarm, so I build the image and push it to a private registry:

Build and Push the Image
$ docker build -t registry.gitlab.com/<user>/<repo>/alpine:php5 .
$ docker push registry.gitlab.com/<user>/<repo>/alpine:php5

Create the PHP Service:

Create a Docker Service
$ docker service create \
--name php-app \
--network appnet \
--replicas 3 \
--with-registry-auth registry.gitlab.com/<user>/<repo>/alpine:php5

Or run a container from the image directly on the host:

Run a Container from the Image
$ docker run -itd --name php-app -p 80:80 registry.gitlab.com/<user>/<repo>/alpine:php5

Test the Web App:

Make a GET Request
$ curl -XGET http://127.0.0.1:80/
The word is: foo

Create a Lightweight Webserver (Service) With Lighttpd on Alpine Running on Docker Swarm

In this post we will create a Docker service that will host a static HTML website. We use the alpine:edge image with the lighttpd package as our web server.

The Directory Structure:

Our working directory consists of:

Directory Tree
$ tree
.
|-- Dockerfile
`-- htdocs
    `-- index.html

1 directory, 2 files

First, our Dockerfile:

Dockerfile
FROM alpine:edge

RUN apk update \
    && apk add lighttpd \
    && rm -rf /var/cache/apk/*

ADD htdocs /var/www/localhost/htdocs

CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]

Then our htdocs/index.html, which is based on Bootstrap:

index.html
<!DOCTYPE html>
<html lang="en">

  <head>

    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
    <meta name="description" content="">
    <meta name="author" content="">

    <title>Bare - Start Bootstrap Template</title>

    <!-- Bootstrap core CSS -->
    <link href="http://obj-cache.cloud.ruanbekker.com/static/css/bootstrap.min.css" rel="stylesheet">

    <!-- Custom styles for this template -->
    <style>
      body {
        padding-top: 54px;
      }
      @media (min-width: 992px) {
        body {
          padding-top: 56px;
        }
      }

    </style>

  </head>

  <body>

    <!-- Navigation -->
    <nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
      <div class="container">
        <a class="navbar-brand" href="#">Start Bootstrap</a>
        <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarResponsive" aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
          <span class="navbar-toggler-icon"></span>
        </button>
        <div class="collapse navbar-collapse" id="navbarResponsive">
          <ul class="navbar-nav ml-auto">
            <li class="nav-item active">
              <a class="nav-link" href="#">Home
                <span class="sr-only">(current)</span>
              </a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="https://startbootstrap.com/template-overviews/bare/">About</a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="#">Services</a>
            </li>
            <li class="nav-item">
              <a class="nav-link" href="#">Contact</a>
            </li>
          </ul>
        </div>
      </div>
    </nav>

    <!-- Page Content -->
    <div class="container">

      <div class="row">
        <div class="col-lg-12 text-center">
          <h1 class="mt-5">A Bootstrap 4 Starter Template</h1>
          <p class="lead">Complete with pre-defined file paths and responsive navigation!</p>
          <ul class="list-unstyled">
            <li>Bootstrap 4.0.0-beta</li>
            <li>jQuery 3.2.1</li>
          </ul>
        </div>
      </div>
    </div>

    <!-- Bootstrap core JavaScript -->
    <script src="http://obj-cache.cloud.ruanbekker.com/static/js/jquery.min.js"></script>
    <script src="http://obj-cache.cloud.ruanbekker.com/static/js/popper.min.js"></script>
    <script src="http://obj-cache.cloud.ruanbekker.com/static/js/bootstrap.min.js"></script>

  </body>

</html>

Creating the Service:

First we need to build the image. For my personal projects I like to use GitLab's private registry, but there are many to choose from:

Build the Image
$ docker login registry.gitlab.com
$ docker build -t registry.gitlab.com/<user>/<repo>/lighttpd:bootstrap .
$ docker push registry.gitlab.com/<user>/<repo>/lighttpd:bootstrap

There are many ways to create the service; for example, it could run as a backend application with nginx or traefik proxying requests through to it. But in this case nothing else is listening on port 80, so we will create the service and publish port 80 from the host:

Create the Service
$ docker service create \
--name web-bootstrap \
--replicas 1 \
--network appnet \
--publish 80:80 \
--with-registry-auth registry.gitlab.com/<user>/<repo>/lighttpd:bootstrap
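Verify that the service is running and that its task has been scheduled:

$ docker service ls
$ docker service ps web-bootstrap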

Accessing your Website:

As this service serves our website, you can now browse to any node in the Swarm on port 80 and you should see the Bootstrap landing page.

Structured Search With Elasticsearch

In this post we will ingest some dummy data into elasticsearch, then we will perform some queries to get the following info:

  • Student Names
  • Student Ages
  • Include / Exclude
  • Marks greater than
  • Finding Students with Specific marks, etc.

Create the Mapping for our Index:

We define the mapping for our data explicitly, as we won't rely on the dynamic mapping that is applied by default:

$ curl -XPUT http://127.0.0.1:9200/school -d '
{
  "mappings": {
    "students": {
      "properties": {
        "name": {"type": "string"},
        "marks": {"type": "short"},
        "gender": {"type": "string"},
        "age": {"type": "short"}
      }
    }
  }
}'
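You can confirm that the mapping was applied with:

$ curl -XGET http://127.0.0.1:9200/school/_mapping?pretty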

Ingesting the Data into Elasticsearch:

You can either index each document individually:

curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "james", "marks": 60, "gender": "male", "age": 14} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "simon", "marks": 70, "gender": "male", "age": 15} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "samantha", "marks": 70, "gender": "female", "age": 14} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "john", "marks": 60, "gender": "male", "age": 14} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "michelle", "marks": 30, "gender": "female", "age": 14} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "max", "marks": 75, "gender": "female", "age": 15} '
curl -XPOST http://127.0.0.1:9200/school/students/ -d ' {"name": "frank", "marks": 79, "gender": "male", "age": 15} '

or using the Bulk API:

Save the following as bulk.json:

{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYob6VgXGdBeaBa1c" } }
{"name": "james", "marks": 60, "gender": "male", "age": 14}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYqU3VgXGdBeaBa1d" } }
{"name": "simon", "marks": 70, "gender": "male", "age": 15}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYrzFVgXGdBeaBa1v" } }
{"name": "samantha", "marks": 70, "gender": "female", "age": 14}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYtuUVgXGdBeaBa2I" } }
{"name": "john", "marks": 60, "gender": "male", "age": 14}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYvMOVgXGdBeaBa2K" } }
{"name": "michelle", "marks": 30, "gender": "female", "age": 14}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYwwnVgXGdBeaBa2j" } }
{"name": "max", "marks": 75, "gender": "female", "age": 15}
{"index" : {"_index" : "school", "_type" : "students", "_id" : "AV4cYyXYVgXGdBeaBa29" } }
{"name": "frank", "marks": 79, "gender": "male", "age": 15}

Then use the Bulk API to Ingest into Elasticsearch:

$ curl -s -XPOST http://127.0.0.1:9200/_bulk --data-binary @bulk.json

Then you should have 7 documents ingested into Elasticsearch:

$ curl -XGET http://10.4.156.13:9200/_cat/indices/school?v
health status index  pri rep docs.count docs.deleted store.size pri.store.size
yellow open   school   5   1          7            0     19.4kb         19.4kb

You should notice that the index health is yellow: I am running Elasticsearch on a single instance, so the replica shards remain unassigned.

Query Student Names:

Let’s search for the Student with the Name Max:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '{"query": {"term" : {"name" : "max"}}}'
{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cRqaLVgXGdBeaBaWD",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    } ]
  }
}

Search Student Ages:

Search for all students with the age of 15:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '{"query": {"term" : {"age" : 15}}}'
{
  "took" : 14,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 3,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cRWPPVgXGdBeaBaVF",
      "_score" : 1.0,
      "_source" : {
        "name" : "simon",
        "marks" : 70,
        "gender" : "male",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cRqaLVgXGdBeaBaWD",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cRxayVgXGdBeaBaWg",
      "_score" : 0.30685282,
      "_source" : {
        "name" : "frank",
        "marks" : 79,
        "gender" : "male",
        "age" : 15
      }
    } ]
  }
}

Query, Include but also Exclude:

Query everyone who is 14 or male, except the student called John:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
              "should" : [
                 { "term" : {"age" : 14}},
                 { "term" : {"gender" : "male"}}
              ],
              "must_not" : {
                 "term" : {"name" : "john"}
              }
           }
         }
      }
   }
}'
"hits" : {
  "total" : 5,
  "max_score" : 1.0,
  "hits" : [ {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYob6VgXGdBeaBa1c",
    "_score" : 1.0,
    "_source" : {
      "name" : "james",
      "marks" : 60,
      "gender" : "male",
      "age" : 14
    }
  }, {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYvMOVgXGdBeaBa2K",
    "_score" : 1.0,
    "_source" : {
      "name" : "michelle",
      "marks" : 30,
      "gender" : "female",
      "age" : 14
    }
  }, {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYyXYVgXGdBeaBa29",
    "_score" : 1.0,
    "_source" : {
      "name" : "frank",
      "marks" : 79,
      "gender" : "male",
      "age" : 15
    }
  }, {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYqU3VgXGdBeaBa1d",
    "_score" : 1.0,
    "_source" : {
      "name" : "simon",
      "marks" : 70,
      "gender" : "male",
      "age" : 15
    }
  }, {
    "_index" : "school",
    "_type" : "students",
    "_id" : "AV4cYrzFVgXGdBeaBa1v",
    "_score" : 1.0,
    "_source" : {
      "name" : "samantha",
      "marks" : 70,
      "gender" : "female",
      "age" : 14
    }
  } ]
}
}

Query for Age, Gender with High Grades:

Show everyone who is 14 or male, but only with marks of 70 and up:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
              "should" : [
                 { "term" : {"age" : 14}},
                 { "term" : {"gender" : "male"}}
              ],
              "must_not" : {
                 "range" : {"marks": {"lt": 70, "gte": 0}}
              }
           }
         }
      }
   }
}'

Everyone that got 70 and more:

Show me all the students that have marks of 70 and above:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
            "must" : [
              { "range" : {"marks" : {"lt": 100, "gte": 70}}}
            ]
          }
        }              
     }
   }
}'
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 4,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYwwnVgXGdBeaBa2j",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYyXYVgXGdBeaBa29",
      "_score" : 1.0,
      "_source" : {
        "name" : "frank",
        "marks" : 79,
        "gender" : "male",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYqU3VgXGdBeaBa1d",
      "_score" : 1.0,
      "_source" : {
        "name" : "simon",
        "marks" : 70,
        "gender" : "male",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYrzFVgXGdBeaBa1v",
      "_score" : 1.0,
      "_source" : {
        "name" : "samantha",
        "marks" : 70,
        "gender" : "female",
        "age" : 14
      }
    } ]
  }
}

Or you can do it like this:

Query Range only with gt:

Show me everyone that got more than 70:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
              "must" : [
                { "range" : {"marks" : {"gt": 70}}}
            ]
          }
        }              
     }
   }
}'
{
  "took" : 7,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYwwnVgXGdBeaBa2j",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYyXYVgXGdBeaBa29",
      "_score" : 1.0,
      "_source" : {
        "name" : "frank",
        "marks" : 79,
        "gender" : "male",
        "age" : 15
      }
    } ]
  }
}

Gender Specific, with Grades more than 70:

Show me the females that have marks of 70 and above:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "bool" : {
            "must" : [
              { "range" : {"marks" : {"lt": 100, "gte": 70}}}
            ],
            "must_not": [
              {"term": {"gender": "male"}}
            ]
          }
        }              
     }
   }
}'
{
  "took" : 13,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYwwnVgXGdBeaBa2j",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYrzFVgXGdBeaBa1v",
      "_score" : 1.0,
      "_source" : {
        "name" : "samantha",
        "marks" : 70,
        "gender" : "female",
        "age" : 14
      }
    } ]
  }
}

Grade Specific:

Show me the ones that got exactly 30 or 75:

$ curl -XGET http://10.4.156.13:9200/school/students/_search?pretty -d '
{
   "query" : {
      "constant_score" : {
         "filter" : {
            "terms" : {
              "marks": [30, 75]
            }
         }
      }
   }
}'
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYwwnVgXGdBeaBa2j",
      "_score" : 1.0,
      "_source" : {
        "name" : "max",
        "marks" : 75,
        "gender" : "female",
        "age" : 15
      }
    }, {
      "_index" : "school",
      "_type" : "students",
      "_id" : "AV4cYvMOVgXGdBeaBa2K",
      "_score" : 1.0,
      "_source" : {
        "name" : "michelle",
        "marks" : 30,
        "gender" : "female",
        "age" : 14
      }
    } ]
  }
}

For more information on this, have a look at Elasticsearch: Structured Search.