Ruan Bekker's Blog

From a Curious mind to Posts on Github

Getting Started With Chef: Creating a Website With Apache

From my previous post, we got started with installing the Chef Development Kit and using the file resource type.

In this post we will create a recipe that will:

  • Update the APT Cache
  • Install the Apache2 package
  • Enable and Start Apache2 on Boot
  • Create an index.html for our Website

Creating a Web Server:

We will create our webserver.rb recipe, and our first section will consist of the following:

  • Ensuring our APT Cache is up to date
  • The frequency property of 86,400 seconds indicates a 24 hour interval
  • The periodic action indicates that the update occurs periodically
  • Optional: the :update action will update the apt cache on each run
  • Installs the apache2 package (No action is specified, defaults to :install)
apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

Running this recipe at this moment will provide the following output:

$ chef-client --local-mode webserver.rb
..
Converging 2 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic
    - update new lists of packages
    * directory[/var/lib/apt/periodic] action create (up to date)
    * directory[/etc/apt/apt.conf.d] action create (up to date)
    * file[/etc/apt/apt.conf.d/15update-stamp] action create_if_missing
      - create new file /etc/apt/apt.conf.d/15update-stamp
      - update content in file /etc/apt/apt.conf.d/15update-stamp from none to 174cdb
      --- /etc/apt/apt.conf.d/15update-stamp    2017-09-04 16:53:31.604488306 +0000
      +++ /etc/apt/apt.conf.d/.chef-15update-stamp20170904-5727-1p2g8zw 2017-09-04 16:53:31.604488306 +0000
      @@ -1 +1,2 @@
      +APT::Update::Post-Invoke-Success {"touch /var/lib/apt/periodic/update-success-stamp 2>/dev/null || true";};
    * execute[apt-get -q update] action run
      - execute apt-get -q update

Next, we will set apache2 to start on boot and start the service:

apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

service 'apache2' do
  supports status: true
  action [:enable, :start]
end

Running our chef-client will produce the following output:

$ chef-client --local-mode webserver.rb
Converging 3 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic (up to date)
  * apt_package[apache2] action install (up to date)
  * service[apache2] action enable (up to date)
  * service[apache2] action start
    - start service service[apache2]

Verifying that our apache2 service is started:

$ /etc/init.d/apache2 status
 * apache2 is running

Next, using the file resource, we will replace the /var/www/html/index.html landing page with the one that we specify in our recipe:

apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

package 'apache2'

service 'apache2' do
  supports status: true
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content '<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>'
end

And our full webserver.rb recipe will look like the following:

# update cache periodically every 24 hours
apt_update 'Update APT Cache Daily' do
  frequency 86_400
  action :periodic
end

# install apache2 (:install is the default action)
package 'apache2'

# enable apache2 on boot and start apache2
service 'apache2' do
  supports status: true
  action [:enable, :start]
end

# create a custom html page
file '/var/www/html/index.html' do
  content '<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>'
end

Running our Chef Client against our Recipe:

For the previous snippets we took it section by section; here we will run the whole recipe:

$ chef-client --local-mode webserver.rb
...
Converging 4 resources
Recipe: @recipe_files::/root/chef-repo/webserver.rb
  * apt_update[Update APT Cache Daily] action periodic (up to date)
  * apt_package[apache2] action install (up to date)
  * service[apache2] action enable (up to date)
  * service[apache2] action start (up to date)
  * file[/var/www/html/index.html] action create
    - update content in file /var/www/html/index.html from 538f31 to 9d1dca
    --- /var/www/html/index.html        2017-09-04 16:53:55.134043652 +0000
    +++ /var/www/html/.chef-index20170904-7451-3kt1p7.html      2017-09-04 17:00:16.306831840 +0000

Testing our Website:

And finally, testing our website:

$ curl -XGET http://localhost/
<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>


Getting Started With Chef: Working With Files

Chef covers Infrastructure as Code, Automation and Configuration Management. Having a service that can do all of that, and especially having something in place that knows what the desired state of your configurations/applications should be, is definitely a plus.

As I am learning Chef at the moment, I stumbled upon learn.chef.io, which is a great resource.

The components of Chef consist of:

  • Chef Workstation (ChefDK enables you to use the tools locally to test before pushing your code to the Chef Server)
  • Chef Server (Central Repository for your Cookbooks and info of every node Chef Manages)
  • Chef Client (a Node that is Managed by the Chef Server)

In this post we will install the Chef Development Kit, and work with the chef-client in local-mode to create, update and delete files using the file resource type.

Getting Started with Chef: Installation:

Installing the Chef Development Kit:

$ sudo apt-get update && sudo apt-get upgrade -y
$ sudo apt-get install curl git -y
$ curl https://omnitruck.chef.io/install.sh | sudo bash -s -- -P chefdk -c stable -v 2.0.28

Configure a Resource:

Using chef-client in local mode, we will use the file resource to create a recipe that creates our motd file:

hello.rb
file '/tmp/motd' do
  content 'hello world'
end

Running chef client against our recipe in local-mode:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - create new file /tmp/motd
    - update content in file /tmp/motd from none to b94d27
    --- /tmp/motd       2017-09-04 16:18:19.265699403 +0000
    +++ /tmp/.chef-motd20170904-4500-54fh8w     2017-09-04 16:18:19.265699403 +0000
    @@ -1 +1,2 @@
    +hello world

Verify the Content:

$ cat /tmp/motd
hello world

Running the command again will do nothing, as the content is in its desired state:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create (up to date)

Changing our recipe by replacing the word world with chef, we will find that the content of our file will be updated:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - update content in file /tmp/motd from b94d27 to c38c60
    --- /tmp/motd       2017-09-04 16:18:19.265699403 +0000
    +++ /tmp/.chef-motd20170904-4903-wuigr      2017-09-04 16:23:21.379649145 +0000
    @@ -1,2 +1,2 @@
    -hello world
    +hello chef

Let’s overwrite the content of our motd file manually:

$ echo 'hello robots' > /tmp/motd

Running Chef Client against our recipe again allows Chef to restore our content to the desired state that is specified in our recipe:

$ chef-client --local-mode hello.rb
..
Converging 1 resources
Recipe: @recipe_files::/root/chef-repo/hello.rb
  * file[/tmp/motd] action create
    - update content in file /tmp/motd from 548078 to c38c60
    --- /tmp/motd       2017-09-04 16:24:29.308286834 +0000
    +++ /tmp/.chef-motd20170904-5103-z16ssa     2017-09-04 16:24:42.528021632 +0000
    @@ -1,2 +1,2 @@
    -hello robots
    +hello chef

Deleting a file using a recipe:

destroy.rb
file '/tmp/motd' do
  action :delete
end

Now, running chef-client against this recipe will remove our file:

$ chef-client --local-mode destroy.rb
Recipe: @recipe_files::/root/chef-repo/destroy.rb
  * file[/tmp/motd] action delete
    - delete file /tmp/motd


Splitting Characters With Python to Determine Name Surname and Email Address

I had a bunch of email addresses that were set in a specific format that I could strip characters from, to build up a Username, Name and Surname from the Email Address, which I could then use for dynamic reporting.

Using Split in Python

Here I will set the value of emailaddress to a string, then use Python’s split() function to get the values that I want:

>>> emailaddress = "ruan.bekker@domain.com"
>>> emailaddress.split("@", 1)
['ruan.bekker', 'domain.com']
>>> username = emailaddress.split("@", 1)[0]
>>> username
'ruan.bekker'
>>> username.split(".", 1)
['ruan', 'bekker']
>>> name = username.split(".", 1)[0].capitalize()
>>> surname = username.split(".", 1)[1].capitalize()
>>> name
'Ruan'
>>> surname
'Bekker'
>>> username
'ruan.bekker'
>>> emailaddress
'ruan.bekker@domain.com'

Print The Values in Question:

Now that we have defined our variables, let’s print the values:

>>> print("Name: {0}, Surname: {1}, UserName: {2}, Email Address: {3}".format(name, surname, username, emailaddress))
Name: Ruan, Surname: Bekker, UserName: ruan.bekker, Email Address: ruan.bekker@domain.com

From here on you can, for example, build up an email or reporting function and pass these values to it to get a specific job done, as sketched below.
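
As a rough sketch, such a helper could wrap the parsing above and hand the values on to whatever does the actual work (the report_for_user function below is hypothetical, shown only to illustrate passing the values along):

# a minimal sketch: wrap the parsing shown above in a helper
def parse_email(emailaddress):
    username = emailaddress.split("@", 1)[0]
    name, surname = [x.capitalize() for x in username.split(".", 1)]
    return {"name": name, "surname": surname, "username": username, "email": emailaddress}

# hypothetical reporting function that consumes the parsed values
def report_for_user(emailaddress):
    details = parse_email(emailaddress)
    return "Name: {name}, Surname: {surname}, UserName: {username}, Email Address: {email}".format(**details)

print(report_for_user("ruan.bekker@domain.com"))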

Update: Capitalize from One String

Today, I had to capitalize the name and surname that were linked to one variable:

>>> user = 'james.bond'
>>> username = ' '.join(map(str, [x.capitalize() for x in user.split(".")]))
>>> print(username)
James Bond
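
Combining this with the split from earlier, the full name can also be derived straight from the email address itself, for example:

>>> emailaddress = 'james.bond@domain.com'
>>> fullname = ' '.join(x.capitalize() for x in emailaddress.split('@', 1)[0].split('.'))
>>> print(fullname)
James Bond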

Setup a 3 Node MongoDB Replica Set on Ubuntu 16

Today we will set up a 3 Node Replica Set for MongoDB on Ubuntu 16. A Replica Set is a form of data replication, so that your data resides on more than one node for data durability. We will set up the first node as the primary, the second as the secondary, and the third node will act as an arbiter.

The arbiter node can be thought of as a voting-only node, put in place to prevent split brain.


Installing MongoDB on our 3 Nodes:

In our case, using Ubuntu 16.04, we set up the MongoDB repository and install mongodb from it:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
$ echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
$ sudo apt update
$ sudo apt install -y mongodb-org

Preparing our Directories:

$ mkdir -p /srv/mongodb/rs0-0 /srv/mongodb/rs0-1 /srv/mongodb/rs0-2
$ mkdir -p /var/log/mongodb/rs0-0 /var/log/mongodb/rs0-1 /var/log/mongodb/rs0-2

Populating our MongoDB Configuration:

  • MongoDB Prefers XFS File Systems when using WiredTiger.
$ cat > /etc/mongod.conf << EOF
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: false
  mmapv1:
    smallFiles: true

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

net:
  port: 27017
  bindIp: 0.0.0.0

replication:
  replSetName: rs0

security:
  authorization: enabled
EOF

Enable MongoDB On Startup and Start MongoDB:

$ systemctl enable mongod
$ systemctl restart mongod

Setup MongoDB Replica Sets:

In our setup we will have 3 nodes (mongodb-1, mongodb-2, mongodb-3). From our Primary Node, connect to MongoDB and initialize our replica set:

$ mongo
MongoDB shell version v3.4.7
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.7
> rs.initiate()
{
        "info2" : "no configuration specified. Using a default configuration for the set",
        "me" : "mysql-1:27017",
        "ok" : 1
}

Next, add our 2 other MongoDB nodes; remember, mongodb-3 is our arbiter node:

rs0:SECONDARY> rs.add("mongodb-2")
{ "ok" : 1 }
rs0:PRIMARY> rs.add("mongodb-3", true)
{ "ok" : 1 }

Verify the Replica Set Status:

rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2017-08-27T13:17:42.469Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(0, 0),
                        "t" : NumberLong(-1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1503839853, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1503839722, 1),
                        "t" : NumberLong(-1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "mysql-1:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 422,
                        "optime" : {
                                "ts" : Timestamp(1503839853, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T13:17:33Z"),
                        "electionTime" : Timestamp(1503839723, 1),
                        "electionDate" : ISODate("2017-08-27T13:15:23Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "mongodb-2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 28,
                        "optime" : {
                                "ts" : Timestamp(1503839853, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(0, 0),
                                "t" : NumberLong(-1)
                        },
                        "optimeDate" : ISODate("2017-08-27T13:17:33Z"),
                        "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T13:17:41.707Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T13:17:40.699Z"),
                        "pingMs" : NumberLong(4),
                        "syncingTo" : "mysql-1:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "mongodb-3:27017",
                        "health" : 1,
                        "state" : 7,
                        "stateStr" : "ARBITER",
                        "uptime" : 8,
                        "lastHeartbeat" : ISODate("2017-08-27T13:17:41.721Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T13:17:38.749Z"),
                        "pingMs" : NumberLong(2),
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}
rs0:PRIMARY> exit
bye

Setup Auth:

To set up authentication on our MongoDB database, we will create the user adminuser and set the password to secret:

rs0:PRIMARY> use admin
switched to db admin

rs0:PRIMARY> db.createUser({user: "adminuser", pwd: "secret", roles:[{role: "root", db: "admin"}]})
Successfully added user: {
        "user" : "adminuser",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}
rs0:PRIMARY> exit

Restart MongoDB:

$ systemctl restart mongod

Connect and Authenticate against MongoDB:

Connect to your MongoDB Cluster with auth:

$ mongo --host mongodb.example.com --port 27017 -u <username> -p --authenticationDatabase admin
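
If you prefer to connect from Python instead, a minimal PyMongo sketch could look like this (assuming the pymongo driver is installed, and using the mongodb-1/mongodb-2 hostnames and the adminuser credentials created above):

from pymongo import MongoClient

# connect to the replica set, authenticating against the admin database
client = MongoClient('mongodb://adminuser:secret@mongodb-1:27017,mongodb-2:27017/admin?replicaSet=rs0')

# quick sanity check: ask the server who the current primary is
print(client.admin.command('ismaster').get('primary'))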

Setup HAProxy Load Balancer for MySQL Galera With IP Whitelisting and Backup Servers

Today we will set up a HAProxy service for our 3 Node MySQL Galera Cluster.

Our Setup:

  • 3 Node Galera MySQL Cluster
  • 3 HAProxy Services (Each HAProxy Service Running on the MySQL Nodes)
  • MySQL Listens on Port 3307
  • HAProxy Listens on Port 3306 and Proxies through to 3307

I have set up HAProxy on the same nodes as the MySQL servers for my use case, but you can also set up HAProxy on a node outside the MySQL hosts.

So essentially our MySQL Galera Cluster is a Multi Master setup, but for now HAProxy will only route connections to Node-A, with Node-B and Node-C configured as backup servers. Should Node-A go down, HAProxy will route connections to Node-B, and if Node-B also goes down, connections will be routed to Node-C.

If the primary node, which is Node-A, recovers, connections will be routed back to Node-A.

Security:

We use iptables to allow traffic between the nodes on port TCP/3307, and allow all traffic on port TCP/3306, as HAProxy will handle the IP based access control:

Iptables for Each Node
$ iptables -I INPUT -s {Node-A} -p tcp --dport 3307 -j ACCEPT
$ iptables -I INPUT -s {Node-B} -p tcp --dport 3307 -j ACCEPT
$ iptables -I INPUT -s {Node-C} -p tcp --dport 3307 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 3306 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 3307 -j DROP

HAProxy:

Installing HAProxy on Ubuntu:

Install HAProxy
$ sudo apt update
$ sudo apt install haproxy -y

Configure HAProxy with a port 3306 listener, specify the source addresses that you would like to authorize to communicate with MySQL, and then specify the servers that proxy connections through to our MySQL Galera Cluster, with 2 of them configured as backup servers:

/etc/haproxy/haproxy.cfg
global
  log         127.0.0.1 local2
  chroot      /var/lib/haproxy
  pidfile     /var/run/haproxy.pid
  maxconn     1020
  user        haproxy
  group       haproxy
  daemon

  stats socket /var/lib/haproxy/stats.sock mode 600 level admin
  stats timeout 2m

defaults
  mode    tcp
  log     global
  option  dontlognull
  option  redispatch
  retries                   3
  timeout queue             45s
  timeout connect           5s
  timeout client            1m
  timeout server            1m
  timeout check             10s
  maxconn                   1020

listen stats
  bind    *:80
  mode    http
  stats   enable
  stats   show-legends
  stats   refresh           5s
  stats   uri               /
  stats   realm             Haproxy\ Statistics
  stats   auth              admin:secret
  stats   admin             if TRUE

listen galera-lb
  bind    *:3306
  mode    tcp
  acl     network_allowed src 10.10.1.0/24 10.32.15.2/32
  tcp-request               content accept if network_allowed
  tcp-request               content reject
  default_backend           galera-cluster

backend galera-cluster
  balance roundrobin
  server  scw-mysql-1 10.0.0.2:3307  check
  server  scw-mysql-2 10.0.0.3:3307  check backup
  server  scw-mysql-3 10.0.0.4:3307  check backup

Start HAProxy:

Start HAProxy Service
$ sudo systemctl enable haproxy
$ sudo systemctl restart haproxy

Authorize HAProxy Hostnames to Connect to MySQL:

In this case we need to allow the Hostnames to be able to connect to mysql:

1
2
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'secrets' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
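
As a quick sanity check that connections through the HAProxy listener really land on the Galera cluster, something like the following PyMySQL sketch could be run from one of the whitelisted source addresses (assuming PyMySQL is installed, and using the root credentials granted above):

import pymysql

# connect to the HAProxy listener on 3306, which proxies through to MySQL on 3307
connection = pymysql.connect(host='10.0.0.2', port=3306, user='root', password='secrets')

try:
    with connection.cursor() as cursor:
        # wsrep_cluster_size confirms the node we reached is part of the Galera cluster
        cursor.execute("SHOW STATUS LIKE 'wsrep_cluster_size'")
        print(cursor.fetchone())
finally:
    connection.close()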


Secure Your Access to Kibana 5 and Elasticsearch 5 With Nginx for AWS

At the time of writing, AWS does not offer VPC support for Elasticsearch, which makes it a bit difficult to authorize private IP ranges.

One workaround would be to set up an Nginx reverse proxy on AWS within your private VPC, associate an EIP with your Nginx EC2 instance, and then authorize that EIP in your Elasticsearch IP access policy.


Our Setup:

In this setup, we will have an internal ELB (Elastic Load Balancer) with 1 or more Nginx EC2 instances behind it, and we will then set up Nginx to reverse proxy our connections through to our Elasticsearch endpoint.

We will also set up Basic HTTP Authentication for our / (Elasticsearch) endpoint and our /kibana endpoint. We will keep the authentication separate from each other, so that the credentials for ES and Kibana are not the same, but depending on your use case you can let both endpoints reference the same credential file.

Install Nginx

Depending on your Linux distribution the package manager may differ; I am using Amazon Linux:

Install Nginx
$ sudo yum update -y
$ sudo yum install nginx httpd-tools -y

Configure Nginx:

Remove the default configuration and replace the nginx.conf with the following:

Remove Default Nginx Config
$ sudo rm -r /etc/nginx/nginx.conf

Main Nginx Configuration:

/etc/nginx/nginx.conf
user nginx;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {

  # Basic Settings
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  server_names_hash_bucket_size 128;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # Logging Settings
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  # Gzip Settings
  gzip on;
  gzip_disable "msie6";

  # Elasticsearch Config
  include /etc/nginx/conf.d/elasticsearch.conf;
}

The Reverse Proxy Configuration:

/etc/nginx/conf.d/elasticsearch.conf
server {

  listen 80;
  server_name elk.mydomain.com;

  # error logging
  error_log /var/log/nginx/elasticsearch_error.log;

  # authentication: server wide
  #auth_basic "Auth";
  #auth_basic_user_file /etc/nginx/.secrets;

  location / {

    # authentication: elasticsearch
    auth_basic "Elasticsearch Auth";
    auth_basic_user_file /etc/nginx/.secrets_elasticsearch;

    proxy_http_version 1.1;
    proxy_set_header Host https://search.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP {NGINX-EIP};
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search.eu-west-1.es.amazonaws.com/;
    proxy_redirect https://search.eu-west-1.es.amazonaws.com/ http://{NGINX-EIP}/;

  }

  location /kibana {

    # authentication: kibana
    auth_basic "Kibana Auth";
    auth_basic_user_file /etc/nginx/.secrets_kibana;

    proxy_http_version 1.1;
    proxy_set_header Host https://search.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP {NGINX-EIP};
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search.eu-west-1.es.amazonaws.com/_plugin/kibana/;
    proxy_redirect https://search.eu-west-1.es.amazonaws.com/_plugin/kibana/ http://{NGINX_EIP}/kibana/;

  }

  # elb checks
  location /status {
    root /usr/share/nginx/html/;
  }

}

Setup Authentication:

Setup the authentication for elasticsearch and kibana:

Create Auth for Kibana and Elasticsearch
$ sudo htpasswd -c /etc/nginx/.secrets_elasticsearch admin
$ sudo htpasswd -c /etc/nginx/.secrets_kibana admin

Restart Nginx and Enable on Startup

Restart the nginx process and enable the process on boot:

Restart Nginx
$ sudo /etc/init.d/nginx restart
$ sudo chkconfig nginx on

Configure ELB:

Create a new internal ELB, set the backend instances on port 80, and point the healthcheck to /status/index.html, as this location block does not require authentication and our ELB will be able to get a 200 response if all is good. Next you can configure your Route 53 hosted zone so that elk.mydomain.com maps to your ELB.

End Result

Now you should be able to access Elasticsearch on http://elk.mydomain.com/ and Kibana on http://elk.mydomain.com/kibana after authenticating.
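
To verify the separate credentials once everything is in place, a quick check from Python with the requests library could look something like this (the hostname is the example one used above, and the passwords are whatever you set with htpasswd):

import requests

# the /status endpoint needs no authentication, so the ELB healthcheck can reach it
print(requests.get('http://elk.mydomain.com/status/index.html').status_code)

# elasticsearch and kibana each have their own basic auth credential file
print(requests.get('http://elk.mydomain.com/', auth=('admin', 'es-password')).status_code)
print(requests.get('http://elk.mydomain.com/kibana', auth=('admin', 'kibana-password')).status_code)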

Reference Credentials Outside Your Main Application in Python

In this post I will show one way of referencing credentials from your application in Python, without setting them in your application's code. We will create a separate Python file which will hold our credentials, and then reference them from our main application.

Our Main Application

This app will print our username, just for the sake of this example:

app.py
from config import credentials as secrets

my_username = secrets['APP1']['username']
my_password = secrets['APP1']['password']

print("Hello, your username is: {username}".format(username=my_username))

Our Credentials File

Then we have our file which will hold our credentials:

config.py
credentials = {
        'APP1': {
            'username': 'foo',
            'password': 'bar'
            }
        }

That is at least one way of doing it; you could also use environment variables using the os module, which is described here.
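
A minimal sketch of the environment variable approach, assuming APP1_USERNAME and APP1_PASSWORD were exported in the shell beforehand:

import os

# read the credentials from the environment instead of a config file
my_username = os.environ.get('APP1_USERNAME')
my_password = os.environ.get('APP1_PASSWORD')

print("Hello, your username is: {username}".format(username=my_username))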


Change IAM Username With AWS CLI

You may find yourself in a position where you need to rename more than one IAM Username, and one way of doing this is using the AWS CLI tools to rename the username.

The benefit of this is that the user’s access keys remain the same, and any policies associated with the user will stay attached after the username gets renamed.

The only thing that changes is, of course, the username that the user will use when logging onto the AWS Management Console.

Details of our User:

We will change the IAM user peter to peter.franklin. Currently Peter’s access key is AKIA123456ABCDEF1234, which is configured with the profile name peter.

Let’s first get the details of our user before changing it:

$ aws --profile admin iam get-user --user-name peter
{
    "User": {
        "UserName": "peter",
        "PasswordLastUsed": "2017-08-28T13:17:22Z",
        "CreateDate": "2017-08-28T13:11:25Z",
        "UserId": "ABCDEFGHIJKLMNOPQRST",
        "Path": "/",
        "Arn": "arn:aws:iam::123456789012:user/peter"
    }
}

Rename the IAM User

Update user peter to peter.franklin:

Rename the IAM User
$ aws --profile aws iam update-user --user-name peter --new-user-name peter.franklin

Describe peter’s new username:

$ aws --profile aws iam get-user --user-name peter.franklin
{
    "User": {
        "UserName": "peter.franklin",
        "PasswordLastUsed": "2017-08-28T13:23:18Z",
        "CreateDate": "2017-08-28T13:11:25Z",
        "UserId": "ABCDEFGHIJKLNMOPQRST",
        "Path": "/",
        "Arn": "arn:aws:iam::123456789012:user/peter.franklin"
    }
}

Verify that access keys are the same:

$ aws --profile aws iam list-access-keys --user-name peter.franklin
{
    "AccessKeyMetadata": [
        {
            "UserName": "peter.franklin",
            "Status": "Active",
            "CreateDate": "2017-08-28T13:11:27Z",
            "AccessKeyId": "AKIA123456ABCDEF1234"
        }
    ]
}

At this moment we can see that Peter’s AccessKeyId is still the same, which means he does not have to update his credentials on his end.
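
Since the use case was renaming more than one user, the same call can also be looped with boto3; a rough sketch with hypothetical usernames, assuming boto3 is installed and configured with the admin credentials:

import boto3

# mapping of current usernames to new usernames (hypothetical values)
renames = {
    'peter': 'peter.franklin',
    'susan': 'susan.smith',
}

iam = boto3.client('iam')

for old_name, new_name in renames.items():
    # access keys and attached policies stay with the user after the rename
    iam.update_user(UserName=old_name, NewUserName=new_name)
    print("Renamed {0} to {1}".format(old_name, new_name))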

Some Useful CLI Commands:

Get only the Access Key for a User:

$ aws --profile admin iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId'
AKIA123456ABCDEF1234

Determine when the AccessKey was last used, and for which Service:

For auditing, or to verify whether an AccessKeyId is being used, we can call get-access-key-last-used, which will give us the last time the key was used, and for which service.

Let Peter create a DynamoDB Table:

$ aws --profile peter dynamodb \
create-table --table-name test01 \
--attribute-definitions "AttributeName=username,AttributeType=S" \
--key-schema "AttributeName=username,KeyType=HASH" \
--provisioned-throughput "ReadCapacityUnits=1,WriteCapacityUnits=1"
{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:eu-west-1:123456789012:table/test01",
        "AttributeDefinitions": [
            {
                "AttributeName": "username",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 1,
            "ReadCapacityUnits": 1
        },
        "TableSizeBytes": 0,
        "TableName": "test01",
        "TableStatus": "CREATING",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "username"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1503928537.671
    }
}

Get Detail on LastUsedDate:

$ aws --profile admin iam get-access-key-last-used  --access-key $(aws --profile aws iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId') | jq -r '.[]'
peter.franklin
{
  "Region": "eu-west-1",
  "ServiceName": "dynamodb",
  "LastUsedDate": "2017-08-28T13:55:00Z"
}

Only getting the LastUsedDate of the AccessKeyId:

$ aws --profile admin iam get-access-key-last-used  --access-key $(aws --profile aws iam list-access-keys --user-name peter.franklin | jq -r '.[][].AccessKeyId') | jq '.AccessKeyLastUsed.LastUsedDate'
"2017-08-28T13:55:00Z"


Using the Python API for MongoDB Using PyMongo


Requirements:

You will need to install the pymongo driver using pip:

Install Pymongo
$ pip install pymongo

A configuration file with your access credentials, which I like to use outside my code:

config.py
credentials = {
    "mongodb": {
        "HOSTNAME": "host.domain.com",
        "USERNAME": "username",
        "PASSWORD": "password"
    }
}

Connecting to MongoDB:

From the python interpreter, connect to MongoDB:

>>> from pymongo import MongoClient
>>> from config import credentials as secrets
>>> mongo_host = secrets['mongodb']['HOSTNAME']
>>> mongo_username = secrets['mongodb']['USERNAME']
>>> mongo_password = secrets['mongodb']['PASSWORD']
>>> mongodb_client = MongoClient('mongodb://%s:%s@%s:27017/admin?authMechanism=SCRAM-SHA-1' % (mongo_username, mongo_password, mongo_host))

Find the Database that you are connected to:

>>> mongodb_client.get_database().name
u'admin'

Find all the databases that are currently on your MongoDB Server:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local

Create a Database, Collection and Write a Document into your Database:

Let’s create a database, in my case it will be ruan-test, with a collection named mycollection, and then write one item into it:

>>> newdb = mongodb_client['ruan-test']
>>> newdb_collection = newdb['mycollection']
>>> doc = {"name": "frank", "surname": "jeffreys", "tags": ["person", "name"]}
>>> doc_id = newdb_collection.insert_one(doc).inserted_id
>>> print(doc_id)
59a319ec1f15a5088ba3a339

Note: you can also connect to your collection like the following

>>> newdb_collection = mongodb_client['ruan-test']['mycollection']

We have inserted one item into our database, which we can verify with count():

>>> newdb_collection.find().count()
1

As you can see, we have the value of the item’s id; we can use that to find it in our collection:

>>> newdb_collection.find_one({"_id": doc_id})
{u'_id': ObjectId('59a319ec1f15a5088ba3a339'), u'surname': u'jeffreys', u'name': u'frank', u'tags': [u'person', u'name']}

As we only have one item in our database, we can also use find_one() which will give us the exact same data:

>>> newdb_collection.find_one()
{u'_id': ObjectId('59a319ec1f15a5088ba3a339'), u'surname': u'jeffreys', u'name': u'frank', u'tags': [u'person', u'name']}

We can write some more data to our database, but this time, let’s write to a different collection:

>>> newdb_collection2 = newdb['mycollection-2']
>>> item = newdb_collection2.insert_one({"name": "ruby", "surname": "james"}).inserted_id
>>> item2 = newdb_collection2.insert_one({"name": "phillip", "surname": "james"}).inserted_id

As we captured the items’ _id values, we can view them:

>>> print(item)
59a31acf1f15a5088ba3a33b
>>> print(item2)
59a31a8a1f15a5088ba3a33a

Query Data from MongoDB:

We can then query for this data:

>>> newdb_collection2.find_one({"name": "ruby"})
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

>>> newdb_collection2.find_one({"_id": item})
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

Also scan for all items in the collection:

>>> scan = newdb_collection2.find({})
>>> for x in scan:
...     print(x)
...
{u'_id': ObjectId('59a31a8a1f15a5088ba3a33a'), u'surname': u'james', u'name': u'phillip'}
{u'_id': ObjectId('59a31acf1f15a5088ba3a33b'), u'surname': u'james', u'name': u'ruby'}

>>> newdb_collection2.find().count()
2

We can now verify that we have 2 collections in our database:

>>> newdb.collection_names()
[u'mycollection-2', u'mycollection']

Connecting to an existing Database:

Let’s connect to an existing database on our MongoDB Server:

>>> flaskdb = mongodb_client['flask_reminders']

List the collections:

>>> flaskdb.collection_names()
[u'reminders', u'usersessions']

Count the number of items in our reminders Collection:

>>> flaskdb.reminders.find().count()
624

Find a Random Item:

>>> flaskdb.reminders.find_one()
{u'category': u'Python', u'description': u'Chatbot with SQLite', u'link': u'http://rodic.fr/blog/python-chatbot-1/', u'date': u'2017-01-03', u'_id': ObjectId('586bb6dd0269103671afce32'), u'type': u'Discovered Service'}

Find One Item, with a Specific Value, for example the value AWS for our Category key:

>>> flaskdb.reminders.find_one({"category": "AWS"})
{u'category': u'AWS', u'description': u'Elasticsearch Documentation Access Policies', u'link': u'http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies', u'date': u'2017-02-13', u'_id': ObjectId('58a1d45202691070616947c3'), u'type': u'Documentation'}

Find All Items, with a specific value:

>>> data = flaskdb.reminders.find({"category": "Python"})
>>> for x in data:
...     print(x)
...
{u'category': u'Python', u'description': u'Chatbot with SQLite', u'link': u'http://rodic.fr/blog/python-chatbot-1/', u'date': u'2017-01-03', u'_id': ObjectId('586bb6dd0269103671afce32'), u'type': u'Discovered Service'}
{u'category': u'Python', u'description': u'Boto: Kinesis List', u'link': u'https://gitlab.com/rbekker87/code-examples/blob/master/kinesis/firehose/python/firehose.list.py', u'date': u'2017-01-05', u'_id': ObjectId('586dde1e0269103671afce36'), u'type': u'Stuff Done'}

Deleting Databases:

Cleaning up by deleting the database that we created; when a database is deleted, the collections within that database also get removed.

First list the databases:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local
ruan-test

Then delete the database that you want to delete:

>>> mongodb_client.drop_database("ruan-test")

Then verify if the database was removed:

>>> dbs = mongodb_client.database_names()
>>> for x in dbs:
...     print(x)
...
admin
flask_reminders
local


Setup a Local MongoDB Development 3 Member Replica Set

We will set up a development environment of a MongoDB Replica Set consisting of 3 mongod instances.

This is purely aimed at a testing or development environment, as one of the key points is that security is disabled, and for this post all 3 instances will be running on the same node.


Installation:

I am using Ubuntu 16.04; for other distributions, have a look at MongoDB’s Installation Page.

MongoDB Installation
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
$ echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list
$ sudo apt update
$ sudo apt install -y mongodb-org

Prepare Directories:

Prepare the data directories, and as I am planning to use the --fork option, I need to specify the --logpath, so I will create the log directories as well:

Create the Directory Paths
$ mkdir -p /srv/mongodb/rs0-0 /srv/mongodb/rs0-1 /srv/mongodb/rs0-2
$ mkdir -p /var/log/mongodb/rs0-0 /var/log/mongodb/rs0-1 /var/log/mongodb/rs0-2

Run 3 MongoDB Instances:

Create 3 MongoDB instances, each instance listening on its own unique port.

From MongoDB’s Documentation:

“The --smallfiles and --oplogSize settings reduce the disk space that each mongod instance uses”

$ mongod --port 27017 --dbpath /srv/mongodb/rs0-0 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-0/server.log --fork
$ mongod --port 27018 --dbpath /srv/mongodb/rs0-1 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-1/server.log --fork
$ mongod --port 27019 --dbpath /srv/mongodb/rs0-2 --replSet rs0 --smallfiles --oplogSize 128 --logpath /var/log/mongodb/rs0-2/server.log --fork

Confirm:

Confirm that the processes are listening on the ports that we defined:

$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      1100/mongod
tcp        0      0 0.0.0.0:27018           0.0.0.0:*               LISTEN      1127/mongod
tcp        0      0 0.0.0.0:27019           0.0.0.0:*               LISTEN      1154/mongod

Connect to the first MongoDB Instance:

Connect to our first MongoDB instance, where we will set up the replica set:

$ mongo --port 27017
>

Create the Replica Set Configuration Object:

> rsconf = {
             _id: "rs0",
             members: [
                        {
                         _id: 0,
                         host: "10.78.1.24:27017"
                        }
                      ]
           }

Initiate the replica set configuration:

> rs.initiate( rsconf )
{ "ok" : 1 }

Display the Replica Configuration with rs.conf():

rs0:SECONDARY> rs.conf()
{
        "_id" : "rs0",
        "version" : 1,
        "protocolVersion" : NumberLong(1),
        "members" : [
                {
                        "_id" : 0,
                        "host" : "10.78.1.24:27017",
                        "arbiterOnly" : false,
                        "buildIndexes" : true,
                        "hidden" : false,
                        "priority" : 1,
                        "tags" : {

                        },
                        "slaveDelay" : NumberLong(0),
                        "votes" : 1
                }
        ],
        "settings" : {
                "chainingAllowed" : true,
                "heartbeatIntervalMillis" : 2000,
                "heartbeatTimeoutSecs" : 10,
                "electionTimeoutMillis" : 10000,
                "catchUpTimeoutMillis" : 60000,
                "getLastErrorModes" : {

                },
                "getLastErrorDefaults" : {
                        "w" : 1,
                        "wtimeout" : 0
                },
                "replicaSetId" : ObjectId("59a2339f5ff27709a1645d28")
        }
}

Add the other two mongodb instances to the replica set using rs.add():

rs0:PRIMARY> rs.add("10.78.1.24:27018")
{ "ok" : 1 }

rs0:PRIMARY> rs.add("10.78.1.24:27019")
{ "ok" : 1 }

View the status of our MongoDB Replica Set with rs.status():

rs0:PRIMARY> rs.status()
{
        "set" : "rs0",
        "date" : ISODate("2017-08-27T02:52:08.106Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                },
                "appliedOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                },
                "durableOpTime" : {
                        "ts" : Timestamp(1503802316, 1),
                        "t" : NumberLong(1)
                }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "10.78.1.24:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 890,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1503802272, 1),
                        "electionDate" : ISODate("2017-08-27T02:51:12Z"),
                        "configVersion" : 3,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "10.78.1.24:27018",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 16,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "optimeDurableDate" : ISODate("2017-08-27T02:51:56Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T02:52:06.638Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T02:52:07.638Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "10.78.1.24:27017",
                        "configVersion" : 3
                },
                {
                        "_id" : 2,
                        "name" : "10.78.1.24:27019",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 11,
                        "optime" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1503802316, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2017-08-27T02:51:56Z"),
                        "optimeDurableDate" : ISODate("2017-08-27T02:51:56Z"),
                        "lastHeartbeat" : ISODate("2017-08-27T02:52:06.638Z"),
                        "lastHeartbeatRecv" : ISODate("2017-08-27T02:52:07.241Z"),
                        "pingMs" : NumberLong(0),
                        "configVersion" : 3
                }
        ],
        "ok" : 1
}

Write some Data to MongoDB:

Create a Database named mydb:

rs0:PRIMARY> use mydb
switched to db mydb

Create a Collection, named mycol1:

rs0:PRIMARY> db.createCollection("mycol1")
{ "ok" : 1 }

rs0:PRIMARY> show collections
mycol1

Write 2 documents with:

  • Name: James, Home Address: Country => South Africa, City => Cape Town
  • Name: Frank, Home Address: Country => Ireland, City => Dublin
Write some Data
rs0:PRIMARY> db.mycol1.insert({"name": "james", "home address": {"country": "south africa", "city": "cape town"}})
WriteResult({ "nInserted" : 1 })

rs0:PRIMARY> db.mycol1.insert({"name": "frank", "home address": {"country": "ireland", "city": "dublin"}})
WriteResult({ "nInserted" : 1 })

Count all Documents in our Database:

Counting
rs0:PRIMARY> db.mycol1.find().count()
2

Scan through all documents, and show them in pretty print:

Pretty Print
rs0:PRIMARY> db.mycol1.find().pretty()
{
        "_id" : ObjectId("59a23d26c0c3824694f79ff6"),
        "name" : "james",
        "home address" : {
                "country" : "south africa",
                "city" : "cape town"
        }
}
{
        "_id" : ObjectId("59a23dbdc0c3824694f79ff7"),
        "name" : "frank",
        "home address" : {
                "country" : "ireland",
                "city" : "dublin"
        }
}

Find Information about Frank:

Franks Info
rs0:PRIMARY> db.mycol1.find({"name": "frank"})
{ "_id" : ObjectId("59a23dbdc0c3824694f79ff7"), "name" : "frank", "home address" : { "country" : "ireland", "city" : "dublin" } }

Delete the database, but first confirm which database you are logged on to, then delete it using dropDatabase():

Drop Database
rs0:PRIMARY> db
mydb

rs0:PRIMARY> db.dropDatabase()
{ "dropped" : "mydb", "ok" : 1 }

rs0:PRIMARY> exit
bye
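
Since authentication is disabled on this development replica set, connecting to it from Python with PyMongo (covered in an earlier post) is a short sketch, assuming the 10.78.1.24 address used above:

from pymongo import MongoClient

# seed the driver with all three local members and let it discover the primary
client = MongoClient('mongodb://10.78.1.24:27017,10.78.1.24:27018,10.78.1.24:27019/?replicaSet=rs0')

# prints the (host, port) tuple of the current primary
print(client.primary)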