Ruan Bekker's Blog

From a Curious Mind to Posts on GitHub

Display PHP Content Through HTML Files

While I was working on a root index page in HTML that contained PHP, not all of the PHP rendered and some content was displayed as plain text instead.

The Issue:

Some of the PHP content was not rendered; I was seeing this at the top of the page as plain text:

; somePhpFunction(); ?>

My Nginx Config:

nginx.conf
events {
    # nginx requires an events block to start
    worker_connections          1024;
}

http {
    include                     /etc/nginx/mime.types;
    default_type                application/octet-stream;
    sendfile                    on;
    access_log                  /var/log/nginx/access.log;
    keepalive_timeout           3000;

    proxy_cache_path /var/cache/nginx/ levels=1:2 keys_zone=nginx_cache:10m max_size=16m inactive=60m;

    server {
        listen                  80;
        root                    /www;
        index                   index.php index.html index.htm;
        server_name             _;
        client_max_body_size    32m;
        error_page              500 502 503 504  /50x.html;
        proxy_cache       nginx_cache;
        add_header        X-Proxy-Cache "public";

        location = /50x.html {
              root              /var/lib/nginx/html;
        }

        location ~ \.php$ {
              fastcgi_pass      127.0.0.1:9000;
              fastcgi_index     index.php;
              include           fastcgi.conf;
        }
    }
}

The Fix:

A .htaccess file had to be placed in the root directory of the website:

.htaccess
AddType application/x-httpd-php .html .htm

After that was in place, all of the PHP was rendered.
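As a quick sanity check (the test.html filename here is made up for illustration), you can drop a file containing a PHP expression into the web root and request it; if the handler is active you should see the evaluated output instead of the raw PHP:

$ echo '<?php echo "PHP is rendered"; ?>' > /www/test.html
$ curl -s http://localhost/test.html
PHP is rendered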

Sending Mail With SSMTP on Alpine Linux

Quick Post on how to use ssmtp on Alpine Linux to Send Mail:

Update and Install SSMTP

$ apk update
$ apk add ssmtp

Configure SSMTP

$ cat > /etc/ssmtp/ssmtp.conf << EOF
root=postmaster
mailhub=mail.domain.com:25
hostname=`hostname`
FromLineOverride=YES
EOF
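If your mailhub requires authentication or STARTTLS (not the case in this post, but a common setup), ssmtp supports the AuthUser, AuthPass and UseSTARTTLS directives; the values below are placeholders:

$ cat >> /etc/ssmtp/ssmtp.conf << EOF
AuthUser=smtp-user@domain.com
AuthPass=your-smtp-password
UseSTARTTLS=YES
EOF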

Create the Mail Content

$ cat > mail.txt << EOF
To: recipient@domain.com
From: sender@domain.com
Subject: Mail with SSMTP

Hello, this is a test mail.
EOF

Testing Mail Delivery

$ ssmtp recipient@domain.com < mail.txt
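Since ssmtp provides a sendmail-compatible interface, you can also list more than one recipient on the command line (the second address below is just an example):

$ ssmtp recipient@domain.com another-recipient@domain.com < mail.txt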


Backup and Restore Multiple Collections From a Database With MongoDB

From a previous post we've set up a MongoDB cluster, and in this post we will go through the steps of backing up a database and restoring it to another MongoDB cluster.

mLab offers a free shared hosted MongoDB service with a 500MB limit, which I will be using to restore the data from my own hosted cluster to the free mLab service.

Create the MongoDB Backup

First we will need to create our backup path and then back up our database; in my case, I am backing up my rocketchat database:

$ mkdir -p /opt/backups/mongodb
$ mongodump --host mongodb.example.com --port 27017 -u <mongouser> --authenticationDatabase <authdb> --db rocketchat --out /opt/backups/mongodb/

Change into the backup directory:

$ cd /opt/backups/mongodb/rocketchat/

You will find the .bson and .metadata.json files for each collection:

$ ls -l | awk '{print $9}' | head -9
custom_emoji.chunks.bson
custom_emoji.chunks.metadata.json
custom_emoji.files.bson
custom_emoji.files.metadata.json
instances.bson
instances.metadata.json
meteor_accounts_loginServiceConfiguration.bson
meteor_accounts_loginServiceConfiguration.metadata.json
...

Restore MongoDB Database

We will need to restore all the collections to our new MongoDB service. I have created a bash script (restore-mongodb.sh) that will restore each collection to our rocketchat database:

#!/usr/bin/env bash

mongo_user=<mongouser>
mongo_pass=<mongopass>

# loop over each bson dump in the current directory and restore it
# to a collection named after the file (strips the .bson extension)
for file in *.bson; do
  collection="${file%.bson}"
  mongorestore --host mymongoid.mlab.com --port 12345 -u "$mongo_user" -p "$mongo_pass" -d rocketchat -c "$collection" "$file"
  sleep 2
done

Change the permissions of your script to make it executable, then run it:

$ chmod +x restore-mongodb.sh
$ ./restore-mongodb.sh

2017-10-03T22:05:39.138+0200    checking for collection data in custom_emoji.chunks.bson
2017-10-03T22:05:39.159+0200    reading metadata for rocketchat.custom_emoji.chunks from custom_emoji.chunks.metadata.json
2017-10-03T22:05:39.211+0200    restoring rocketchat.custom_emoji.chunks from custom_emoji.chunks.bson
2017-10-03T22:05:39.900+0200    restoring indexes for collection rocketchat.custom_emoji.chunks from metadata
2017-10-03T22:05:39.922+0200    finished restoring rocketchat.custom_emoji.chunks (20 documents)
2017-10-03T22:05:39.922+0200    done
2017-10-03T22:05:42.188+0200    checking for collection data in custom_emoji.files.bson
2017-10-03T22:05:42.231+0200    reading metadata for rocketchat.custom_emoji.files from custom_emoji.files.metadata.json
2017-10-03T22:05:42.252+0200    restoring rocketchat.custom_emoji.files from custom_emoji.files.bson
2017-10-03T22:05:42.623+0200    restoring indexes for collection rocketchat.custom_emoji.files from metadata
2017-10-03T22:05:42.645+0200    finished restoring rocketchat.custom_emoji.files (20 documents)
2017-10-03T22:05:42.645+0200    done
...

Check Out the New MongoDB Database:

Once the restore is done, log on to your new MongoDB database and have a look at the collections in the database:

$ mongo mymongoid.mlab.com:12345/rocketchat -u <mongouser> -p
MongoDB shell version v3.4.7
Enter password:
connecting to: mongodb://mymongoid.mlab.com:12345/rocketchat
MongoDB server version: 3.4.9

rs-mymongoid:PRIMARY> show collections
_raix_push_app_tokens
_raix_push_notifications
custom_emoji.chunks
custom_emoji.files
instances
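To sanity-check the restore against the source cluster, you can also compare document counts per collection; a quick sketch using the mongo shell's --eval:

$ mongo mymongoid.mlab.com:12345/rocketchat -u <mongouser> -p --eval 'db.getCollectionNames().forEach(function(c) { print(c + ": " + db[c].count()); })'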


Creating a Nodejs Hostname App With Docker Stacks on Swarm

Create a Node.js application that responds to GET requests with its hostname.

Our Node.js application will sit behind an HAProxy load balancer. We mount docker.sock from the host into the HAProxy container, so as we scale our web application, the load balancer is aware of the changes and updates its backends accordingly.

Creating the Application:

Our nodejs application:

app.js
var http = require('http');
var os = require('os');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.end(`My Hostname: ${os.hostname()}\n`);
}).listen(8080);
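If you have Node.js installed locally, you can sanity-check the app before building the image (the hostname in the output is illustrative and will differ per machine):

$ node app.js &
$ curl http://localhost:8080/
My Hostname: my-laptop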

Our Dockerfile:

Dockerfile
FROM node:alpine
ADD app.js /app.js
CMD ["node", "/app.js"]

Build and push to your registry, or you could use my image on Docker Hub: hub.docker.com/r/rbekker87/node-containername

Build and Push
$ docker login
$ docker build -t <username>/<repo>:<tag> .
$ docker push  <username>/<repo>:<tag>

Creating the Compose file

Create the compose file that will define our services:

docker-compose.yml
version: '3'

services:
  node-app:
    image: rbekker87/node-containername
    networks:
      - nodenet
    environment:
      - SERVICE_PORTS=8080
    deploy:
      replicas: 20
      update_config:
        parallelism: 5
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s

  loadbalancer:
    image: dockercloud/haproxy:latest
    depends_on:
      - node-app
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - nodenet
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  nodenet:
    driver: overlay

Create the Stack:

Deploy the Stack by specifying the compose file and the name of our stack:

Deploy our Stack
$ docker stack deploy -c docker-compose.yml node

List the Services in the Stack:

List Services in our Stack
$ docker stack ls
NAME                SERVICES
node                2

List the Tasks in the Stack:

Tasks in our Stack
$ docker stack ps node
ID                  NAME                  IMAGE                                 NODE     DESIRED STATE       CURRENT STATE            ERROR               PORTS
l5ryfaedzzaq        node_loadbalancer.1   dockercloud/haproxy:latest            dsm-01   Running             Running 40 minutes ago
c8nrrcvek79h        node_node-app.5       rbekker87/node-containername:latest   dsm-01   Running             Running 40 minutes ago
dqii18b2q5nn        node_node-app.10      rbekker87/node-containername:latest   dsm-01   Running             Running 40 minutes ago
vkpw2rugy0ah        node_node-app.11      rbekker87/node-containername:latest   dsm-01   Running             Running 40 minutes ago
mm88nvnvy5lg        node_node-app.12      rbekker87/node-containername:latest   dsm-01   Running             Running 40 minutes ago
oyx8rfqc1xl2        node_node-app.16      rbekker87/node-containername:latest   dsm-01   Running             Running 41 minutes ago

Test out our Application

Test out the Service:

GET Requests
$ curl -XGET http://127.0.0.1/
My Hostname: a6e34246e73b

$ curl -XGET http://127.0.0.1/
My Hostname: 5de71278be38

$ curl -XGET http://127.0.0.1/
My Hostname: e0b7316fdd51

Scaling Out:

Scale our application out to 30 replicas:

Scaling Up
$ docker service scale node_node-app=30

Scale our application down to 10 replicas:

Scaling Down
$ docker service scale node_node-app=10
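To confirm the new replica count, list the service (docker service ls supports filtering by name):

$ docker service ls --filter name=node_node-app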

Cleanup

Remove the Stack:

Delete the Stack
$ docker stack rm node
Removing service node_loadbalancer
Removing service node_node-app
Removing network node_nodenet


Create a 3 Node Elasticsearch Stack With HAProxy on Docker Swarm

I tried out creating a 3-node Elasticsearch stack on Docker Swarm using docker-compose, sitting behind an HAProxy service.

Stack:

  • 1 x haproxy
  • 1 x elasticsearch master (haproxy won't send requests to this one)
  • 2 x elasticsearch master/data
  • 1 x esnet overlay network

Defining our Stack

First we will create our compose file, which we will call es-compose.yml:

version: '3'

services:
  es-master:
    image: rbekker87/elasticsearch:master-5.6-alpine
    networks:
      - esnet
    deploy:
      replicas: 1

  es-data-1:
    image: rbekker87/elasticsearch:master-5.6-alpine
    environment:
      - SERVICE_PORTS=9200
    networks:
      - esnet
    deploy:
      replicas: 2

  es-data-2:
    image: rbekker87/elasticsearch:master-5.6-alpine
    environment:
      - SERVICE_PORTS=9200
    networks:
      - esnet
    deploy:
      replicas: 2

  loadbalancer:
    image: dockercloud/haproxy:latest
    depends_on:
      - es-data-1
      - es-data-2
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 9200:80
    networks:
      - esnet
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  esnet:
    driver: overlay

The above compose file defines an overlay network, which we associate with all our services: three Elasticsearch services, plus an HAProxy service that exposes port 9200 on the host and maps it to container port 80, from where HAProxy forwards to the backend SERVICE_PORTS of each Elasticsearch service.

We have only defined SERVICE_PORTS=9200 on our es-data services, as I just want to proxy client connections to them.

Creating our Elasticsearch Stack

Now that we have our compose file ready, let’s create our stack using docker stack deploy:

Create the Stack
$ docker stack deploy -c es-compose.yml analytics

Creating network analytics_esnet
Creating service analytics_loadbalancer
Creating service analytics_es-master
Creating service analytics_es-data-1
Creating service analytics_es-data-2

Let’s have a look at our stack:

Docker Stack Status
$ docker stack ps analytics
ID                  NAME                       IMAGE                                       NODE                  DESIRED STATE       CURRENT STATE            ERROR               PORTS
4t3ukxl2kch3        analytics_loadbalancer.1   dockercloud/haproxy:latest                  scw-swarm-master-01   Running             Running 27 seconds ago
jgbxtgqkg9jp        analytics_es-data-2.1      rbekker87/elasticsearch:master-5.6-alpine   scw-swarm-master-01   Running             Running 33 seconds ago
x5cq6pm7u7mn        analytics_es-data-1.1      rbekker87/elasticsearch:master-5.6-alpine   scw-swarm-master-01   Running             Running 36 seconds ago
5v22w1hvtdvm        analytics_es-master.1      rbekker87/elasticsearch:master-5.6-alpine   scw-swarm-master-01   Running             Running 38 seconds ago

View the logs of our haproxy service:

HAProxy Service Logs
$ docker service logs -f analytics_loadbalancer
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:dockercloud/haproxy 1.6.7 is running outside Docker Cloud
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:Haproxy is running in SwarmMode, loading HAProxy definition through docker api
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:dockercloud/haproxy PID: 7
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:=> Add task: Initial start - Swarm Mode
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:=> Executing task: Initial start - Swarm Mode
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:==========BEGIN==========
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:Linked service: analytics_es-data-1, analytics_es-data-2, analytics_es-master
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:Linked container: analytics_es-data-1.1.u641c5bq5vkjklk8sb1scnnlc, analytics_es-data-2.1.ic9an6bzj6aejs0lx0vzfpia6, analytics_es-master.1.h4erlgwzit509p0zehzmozy3u
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:HAProxy configuration:
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | global
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   log 127.0.0.1 local0
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   log 127.0.0.1 local1 notice
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   log-send-hostname
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   maxconn 4096
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   pidfile /var/run/haproxy.pid
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   user haproxy
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   group haproxy
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   daemon
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats socket /var/run/haproxy.stats level admin
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   ssl-default-bind-options no-sslv3
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | defaults
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   balance leastconn
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   log global
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   mode http
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   option redispatch
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   option httplog
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   option dontlognull
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   option forwardfor
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout connect 5000
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout client 50000
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout server 50000
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | listen stats
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   bind :1936
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   mode http
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats enable
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout connect 10s
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout client 1m
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout server 1m
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats hide-version
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats realm Haproxy\ Statistics
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats uri /
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats auth stats:stats
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | frontend default_port_80
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   bind :80
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   reqadd X-Forwarded-Proto:\ http
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   maxconn 4096
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   default_backend default_service
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | backend default_service
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   server analytics_es-data-1.1.u641c5bq5vkjklk8sb1scnnlc 10.0.7.5:9200 check inter 2000 rise 2 fall 3
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   server analytics_es-data-2.1.ic9an6bzj6aejs0lx0vzfpia6 10.0.7.7:9200 check inter 2000 rise 2 fall 3
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:Launching HAProxy
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:HAProxy has been launched(PID: 10)
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:===========END===========

Testing Elasticsearch:

Do a GET request against HAProxy's exposed port, 9200:

Test Elasticsearch on port 9200
$ curl -XGET http://127.0.0.1:9200
{
  "name" : "5306a0c2ee24",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "FUJmMekFQVq6zXofPCin2A",
  "version" : {
    "number" : "5.6.0",
    "build_hash" : "781a835",
    "build_date" : "2017-09-07T03:09:58.087Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

Have a look at the /_cat/nodes API:

Get the Node Info
$  curl -XGET http://127.0.0.1:9200/_cat/nodes?v
ip       heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.0.7.6           28          84  14    3.09    2.28     1.49 mdi       -      56c1b0aebc5f
10.0.7.2           27          84  15    3.09    2.28     1.49 mdi       *      572c68bca904
10.0.7.4           29          84  15    3.09    2.28     1.49 mdi       -      5306a0c2ee24
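Besides /_cat/nodes, other _cat APIs give a quick view of cluster state; for example, cluster health and indices:

$ curl -XGET http://127.0.0.1:9200/_cat/health?v
$ curl -XGET http://127.0.0.1:9200/_cat/indices?v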

Simple Program With C Language on Linux

Today the idea popped up of writing a simple "Hello World" application in the C programming language, as I just wanted to see how it works.

Requirements:

You will need the gcc package to compile the program:

RHEL
$ yum install gcc -y
Debian
$ apt install gcc -y

Writing our first Program:

We will create an app that just prints out a statically defined value:

Create any file with a .c extension, in my case it will be app.c:

app.c
#include <stdio.h>

int main(){
    printf("Hello, World\n");
    return 0;
}

Now compile app.c with gcc, specifying the output name of your app with -o <app-name>:

Compile app.c
$ gcc -o app app.c

Testing our App:

You will see that there is an executable file with the name that you specified as the output:

Run the App
$ ./app
Hello, World
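gcc can also warn about common mistakes at compile time; enabling warnings (and optionally optimization) is standard practice:

$ gcc -Wall -Wextra -O2 -o app app.c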

Really basic, but quite cool.

Using the AWS CLI Tools to Grab CloudWatch Metrics for Elasticsearch

Using the AWS CLI Tools to get CloudWatch Metrics for Elasticsearch.

Elasticsearch:

List the JVM Memory Pressure Metric:

$ aws cloudwatch list-metrics --namespace AWS/ES --metric-name JVMMemoryPressure
{
    "Metrics": [
        {
            "Namespace": "AWS/ES",
            "Dimensions": [
                {
                    "Name": "DomainName",
                    "Value": "elasticsearch-cluster"
                },
                {
                    "Name": "ClientId",
                    "Value": "123456789012"
                }
            ],
            "MetricName": "JVMMemoryPressure"
        }
    ]
}

Metric: JVMMemoryPressure

Getting Metrics for JVMMemoryPressure, every 10 Minutes, for the Maximum Statistic:

$ aws cloudwatch get-metric-statistics --namespace AWS/ES --dimensions Name=DomainName,Value=elasticsearch-cluster Name=ClientId,Value=123456789012 --metric-name JVMMemoryPressure --start-time 2017-09-08T04:00:00 --end-time 2017-09-08T05:00:00 --period 600 --statistics Maximum
{
    "Datapoints": [
        {
            "Timestamp": "2017-09-08T04:40:00Z",
            "Maximum": 58.7,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:00:00Z",
            "Maximum": 58.5,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:30:00Z",
            "Maximum": 58.7,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:20:00Z",
            "Maximum": 58.5,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:50:00Z",
            "Maximum": 58.7,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:10:00Z",
            "Maximum": 58.5,
            "Unit": "Percent"
        }
    ],
    "Label": "JVMMemoryPressure"
}

Metric: WriteIOPS

Getting Metrics for WriteIOPS, every 10 Minutes, for the Maximum Statistic:

$ aws cloudwatch get-metric-statistics --namespace AWS/ES --dimensions Name=DomainName,Value=elasticsearch-cluster Name=ClientId,Value=123456789012 --metric-name WriteIOPS --start-time 2017-09-08T04:00:00 --end-time 2017-09-08T05:00:00 --period 600 --statistics Maximum
{
    "Datapoints": [
        {
            "Timestamp": "2017-09-08T04:30:00Z",
            "Maximum": 0.5266666666666666,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:00:00Z",
            "Maximum": 0.0,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:40:00Z",
            "Maximum": 0.09666666666666666,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:10:00Z",
            "Maximum": 0.0,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:50:00Z",
            "Maximum": 0.07,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:20:00Z",
            "Maximum": 0.0,
            "Unit": "Count/Second"
        }
    ],
    "Label": "WriteIOPS"
}

Metric: FreeStorageSpace

Getting Metrics for FreeStorageSpace in Megabytes, using the Minimum Statistic:

$ aws cloudwatch get-metric-statistics --namespace AWS/ES --dimensions Name=DomainName,Value=elasticsearch-cluster Name=ClientId,Value=123456789012 --metric-name FreeStorageSpace --start-time 2017-09-11T05:00:00 --end-time 2017-09-11T06:00:00 --period 600 --statistics Minimum --unit Megabytes
{
    "Datapoints": [
        {
            "Timestamp": "2017-09-11T05:50:00Z",
            "Minimum": 25510.438,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:10:00Z",
            "Minimum": 25573.032,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:20:00Z",
            "Minimum": 25554.051,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:30:00Z",
            "Minimum": 25540.957,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:40:00Z",
            "Minimum": 25525.473,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:00:00Z",
            "Minimum": 25584.383,
            "Unit": "Megabytes"
        }
    ],
    "Label": "FreeStorageSpace"
}
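The CLI can also shape this output for you; for example, the built-in --query option (a JMESPath expression) can sort the datapoints by timestamp and print just the values as a table, a small sketch reusing the same arguments:

$ aws cloudwatch get-metric-statistics --namespace AWS/ES --dimensions Name=DomainName,Value=elasticsearch-cluster Name=ClientId,Value=123456789012 --metric-name FreeStorageSpace --start-time 2017-09-11T05:00:00 --end-time 2017-09-11T06:00:00 --period 600 --statistics Minimum --unit Megabytes --query 'sort_by(Datapoints,&Timestamp)[].[Timestamp,Minimum]' --output table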

Using the Python Sys Library to Read Data From Stdin

Using Python’s sys library to read data from stdin.

In this basic example we will read our comma-delimited input, strip the newline, split the words into a list, and print them out.

Python: Read Data from Standard Input

import sys

mylist = []

# read everything from stdin, strip the trailing newline,
# then split on ", " to get the individual words
data_input = sys.stdin.read()
destroy_newline = data_input.replace('\n', '')
mylist = destroy_newline.split(', ')

print("Stripping each word and adding it to 'mylist'")
print("Found: {} words in 'mylist'".format(len(mylist)))
for x in mylist:
    print("Word: {}".format(x))

We will echo three words and pipe them into our Python script:

$ echo "one, two, three" | python basic-stdin.py
Stripping each word and adding it to 'mylist'
Found: 3 words in 'mylist'
Word: one
Word: two
Word: three

Setup a Postfix Relay Server That Uses SES to Relay Outbound Mail

We will set up a Postfix relay server which our clients will use to send out mail. The Postfix server will relay outbound mail through Amazon's SES service, which we will configure as a relay host in Postfix.

Setup EC2 Instance to Relay through AWS SES:

Install Postfix and SASL:

$ apt install postfix mailutils libsasl2-2 sasl2-bin libsasl2-modules ca-certificates -y
$ update-ca-certificates

The section we need to configure in /etc/postfix/main.cf:

relayhost = [email-smtp.eu-west-1.amazonaws.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
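Instead of editing main.cf by hand, the same settings can be applied with postconf -e, which is handy for scripting the setup:

$ postconf -e 'relayhost = [email-smtp.eu-west-1.amazonaws.com]:587'
$ postconf -e 'smtp_use_tls = yes'
$ postconf -e 'smtp_sasl_auth_enable = yes'
$ postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'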

Populate SASL Passwd:

$ cat /etc/postfix/sasl_passwd
[email-smtp.eu-west-1.amazonaws.com]:587    AKIAABCDEFGHIJKLM:SomeRandomSecretString

Postmap the changes:

$ postmap /etc/postfix/sasl_passwd

Restart Postfix:

$ sudo /etc/init.d/postfix restart

Test the Mail Flow:

$ echo test | mail -r ruan@ruanbekker.com -s 'ses test mail ' ruan@ruanbekker.com && tail -f /var/log/mail.log

Jul 18 11:29:06 ip-10-1-4-250 postfix/smtp[5056]: 9FDCB469AA: to=<ruan@ruanbekker.com>, relay=email-smtp.eu-west-1.amazonaws.com[52.10.20.30]:587, delay=0.29, delays=0.02/0.03/0.12/0.13, dsn=2.0.0, status=sent (250 Ok 0234567d557572f2-76f56252-0a00-4d94-af87-38bd213914d2-000000)
Jul 18 11:29:06 ip-10-1-4-250 postfix/qmgr[4392]: 9FDCB469AA: removed

If your output looks more or less like the snippet from above, your mail should be working fine.

Nginx Reverse Proxy for Elasticsearch and Kibana 5 on AWS

As of today, there is currently no VPC support for Amazon's Elasticsearch Service.

So allowing private network traffic to Elasticsearch is impossible straight out of the box, as Amazon's Elasticsearch Service only sees public internet traffic.

We will set up two configs, one for Kibana and one for Elasticsearch, each one having its own FQDN:

  • Kibana: http://kibana.domain.com
  • Elasticsearch: http://elasticsearch.domain.com

Workaround:

There’s a couple of workarounds, which includes:

  • Nginx Reverse Proxy
  • NAT Gateway
  • Allow IAM Users/Roles

Today we will tackle the Nginx Reverse Proxy Route.

The benefit of this is that you can associate an EIP with the Nginx EC2 instance and whitelist that EIP with Elasticsearch, so the only traffic accepted is traffic coming from the Nginx instance. We will also apply an additional layer of security: HTTP Basic Authentication, as well as authorizing network sources at the Security Group level.

Installing Nginx:

In this case I am using Ubuntu 16.04, so we will need to install nginx and apache2-utils for creating the Basic HTTP Auth accounts.

$ apt update && apt upgrade -y
$ apt install nginx apache2-utils -y

Configure Nginx:

Our main config: /etc/nginx/nginx.conf:

/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {

  # Basic Settings
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  server_names_hash_bucket_size 128;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # Logging Settings
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  # Gzip Settings
  gzip on;
  gzip_disable "msie6";

  # Elasticsearch and Kibana Configs
  include /etc/nginx/conf.d/elasticsearch.conf;
  include /etc/nginx/conf.d/kibana.conf;
}

Our /etc/nginx/conf.d/elasticsearch.conf configuration:

/etc/nginx/conf.d/elasticsearch.conf
server {

  listen 80;
  server_name elasticsearch.domain.com;

  # error logging
  error_log /var/log/nginx/elasticsearch_error.log;

  # authentication: elasticsearch
  auth_basic "Elasticsearch Auth";
  auth_basic_user_file /etc/nginx/.secrets_elasticsearch;

  location / {

    proxy_http_version 1.1;
    proxy_set_header Host https://search-elasticsearch-name.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP <ELASTIC-IP>;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search-elasticsearch-name.eu-west-1.es.amazonaws.com/;
    proxy_redirect https://search-elasticsearch-name.eu-west-1.es.amazonaws.com/ http://<ELASTIC-IP>/;

  }

  # ELB Health Checks
  location /status {
    root /usr/share/nginx/html/;
  }

}

Our /etc/nginx/conf.d/kibana.conf configuration:

/etc/nginx/conf.d/kibana.conf
server {

  listen 80;
  server_name kibana.domain.com;

  # error logging
  error_log /var/log/nginx/kibana_error.log;

  # authentication: kibana
  auth_basic "Kibana Auth";
  auth_basic_user_file /etc/nginx/.secrets_kibana;

  location / {

    proxy_http_version 1.1;
    proxy_set_header Host https://search.elasticsearch-name.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP <ELASTIC-IP>;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search.elasticsearch-name.eu-west-1.es.amazonaws.com/_plugin/kibana/;
    proxy_redirect https://search.elasticsearch-name.eu-west-1.es.amazonaws.com/_plugin/kibana/ http://<ELASTIC-IP>/kibana/;

  }

  location ~ (/app/kibana|/app/timelion|/bundles|/es_admin|/plugins|/api|/ui|/elasticsearch) {
    proxy_pass              https://search.elasticsearch-name.eu-west-1.es.amazonaws.com;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;
    proxy_set_header        X-Forwarded-Host $http_host;
    proxy_set_header        Authorization "";
  }
}

Once you have replaced the Elasticsearch endpoint and your EIP values, we can go ahead and create the auth accounts.

Create User Accounts for HTTP Basic Auth

Create the two accounts for authentication on Kibana and Elasticsearch:

$ htpasswd -c /etc/nginx/.secrets_elasticsearch elasticsearch-admin
$ htpasswd -c /etc/nginx/.secrets_kibana kibana-admin
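Note that the -c flag creates (and overwrites) the password file, so only use it for the first account; to add another user to an existing file, omit it (the username below is just an example):

$ htpasswd /etc/nginx/.secrets_kibana another-user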

Restart Nginx:

Restart and enable Nginx on boot:

$ systemctl enable nginx
$ systemctl restart nginx

Once your Nginx Service is running, you should be able to access Kibana and Elasticsearch using the credentials that you created.
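A quick way to test from a whitelisted source, using the hostnames and the account created above (curl will prompt for the password):

$ curl -u elasticsearch-admin http://elasticsearch.domain.com/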
