Ruan Bekker's Blog

From a Curious mind to Posts on Github

Creating a Nodejs Hostname App With Docker Stacks on Swarm

Create a Nodejs Application that responds to GET requests with its Hostname.

Our Nodejs application will sit behind a HAProxy load balancer. We mount the docker.sock from the host into the HAProxy container so that, as we scale our web application, the load balancer becomes aware of the changes and updates its backends accordingly.

Creating the Application:

Our nodejs application:

app.js
var http = require('http');
var os = require('os');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.end(`My Hostname: ${os.hostname()}\n`);
}).listen(8080);
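
If you want to sanity-check the app before containerizing it, you can run it directly with node and curl it. The hostname in the response will be your own machine's; the value below is just a placeholder:

Run Locally
$ node app.js &
$ curl http://localhost:8080/
My Hostname: my-machine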

Our Dockerfile:

Dockerfile
FROM node:alpine
ADD app.js /app.js
CMD ["node", "/app.js"]

Build and push to your registry, or you could use my image on Docker Hub: hub.docker.com/r/rbekker87/node-containername

Build and Push
$ docker login
$ docker build -t <username>/<repo>:<tag> .
$ docker push  <username>/<repo>:<tag>

Creating the Compose file

Create the compose file that will define our services:

docker-compose.yml
version: '3'

services:
  node-app:
    image: rbekker87/node-containername
    networks:
      - nodenet
    environment:
      - SERVICE_PORTS=8080
    deploy:
      replicas: 20
      update_config:
        parallelism: 5
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s

  loadbalancer:
    image: dockercloud/haproxy:latest
    depends_on:
      - node-app
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - nodenet
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  nodenet:
    driver: overlay
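
Before deploying, the host needs to be part of a swarm as a manager. If your swarm is not initialized yet, you can bootstrap a single node swarm first (the advertise address below is an example, use your own host's IP):

Initialize a Swarm
$ docker swarm init --advertise-addr 192.168.10.10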

Create the Stack:

Deploy the Stack by specifying the compose file and name of our stack:

Deploy our Stack
$ docker stack deploy -c docker-compose.yml node

List the Services in the Stack:

List Services in our Stack
$ docker stack ls
NAME                SERVICES
node                2
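
You can also verify that all the desired replicas are up with docker service ls. The output below is illustrative, the IDs will differ:

List Services
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE
xxxxxxxxxxxx        node_node-app       replicated          20/20               rbekker87/node-containername:latest
xxxxxxxxxxxx        node_loadbalancer   replicated          1/1                 dockercloud/haproxy:latest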

List the Tasks in the Stack:

Tasks in our Stack
$ docker stack ps node
ID                  NAME                  IMAGE                                 NODE     DESIRED STATE       CURRENT STATE            ERROR               PORTS
l5ryfaedzzaq        node_loadbalancer.1   dockercloud/haproxy:latest            dsm-01   Running             Running 40 minutes ago
c8nrrcvek79h        node_node-app.5       rbekker87/node-containername:latest   dsm-01   Running             Running 40 minutes ago
dqii18b2q5nn        node_node-app.10      rbekker87/node-containername:latest   dsm-01   Running             Running 40 minutes ago
vkpw2rugy0ah        node_node-app.11      rbekker87/node-containername:latest   dsm-01   Running             Running 40 minutes ago
mm88nvnvy5lg        node_node-app.12      rbekker87/node-containername:latest   dsm-01   Running             Running 40 minutes ago
oyx8rfqc1xl2        node_node-app.16      rbekker87/node-containername:latest   dsm-01   Running             Running 41 minutes ago

Test out our Application

Test out the Service:

GET Requests
$ curl -XGET http://127.0.0.1/
My Hostname: a6e34246e73b

$ curl -XGET http://127.0.0.1/
My Hostname: 5de71278be38

$ curl -XGET http://127.0.0.1/
My Hostname: e0b7316fdd51
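
To see the load balancing across more tasks, you can fire off a handful of requests in a loop; the hostnames you get back will differ from the above:

Looped GET Requests
$ for i in $(seq 1 5); do curl -s http://127.0.0.1/; done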

Scaling Out:

Scale our Application out to 30 replicas:

Scaling Up
$ docker service scale node_node-app=30

Scale our Application down to 10 replicas:

Scaling Down
$ docker service scale node_node-app=10
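
To confirm the service settled on the desired replica count, you can list it again (output is illustrative):

Verify the Replica Count
$ docker service ls --filter name=node_node-app
ID                  NAME                MODE                REPLICAS            IMAGE
xxxxxxxxxxxx        node_node-app       replicated          10/10               rbekker87/node-containername:latest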

Cleanup

Remove the Stack:

Delete the Stack
$ docker stack rm node
Removing service node_loadbalancer
Removing service node_node-app
Removing network node_nodenet

Create a 3 Node Elasticsearch Stack With HAProxy on Docker Swarm

Trying out a 3 node Elasticsearch stack on Docker Swarm using docker-compose, which sits behind a HAProxy service.

Stack:

  • 1 x haproxy
  • 1 x elasticsearch master (haproxy won't send requests to this one)
  • 2 x elasticsearch master/data
  • 1 x esnet overlay network

Defining our Stack

First we will create our compose file, which we will call es-compose.yml:

version: '3'

services:
  es-master:
    image: rbekker87/elasticsearch:master-5.6-alpine
    networks:
      - esnet
    deploy:
      replicas: 1

  es-data-1:
    image: rbekker87/elasticsearch:master-5.6-alpine
    environment:
     - SERVICE_PORTS=9200
    networks:
      - esnet
    deploy:
      replicas: 2

  es-data-2:
    image: rbekker87/elasticsearch:master-5.6-alpine
    environment:
     - SERVICE_PORTS=9200
    networks:
      - esnet
    deploy:
      replicas: 2

  loadbalancer:
    image: dockercloud/haproxy:latest
    depends_on:
      - es-data-1
      - es-data-2
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 9200:80
    networks:
      - esnet
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  esnet:
    driver: overlay

The above compose file defines an overlay network which we associate with all our services, three Elasticsearch services, and a HAProxy service which exposes port 9200 on the host. HAProxy listens on container port 80 and forwards traffic to the backend SERVICE_PORTS of each Elasticsearch service.

We have only defined SERVICE_PORTS=9200 on our es-data services, as we only want to proxy client connections to them.

Creating our Elasticsearch Stack

Now that we have our compose file ready, let’s create our stack using docker stack deploy:

Create the Stack
$ docker stack deploy -c es-compose.yml analytics

Creating network analytics_esnet
Creating service analytics_loadbalancer
Creating service analytics_es-master
Creating service analytics_es-data-1
Creating service analytics_es-data-2

Let’s have a look at our stack:

Docker Stack Status
$ docker stack ps analytics
ID                  NAME                       IMAGE                                       NODE                  DESIRED STATE       CURRENT STATE            ERROR               PORTS
4t3ukxl2kch3        analytics_loadbalancer.1   dockercloud/haproxy:latest                  scw-swarm-master-01   Running             Running 27 seconds ago
jgbxtgqkg9jp        analytics_es-data-2.1      rbekker87/elasticsearch:master-5.6-alpine   scw-swarm-master-01   Running             Running 33 seconds ago
x5cq6pm7u7mn        analytics_es-data-1.1      rbekker87/elasticsearch:master-5.6-alpine   scw-swarm-master-01   Running             Running 36 seconds ago
5v22w1hvtdvm        analytics_es-master.1      rbekker87/elasticsearch:master-5.6-alpine   scw-swarm-master-01   Running             Running 38 seconds ago

View the logs of our haproxy service:

HAProxy Service Logs
$ docker service logs -f analytics_loadbalancer
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:dockercloud/haproxy 1.6.7 is running outside Docker Cloud
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:Haproxy is running in SwarmMode, loading HAProxy definition through docker api
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:dockercloud/haproxy PID: 7
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:=> Add task: Initial start - Swarm Mode
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:=> Executing task: Initial start - Swarm Mode
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:==========BEGIN==========
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:Linked service: analytics_es-data-1, analytics_es-data-2, analytics_es-master
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:Linked container: analytics_es-data-1.1.u641c5bq5vkjklk8sb1scnnlc, analytics_es-data-2.1.ic9an6bzj6aejs0lx0vzfpia6, analytics_es-master.1.h4erlgwzit509p0zehzmozy3u
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:HAProxy configuration:
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | global
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   log 127.0.0.1 local0
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   log 127.0.0.1 local1 notice
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   log-send-hostname
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   maxconn 4096
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   pidfile /var/run/haproxy.pid
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   user haproxy
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   group haproxy
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   daemon
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats socket /var/run/haproxy.stats level admin
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   ssl-default-bind-options no-sslv3
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:DHE-DSS-AES128-SHA:DES-CBC3-SHA
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | defaults
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   balance leastconn
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   log global
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   mode http
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   option redispatch
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   option httplog
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   option dontlognull
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   option forwardfor
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout connect 5000
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout client 50000
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout server 50000
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | listen stats
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   bind :1936
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   mode http
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats enable
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout connect 10s
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout client 1m
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   timeout server 1m
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats hide-version
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats realm Haproxy\ Statistics
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats uri /
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   stats auth stats:stats
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | frontend default_port_80
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   bind :80
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   reqadd X-Forwarded-Proto:\ http
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   maxconn 4096
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   default_backend default_service
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | backend default_service
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   server analytics_es-data-1.1.u641c5bq5vkjklk8sb1scnnlc 10.0.7.5:9200 check inter 2000 rise 2 fall 3
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    |   server analytics_es-data-2.1.ic9an6bzj6aejs0lx0vzfpia6 10.0.7.7:9200 check inter 2000 rise 2 fall 3
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:Launching HAProxy
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:HAProxy has been launched(PID: 10)
analytics_loadbalancer.1.lcpgiz0ooeas@scw-swarm-master-01    | INFO:haproxy:===========END===========

Testing Elasticsearch:

Do a GET request against HAProxy’s exposed port, 9200:

Test Elasticsearch on port 9200
$ curl -XGET http://127.0.0.1:9200
{
  "name" : "5306a0c2ee24",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "FUJmMekFQVq6zXofPCin2A",
  "version" : {
    "number" : "5.6.0",
    "build_hash" : "781a835",
    "build_date" : "2017-09-07T03:09:58.087Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}

Have a look at the /_cat/nodes API:

Get the Node Info
$ curl -XGET http://127.0.0.1:9200/_cat/nodes?v
ip       heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.0.7.6           28          84  14    3.09    2.28     1.49 mdi       -      56c1b0aebc5f
10.0.7.2           27          84  15    3.09    2.28     1.49 mdi       *      572c68bca904
10.0.7.4           29          84  15    3.09    2.28     1.49 mdi       -      5306a0c2ee24
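
As a final check, the _cluster/health API should report a green cluster with 3 nodes. The output below is trimmed and illustrative of what you should expect for this stack:

Cluster Health
$ curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty'
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  ...
}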

Simple Program With C Language on Linux

Today the idea popped up to write a simple “Hello World” application using the C programming language, as I just wanted to see how it works.

Requirements:

You will need the gcc package to compile the program:

RHEL
$ yum install gcc -y
Debian
$ apt install gcc -y

Writing our first Program:

We will create an app that just prints out a statically defined value:

Create any file with a .c extension, in my case it will be app.c:

app.c
#include <stdio.h>

int main(){
    printf("Hello, World\n");
    return 0;
}

Now compile app.c with gcc and specify the output path of your app with -o <app-name>:

Compile app.c
$ gcc -o app app.c

Testing our App:

You will see that there is an executable file with the name that you specified as the output:

Run the App
$ ./app
Hello, World
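
You can also confirm that gcc produced a native binary with the file command; the exact output depends on your platform:

Inspect the Binary
$ file app
app: ELF 64-bit LSB executable, x86-64, dynamically linked ...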

Really basic, but quite cool.

Using the AWS CLI Tools to Grab CloudWatch Metrics for Elasticsearch

Using the AWS CLI Tools to get CloudWatch Metrics for Elasticsearch.

Elasticsearch:

List the JVM Memory Pressure Metric:

$ aws cloudwatch list-metrics --namespace AWS/ES --metric-name JVMMemoryPressure
{
    "Metrics": [
        {
            "Namespace": "AWS/ES",
            "Dimensions": [
                {
                    "Name": "DomainName",
                    "Value": "elasticsearch-cluster"
                },
                {
                    "Name": "ClientId",
                    "Value": "123456789012"
                }
            ],
            "MetricName": "JVMMemoryPressure"
        }
    ]
}

Metric: JVMMemoryPressure

Getting metrics for JVMMemoryPressure, at 10 minute intervals, using the Maximum statistic:

$ aws cloudwatch get-metric-statistics --namespace AWS/ES --dimensions Name=DomainName,Value=elasticsearch-cluster Name=ClientId,Value=123456789012 --metric-name JVMMemoryPressure --start-time 2017-09-08T04:00:00 --end-time 2017-09-08T05:00:00 --period 600 --statistics Maximum
{
    "Datapoints": [
        {
            "Timestamp": "2017-09-08T04:40:00Z",
            "Maximum": 58.7,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:00:00Z",
            "Maximum": 58.5,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:30:00Z",
            "Maximum": 58.7,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:20:00Z",
            "Maximum": 58.5,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:50:00Z",
            "Maximum": 58.7,
            "Unit": "Percent"
        },
        {
            "Timestamp": "2017-09-08T04:10:00Z",
            "Maximum": 58.5,
            "Unit": "Percent"
        }
    ],
    "Label": "JVMMemoryPressure"
}
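
If you only care about the datapoint values, you can let the CLI filter the JSON for you with the global --query option (a JMESPath expression), instead of parsing the output yourself:

Filter the Datapoints with --query
$ aws cloudwatch get-metric-statistics --namespace AWS/ES --dimensions Name=DomainName,Value=elasticsearch-cluster Name=ClientId,Value=123456789012 --metric-name JVMMemoryPressure --start-time 2017-09-08T04:00:00 --end-time 2017-09-08T05:00:00 --period 600 --statistics Maximum --query 'Datapoints[].[Timestamp,Maximum]' --output table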

Metric: WriteIOPS

Getting metrics for WriteIOPS, at 10 minute intervals, using the Maximum statistic:

$ aws cloudwatch get-metric-statistics --namespace AWS/ES --dimensions Name=DomainName,Value=elasticsearch-cluster Name=ClientId,Value=123456789012 --metric-name WriteIOPS --start-time 2017-09-08T04:00:00 --end-time 2017-09-08T05:00:00 --period 600 --statistics Maximum
{
    "Datapoints": [
        {
            "Timestamp": "2017-09-08T04:30:00Z",
            "Maximum": 0.5266666666666666,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:00:00Z",
            "Maximum": 0.0,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:40:00Z",
            "Maximum": 0.09666666666666666,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:10:00Z",
            "Maximum": 0.0,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:50:00Z",
            "Maximum": 0.07,
            "Unit": "Count/Second"
        },
        {
            "Timestamp": "2017-09-08T04:20:00Z",
            "Maximum": 0.0,
            "Unit": "Count/Second"
        }
    ],
    "Label": "WriteIOPS"
}

Metric: FreeStorageSpace

Getting metrics for FreeStorageSpace in Megabytes, using the Minimum statistic:

$ aws cloudwatch get-metric-statistics --namespace AWS/ES --dimensions Name=DomainName,Value=elasticsearch-cluster Name=ClientId,Value=123456789012 --metric-name FreeStorageSpace --start-time 2017-09-11T05:00:00 --end-time 2017-09-11T06:00:00 --period 600 --statistics Minimum --unit Megabytes
{
    "Datapoints": [
        {
            "Timestamp": "2017-09-11T05:50:00Z",
            "Minimum": 25510.438,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:10:00Z",
            "Minimum": 25573.032,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:20:00Z",
            "Minimum": 25554.051,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:30:00Z",
            "Minimum": 25540.957,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:40:00Z",
            "Minimum": 25525.473,
            "Unit": "Megabytes"
        },
        {
            "Timestamp": "2017-09-11T05:00:00Z",
            "Minimum": 25584.383,
            "Unit": "Megabytes"
        }
    ],
    "Label": "FreeStorageSpace"
}

Using the Python Sys Library to Read Data From Stdin

Using Python’s sys library to read data from stdin.

In this basic example we will read data from stdin, strip the newline, split the input on the comma delimiter, add the words to a list, and print them out.

Python: Read Data from Standard Input

import sys

# read everything from stdin as one string
data_input = sys.stdin.read()

# drop the trailing newline, then split on ", " to build our list of words
destroy_newline = data_input.replace('\n', '')
mylist = destroy_newline.split(', ')

print("Stripping each word and adding it to 'mylist'")
print("Found: {} words in 'mylist'".format(len(mylist)))
for x in mylist:
    print("Word: {}".format(x))

We will echo three words and pipe them into our python script:

$ echo "one, two, three" | python basic-stdin.py
Stripping each word and adding it to 'mylist'
Found: 3 words in 'mylist'
Word: one
Word: two
Word: three

Setup a Postfix Relay Server That Uses SES to Relay Outbound Mail

We will set up a Postfix relay server which our clients will use to send outbound mail. The Postfix server will use Amazon's SES service to send out mail, which we will configure as a relayhost in Postfix.

Setup EC2 Instance to Relay through AWS SES:

Install Postfix and SASL:

$ apt install postfix mailutils libsasl2-2 sasl2-bin libsasl2-modules -y

The section we need to configure in /etc/postfix/main.cf:

relayhost = [email-smtp.eu-west-1.amazonaws.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt

Populate SASL Passwd:

$ cat /etc/postfix/sasl_passwd
[email-smtp.eu-west-1.amazonaws.com]:587    AKIAABCDEFGHIJKLM:SomeRandomSecretString

Postmap the changes:

$ postmap /etc/postfix/sasl_passwd
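
As these files contain your SES credentials in plain text, it's good practice to lock the permissions down so that only root can read them:

Secure the Credential Files
$ chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db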

Restart Postfix:

$ sudo /etc/init.d/postfix restart

Test the Mail Flow:

$ echo test | mail -r ruan@ruanbekker.com -s 'ses test mail ' ruan@ruanbekker.com && tail -f /var/log/mail.log

Jul 18 11:29:06 ip-10-1-4-250 postfix/smtp[5056]: 9FDCB469AA: to=<ruan@ruanbekker.com>, relay=email-smtp.eu-west-1.amazonaws.com[52.10.20.30]:587, delay=0.29, delays=0.02/0.03/0.12/0.13, dsn=2.0.0, status=sent (250 Ok 0234567d557572f2-76f56252-0a00-4d94-af87-38bd213914d2-000000)
Jul 18 11:29:06 ip-10-1-4-250 postfix/qmgr[4392]: 9FDCB469AA: removed

If your output looks more or less like the snippet from above, your mail should be working fine.

Nginx Reverse Proxy for Elasticsearch and Kibana 5 on AWS

As of today, there is no VPC support for Amazon's Elasticsearch Service.

So scenarios where you would like to allow only private network traffic to Elasticsearch are impossible straight out of the box, as Amazon's Elasticsearch Service only sees public internet traffic.

We will setup 2 configs, one for Kibana and one for Elasticsearch, each one having its own FQDN:

  • Kibana: http://kibana.domain.com
  • Elasticsearch: http://elasticsearch.domain.com

Workaround:

There’s a couple of workarounds, which includes:

  • Nginx Reverse Proxy
  • NAT Gateway
  • Allow IAM Users/Roles

Today we will tackle the Nginx Reverse Proxy Route.

The benefit of this approach is that you can associate an EIP with the Nginx EC2 instance, then whitelist your EIP with Elasticsearch, so the only traffic accepted will be traffic coming from the Nginx instance. We will also apply an additional layer of security: in this case HTTP Basic Authentication, and we will also authorize network sources at the Security Group level.

Installing Nginx:

In this case I am using Ubuntu 16.04, so we will need to install nginx and apache2-utils for creating the Basic HTTP Auth accounts.

$ apt update && apt upgrade -y
$ apt install nginx apache2-utils -y

Configure Nginx:

Our main config: /etc/nginx/nginx.conf:

/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log;

events {
  worker_connections 1024;
}

http {

  # Basic Settings
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;
  server_names_hash_bucket_size 128;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # Logging Settings
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  # Gzip Settings
  gzip on;
  gzip_disable "msie6";

  # Elasticsearch and Kibana Configs
  include /etc/nginx/conf.d/elasticsearch.conf;
  include /etc/nginx/conf.d/kibana.conf;
}

Our /etc/nginx/conf.d/elasticsearch.conf configuration:

/etc/nginx/conf.d/elasticsearch.conf
server {

  listen 80;
  server_name elasticsearch.domain.com;

  # error logging
  error_log /var/log/nginx/elasticsearch_error.log;

  # authentication: elasticsearch
  auth_basic "Elasticsearch Auth";
  auth_basic_user_file /etc/nginx/.secrets_elasticsearch;

  location / {

    proxy_http_version 1.1;
    proxy_set_header Host search-elasticsearch-name.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP <ELASTIC-IP>;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search-elasticsearch-name.eu-west-1.es.amazonaws.com/;
    proxy_redirect https://search-elasticsearch-name.eu-west-1.es.amazonaws.com/ http://<ELASTIC-IP>/;

  }

  # ELB Health Checks
  location /status {
    root /usr/share/nginx/html/;
  }

}

Our /etc/nginx/conf.d/kibana.conf configuration:

/etc/nginx/conf.d/kibana.conf
server {

  listen 80;
  server_name kibana.domain.com;

  # error logging
  error_log /var/log/nginx/kibana_error.log;

  # authentication: kibana
  auth_basic "Kibana Auth";
  auth_basic_user_file /etc/nginx/.secrets_kibana;

  location / {

    proxy_http_version 1.1;
    proxy_set_header Host search.elasticsearch-name.eu-west-1.es.amazonaws.com;
    proxy_set_header X-Real-IP <ELASTIC-IP>;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header Proxy-Connection "Keep-Alive";
    proxy_set_header Authorization "";

    proxy_pass https://search.elasticsearch-name.eu-west-1.es.amazonaws.com/_plugin/kibana/;
    proxy_redirect https://search.elasticsearch-name.eu-west-1.es.amazonaws.com/_plugin/kibana/ http://<ELASTIC-IP>/kibana/;

  }

      location ~ (/app/kibana|/app/timelion|/bundles|/es_admin|/plugins|/api|/ui|/elasticsearch) {
         proxy_pass              https://search.elasticsearch-name.eu-west-1.es.amazonaws.com;
         proxy_set_header        Host $host;
         proxy_set_header        X-Real-IP $remote_addr;
         proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header        X-Forwarded-Proto $scheme;
         proxy_set_header        X-Forwarded-Host $http_host;
         proxy_set_header      Authorization  "";
    }
}

Once you have replaced the Elasticsearch endpoint and your EIP values, we can go ahead and create the auth accounts.

Create User Accounts for HTTP Basic Auth

Create the 2 accounts for authentication on kibana and elasticsearch:

$ htpasswd -c /etc/nginx/.secrets_elasticsearch elasticsearch-admin
$ htpasswd -c /etc/nginx/.secrets_kibana kibana-admin
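
Before restarting, it's worth validating the configuration so that a typo doesn't take Nginx down:

Validate the Nginx Config
$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful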

Restart Nginx:

Restart and enable Nginx on boot:

$ systemctl enable nginx
$ systemctl restart nginx

Once your Nginx Service is running, you should be able to access Kibana and Elasticsearch using the credentials that you created.

AWS: IAM S3 Policy for Cyberduck to Allow Listing Buckets and Access to One Bucket

When using Cyberduck to access S3 with an account that has restrictive policies, you may run into the error: Listing Directory: / failed.

If you have restrictive IAM policies in your account, this may be due to the fact that s3:ListAllMyBuckets is not allowed.

In this post we want to allow a user to list all buckets, so that Cyberduck can do the initial list after configuration / launch, and we would like to give the user access to their designated bucket.

Creating the IAM Policy:

We will create this IAM Policy and associate the policy to the user’s account:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1480515305000",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Sid": "Stmt1480515305002",
            "Effect": "Allow",
            "Action": [
                "s3:List*",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::allowed-bucket",
                "arn:aws:s3:::allowed-bucket/*"
            ]
        }
    ]
}
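
To attach the policy to the user, you can paste it into the IAM console, or push it from the CLI with put-user-policy. The user and policy names below are just examples, and policy.json is the document above saved locally:

Attach the Policy to the User
$ aws iam put-user-policy --user-name cyberduck-user --policy-name cyberduck-s3-access --policy-document file://policy.json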

So here we should be able to list the buckets:

$ aws --profile cyberduck s3 ls /
2017-06-08 08:27:01 allowed-bucket
2017-05-21 13:39:21 private-bucket
2016-12-21 08:23:45 confidential-bucket
2017-08-10 14:18:19 test-bucket
2016-08-03 12:38:29 datalake-bucket

We are able to list inside the allowed bucket, as well as Get, Put, etc.:

$ aws --profile cyberduck s3 ls allowed-bucket/
                           PRE data/

We are unable to list the contents of the other buckets, which is expected, as we did not allow them in the policy:

$ aws --profile cyberduck s3 ls confidential-bucket/

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

Using Python for Image Analysis With Amazon’s Rekognition Service

Amazon’s Rekognition Service, which falls under their Artificial Intelligence tier, makes it easy to add image analysis to your applications.

Today we will use Rekognition to analyze an image and detect labels, along with the confidence percentage for each label. We will be using the Python SDK to do this.

Getting a Random Image:

So, I got this drunk guy on the couch, which I thought we could use to analyze.

Image Used: http://imgur.com/a/CHnSu
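
The script below reads the image from S3, so upload it to the bucket and key that the code references first (the bucket is assumed to already exist in your account):

Upload the Image to S3
$ aws s3 cp image-02.jpg s3://rekognition-bucket/images/image-02.jpg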

Our Python Code:

Our code uses boto3 to call Amazon Web Services' Rekognition service, detect labels on the image, and print out the values.

Note that I am not specifying any credentials, as my credentials are configured in my local credential provider, where boto3 will pick them up.

import boto3

BUCKET = "rekognition-bucket"
KEY = "images/image-02.jpg"

def detect_labels(bucket, key, max_labels=10, min_confidence=90, region="eu-west-1"):
    # call the Rekognition DetectLabels API on an image stored in S3
    rekognition = boto3.client("rekognition", region_name=region)
    response = rekognition.detect_labels(
        Image={
            "S3Object": {
                "Bucket": bucket,
                "Name": key,
            }
        },
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
    return response['Labels']


for label in detect_labels(BUCKET, KEY):
    print("{Name} - {Confidence}%".format(**label))

Running the App:

Running our Python app will result in the following:

$ python rekog.py
People - 98.9893875122%
Person - 98.9893951416%
Human - 98.9505844116%
Alcohol - 98.573425293%
Beer - 98.573425293%
Beer Bottle - 98.573425293%
Beverage - 98.573425293%
Bottle - 98.573425293%
Drink - 98.573425293%
Couch - 98.4713821411%

Setup RocketChat on Docker Swarm

Rocket.Chat is a self-hosted alternative which is very similar to Slack.

We will set up a RocketChat server hosted on Docker Swarm. In future posts, I will also go through the steps of working with the API, custom emojis, etc.

Requirements:

RocketChat uses MongoDB as its database. We will keep the database outside of our swarm; if you don't already have a MongoDB server in place, follow the Setup a 3 Node MongoDB post to get that sorted.

Another requirement is to have Docker Swarm running; alternatively, you can follow RocketChat's documentation if you prefer setting it up elsewhere.

Setup Rocket Chat

We will assume MongoDB is accessible via mongodb.domain.com on port 27017, with a username and password.
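
The service definition below also references a Docker secret named rocketchat_secret; if it does not exist yet, create it first. Generating a random value with openssl is just one example:

Create the Secret
$ openssl rand -base64 32 | docker secret create rocketchat_secret -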

Create the RocketChat service and associate it with our appnet overlay network:

docker service create --name rocketchat \
--replicas 1 \
--network appnet \
--env DEPLOY_METHOD=docker \
--env NODE_ENV=production \
--env PORT=3000 \
--env MONGO_URL="mongodb://mongoadmin:mongopass@mongodb.domain.com:27017/rocketchat?authSource=admin" \
--env ROOT_URL=http://rocketchat.domain.com \
--env ADMIN_USERNAME=myadmin \
--env ADMIN_PASS=secret \
--env ADMIN_EMAIL=mail@domain.com \
--env Accounts_AvatarStorePath=/app/uploads \
--secret rocketchat_secret \
rocketchat/rocket.chat

View the RocketChat Service Logs

Let's monitor the docker service logs for our rocketchat service:

$ docker service logs -f rocketchat

rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | Using GridFS for custom sounds storage
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | Using GridFS for custom emoji storage
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | ufs: temp directory created at "/tmp/ufs"
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | System startup
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | +--------------------------------------------------------------+
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |                        SERVER RUNNING                        |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | +--------------------------------------------------------------+
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |                                                              |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |  Rocket.Chat Version: 0.58.2                                 |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |       NodeJS Version: 4.8.4 - x64                            |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |             Platform: linux                                  |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |         Process Port: 3000                                   |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |             Site URL: http://rocketchat.domain.com           |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |     ReplicaSet OpLog: Disabled                               |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |          Commit Hash: 988103d449                             |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |        Commit Branch: HEAD                                   |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | |                                                              |
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | +--------------------------------------------------------------+
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | Inserting admin user:
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | Name: Administrator
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | Email: mail@domain.com
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | Username: myadmin
rocketchat.1.lnbyfjiotqpz@docker-swarm-worker-03    | Password: secret

Now you should be able to access Rocket Chat on the ROOT_URL that you have specified.
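
A quick way to verify this from the command line is a HEAD request against the ROOT_URL. This assumes rocketchat.domain.com already resolves to a node that can reach the service, for example via a published port or a proxy on the appnet network:

Check the Endpoint
$ curl -I http://rocketchat.domain.com
HTTP/1.1 200 OK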
