Ruan Bekker's Blog

From a Curious mind to Posts on Github

Build a Traefik Proxy Image for Your Raspberry Pi on Docker Swarm

In this post we will build a Docker image for Traefik Proxy on the ARM architecture, specifically for the Raspberry Pi, which we will deploy to our Raspberry Pi Docker Swarm.

We will build and push the image to a registry, deploy Traefik, and then set up a web application that sits behind our Traefik proxy.

What is Traefik

Traefik is a modern load balancer and reverse proxy built for microservices.

Dockerfile

We will be running Traefik on Alpine 3.8:

FROM rbekker87/armhf-alpine:3.8

ENV TRAEFIK_VERSION 1.7.0-rc3
ENV ARCH arm

ADD https://github.com/containous/traefik/releases/download/v${TRAEFIK_VERSION}/traefik_linux-${ARCH} /traefik

RUN apk add --no-cache ca-certificates \
    && chmod +x /traefik \
    && rm -rf /var/cache/apk/*

EXPOSE 80 8080 443

ENTRYPOINT ["/traefik"]

Build and Push

Build and Push your image to your registry of choice:

$ docker build -t your-user/repo:tag .
$ docker push your-user/repo:tag

If you do not want to build and push your own image, I have a public image available at https://hub.docker.com/r/rbekker87/armhf-traefik/; the image itself is rbekker87/armhf-traefik:1.7.0-rc3
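If you go with the public image, you can pull it beforehand to confirm that it runs on your Pi (Traefik's version subcommand should print the release details):

$ docker pull rbekker87/armhf-traefik:1.7.0-rc3
$ docker run --rm rbekker87/armhf-traefik:1.7.0-rc3 version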

Deploy Traefik to the Swarm

In our traefik-compose.yml you will notice that I have set our network as external, so the network needs to exist before we deploy the stack.

Let’s create the overlay network:

$ docker network create --driver overlay appnet

Below, the traefik-compose.yml, note that I’m using pistack.co.za as my domain:

version: "3.4"

services:
  traefik:
    image: rbekker87/armhf-traefik:1.7.0-rc3
    command:
      - "--api"
      - "--docker"
      - "--docker.swarmmode"
      - "--docker.domain=pistack.co.za"
      - "--docker.watch"
      - "--logLevel=DEBUG"
      - "--web"
    networks:
      - appnet
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
      - 8080:8080
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == manager]

networks:
  appnet:
    external: true

Deploy the stack:

$ docker stack deploy -c traefik-compose.yml proxy

List the stacks:

$ docker stack ls
NAME                SERVICES
proxy               1

Check if the services in your stack are running. Since our deploy mode was global, there will be a replica running on each node, and in my swarm I’ve got 3 nodes:

$ docker stack services proxy
ID                  NAME                MODE                REPLICAS            IMAGE                    PORTS
16x31j7o0f0r        proxy_traefik       global              3/3                 rbekker87/armhf-traefik:1.7.0-rc3   *:80->80/tcp,*:8080->8080/tcp
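Since we enabled --api and --web and published port 8080, the dashboard should now be reachable on any manager node. If I remember correctly, the 1.x API also exposes a /health endpoint with some basic stats, so a quick check from the manager looks like this:

$ curl -s http://localhost:8080/health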

Deploy a Web Service hooked up to Traefik

Prerequisite:

To register subdomains on the fly, set the DNS for your domain to the following (I’m using pistack.co.za in this example); you can verify that the records resolve with the check shown after the list:

  • pistack.co.za A x.x.x.x
  • *.pistack.co.za A x.x.x.x
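To double check that the wildcard record resolves before continuing (assuming you have dig installed, with x.x.x.x being your public IP as above):

$ dig +short pistack.co.za
x.x.x.x
$ dig +short whoami.pistack.co.za
x.x.x.x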

Next, we will deploy a web app that will be associated with our Traefik service domain, informing Traefik of the FQDN and port that should be registered with the proxy.

Our app-compose.yml file for our webapp:

version: "3.4"

services:
  whoami:
    image: rbekker87/golang-whoami:alpine-amrhf
    networks:
      - appnet
    deploy:
      replicas: 3
      labels:
        - "traefik.backend=whoami"
        - "traefik.port=80"
        - "traefik.frontend.rule=Host:whoami.pistack.co.za"
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.role == worker]
    healthcheck:
      test: nc -vz 127.0.0.1 80 || exit 1
      interval: 60s
      timeout: 3s
      retries: 3

networks:
  appnet:
    external: true

In the above compose you will notice that our Traefik backend is set to our service name, and traefik.port is the container port that the proxy will forward requests to. Since the proxy and the whoami containers are on the same network, they are able to communicate with each other. We also have our frontend rule, which is the hostname we will reach our application on.
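Even before DNS is in place, you can test the frontend rule by overriding the Host header against the proxy itself (the manager IP below is an example, substitute your own):

$ curl -H "Host: whoami.pistack.co.za" http://192.168.0.2/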

Deploy the stack:

$ docker stack deploy -c whoami.yml web
Creating service web_whoami

List the tasks running in our web stack:

$ docker stack services web
ID                  NAME                MODE                REPLICAS            IMAGE                                  PORTS
31ylfcfb7uyw        web_whoami          replicated          3/3                 rbekker87/golang-whoami:alpine-amrhf

Once all the replicas are running, move along to test the application.

Testing our Application:

I have 3 replicas, each running in its own container, so each container will respond with its own hostname:

$ docker service ps web_whoami
ID                  NAME                IMAGE                                  NODE                DESIRED STATE       CURRENT STATE            ERROR                              PORTS
ivn8fgfosvgd        web_whoami.1        rbekker87/golang-whoami:alpine-amrhf   rpi-01              Running             Running 26 minutes ago
rze6u6z56aop        web_whoami.2        rbekker87/golang-whoami:alpine-amrhf   rpi-02              Running             Running 26 minutes ago
6fjua869r498        web_whoami.3        rbekker87/golang-whoami:alpine-amrhf   rpi-04              Running             Running 23 minutes ago

Making our 1st GET request:

$ curl http://whoami.pistack.co.za/
Hostname: 43f5f0a6682f
IP: 127.0.0.1
IP: 10.0.0.138
IP: 10.0.0.218
IP: 172.18.0.4
GET / HTTP/1.1
Host: whoami.pistack.co.za
User-Agent: curl/7.38.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 165.73.96.95, 10.255.0.2
X-Forwarded-Host: whoami.pistack.co.za
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 31b37f9714d3
X-Real-Ip: 10.255.0.2

Our 2nd GET Request:

$ curl http://whoami.pistack.co.za/
Hostname: d1c17a476414
IP: 127.0.0.1
IP: 10.0.0.138
IP: 10.0.0.71
IP: 172.19.0.5
GET / HTTP/1.1
Host: whoami.pistack.co.za
User-Agent: curl/7.38.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 165.73.96.95, 10.255.0.2
X-Forwarded-Host: whoami.pistack.co.za
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 02b0ff6eab73
X-Real-Ip: 10.255.0.2

And our 3rd GET Request:

$ curl http://whoami.pistack.co.za/
Hostname: 17c817a1813b
IP: 172.18.0.6
IP: 127.0.0.1
IP: 10.0.0.138
IP: 10.0.0.73
GET / HTTP/1.1
Host: whoami.pistack.co.za
User-Agent: curl/7.38.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 165.73.96.95, 10.255.0.2
X-Forwarded-Host: whoami.pistack.co.za
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: 31b37f9714d3
X-Real-Ip: 10.255.0.2

Hope this was useful.


Building a Raspberry Pi Nginx Image With Caching on Alpine for Docker Swarm

In this guide we will be creating an Nginx reverse proxy with the ability to cache static content, using an Alpine image.

We will then push the image to GitLab’s private registry and run the service on Docker Swarm.

Create the backend service:

We will proxy upstream to our Ghost blog, which you can deploy using:

$ docker service create --name blog --network docknet rbekker87/armhf-ghost:2.0.3

Current File Structure:

Our file structure for the assets we need to build the reverse proxy:

$ find .
./conf.d
./conf.d/blog.conf
./Dockerfile
./nginx.conf
  • Dockerfile
FROM hypriot/rpi-alpine-scratch
MAINTAINER Ruan Bekker

RUN apk update && \
    apk add nginx && \
    rm -rf /etc/nginx/nginx.conf && \
    chown -R nginx:nginx /var/lib/nginx && \
    rm -rf /var/cache/apk/*

ADD nginx.conf /etc/nginx/
ADD conf.d/blog.conf /etc/nginx/conf.d/

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
  • nginx.conf
user nginx;
worker_processes 1;

events {
    worker_connections 1024;
    }

error_log  /var/log/nginx/nginx_error.log warn;

http {

    sendfile        on;
    tcp_nodelay         on;

    gzip              on;
    gzip_http_version 1.0;
    gzip_proxied      any;
    gzip_min_length   500;
    gzip_disable      "MSIE [1-6]\.";
    gzip_types        text/plain text/xml text/css
                      text/comma-separated-values
                      text/javascript
                      application/x-javascript
                      application/atom+xml;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;
    error_log   /var/log/nginx/error.log;

    proxy_cache_path /var/cache/nginx/ levels=1:2 keys_zone=nginx_cache:5m max_size=128m inactive=60m;

    keepalive_timeout  60;
    server_tokens      off;

    include /etc/nginx/conf.d/*.conf;

}

Hostname resolution to our Ghost Blog Service: in our swarm we have a service called blog which is attached to the docknet network, so DNS resolution of the service name will resolve to the VIP of the service, as seen below:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                    PORTS
nq42a6jfwx3d        blog                replicated          1/1                 rbekker87/armhf-ghost:2.0.3
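To confirm that the service name resolves from within the network, you can exec into one of the blog containers and look it up (this assumes the image ships BusyBox's nslookup, which Alpine based images usually do):

$ docker exec -it $(docker ps -q -f name=blog) nslookup blog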
  • conf.d/blog.conf
upstream ghost_blog {
    server blog:2368;
    }

server {
    listen 80;
    server_name blog.yourdomain.com;

    access_log  /var/log/nginx/blog_access.log  main;
    error_log   /var/log/nginx/blog_error.log;

    location / {

        proxy_cache                 nginx_cache;
        add_header                  X-Proxy-Cache $upstream_cache_status;
        proxy_ignore_headers        Cache-Control;
        proxy_cache_valid any       10m;
        proxy_cache_use_stale       error timeout http_500 http_502 http_503 http_504;

        proxy_pass                  http://ghost_blog;
        proxy_redirect              off;

        proxy_set_header            Host $host;
        proxy_set_header            X-Real-IP $remote_addr;
        proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header            X-Forwarded-Host $server_name;
    }
}

Building the Image and Pushing to Gitlab

I’m using Gitlab in this demonstration, but you can use the registry of your choice:

$ docker login registry.gitlab.com
$ docker build -t registry.gitlab.com/user/docker/arm-nginx:caching .
$ docker tag registry.gitlab.com/user/docker/arm-nginx:caching registry.gitlab.com/user/docker/arm-nginx:caching
$ docker push registry.gitlab.com/user/docker/arm-nginx:caching

Deploy

Create the Nginx Reverse Proxy Service on Docker Swarm:

$ docker service create --name nginx_proxy \
--network docknet \
--publish 80:80 \
--replicas 1 \
--with-registry-auth registry.gitlab.com/user/docker/arm-nginx:caching

Listing our Services:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                    PORTS
je7x21l7egoh        nginx_proxy         replicated          1/1                 registry.gitlab.com/user/docker/arm-nginx:caching   *:80->80/tcp
nq42a6jfwx3d        blog                replicated          1/1                 rbekker87/armhf-ghost:2.0.3

Once you access your proxy on port 80, you should see your Ghost Blog Homepage like below:

Have a look at the benchmark performance when using Nginx with caching enabled, covered in the next post.
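You can also confirm that the cache is doing its work by looking at the X-Proxy-Cache header we added in blog.conf; the first request should report a MISS, and repeat requests within the cache validity window a HIT (run this from a node where port 80 is published):

$ curl -sI -H "Host: blog.yourdomain.com" http://localhost/ | grep X-Proxy-Cache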


Nginx Caching Performance for Static Content on Docker Swarm With RaspberryPi

The Environment:

I had my Ghost Blog listening on port 2368 and exposing port 80 on Docker so that the port translation directs port 80 traffic to port 2368 on Ghost directly.

Alex responded on my tweet and introduced Nginx Caching:

With this approach the benchmarking results were not so great in terms of requests per second, and as this hostname will only be used for a blog, it’s a great idea to cache the content. This was achieved with the help of Alex’s blog: blog.alexellis.io/save-and-boost-with-nginx/

How Nginx was Configured:

I have a blogpost on how I set up Nginx on an Alpine image, where I configure caching and proxy-pass the connections through to my Ghost blog.

Benchmarking: Before Nginx with Caching was Implemented:

When doing an apache benchmark I got 9.31 requests per second performing the test on my LAN:

$ ab -n 500 -c 10 http://rbkr.ddns.net/

This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking rbkr.ddns.net (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests


Server Software:
Server Hostname:        blog.pistack.co.za
Server Port:            80

Document Path:          /
Document Length:        5470 bytes

Concurrency Level:      10
Time taken for tests:   53.725 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      2863000 bytes
HTML transferred:       2735000 bytes
Requests per second:    9.31 [#/sec] (mean)
Time per request:       1074.501 [ms] (mean)
Time per request:       107.450 [ms] (mean, across all concurrent requests)
Transfer rate:          52.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   0.5      2       6
Processing:   685 1068  68.7   1057    1306
Waiting:      683 1067  68.6   1056    1306
Total:        689 1070  68.7   1058    1312

Percentage of the requests served within a certain time (ms)
  50%   1058
  66%   1088
  75%   1102
  80%   1110
  90%   1163
  95%   1218
  98%   1240
  99%   1247
 100%   1312 (longest request)

Benchmarking: After Nginx Caching was Implemented:

After Nginx Caching was Implemented, I got 1067.73 requests per second using apache benchmark over a LAN connection! Absolutely awesome!

$ ab -n 500 -c 10 http://blog.pistack.co.za/
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking blog.pistack.co.za (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests


Server Software:        nginx
Server Hostname:        blog.pistack.co.za
Server Port:            80

Document Path:          /
Document Length:        5470 bytes

Concurrency Level:      10
Time taken for tests:   0.468 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      2880500 bytes
HTML transferred:       2735000 bytes
Requests per second:    1067.73 [#/sec] (mean)
Time per request:       9.366 [ms] (mean)
Time per request:       0.937 [ms] (mean, across all concurrent requests)
Transfer rate:          6007.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        3    4   1.4      4      10
Processing:     3    5   1.6      4      10
Waiting:        2    4   1.6      4      10
Total:          6    9   2.7      8      17

Percentage of the requests served within a certain time (ms)
  50%      8
  66%      8
  75%      9
  80%      9
  90%     15
  95%     15
  98%     15
  99%     16
 100%     17 (longest request)

Resources:

Thanks to Alex Ellis for the suggestion on this, and definitely have a look at blog.alexellis.io as he has some epic content on his blog!

Setting Up a Docker Swarm Cluster on 3 RaspberryPi Nodes

As the curious person that I am, I like to play around with new stuff that I stumble upon, and one of them was having a docker swarm cluster running on 3 Raspberry Pi’s on my LAN.

The idea is to have 3 Raspberry Pi’s (Model 3 B): a Manager Node and 2 Worker Nodes, each with a 32 GB SanDisk SD Card, which will also be part of a 3x replicated GlusterFS volume that will come in handy later for data that needs to be persistent.

More Information on: Docker Swarm

Provision Raspbian on each RaspberryPi

Grab the latest Raspbian Lite image; the following source will help with provisioning your RaspberryPi with Raspbian.

Installing Docker on Raspberry PI

On each node, run the following to install Docker, and add your user to the docker group so that you can run docker commands as a normal user:

$ apt-get update && sudo apt-get upgrade -y
$ sudo apt-get remove docker.io
$ curl https://get.docker.com | sudo bash
$ sudo usermod -aG docker pi

If you have an internal DNS server, set an A record for each node, or for simplicity, update the hosts file on each node so that each node’s hostname resolves to its provisioned IP address:

$ cat /etc/hosts
192.168.0.2   rpi-01
192.168.0.3   rpi-02
192.168.0.4   rpi-03

Also, to have passwordless SSH, from each node:

$ ssh-keygen -t rsa
$ ssh-copy-id rpi-01
$ ssh-copy-id rpi-02
$ ssh-copy-id rpi-03

Initialize the Swarm

Time to set up our swarm. As we have more than one network interface, we will need to initialize the swarm by specifying the IP address of the network interface that is accessible from our LAN:

$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr a1:12:bc:d3:cd:4d
          inet addr:192.168.0.2  Bcast:192.168.0.255  Mask:255.255.255.0

Now that we have our IP Address, initialize the swarm on the manager node:

pi@rpi-01:~ $ docker swarm init --advertise-addr 192.168.0.2
Swarm initialized: current node (siqyf3yricsvjkzvej00a9b8h) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 \
    192.168.0.2:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Then from rpi-02 join the manager node of the swarm:

pi@rpi-02:~ $ docker swarm join --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 192.168.0.2:2377
This node joined a swarm as a worker.

Then from rpi-03 join the manager node of the swarm:

pi@rpi-03:~ $ docker swarm join --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 192.168.0.2:2377
This node joined a swarm as a worker.

Then from the manager node: rpi-01, ensure that the nodes are checked in:

pi@rpi-01:~ $ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
62s7gx1xdm2e3gp5qoca2ru0d     rpi-03              Ready               Active
6fhyfy9yt761ar9pl84dkxck3 *   rpi-01              Ready               Active              Leader
pg0nyy9l27mtfc13qnv9kywe7     rpi-02              Ready               Active

Setting Up a Replicated GlusterFS Volume

I have decided to set up a replicated GlusterFS volume so that data is replicated throughout the cluster whenever I need persistent storage. On each node, install the GlusterFS client and server:

$ sudo apt install glusterfs-server glusterfs-client -y && sudo systemctl enable glusterfs-server

Probe the other nodes from the manager node:

pi@rpi-01:~ $ sudo gluster peer probe rpi-02
peer probe: success.

pi@rpi-01:~ $ sudo gluster peer probe rpi-03
peer probe: success.

Ensure that we can see all 3 nodes in our GlusterFS Pool:

pi@rpi-01:~ $ sudo gluster pool list
UUID                                    Hostname        State
778c7463-ba48-43de-9f97-83a960bba99e    rpi-02          Connected
00a20a3c-5902-477e-a8fe-da35aa955b5e    rpi-03          Connected
d82fb688-c50b-405d-a26f-9cb2922cce75    localhost       Connected

From each node, create the directory where GlusterFS will store the data for the bricks that we will specify when creating the volume:

pi@rpi-01:~ $ sudo mkdir -p /gluster/brick
pi@rpi-02:~ $ sudo mkdir -p /gluster/brick
pi@rpi-03:~ $ sudo mkdir -p /gluster/brick

Next, create a 3 Way Replicated GlusterFS Volume:

pi@rpi-01:~ $ sudo gluster volume create rpi-gfs replica 3 \
rpi-01:/gluster/brick \
rpi-02:/gluster/brick \
rpi-03:/gluster/brick \
force

volume create: rpi-gfs: success: please start the volume to access data

Start the GlusterFS Volume:

pi@rpi-01:~ $ sudo gluster volume start rpi-gfs
volume start: rpi-gfs: success

Verify the GlusterFS volume info; from the output below you will see that the volume is replicated 3 ways across the 3 bricks that we specified:

pi@rpi-01:~ $ sudo gluster volume info

Volume Name: rpi-gfs
Type: Replicate
Volume ID: b879db15-63e9-44ca-ad76-eeaa3e247623
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: rpi-01:/gluster/brick
Brick2: rpi-02:/gluster/brick
Brick3: rpi-03:/gluster/brick

Mount the GlusterFS Volume on each Node, first on rpi-01:

pi@rpi-01:~ $ sudo umount /mnt
pi@rpi-01:~ $ echo 'localhost:/rpi-gfs /mnt glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' | sudo tee -a /etc/fstab
pi@rpi-01:~ $ sudo mount.glusterfs localhost:/rpi-gfs /mnt
pi@rpi-01:~ $ sudo chown -R pi:docker /mnt

Then on rpi-02:

pi@rpi-02:~ $ sudo umount /mnt
pi@rpi-02:~ $ echo 'localhost:/rpi-gfs /mnt glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' | sudo tee -a /etc/fstab
pi@rpi-02:~ $ sudo mount.glusterfs localhost:/rpi-gfs /mnt
pi@rpi-02:~ $ sudo chown -R pi:docker /mnt

And lastly on rpi-03:

pi@rpi-03:~ $ sudo umount /mnt
pi@rpi-03:~ $ echo 'localhost:/rpi-gfs /mnt glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' | sudo tee -a /etc/fstab
pi@rpi-03:~ $ sudo mount.glusterfs localhost:/rpi-gfs /mnt
pi@rpi-03:~ $ sudo chown -R pi:docker /mnt

Then your GlusterFS Volume will be mounted on all the nodes, and when a file is written to the /mnt/ partition, data will be replicated to all the nodes in the Cluster:

pi@rpi-01:~ $ df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/root            30G  4.5G   24G  16% /
localhost:/rpi-gfs   30G  4.5G   24G  16% /mnt
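A quick way to see the replication in action is to write a file on one node and read it back from another (the filename below is just an example):

pi@rpi-01:~ $ echo "hello from rpi-01" > /mnt/hello.txt
pi@rpi-02:~ $ cat /mnt/hello.txt
hello from rpi-01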

Create a Web Service on Docker Swarm:

Let’s create a web service in our swarm called web, specifying 1 replica and publishing port 80 on the swarm to port 80 in our containers:

pi@rpi-01:~ $ docker service create --name web --replicas 1 --publish 80:80 hypriot/rpi-busybox-httpd
vsvyanuw6q6yf4jr52m5z7vr1

Verifying that our service has started and matches the desired replica count:

pi@rpi-01:~ $ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                    PORTS
vsvyanuw6q6y        web                 replicated          1/1                 hypriot/rpi-busybox-httpd:latest                         *:891->80/tcp

Inspecting the Service:

pi@rpi-01:~ $ docker service inspect web
[
    {
        "ID": "vsvyanuw6q6yf4jr52m5z7vr1",
        "Version": {
            "Index": 2493
        },
        "CreatedAt": "2017-07-16T21:20:00.017836646Z",
        "UpdatedAt": "2017-07-16T21:20:00.026359794Z",
        "Spec": {
            "Name": "web",
            "Labels": {},
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "hypriot/rpi-busybox-httpd:latest@sha256:c00342f952d97628bf5dda457d3b409c37df687c859df82b9424f61264f54cd1",
                    "StopGracePeriod": 10000000000,
                    "DNSConfig": {}
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "Delay": 5000000000,
                    "MaxAttempts": 0
                },
                "Placement": {},
                "ForceUpdate": 0
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "RollbackConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 80,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 80,
                        "PublishedPort": 80,
                        "PublishMode": "ingress"
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 80,
                    "PublishedPort": 80,
                    "PublishMode": "ingress"
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "zjerz0xsw39icnh24enja4cgk",
                    "Addr": "10.255.0.13/16"
                }
            ]
        }
    }
]

Docker Swarm’s routing mesh takes care of the internal routing, so requests will be answered even if the container is not running on the node that you are making the request against.

With that said, verifying on which node our service is running:

pi@rpi-01:~ $ docker service ps web
ID                  NAME                IMAGE                              NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
sd67cd18s5m0        web.1               hypriot/rpi-busybox-httpd:latest   rpi-02              Running             Running 2 minutes ago

When we make an HTTP request to any of the nodes’ IP addresses, our request will be answered with this awesome static page:
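To see the routing mesh in action, you can curl every node in turn on the published port; each one should return a 200 regardless of which node the container is actually running on (this assumes the hostnames from the hosts file above, and that port 80 is the port your swarm published):

pi@rpi-01:~ $ for node in rpi-01 rpi-02 rpi-03; do curl -s -o /dev/null -w "$node => %{http_code}\n" http://$node/; done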

We can see we only have one container in our swarm, let’s scale that up to 3 containers:

pi@rpi-01:~ $ docker service scale web=3
web scaled to 3

Now that the service is scaled to 3 containers, requests will be handled using a round-robin algorithm. To ensure that the service scaled, we can see that we now have 3 replicas:

pi@rpi-01:~ $ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                                    PORTS
vsvyanuw6q6y        web                 replicated          3/3                 hypriot/rpi-busybox-httpd:latest                         *:891->80/tcp

Verifying which nodes these containers are running on:

pi@rpi-01:~ $ docker service ps web
ID                  NAME                IMAGE                              NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
sd67cd18s5m0        web.1               hypriot/rpi-busybox-httpd:latest   rpi-02              Running             Running 2 minutes ago
ope3ya7hh9j4        web.2               hypriot/rpi-busybox-httpd:latest   rpi-03              Running             Running 30 seconds ago
07m1ww7ptxro        web.3               hypriot/rpi-busybox-httpd:latest   rpi-01              Running             Running 28 seconds ago

Lastly, removing the service from our swarm:

pi@rpi-01:~ $ docker service rm web
web

Massive Thanks:

A massive thanks to Alex Ellis for mentioning me in one of his blog posts:

My PiStack Blog Proudly Hosted on My RaspberryPi Swarm Cluster

This is a repost of my first blogpost (04 July 2017), which is hosted on my Raspberry Pi cluster that runs Docker Swarm, is served from my home in South Africa, and can be accessed at http://blog.pistack.co.za

Just Look at It!

  • 3x Raspberry Pi 3 Model B
  • Quad Core 1.2GHz Broadcom BCM2837 64bit CPU
  • 1GB RAM
  • BCM43438 wireless LAN and Bluetooth Low Energy (BLE) on board
  • 3x 32GB Sandisk SD Cards (Replicated GlusterFS Volume for /gluster partition)
  • Upgraded switched Micro USB power source up to 2.5A

My Setup:

I have 3x Raspberry Pi 3’s, each with a 32GB SanDisk SD Card, formatted with Raspbian Jessie Lite, powered by a 6 port USB hub and networked with a Totolink 5 port gigabit switch (note that the RPi itself does not support gigabit networking).

For persistent storage I have setup a Replicated GlusterFS Volume across the 3 nodes.

More details on how I did the setup, can be found from the Setting Up a Docker Swarm Cluster on RaspberryPi Nodes blog post.

Thanks!

Thanks for the visit; I will blog about awesome Docker and RaspberryPi related stuff as my mind stumbles upon awesome ideas :)

Capturing 54 Million Passwords With a Docker SSH Honeypot

Over the last couple of days I picked up a couple of thousand SSH brute force attacks on my ELK Stack, so I decided to revisit my SSH server configuration and change my SSH port to something else in the interim. The dashboard that showed me the results at that point in time:

Then I decided I would actually like to set up an SSH honeypot to listen on port 22, change my SSH server to listen on port 222, capture the IP addresses, usernames and passwords that the attackers are trying to use, and dump it all to a file so that I can build up my own password dictionary :D

SSH Configuration:

Changing the SSH Port:

$ sudo vim /etc/ssh/sshd_config

Change the port to 222:

Port 222

Restart the SSH Server:

$ sudo /etc/init.d/ssh restart

Verify that the SSH Server is running on the new port:

$ sudo netstat -tulpn | grep sshd
tcp        0      0 0.0.0.0:222            0.0.0.0:*               LISTEN      28838/sshd
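Before closing your current session, confirm from another machine that you can still get in on the new port (the hostname and user below are placeholders):

$ ssh -p 222 user@your-server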

Docker SSH Honeypot:

Thanks to random-robbie, as he had everything I was looking for on Github.

Setup the SSH Honeypot:

$ git clone https://github.com/random-robbie/docker-ssh-honey
$ cd docker-ssh-honey/
$ docker build . -t local:ssh-honeypot
$ docker run -itd --name ssh-honeypot -p 22:22 local:ssh-honeypot

Once people attempt to SSH in, you will see the attempts on stdout:

$ docker logs -f $(docker ps -f name=ssh-honeypot -q) | grep -v 'Error exchanging' | head -10
[Tue Jul 31 01:13:41 2018] ssh-honeypot 0.0.8 by Daniel Roberson started on port 22. PID 5
[Tue Jul 31 01:19:49 2018] 1xx.1xx.1xx.1x gambaa gambaa
[Tue Jul 31 01:23:26 2018] 1xx.9x.1xx.1xx root toor
[Tue Jul 31 01:25:57 2018] 1xx.2xx.1xx.1xx root Passw0rd1234
[Tue Jul 31 01:26:00 2018] 1xx.2xx.1xx.1xx root Qwer1234
[Tue Jul 31 01:26:00 2018] 1xx.2xx.1xx.1xx root Abcd1234
[Tue Jul 31 01:26:08 2018] 1xx.2xx.1xx.1xx root ubuntu
[Tue Jul 31 01:26:09 2018] 1xx.2xx.1xx.1xx root PassWord
[Tue Jul 31 01:26:10 2018] 1xx.2xx.1xx.1xx root password321
[Tue Jul 31 01:26:15 2018] 1xx.2xx.1xx.1xx root zxcvbnm

Saving results to disk:

Redirecting the output to a log file, running in the foreground as a screen session:

$ screen -S honeypot
$ docker logs -f f6cb | grep -v 'Error exchanging' | awk '{print $6, $7, $8}' >> /var/log/ssh-honeypot.log

Detach from your screen session:

Ctrl + a; d

Checking out the logs

$ head -3 /var/log/ssh-honeypot.log
2.7.2x.1x root jiefan
4x.7.2x.1x root HowAreYou
4x.7.2x.1x root Sqladmin

After leaving this running for a couple of months, I have a massive password database:

$ wc -l /var/log/honeypot/ssh.log
54184260 /var/log/honeypot/ssh.log

That is correct, 54 million password attempts. 5372 Unique IPs, 4082 Unique Usernames, 88829 Unique Passwords.
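Since each log line is simply ip username password, those unique counts can be pulled straight out of the file with awk and sort, along these lines:

$ awk '{print $1}' /var/log/honeypot/ssh.log | sort -u | wc -l   # unique IPs
$ awk '{print $2}' /var/log/honeypot/ssh.log | sort -u | wc -l   # unique usernames
$ awk '{print $3}' /var/log/honeypot/ssh.log | sort -u | wc -l   # unique passwords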

Splitting Query String Parameters From a URL in Python

I’m working on capturing some data that I want to use for analytics, and a big part of that is capturing the query string parameters that are in the request URL.

So essentially I would like to break the data up into key value pairs, using Python and the urllib module, which will then be pushed into a database like MongoDB or DynamoDB.

Our URL:

The URLs that we will have will more or less look like the following:

https://surveys.mydomain.com/one/abc123?companyId=178231&group_name=abc_12&utm_source=survey&utm_medium=email&utm_campaign=survey-top-1

So we have a couple of utm parameters, a company id, a group name etc., which will be used for analysis.

Python to Capture the Parameters:

Using Python, it’s quite easy:

>>> from urllib import parse
>>> url = 'https://surveys.mydomain.com/one/abc123?companyId=178231&group_name=abc_12&utm_source=survey&utm_medium=email&utm_campaign=survey-top-1'

>>> parse.urlsplit(url)
SplitResult(scheme='https', netloc='surveys.mydomain.com', path='/one/abc123', query='companyId=178231&group_name=abc_12&utm_source=survey&utm_medium=email&utm_campaign=survey-top-1', fragment='')
>>> parse.parse_qsl(parse.urlsplit(url).query)
[('companyId', '178231'), ('group_name', 'abc_12'), ('utm_source', 'survey'), ('utm_medium', 'email'), ('utm_campaign', 'survey-top-1')]

Now to get our data in a dictionary, we can just convert it using the dict() function:

>>> dict(parse.parse_qsl(parse.urlsplit(url).query))
{'companyId': '178231', 'group_name': 'abc_12', 'utm_source': 'survey', 'utm_medium': 'email', 'utm_campaign': 'survey-top-1'}

This data can then be used to write to a database, which can then be used for analysis.


Using the GeoIP Processor Plugin With Elasticsearch to Enrich Your Location Based Data

So we have documents ingested into Elasticsearch, and one of the fields has an IP address, but at this moment it’s just an IP address. The goal is to derive more information from this IP address, so that we can use Kibana’s Coordinate Maps to map our data on a geographical map.

In order to do this we need to make use of the GeoIP Ingest Processor Plugin, which adds information about the geographical location of the IP addresses that it receives. This information is retrieved from the MaxMind databases.

So when we pass an IP address through the processor, for example one of GitHub’s IP addresses, 192.30.253.113, we will get in return:

"_source" : {
  "geoip" : {
    "continent_name" : "North America",
    "city_name" : "San Francisco",
    "country_iso_code" : "US",
    "region_name" : "California",
    "location" : {
      "lon" : -122.3933,
      "lat" : 37.7697
    }
  },
  "ip" : "192.30.253.113",
}

Installation

First we need to install the ingest-geoip plugin. Change to your elasticsearch home path:

$ cd /usr/share/elasticsearch/
$ sudo bin/elasticsearch-plugin install ingest-geoip
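The plugin is only picked up once the node has been restarted, so restart Elasticsearch afterwards (assuming a systemd based install):

$ sudo systemctl restart elasticsearch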

Setting up the Pipeline

Now that we’ve installed the plugin, let’s set up our pipeline where we will reference our GeoIP processor:

$ curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/geoip' -d '
{
  "description" : "Add GeoIP Info",
  "processors" : [
    {
      "geoip" : {
        "field" : "ip"
      }
    }
  ]
}
'
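Before indexing anything, you can dry-run the pipeline with the ingest simulate API to confirm that the processor resolves the IP as expected:

$ curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/geoip/_simulate' -d '
{
  "docs": [
    { "_source": { "ip": "192.30.253.113" } }
  ]
}
'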

Ingest and Test

Let’s create the Index and apply the mapping:

$ curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/my_index' -d '
{
  "mappings": {
    "doc": {
      "properties": {
        "geoip": {
          "properties": {
            "location": {
              "type": "geo_point"
            }
          }
        }
      }
    }
  }
}'

Create the Document and specify the pipeline name:

$ curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/my_index/metrics/?pipeline=geoip' -d '
{
  "identifier": "github", 
  "service": "test", 
  "os": "linux", 
  "ip": "192.30.253.113"
}
'

Once the document is ingested, have a look at the document:

$ curl -XGET 'http://localhost:9200/my_index/_search?q=identifier:github&pretty'
{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.6931472,
    "hits" : [
      {
        "_index" : "my_index",
        "_type" : "doc",
        "_id" : "2QVXzmUBZLvWjZA0DvLO",
        "_score" : 0.6931472,
        "_source" : {
          "identifier" : "github",
          "geoip" : {
            "continent_name" : "North America",
            "city_name" : "San Francisco",
            "country_iso_code" : "US",
            "region_name" : "California",
            "location" : {
              "lon" : -122.3933,
              "lat" : 37.7697
            }
          },
          "service" : "test",
          "ip" : "192.30.253.113",
          "os" : "linux"
        }
      }
    ]
  }
}

Kibana

Let’s plot our data on Kibana:

  • From Management: Select Index Patterns, Create index pattern, set: my_index
  • From Visualize: Select Geo Coordinates, select your index: my_index
  • From Buckets: select Geo Coordinates, set the aggregation to GeoHash, then under field select geoip.location, hit run, and you should see something like this:


Investigating High Request Latencies on Amazon DynamoDB

While testing DynamoDB for a specific use case, I picked up that at times a GetItem would incur about 150ms of RequestLatency on the Max statistic. This made me want to understand the behavior that I was observing.

I will go through my steps, drilling down on pointers to where latency can be reduced.

DynamoDB Performance Testing Overview

Tests:

  • Create 2 Tables with 10 WCU / 10 RCU, one encrypted, one non-encrypted
  • Seed both tables with 10 items, 18KB per item
  • Do 4 tests:
    • Encrypted: Consistent Reads
    • Encrypted: Eventual Consistent Reads
    • Non-Encrypted: Consistent Reads
    • Non-Encrypted: Eventual Consistent Reads

Seed the Table(s):

Seed the Table with 10 items, 18KB per item:

from boto3 import Session as boto3_session
from time import sleep, strftime
from random import sample

# session ids that will be fetched in a random.choice order
session_ids = [
    '77c81e29-c86a-411e-a5b3-9a8fb3b2595f',
    'b9a2b8ee-17ab-423c-8dbc-91020cd66097',
    'cbe01734-c506-4998-8727-45f1aa0de7e3',
    'e789f69b-420b-4e6d-9095-cd4482820454',
    'c808a4e6-311e-48d2-b3fd-e9b0602a16ac',
    '2ddf0416-6206-4c95-b6e5-d88b5325a7b1',
    'e8157439-95f4-49a9-91e3-d1afc60a812f',
    'f032115b-b04f-423c-9dfe-e004445b771b',
    'dd6904c5-b65b-4da4-b0b2-f9e1c5895086',
    '075e59be-9114-447b-8187-a0acf1b2f127'
]

generated_string = 'DHUijK8WK03hU0OF3UusjI0hhNd0hLRg03Vh430hJdUh9JsFi2hwO0s00FwO3wwI90Ju28UH94UUHU90fj9S99IfhVu89fU3o0OF04D8ljKNLsK83HJsLK8hUfsffURuuJfgR984h98jjF3sjsL0I3W0g039FJ8I0gh0IPI0Wisd8hhUg3308W3iVOJ34uO9OfhFJ900uL30oFu9JkwjfFUR4008OFkiO8H49DD4ONkKu0hUuiL0hR3NLNjh9uu0J0h09hH33gKf9980JhVL9483Ngh0h0fII4IOUOhHuIWodJlF90wuuK8uF84J4uRL00i30IO0d8g0UhIHo0I90D8U0Jwh0u08f9R9r9j00ujsOKJ3HU0hI90hrR8Sjs9jwhj0WFWrfo9g3j09IUu0ufjU83uHWK89i3r4Of8N00KN04O3I9DF09hfUOSdjHhs8I30hPkV0iRUhk9UN0J9983i9UIWioNFSUSWP0903U04RS84f0UI80j3U9I0wUFd90u9JJ9kJk9Hff43ul99hhhU0wdwIHusuh0HoV9Jh4I3U3hj4KH40NLH8L008N9FU8hjsWUjwUIO9N8FUs9fhkfV8NjF8g0OKU9UuS8I44009J8UJu99NVDhjH9S9wDO00fKh3uJgu3JosJD899jPR4L8839Duj9J0SjjLI90Uhhhh4hIsN8uIrjhJjIoi80084wf9KH0LiuNwPhOFOFhIO0o80UF0s308shwIj9iOJhkRUOI3J8fPUJhfN8uJhj8LU0804Ji803OwOHWgU90NU8N3hFF0fJ8008hh98jU9JJ0OFwfJ92JR909909Uh8sFf9P9lUgIH9IOI93rsIJlHKNH0hf099h9Nl98IskhV0hIN8oUo0JhfDs4JlhFl0I99jRhO80J9JoPjN008F03H94U9PHrPuOsPV0FIogD40Js94LIw3JwRP8S3LFk09uj8WFWUJ0iRJjhLh9UoOIjP403kji9hsswRfwNh940jwHwKLIsRL8suouIl9IiH99h0j04h0H989O0uLKhJSjN0D39jffOO88K8hKu8R3V9VRU3Hug09WF0ssJhofU4fW8Fj08KKs2IjVI0i3NjfD0uPFLsO2IIFJdH90KJNR8uR0h90H3J9UFR0890OJI88jfs4Kgf99w800JUHIlh9rs98k3u8249ghju9UFuO8Iks0OOfKO00w94KL980Fw88UiO9IfFh903OIF8dIO3IN99hJh4ufFO3hNUS2i9P9rKlIVl00WH3I3hKP9Rs0u09HNhNHUVhd98IL9fhu0DUhgH0OrJIPsJ0d8fIj9uJOF090R9O8JokI00U9jOLH9h8Us9ifiIhw00g4HKiK09oK8Ij09IUh9US3h8F9FhN39Kd09r99R0dI30huhfR0jL0Hs8u898ji8WjNORuiJKf39kJ099V99h300gJFwl0gUs0j0h0I0wI4Uh9o0l0hNIKjNO02JIj0jJj90iIR0uLhJH09L9H9h4SVsF89hiJV8hh8fhu394uIshIkLH8JKI9JFhsiOJ4s9gk0hu9kfOh8HP9jo4hN0k8JfogI48hr0FwfRjU0SijwNh8VF9J9jUk80J92H93F8h0S29j392RiSuNoh9i00HhFL8I9jN08OHhj0IIh0RKIJ8N0j03uHd0fDUhJ090033FH9luhjugfRhUL8JwH0if8f0hH9h8hfOjiVL8giJSsWgkh9UhOwhjOLhiUL8IjF3JsUJ3HFDh9KKF9Fuh8FDju3uwNFJhPH29shJjjJUOr0f9P94Ff8NIDHOR09WKhUI9hUURhN8FwJP990of9dVsH8W8h889j9g800I8wJjKPR9fu04uINkujF94f9w0O939f0I0ORL94f00O0ujDrIUI0D8rUK9fu4I9lFwogio9WLj0flf8SV0w8F0OUJNVhjWO0KI2RIjh49h3I9NUiIhHUsgDkhhhsJJj9iH8KjI0h0fkUfUsIujOVrLjUi8098Rjih943380D9wss9Kugdh3SH400N888h0j9IPg0Fu8joJ9Lf4U2oJJN833JJ8Fu94kfwR3Oh0iuIH2SFFIh3J93Jj03J39hKI9D8Ouhh0ih8IJsI000I8NRhkI80hkfgjJN0N9hhr09wULF3is0uJ9jV0POIdIO0gu808KH3FF8O2P39r9003dPOIJ9iK3lNI30h0wJN0II4803KNjf9uW9490j080I009H0R9uJ33ggVhlk9j0Hs09uIfdJ009oU9hiiRLi0s8fhFN83d303hhF0N9l0h9F3SF9998gddu0iK9RuiS4ow8Ssg9hi0JNKffOj33Iuh98IHO8sjNRusK0ushWN0JsKO43fd98F490NJg0iU8hhUUFPOJNjg4HO3hh00P0ihUI9wgw9JO0hr0NhOr0wF890U8FK44hhOVN89Kh99IFS90UOI0UIJ8UwjKj9i48jj0sU00ho39RrUIfIuOK8h0O0h0RP8dJ0SOI9sON9l0wl9jo38LOFIURuk0KJiu9L9uI9J3LfdhODujHoW0DIOD3RLUI8Ih0VIFhu80LOO9FJ0hJI89jhfF00HIJuh9gKR8UNfhV8wiI0sIN40iN0fOKIRukDO48JRIIFO0g3IR9fHFhi4uFIN9393JH89g0OVhi8lONuJ8UkW0K098J8INUh9uOjKuJu88gjsWDFJjOLf0OFww93FjiJSH90F0ujPOhwO2oPUNUOkj0uH9j409jh3hJhf0rV8JfwI3PhIIOukffuhDhU2ujh80IJLU0IIuU0wF3g8iu9ik09jF09fhO90449H8uIFLOjFR9RiI84f88oug4jJ3gIhHfKHi2S0u9JFUNS0U4OsOlUIdI3JrikU80h93iRL8fu3ffuIH0U8ulo29Owh39us9I9900DjJwg90rR8JhHhw4ODuRrL8Lgg34jsN00OJhRuJJJuHjSFS0Nh8hjIfsi90494i0Low0u3w83D4j0RRuNI8JL2hKNRIRJ0dNhL83uwL9hfFIdFJO0wJF90OIP0O2Ig94Ww8NKH09w9DO3090OJ89DOo0h0L8s0f888fUdhfPWr090oI3UwNjNKfNUUW3hHifDS4Ji9IHK000hh88u09OufIKI3l0098hO0030O0I8W980lRI0J94JW9j9lRJOh4uNiJP0fhUf9h02uFoFjIOU88i4hj48W99fUjko0L8FSJ8fjf99I2oi4J0d0jPjJj9Lo8fjh0K8u9JsU3f8FUh8F3k8HFU900wU8U9Rhwfsw0U90oF84008Uf98wfRJ003w2uUhhlW0L0Vh9H9OuoIUj08L898KU9388UUh0093g3fFJU8kF3j0WFoFhL90N9KI9838IKfr9djgK08ghVFW098N89hJJ0JU3fhh0jU0F83dhgV8lVLh020L3L2VWOD3ldL080hh8JiuhUjFN9uIjwIFk2FhP8K99Joi9s093034hU3r4r0FO0j3IIK903ff08kU988ohru4whO4uRh8Iuj9oIuhFJIJwhugLF8KW0j9hwsKf3wH0J83RjN
3LhKINJK9DSUFJNU083sF03KfRhI830lj9h90i4FiJW9NRhi0suh990jJOR9woIUU99HK29R8H0J9VJh9LRf40Fshukj990IFgf902uiH009FKFJKNOU0o4fI9WhVV0ulVh3ROfiHf9FJLwI30d2ii0OOWhDwR00NN2juf8U90IPWfJ0JOFh00wsh8gg99l083lNJNuF039JFssPh09DO0O2UV9FO0guK0oLKrjklj909u880gN9iWDKl40uOhu8o0r9hwIoKN0rOk0h3S3SUhuw0F0Hrh9fIjVFjVVUNJ8VjkhU0Us0f009U8URhj4Iw89O9sod39NFhw9hHUUhfd3uR0hfhhhP9f00JVLI3O90JjORH0FFs2J8if288jd0h8hFf0j89KRNh9kN080lhURiJ0jFPjW498LuhJh9OgK0wNJ803f9hfV0O989Jhk8wf0UrKFuU9hf9LF0DdOuHI4I800j4wD0w899jF9kJ8f0u0P4iPWf8u8I9hF9Is8gHJ9hIDfI00Jd99OKsIO30UdI88800N993JsJF089OswgFiwhu9sj880l9RkwHJw0K9ULLw80h080J49o00U0Osks89N2lwIL08u0kOg800h9NWHjjgF8Lj3kKlOfRh9h0L990N8lSfLu92hjhR9hkIf093sl9uJf0s9I0IPF3Sh9IK08U9dO3S80dJI94LlsVJ0hNVuh8R909h3I8fuh408R3o89h0uW9J0sHw0RF8iD0I9OU2hH9fPS9fjPP4fUgPhi0J9jJOOu8h9Du0L0998U98UsRs99jsID408URP09jk9fiJLiiL89IhV9Ks94U2fULHr00D809JjVi94u4FI2N0INhJHu903988U98hUsRg08F08UjfJF0Uk0fhHI89oRhl9N0ufoO3Is9hhju3rIgRJ0OkPFLOh9i0JfJhOhhuuNIhI8003uL9ljH39L9h2L4SJ0Hi8uKh398K9gh9H0SI9WJK98hUjj09J00DL9URNJNihI39H0sKu00fj880HIUwFI44iNL9UshH99jwI8lH39H9NWRO9LUURK8uoS80J893lr8P399OJjJ24wV409IjN9u0I4s8IJfhfh49udF9hlj00uIsUhfNdFIkK9h999UJi8Hh0lKL04hh308UVl0J4O9g48fHh4Of909Il09higwR9I9WUR28V8OIk9K088hiih204I9o9K89dIijL000HN0RKrS94IOwWdSW8s0wiJ0JlRjVIId988Wu93Kk3u0fw8930808HsisF3LUPdNrL9LP9IJPOL083Id8wJ9h0oHJUoW0JJ38f8u3ws43JRr9Oj9OKf8oJ0ohiF898UjDuLhj0LW9U8hN2j0I2iJO038u49Phs9RUj883HOfWIO480wRUh000fj0HNh4VL9Jjh0W2FJdJJfJOR099g3gk0oI0R0Og9jU0jgiOOfOIjdULJIfI94HN9gJo0I9909O83UOJS0rJ8J0L09d9og8w090DUhfKhPshRKIIIf90Vf8Sf8I0K40us8LFj88W0Fh23922O80JNsJoKwLSwhKhW4Ir8Pi8JNhi9KUIifs88UN3j0HiH0L9NjDPsIL00s8jJh9890dUWNj9NsIhh9J9o8g00dk4s8hJH8IjhFg348W9uOgO9h9RlhHFi0WJhwdiPfKgJuuuh0hOi3Jd8hF008hjJ9j90WJhhiUWJhuOjDwHPsUw4L8wUKU0Sh8wh9h02h8KiU09j800PNu08dhlSuIN0oi9uJD38w9IR0hhdLHhJo2wJWI0RhHIhi0J4dFi3ugV0F498hhLsI0Ui0Od8jKSRghU2wP9H99J30Wj0hj8IJF8hU09FSrF0FWrjhf0Luo0I9j8V3JfuKR880j0Ofi9Su8uUj98hIuOJid3h0FwJJFhoUI909SWjoF9I90H90LowN00sU929JsjFdij0gh0NrJHHhRs9hh880FIL93h9J0038PoIh83N00J8PjUJf0899Fhi309l98IJiSISPRFL0HK8ji0k0LUjwgJ00hu48FoIPW0hHu0HDJjK40uhUOLIU8H8hN9IUf2gSNU8NNOlg8LwOw0uRjJHhSPO8NFf84Hh893hD932LV90300FUuJKh8F08uOhwK3LI9V00hI9hlRs8WL8J9IhJrDNWwHVKifUh0fNLU8h99Rhh3hs9whgJh2Jf8UhIuJLksodj0UlIuJwJ904jDjW9s0i9S90i8NK03i299w3dw3fFjHI0ssN4009KiUSKNf9hNUhKDi80i9OL0fKK8hIJgJO2ULJOgfN8h0iIIhwJJ0uf99uKIO9j80f9RuRIHIoU3PhIW3K40Jud8h902w03h09UhIO3OwJIIgHujf488HgFOhdhiis8OLf0kIJJWHJojV4Rh08ODOO8FLg9fUf98jJJ9484L8hji9J3f9ROWFiO409hP00090WU9ww9Ih3H4KDjHF8UuwWOOw00OOig3IIJfI0k0Oi098UifRPJJ4D99us9FDIgRijI9UfR84wU0F9V0hIhI8rs9KrRh0jRF9uUw8K4HhSh8UIUPsWJ0of8SsVuL900H8o0g884ukJLOfu00hl080RuhUOu83OPKH909DDf8iuurLL94h9989lOJhr8JUJL0hhjIshf09U090IFUj0880f0UJP83ojNS08880dw32040sifJl39r82w8P9h40r8OUO0WhIFljwh8KuNJf94KhI8800u9j0j40jhiRN0f9493h8hIJ9D8RhOKVHUj99ONIHROFihIRwHUhi0F9903Nf0fN0O4ROo3udf8i3hIi0F9UI8L88993hwI0UI9i8i9f89IIdhR8ffF9hIh93hhjFf8DKD0UI0j9LPIu8Nr04008990jflO3ldsFo0skhK0JjVOw8O8409sFuFJf08h0KIujrJ928i2RNHfjO88008089HON2dJu9N90hfFh9hK9UP0h088hO8f09S8Uhjo0U8g88JgFI8VIJ0UJi00F3guP9L9l8uVH0V4iih8k0fJ8OKjIO9F3JjR9ro0h9Iij09irVj0If8jU9HIuS89khJ9RUUjrh9oF8L0oNu9094FN8kh08O88jh8sFIf4I4F8FKliVh0DN09FhJuhwiu9uU9Fi8HNhjff9h80F9VK0h00NLjWdj9RJ8hJ80rKhU8Jf08ur00l888u4JOHKPOui98hUJ8PhNOh0990iu9K03IIOHROPUJwS00wuO8iuIOU90D49hS88Ld39KiNI48SRw09j9FV4u0wi0Wfj4shR3HolP9f0Jlu0IRNi8Dr308f8jhudhh49hF0II38RR80rRh9hUuh0K9IuW0jOHNkIhs88fHgu00su8Jf4UHIj9I400F09fUJo3r3SNJ038JjDLhjfUN9JlUj9UVKS0J80893ugjOUhUJ9wuDsNVi9huDRhfJlVK0KR9I998ihfsWOfULOFUhOFU3H0O0I2P4rFu9D9F9gkh8Ogj80K8s938HhKJg08HJRhKPI48hu9ODUH93VD0uk9FH0V04h89jig0fOIHOjhh0d4FIh9UIUH9Uf
Uhi98Kl9UugwUs4JuOI402IL3VJWgO0hf00g9sHH329KsSuJr902O0Phh9S0U0j0RiKhfLhs038LNOR8sh0k8lhRj0h4FN39oJ8NWlhsWJ00hf090Ihj9RNuI0J0o9uKO89h040KjJS9osjs9KJIuKr0wi3UuD2U9r990Dih99j09fj9k9jl3Su0JfkJi3JJhJh9KOUhi0WVSlUiKJSju0fLFDh99OFuJi0uHj0uUKj0fOHHfu0f00LO0082K90hR04J90hOLIhd8uh9O3LJH8s89hiL9O09jL9999f3h4NhR88U0ri9R08sKKLh0U8Lw9hUHIfi0000R90hsKHJN0wUlfgFjfhHIDU0P320NFO0ihOrO9JIVUU8000rr00UriNus84hj0OgU0dF3OHJuFD8hIuL00sJ880Vf938j09J00Hj9dR8L88I0ji9WOs99FIuh980083IVsUfdIS9rO8hhLhOsL080hO8fwjOI8gF3Ff0Lhu0JJhF8fS0DwOwKs9NoNJ3P0hhO834g4NS0h98w3i9uhj9990080LfrUo808h0R0DFhFIjw3RjfgU98i894r8R9uF9f8i90g3ifjL8uIw0JWiO09O04u8Nh0iOOV8j0I0I099909VwhJJNUFKrjdh0KhJ99HuJLo3W0N8UUI4Ishj3N9jIfS9S9FJs04fIrsl9uw99fhjRo8jO09U8JwjjjROFf0Ni8u0lU0o0u9O0V0N0w9j84W94jFh09Fhh4Oi9UHF40jU0furFsIJJfI8hO9LFUoLF4j08889V3N9FhUifjuUNRu38jJHj83huNLH80whhIKsF0JFU8I0fIij9Ph0R8fhUhN2h03hu4rKV9L0wOh3fhjU0O09R0gWJu4F0hRuVsgINsDHjWuW89098f90ljHUwUON3Kwo9fWU000fFN89RJuNJKhI003F9DSNOP09hu3oIJJNg2u880J0wh0PIsPH09P843rgIhFIIOk9i04U3hIgIIuIIs8ir892HFO2djLH9899hI93I0NO0i0JljH2RgU8h3D0ui9OfHgRRW3r0J888WINhLSUh83DKN3L3099wR9HHFUIuK9394Kl0VUJh0NJUOg0JwH98Ih8duK00iuhhUhkwoJfiL0hONf9U9uI0HU8U89NPF9HIhOf02fgoNh00R9r8ff38IswL40Hi08h98h8uJuug8Sl4IhrwfOHsi9HI8Idh8I4FNo9KIL902oUfHgVl0HI9N909SgkFDhh0JgK0lgf9F9J3009du03jfRiIw0i998O8lJ0gJ0wkRjSsJwg9lI8088FrR9wFFi3jsFNh9w0OjFJ298k04Uhu9NF9DJ08h9Us8g0hsRFWO9909hUjj9w98UNONOou9028809hIPo8fDL38sJhw8V0IwUJ4w9Hk9Hj9JoiFo89ijiL90guu3J2K0KNsNUfrOPJKuh0KH098SO0PUUs0hRNLs0u3O0j3O8uIuIsI94O0hHjVfu0oilFiJSN0Jf9890ShFJr2uds0H43D88O9IP0h98IJJOPh3ru0300L009hRhLdsOVs0hH0SO0fFiDF890o8hfg4JOj4089U0RfV3f9HjdsD808VOF0O8LwfF8ruJ80i8O0P4h8RhIOh9wjK9Nf3F298N0HIdUNNr9dkO29DOw8NhoH0SR9H2kPfhR0jP9su00L9OFkOs4jIh839Fh9I9JuOwR9iJ09SIIjI09fj0390h2U9O8UkNjdjF890hkJ908808K4UoP9uFRWL02LfHw0R4ffNIPJ0909i8u9fhkJshjh98JJN00hOjhj0Fu43VI8suH8UkwhOd243UN9J00h0hKwhfFFO8P9dN09W9hRI2IIjjJf942LRhD2UOg2fDsfuOUFF2OgluuW49j09L3gUN80h2hIjs9s8UuR0hohj99hgK0LVwO9FhNH000jLhV98sINhf89HiwK8h99KL04uUIjJuh9903sJK9Kk99D00jhIIugJ4JF8W0f999I9iFD9k0N0OI9F00OL9P03gJ8wI3L3FKW0F0I98hhsdoDS08jJ3I9LR4kLFkHhLFF9s8I09j8JDRLNhUfg0UfJsKV00J9f9KLR3soP8jRodJ9gO0h00hUu3Ojf8Dh83LHN90ws00u4u8usu8S990oLj0I3O80NFRJ0h08hjOfJu3l993Nh09hU3w98h00P9HJ9hIihouh32r98Jhh9JuOok0900Uh9fo8Rh889809wUh8Ls0i98VhHu0lU90V9JghLJW8FJUJUhiujdokhPfu8OJJ82ffrhfi9Nhijw9uwUhh92uN48FKUjNg0VRjF08IOiJUr08Fr08UuIIwDKuNP0j98SuJLdfIuOUJd80D94LlRUlN9KHRihFwI0ud900h80L8lIj9RJ0DNlwflhw9hJlK9hh9N8diDhOU0osO30VRh0L00dS98LS8KfH9NhdH0888uUIWhK80O8d8hR000sJJPNHhUfsu0ish3fU2w9UF200UJOiPJr9O0KhWSjj4RIjl0jVI9030O0hjFl99uh89W04UfO89U0iu9jODho03s9809R0Ph9loh9808V9R09fUKJj0k9R0UH9J2hjLUflf9uojLRNVOuKO0FhjIL9I93h9fUfuFJihfiFNuFdJRuU0shJh003w0ugo0Hh0RI2iD4IW000gs99h08I9PFS990S4w0HKuIW8K9909sI9lHUUL9UiiN9SOjFfkoLI89gIHiHj8hIJ8OJhij9h9OJIf0IrL89djJ0SIo9339H0iFR909II09lFK9uFIU99jfsI08ffR08s8jjIHfhfPu9909HI80889OUIO804uDW034O9d803I8Fl2ssFR0wi999S9U0k9I049douUIdIPu030FN8KS9RhIuUhV90uJl8uUIHfP9Rshj08HuISF3kj0Uh89fHuhuh89jHUfOf0w0Sk23J089s8080JkWPuuR98wUhU003k0IhwDKLh99ghL4I0VRj4ijOg8fJrLJJ9iIfH82F009UHh39h0Fjj0Ijh08498k9s3swFUF94hIO409Ni08Lwi3jh88F2hs9s48Rg03u9I9sUhuIkiL8wFudUOffS9ShJN83JjJ0ih9l80UJh0ROwH9kHhPPsr0i90fIJNN0sDISw0PshOjdFujo930RSIU04u988LhJwrU0H089Sjw9083088Oho99980FLiOIUduuuj04hJN0h4hJru9J0FDO89IuWUJ0FFghIhorJVPIgJh89WuV8iOIIhwj00U9w2I0Pr9IFj9S9UgLF9JHJ8H0KHj09jh8ir90hUi8J30PgFoiwhJK9j4HOVJf3fH39ho33IJPJj0i94L9j8NJOiD4w2R3i9LFLN9RO4u88f0s9O0LjO0Jl9Pol0Uih8wDijkJHHhN308dw99h9U3H38IK0LO0L8iFOFI0394i0hw9ruF0989h0090w8j9Fu0h9hSVUVH8hw33900UJJI0fhJFKi49u9so09NuFI0fIwDs0uJ8j0hu0ODUhhURKuFLjUN8D49j0rWsh0DkSjfuU90kJsjV90d
O9Jf3K8gwF34JOj8hiF0HJ0hU0II0092hjUI40IJIK9I8UPIoI80sLUHRif4jJ0JI9SIhiHjV8hFFsfUIjOJL3jUI04rd04JS0O9FJI8hfFIJj09kSui9w843l88h398OfJKFwoj0JlhO8iii93hhk8KU0VifUhh8KKIOhUINiURHHFOS9RKk8K0f00i8h9NOJk0kjJF00V09hwuI4JKR2R80s0K0Lf809huu9o9UNh0oshh9hP9hJOI9K0NI004jww9D3uu9wrfwuijfJU0urN038sh49JOwOi90IIUlhfSJU3i099h0R3IsjU3OuKKH9IhI0ufKhJ98HFFh00hI999JRIFwO998hDif9KUU0RfIRjwu0L8IJIJj8i009H90Iw9wsjjdOsohFO0FIJi49943KoRJ9W9rO0uUlguwH4jLFJH9kuk8sL43PjF8989K8wdWHs0U0Idwh09N0h2IFhJR0ud0D90r8jOj9OFf92UDkLH4IK88ho2iw0hkwjN89IHPiFUNRjfHhJ0NRH998h90Rf3hjwWu9jr08U92i30FfNUPk8iJfj9L9K0990O4UhfIjFJgV989LgH8V989sDd8i9u00jU0hhUIDuI8h8U9fd3hUhI9J9kFw800RkLu39OO9F30uhWF8IfhUFLR99ohrhh8h9OwUi9V3ih90L0JKDVJhOR89Ju0KhduUgNII9RN98h0OF0HfhuI9I3KI0k9WI4K8OsV9IhhIFDjhPK0u099hJKuLu84K8hfR0FOUNl0W4I93I9l9VNfDJulj90JUjJ9JL9hR9LI2JVVK9H99hJF89R0kuj3u9090iIF99NIfh0F4UFiwkFHU0S0h80OFioPJ3Ks04HhO9hjsh0oNPuR8LUU08gHu8K20PR9u0Ihj8F9fL3J98L9Jh8320SUI39889uLu8iF0sfIH0i0098l0i9u39IP3020h0FHFO8LIji38oL228842hR4shOU8duu9jdKOIUJ4i09Ohh04uH0sUjI0JH90S8S00w4JDLh0o04O9I30w0k0jj0k00sIh90kId88U0iHlkO80sj9I08LN9Jgj83R4J0h00P9wJ9Ri90fOih9h4kufiiRhFL38Dwh3sO090KOJ802RIhhUh00VK9f8f2Dl00fiF9KPRwwr9hK988KL0Ff98J9Nuu8Nrihf9LFOKu0jLj09JdhsSKHr0URofjUJUKw0JNiJ0ROiJJO9839O4WFhfhoDfIU0j0Ndui008939fIV2SN00J00FJV80VF20LwJNd38jUD0I00hL8F0jf9Ils84KswFUHO0998hu409Jd0Ljjl00Ok8O08jO3hUHKPi89I3J9989998jsrfLu8IhJ0N98hR909OO9UIW3PfU8fK9hhIssI90IINRHhOUh0iJ8j9isJJf0hh89HLfWiNj9h3W3s3urU3oi39hshw08fSL0ih839L4h22oPjUkUl9F03h900FhjViV8u8IH0R9F9kI8iJO9Okw9W0k8329r8JjOh0fKOjOjIU8sKL88h8urjIgrSF9O9hI8340KfI9wJkH0uFhdO9O0u9OU0Jr0IhR3hFJRR8uJ9hhfK98OuJJFS8sNJ90308l3gi8N9uOh9SOUfdU38FIg9393P9VwLN4lhiuFKHRUrsFNfII0oO0OJLDK09wSI3JOrUJ08hfk3UU8u9OiP2Us9P08idOFFODWOH8uikfs80O993j9uL9SNVL9J0F9hsOIuV40R80j0L9d04hh04N8gf0I9R2K3I0ws400uN98089FK8jFW2hLuOf9I999f4RUhi3L0D90JfS0lu8w0800huhSN9VFjHkIuHPFUj0hKhOgiUwSNFN0ouiI80F49h403HRUulLD02J9Wu883302UFjV39Wr9NihJ3FUN8h40K9fLJJjL989wNIH394NF8fhNjsJJfoNLr8udIj8h29UuRLHHhHJUIVJ0Ns943D8Ps8hOhjr00OVjFsj08RFkUj49099UhfKu9sHuLN0oFIhV8i99JJSfH0fs03uwwwS0f90JL0rJ39J9KWjJ9F0I0KNOjhhj4H0IKFFo8r0O0sFN0sOF4uLd9WHih8OuO0o00gffNsU39880NfI09H898U8NUW8ls0ISJ0UD00i0NL990fJuJJP9h0ghIUKIgwIIwh8I8j8FK0w003Hff00F999u98Ou9UI3JhdH9LO2V8dfJso089OHu24W3IFUihViUJOjhiwS98ijhVJIhjL0u30K98FJu0N3sh9uk0RVi23iI0Jjih8fHVl098i8hU899R99Pd8I0S8IDJSIJ8IuurfiW28833040huLj4jiI008huI00SlJ0fUhklF99Hrk2LRIKR2RfhOf8K8fK40Hh98H0wRO8O4L0UPLK4F9wO99OjNL43jf0hD9suuj9uUN9uUuUwWUj009ksUf9L00S930fUU8kR4NUdh38J0H89sf9fLr0LFgJfu0UWJRUjUUH033h0IwK8U8lR0Ul09hLOHOuwg8Rw8Ji9HgKUuHjwuRhIf0h9hKi9f0LHI8IwWNhJOOS9RhOLk9uf29U4KUIRw9JR998wRU43u0i89LifFP8L3P9h338h8Jg48FUu9diUO0PjusJiRIFkhUOV9PF9R3u0g99j9ODFiKK0jO30IROu4O9I3sjku0838W0FgFNDUJ9shu9rW8RgRi00Of9O8hg0P3j8gSK9RS0f9Ui9UshH9uJI89FofIFrjfRhIR90I0fVUh0J00I0FgufJHOdJ9ufhfk9u88IOKKOuLS9kRLfuWl0LIhRI90JJPjh00DsUhlfwUs0iJi8RFRJ9i0g999O09J09KUoJ4whh8i4u9sFu8IO8909Kfgs9j0Oj9O0k8F0Rji9u9o9Lj9j3i98iDlHjIl098IU9N0JkWUh9f0dKdfI40FjjF0Uiwu09JjjUNiFLhssJHVOsws89RS3LI0RROIUIJLwI0U44jUsrJlHjLH043j90hlLlwh0H9lLjUjL98f8L0dfJ9U2FHURH0RJ9KOw3UjF33J0LIJLH8r9h090fOROOwPHHjiiLI0sOR98K0UuUUIKJ0wh00g0JSJW0RiR898Wf8LsWUJ9Luu98Kg40fjUU00899FIJfIUh0880JUOIFWs98Uwu0iu09wFsDORgS0J0uH2I0JF099IIFKFuwdds9U88IUjsUju4hkf8N8N9R9OSuIiuOH9r9lu090jL000VI39fiF9I89f89luD0F980DSO08ihDwj9u0WN9hjSiWHUOorFL034jO8ilOK8kll4L900fRhh9h90hu9O9fuIVN8jw8shsiru9hgf99Vi08jSUW9U088w03FwRu334gO00uPu9gf0f3UlWg090jwSUVHoFO9jNKr8FH0Rsifk8J9djhO9009hIH3f483DHKJIKiHs2sU0JOj9K00hju034NjiIOhsN9sNJjuhhwOHu899lfHNKVgSKO3f8IsfgUkfjr04uh080L3HIKVfIg80H00N0huj9R9OHVRK0LIdjRULu80hU4UJ300OsUIfhUN9IUR8sF0k0fu
OP8LhHhLj0W8FUUH0U0u0sh0i0lINfFIUoR3rIw8UJjwJ8Jl3RIl8938j099800WKluuJ9Nh0DI00o480R98JhH89dhFu0IILhsU3i0Jiou0UORjh9OfOHh8r9gLhsU9or3FUksI89FFO0NOwhUKKh3fi8f84RI09jFFhFjO4R0UFuIKg8D03i403Ooi30j40IJ0NU0OLN8wD00u0w3I9O02hhjjirH0j8JJIVN0f29O3HUDjuh3K0ujku0020ikhD8J3W9O9KhUh83R09gLJ9uD99DFhL90rK0FsI0h89w0JDUuflUU40wIo30F8HOgsFRhu80IJh0FdI0U3O883l0wR9IhRV4d99HhO9FhS03dH9902wwD8slV4iNfHHshJN09LIf9f9UF9D8KJ3Hfi3w9K80ffu2FJIKK3u9OiJI90hLLuj3JhiJ0fiJhIhON0foRUO8hhU03RU9LjfFujDN4UVOJ3IKHfo8r0uN9sw3Iuj8D8230VUF82NI39JJ9ofKOu9hjPo4USUOds8joUf8OI8UV0FI9F88Ndrl48f024lLJ3FfjL0KUNfIfONS8j0Rr30H8f2HIoKs8fw94fh8UljhD8Jhr08FRfihh8WVOLIU34sFk9dkN8NV4HWF2h9uhFu03sgKV0j08FiFhI0jrUFf9OJ0huRFU9o99RHhR8Nj0O08h0099Uj0J88oOjwfhUwNijw84ONhhIOsNj99hWh9jPNOK989fOI8oUK2933j38NshRIldUPh9i9S0KOs9OJh4UI400jiiflFu0hiDJ80lwgfdjWkoR9lRhh3su90IK9P0lh0IS3L900dod9ro8099J8sKhrU8UJOs8I03kgHOJIIjhr9HLDOd9999uOh0Ig0fsHk0J8F89J8whh9DJ9NOf3JJILh2kj090P00UKIIuw9P040rUJUU9JihfJHhJ4RU93F02FH9V0Jr933j94JLs9jk9I2ih9RJ4Ig9W0jjIFhhFhkU2LoKH40J8hu0809wwShSK9juI048Nh3rihw8039wFKiPI880usoj84wuL889fJI9uhOFK8l0huhju4Fh9JhLVk38KjJfu0L0h3FNKius90Ff0oji8sUffN9L9sR8RPfIiFPu93jLr889FVHWw0HuuhsdJuJ04U38u80FluILh43UikhUwgh3JKI9R9N0908hHLIJsHSUOI9h8O09ikFIwsKDufW4OIhJ0JNNwj89fIihwO09f3h9U30FK8WHJw9R8fr9IIf0iFN890IU8gh8UNo88u0LI2UHN0U0fUf00V88wiKIFi0U9343WdI0388g02hFI0hwJfHfwsrOHHFIh9sUhhrNK9hjhl0Pj0NIhfjF9J9hihfl9H8DHJuuhIWf0HFLoUN8F3w0IRh3w8N2f03JUN3KRH99849hh34H3hIIw9iD3Hk38ksRRO0ufj3d9wjPh0h9Ui0JkOSJRKjW0FI9i29s8hFIk8SK99H8dWw89UiU0iUR0NK0uk4H9iO98L9ORhR8FPOVfFiIjU8d8VhD9w8sU20LW9NI8jWjUjhwDrhNJF8k3ihjFlU039KLJrN098i8I0UFgIJ09gO9oUNhI0l9iwIlO9UJJ43h4r0OS83uFr03hIjdiP0HJw9hU9dulKO0SFIOJKKOJJ0F98988Ifu8jo9P08S0D0OIl9JJ94Hkh900I9juouUJHNhsjjJK9SWS8FR08rUDK9hPU4OJIFhd3LURhfjufhw9I8LId9JihhV9u00J00R9ki9Dh0899ho9i90HO8rFw0UhJfu8ofoo9lOO0JN0IN0WID2i2R43LI3IfFNj09389hRU0VKLu880uog8Hh98O002sUFN9Uh99j9or8r02OS809909NWroRjKhl9f9IL8490I9kHF0f9u9JO83LJ0h8008j4S0wf9SJ99KhjF0W9DOk0goKhu90HLiJF90kUKgO99JLJ0hh90h0Dfou29kiwVI0h0uHR0H0s98Oh0H9lr9s490JHf9N4089Lih9O0Hhw20khSKKw900F3H0h3iJINU3w9WN08wRd0jRN9UWfO909J00O0ju9FIs99RJ9L300O09g99k8H9UswuWis3H094j9Nd8N8kdoh8rfKL8Io800OSiWIRrH9lR9s8Is40RFI9uF00ur8oou4flN3wd0iw0u0h0L0N9sVw0FULiui9IjI98W4kjlPjo0O0h0OK8Hi33iIju9Ruf309do8SL4dr9FIui0L3Jh88JS3F0JJg999oShNI0fouIN0r9h8HN82Fdu48sNN8449w04hIV0Ur09IJg9RhRUU9rO9080JIP048hwfJsIJ0Hs09l8hh2ff9WifR9luNi4Jj3KdfNR0F4wDuK33u8i2hh3R480H009UsiuUL4N0grf4HJsdU0LP0I0fw9O00O00kO8OIFJ3DhN9u98RH8wh9RPghOoOu9V28JiFdWhF3s0UuJd089fH289L93u8000UoIWKigVhI0g383JU9rl9N43hdJ9NkO8j0lu0hVV90LHSjUF0jj0uh0Ku808Oj20g9NF20w08u4ND0F8D8IVPshHh99932Jo9J880J0SjLwuJHDI9Ju8INRr0H9jK9OU39hsJh3ifh8hUfh0Ihh0FWoHd0OVwHh9gR800f90sf0jVh0I98lFL0OLKhRPhI08jPh40swJiU04j0F3o8JK08ND9048oU9IwOI9h9j8s90f3RPdRi0OdhsUJ3900hFi4owikK8hRUhK8Dh0JlJUILu89f8jO90jDINNW3h9g9UIFLh0IFK4PJ0808hd0l309uDsPgUuHjhlkF00F39WjL02S840Ou3KO3ihNJKoiOOlIWJ9Oo08ufs0s9jSw83L0LVwr9VK0I8wH8F9hjIOJ3V0Fik34w28IFVJwF03W988RNfF9sf0IU0H8Uwhw00F3Ihujhd0UF3I9090s809RrhdfRJNwUrOU0hfR9909ShJ3JhVsUhwJRH0jh3JF2883s80O80Oo3dhj8IJIHFWh00Ff900fWhLhHJj93J8j09J9O0S080uhK80uIN003IfK8RI00fUu8h8jF9s0N80883hH9k89O8j8IFHhSIrVILIOuwgJ0f0h0iou9H089U8290hh8i9I9o900wjsf89O0KU9w3J9020KJiJ0jVI398fJHuJJ4O4O0S9U0hkfJr9FU9f8I0Hhw990Ps4KgO3jPs8sjhF99PJuI08FIWu8fh0jPs03HjfSIK9dL0I8l0fr0N8Fh93u3F9I9skufuh80098gS008u90hiL0uUuII4Rd88JO8gLdsUKw9F9uwiljruLJ0jhiJi4j4F8U88irFjFRJUUhhFhsf0WFu0r40FLhUu9IFWhKHU0P9ji093s0Hj9HiUo00UhfO39N0jShUP900uUdj8i08g0IhIIFijN9DUH3NUHIRLiS9L3ufuU9r0i90VDoL03ODi2lu9hfNl34jFRLKk8JDw83LIh0l902h90LOf80gfK88D9I9fuJLJ9Uu09H3sN0wOhF8kh3IL8hujw9U9F8JdhP9f0j03KUNiVU8I
UR8wfhwNd0RIs8J8hrJ0lh8uS0IoOH0ffjR8SUu8H2U00829J0FhRu90h8Olf889Okhr9g9030Nh00grWho93JwuFSoJ9iJRfu8D9hK2JJukURPK29NfhDIHOu28hFjJOSN4NLHgV0Hsu0W0K9jLhfHfFOi0I49u3Ps9d00Kujh8N2hNhLlJNwLwSH098hu0PIFf0j2FLs0I9sNF0HiRi8oUjFjiI8fKNWsUuJO0hIOiuKwhNNj9VOr9P2U8UuuFJ0HwKHjr90PihHWjrRh982fhLR0KFwOj092FdDNiNKhi0j8'

# instantiating dynamodb client
session = boto3_session(region_name='eu-west-1', profile_name='perf')
dynamodb = session.client('dynamodb')

timestamp = strftime("%Y-%m-%dT-%H:%M")
results = open('dynamodb-put-results_{}.txt'.format(timestamp), 'a')
count = 0

for sid in session_ids:
    count += 1
    gen_data = ''.join(sample(generated_string, len(generated_string)))
    sleep(1)

    response = dynamodb.put_item(
        TableName='ddb-perf-testing',
        Item={
            'session_id': {'S': sid },
            'data': {'S': gen_data },
            'item_num': {'S': str(count) }
        }
    )

    results.write('Call Number: {call_num} \n'.format(call_num=count))
    results.write('Call ResponseMetadata: {metadata} \n\n'.format(metadata=response['ResponseMetadata']))

results.close()

Read from the Table(s):

  • Read 18KB per second for 3 Hours:
from boto3 import Session as boto3_session
from time import sleep, strftime
from random import choice

# delay between each iteration
iteration_delay = 1

# number of iterations - 3 hours at one request per second
iterations = 10800

# session ids that will be fetched in a random.choice order
session_ids = [
    '77c81e29-c86a-411e-a5b3-9a8fb3b2595f',
    'b9a2b8ee-17ab-423c-8dbc-91020cd66097',
    'cbe01734-c506-4998-8727-45f1aa0de7e3',
    'e789f69b-420b-4e6d-9095-cd4482820454',
    'c808a4e6-311e-48d2-b3fd-e9b0602a16ac',
    '2ddf0416-6206-4c95-b6e5-d88b5325a7b1',
    'e8157439-95f4-49a9-91e3-d1afc60a812f',
    'f032115b-b04f-423c-9dfe-e004445b771b',
    'dd6904c5-b65b-4da4-b0b2-f9e1c5895086',
    '075e59be-9114-447b-8187-a0acf1b2f127'
]

# instantiating dynamodb client
session = boto3_session(region_name='eu-west-1', profile_name='perf')
dynamodb = session.client('dynamodb')
dynamodb_table = 'ddb-perf-testing'

timestamp = strftime("%Y-%m-%dT-%H:%M")
results = open('dynamodb-results_{}.txt'.format(timestamp), 'a')

for iteration in range(iterations):
    count = iteration + 1
    print(count)
    sleep(iteration_delay)

    response = dynamodb.get_item(
        TableName=dynamodb_table,
        Key={'session_id': {'S': choice(session_ids)}},
        ConsistentRead=False
    )

    results.write('Call Number: {cur_iter}/{max_iter} \n'.format(cur_iter=count, max_iter=iterations))
    results.write('Call Item Response => Key: {attr_id}, Key Number:{attr_num} \n'.format(attr_id=response['Item']['session_id']['S'], attr_num=response['Item']['item_num']['S']))
    results.write('Call ResponseMetadata: {metadata} \n\n'.format(metadata=response['ResponseMetadata']))

results.close()

Results

Notes from AWS Support:

Reasons for High Latencies:

  • RequestLatency is a server-side metric
  • Long requests could relate to metadata lookups
  • When executing a relatively low number of requests, metadata lookups happen more frequently, which may cause spikes in latency
  • Strongly consistent requests can have a higher average latency than eventually consistent reads (see the sketch after this list)
  • Requests in general can encounter higher than normal latency at times, due to network issues, storage node issues or metadata issues
  • The p90 latency should still be single-digit milliseconds
  • Using encryption means interacting with the KMS API as well (mechanisms are in place to handle the KMS integration while still offering a p90 under 10 ms)
  • DAX: strongly consistent reads will be passed on to DynamoDB and not handled by the cache
  • 1 RCU can read 8 KB per second when reading in an eventually consistent manner
  • A strongly consistent read costs double an eventually consistent read
  • With DynamoDB, not 100% of requests will be under 10 ms
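
Since RequestLatency is measured on the server side, it can also help to time calls from the client and compare eventually consistent against strongly consistent reads. Below is a minimal sketch, reusing the ddb-perf-testing table, the perf profile and one of the session ids from the scripts above; the numbers include network round-trip time, so they are not directly comparable to the server-side metric:

from time import time
from boto3 import Session as boto3_session

session = boto3_session(region_name='eu-west-1', profile_name='perf')
dynamodb = session.client('dynamodb')

key = {'session_id': {'S': '77c81e29-c86a-411e-a5b3-9a8fb3b2595f'}}

# time one eventually consistent and one strongly consistent read of the same item
for consistent_read in (False, True):
    start = time()
    dynamodb.get_item(
        TableName='ddb-perf-testing',
        Key=key,
        ConsistentRead=consistent_read
    )
    elapsed_ms = (time() - start) * 1000
    print('ConsistentRead={c}: {t:.2f} ms (client side)'.format(c=consistent_read, t=elapsed_ms))
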

Resources:

  • https://aws.amazon.com/blogs/developer/tuning-the-aws-sdk-for-java-to-improve-resiliency/
  • https://aws.amazon.com/blogs/developer/enabling-metrics-with-the-aws-sdk-for-java/
  • https://en.wikipedia.org/wiki/Eventual_consistency

Give Your Database a Break and Use Memcached to Return Frequently Accessed Data

So let’s take this scenario:

Your database is getting hammered with requests and building up load over time. We would like to place a caching layer in front of the database that returns frequently accessed data from the cache, reducing traffic to the database and improving the performance of our application.

The Scenario:

Our scenario will be very simple for this demonstration:

  • Database will be using SQLite with product information (product_name, product_description)
  • Caching Layer will be Memcached
  • Our client will be written in Python; it checks whether the product name is in the cache. If not, a GET_MISS is returned, the data is fetched from the database, returned to the client and saved to the cache
  • The next time the item is read, a GET_HIT is received and the item is delivered to the client directly from the cache

SQL Database:

As mentioned, we will be using SQLite for this demonstration.

Create the table and populate it with some very basic data:

$ sqlite3 db.sql -header -column
SQLite version 3.16.0 2016-11-04 19:09:39
Enter ".help" for usage hints.

sqlite> create table products (product_name STRING(32), product_description STRING(32));
sqlite> insert into products values('apple', 'fruit called apple');
sqlite> insert into products values('guitar', 'musical instrument');

Read all the data from the table:

sqlite> select * from products;
product_name  product_description
------------  -------------------
apple         fruit called apple
guitar        musical instrument
sqlite> .exit

Run a Memcached Container:

We will use docker to run a memcached container on our workstation:

$ docker run -itd --name memcached -p 11211:11211 rbekker87/memcached:alpine

Our Application Code:

I will use pymemcache as our client library. Install:

$ virtualenv .venv && source .venv/bin/activate
$ pip install pymemcache
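
Before wiring the cache into the application, a quick round trip can confirm that the memcached container is reachable. A minimal sketch, where the ping key is just a throwaway test value:

from pymemcache.client import base

# quick round trip against the memcached container started above
client = base.Client(('localhost', 11211))
client.set('ping', 'pong')
print(client.get('ping'))  # pymemcache returns bytes, so this prints b'pong'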

Our application code, which will be in Python:

import sqlite3 as sql
from pymemcache.client import base

product_name = 'guitar'

# connect to memcached and attempt to fetch the product from the cache
client = base.Client(('localhost', 11211))
result = client.get(product_name)

def query_db(product_name):
    # fall back to sqlite and fetch the product description,
    # using a parameterized query instead of string formatting
    db_connection = sql.connect('db.sql')
    c = db_connection.cursor()
    try:
        c.execute('select product_description from products where product_name = ?', (product_name,))
        data = c.fetchone()[0]
        db_connection.close()
    except Exception:
        data = 'invalid'
    return data

if result is None:
    print("got a miss, need to get the data from db")
    result = query_db(product_name)
    if result == 'invalid':
        print("requested data does not exist in db")
    else:
        print("returning data to client from db")
        print("=> Product: {p}, Description: {d}".format(p=product_name, d=result))
        print("setting the data to memcache")
        client.set(product_name, result)

else:
    print("got the data directly from memcache")
    print("=> Product: {p}, Description: {d}".format(p=product_name, d=result))

Explanation:

  • We have a function that takes the product name as an argument, makes the call to the database and returns the description of that product
  • We make a get operation to memcached; if nothing is returned, we know the item does not exist in our cache
  • In that case we call our function to get the data from the database and return it directly to our client, and
  • Save it to the cache in memcached, so the next time the same product is queried it is delivered directly from the cache (the same flow is wrapped into a single helper function in the sketch after this list)
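
As a variation, the same cache-aside flow can be collapsed into a single helper. Below is a minimal sketch with a hypothetical get_product() function; it is not part of app.py above:

import sqlite3 as sql
from pymemcache.client import base

client = base.Client(('localhost', 11211))

def get_product(product_name):
    # try the cache first
    cached = client.get(product_name)
    if cached is not None:
        return cached.decode()  # pymemcache returns bytes

    # cache miss: read from the database
    db_connection = sql.connect('db.sql')
    c = db_connection.cursor()
    c.execute('select product_description from products where product_name = ?', (product_name,))
    row = c.fetchone()
    db_connection.close()
    if row is None:
        return None

    # populate the cache for the next request
    client.set(product_name, row[0])
    return row[0]

print(get_product('guitar'))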

The Demo:

Our product name is guitar. Let's request the product; since this is the first time, memcached won't have the item in its cache:

$ python app.py
got a miss, need to get the data from db
returning data to client from db
=> Product: guitar, Description: musical instrument
setting the data to memcache

From the output we can see that the item was delivered from the database and saved to the cache. Let's request that same product again and observe the behavior:

$ python app.py
got the data directly from memcache
=> Product: guitar, Description: musical instrument

When our cache instance gets rebooted we will lose the data that is in the cache, but since the source of truth lives in our database, the data will be re-added to the cache as it is requested. That is one good reason not to rely on a cache service as your primary data source.
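
If you would rather have cached entries refresh on their own instead of living until a reboot or eviction, pymemcache's set() accepts an expire argument in seconds. A minimal sketch, assuming a 60-second TTL is acceptable for product data:

from pymemcache.client import base

client = base.Client(('localhost', 11211))
# cache the product description for 60 seconds; after that, the next
# request will miss the cache and re-read the value from the database
client.set('guitar', 'musical instrument', expire=60)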

What if the product we request is in neither our cache nor our database? Let's say the product tree:

$ python app.py
got a miss, need to get the data from db
requested data does not exist in db

This was a really simple scenario, but when working with large amounts of data, caching can give you a significant performance benefit.

Resources: