Ruan Bekker's Blog

From a Curious mind to Posts on Github

Using Elasticdump to Backup Elasticsearch Indexes to JSON

We will use Elasticdump to dump data from Elasticsearch to JSON files on disk, delete the index, and then restore the data back to Elasticsearch.

Install Elasticdump:

$ docker run -it node:alpine sh
$ npm install elasticdump -g
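
Note: the dumps later in this post are written to /opt/backup/elasticsearch. If you run elasticdump inside a container as above and want the dump files on the host, you could bind-mount that directory when starting the container, for example:

$ docker run -it -v /opt/backup/elasticsearch:/opt/backup/elasticsearch node:alpine sh
$ npm install elasticdump -g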

Create an Index:

$ curl -XPUT http://10.79.2.193:9200/test-index
{"acknowledged":true}

Ingest Some Data into the Index:

$ curl -XPUT http://10.79.2.193:9200/test-index/docs/doc1 -d '{"name": "ruan", "age": 30}'
{"_index":"test-index","_type":"docs","_id":"doc1","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}

$ curl -XPUT http://10.79.2.193:9200/test-index/docs/doc2 -d '{"name": "stefan", "age": 29}'
{"_index":"test-index","_type":"docs","_id":"doc2","_version":1,"_shards":{"total":2,"successful":1,"failed":0},"created":true}

Using Elasticdump to Dump the Data

First dump the mappings:

$ elasticdump --input=http://10.79.2.193:9200/test-index --output=/opt/backup/elasticsearch/es_test-index_mapping.json --type=mapping
Mon, 26 Jun 2017 14:15:34 GMT | starting dump
Mon, 26 Jun 2017 14:15:34 GMT | got 1 objects from source elasticsearch (offset: 0)
Mon, 26 Jun 2017 14:15:34 GMT | sent 1 objects to destination file, wrote 1
Mon, 26 Jun 2017 14:15:34 GMT | got 0 objects from source elasticsearch (offset: 1)
Mon, 26 Jun 2017 14:15:34 GMT | Total Writes: 1
Mon, 26 Jun 2017 14:15:34 GMT | dump complete

Then dump the data:

$ elasticdump --input=http://10.79.2.193:9200/test-index --output=/opt/backup/elasticsearch/es_test-index.json --type=data
Mon, 26 Jun 2017 14:15:43 GMT | starting dump
Mon, 26 Jun 2017 14:15:43 GMT | got 2 objects from source elasticsearch (offset: 0)
Mon, 26 Jun 2017 14:15:43 GMT | sent 2 objects to destination file, wrote 2
Mon, 26 Jun 2017 14:15:43 GMT | got 0 objects from source elasticsearch (offset: 2)
Mon, 26 Jun 2017 14:15:43 GMT | Total Writes: 2
Mon, 26 Jun 2017 14:15:43 GMT | dump complete
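
Elasticdump has a couple of other useful flags for larger indexes, such as --limit to control how many documents are moved per batch, and --searchBody to dump only the documents matching a query. A sketch against the same index:

$ elasticdump --input=http://10.79.2.193:9200/test-index --output=/opt/backup/elasticsearch/es_test-index_ruan.json --type=data --limit=1000 --searchBody='{"query": {"term": {"name": "ruan"}}}'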

Preview the Metadata

$ cat /opt/backup/elasticsearch/es_test-index_mapping.json | python -m json.tool
{
    "test-index": {
        "mappings": {
            "docs": {
                "properties": {
                    "age": {
                        "type": "long"
                    },
                    "name": {
                        "type": "string"
                    }
                }
            }
        }
    }
}

Preview the Data

$ cat /opt/backup/elasticsearch/es_test-index.json | jq
{
  "_index": "test-index",
  "_type": "docs",
  "_id": "doc1",
  "_score": 1,
  "_source": {
    "name": "ruan",
    "age": 30
  }
}
{
  "_index": "test-index",
  "_type": "docs",
  "_id": "doc2",
  "_score": 1,
  "_source": {
    "name": "stefan",
    "age": 29
  }
}

Restore The Data

Let's test the restore part. Go ahead and delete the index:

$ curl -XDELETE http://10.79.2.193:9200/test-index
{"acknowledged":true}

Restore the Index by Importing the Mapping:

$ elasticdump --input=/opt/backup/elasticsearch/es_test-index_mapping.json --output=http://10.79.2.193:9200/test-index --type=mapping
Mon, 26 Jun 2017 14:51:48 GMT | starting dump
Mon, 26 Jun 2017 14:51:48 GMT | got 1 objects from source file (offset: 0)
Mon, 26 Jun 2017 14:51:48 GMT | sent 1 objects to destination elasticsearch, wrote 1
Mon, 26 Jun 2017 14:51:48 GMT | got 0 objects from source file (offset: 1)
Mon, 26 Jun 2017 14:51:48 GMT | Total Writes: 1
Mon, 26 Jun 2017 14:51:48 GMT | dump complete

Verify that the Index Exists:

$ curl -s -XGET http://10.79.2.193:9200/_cat/indices?v | grep -E '(docs.count|test)'
health status index                     pri rep docs.count docs.deleted store.size pri.store.size
yellow open   test-index                  5   1          0            0       650b           650b

Restore the Data for the Index:

Use elasticdump to restore the data from json to elasticsearch:

$ elasticdump --input=/opt/backup/elasticsearch/es_test-index.json --output=http://10.79.2.193:9200/test-index --type=data
Mon, 26 Jun 2017 14:53:56 GMT | starting dump
Mon, 26 Jun 2017 14:53:56 GMT | got 2 objects from source file (offset: 0)
Mon, 26 Jun 2017 14:53:56 GMT | sent 2 objects to destination elasticsearch, wrote 2
Mon, 26 Jun 2017 14:53:56 GMT | got 0 objects from source file (offset: 2)
Mon, 26 Jun 2017 14:53:56 GMT | Total Writes: 2
Mon, 26 Jun 2017 14:53:56 GMT | dump complete

Verify that the Documents were Ingested:

$ curl -s -XGET http://10.79.2.193:9200/_cat/indices?v | grep -E '(docs.count|test)'
health status index                     pri rep docs.count docs.deleted store.size pri.store.size
yellow open   test-index                  5   1          2            0       650b           650b

Preview the Data from Elasticsearch:

$ curl -s -XGET http://10.79.2.193:9200/test-index/_search?pretty

{
  "took" : 3,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "test-index",
      "_type" : "docs",
      "_id" : "doc1",
      "_score" : 1.0,
      "_source" : {
        "name" : "ruan",
        "age" : 30
      }
    }, {
      "_index" : "test-index",
      "_type" : "docs",
      "_id" : "doc2",
      "_score" : 1.0,
      "_source" : {
        "name" : "stefan",
        "age" : 29
      }
    } ]
  }
}

Routing Web Traffic With a SOCKS Tunnel

I wanted to access a non-standard HTTP port on one of my RaspberryPi hosts, which was not directly reachable from the Internet, so I chose to establish a SOCKS tunnel to achieve that.

Web Application on my LAN

Getting my RaspberryPi’s Private IP Address:

$ ifconfig eth0 | grep 'inet 192' | awk '{print $2}'
192.168.1.118

For demonstration purposes, I will use Python’s SimpleHTTPServer:

$ mkdir web
$ cd web
$ echo 'yeehaa' > index.html
$ python -m SimpleHTTPServer 5050
Serving HTTP on 0.0.0.0 port 5050 ...

Establish the SOCKS Tunnel

From my laptop, I establish the SOCKS tunnel with SSH; you can use -f to fork it into the background:

$ ssh -D 8157 -CqN user@home.domain.com

Configure your Browser:

Configure your browser to Proxy via:

  • Host: localhost
  • Port: 8157

Now when you access the destination host's private IP, you will get a response:

Browse to http://192.168.1.118:5050/ and in my case my response is:
-> yeehaa
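
You can also test the tunnel from the command line, as curl supports SOCKS proxies. Assuming the same tunnel on port 8157:

$ curl --socks5-hostname localhost:8157 http://192.168.1.118:5050/
yeehaa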

Local Dev Environment With Docker MySQL and Adminer WebUI With Docker Compose

Let's set up a local development environment with Docker, MySQL and the Adminer WebUI, using Docker Compose.

Docker Compose File:

Let’s look at our docker-compose file:

version: '3.2'

services:
  debug-client:
    image: alpine:edge
    volumes:
      - type: bind
        source: ./workspace
        target: /root/workspace
    networks:
      - docknet
    command: ping 127.0.0.1

  db:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      - docknet
    volumes:
      - type: volume
        source: dbdata
        target: /var/lib/mysql

  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    networks:
      - docknet

networks:
    docknet:
        external: true

volumes:
  dbdata:
    external: true

The environment variables for the MySQL Docker image are:

- MYSQL_ROOT_PASSWORD
- MYSQL_DATABASE
- MYSQL_USER, MYSQL_PASSWORD
- MYSQL_ALLOW_EMPTY_PASSWORD
- MYSQL_RANDOM_ROOT_PASSWORD
- MYSQL_ONETIME_PASSWORD

More info can be viewed on this resource: hub.docker.com/_/mysql/

Pre-Requirements:

Let's create our pre-requirements:

  1. Networks:

$ docker network create docknet

  2. Volumes:

Our Volume for MySQL so that we have persistent data:

$ docker volume create dbdata

Our workspace directory that will be persistent in our debug-client alpine container:

$ mkdir -p workspace/python

Launching our Services:

Let’s launch our services:

$ docker-compose -f mysql-compose.yml up -d
Creating mysql_db_1 ...
Creating mysql_adminer_1
Creating mysql_debug-client_1

Listing our Containers:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                      NAMES
e05804ab6d64        alpine:edge         "ping 127.0.0.1"         21 seconds ago      Up 4 seconds                                   mysql_debug-client_1
c052ceeb6d3b        mysql               "docker-entrypoint..."   21 seconds ago      Up 5 seconds        3306/tcp                   mysql_db_1
2b0446daab4c        adminer             "entrypoint.sh doc..."   26 seconds ago      Up 5 seconds        0.0.0.0:8080->8080/tcp     mysql_adminer_1

Using the Debug Container:

I will use the debug container as the client to connect to the internal services, for example, using the mysql client:

$ apk update
$ apk add mysql-client
$ mysql -h db -u root -pexample
MySQL [(none)]>
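
From the same container you can also run one-off queries without an interactive session, for example a quick sanity check that the server is responding:

$ mysql -h db -u root -pexample -e 'SHOW DATABASES;'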

Also, you will find the persistent data directory for our workspace:

$ ls /root/workspace/
python

Accessing the MySQL WebUI: Adminer

Access the service via the exposed endpoint:

  • http://localhost:8080/

The login view:

Creating the Table:

Deleting the Environment:

The External Resources will not be deleted:

$ docker-compose -f mysql-compose.yml down
Removing mysql_debug-client_1 ... done
Removing mysql_db_1           ... done
Removing mysql_adminer_1      ... done
Network docknet is external, skipping
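
If you do want to clean up the external resources as well, you can remove them manually:

$ docker network rm docknet
$ docker volume rm dbdata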

Setup a Concourse-CI Server on Ubuntu 16

Concourse is a Pipeline Based Continuous Integration system written in Go.

What is Concourse CI:

Concourse CI is a Continuous Integration platform. Concourse enables you to construct pipelines with a YAML configuration, consisting of three core concepts: tasks, resources, and the jobs that compose them. For more information about this, have a look at their docs.

What will we be doing today

We will set up a Concourse server on Ubuntu 16.04 and run the traditional Hello, World pipeline.

Setup the Server:

Concourse needs PostgreSQL 9.3+:

$ apt update && apt upgrade -y
$ apt install postgresql postgresql-contrib -y
$ systemctl enable postgresql

Create the Database and User for Concourse on Postgres:

$ sudo -u postgres createuser concourse
$ sudo -u postgres createdb --owner=concourse atc

Download the Concourse and Fly CLI Binaries:

$ wget https://github.com/concourse/concourse/releases/download/v4.2.2/concourse_linux_amd64
$ wget https://github.com/concourse/concourse/releases/download/v4.2.2/fly_linux_amd64
$ chmod +x concourse_linux_amd64 fly_linux_amd64
$ mv concourse_linux_amd64 /usr/bin/concourse
$ mv fly_linux_amd64 /usr/bin/fly

Create the Encryption Keys:

$ mkdir /etc/concourse
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/tsa_host_key
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/worker_key
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/session_signing_key
$ cp /etc/concourse/worker_key.pub /etc/concourse/authorized_worker_keys

Concourse Web Process Configuration:

$ cat /etc/concourse/web_environment

CONCOURSE_ADD_LOCAL_USER=ruan:pass
CONCOURSE_SESSION_SIGNING_KEY=/etc/concourse/session_signing_key
CONCOURSE_TSA_HOST_KEY=/etc/concourse/tsa_host_key
CONCOURSE_TSA_AUTHORIZED_KEYS=/etc/concourse/authorized_worker_keys
CONCOURSE_POSTGRES_HOST=127.0.0.1
CONCOURSE_POSTGRES_USER=concourse
CONCOURSE_POSTGRES_PASSWORD=concourse
CONCOURSE_POSTGRES_DATABASE=atc
CONCOURSE_MAIN_TEAM_LOCAL_USER=ruan
CONCOURSE_EXTERNAL_URL=http://10.20.30.40:8080

Concourse Worker Process Configuration:

$ cat /etc/concourse/worker_environment

CONCOURSE_WORK_DIR=/var/lib/concourse
CONCOURSE_TSA_HOST=127.0.0.1:2222
CONCOURSE_TSA_PUBLIC_KEY=/etc/concourse/tsa_host_key.pub
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/etc/concourse/worker_key

Create a Concourse user:

$ mkdir /var/lib/concourse
$ sudo adduser --system --group concourse
$ sudo chown -R concourse:concourse /etc/concourse /var/lib/concourse
$ sudo chmod 600 /etc/concourse/*_environment

Create SystemD Unit Files, first for the Web Service:

$ cat /etc/systemd/system/concourse-web.service

[Unit]
Description=Concourse CI web process (ATC and TSA)
After=postgresql.service

[Service]
User=concourse
Restart=on-failure
EnvironmentFile=/etc/concourse/web_environment
ExecStart=/usr/bin/concourse web

[Install]
WantedBy=multi-user.target

Then the SystemD Unit File for the Worker Service:

$ cat /etc/systemd/system/concourse-worker.service

[Unit]
Description=Concourse CI worker process
After=concourse-web.service

[Service]
User=root
Restart=on-failure
EnvironmentFile=/etc/concourse/worker_environment
ExecStart=/usr/bin/concourse worker

[Install]
WantedBy=multi-user.target

Create a postgres password for the concourse user:

$ cd /home/concourse/
$ sudo -u concourse psql atc
atc=> ALTER USER concourse WITH PASSWORD 'concourse';
atc=> \q
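
To confirm that password authentication works over TCP (which is how Concourse will connect to Postgres), you can run a quick test query:

$ PGPASSWORD=concourse psql -h 127.0.0.1 -U concourse -d atc -c 'SELECT 1;'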

Start and Enable the Services:

$ systemctl start concourse-web concourse-worker
$ systemctl enable concourse-web concourse-worker postgresql
$ systemctl status concourse-web concourse-worker

$ systemctl is-active concourse-worker concourse-web
active
active

The listening ports should more or less look like the following:

$ netstat -tulpn

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:7777          0.0.0.0:*               LISTEN      4530/concourse
tcp        0      0 127.0.0.1:7788          0.0.0.0:*               LISTEN      4530/concourse
tcp        0      0 127.0.0.1:8079          0.0.0.0:*               LISTEN      4525/concourse
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1283/sshd
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      4047/postgres
tcp6       0      0 :::36159                :::*                    LISTEN      4525/concourse
tcp6       0      0 :::46829                :::*                    LISTEN      4525/concourse
tcp6       0      0 :::2222                 :::*                    LISTEN      4525/concourse
tcp6       0      0 :::8080                 :::*                    LISTEN      4525/concourse
tcp6       0      0 :::22                   :::*                    LISTEN      1283/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*                           918/dhclient
udp        0      0 0.0.0.0:42165           0.0.0.0:*                           4530/concourse

Client Side:

I will be using the fly CLI from a Mac, so first we need to download it:

$ wget https://github.com/concourse/concourse/releases/download/v4.2.2/fly_darwin_amd64
$ chmod +x fly_darwin_amd64
$ alias fly='./fly_darwin_amd64'

Next, we need to set up our Concourse target by authenticating against our Concourse endpoint. Let's set up our target with the name ci:

$ fly -t ci login -c http://10.20.30.40:8080
logging in to team 'main'

username: ruan
password:

target saved

Let's list our targets:

$ fly targets
name  url                        team  expiry
ci    http://10.20.30.40:8080    main  Wed, 08 Nov 2017 15:32:59 UTC

Listing Registered Workers:

$ fly -t ci workers
name              containers  platform  tags  team  state    version
ip-172-31-12-134  0           linux     none  none  running  1.2

Listing Active Containers:

$ fly -t ci containers
handle                                worker            pipeline     job            build #  build id  type   name                  attempt

Hello World Pipeline:

Let's create a basic pipeline that will print out Hello, World!:

Our hello-world.yml

jobs:
- name: my-job
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: alpine
          tag: edge
      run:
        path: /bin/sh
        args:
        - -c
        - |
          echo "============="
          echo "Hello, World!"
          echo "============="

Applying the configuration to our pipeline:

$ fly -t ci set-pipeline -p yeeehaa -c hello-world.yml
jobs:
  job my-job has been added:
    name: my-job
    plan:
    - task: say-hello
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: alpine
            tag: edge
        run:
          path: /bin/sh
          args:
          - -c
          - |
            echo "============="
            echo "Hello, World!"
            echo "============="

apply configuration? [yN]: y
pipeline created!
you can view your pipeline here: http://10.20.30.40:8080/teams/main/pipelines/yeeehaa

the pipeline is currently paused. to unpause, either:
  - run the unpause-pipeline command
  - click play next to the pipeline in the web ui

We can browse to the WebUI to unpause the pipeline, but since I like to do everything on the CLI as far as possible, I will unpause the pipeline via the CLI:

$ fly -t ci unpause-pipeline -p yeeehaa
unpaused 'yeeehaa'

Now our pipeline is unpaused, but since we did not specify any triggers, we need to trigger the pipeline manually. You can do this via the WebUI: select your pipeline, which in this case is named yeeehaa, select the job, my-job, and hit the + sign to trigger the pipeline.

I will be using the CLI:

$ fly -t ci trigger-job --job yeeehaa/my-job
started yeeehaa/my-job #1
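
If a build is already running, you can also attach to its output with fly watch, for example:

$ fly -t ci watch --job yeeehaa/my-job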

Via the WebUI on http://10.20.30.40:8080/teams/main/pipelines/yeeehaa/jobs/my-job/builds/1 you should see the Hello, World! output. We also have the option to see the output via the CLI, so let's trigger the job again, this time passing the --watch flag:

$ fly -t ci trigger-job --job yeeehaa/my-job --watch
started yeeehaa/my-job #2

initializing
running /bin/sh -c echo "============="
echo "Hello, World!"
echo "============="

=============
Hello, World!
=============
succeeded

Listing our Workers and Containers again:

$ fly -t ci workers
name              containers  platform  tags  team  state    version
ip-172-31-12-134  2           linux     none  none  running  1.2

$ fly -t ci containers
handle                                worker            pipeline     job         build #  build id  type   name           attempt
36982955-54fd-4c1b-57b8-216486c58db8  ip-172-31-12-134  yeeehaa      my-job      2        729       task   say-hello      n/a

Installing Elastalert for Elasticsearch on Amazon Linux

Elastalert, a service for Alerting with Elasticsearch:

Setting up Elastalert

We will set up Elastalert for Elasticsearch on Amazon Linux, which is a RHEL-based distribution.

Setting up dependencies

$ sudo su
# yum update -y
# yum install git python-devel lib-devel libevent-devel bzip2-devel openssl-devel ncurses-devel zlib zlib-devel xz-devel gcc -y
# yum install python-setuptools -y
# easy_install pip
# pip install virtualenv
# virtualenv .venv
# source .venv/bin/activate
# pip install pip --upgrade
# pip install setuptools --upgrade

Clone Elastalert Repository and Install Dependencies:

$ cd /opt/
$ git clone https://github.com/Yelp/elastalert
$ cd elastalert/
$ pip install -r requirements.txt

Configs:

$ cp config.yaml.example config.yaml
$ vim config.yaml
$ vim example_rules/example_frequency.yaml

After opening the config, populate the configuration where needed.
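
For reference, the fields you will typically need to touch in config.yaml are the Elasticsearch endpoint, the rules folder and the run interval. A minimal sketch, assuming Elasticsearch is reachable on localhost:9200:

$ cat config.yaml

rules_folder: example_rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: localhost
es_port: 9200
writeback_index: elastalert_status
alert_time_limit:
  days: 2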

Installation of elastalert:

$ python setup.py install
$ elastalert-create-index
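
Before running the daemon, you can dry-run a single rule against your cluster with the bundled test tool, which validates the rule without sending any real alerts:

$ elastalert-test-rule example_rules/example_frequency.yaml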

Running elastalert:

$ python -m elastalert.elastalert --verbose --rule example_frequency.yaml
INFO:elastalert:Starting up

Systemd Unit File:

# /etc/systemd/system/elastalert.service
[Unit]
Description=Elastalert
# executed after this
After=syslog.target
After=network.target

[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/opt/elastalert
Environment="SOME_KEY_1=value" "SOME_KEY_2=value2"
# restart on unexpected exits
Restart=always
# first argument must be an absolute path, rest are arguments to it
ExecStart=/usr/bin/python -m elastalert.elastalert --verbose --rule example_frequency.yaml
# startup/shutdown grace period
TimeoutSec=60

[Install]
# executed before this
WantedBy=multi-user.target
# Thanks:
# https://cloudership.com/blog/2016/4/8/init-scripts-for-web-apps-on-linux-and-why-you-should-be-using-them

Reload, enable and start:

$ systemctl daemon-reload
$ systemctl enable elastalert.service
$ systemctl start elastalert.service

Linux Shell Commands With the Python Commands Module

Using Python to Execute Shell Commands in Linux

Note that the commands module is Python 2 only; it was removed in Python 3 in favour of the subprocess module.

Status Code and Output:

Getting the Status Code and the Output:

>>> import commands
>>> commands.getstatusoutput('echo foo')
(0, 'foo')

>>> status, output = commands.getstatusoutput('echo foo')
>>> print(status)
0
>>> print(output)
foo

Command Output Only:

Only getting the Shell Output:

>>> import commands
>>> commands.getoutput('echo foo')
'foo'

Basic Script

Test file with one line of data:

$ cat file.txt
test-string

Our very basic Python script:

import commands

status = None
output = None

status, output = commands.getstatusoutput('cat file.txt')
print("Status: {}, Output: {}".format(status, output))

Running the script:

$ python script.py
Status: 0, Output: test-string

Using Python to Query MySQL Database With MySQLdb Library

A quick post to demonstrate how to use Python to query data from MySQL. We will use the MySQL Docker image for the demonstration.

Provision MySQL

We will use the latest mysql image, use the environment variable to pass the root password, and expose the mysql port:

$ docker run -itd -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql

Populate some data in MySQL

Connect to MySQL:

$ mysql -h 127.0.0.1 -u root -ppassword

Create some test data:

mysql> create database foo;
mysql> use foo;
mysql> create table bar (name VARCHAR(20), surname VARCHAR(20));
mysql> insert into bar values('ruan', 'bekker');
mysql> insert into bar values('stefan', 'bester');
mysql> insert into bar values('peter', 'williams');
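
You can quickly verify the data from the shell before moving on to Python:

$ mysql -h 127.0.0.1 -u root -ppassword foo -e 'SELECT * FROM bar;'
+--------+----------+
| name   | surname  |
+--------+----------+
| ruan   | bekker   |
| stefan | bester   |
| peter  | williams |
+--------+----------+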

Python with MySQL: Setup the Environment

We will use virtualenv to create a virtual environment to keep our installation isolated from the rest of our system. Install virtualenv:

$ pip install virtualenv

Create a virtual environment and install the required dependency:

$ virtualenv venv-mysql
$ source venv-mysql/bin/activate
(venv-mysql) pip install MySQL-python

Python with MySQL: Develop the Client

>>> import MySQLdb
>>> db = MySQLdb.connect('127.0.0.1', 'root', 'password', 'foo')
>>> con = db.cursor()
>>> con.execute("SELECT * from bar")
3L
>>> rows = con.fetchall()
>>> for row in rows:
...     print(row[0], row[1])
...
('ruan', 'bekker')
('stefan', 'bester')
('peter', 'williams')
>>> exit()

Your First Hello World App With Golang

So everyone has been saying how awesome Golang is, and at this moment, I am quite curious to fiddle with it.

Golang Environment: Golang Docker Image

A quick way to get a Golang environment is to use Docker. We will be using the Alpine tag:

$ docker run -it golang:alpine sh

Our Basic App

After we are in our container, let's write our first Hello World app:

app.go
package main

import "fmt"

func main() {
  fmt.Println("Hello, World!")
}

Running our App:

Using golang to run our app:

$ go run app.go
Hello, World!

We can also build our app to create an executable binary:

$ go build app.go

You will find that an executable binary named app is placed in the current working directory. Let's execute it:

$ ./app
Hello, World!
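
A nice property of Go is easy cross-compilation: by setting the GOOS and GOARCH environment variables you can build a binary for another OS or architecture. For example, to target Linux on amd64 explicitly:

$ GOOS=linux GOARCH=amd64 go build -o app-linux-amd64 app.go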

This was a very basic example, but I will add more examples as I learn the language.

New Posts on Github Pages With Octopress Not Showing on Your Blog

So today I had an issue where new posts that were generated and pushed to GitHub were not being displayed on my blog.

I was able to see the markdown pages in my GitHub repository, but the blog itself was returning 404s.

The Issue:

When I did a rake generate I found the following error:

jekyll 2.5.3 | Error: invalid byte sequence in US-ASCII

Resolving the Issue:

After running the following, I was able to get rid of the error, and the posts showed up again:

$ export LC_ALL="en_US.UTF-8"
$ export LANG="en_US.UTF-8"
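
To make this persist across shell sessions, you could append the exports to your shell profile, for example:

$ echo 'export LC_ALL="en_US.UTF-8"' >> ~/.bash_profile
$ echo 'export LANG="en_US.UTF-8"' >> ~/.bash_profile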

Running Java Web Applications on Tomcat With Docker Swarm

In a previous post we used Payara Micro to set up a web application, and a full example was provided on how to create the war file that will be used for the deployment.

Today we will be using Tomcat to deploy the same application. The official repository can be found at hub.docker.com/_/tomcat.

Our Dockerfile for our Own Tomcat Image:

The Dockerfile is modified a bit (CATALINA_OPTS) so that we can pass JVM environment variables; if you would like to use the standard image, you can skip this and just use the image from their repository.

FROM openjdk:8-jre-alpine

ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
WORKDIR $CATALINA_HOME
ENV CATALINA_OPTS -Xmx768m -Xms512m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=64m -XX:+UseG1GC -XX:+CMSClassUnloadingEnabled -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
# let "Tomcat Native" live somewhere isolated
ENV TOMCAT_NATIVE_LIBDIR $CATALINA_HOME/native-jni-lib
ENV LD_LIBRARY_PATH ${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$TOMCAT_NATIVE_LIBDIR

RUN apk add --no-cache gnupg

# see https://www.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/KEYS
# see also "update.sh" (https://github.com/docker-library/tomcat/blob/master/update.sh)
ENV GPG_KEYS 05AB33110949707C93A279E3D3EFE6B686867BA6 07E48665A34DCAFAE522E5E6266191C37C037D42 47309207D818FFD8DCD3F83F1931D684307A10A5 541FBE7D8F78B25E055DDEE13C370389288584E7 61B832AC2F1C5A90F0F9B00A1C506407564C17A3 713DA88BE50911535FE716F5208B0AB1D63011C7 79F7026C690BAA50B92CD8B66A3AD3F4F22C4FED 9BA44C2621385CB966EBA586F72C284D731FABEE A27677289986DB50844682F8ACB77FC2E86E29AC A9C5DF4D22E99998D9875A5110C01C5A2F6059E7 DCFD35E0BF8CA7344752DE8B6FB21E8933C60243 F3A04C595DB5B6A5F1ECA43E3B7BBB100D811BBE F7DA48BB64BCB84ECBA7EE6935CD23C10D498E23
RUN set -ex; \
  for key in $GPG_KEYS; do \
      gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
  done

ENV TOMCAT_MAJOR 8
ENV TOMCAT_VERSION 8.5.23
ENV TOMCAT_SHA1 1ba27c1bb86ab9c8404e98068800f90bd662523c

ENV TOMCAT_TGZ_URLS \
# https://issues.apache.org/jira/browse/INFRA-8753?focusedCommentId=14735394#comment-14735394
  https://www.apache.org/dyn/closer.cgi?action=download&filename=tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz \
# if the version is outdated, we might have to pull from the dist/archive :/
  https://www-us.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz \
  https://www.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz \
  https://archive.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz

ENV TOMCAT_ASC_URLS \
  https://www.apache.org/dyn/closer.cgi?action=download&filename=tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz.asc \
# not all the mirrors actually carry the .asc files :'(
  https://www-us.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz.asc \
  https://www.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz.asc \
  https://archive.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz.asc

RUN set -eux; \
  \
  apk add --no-cache --virtual .fetch-deps \
      ca-certificates \
      openssl \
  ; \
  \
  success=; \
  for url in $TOMCAT_TGZ_URLS; do \
      if wget -O tomcat.tar.gz "$url"; then \
          success=1; \
          break; \
      fi; \
  done; \
  [ -n "$success" ]; \
  \
  echo "$TOMCAT_SHA1 *tomcat.tar.gz" | sha1sum -c -; \
  \
  success=; \
  for url in $TOMCAT_ASC_URLS; do \
      if wget -O tomcat.tar.gz.asc "$url"; then \
          success=1; \
          break; \
      fi; \
  done; \
  [ -n "$success" ]; \
  \
  gpg --batch --verify tomcat.tar.gz.asc tomcat.tar.gz; \
  tar -xvf tomcat.tar.gz --strip-components=1; \
  rm bin/*.bat; \
  rm tomcat.tar.gz*; \
  \
  nativeBuildDir="$(mktemp -d)"; \
  tar -xvf bin/tomcat-native.tar.gz -C "$nativeBuildDir" --strip-components=1; \
  apk add --no-cache --virtual .native-build-deps \
      apr-dev \
      coreutils \
      dpkg-dev dpkg \
      gcc \
      libc-dev \
      make \
      "openjdk${JAVA_VERSION%%[-~bu]*}"="$JAVA_ALPINE_VERSION" \
      openssl-dev \
  ; \
  ( \
      export CATALINA_HOME="$PWD"; \
      cd "$nativeBuildDir/native"; \
      gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; \
      ./configure \
          --build="$gnuArch" \
          --libdir="$TOMCAT_NATIVE_LIBDIR" \
          --prefix="$CATALINA_HOME" \
          --with-apr="$(which apr-1-config)" \
          --with-java-home="$(docker-java-home)" \
          --with-ssl=yes; \
      make -j "$(nproc)"; \
      make install; \
  ); \
  runDeps="$( \
     scanelf --needed --nobanner --format '%n#p' --recursive "$TOMCAT_NATIVE_LIBDIR" \
         | tr ',' '\n' \
         | sort -u \
         | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
 )"; \
  apk add --virtual .tomcat-native-rundeps $runDeps; \
  apk del .fetch-deps .native-build-deps; \
  rm -rf "$nativeBuildDir"; \
  rm bin/tomcat-native.tar.gz; \
  \
# sh removes env vars it doesn't support (ones with periods)
# https://github.com/docker-library/tomcat/issues/77
  apk add --no-cache bash; \
  find ./bin/ -name '*.sh' -exec sed -ri 's|^#!/bin/sh$|#!/usr/bin/env bash|' '{}' +

# verify Tomcat Native is working properly
RUN set -e \
  && nativeLines="$(catalina.sh configtest 2>&1)" \
  && nativeLines="$(echo "$nativeLines" | grep 'Apache Tomcat Native')" \
  && nativeLines="$(echo "$nativeLines" | sort -u)" \
  && if ! echo "$nativeLines" | grep 'INFO: Loaded APR based Apache Tomcat Native library' >&2; then \
      echo >&2 "$nativeLines"; \
      exit 1; \
  fi

EXPOSE 8080
CMD ["catalina.sh", "run"]

Building our Image and Pushing it to our Registry:

$ docker build -t registry.gitlab.com/<user>/<repo>/<image>:<tag> .
$ docker push registry.gitlab.com/<user>/<repo>/<image>:<tag>

Dockerfile for our Application:

Now that we have built our image for Tomcat, we can write the Dockerfile for our application. Note that the hello.war file also needs to be in the same working directory, unless specified otherwise:

FROM registry.gitlab.com/<user>/<repo>/<image>:<tag>
COPY hello.war /usr/local/tomcat/webapps/hello.war

Setup the Compose file for our Stack:

We will use docker stack to deploy our application. Note that I have Traefik acting as my reverse proxy.

Below, our app.yml compose file:

version: '3'

services:
  hello:
    image: registry.gitlab.com/<user>/<repo>/<image>:<tag>
    networks:
      - appnet
    deploy:
      labels:
        - "traefik.port=8080"
        - "traefik.docker.network=appnet"
        - "traefik.frontend.rule=Host:apps.mydomain.com; PathPrefix: /hello/"
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - 'node.role==worker'

networks:
  appnet:
    external: true

Deploy our Application:

In our compose file we defined the network as external, so if you are using the same name and have not yet set up the overlay network:

$ docker network create --driver overlay appnet

Now deploy the stack:

$ docker stack deploy --compose-file app.yml apps
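
You can verify the deployment before testing it, for example by listing the services and tasks in the stack:

$ docker stack services apps
$ docker stack ps apps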

Testing our Application:

$ curl http://apps.mydomain.com/hello/

<!DOCTYPE html>
<html>
            Hello World!
   Test Page with Docker + Payara Micro</h3>

   Serving From ContainerId: d24f8cd982fc
</html>