Ruan Bekker's Blog

From a Curious mind to Posts on Github

Setup a Basic Hello World Pipeline on Concourse

We will set up a basic pipeline that pulls down content from GitHub, then executes a task that prints hello world.

Content on Github

The config can be found on my GitHub branch, but I will display each file in this post.

Running our Pipeline

First, our pipeline.yml, which tells Concourse what to do:

---
resources:
- name: my-git-repo
  type: git
  source:
    uri: https://github.com/ruanbekker/concourse-test
    branch: basic-helloworld

jobs:
- name: hello-world-job
  public: true
  plan:
  - get: my-git-repo
  - task: task_print-hello-world
    file: my-git-repo/ci/task-hello-world.yml

As we can see, our pipeline.yml points to a task-hello-world.yml, which can be found in the repo and is previewed below:

---
platform: linux

image_resource:
  type: docker-image
  source:
    repository: busybox

run:
  path: echo
  args: ["hello world"]

Set the Pipeline (sp is the alias for set-pipeline):

$ fly -t tutorial sp -c pipeline.yml -p pipeline-01

Unpause the Pipeline (up is the alias for unpause-pipeline):

$ fly -t tutorial up -p pipeline-01

Trigger the Job (tj is the alias for trigger-job):

$ fly -t tutorial tj -j pipeline-01/hello-world-job --watch
started pipeline-01/hello-world-job #2

Cloning into '/tmp/build/get'...
Fetching HEAD
292c84b change task name
initializing
running echo hello world
hello world
succeeded

This was all done through the command line, but you can also access it from the web UI.

Change Your Relayhost on Postfix Using Sed

A quick post on how to change your relayhost in Postfix to an external SMTP provider, as well as how to revert the change so the relay server sends out mail directly.

Checking your current relayhost configuration:

We will assume your /etc/postfix/main.cf has a commented-out relayhost entry of #relayhost =; in my example it looks like this:

$ cat /etc/postfix/main.cf
#relayhost =

If not, you can just adjust your sed command accordingly.

Changing your relayhost configuration to an External SMTP Provider:

We will use sed to change the relayhost to za-smtp-outbound-1.mimecast.co.za for example:

$ sed -i 's/#relayhost\ =/relayhost\ =\ \[za-smtp-outbound-1.mimecast.co.za\]/g' /etc/postfix/main.cf

To verify that the config was set, grep the config for relayhost:

$ cat /etc/postfix/main.cf | grep relayhost
relayhost = [za-smtp-outbound-1.mimecast.co.za]
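
As an alternative to crafting the sed expression by hand, Postfix ships with postconf, which can make the same edit to main.cf for you; a one-liner sketch:

$ sudo postconf -e 'relayhost = [za-smtp-outbound-1.mimecast.co.za]'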

Once the changes look as expected, you can restart Postfix:

$ /etc/init.d/postfix restart

Then you can tail the logs to see if the mail gets delivered:

$ tail -f /var/log/maillog

Revert your changes so that Postfix sends out directly:

To revert your changes, let’s change the config back to what it was:

$ sed -i 's/relayhost\ =\ \[za-smtp-outbound-1.mimecast.co.za\]/#relayhost\ =/g' /etc/postfix/main.cf

To verify your changes:

$ cat /etc/postfix/main.cf | grep relayhost
#relayhost =

As you can see, the relayhost is commented out again, meaning the relayhost property will not be active. Go ahead and restart the service for the changes to take effect:

$ /etc/init.d/postfix restart

Same as before, look at the logs to confirm mail flow is as expected:

$ tail -f /var/log/maillog

Graphing Pretty Charts With Python Flask and Chartjs

I am a big sucker for charts and graphs, and today I found an awesome library called Chart.js, which we will use with the Python Flask web framework to graph our data.

As Bitcoin is doing so well, I decided to graph the monthly Bitcoin price from January up until now.

Dependencies:

Install Flask:

$ pip install flask

Create the files and directories:

$ touch app.py
$ mkdir templates

We also need the Chart.js library; I will use the CDN version in my HTML.

Creating the Flask App:

The data that we want to graph will be hard-coded in our application, but there are many ways to make this more dynamic. In your app.py:

from flask import Flask, render_template

app = Flask(__name__)

labels = [
    'JAN', 'FEB', 'MAR', 'APR',
    'MAY', 'JUN', 'JUL', 'AUG',
    'SEP', 'OCT', 'NOV', 'DEC'
]

values = [
    967.67, 1190.89, 1079.75, 1349.19,
    2328.91, 2504.28, 2873.83, 4764.87,
    4349.29, 6458.30, 9907, 16297
]

colors = [
    "#F7464A", "#46BFBD", "#FDB45C", "#FEDCBA",
    "#ABCDEF", "#DDDDDD", "#ABCABC", "#4169E1",
    "#C71585", "#FF4500", "#FEDCBA", "#46BFBD"]

@app.route('/bar')
def bar():
    bar_labels=labels
    bar_values=values
    return render_template('bar_chart.html', title='Bitcoin Monthly Price in USD', max=17000, labels=bar_labels, values=bar_values)

@app.route('/line')
def line():
    line_labels=labels
    line_values=values
    return render_template('line_chart.html', title='Bitcoin Monthly Price in USD', max=17000, labels=line_labels, values=line_values)

@app.route('/pie')
def pie():
    pie_labels = labels
    pie_values = values
    return render_template('pie_chart.html', title='Bitcoin Monthly Price in USD', max=17000, set=zip(values, labels, colors))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

Populating the HTML Static Content:

As we are using render_template, we need to populate the HTML files in our templates/ directory. We have 3 different HTML files, one per chart type:

  • templates/bar_chart.html
  • templates/line_chart.html
  • templates/pie_chart.html

Running our Application:

As you can see, we have 3 endpoints, each representing a different chart style:

  • /line
  • /bar
  • /pie

Let's start our Flask application:

$ python app.py
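
With the app running, a quick way to smoke test the endpoints from another terminal is with curl; this assumes the app is listening on localhost:8080 as configured above:

$ curl -s http://localhost:8080/line | head -n 5
$ curl -s http://localhost:8080/bar | head -n 5
$ curl -s http://localhost:8080/pie | head -n 5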

Accessing the /line endpoint renders the line chart, /bar renders the bar chart, and /pie renders the pie chart.

Create a Chatbot With Chatterbot on Python

I've been wanting to take a stab at chatbots for some time, and recently discovered ChatterBot, so in this tutorial I will go through some examples of setting up a very basic chatbot.

Getting the Dependencies:

I will be using Alpine on Docker to run all the examples; I am using Alpine so that we have a basic container with nothing special pre-installed.

Chatterbot is written in Python, so let’s install Python and Chatterbot:

$ docker run -it --name chatbot alpine:edge sh
$ apk update && apk add python py2-pip
$ pip install pip --upgrade --user
$ pip install chatterbot

Setup the Basic Chatbot:

Now that our dependencies are installed, enter the Python interpreter, where we will instantiate our chatbot and get a response from it. By default the library will create a SQLite database to build up the statements that are passed to and from the bot.

At this point, the bot is still pretty useless:

$ python
>>> from chatterbot import ChatBot
>>> chatbot = ChatBot('Ben')
>>> chatbot.get_response('What is your name?')
<Statement text:What is your name?>
>>> chatbot.get_response('My name is Ruan, what is your name?')
<Statement text:What is your name?>

Training your Bot:

To enable your bot to have some knowledge, we can train the bot with training data. The training data is populated in a list, which will represent the conversation.

Exit the Python interpreter and delete the SQLite database:

$ rm -rf db.sqlite3

Now our bot won't have any history of what we said. Start the interpreter again and add some data to train our bot. In this example, we want our chatbot to respond when we ask it what its name is:

>>> from chatterbot import ChatBot
>>> from chatterbot.trainers import ListTrainer
>>> chatbot = ChatBot('Ben')
>>> chatbot.set_trainer(ListTrainer)
>>> chatbot.train(['What is your name?', 'My name is Ben'])
List Trainer: [####################] 100%

Now that we have trained our bot, let’s try to chat to our bot:

>>> chatbot.get_response('What is your name?')
<Statement text:My name is Ben>
>>> chatbot.get_response('Who is Ben?')
<Statement text:My name is Ben>

We can also enable our bot to respond to multiple statements:

>>> chatbot.train(['Do you know someone with the name of Sarah?', 'Yes, my sisters name is Sarah', 'Is your sisters name, Sarah?', 'Faw shizzle!'])
List Trainer: [####################] 100%

>>> chatbot.get_response('do you know someone with the name of Sarah?')
<Statement text:Yes, my sisters name is Sarah>
>>> chatbot.get_response('is your sisters name Sarah?')
<Statement text:Faw shizzle!>

With that said, we can define our list of statements in our code:

>>> conversations = [
...     'Are you an athlete?', 'No, are you mad? I am a bot',
...     'Do you like big bang theory?', 'Bazinga!',
...     'What is my name?', 'Ruan',
...     'What color is the sky?', 'Blue, stop asking me stupid questions'
... ]

>>> chatbot.train(conversations)
List Trainer: [####################] 100%
>>> chatbot.get_response('What color is the sky?')
<Statement text:Blue, stop asking me stupid questions>

So we can see it works as expected, but let's send one of the answers from our statements to see what happens:

>>> chatbot.get_response('Bazinga')
<Statement text:What is my name?>
>>> chatbot.get_response('Your name is Ben')
<Statement text:Yes, my name is Ben>

So we can see it uses natural language processing to learn from the data that we provide our bot. Just to check another question:

>>> chatbot.get_response('Do you like big bang theory?')
<Statement text:Bazinga!>

If we have quite a large set of learning data, we can add it all to a file, separated by newlines; then we can use Python to read the data from disk and split it up into the expected format.

The training file will reside in our working directory; let's name it training-data.txt. The content will look like this:

What is Bitcoin?
Bitcoin is a Crypto Currency
Where is this blog hosted?
Github

A visual example of how we will process this data:

>>> data = open('training-data.txt').read()
>>> data.strip().split('\n')
['What is Bitcoin?', 'Bitcoin is a Crypto Currency', 'Where is this blog hosted?', 'Github']

And in action, it will look like this:

>>> data = open('training-data.txt').read()
>>> conversations = data.strip().split('\n')
>>> chatbot.train(conversations)
List Trainer: [####################] 100%

>>> chatbot.get_response('Where is this blog hosted?')
<Statement text:Github>

There is also pre-populated data that you can use to train your bot; the documentation has a couple of examples, but for demonstration we will use the corpus trainer:

>>> from chatterbot.trainers import ChatterBotCorpusTrainer
>>> chatbot.set_trainer(ChatterBotCorpusTrainer)
>>> chatbot.train("chatterbot.corpus.english")
ai.yml Training: [####################] 100%
botprofile.yml Training: [####################] 100%
computers.yml Training: [####################] 100%
conversations.yml Training: [####################] 100%
emotion.yml Training: [####################] 100%
food.yml Training: [####################] 100%
gossip.yml Training: [####################] 100%
greetings.yml Training: [####################] 100%
history.yml Training: [####################] 100%
humor.yml Training: [####################] 100%
literature.yml Training: [####################] 100%
money.yml Training: [####################] 100%
movies.yml Training: [####################] 100%
politics.yml Training: [####################] 100%
psychology.yml Training: [####################] 100%
science.yml Training: [####################] 100%
sports.yml Training: [####################] 100%
trivia.yml Training: [####################] 100%

>>> chatbot.get_response('Do you like peace?')
<Statement text:not especially. i am not into violence.>
>>> chatbot.get_response('Are you emotional?')
<Statement text:Sort of.>
>>> chatbot.get_response('What language do you speak?')
<Statement text:Python.>
>>> chatbot.get_response('What is your name?')
<Statement text:My name is Ben>
>>> chatbot.get_response('Who is the President of America?')
<Statement text:Richard Nixon> #data seems outdated :D
>>> chatbot.get_response('I like cheese')
<Statement text:What kind of movies do you like?>

Using an External Database like MongoDB

Instead of using SQLite on the same host, we can use a NoSQL database like MongoDB that resides outside our application.

For the sake of this tutorial, I will use Docker to spin up a MongoDB Container:

$ docker run -d --name mongodb -p 27017:27017 -p 28017:28017 -e AUTH=no -e OPLOG_SIZE=50 tutum/mongodb
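
Before wiring the bot up to it, we can confirm the container is running and grab its IP on the Docker bridge network; the 172.17.0.3 address used in the code below came from this on my setup, and yours may differ:

$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' mongodb
172.17.0.3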

Below is my code for a terminal application that uses ChatterBot with MongoDB as a storage adapter. We use a while loop so that we can keep chatting with our bot, and the except clause lets us stop the application from the keyboard:

from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

chatbot = ChatBot(
    "Chatbot Backed by MongoDB",
    storage_adapter="chatterbot.storage.MongoDatabaseAdapter",
    database="chatterbot_db",
    database_uri="mongodb://172.17.0.3:27017/",
    logic_adapters=[
        'chatterbot.logic.BestMatch'
    ],
    trainer='chatterbot.trainers.ChatterBotCorpusTrainer',
    filters=[
        'chatterbot.filters.RepetitiveResponseFilter'
    ],
    input_adapter='chatterbot.input.TerminalAdapter',
    output_adapter='chatterbot.output.TerminalAdapter'
)

chatbot.set_trainer(ChatterBotCorpusTrainer)
chatbot.train("chatterbot.corpus.english")

print('Chatbot Started:')

while True:
    try:
        print(" -> You:")
        botInput = chatbot.get_response(None)
    except (KeyboardInterrupt, EOFError, SystemExit):
        break

Running the example:

$ python bot.py
 -> You:
How are you?
I am doing well.
 -> You:
Tell me a joke
A 3-legged dog walks into an old west saloon, slides up to the bar and announces "I'm looking for the man who shot my paw."

And from MongoDB, we can see some data:

$ mongo
> show dbs
admin          0.078GB
chatterbot_db  0.078GB
local          0.078GB

> use chatterbot_db
switched to db chatterbot_db

> show collections;
conversations
statements
system.indexes

> db.conversations.find().count()
4
> db.statements.find().count()
1240
> db.system.indexes.find().count()
3

That was a basic tutorial on ChatterBot. Next I will look into mining data from Twitter's API to see how clever our bot can become.

Resources:

SSH Host Key Warnings With Strict Checking Enabled

When you format/reload a server and the host gets the same IP, the next time you SSH to that host you might get a warning like this:

$ ssh 192.168.1.104
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
a1:a2:a3:a4:a5:a6:a7:a8:a9:b0:b1:b2:b3:b4:b5:b6.
Please contact your system administrator.
Add correct host key in /home/pi/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/pi/.ssh/known_hosts:10
ECDSA host key for 192.168.1.104 has changed and you have requested strict checking.
Host key verification failed.

This happens because the SSH client performs strict host key checking by default (the StrictHostKeyChecking option in ssh_config). Note that the similarly named StrictModes setting in our server's sshd_config, shown below, governs file permission checks rather than host key verification:

$ cat /etc/ssh/sshd_config | grep -i stric
StrictModes yes

To remove the offending key from your known_hosts file without opening it, you can use ssh-keygen:

$ ssh-keygen -f .ssh/known_hosts -R 192.168.1.104
# Host 192.168.1.104 found: line 10 type ECDSA
.ssh/known_hosts updated.
Original contents retained as .ssh/known_hosts.old
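
Alternatively, if you already trust the new host, you can pre-populate known_hosts with the new key using ssh-keyscan instead of waiting for the interactive prompt on the next connection:

$ ssh-keyscan -t ecdsa 192.168.1.104 >> ~/.ssh/known_hosts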

Now when you SSH to the host, the warning should be gone.

Setup a 3 Node Kubernetes Cluster on Ubuntu

Setup a 3 Node Kubernetes Cluster on Ubuntu 16.04

What is Kubernetes?

As referenced from their website:

  • “Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.”

Our Setup:

For this setup I will be using 3 AWS EC2 instances with Ubuntu 16.04. One node will act as the master node, and the other 2 will act as worker nodes, previously named minions.

We will deploy Kubernetes on all 3 nodes. The master is the node where we will initialize our cluster and deploy our Weave network and applications; on the worker nodes we will execute the join command to form the cluster.

Deploy Kubernetes: Master

The following commands will install Kubernetes; they need to be executed with root permissions:

$ apt update && sudo apt upgrade -y
$ sudo apt install docker.io apt-transport-https -qy
$ sudo apt update
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo su -c 'echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list' root
$ apt update
$ sudo apt install kubelet kubeadm kubernetes-cni -y

Now we would like to set up the master by initializing the cluster:

$ sudo kubeadm init --kubernetes-version stable-1.8

The output will provide you with instructions to set up the configuration for the master node, as well as a join token for your worker nodes; remember to make a note of this token string, as we will need it later for our worker nodes.

As your normal user (not root), run the following to set up the config:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now we need to deploy a network for our pods:

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Let's confirm that all our resources are in their desired state; a small snippet of the output will look like the one below:

$ kubectl get all -n kube-system

...
NAME                                          READY     STATUS    RESTARTS   AGE
po/etcd-ip-172-31-40-211                      1/1       Running   0          6h
po/kube-apiserver-ip-172-31-40-211            1/1       Running   0          6h

Once all of the resources are in their desired state, we can head over to our worker nodes to join them to the cluster.

Deploy Kubernetes: Worker Nodes

As I have 2 worker nodes, we will need to run the following on both of them, first installing Kubernetes with root permissions:

$ apt update && sudo apt upgrade -y
$ sudo apt install docker.io apt-transport-https -qy
$ sudo apt update
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo su -c 'echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list' root
$ apt update
$ sudo apt install kubelet kubeadm kubernetes-cni -y

Once Kubernetes is installed, join the cluster by executing the join command that the master provided earlier:

$ sudo kubeadm join --token 49abf7.247d663db97f8504 172.31.40.211:6443 --discovery-token-ca-cert-hash sha256:3a3b301cfbac0995c69a0115989ea384230470d6836ae0e13e71dbdf15ffbb48
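
If you did not make a note of the join token, you can generate a new one on the master with kubeadm token create; newer kubeadm releases also support a --print-join-command flag that prints the full join command (the flag may not be available on this 1.8 release):

$ sudo kubeadm token create --print-join-command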

Do the same two steps on the other worker node, then head back to the master node.

Verifying if All Nodes are Checked In

To verify if all nodes are available and reachable in the cluster:

$ kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
ip-172-31-36-68    Ready     <none>    6h        v1.8.5
ip-172-31-40-211   Ready     master    6h        v1.8.5
ip-172-31-44-80    Ready     <none>    6h        v1.8.5

Deploy Services to Kubernetes:

Kubernetes has awesome examples on their GitHub repository.

Given the awesomeness of OpenFaaS, I will deploy OpenFaaS on Kubernetes:

$ git clone https://github.com/openfaas/faas-netes
$ cd faas-netes
$ kubectl apply -f faas.yml,monitoring.yml,rbac.yml

Give it about a minute or so, then you should see the pods running in their desired state:

$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
alertmanager-77b4b476b-zxtcz   1/1       Running   0          4h
crypto-7d8b7f999c-7l85k        1/1       Running   0          1h
faas-netesd-64fb9b4dfb-hc8gh   1/1       Running   0          4h
gateway-69c9d949f-q57zh        1/1       Running   0          4h
prometheus-7fbfd8bfb8-d4cft    1/1       Running   0          4h

Once we have the desired state, head over to the OpenFaaS Gateway web UI at http://master-public-ip:31112/ui/ and select "Deploy New Function"; you can use your own function or select one from the store.

I am going to use Figlet from the store. Once the pod has been deployed, select the function, enter any text into the request body and select Invoke. I used my name and surname, which turns into:

 ____                      ____       _    _
|  _ \ _   _  __ _ _ __   | __ )  ___| | _| | _____ _ __
| |_) | | | |/ _` | '_ \  |  _ \ / _ \ |/ / |/ / _ \ '__|
|  _ <| |_| | (_| | | | | | |_) |  __/   <|   <  __/ |
|_| \_\\__,_|\__,_|_| |_| |____/ \___|_|\_\_|\_\___|_|
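
For what it's worth, the function can also be invoked from the command line against the gateway's /function/<name> endpoint; this assumes the store function was deployed under the name figlet:

$ curl http://master-public-ip:31112/function/figlet -d 'Ruan Bekker'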

Rejoining or Bootstrapping MySQL Galera Cluster Nodes After Shutdown

I have a 3 node MySQL Galera Cluster that faced a shutdown on all 3 nodes at the same time. Luckily this is only a testing environment, but at the time the cluster was down and did not want to start up.

Issues Faced

When trying to start MySQL the only error visible was:

$ /etc/init.d/mysql restart
 * MySQL server PID file could not be found!
Starting MySQL
........ * The server quit without updating PID file (/var/run/mysqld/mysqld.pid).
 * Failed to restart server.

At that point I could see that the Galera port (4567) was listening, but not MySQL:

$ ps aux | grep mysql
root     23580  0.0  0.0   4508  1800 pts/0    S    00:37   0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/run/mysqld/mysqld.pid
mysql    24144  0.7 22.2 1185116 455660 pts/0  Sl   00:38   0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306 --wsrep_start_position=long:string

$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:4567            0.0.0.0:*               LISTEN      25507/mysqld

Why?

This is explained in more detail in a SeveralNines blog post, but because all the nodes left the cluster, one of the nodes needs to be started as a reference point before the other nodes can rejoin or be bootstrapped into the cluster.

Rejoining the Cluster

Consult the blog post for more information, but on my side I had a look at the node with the highest seqno in grastate.dat and then updated safe_to_bootstrap to 1:

$ cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    e9f9cf6a-87a1-11e7-9fb4-52612b906897
seqno:   123512
safe_to_bootstrap: 1
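
To find the node with the highest seqno without logging into each node one by one, a small loop over the hosts works; node1 to node3 below are placeholder hostnames, assuming you have SSH access to each:

$ for host in node1 node2 node3; do echo -n "$host: "; ssh $host 'grep seqno /var/lib/mysql/grastate.dat'; done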

Then I made sure that no MySQL processes were running, and did a bootstrap:

$ /etc/init.d/mysql bootstrap
Bootstrapping the cluster
Starting MySQL

Then I restarted MySQL on the other nodes.

Verifying

To verify that all your nodes have checked in (I have 3 nodes):

mysql> SHOW STATUS LIKE 'wsrep_%';
+------------------------------+---------------------------------------------------+
| Variable_name                | Value                                             |
+------------------------------+---------------------------------------------------+
| wsrep_local_recv_queue_avg   | 0.000000                                          |
| wsrep_local_state_comment    | Synced                                            |
| wsrep_incoming_addresses     | 10.3.132.91:3306,10.4.1.201:3306,10.4.113.21:3306 |
| wsrep_evs_state              | OPERATIONAL                                       |
| wsrep_cluster_size           | 3                                                 |
| wsrep_cluster_status         | Primary                                           |
| wsrep_connected              | ON                                                |
+------------------------------+---------------------------------------------------+

or a shorter version:

mysql> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+------------------------------+---------------------------------------------------+
| Variable_name                | Value                                             |
+------------------------------+---------------------------------------------------+
| wsrep_cluster_size           | 3                                                 |
+------------------------------+---------------------------------------------------+

Unmask a Masked Service in Systemd

I was busy setting up the docker-volume-netshare plugin to use NFS volumes for Docker, which relies on the nfs-utils/nfs-common package, and when trying to start the service, I found that the nfs-common service was masked:

$ sudo systemctl start docker-volume-netshare.service
Failed to start docker-volume-netshare.service: Unit nfs-common.service is masked.

Looking at the nfs-common service:

$ sudo systemctl is-enabled nfs-common
masked

$ sudo systemctl enable nfs-common
Synchronizing state of nfs-common.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable nfs-common
Failed to enable unit: Unit file /lib/systemd/system/nfs-common.service is masked.

It appears that the unit file is a symbolic link to /dev/null:

$ file /lib/systemd/system/nfs-common.service
/lib/systemd/system/nfs-common.service: symbolic link to /dev/null

I was able to unmask the service by removing the file:

$ sudo rm /lib/systemd/system/nfs-common.service
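
As an aside, a unit masked with systemctl mask gets its /dev/null symlink under /etc/systemd/system, and the supported way to undo that is the matching unmask verb; in this case the symlink lived under /lib, which is why I removed it by hand:

$ sudo systemctl unmask nfs-common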

Then reloading the daemon:

$ sudo systemctl daemon-reload

As we can see, the nfs-common service is no longer masked, but not yet running:

$ sudo systemctl status nfs-common
● nfs-common.service - LSB: NFS support files common to client and server
   Loaded: loaded (/etc/init.d/nfs-common; generated; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:systemd-sysv-generator(8)

Let’s go ahead and start the service:

$ sudo systemctl start nfs-common
$ sudo systemctl status nfs-common
● nfs-common.service - LSB: NFS support files common to client and server
   Loaded: loaded (/etc/init.d/nfs-common; generated; vendor preset: enabled)
   Active: active (running) since Sat 2017-12-09 08:59:47 SAST; 2s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 7382 ExecStart=/etc/init.d/nfs-common start (code=exited, status=0/SUCCESS)
      CPU: 162ms
   CGroup: /system.slice/nfs-common.service
           └─7403 /usr/sbin/rpc.idmapd

Now we can see the service is unmasked and started; also remember to enable the service on boot:

$ sudo systemctl enable nfs-common
nfs-common.service is not a native service, redirecting to systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable nfs-common

$ sudo systemctl is-enabled nfs-common
enabled

Setup a NFS Server on a RaspberryPi

Setup a NFS Server/Client on RaspberryPi 3

Setup the Server Side - Disks and Directories

Prepare the directories:

$ sudo mkdir -p /opt/nfs
$ sudo chown pi:pi /opt/nfs
$ sudo chmod 755 /opt/nfs

For demonstration, I will be using the same disk as my OS, but if you have other disks that you would like to mount, mount them like the following:

$ sudo lsblk
$ sudo mount /dev/sda2 /opt/nfs
$ sudo chown -R pi:pi /opt/nfs/existing_dirs
$ sudo find /opt/nfs/existing_dirs/ -type d -exec chmod 755 {} \;
$ sudo find /opt/nfs/existing_dirs/ -type f -exec chmod 644 {} \;

If you mounted a separate disk and would like it to be mounted on boot, we need to add it to /etc/fstab. We can identify the disk by running either:

$ sudo lsblk
# or
$ sudo blkid

Populate /etc/fstab with your disk info; it will look more or less like this:

/dev/sda2 /opt/nfs ext4 defaults,noatime 0 0

Append rootdelay=10 after rootwait in /boot/cmdline.txt, then reboot for the changes to become active.

Setup the Server Side - Installing NFS Server

Install the NFS Server packages:

$ sudo apt install nfs-kernel-server nfs-common rpcbind -y

Next, configure the paths in /etc/exports. We need the uid and gid of the user whose permissions we want to pass to the NFS client. To get that:

$ id pi
uid=1000(pi) gid=1000(pi)

Set up the path that we would like to be accessible via NFS:

/opt/nfs 192.168.1.0/24(rw,all_squash,no_hide,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)

If you would like to have open access:

/opt/nfs *(rw,all_squash,no_hide,insecure,async,no_subtree_check,anonuid=1000,anongid=1000)

Export the config, enable the services on boot and start NFS:

$ sudo exportfs -ra
$ sudo systemctl enable rpcbind
$ sudo systemctl enable nfs-kernel-server
$ sudo systemctl enable nfs-common
$ sudo systemctl start rpcbind
$ sudo systemctl start nfs-kernel-server
$ sudo systemctl start nfs-common
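
To confirm the export is active, you can query the export list with showmount, which is part of the NFS utilities installed above; the output should look something like this:

$ showmount -e localhost
Export list for localhost:
/opt/nfs 192.168.1.0/24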

Setup the NFS Client

On the client install the NFS Client packages:

$ sudo apt install nfs-common -y

Create the mount point of your choice and change the ownership (I am using the existing /mnt):

$ sudo chown pi:pi /mnt

Set up /etc/idmapd.conf to match the user:

[Mapping]
Nobody-User = pi
Nobody-Group = pi

Mount the NFS Share to your local mount point:

$ sudo mount 192.168.1.2:/opt/nfs /mnt

Enable mount on boot via /etc/fstab:

192.168.1.2:/opt/nfs /mnt nfs rw 0 0
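
To test the fstab entry without rebooting, remount everything and confirm the share shows up:

$ sudo mount -a
$ df -h /mnt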

Elasticsearch Curator to Manage and Curate Your Elasticsearch Indexes

Elasticsearch Curator helps you manage and curate your Elasticsearch indices. I will show how to use Curator in the following ways:

  • Create Indexes
  • Reindex Indexes
  • Set Replica Counts on Indexes
  • Delete Indexes

Install Elasticsearch Curator

Install Elasticsearch Curator as follows:

$ virtualenv .venv
$ source .venv/bin/activate
$ pip install elasticsearch-curator

Populate the configuration with your Elasticsearch host details:

config.yml
---
client:
  hosts:
    - es.domain.com
  port: 443
  use_ssl: True
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['urllib3']
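
A quick way to confirm that this config can reach your cluster is curator_cli, a singleton command-line interface installed alongside curator that accepts the same client configuration; listing your indices, for example:

$ curator_cli --config config.yml show_indices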

Action: Create Indices

Use Curator to Create Elasticsearch Indexes:

action-create-indices.yml
---
actions:
  create_web-app1-metrics:
    action: create_index
    description: >-
      Create Elasticsearch Index based on Todays Date
      Specify Number of Primary and Replica Shards
      web-app1-metrics-2017.12.04
    options:
      name: '<web-app1-metrics-{now/d}>'
      extra_settings:
        settings:
          number_of_shards: 5
          number_of_replicas: 1
        continue_if_exception: True
        disable_action: False

  create_web-app2-metrics:
    action: create_index
    description: "Create Index with the 1st of this Month in Daily Format - web-app2-metrics-2017.12.01"
    options:
      name: '<web-app2-metrics-{now/M}>'
      extra_settings:
        settings:
          number_of_shards: 5
          number_of_replicas: 2
        continue_if_exception: True
        disable_action: False

  create_web-app3-metrics:
    action: create_index
    description: "Create Index with Last Months Date in Month Format - web-app3-metrics-2017.11"
    options:
      name: '<web-app2-metrics-{now/M-1M{YYYY.MM}}>'
      extra_settings:
        settings:
          number_of_shards: 5
          number_of_replicas: 2
        continue_if_exception: True
        disable_action: False

  create_web-app4-metrics:
    action: create_index
    description: "Create Index with Daily Format 12 Hours from Now - web-app4-metrics-2017.12.05"
    options:
      name: '<web-app2-metrics-{now/d{YYYY.MM.dd|+12:00}}>'
      extra_settings:
        settings:
          number_of_shards: 5
          number_of_replicas: 2
        continue_if_exception: True
        disable_action: False

When running Curator, you can append --dry-run to test your config/action without touching your data. To create these indices:

$ curator --config config.yml action-create-indices.yml

2017-12-04 14:22:40,252 INFO      Preparing Action ID: create_web-app1-metrics, "create_index"
2017-12-04 14:22:40,303 INFO      GET https://es.domain.com:443/ [status:200 request:0.036s]
2017-12-04 14:22:40,304 INFO      Trying Action ID: create_web-app1-metrics, "create_index": Create Elasticsearch Index based on Todays Date Specify Number of Primary and Replica Shards web-app1-metrics-2017.12.04
2017-12-04 14:22:40,304 INFO      "<web-app1-metrics-{now/d}>" is using Elasticsearch date math.
2017-12-04 14:22:40,304 INFO      Creating index "<web-app1-metrics-{now/d}>" with settings: {'continue_if_exception': True, 'settings': {'number_of_replicas': 1, 'number_of_shards': 5}, 'disable_action': False}
2017-12-04 14:22:41,490 INFO      PUT https://es.domain.com:443/%3Cweb-app1-metrics-%7Bnow%2Fd%7D%3E [status:200 request:1.185s]
2017-12-04 14:22:41,490 INFO      Action ID: create_web-app1-metrics, "create_index" completed.
2017-12-04 14:22:41,490 INFO      Preparing Action ID: create_web-app2-metrics, "create_index"
2017-12-04 14:22:41,533 INFO      GET https://es.domain.com:443/ [status:200 request:0.033s]
2017-12-04 14:22:41,534 INFO      Trying Action ID: create_web-app2-metrics, "create_index": Create Index with the 1st of this Month in Daily Format - web-app2-metrics-2017.12.01
2017-12-04 14:22:41,534 INFO      "<web-app2-metrics-{now/M}>" is using Elasticsearch date math.
2017-12-04 14:22:41,534 INFO      Creating index "<web-app2-metrics-{now/M}>" with settings: {'continue_if_exception': True, 'settings': {'number_of_replicas': 2, 'number_of_shards': 5}, 'disable_action': False}
2017-12-04 14:22:41,634 INFO      PUT https://es.domain.com:443/%3Cweb-app2-metrics-%7Bnow%2FM%7D%3E [status:200 request:0.099s]
2017-12-04 14:22:41,634 INFO      Action ID: create_web-app2-metrics, "create_index" completed.
2017-12-04 14:22:41,634 INFO      Preparing Action ID: create_web-app3-metrics, "create_index"
2017-12-04 14:22:41,673 INFO      GET https://es.domain.com:443/ [status:200 request:0.028s]
2017-12-04 14:22:41,674 INFO      Trying Action ID: create_web-app3-metrics, "create_index": Create Index with Last Months Date in Month Format - web-app3-metrics-2017.11
2017-12-04 14:22:41,674 INFO      "<web-app2-metrics-{now/M-1M{YYYY.MM}}>" is using Elasticsearch date math.
2017-12-04 14:22:41,674 INFO      Creating index "<web-app2-metrics-{now/M-1M{YYYY.MM}}>" with settings: {'continue_if_exception': True, 'settings': {'number_of_replicas': 2, 'number_of_shards': 5}, 'disable_action': False}
2017-12-04 14:22:41,750 INFO      PUT https://es.domain.com:443/%3Cweb-app2-metrics-%7Bnow%2FM-1M%7BYYYY.MM%7D%7D%3E [status:200 request:0.076s]
2017-12-04 14:22:41,751 INFO      Action ID: create_web-app3-metrics, "create_index" completed.
2017-12-04 14:22:41,751 INFO      Preparing Action ID: create_web-app4-metrics, "create_index"
2017-12-04 14:22:41,785 INFO      GET https://es.domain.com:443/ [status:200 request:0.027s]
2017-12-04 14:22:41,786 INFO      Trying Action ID: create_web-app4-metrics, "create_index": Create Index with Daily Format 12 Hours from Now - web-app4-metrics-2017.12.05
2017-12-04 14:22:41,786 INFO      "<web-app2-metrics-{now/d{YYYY.MM.dd|+12:00}}>" is using Elasticsearch date math.
2017-12-04 14:22:41,786 INFO      Creating index "<web-app2-metrics-{now/d{YYYY.MM.dd|+12:00}}>" with settings: {'continue_if_exception': True, 'settings': {'number_of_replicas': 2, 'number_of_shards': 5}, 'disable_action': False}
2017-12-04 14:22:42,182 INFO      PUT https://es.domain.com:443/%3Cweb-app2-metrics-%7Bnow%2Fd%7BYYYY.MM.dd%7C%2B12%3A00%7D%7D%3E [status:200 request:0.396s]
2017-12-04 14:22:42,183 INFO      Action ID: create_web-app4-metrics, "create_index" completed.
2017-12-04 14:22:42,183 INFO      Job completed.

Let's have a look at our indices to confirm that they were created:

$ curl -s -XGET "https://es.domain.com/_cat/indices/web-*?v"
health status index                       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   web-app2-metrics-2017.12.01 qJHVyft1THemh1qGvA8u0w   5   2          0            0       810b           810b
green  open   web-app2-metrics-2017.11    y5R4vNfOSh2tiC-yGtkgLg   5   2          0            0       810b           810b
green  open   web-app2-metrics-2017.12.05 -ohbgD6-TmmCeJtVv84dPw   5   2          0            0       810b           810b
green  open   web-app1-metrics-2017.12.04 WeGkgB9FSq-cuLVR7ccQFQ   5   1          0            0       810b           810b

Action: Reindex Indices based on Timestring

I would like to reindex a month's worth of index data into a monthly index:

action-reindex.yml
---
actions:
  re-index_web-app1-metrics:
    action: reindex
    description: "reindex web-app1-metrics to monthly index of last months date - archive-web-app1-metrics-2017.11"
    options:
      continue_if_exception: False
      disable_action: False
      wait_interval: 9
      max_wait: -1
      request_body:
        source:
          index: '<web-app1-metrics-{now/d-31d{YYYY.MM.dd}}>'
        dest:
          index: '<archive-web-app1-metrics-{now/M-1M{YYYY.MM}}>'
    filters:
    - filtertype: none

Running Curator to reindex all of last month's data, web-app1-metrics-2017.11.{01-31}, into the index archive-web-app1-metrics-2017.11:

$ curator --config config.yml action-reindex.yml

Curator to Change Replica Counts on your Indices:

We will change the replica count to 2 on all indices that match our prefix pattern. We are using wait_for_completion so the job will only complete once the replica count has been updated and the data has been replicated to the replica shards.

Our action file:

action-replicas.yml
---
actions:
  increase_replica_2:
    action: replicas
    description: >-
      Increase the replica count to 2 for indices matching the
      packet-capture-2017.11. prefix
    options:
      count: 2
      max_wait: -1
      wait_interval: 10
      wait_for_completion: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: packet-capture-2017.11.

Using Curator to increase our replica count on all the matched indices:

$ curator --config config.yml action-replicas.yml
2017-12-04 13:42:41,322 INFO      Health Check for all provided keys passed.
2017-12-04 13:42:41,323 INFO      Action ID: increase_replica_2, "replicas" completed.
2017-12-04 13:42:41,323 INFO      Job completed.

Curator to Delete your Indices:

action-delete.yml
---
# documentation:
# https://www.elastic.co/guide/en/elasticsearch/client/curator/current/ex_delete_indices.html

actions:
  delete-index_web-app1-metrics:
    action: delete_indices
    description: >-
      Delete indices older than 21 days - based on index name, web-app1-metrics-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: web-app1-metrics-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 21
      exclude:

  delete-index_web-app2-metrics:
    action: delete_indices
    description: >-
      Delete indices older than 1 month - based on index name, web-app2-metrics-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: web-app2-metrics-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: months
      unit_count: 1
      exclude:

First we will execute a Dry Run:

$ curator --config /opt/curator/es-dev/config.yml /opt/curator/es-dev/actions/action-delete.yml --dry-run

2017-12-04 14:43:19,789 INFO      Preparing Action ID: delete-index_web-app1-metrics, "delete_indices"
2017-12-04 14:43:19,850 INFO      GET https://es.domain.com:443/ [status:200 request:0.037s]
2017-12-04 14:43:19,851 INFO      Trying Action ID: delete-index_web-app1-metrics, "delete_indices": Delete indices older than 21 days - based on index name, web-app1-metrics- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-12-04 14:43:19,859 INFO      GET https://es.domain.com:443/_all/_settings?expand_wildcards=open%2Cclosed [status:200 request:0.008s]
2017-12-04 14:43:19,862 INFO      GET https://es.domain.com:443/ [status:200 request:0.002s]
2017-12-04 14:43:19,957 INFO      DRY-RUN MODE.  No changes will be made.
2017-12-04 14:43:19,957 INFO      (CLOSED) indices may be shown that may not be acted on by action "delete_indices".
2017-12-04 14:43:19,957 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.01 with arguments: {}
2017-12-04 14:43:19,958 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.02 with arguments: {}
2017-12-04 14:43:19,958 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.03 with arguments: {}
2017-12-04 14:43:19,958 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.04 with arguments: {}
2017-12-04 14:43:19,958 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.05 with arguments: {}
2017-12-04 14:43:19,958 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.06 with arguments: {}
2017-12-04 14:43:19,958 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.07 with arguments: {}
2017-12-04 14:43:19,958 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.08 with arguments: {}
2017-12-04 14:43:19,959 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.09 with arguments: {}
2017-12-04 14:43:19,959 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.10 with arguments: {}
2017-12-04 14:43:19,959 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.11 with arguments: {}
2017-12-04 14:43:19,959 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.12 with arguments: {}
2017-12-04 14:43:19,959 INFO      DRY-RUN: delete_indices: web-app1-metrics-2017.11.13 with arguments: {}
2017-12-04 14:43:19,959 INFO      Action ID: delete-index_web-app1-metrics, "delete_indices" completed.
2017-12-04 14:43:19,959 INFO      Preparing Action ID: delete-index_web-app2-metrics, "delete_indices"
2017-12-04 14:43:20,025 INFO      GET https://es.domain.com:443/ [status:200 request:0.050s]
2017-12-04 14:43:20,026 INFO      Trying Action ID: delete-index_web-app2-metrics, "delete_indices": Delete indices older than 1 month - based on index name, web-app2-metrics- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-12-04 14:43:20,034 INFO      GET https://es.domain.com:443/_all/_settings?expand_wildcards=open%2Cclosed [status:200 request:0.008s]
2017-12-04 14:43:20,039 INFO      GET https://es.domain.com:443/ [status:200 request:0.003s]
2017-12-04 14:43:20,090 INFO      DRY-RUN MODE.  No changes will be made.
2017-12-04 14:43:20,090 INFO      (CLOSED) indices may be shown that may not be acted on by action "delete_indices".
2017-12-04 14:43:20,090 INFO      DRY-RUN: delete_indices: web-app2-metrics-2017.11.01 with arguments: {}
2017-12-04 14:43:20,090 INFO      DRY-RUN: delete_indices: web-app2-metrics-2017.11.02 with arguments: {}
2017-12-04 14:43:20,090 INFO      DRY-RUN: delete_indices: web-app2-metrics-2017.11.03 with arguments: {}
2017-12-04 14:43:20,090 INFO      DRY-RUN: delete_indices: web-app2-metrics-2017.11.04 with arguments: {}
2017-12-04 14:43:20,090 INFO      Action ID: delete-index_web-app2-metrics, "delete_indices" completed.
2017-12-04 14:43:20,090 INFO      Job completed.

Everything seems to be as expected; let's run Curator without dry-run mode:

$ curator --config config.yml action-delete.yml

2017-12-04 14:43:40,042 INFO      Deleting selected indices: [u'web-app1-metrics-2017.11.06', u'web-app1-metrics-2017.11.07', u'web-app1-metrics-2017.11.04', u'web-app1-metrics-2017.11.05', u'web-app1-metrics-2017.11.02', u'web-app1-metrics-2017.11.03', u'web-app1-metrics-2017.11.01', u'web-app1-metrics-2017.11.08', u'web-app1-metrics-2017.11.09', u'web-app1-metrics-2017.11.11', u'web-app1-metrics-2017.11.10', u'web-app1-metrics-2017.11.13', u'web-app1-metrics-2017.11.12']
2017-12-04 14:43:40,043 INFO      ---deleting index web-app1-metrics-2017.11.06
2017-12-04 14:43:40,043 INFO      ---deleting index web-app1-metrics-2017.11.07
2017-12-04 14:43:40,043 INFO      ---deleting index web-app1-metrics-2017.11.04
2017-12-04 14:43:40,043 INFO      ---deleting index web-app1-metrics-2017.11.05
2017-12-04 14:43:40,043 INFO      ---deleting index web-app1-metrics-2017.11.02
2017-12-04 14:43:40,043 INFO      ---deleting index web-app1-metrics-2017.11.03
2017-12-04 14:43:40,043 INFO      ---deleting index web-app1-metrics-2017.11.01
2017-12-04 14:43:40,044 INFO      ---deleting index web-app1-metrics-2017.11.08
2017-12-04 14:43:40,044 INFO      ---deleting index web-app1-metrics-2017.11.09
2017-12-04 14:43:40,044 INFO      ---deleting index web-app1-metrics-2017.11.11
2017-12-04 14:43:40,044 INFO      ---deleting index web-app1-metrics-2017.11.10
2017-12-04 14:43:40,044 INFO      ---deleting index web-app1-metrics-2017.11.13
2017-12-04 14:43:40,044 INFO      ---deleting index web-app1-metrics-2017.11.12
2017-12-04 14:43:40,287 INFO      DELETE https://es.domain.com:443/web-app1-metrics-2017.11.01,web-app1-metrics-2017.11.02,web-app1-metrics-2017.11.03,web-app1-metrics-2017.11.04,web-app1-metrics-2017.11.05,web-app1-metrics-2017.11.06,web-app1-metrics-2017.11.07,web-app1-metrics-2017.11.08,web-app1-metrics-2017.11.09,web-app1-metrics-2017.11.10,web-app1-metrics-2017.11.11,web-app1-metrics-2017.11.12,web-app1-metrics-2017.11.13?master_timeout=30s [status:200 request:0.243s]
2017-12-04 14:43:40,417 INFO      Action ID: delete-index_web-app1-metrics, "delete_indices" completed.
2017-12-04 14:43:40,417 INFO      Preparing Action ID: delete-index_web-app2-metrics, "delete_indices"
2017-12-04 14:43:40,453 INFO      Trying Action ID: delete-index_web-app2-metrics, "delete_indices": Delete indices older than 1 month - based on index name, web-app2-metrics- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-12-04 14:43:40,491 INFO      Deleting selected indices: [u'web-app2-metrics-2017.11.03', u'web-app2-metrics-2017.11.01', u'web-app2-metrics-2017.11.02', u'web-app2-metrics-2017.11.04']
2017-12-04 14:43:40,492 INFO      ---deleting index web-app2-metrics-2017.11.03
2017-12-04 14:43:40,492 INFO      ---deleting index web-app2-metrics-2017.11.01
2017-12-04 14:43:40,492 INFO      ---deleting index web-app2-metrics-2017.11.02
2017-12-04 14:43:40,492 INFO      ---deleting index web-app2-metrics-2017.11.04
2017-12-04 14:43:40,566 INFO      DELETE https://es.domain.com:443/web-app2-metrics-2017.11.01,web-app2-metrics-2017.11.02,web-app2-metrics-2017.11.03,web-app2-metrics-2017.11.04?master_timeout=30s [status:200 request:0.074s]
2017-12-04 14:43:40,595 INFO      GET https://es.domain.com:443/ [status:200 request:0.002s]
2017-12-04 14:43:40,596 INFO      Action ID: delete-index_web-app2-metrics, "delete_indices" completed.
2017-12-04 14:43:40,596 INFO      Job completed.
