Ruan Bekker's Blog

From a Curious mind to Posts on Github

Capture Geo Location Data With Python Flask and PyGeoIP

With the PyGeoIP package you can capture geolocation data, which is pretty cool when, for example, you have IoT devices pushing location data to Elasticsearch and visualizing the data with Kibana. That is just one example, but the possibilities are endless.

Dependencies:

Get the Maxmind Geo Database:

$ wget -N http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
$ gunzip GeoLiteCity.dat.gz

Install Python Flask and PyGeoIP:

$ pip install flask pygeoip

Getting Started with PyGeoIP:

Let’s run through a couple of examples on how to get:

  • Country Name by IP Address and DNS
  • Country Code by IP Address and DNS
  • GeoData by IP Address
>>> import pygeoip, json
>>> gi = pygeoip.GeoIP('GeoLiteCity.dat')

>>> gi.country_name_by_addr('8.8.8.8')
'United States'
>>> gi.country_code_by_addr('8.8.8.8')
'US'

>>> gi.country_name_by_name('scaleway.com')
'France'
>>> gi.country_code_by_name('scaleway.com')
'FR'

>>> gi.region_by_name('scaleway.com')
{'region_code': None, 'country_code': 'FR'}

>>> data = gi.record_by_addr('104.244.42.193')
>>> print(json.dumps(data, indent=2))
{
  "city": "San Francisco",
  "region_code": "CA",
  "area_code": 415,
  "time_zone": "America/Los_Angeles",
  "dma_code": 807,
  "metro_code": "San Francisco, CA",
  "country_code3": "USA",
  "latitude": 37.775800000000004,
  "postal_code": "94103",
  "longitude": -122.4128,
  "country_code": "US",
  "country_name": "United States",
  "continent": "NA"
}

>>> data = gi.record_by_name('twitter.com')
>>> print(json.dumps(data, indent=2))
{
  "city": "San Francisco",
  "region_code": "CA",
  "area_code": 415,
  "time_zone": "America/Los_Angeles",
  "dma_code": 807,
  "metro_code": "San Francisco, CA",
  "country_code3": "USA",
  "latitude": 37.775800000000004,
  "postal_code": "94103",
  "longitude": -122.4128,
  "country_code": "US",
  "country_name": "United States",
  "continent": "NA"
}

Python Flask Web App to Capture Data

Let's create a basic Flask app that will capture the geo data of the client making the request to the server. In this example we will just return the data, but we could also filter it and ingest it into a database like Elasticsearch, as sketched after the app code.

from flask import Flask, request, jsonify
import pygeoip, json

app = Flask(__name__)

geo = pygeoip.GeoIP('GeoLiteCity.dat', pygeoip.MEMORY_CACHE)

@app.route('/')
def index():
    client_ip = request.remote_addr
    geo_data = geo.record_by_addr(client_ip)
    return json.dumps(geo_data, indent=2) + '\n'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80, debug=False)
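
As mentioned, instead of just returning the data we could ship it to Elasticsearch and visualize it in Kibana. Below is a minimal sketch of that idea, assuming the elasticsearch Python client (pip install elasticsearch) and a cluster on localhost:9200; the geodata index name and the X-Forwarded-For handling are my own additions:

from flask import Flask, request, jsonify
from elasticsearch import Elasticsearch
import pygeoip

app = Flask(__name__)
geo = pygeoip.GeoIP('GeoLiteCity.dat', pygeoip.MEMORY_CACHE)
es = Elasticsearch(['http://localhost:9200'])  # assumption: local Elasticsearch

@app.route('/')
def index():
    # behind a proxy or load balancer the real client IP usually sits in X-Forwarded-For
    client_ip = request.headers.get('X-Forwarded-For', request.remote_addr)
    geo_data = geo.record_by_addr(client_ip)
    if geo_data is None:
        # pygeoip returns None for private or unknown addresses
        return jsonify({'error': 'no geo record for %s' % client_ip}), 404
    # add the IP and a geo_point-friendly location field for Kibana map visualizations
    doc = dict(geo_data, ip=client_ip,
               location={'lat': geo_data['latitude'], 'lon': geo_data['longitude']})
    es.index(index='geodata', body=doc)
    return jsonify(doc)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80, debug=False)

For the map visualization, the location field would still need an explicit geo_point mapping on the geodata index.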

Run the Server:

$ python app.py

Make a request from the client over a remote connection:

$ curl http://remote-endpoint.com
{
  "city": "Cape Town",
  "region_code": "11",
  "area_code": 0,
  "time_zone": "Africa/Johannesburg",
  "dma_code": 0,
  "metro_code": null,
  "country_code3": "ZAF",
  "latitude": -01.12345,
  "postal_code": "8000",
  "longitude": 02.123456789,
  "country_code": "ZA",
  "country_name": "South Africa",
  "continent": "AF"
}


Install Java Development Kit 10 on Ubuntu

With the announcement of improved Docker container integration in Java 10, the JVM is now aware of container resource constraints, unlike prior versions. More information is available in this post.

Differences between Java 8 and Java 10:

As you can see with Java 8:

$ docker run -it -m512M --entrypoint bash openjdk:latest

$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
    uintx MaxHeapSize                              := 524288000                          {product}
openjdk version "1.8.0_162"

And with Java 10:

$ docker run -it -m512M --entrypoint bash openjdk:10-jdk

$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
   size_t MaxHeapSize                              = 134217728                                {product} {ergonomic}
openjdk version "10" 2018-03-20

Installing JDK 10 on Ubuntu:

Installing Java Development Kit 10:

$ apt update && apt upgrade -y
$ add-apt-repository ppa:linuxuprising/java
$ apt update
$ apt install oracle-java10-installer
$ apt install oracle-java10-set-default

Confirming the Java Version:

$ java -version
java version "10.0.1" 2018-04-17
Java(TM) SE Runtime Environment 18.3 (build 10.0.1+10)
Java HotSpot(TM) 64-Bit Server VM 18.3 (build 10.0.1+10, mixed mode)

Setup a LAMP Stack With Ansible Using Ubuntu

This is Part 2 of our Ansible tutorial, and in this post we will cover how to set up a LAMP stack on Ubuntu using Ansible. We will only have one host in our inventory, but this can be scaled easily by increasing the number of nodes in your inventory configuration file (see the example inventory below).
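
For reference, a minimal inventory.ini for this playbook could look as follows (a sketch; adjust host names to your environment). The newhost group name matches the playbook below, and web-1 matches the output further down; scaling out is simply a matter of appending more hosts to the group:

[newhost]
web-1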

Our Playbook:

Our lamp.yml playbook:

lamp.yml
---
# Setup LAMP Stack
- hosts: newhost
  tasks:
    - name: install lamp stack
      become: yes
      become_user: root
      apt:
        pkg:
          - apache2
          - mysql-server
          - php7.0
          - php7.0-mysql
        state: present
        update_cache: yes

    - name: start apache service
      become: yes
      become_user: root
      service:
        name: apache2
        state: started
        enabled: yes

    - name: start mysql service
      become: yes
      become_user: root
      service:
        name: mysql
        state: started
        enabled: yes

    - name: create target directory
      file: path=/var/www/html state=directory mode=0755

    - name: deploy index.html
      become: yes
      become_user: root
      copy:
        src: /tmp/index.html
        dest: /var/www/html/index.html

Our index.html that will be deployed on our servers:

/tmp/index.html
<!DOCTYPE html>
<html>
  <body>
    <h1>Deployed with Ansible</h1>
  </body>
</html>

Deploy your LAMP Stack:

$ ansible-playbook -i inventory.ini -u root lamp.yml

PLAY [newhost] ***************************************************************************************************************************

TASK [Gathering Facts] *******************************************************************************************************************
ok: [web-1]

TASK [install lamp stack] ****************************************************************************************************************
ok: [web-1] => (item=[u'apache2', u'mysql-server', u'php7.0', u'php7.0-mysql'])

TASK [start services] ********************************************************************************************************************
ok: [web-1] => (item=apache2)
ok: [web-1] => (item=mysql)

TASK [deploy index.html] *****************************************************************************************************************
changed: [web-1]

PLAY RECAP *******************************************************************************************************************************
web-1                      : ok=4    changed=1    unreachable=0    failed=0

Test our web server:

$ curl http://10.0.0.4

Deployed with Ansible

Getting Started With Ansible on Ubuntu

Part 1 - This is a getting started series on Ansible.

The first post covers how to set up Ansible and how to reach your nodes in order to deploy software to them.

Install Ansible:

Ansible relies on Python, so we will first install the dependencies:

$ apt update && apt install python python-setuptools -y
$ easy_install pip
$ pip install ansible

Populate the inventory configuration:

Your inventory file holds your host and variable information. Let's say we have 3 nodes that we want to deploy software to: node-1, node-2 and node-3. We will group them under nodes and save this in a new file, inventory.ini:

inventory.ini
[nodes]
node-1
node-2
node-3

Next we will populate the connection details for our node names; this is done in our ~/.ssh/config:

~/.ssh/config
Host node-1
  Hostname 10.0.0.2
  User root
  IdentityFile ~/.ssh/id_rsa
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host node-2
  Hostname 10.0.0.3
  User root
  IdentityFile ~/.ssh/id_rsa
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host node-3
  Hostname 10.0.0.4
  User root
  IdentityFile ~/.ssh/id_rsa
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Now we need to generate an SSH key on the node from which we will run our Ansible commands:

$ ssh-keygen -b 2048 -f ~/.ssh/id_rsa -t rsa -q -N ""

Now copy the contents of ~/.ssh/id_rsa.pub into each destination node's ~/.ssh/authorized_keys, or, if you have password authentication enabled, run $ ssh-copy-id root@10.0.0.x for each node. We should now be able to SSH to node-1, node-2 and node-3.

Deploy Python:

As Ansible requires Python, we need to bootstrap our nodes with it. Since we are already able to SSH to our nodes, we will use Ansible's raw module to deploy Python to them:

$ ansible -m raw -s -a "apt update && apt install python -y" -i inventory.ini nodes

This should succeed, then we can test our connection by running the ping module:

$ ansible -i inventory.ini nodes -m ping
node-2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node-3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Run a command on your nodes:

Let’s run a cat command on all the nodes:

$ ansible -i inventory.ini nodes -a "/bin/cat /etc/hostname"
node-3 | SUCCESS | rc=0 >>
node-3

node-1 | SUCCESS | rc=0 >>
node-1

node-2 | SUCCESS | rc=0 >>
node-2

Ansible Playbooks:

Let's run shell commands, the traditional hello world, using the ansible-playbook command. First we need a playbook, which I will name shell_command-1.yml:

shell_command-1.yml
---
# Echo Static String
- hosts: nodes
  tasks:
  - name: echo static value
    shell: /bin/echo "hello world"
    register: echo_static
  - debug: msg="{{ echo_static.stdout }}"

Now we have defined that our commands will be executed against the nodes host group from our inventory.ini. Let's run our ansible-playbook command:

$ ansible-playbook -i inventory.ini shell_command-1.yml

PLAY [nodes] *************************************************************************************

TASK [Gathering Facts] **********************************************************************************
ok: [node-1]
ok: [node-2]
ok: [node-3]

TASK [echo static value] ********************************************************************************
changed: [node-1]
changed: [node-2]
changed: [node-3]

TASK [debug] ********************************************************************************************
ok: [node-1] => {
    "msg": "hello world"
}
ok: [node-2] => {
    "msg": "hello world"
}
ok: [node-3] => {
    "msg": "hello world"
}

PLAY RECAP **********************************************************************************************
node-1              : ok=3    changed=1    unreachable=0    failed=0
node-2              : ok=3    changed=1    unreachable=0    failed=0
node-3              : ok=3    changed=1    unreachable=0    failed=0

Let’s define a variable location_city = Cape Town in our inventory.ini configuration, then we will call the variable key in our task definition:

inventory.ini
[nodes]
node-1
node-2
node-3

[nodes:vars]
location_city="Cape Town"

Now our task definition with our variable:

shell_command-2.yml
---
# Echo Variable
- hosts: nodes
  tasks:
  - name: echo variable value
    shell: /bin/echo "{{ location_city }}"
    register: echo
  - debug: msg="{{ echo.stdout }}"

Running our ansible-playbook:

$ ansible-playbook -i inventory.ini shell_command-2.yml

PLAY [nodes] **************************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************************
ok: [node-1]
ok: [node-2]
ok: [node-3]

TASK [echo variable value] *******************************************************************************************************************************************************************************
changed: [node-1]
changed: [node-2]
changed: [node-3]

TASK [debug] *********************************************************************************************************************************************************************************************
ok: [node-1] => {
    "msg": "Cape Town"
}
ok: [node-2] => {
    "msg": "Cape Town"
}
ok: [node-3] => {
    "msg": "Cape Town"
}

PLAY RECAP ***********************************************************************************************************************************************************************************************
node-1              : ok=3    changed=1    unreachable=0    failed=0
node-2              : ok=3    changed=1    unreachable=0    failed=0
node-3              : ok=3    changed=1    unreachable=0    failed=0

That's it for this post; all posts in this tutorial will be published under #ansible-tutorials.

Salt and Hash Example Using Python With Bcrypt on Alpine

This post shows an example of how to hash a password with a salt. In cryptography a salt is random data that is fed, together with the input, through a one-way function that hashes data like passwords. The advantage of using salts is to protect your sensitive data against dictionary and rainbow-table attacks. Every time a new salt is applied to the same string, the hashed string will be different.

Installing Bcrypt

I will be using bcrypt to hash my password. I always use Alpine images, and this is how I got bcrypt running on Alpine:

$ docker run -it alpine sh
$ apk add python python-dev py2-pip autoconf automake g++ make --no-cache
$ pip install py-bcrypt

This command should produce a 0 exit code:

$ python -c 'import bcrypt'; echo $?

Bcrypt Example to Hash a Password

Here is an example to show you the output when a salt is applied to a string, such as a password. First we will define our very weak password:

>>> import bcrypt
>>> password = 'pass123'
>>> password
'pass123'

The bcrypt package has a function called gensalt() that accepts a parameter log_rounds, which defines the complexity of the hashing. Let's create a hash for our password:

>>> bcrypt.hashpw(password, bcrypt.gensalt(12))
'$2a$12$iquyyyJAlA9nZwlGo0CYK.J37Qn.to/0mTtiCspNAyO8778006XZG'

>>> bcrypt.hashpw(password, bcrypt.gensalt(12))
'$2a$12$UzNjJ1W/cWqBrt5rzNkb..j.gUvrW64DbvVkNbhRDzBtbRvNInaqq'

As you can see, the hashed string is different the second time we call it, because a new random salt is generated on every call.
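
The cost parameter is also what makes bcrypt deliberately slow: each increment of log_rounds roughly doubles the hashing time. A quick sketch to see this for yourself (written against the modern bcrypt package, which works on bytes):

import time
import bcrypt

# time a single hash at increasing cost factors;
# expect the duration to roughly double per increment
for rounds in (10, 12, 14):
    start = time.time()
    bcrypt.hashpw(b'pass123', bcrypt.gensalt(rounds))
    print('log_rounds={0}: {1:.3f}s'.format(rounds, time.time() - start))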

Bcrypt Salt Hash and Verification Example:

Thanks to this post, here is an example of how to hash strings and how to verify a plain text password against the stored hash.

Our functions to create the hash and to verify the password:

>>> import bcrypt
>>> def get_hashed_password(plain_text_password):
...     return bcrypt.hashpw(plain_text_password, bcrypt.gensalt())
...
>>>
>>> def check_password(plain_text_password, hashed_password):
...     return bcrypt.checkpw(plain_text_password, hashed_password)
...
>>>

Create a hashed string:

>>> print(get_hashed_password('mynewpassword'))
$2a$12$/MemcgbnwJLN8XE86VQZseVxopU6tY76KxnH/AJ0I9T9y1Ldko5gm

Verify your plain text password against the stored hash (which embeds the salt that was created):

>>> print(check_password('mynewpassword', '$2a$12$/MemcgbnwJLN8XE86VQZseVxopU6tY76KxnH/AJ0I9T9y1Ldko5gm'))
True

When you provide the wrong password with the correct hash, the verification will fail:

>>> print(check_password('myOLDpassword', '$2a$12$/MemcgbnwJLN8XE86VQZseVxopU6tY76KxnH/AJ0I9T9y1Ldko5gm'))
False

When you provide the correct password with a tampered hash, the verification will also fail:

>>> print(check_password('mynewpassword', '$2a$12$/MemcgbnwJLN8XE86VQZseVxopU6tY76KxnH/AJ0I9T9y1Ldko5gmX'))
False
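
To tie it all together, here is a small self-contained sketch of the hash-and-verify flow, written against the modern bcrypt package (pip install bcrypt), which works on bytes rather than str:

import bcrypt

def get_hashed_password(plain_text_password):
    # gensalt() embeds a fresh random salt and the cost factor in the result
    return bcrypt.hashpw(plain_text_password.encode('utf-8'), bcrypt.gensalt(12))

def check_password(plain_text_password, hashed_password):
    # checkpw re-hashes the candidate using the salt stored in hashed_password
    return bcrypt.checkpw(plain_text_password.encode('utf-8'), hashed_password)

hashed = get_hashed_password('mynewpassword')
print(hashed)                                   # e.g. b'$2b$12$...'
print(check_password('mynewpassword', hashed))  # True
print(check_password('myOLDpassword', hashed))  # False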

Setup a PPTP VPN on Ubuntu

In this post we will set up a PPTP VPN on Ubuntu 16.04.

Disable IPv6 Networking:

Edit the grub config:

$ vi /etc/default/grub

Make the following changes:

GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"

Update Grub and Reboot:

$ update-grub
$ reboot

Updates and Install PPTP:

Update Repositories and install PPTPD:

$ apt update && apt upgrade -y
$ apt install pptpd -y

Configure your Authentication

$ vi /etc/ppp/chap-secrets
# client server  secret          IP addresses
youruser      pptpd   yourpass        *

Configure the local and remote IPs; in this case the local IP is 10.1.1.1 and remote clients are assigned addresses from 10.1.1.2 up to 10.1.5.254:

$ vi /etc/pptpd.conf
option /etc/ppp/pptpd-options
logwtmp
connections 10000
localip 10.1.1.1
remoteip 10.1.1.2-254,10.1.2.1-254,10.1.3.2-254,10.1.4.1-254,10.1.5.1-254
# for a /24 you can set
# remoteip 10.1.1.2-254

Enable IP Forwarding:

Edit the sysctl.conf and enable IP Forwarding:

$ vim /etc/sysctl.conf

Populate the following value:

net.ipv4.ip_forward=1

Apply the changes:

$ sysctl -p

Enable and Start PPTPD:

Enable the service on boot and start the service:

$ systemctl enable pptpd
$ systemctl start pptpd
$ systemctl status pptpd

Connect to your VPN.


Deploy Docker Swarm Using Ansible

In this setup we will use Ansible to deploy Docker Swarm.

With this setup I have a client node, which will be my jump box: it is used to SSH to my swarm nodes as the docker user with passwordless SSH access.

The repository for the source code can be found on my Github Repository
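
The playbook itself lives in the repository above. As a rough sketch of the core idea only (not the repo's exact playbook, and assuming a swarm-workers group exists in the inventory), initialising the manager, registering the join token and joining the workers could look like this:

---
# sketch: initialise the swarm on the manager, then join the workers
- hosts: swarm-manager
  tasks:
    - name: initialise the swarm
      shell: docker swarm init --advertise-addr 192.168.1.10

    - name: capture the worker join token
      shell: docker swarm join-token -q worker
      register: worker_token

- hosts: swarm-workers
  tasks:
    - name: join the swarm as a worker
      shell: "docker swarm join --token {{ hostvars['swarm-manager'].worker_token.stdout }} 192.168.1.10:2377"

Note that bare shell tasks like these are not idempotent; re-running the play would attempt to re-init and re-join.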

Pre-Check

Hosts file:

$ cat /etc/hosts
10.0.8.2 client
192.168.1.10 swarm-manager
192.168.1.11 swarm-worker-1
192.168.1.12 swarm-worker-2

SSH Config:

$ cat ~/.ssh/config 
Host client
  Hostname client
  User root
  IdentityFile /tmp/key.pem
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host swarm-manager
  Hostname swarm-manager
  User root
  IdentityFile /tmp/key.pem
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host swarm-worker-1
  Hostname swarm-worker-1
  User root
  IdentityFile /tmp/key.pem
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host swarm-worker-2
  Hostname swarm-worker-2
  User root
  IdentityFile /tmp/key.pem
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Install Ansible:

$ apt install python-setuptools -y
$ easy_install pip
$ pip install ansible

Ensure passwordless ssh is working:

$ ansible -i inventory.ini -u root -m ping all
client | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
swarm-manager | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
swarm-worker-2 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
swarm-worker-1 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Deploy Docker Swarm

$ ansible-playbook -i inventory.ini -u root deploy-swarm.yml 
PLAY RECAP 

client                     : ok=11   changed=3    unreachable=0    failed=0   
swarm-manager              : ok=18   changed=4    unreachable=0    failed=0   
swarm-worker-1             : ok=15   changed=1    unreachable=0    failed=0   
swarm-worker-2             : ok=15   changed=1    unreachable=0    failed=0   

SSH to the Swarm Manager and List the Nodes:

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
0ead0jshzkpyrw7livudrzq9o *   swarm-manager       Ready               Active              Leader              18.03.1-ce
iwyp6t3wcjdww0r797kwwkvvy     swarm-worker-1      Ready               Active                                  18.03.1-ce
ytcc86ixi0kuuw5mq5xxqamt1     swarm-worker-2      Ready               Active                                  18.03.1-ce

Test Application on Swarm

Create an Nginx Demo Service:

$ docker network create --driver overlay appnet
$ docker service create --name nginx --publish 80:80 --network appnet --replicas 6 nginx
$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
k3vwvhmiqbfk        nginx               replicated          6/6                 nginx:latest        *:80->80/tcp

$ docker service ps nginx
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
tspsypgis3qe        nginx.1             nginx:latest        swarm-manager       Running             Running 34 seconds ago                       
g2f0ytwb2jjg        nginx.2             nginx:latest        swarm-worker-1      Running             Running 34 seconds ago                       
clcmew8bcvom        nginx.3             nginx:latest        swarm-manager       Running             Running 34 seconds ago                       
q293r8zwu692        nginx.4             nginx:latest        swarm-worker-2      Running             Running 34 seconds ago                       
sv7bqa5e08zw        nginx.5             nginx:latest        swarm-worker-1      Running             Running 34 seconds ago                       
r7qg9nk0a9o2        nginx.6             nginx:latest        swarm-worker-2      Running             Running 34 seconds ago   

Test the Application:

$ curl -i http://192.168.1.10
HTTP/1.1 200 OK
Server: nginx/1.15.0
Date: Thu, 14 Jun 2018 10:01:34 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 05 Jun 2018 12:00:18 GMT
Connection: keep-alive
ETag: "5b167b52-264"
Accept-Ranges: bytes

Delete the Service:

$ docker service rm nginx
nginx

Delete the Swarm:

$ ansible-playbook -i inventory.ini -u root delete-swarm.yml 

PLAY RECAP 
swarm-manager              : ok=2    changed=1    unreachable=0    failed=0   
swarm-worker-1             : ok=2    changed=1    unreachable=0    failed=0   
swarm-worker-2             : ok=2    changed=1    unreachable=0    failed=0   

Ensure the nodes are removed from the swarm; SSH to your swarm manager:

$ docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

Setup a 3 Node Ceph Storage Cluster on Ubuntu 16

For some time now I have wanted to do a setup of Ceph, and I finally got the time to do it. This setup was done on Ubuntu 16.04.

What is Ceph

Ceph is a storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object, block and file-level storage.

  • Object Storage:

Ceph provides seamless access to objects via native language bindings or via the REST interface, RadosGW, which is also compatible with applications written for S3 and Swift.

  • Block Storage:

Ceph’s Rados Block Device (RBD) provides access to block device images that are replicated and striped across the storage cluster.

  • File System:

Ceph provides a network file system (CephFS) that aims for high performance.

Our Setup

We will have 4 nodes: 1 admin node from which we will deploy our cluster, and 3 nodes that will hold the data:

  • ceph-admin (10.0.8.2)
  • ceph-node1 (10.0.8.3)
  • ceph-node2 (10.0.8.4)
  • ceph-node3 (10.0.8.5)

Host Entries

If you don't have DNS for your servers, set up the /etc/hosts file so that the names resolve to the IP addresses:

10.0.8.2 ceph-admin
10.0.8.3 ceph-node1
10.0.8.4 ceph-node2
10.0.8.5 ceph-node3

User Accounts and Passwordless SSH

Set up the ceph-system user account on all the servers:

$ useradd -d /home/ceph-system -s /bin/bash -m ceph-system
$ passwd ceph-system

Add the created user to the sudoers so that it is able to issue sudo commands without a password:

$ echo "ceph-system ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-system
$ chmod 0440 /etc/sudoers.d/ceph-system

Switch to the ceph-system user, generate an SSH key, and copy the key from the ceph-admin server to the ceph nodes:

$ sudo su - ceph-system
$ ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
$ ssh-copy-id ceph-system@ceph-node1
$ ssh-copy-id ceph-system@ceph-node2
$ ssh-copy-id ceph-system@ceph-node3
$ ssh-copy-id ceph-system@ceph-admin

Prerequisite Software:

Install Python and Ceph Deploy on each node:

$ sudo apt-get install python -y
$ sudo apt install ceph-deploy -y

Note: Please skip this section if you have additional disks on your servers.

The instances that I'm using to test this setup only have one disk, so I will be creating loop block devices backed by allocated files. This is not recommended, as when the disk fails all the files/block device images will be gone with it. But since I'm only demonstrating, I will create the block devices from a file:

I will be creating a 12GB file on each node:

$ sudo mkdir /raw-disks
$ sudo dd if=/dev/zero of=/raw-disks/rd0 bs=1M count=12288

Then use losetup to create the loop0 block device:

$ sudo losetup /dev/loop0 /raw-disks/rd0

As you can see, the loop device shows up when listing the block devices:

$ lsblk
NAME      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0       7:0    0   12G  0 loop

Install Ceph

Now let's install Ceph on all our nodes using ceph-deploy:

$ sudo apt update && sudo apt upgrade -y
$ ceph-deploy install ceph-admin ceph-node1 ceph-node2 ceph-node3

The version I was running at the time:

$ ceph --version
ceph version 10.2.9

Initialize Ceph

Initialize the Cluster with 3 Monitors:

$ ceph-deploy new ceph-node1 ceph-node2 ceph-node3

Add the initial monitors and gather the keys from the previous command:

$ ceph-deploy mon create-initial

At this point, we should be able to scan the block devices on our nodes:

$ ceph-deploy disk list ceph-node3
[ceph-node3][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[ceph-node3][DEBUG ] /dev/loop0 other

Prepare the Disks:

First we will zap the block devices, then prepare the OSDs, which creates the partitions:

$ ceph-deploy disk zap ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
$ ceph-deploy osd prepare ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host ceph-node2 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host ceph-node3 is now ready for osd use.

When you scan the nodes for their disks, you will notice that the partitions have been created:

$ ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] /dev/loop0p2 ceph journal, for /dev/loop0p1
[ceph-node1][DEBUG ] /dev/loop0p1 ceph data, active, cluster ceph, osd.0, journal /dev/loop0p2

Now let's activate the OSDs using the data partitions:

$ ceph-deploy osd activate ceph-node1:/dev/loop0p1 ceph-node2:/dev/loop0p1 ceph-node3:/dev/loop0p1

Redistribute Keys:

Copy the configuration files and admin key to your admin node and ceph data nodes:

$ ceph-deploy admin ceph-admin ceph-node1 ceph-node2 ceph-node3

If you would like to add more OSDs (not tested):

$ ceph-deploy disk zap ceph-node1:/dev/loop1 ceph-node2:/dev/loop1 ceph-node3:/dev/loop1
$ ceph-deploy osd prepare ceph-node1:/dev/loop1 ceph-node2:/dev/loop1 ceph-node3:/dev/loop1
$ ceph-deploy osd activate ceph-node2:/dev/loop1p1:/dev/loop1p2 ceph-node2:/dev/loop1p1:/dev/loop1p2 ceph-node3:/dev/loop1p1:/dev/loop1p2
$ ceph-deploy admin ceph-node1 ceph-node2 ceph-node3

Ceph Status:

Have a look at your cluster status:

$ sudo ceph -s
    cluster 8d704c8a-ac19-4454-a89f-89a5d5b7d94d
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-node1=10.0.8.3:6789/0,ceph-node2=10.0.8.4:6789/0,ceph-node3=10.0.8.5:6789/0}
            election epoch 10, quorum 0,1,2 ceph-node2,ceph-node3,ceph-node1
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
            100 MB used, 18298 MB / 18398 MB avail
                  64 active+clean

Everything looks good. Also change the permissions on the admin keyring on all the nodes, in order to execute the ceph and rados commands:

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Storage Pools:

List the pools in your Ceph cluster:

$ rados lspools
rbd

Let's create a new storage pool called mypool, with 32 placement groups (the two 32s are the pg_num and pgp_num values):

$ ceph osd pool create mypool 32 32
pool 'mypool' created

Let's list the storage pools again:

$ rados lspools
rbd
mypool

You can also use the ceph command to list the pools:

$ ceph osd pool ls
rbd
mypool

Create a Block Device Image:

$ rbd create --size 1024 mypool/disk1 --image-feature layering

List the Block Device Images under your Pool:

$ rbd list mypool
disk1

Retrieve information from your image:

$ rbd info mypool/disk1
rbd image 'disk1':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.1021643c9869
        format: 2
        features: layering
        flags:
        create_timestamp: Thu Jun  7 23:48:23 2018

Create a local mapping of the image to a block device:

$ sudo rbd map mypool/disk1
/dev/rbd0

Now we have a block device available at /dev/rbd0. Create a filesystem on it first (for example with mkfs.ext4 /dev/rbd0), then go ahead and mount it to /mnt:

$ sudo mount /dev/rbd0 /mnt

We can then see it when we list our mounted disk partitions:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        19G   13G  5.2G  72% /
/dev/rbd0       976M  1.3M  908M   1% /mnt

We can also resize the disk on the fly; let's resize it from 1GB to 2GB:

$ rbd resize mypool/disk1 --size 2048
Resizing image: 100% complete...done.

To grow the space we can use resize2fs for ext4 partitions and xfs_growfs for xfs partitions:

$ sudo resize2fs /dev/rbd0
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/rbd0 is mounted on /mnt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/rbd0 is now 524288 (4k) blocks long.

When we look at our mounted partitions again, you will notice that our mounted partition has increased in size:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        19G   13G  5.2G   72% /
/dev/rbd0       2.0G  1.5M  1.9G   1% /mnt

Object Storage RadosGW

Let’s create a new pool where we will store our objects:

$ ceph osd pool create object-pool 32 32
pool 'object-pool' created

We will now create a local file, push it to our object storage service, delete the local copy, download it under a different name, and read the contents:

Create the local file:

$ echo "ok" > test.txt

Push the local file to our pool in our object storage:

$ rados put objects/data/test.txt ./test.txt --pool object-pool

List the pool (note that this can be executed from any node):

$ rados ls --pool object-pool
objects/data/test.txt

Delete the local file, download the file from our object storage and read the contents:

$ rm -rf test.txt

$ rados get objects/data/test.txt ./newfile.txt --pool object-pool

$ cat ./newfile.txt
ok
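
Earlier I mentioned that Ceph also exposes native language bindings. As a rough sketch of the same put/get flow from Python, assuming the python-rados bindings are installed (apt install python-rados) and the admin keyring is readable:

import rados

# connect with the same config and admin keyring that the CLI tools use
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('object-pool')
ioctx.write_full('objects/data/test.txt', 'ok\n')  # like `rados put`
print(ioctx.read('objects/data/test.txt'))         # like `rados get`

ioctx.close()
cluster.shutdown()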

View the disk usage of our storage pool:

$ rados df --pool object-pool
pool name                 KB      objects       clones     degraded      unfound           rd        rd KB           wr        wr KB
object-pool                1            1            0            0            0            0            0            1            1
  total used          261144           37
  total avail       18579372
  total space       18840516


Hello World Programs in Different Languages

This post will demonstrate running hello world programs in different languages and also provide runtime statistics for each.

C++

Code

#include <iostream>
using namespace std;

int main()
{
    std::cout << "Hello, World!" << std::endl;
    return 0;
}

Compile:

$ c++ hello_cpp.cpp -o hello_cpp

Run:

$ time ./hello_cpp
Hello, World!

real  0m0.005s
user  0m0.001s
sys     0m0.001s

Golang:

Code

package main

import "fmt"

func main() {
  fmt.Println("Hello, World!")
}

Compile:

$ go build hello_golang.go

Run:

$ time ./hello_golang
Hello, World!

real  0m0.006s
user  0m0.001s
sys     0m0.003s

Python

Code:

#!/usr/bin/env python
print("Hello, World!")

Make it executable:

$ chmod +x ./hello_python.py

Run:

$ time ./hello_python.py
Hello, World!

real  0m0.033s
user  0m0.015s
sys     0m0.010s

Ruby

Code:

#!/usr/bin/env ruby
puts "Hello, World!"

Make it executable:

$ chmod +x ./hello_ruby.rb

Run:

$ time ./hello_ruby.rb
Hello, World!

real  0m0.136s
user  0m0.080s
sys     0m0.024s

Java

Code:

public class hello_java {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}

Compile:

$ javac hello_java.java

Run:

$ time java hello_java
Hello, World!

real  0m0.114s
user  0m0.086s
sys     0m0.023s
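
If you want to compare all the runtimes in one go, here is a rough Python harness along the lines of the time invocations above (a sketch, not a rigorous benchmark; the binary names match the ones built in this post and are assumed to live in the current directory):

#!/usr/bin/env python
import subprocess
import time

# commands as compiled/marked executable earlier in this post
commands = [
    ['./hello_cpp'],
    ['./hello_golang'],
    ['./hello_python.py'],
    ['./hello_ruby.rb'],
    ['java', 'hello_java'],
]

for cmd in commands:
    timings = []
    for _ in range(5):
        start = time.time()
        subprocess.check_output(cmd)  # run the program, discarding its stdout
        timings.append(time.time() - start)
    print('{0}: {1:.3f}s (best of 5)'.format(' '.join(cmd), min(timings)))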


Setup a Peer to Peer VPN With VPNCloud on Ubuntu

So I have 3 dedicated servers, each with its own static IP, and I wanted a way to build a private network between these servers.

The Scenario:

3 servers with the following IPs (not their real IP addresses):

- Server 1: 52.1.99.10
- Server 2: 52.1.84.20
- Server 3: 52.1.49.30

I want a private network, so that the servers have the following internal addresses:

- Server 1: 10.0.1.1
- Server 2: 10.0.1.2
- Server 3: 10.0.1.3

A couple of years ago I accomplished the same goal using GRE tunnels, which worked well, but I wanted to try something different.

VPNCloud

So I stumbled upon VPNCloud.rs, which is a peer-to-peer VPN. Their description, quoted from their Github page:

“VpnCloud is a simple VPN over UDP. It creates a virtual network interface on the host and forwards all received data via UDP to the destination. VpnCloud establishes a fully-meshed VPN network in a peer-to-peer manner. It can work on TUN devices (IP based) and TAP devices (Ethernet based).”

This is exactly what I was looking for.

Setting up a 3 node Private Network:

Given the IP configuration above, we will setup a Private network between our 3 hosts.

Do some updates then grab the package from Github and install VPNCloud:

$ apt update && apt upgrade -y
$ wget https://github.com/dswd/vpncloud.rs/releases/download/v0.8.1/vpncloud_0.8.1_amd64.deb
$ dpkg -i ./vpncloud_0.8.1_amd64.deb

Let's start with the configuration on Server-1. The same config should also be set up on the other 2 servers; it remains identical except for the ifup command, which on the other servers will look like:

Server-2: -> ifup: "ifconfig $IFNAME 10.0.1.2/24 mtu 1400"
Server-3: -> ifup: "ifconfig $IFNAME 10.0.1.3/24 mtu 1400"

Getting back to the Server-1 config:

$ vim /etc/vpncloud/private.net

Example Config that I am using:

# each vpn running on their own port
port: 3210

# members of our private network
peers:
  - srv2.domain.com:3210
  - srv3.domain.com:3210

# timeouts
peer_timeout: 1800
dst_timeout: 300

# token that identifies the network and helps to distinguish from other networks
magic: "76706e01"

# pre shared key
shared_key: "VeryStrongPreSharedKey_ThatShouldBeChanged"

# encryption
crypto: aes256

# device info
device_name: "vpncloud%d"
device_type: tap

# vpn modes: hub / switch / router / normal
mode: normal

# subnet to be used for our private network
subnets:
  - 10.0.1.0/24

# command to setup the network
ifup: "ifconfig $IFNAME 10.0.1.1/24 mtu 1400"
ifdown: "ifconfig $IFNAME down"

# user/group owning the process
user: "root"
group: "root"

Repeat the config on the other servers.

Start the VPN Service:

Start the VPNCloud service on all the servers:

$ service vpncloud@private start

Check the status:

$ service vpncloud@private status

Check if the interface is up:

$ ifconfig vpncloud0
vpncloud0 Link encap:Ethernet  HWaddr aa:bb:cc:dd:ee:ff
          inet addr:10.0.1.1  Bcast:10.0.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1400  Metric:1
          RX packets:55 errors:0 dropped:0 overruns:0 frame:0
          TX packets:71 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5046 (5.0 KB)  TX bytes:5526 (5.5 KB)

Ping the 3rd server via the private network:

$ ping -c 3 10.0.1.3
PING 10.0.1.3 (10.0.1.3) 56(84) bytes of data.
64 bytes from 10.0.1.3: icmp_seq=1 ttl=64 time=0.852 ms
64 bytes from 10.0.1.3: icmp_seq=2 ttl=64 time=0.831 ms
64 bytes from 10.0.1.3: icmp_seq=3 ttl=64 time=0.800 ms

--- 10.0.1.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2028ms
rtt min/avg/max/mdev = 0.800/0.827/0.852/0.039 ms

Awesome service; go check out their Github repo.