With the PyGeoIP package you can capture geolocation data, which is pretty cool when, for example, you have IoT devices pushing location data to Elasticsearch and you visualize that data with Kibana. That is just one example; the possibilities are endless.
Let’s create a basic Flask app that captures geolocation data about the client making the request to the server. In this example we will simply return the data, but we could also filter it and ingest it into a database like Elasticsearch.
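A rough sketch of what such an app could look like, assuming pygeoip with the free MaxMind GeoLiteCity database downloaded next to the app; the route and field names are my own choices:

from flask import Flask, request, jsonify
import pygeoip

app = Flask(__name__)
# the GeoLiteCity.dat path is an assumption; point this to your database file
geo = pygeoip.GeoIP('GeoLiteCity.dat')

@app.route('/')
def client_location():
    # use X-Forwarded-For when running behind a proxy, else the direct remote address
    client_ip = request.headers.get('X-Forwarded-For', request.remote_addr)
    # record_by_addr returns None for addresses not in the database (e.g. private IPs)
    record = geo.record_by_addr(client_ip) or {}
    return jsonify({'client_ip': client_ip, 'geodata': record})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)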
With the announcement of improved Docker container integration in Java 10, the JVM is now aware of container resource constraints, unlike prior versions. More information can be found in this post.
Differences between Java 8 and Java 10:
As you can see with Java 8:
$ docker run -it -m512M --entrypoint bash openjdk:latest
$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
uintx MaxHeapSize := 524288000 {product}
openjdk version "1.8.0_162"
And with Java 10:
$ docker run -it -m512M --entrypoint bash openjdk:10-jdk
$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
size_t MaxHeapSize = 134217728 {product} {ergonomic}
openjdk version "10" 2018-03-20
This is Part 2 of our Ansible tutorial, and in this post we will cover how to set up a LAMP stack on Ubuntu using Ansible. We will only have one host in our inventory, but this can be scaled easily by increasing the number of nodes in your inventory configuration file.
Your inventory file holds your host and variable information. Let’s say we have 3 nodes that we want to deploy software to: node-1, node-2 and node-3. We will group them under nodes and save this in a new file, inventory.ini:
inventory.ini
[nodes]
node-1
node-2
node-3
Next we will populate the connection details for our node names; this will be done in our ~/.ssh/config:
~/.ssh/config
Host node-1
Hostname 10.0.0.2
User root
IdentityFile ~/.ssh/id_rsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Host node-2
Hostname 10.0.0.3
User root
IdentityFile ~/.ssh/id_rsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Host node-3
Hostname 10.0.0.4
User root
IdentityFile ~/.ssh/id_rsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Now we need to generate an SSH key on the node from which we will run our Ansible commands:
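For example, generating a key without a passphrase (adjust the flags to your own security requirements):

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""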
Now we will copy the contents of ~/.ssh/id_rsa.pub into each destination node's ~/.ssh/authorized_keys, or, if you have password authentication enabled, we can run $ ssh-copy-id root@10.0.0.x and so on. We should now be able to SSH to node-1, node-2 and node-3.
Deploy Python:
As Ansible requires Python on the target hosts, we need to bootstrap our nodes with it. Since we are able to SSH to our nodes, we will use Ansible's raw module to deploy Python to them:
$ ansible -m raw -s -a "apt update && apt install python -y" -i inventory.ini nodes
This should succeed; then we can test our connection by running the ping module:
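$ ansible -m ping -i inventory.ini nodes
node-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Output for node-1 is shown here; the other nodes should report the same pong.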
Let’s run shell commands, the traditional hello world, using the ansible-playbook command. First we need a task definition, which I will name shell_command-1.yml:
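A minimal sketch of what shell_command-1.yml could look like (the task name and echoed string are my own choices):

---
- hosts: nodes
  tasks:
    - name: run the traditional hello world
      shell: echo "hello-world"

We can then execute it with:

$ ansible-playbook -i inventory.ini shell_command-1.yml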
This is a post with an example of how to hash a password with a salt. In cryptography, a salt is random data used as additional input to a one-way function that hashes data such as passwords. The advantage of using salts is to protect your sensitive data against dictionary attacks and the like: every time a salt is applied to the same string, the hashed string will produce a different result.
Installing Bcrypt
I will be using bcrypt to hash my password. I always use Alpine images, and this is how I got bcrypt running on Alpine:
$ docker run -it alpine sh
$ apk add python python-dev py2-pip autoconf automake g++ make --no-cache
$ pip install py-bcrypt
This command should produce a 0 exit code:
$ python -c 'import bcrypt'; echo $?
Bcrypt Example to Hash a Password
Here is an example to show you the output when a salt is applied to a string, such as a password. First we will define our very weak password:
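In the Python interpreter (the example password is, intentionally, a weak one of my own choosing):

>>> import bcrypt
>>> password = "password123"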
The bcrypt package has a function called gensalt() that accepts a parameter, log_rounds, which defines the complexity of the hashing. Let’s create a hash for our password:
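A minimal sketch using the py-bcrypt API installed above:

>>> salt = bcrypt.gensalt(log_rounds=12)  # higher log_rounds means more hashing work
>>> hashed = bcrypt.hashpw(password, salt)
>>> print(hashed)
$2a$12$...

The hash shown here is truncated; rerunning gensalt() yields a different salt, and therefore a different hash for the same password.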
# client    server    secret    IP addresses
youruser    pptpd     yourpass  *
Configure the local and remote IPs; in this case I want remote IPs ranging from 10.1.1.2 up to 10.1.5.254:
$ vi /etc/pptpd.conf
option /etc/ppp/pptpd-options
logwtmp
connections 10000
localip 10.1.1.1
remoteip 10.1.1.2-254,10.1.2.1-254,10.1.3.2-254,10.1.4.1-254,10.1.5.1-254
# for a /24 you can set
# remoteip 10.1.1.2-254
In this setup we will use Ansible to Deploy Docker Swarm.
With this setup, I have a client node that will be my jump box; it will be used to SSH to my swarm nodes as the docker user with passwordless SSH access.
The repository for the source code can be found on my Github Repository
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
0ead0jshzkpyrw7livudrzq9o * swarm-manager Ready Active Leader 18.03.1-ce
iwyp6t3wcjdww0r797kwwkvvy swarm-worker-1 Ready Active 18.03.1-ce
ytcc86ixi0kuuw5mq5xxqamt1 swarm-worker-2 Ready Active 18.03.1-ce
Test Application on Swarm
Create an Nginx Demo Service:
$ docker network create --driver overlay appnet
$ docker service create --name nginx --publish 80:80 --network appnet --replicas 6 nginx
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
k3vwvhmiqbfk nginx replicated 6/6 nginx:latest *:80->80/tcp
$ docker service ps nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
tspsypgis3qe nginx.1 nginx:latest swarm-manager Running Running 34 seconds ago
g2f0ytwb2jjg nginx.2 nginx:latest swarm-worker-1 Running Running 34 seconds ago
clcmew8bcvom nginx.3 nginx:latest swarm-manager Running Running 34 seconds ago
q293r8zwu692 nginx.4 nginx:latest swarm-worker-2 Running Running 34 seconds ago
sv7bqa5e08zw nginx.5 nginx:latest swarm-worker-1 Running Running 34 seconds ago
r7qg9nk0a9o2 nginx.6 nginx:latest swarm-worker-2 Running Running 34 seconds ago
Ensure the nodes are removed from the swarm. SSH to your (former) swarm manager and verify that it no longer belongs to a swarm:
$ docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
For some time now I have wanted to do a setup of Ceph, and I finally got the time to do it. This setup was done on Ubuntu 16.04.
What is Ceph
Ceph is a storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object, block and file-level storage.
Object Storage:
Ceph provides seamless access to objects via native language bindings or via the REST interface, RadosGW, and it is also compatible with applications written for S3 and Swift.
Block Storage:
Ceph’s Rados Block Device (RBD) provides access to block device images that are replicated and striped across the storage cluster.
File System:
Ceph provides a network file system (CephFS) that aims for high performance.
Our Setup
We will have 4 nodes: 1 admin node from which we will deploy our cluster, and 3 nodes that will hold the data:
ceph-admin (10.0.8.2)
ceph-node1 (10.0.8.3)
ceph-node2 (10.0.8.4)
ceph-node3 (10.0.8.5)
Host Entries
If you don’t have DNS for your servers, set up the /etc/hosts file so that the names resolve to the IP addresses:
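Using the addresses from our setup above, append this to /etc/hosts on each node:

10.0.8.2 ceph-admin
10.0.8.3 ceph-node1
10.0.8.4 ceph-node2
10.0.8.5 ceph-node3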
Note: Please skip this section if you have additional disks on your servers.
The instances that I’m using to test this setup only have one disk, so I will be creating loop block devices backed by allocated files. This is not recommended, as when the disk fails, all the files/block device images will be gone with it. But since I’m just demonstrating, I will create the block devices from files:
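A sketch of how one of these loop devices can be created on each node; the backing file path and 10GB size are arbitrary choices of mine:

$ sudo dd if=/dev/zero of=/ceph-osd.img bs=1M count=10240
$ sudo losetup /dev/loop0 /ceph-osd.img
$ sudo losetup -a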
$ ceph-deploy new ceph-node1 ceph-node2 ceph-node3
Add the initial monitors and gather the keys from the previous command:
$ ceph-deploy mon create-initial
At this point, we should be able to scan the block devices on our nodes:
$ ceph-deploy disk list ceph-node3
[ceph-node3][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[ceph-node3][DEBUG ] /dev/loop0 other
Prepare the Disks:
First we will zap the block devices and then prepare to create the partitions:
$ ceph-deploy disk zap ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
$ ceph-deploy osd prepare ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host ceph-node2 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host ceph-node3 is now ready for osd use.
When you scan the nodes for their disks, you will notice that the partitions have been created:
$ ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] /dev/loop0p2 ceph journal, for /dev/loop0p1
[ceph-node1][DEBUG ] /dev/loop0p1 ceph data, active, cluster ceph, osd.0, journal /dev/loop0p2
Now let’s activate the OSDs by using the data partitions:
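Based on the data partitions listed above, and assuming the ceph-deploy syntax of this release, that would look like:

$ ceph-deploy osd activate ceph-node1:/dev/loop0p1 ceph-node2:/dev/loop0p1 ceph-node3:/dev/loop0p1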
To grow the space we can use resize2fs for ext4 partitions and xfs_growfs for xfs partitions:
$ sudo resize2fs /dev/rbd0
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/rbd0 is mounted on /mnt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/rbd0 is now 524288 (4k) blocks long.
When we look at our mounted partitions, we will notice that the size of our mounted partition has increased:
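For example (assuming the filesystem is mounted on /mnt, as in the resize output above):

$ df -h /mnt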
Let’s create a new pool where we will store our objects:
$ ceph osd pool create object-pool 32 32
pool 'object-pool' created
We will now create a local file, push it to our object storage service, delete the local copy, download the file under a different name, and read its contents:
Create the local file:
$ echo"ok" > test.txt
Push the local file to our pool in our object storage:
$ rados put objects/data/test.txt ./test.txt --pool object-pool
List the pool (note that this can be executed from any node):
$ rados ls --pool object-pool
objects/data/test.txt
Delete the local file, download the file from our object storage and read the contents:
$ rm -rf test.txt
$ rados get objects/data/test.txt ./newfile.txt --pool object-pool
$ cat ./newfile.txt
ok
View the disk space of our object-pool:
$ rados df --pool object-pool
pool name      KB  objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
object-pool     1        1       0         0        0   0      0   1      1
total used     261144       37
total avail  18579372
total space  18840516
So I have 3 dedicated servers, each with its own static IP, and I wanted a way to build a private network between these servers.
The Scenario:
3 servers with the following IPs (not their real IP addresses):
- Server 1: 52.1.99.10
- Server 2: 52.1.84.20
- Server 3: 52.1.49.30
I want a private network so that I can have the following internal addressing:
- Server 1: 10.0.1.1
- Server 2: 10.0.1.2
- Server 3: 10.0.1.3
A couple of years ago, I accomplished the end goal using GRE tunnels, which worked well, but I wanted to try something different.
VPNCloud
So I stumbled upon VPNCloud.rs, which is a peer-to-peer VPN. Their description, quoted from their Github page:
“VpnCloud is a simple VPN over UDP. It creates a virtual network interface on the host and forwards all received data via UDP to the destination. VpnCloud establishes a fully-meshed VPN network in a peer-to-peer manner. It can work on TUN devices (IP based) and TAP devices (Ethernet based).”
This is exactly what I was looking for.
Setting up a 3 node Private Network:
Given the IP configuration above, we will set up a private network between our 3 hosts.
Do some updates, then grab the package from Github and install VPNCloud:
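A sketch of that, assuming a Debian-based host and a .deb release asset; the version in the URL below is an assumption, so check the project's releases page for the current one:

$ sudo apt update && sudo apt upgrade -y
$ wget https://github.com/dswd/vpncloud.rs/releases/download/v0.8.0/vpncloud_0.8.0_amd64.deb
$ sudo dpkg -i vpncloud_0.8.0_amd64.deb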
Let’s start with the configuration on Server-1. This config should also be set up on the other 2 servers; it will remain the same, except for the ifup command. On the other servers it will look like this:
Server-2: -> ifup: "ifconfig $IFNAME 10.0.1.2/24 mtu 1400"
Server-3: -> ifup: "ifconfig $IFNAME 10.0.1.3/24 mtu 1400"
# each vpn running on their own port
port: 3210

# members of our private network
peers:
  - srv2.domain.com:3210
  - srv3.domain.com:3210

# timeouts
peer_timeout: 1800
dst_timeout: 300

# token that identifies the network and helps to distinguish from other networks
magic: "76706e01"

# pre shared key
shared_key: "VeryStrongPreSharedKey_ThatShouldBeChanged"

# encryption
crypto: aes256

# device info
device_name: "vpncloud%d"
device_type: tap

# vpn modes: hub / switch / router / normal
mode: normal

# subnet to be used for our private network
subnets:
  - 10.0.1.0/24

# command to setup the network
ifup: "ifconfig $IFNAME 10.0.1.1/24 mtu 1400"
ifdown: "ifconfig $IFNAME down"

# user/group owning the process
user: "root"
group: "root"