This is Part 2 of our Ansible tutorial, and in this post we will cover how to set up a LAMP stack on Ubuntu using Ansible. We will only have one host in our inventory, but this can be scaled easily by increasing the number of nodes in your inventory configuration file.
Your inventory file will hold your host and variable information. Let's say we have 3 nodes that we want to deploy software to: node-1, node-2 and node-3. We will group them under nodes. This will be saved in a new file, inventory.ini:
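A minimal inventory.ini could look something like this (the node names below are the SSH aliases that we will define in ~/.ssh/config in the next step):

[nodes]
node-1
node-2
node-3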
Next we will populate the connection information for our node names; this will be done in our ~/.ssh/config configuration:
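A sketch of what this could look like, assuming the nodes are reachable on 192.168.10.11-13 with the ubuntu user (these values are placeholders, adjust HostName and User for your environment):

Host node-1
    HostName 192.168.10.11
    User ubuntu

Host node-2
    HostName 192.168.10.12
    User ubuntu

Host node-3
    HostName 192.168.10.13
    User ubuntu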
Now we need to generate an SSH key on the node that we will run our Ansible commands from:
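For example, accepting the default key path of ~/.ssh/id_rsa:

$ ssh-keygen -t rsa -b 4096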
Now we will copy the contents of ~/.ssh/id_rsa.pub into each destination node's ~/.ssh/authorized_keys, or, if you have password authentication enabled, we can do $ ssh-copy-id firstname.lastname@example.org etc. We should now be able to SSH to node-1, node-2 and node-3.
As Ansible requires Python, we need to bootstrap our nodes with Python. Since we are able to SSH to our nodes, we will use Ansible's raw module to deploy Python to them:
$ ansible -m raw -s -a "apt update && apt install python -y" -i inventory.ini nodes
This should succeed, then we can test our connection by running the ping module:
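For example:

$ ansible -m ping -i inventory.ini nodes

Each node should report SUCCESS with a "ping": "pong" response.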
This is a post with an example of how to hash a password with a salt. A salt in cryptography is random data that is fed as an extra input to a one-way function that hashes data such as passwords. The advantage of using salts is that they protect your sensitive data against dictionary attacks, etc. Every time a new salt is applied to the same string, the hashed string will give a different result.
I will be using bcrypt to hash my password. I always use Alpine images, and this is how I got bcrypt running on Alpine:
$ docker run -it alpine sh
$ apk add python python-dev py2-pip autoconf automake g++ make --no-cache
$ pip install py-bcrypt
This command should produce a 0 exit code:
$ python -c 'import bcrypt'; echo $?
Bcrypt Example to Hash a Password
Here is an example to show you the output when a salt is applied to a string, such as a password. First we will define our very weak password:
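A minimal sketch with py-bcrypt (the password value here is just an example):

import bcrypt

# our very weak example password
password = "password123"

# hash the password twice, each time with a freshly generated salt
hashed1 = bcrypt.hashpw(password, bcrypt.gensalt())
hashed2 = bcrypt.hashpw(password, bcrypt.gensalt())

# the same password gives two different hashes, because the salts differ
print(hashed1)
print(hashed2)

# to verify a password, hash it with the salt embedded in the stored hash
print(bcrypt.hashpw(password, hashed1) == hashed1)  # True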
$ docker node ls
ID                            HOSTNAME          STATUS    AVAILABILITY    MANAGER STATUS    ENGINE VERSION
0ead0jshzkpyrw7livudrzq9o *   swarm-manager     Ready     Active          Leader            18.03.1-ce
iwyp6t3wcjdww0r797kwwkvvy     swarm-worker-1    Ready     Active                            18.03.1-ce
ytcc86ixi0kuuw5mq5xxqamt1     swarm-worker-2    Ready     Active                            18.03.1-ce
Test Application on Swarm
Create an Nginx Demo Service:
$ docker network create --driver overlay appnet
$ docker service create --name nginx --publish 80:80 --network appnet --replicas 6 nginx
$ docker service ls
ID              NAME     MODE          REPLICAS    IMAGE           PORTS
k3vwvhmiqbfk    nginx    replicated    6/6         nginx:latest    *:80->80/tcp
$ docker service ps nginx
ID              NAME       IMAGE           NODE              DESIRED STATE    CURRENT STATE             ERROR    PORTS
tspsypgis3qe    nginx.1    nginx:latest    swarm-manager     Running          Running 34 seconds ago
g2f0ytwb2jjg    nginx.2    nginx:latest    swarm-worker-1    Running          Running 34 seconds ago
clcmew8bcvom    nginx.3    nginx:latest    swarm-manager     Running          Running 34 seconds ago
q293r8zwu692    nginx.4    nginx:latest    swarm-worker-2    Running          Running 34 seconds ago
sv7bqa5e08zw    nginx.5    nginx:latest    swarm-worker-1    Running          Running 34 seconds ago
r7qg9nk0a9o2    nginx.6    nginx:latest    swarm-worker-2    Running          Running 34 seconds ago
Note: Please skip this section if you have additional disks on your servers.
The instances that I'm using to test this setup only have one disk, so I will be creating loop block devices backed by allocated files. This is not recommended, as when the disk fails, all of the files/block device images will be gone with it. But since I'm only demonstrating, I will create the block devices from files:
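Something along these lines should work to back a loop device with a file on each node (the file path and size are just examples):

$ sudo mkdir -p /srv/ceph
$ sudo dd if=/dev/zero of=/srv/ceph/osd.img bs=1M count=10240
$ sudo losetup /dev/loop0 /srv/ceph/osd.img
$ sudo losetup -a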
$ ceph-deploy new ceph-node1 ceph-node2 ceph-node3
Add the initial monitors and gather the keys from the previous command:
$ ceph-deploy mon create-initial
At this point, we should be able to scan the block devices on our nodes:
$ ceph-deploy disk list ceph-node3
[ceph-node3][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[ceph-node3][DEBUG ] /dev/loop0 other
Prepare the Disks:
First we will zap the block devices and then prepare to create the partitions:
$ ceph-deploy disk zap ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
$ ceph-deploy osd prepare ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host ceph-node2 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host ceph-node3 is now ready for osd use.
When you scan the nodes for their disks, you will notice that the partitions have been created:
$ ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] /dev/loop0p2 ceph journal, for /dev/loop0p1
[ceph-node1][DEBUG ] /dev/loop0p1 ceph data, active, cluster ceph, osd.0, journal /dev/loop0p2
Now let's activate the OSDs using the data partitions:
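With the prepare/activate workflow used above, this should look something like the following, pointing at the data partitions that were created (adjust the partition names to what your disk list output shows):

$ ceph-deploy osd activate ceph-node1:/dev/loop0p1 ceph-node2:/dev/loop0p1 ceph-node3:/dev/loop0p1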
To grow the space we can use resize2fs for ext4 partitions and xfs_growfs for xfs partitions:
$ sudo resize2fs /dev/rbd0
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/rbd0 is mounted on /mnt; on-line resizing required
old_desc_blocks= 1, new_desc_blocks= 1
The filesystem on /dev/rbd0 is now 524288 (4k) blocks long.
When we look at our mounted partitions, you will notice that the size of our mounted partition has increased:
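For example, with df against the mount point used above:

$ df -h /mnt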
So I've got 3 dedicated servers, each with its own static IP, and I wanted a way to build a private network between these servers.
3 servers with the following IPs (not real IP addresses):
- Server 1: 188.8.131.52
- Server 2: 184.108.40.206
- Server 3: 220.127.116.11
So I want a private network, where the servers have the following internal addresses:
- Server 1: 10.0.1.1
- Server 2: 10.0.1.2
- Server 3: 10.0.1.3
A couple of years ago, I accomplished the end goal using GRE Tunnels, which works well, but wanted to try something different.
So I stumbled upon VPNCloud.rs, which is a peer-to-peer VPN. Their description, quoted from their GitHub page:
“VpnCloud is a simple VPN over UDP. It creates a virtual network interface on the host and forwards all received data via UDP to the destination. VpnCloud establishes a fully-meshed VPN network in a peer-to-peer manner. It can work on TUN devices (IP based) and TAP devices (Ethernet based).”
This is exactly what I was looking for.
Setting up a 3 node Private Network:
Given the IP configuration above, we will setup a Private network between our 3 hosts.
Do some updates, then grab the package from GitHub and install VPNCloud:
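Roughly along these lines on Ubuntu (the release version and URL below are just examples, grab the latest .deb from the project's GitHub releases page):

$ sudo apt update && sudo apt upgrade -y
$ wget https://github.com/dswd/vpncloud.rs/releases/download/v0.8.1/vpncloud_0.8.1_amd64.deb   # example URL/version, check the releases page
$ sudo dpkg -i vpncloud_0.8.1_amd64.deb

The config below is for Server 1, hence the 10.0.1.1 address in the ifup line; the other servers get their own internal address.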
# each vpn running on their own port
port: 3210

# members of our private network
peers:

# timeouts
peer_timeout: 1800

# token that identifies the network and helps to distinguish from other networks
magic: "76706e01"

# pre shared key
shared_key: "VeryStrongPreSharedKey_ThatShouldBeChanged"

# encryption
crypto: aes256

# device info
device_name: "vpncloud%d"
device_type: tap

# vpn modes: hub / switch / router / normal
mode: normal

# subnet to be used for our private network
subnets:

# command to setup the network
ifup: "ifconfig $IFNAME 10.0.1.1/24 mtu 1400"
ifdown: "ifconfig $IFNAME down"

# user/group owning the process
user: "root"
group: "root"
To get the headers in Flask, you can use request.headers.get("X-Api-Key") or request.headers["X-Api-Key"]; the .get() form returns None instead of raising an error when the header is missing.
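The app itself can be a minimal sketch along these lines (the route and the api key value are placeholders for this example; as noted at the end of the post, you would not hard-code the key in a real setup):

from flask import Flask, request, jsonify

app = Flask(__name__)
API_KEY = "SecretApiKey123"  # placeholder: read this from an environment variable in a real setup

@app.route("/")
def index():
    # reject the request if the X-Api-Key header is missing or wrong
    if request.headers.get("X-Api-Key") != API_KEY:
        return jsonify({"error": "unauthorized"}), 401
    return jsonify({"message": "success"}), 200

if __name__ == "__main__":
    app.run()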
Create a virtual environment, install flask and run the app:
$ virtualenv .venv
$ source .venv/bin/activate
$ pip install flask
$ python app.py
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
Requests to our App:
Let's first make a request with no headers, which should then give us a 401 Unauthorized response:
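Using curl, with the placeholder key from the sketch above, the first request should return 401 Unauthorized and the second 200 OK:

$ curl -i http://127.0.0.1:5000/
$ curl -i -H "X-Api-Key: SecretApiKey123" http://127.0.0.1:5000/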
As a best practice, it's not a good idea to hard-code sensitive details in your code, but rather to read them from an encrypted store, save them in your application's environment variables, and let your application read from the environment variables, something like that :D
After some time, your system can run out of disk space when running a lot of containers, volumes, etc. You will find that at times you have a lot of unused containers, stopped containers, unused images and unused networks just sitting there, which consume disk space on your nodes.
One way to clean them is by using docker system prune.
Check Docker Disk Space
The command below will show the amount of disk space consumed, and how much is reclaimable:
$ docker system df
TYPE             TOTAL    ACTIVE    SIZE       RECLAIMABLE
Images           229      125       23.94GB    14.65GB (61%)
Containers       322      16        8.229GB    8.222GB (99%)
Local Volumes    77       41        698MB      19.13MB (2%)
Build Cache                         0B         0B
Removing Unused Data:
By using prune, we can remove the unused resources that are consuming disk space:
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all build cache
Are you sure you want to continue? [y/N] y
Total reclaimed space: 14.18GB