$ docker node ls
ID                            HOSTNAME         STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
0ead0jshzkpyrw7livudrzq9o *   swarm-manager    Ready    Active         Leader           18.03.1-ce
iwyp6t3wcjdww0r797kwwkvvy     swarm-worker-1   Ready    Active                          18.03.1-ce
ytcc86ixi0kuuw5mq5xxqamt1     swarm-worker-2   Ready    Active                          18.03.1-ce
Test Application on Swarm
Create an Nginx Demo Service:
$ docker network create --driver overlay appnet
$ docker service create --name nginx --publish 80:80 --network appnet --replicas 6 nginx
$ docker service ls
ID             NAME    MODE         REPLICAS   IMAGE          PORTS
k3vwvhmiqbfk   nginx   replicated   6/6        nginx:latest   *:80->80/tcp
$ docker service ps nginx
ID             NAME      IMAGE          NODE             DESIRED STATE   CURRENT STATE            ERROR   PORTS
tspsypgis3qe   nginx.1   nginx:latest   swarm-manager    Running         Running 34 seconds ago
g2f0ytwb2jjg   nginx.2   nginx:latest   swarm-worker-1   Running         Running 34 seconds ago
clcmew8bcvom   nginx.3   nginx:latest   swarm-manager    Running         Running 34 seconds ago
q293r8zwu692   nginx.4   nginx:latest   swarm-worker-2   Running         Running 34 seconds ago
sv7bqa5e08zw   nginx.5   nginx:latest   swarm-worker-1   Running         Running 34 seconds ago
r7qg9nk0a9o2   nginx.6   nginx:latest   swarm-worker-2   Running         Running 34 seconds ago
Note: Please skip this section if you have additional disks on your servers.
The instances that I'm using to test this setup only have one disk, so I will be creating loop block devices backed by allocated files. This is not recommended, as when the disk fails, all the backing files (and therefore the block devices) will be gone with it. But since I'm only demonstrating this, I will create the block devices from a file:
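# the backing file path and 10GB size are my own choices; repeat this on each node
$ sudo dd if=/dev/zero of=/ceph-osd.img bs=1M count=10240
$ sudo losetup /dev/loop0 /ceph-osd.img
With the loop devices in place on all nodes, initialize the cluster from the admin node: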
$ ceph-deploy new ceph-node1 ceph-node2 ceph-node3
Add the initial monitors and gather the keys from the previous command:
$ ceph-deploy mon create-initial
At this point, we should be able to scan the block devices on our nodes:
$ ceph-deploy disk list ceph-node3
[ceph-node3][INFO ] Running command: sudo /usr/sbin/ceph-disk list
[ceph-node3][DEBUG ] /dev/loop0 other
Prepare the Disks:
First we will zap the block devices and then prepare to create the partitions:
$ ceph-deploy disk zap ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
$ ceph-deploy osd prepare ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
[ceph_deploy.osd][DEBUG ] Host ceph-node1 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host ceph-node2 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host ceph-node3 is now ready for osd use.
When you scan the nodes for their disks, you will notice that the partitions have been created:
$ ceph-deploy disk list ceph-node1
[ceph-node1][DEBUG ] /dev/loop0p2 ceph journal, for /dev/loop0p1
[ceph-node1][DEBUG ] /dev/loop0p1 ceph data, active, cluster ceph, osd.0, journal /dev/loop0p2
Now let's activate the OSDs by using the data partitions:
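# the exact command isn't shown in this excerpt; with this version of
# ceph-deploy it would look something like this, using the data partitions created above
$ ceph-deploy osd activate ceph-node1:/dev/loop0p1 ceph-node2:/dev/loop0p1 ceph-node3:/dev/loop0p1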
To grow the space we can use resize2fs for ext4 partitions and xfs_growfs for xfs partitions:
$ sudo resize2fs /dev/rbd0
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/rbd0 is mounted on /mnt; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/rbd0 is now 524288 (4k) blocks long.
When we look at our mounted partitions, you will notice that the size of our mounted partition has increased:
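# assuming the rbd volume is mounted on /mnt
$ df -h /mnt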
So I've got 3 Dedicated Servers, each having its own Static IP, and I wanted a way to build a private network between these servers.
3 Servers with the following IPs (not real IP addresses):
- Server 1: 188.8.131.52
- Server 2: 184.108.40.206
- Server 3: 220.127.116.11
So I want a private network, so that I can have the following internal addresses:
- Server 1: 10.0.1.1
- Server 2: 10.0.1.2
- Server 3: 10.0.1.3
A couple of years ago, I accomplished the end goal using GRE Tunnels, which works well, but wanted to try something different.
So I stumbled upon VPNCloud.rs, which is a peer-to-peer VPN. Their description, quoted from their GitHub page:
“VpnCloud is a simple VPN over UDP. It creates a virtual network interface on the host and forwards all received data via UDP to the destination. VpnCloud establishes a fully-meshed VPN network in a peer-to-peer manner. It can work on TUN devices (IP based) and TAP devices (Ethernet based).”
This is exactly what I was looking for.
Setting up a 3-node Private Network:
Given the IP configuration above, we will set up a private network between our 3 hosts.
Do some updates, then grab the package from GitHub and install VpnCloud:
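# the release version and URL below are assumptions; check the project's GitHub releases page
$ sudo apt-get update && sudo apt-get upgrade -y
$ wget https://github.com/dswd/vpncloud.rs/releases/download/v0.8.1/vpncloud_0.8.1_amd64.deb
$ sudo dpkg -i vpncloud_0.8.1_amd64.deb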
This is the config for Server 1 (the other servers use the same config, each with its own address in the ifup line; the peers and subnets values are filled in from the addressing above):
# each vpn running on their own port
port: 3210

# members of our private network
peers:
  - 184.108.40.206:3210
  - 220.127.116.11:3210

# timeouts
peer_timeout: 1800

# token that identifies the network and helps to distinguish it from other networks
magic: "76706e01"

# pre shared key
shared_key: "VeryStrongPreSharedKey_ThatShouldBeChanged"

# encryption
crypto: aes256

# device info
device_name: "vpncloud%d"
device_type: tap

# vpn modes: hub / switch / router / normal
mode: normal

# subnet to be used for our private network
subnets:
  - 10.0.1.0/24

# command to setup the network
ifup: "ifconfig $IFNAME 10.0.1.1/24 mtu 1400"
ifdown: "ifconfig $IFNAME down"

# user/group owning the process
user: "root"
group: "root"
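Once vpncloud is up on all three nodes, you should be able to reach the other servers over the private network, for example from Server 1:
$ ping -c 2 10.0.1.2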
To get the headers in Flask, you can use request.headers.get("X-Api-Key") or request.headers["X-Api-Key"].
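The application code itself isn't shown in this excerpt, but a minimal sketch of an API-key-protected app might look like this (the route, key name and key value are my own assumptions):

from flask import Flask, request, jsonify

app = Flask(__name__)

# hypothetical hard-coded key, only for demonstration
API_KEY = "MySuperSecretApiKey"

@app.route("/")
def index():
    # read the key from the request headers
    key = request.headers.get("X-Api-Key")
    if key != API_KEY:
        return jsonify({"message": "Unauthorized"}), 401
    return jsonify({"message": "Welcome"}), 200

if __name__ == "__main__":
    app.run()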
Create a virtual environment, install flask and run the app:
$ virtualenv .venv
$ source .venv/bin/activate
$ pip install flask
$ python app.py
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
Requests to our App:
Let's first make a request with no headers, which should then give us a 401 Unauthorized response:
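With the sketch app above running locally, the request could look like this:

$ curl -i http://127.0.0.1:5000/

And then with the key in the headers, which should return a 200:

$ curl -i -H "X-Api-Key: MySuperSecretApiKey" http://127.0.0.1:5000/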
As a best practice, it's not a good decision to hard-code sensitive details in your code; rather, read them from an encrypted store, keep them in your application's environment variables, and let your application read them from the environment, something like that :D
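As a rough sketch of that approach, the key could be read from the environment instead (the variable name API_KEY is my own choice):

import os

# read the key from the environment instead of hard-coding it
API_KEY = os.environ.get("API_KEY")

And export it before starting the app:

$ export API_KEY=MySuperSecretApiKey
$ python app.py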
After some time, your system can run out of disk space when running a lot of containers, volumes, etc. You will find that at times you have a lot of unused containers, stopped containers, unused images and unused networks just sitting there, which consume disk space on your nodes.
One way to clean them is by using docker system prune.
Check Docker Disk Space
The command below will show the amount of disk space consumed, and how much is reclaimable:
$ docker system df
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          229     125      23.94GB   14.65GB (61%)
Containers      322     16       8.229GB   8.222GB (99%)
Local Volumes   77      41       698MB     19.13MB (2%)
Build Cache                      0B        0B
Removing Unused Data:
By using prune, we can remove the unused resources that are consuming disk space:
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all build cache
Are you sure you want to continue? [y/N] y
Total reclaimed space: 14.18GB
When dealing with a lot of servers where you need to SSH to different hosts, especially when they require authentication with different private SSH keys, it gets annoying to specify the private key each time you want to SSH to them.
SSH Config: ~/.ssh/config is powerful!
In this config file, you can specify the remote host, the key, the user and an alias, so that when you want to SSH to the host, you don't have to use the fully qualified domain name or IP address.
Let’s take for example our server-a with the following details:
Disable Strict Host Checking
So to access that host, you would use the following command (without ssh config):
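The details of server-a aren't included in this excerpt, so assume a hypothetical IP of 192.0.2.10, user ubuntu and key ~/.ssh/server-a.pem. Without ssh config, that would be:

$ ssh -i ~/.ssh/server-a.pem ubuntu@192.0.2.10

With an entry in ~/.ssh/config (which also disables strict host key checking, per the heading above), we can give the host the alias host1:

Host host1
    Hostname 192.0.2.10
    User ubuntu
    IdentityFile ~/.ssh/server-a.pem
    StrictHostKeyChecking no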
Now, if we need to SSH to it, we can do it as simply as:
$ ssh host1
as it will pull in the settings from the config file, based on the host alias that you pass as the argument to the ssh binary.
Appending to our SSH Config, we can configure either our client or server to prevent SSH Timeouts due to inactivity.
SSH Timeout on our Client:
$ vim ~/.ssh/config
Here we can set how often a NULL packet is sent over the SSH connection to keep the connection alive, in this case every 120 seconds (the wildcard Host entry below applies it to all hosts):
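Host *
    ServerAliveInterval 120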
SSH Timeout on the Servers:
$ vim /etc/ssh/sshd_config
Below we have 2 properties: the interval of how often the server instructs the connected client to send a NULL packet to keep the connection alive, and the max number of intervals. So, for an idle connection to time out in 24 hours, we take 86400 seconds (which is 24 hours) and divide it into 120-second intervals, which gives us 720 intervals.
So the config will look like this:
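ClientAliveInterval 120
ClientAliveCountMax 720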
Then restart the sshd service:
$ /etc/init.d/sshd restart
Another handy tool is ssh-agent. If your key is password-protected, you will be prompted for the password every time you SSH. A way to get around this is to use the ssh-agent.
We also want to set a TTL on the ssh-agent, as we don't want it to run forever (unless you want it to). In this case I will let the ssh-agent exit after 2 hours. It will also only run in the shell session from where you execute it. Let's start up our ssh-agent:
$ eval $(ssh-agent -t 7200)
Agent pid 88760
Now add the private key to the ssh-agent. If your private key is password protected, it will prompt you for the password and after successful verification the key will be added:
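# assuming the key lives at ~/.ssh/id_rsa
$ ssh-add ~/.ssh/id_rsa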
Let's Encrypt supporting wildcard certificates is really awesome. Now we can set up Traefik to listen on 443, acting as a reverse proxy that does HTTPS termination for the applications running in our Swarm.
At the moment we have 3 Manager Nodes, and 5 Worker Nodes:
Using a dummy domain example.com, which is set to the 3 public IPs of our Manager Nodes:
DNS is set for: example.com A Record to: 18.104.22.168, 22.214.171.124, 126.96.36.199
DNS is set for: *.example.com CNAME to example.com
Any application that is spawned into our Swarm will be labeled with a traefik.frontend.rule, which routes requests to the service and redirects them from HTTP to HTTPS.
Create the Overlay Network:
Create the overlay network that will be used for our stack:
$ docker network create --driver overlay appnet
Create the Compose Files for our Stacks:
Create the Traefik Service compose file. We will deploy it in global mode, constrained to our Manager Nodes, so that every manager node has a copy of Traefik running.
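The compose file itself isn't included in this excerpt; here is a minimal sketch of what it could look like, assuming Traefik 1.7 with the Let's Encrypt DNS challenge and Cloudflare as the DNS provider (the email, credentials and storage path are placeholders):

version: "3.7"

services:
  traefik:
    image: traefik:1.7
    command:
      - --api
      - --docker
      - --docker.swarmMode=true
      - --docker.watch
      - --defaultentrypoints=http,https
      - --entrypoints=Name:http Address::80 Redirect.EntryPoint:https
      - --entrypoints=Name:https Address::443 TLS
      - --acme
      - --acme.email=you@example.com
      - --acme.storage=/acme.json
      - --acme.entryPoint=https
      - --acme.dnsChallenge.provider=cloudflare
      - --acme.domains=example.com,*.example.com
    environment:
      # credentials for the cloudflare dns challenge (placeholders)
      CF_API_EMAIL: "you@example.com"
      CF_API_KEY: "your-cloudflare-api-key"
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - appnet
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager

networks:
  appnet:
    external: true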
Quick demo with Web Forms using the WTForms module in Python Flask.
Install the required dependencies:
$ pip install flask wtforms
The application code of the Web Forms application follows. Note that we are also using validation, as we want the user to complete all the fields. I am also including a function that logs to the directory where the application is running, for previewing the data that was logged.
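The code itself isn't part of this excerpt, but a minimal sketch of such an app might look like this (the field names, the forms.log file name and the inline template are my own choices):

from flask import Flask, request, render_template_string
from wtforms import Form, StringField, validators

app = Flask(__name__)

# inline template so the sketch stays self-contained
TEMPLATE = """
<form method="POST">
  Firstname: {{ form.firstname() }} {{ form.firstname.errors }}<br>
  Lastname: {{ form.lastname() }} {{ form.lastname.errors }}<br>
  <input type="submit" value="Submit">
</form>
"""

class RegistrationForm(Form):
    # InputRequired makes both fields mandatory
    firstname = StringField("Firstname", [validators.InputRequired()])
    lastname = StringField("Lastname", [validators.InputRequired()])

def log_to_disk(data):
    # append the submitted data to a file in the app's working directory
    with open("forms.log", "a") as f:
        f.write("{}\n".format(data))

@app.route("/", methods=["GET", "POST"])
def register():
    form = RegistrationForm(request.form)
    if request.method == "POST" and form.validate():
        log_to_disk({"firstname": form.firstname.data, "lastname": form.lastname.data})
        return "Thank you, {}".format(form.firstname.data)
    return render_template_string(TEMPLATE, form=form)

if __name__ == "__main__":
    app.run()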