In this post we will create a basic Python Flask web app on Docker Swarm, but instead of hardcoding them, we will read the Flask host and Flask port from environment variables, which are populated from Docker Secrets by a small Python exporter script.
The exporter script reads all the secrets that are mounted to the container, formats them into key/value pairs, and exports them as environment variables to the current shell, where they are then read by the Flask application.
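A minimal sketch of such an exporter might look like the following. The secrets path is the Docker default, but the function names and the export-statement output format are assumptions; your actual script may differ:

```python
import os

def secrets_to_env(secrets_dir="/run/secrets"):
    """Read each Docker secret file and return a dict of
    uppercased key/value pairs, e.g. flask_host -> FLASK_HOST."""
    env = {}
    if not os.path.isdir(secrets_dir):
        return env
    for name in os.listdir(secrets_dir):
        path = os.path.join(secrets_dir, name)
        with open(path) as f:
            env[name.upper()] = f.read().strip()
    return env

if __name__ == "__main__":
    # Print export statements that a shell can eval before starting Flask
    for key, value in secrets_to_env().items():
        print("export {}='{}'".format(key, value))
```

The shell entrypoint could then `eval` this script's output before starting the Flask application.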
Exec into the container and list the directory where the secrets were populated:
$ ls /run/secrets/
flask_host flask_port
Run netstat to confirm that the application is listening on the port defined by the secret:
$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:5001       0.0.0.0:*          LISTEN   7/python
This post will guide you through the steps to send SMS messages with Python and Twilio. We will use the talaikis.com API to get a random quote that we will include in the body of the SMS.
Sign up for a Trial Account:
Sign up for a trial account at Twilio, then create a number, which I will refer to as the sender number, and take note of your Account SID and Auth Token.
Create the Config:
Create the config that will keep the Account SID, Auth Token, sender number and recipient number:
We will get a random quote via talaikis.com's API, which we will use for the body of our text message, and then use Twilio's API to send the text message:
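A sketch of how this could look, assuming the talaikis.com random-quote endpoint and its `quote`/`author` JSON keys, and using the standard `twilio` helper library; the credential values below are placeholders:

```python
import json
from urllib.request import urlopen

# Placeholder config values -- replace with your own
account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
auth_token = "your_auth_token"
sender = "+15005550006"
recipient = "+27000000000"

def build_body(quote, author, limit=160):
    """Format the quote and author into an SMS-sized body."""
    body = '"{}" - {}'.format(quote, author)
    return body[:limit]

def random_quote():
    """Fetch a random quote (endpoint and JSON keys are assumptions)."""
    with urlopen("https://talaikis.com/api/quotes/random/") as response:
        data = json.loads(response.read().decode("utf-8"))
    return data["quote"], data["author"]

if __name__ == "__main__":
    # Requires: pip install twilio
    from twilio.rest import Client
    quote, author = random_quote()
    client = Client(account_sid, auth_token)
    message = client.messages.create(
        to=recipient, from_=sender, body=build_body(quote, author))
    print(message.sid)
```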
In our previous post we wrote a basic Golang app that reads the contents of a file and writes it back to disk, but in a static way, as we defined the source and destination filenames in the code.
Today we will use arguments to specify what the source and destination filenames should be, instead of hardcoding them.
Our Golang Application:
We will use if statements to determine whether the number of arguments provided is as expected; if not, a usage string is printed to stdout. Then we loop through the list of arguments to determine the values for our source and destination files.
Once it completes, it prints out the choice of filenames that were used:
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

var (
	input_filename  string
	output_filename string
)

func main() {
	if len(os.Args) < 5 {
		fmt.Printf("Usage: (-i/--input) 'input_filename' (-o/--output) 'output_filename' \n")
		os.Exit(0)
	}

	for i, arg := range os.Args {
		if arg == "-i" || arg == "--input" {
			input_filename = os.Args[i+1]
		}
		if arg == "-o" || arg == "--output" {
			output_filename = os.Args[i+1]
		}
	}

	input_file_content, error := ioutil.ReadFile(input_filename)
	if error != nil {
		panic(error)
	}

	fmt.Println("File used for reading:", input_filename)
	ioutil.WriteFile(output_filename, input_file_content, 0644)
	fmt.Println("File used for writing:", output_filename)
}
Build your application:
$ go build app.go
Run your application with no additional arguments to determine the expected behaviour:
Today we will set up a KVM (Kernel-based Virtual Machine) hypervisor, where we can host virtual machines. In order to do so, your host needs to support hardware virtualization (Intel VT-x or AMD-V).
What we will be doing today:
Check if your host supports hardware virtualization
Set up the KVM hypervisor
Set up an Alpine VM
Check for Hardware Virtualization Support:
We will install the package required to do the check:
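As an aside, you can also do this check by hand by looking for the `vmx` (Intel) or `svm` (AMD) flag in /proc/cpuinfo. A small sketch of that check (the function name is my own):

```python
def has_virtualization(cpuinfo_text):
    """Return which hardware virtualization flag is present:
    'vmx' (Intel VT-x), 'svm' (AMD-V), or None."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"
            if "svm" in flags:
                return "svm"
    return None

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        result = has_virtualization(f.read())
    print(result or "no hardware virtualization support detected")
```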
Starting install...
Allocating 'alpine1.img'   | 8 GB  00:00:01
Creating domain...         | 0 B   00:00:00
Connected to domain alpine1
Escape character is ^]
ISOLINUX 6.04 6.04-pre1  Copyright (C) 1994-2015 H. Peter Anvin et al
boot:
OpenRC 0.24.1.a941ee4a0b is starting up Linux 4.9.65-1-virthardened (x86_64)

Welcome to Alpine Linux 3.7
Kernel 4.9.65-1-virthardened on an x86_64 (/dev/ttyS0)

localhost login:
Login with the root user and no password, then setup the VM by running setup-alpine:
localhost login: root
Welcome to Alpine!
localhost:~# setup-alpine
After completing the prompts, reboot the VM by running reboot; you will then be dropped out of the console. Check the status of the reboot:
$ virsh list
Id Name State
----------------------------------------------------
2 alpine1 running
As we can see, our guest is running. Let's console to our guest and provide the root user and the password that you set during the setup phase:
$ virsh console 2
Connected to domain alpine1
Escape character is ^]
alpine1 login: root
Password:
Welcome to Alpine!
Setup OpenSSH so that we can SSH to our guest over the network:
$ apk update
$ apk add openssh
Configure SSH to accept root password logins. This is not advisable for production environments, but for testing it is okay; for production servers, rather look at key-based authentication:
$ sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
$ /etc/init.d/sshd restart
This essentially means that we will have an NFS volume; when the service gets created on Docker Swarm, the cluster creates these volumes with path mapping, so when a container gets spawned, restarted or scaled, the container that gets started on the new node will be aware of the volume and will get the data that it is expecting.
It's also good to note that our NFS server will be a single point of failure, therefore it's also worth looking at a distributed volume like GlusterFS, XtreemFS, Ceph, etc.
NFS Server (10.8.133.83)
Rancher Convoy Plugin on Each Docker Node in the Swarm (10.8.133.83, 10.8.166.19, 10.8.142.195)
Setup NFS:
Setup the NFS Server
Update:
In order for the containers to be able to change permissions, you need to set the following export options: (rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
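For example, an /etc/exports entry with these options for the three Swarm nodes could look like the following (the export path is an assumption; adjust it to where you host the volume data):

```
/mnt/docker/volumes 10.8.133.83(rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
/mnt/docker/volumes 10.8.166.19(rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
/mnt/docker/volumes 10.8.142.195(rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
```

After editing the exports file, apply the changes with exportfs -ra.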
#!/bin/sh
### BEGIN INIT INFO
# Provides:
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

dir="/usr/local/bin"
cmd="convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/docker/volumes"
user="root"
name="convoy"

pid_file="/var/run/$name.pid"
stdout_log="/var/log/$name.log"
stderr_log="/var/log/$name.err"

get_pid() {
    cat "$pid_file"
}

is_running() {
    [ -f "$pid_file" ] && ps `get_pid` > /dev/null 2>&1
}

case "$1" in
    start)
        if is_running; then
            echo "Already started"
        else
            echo "Starting $name"
            cd "$dir"
            if [ -z "$user" ]; then
                sudo $cmd >> "$stdout_log" 2>> "$stderr_log" &
            else
                sudo -u "$user" $cmd >> "$stdout_log" 2>> "$stderr_log" &
            fi
            echo $! > "$pid_file"
            if ! is_running; then
                echo "Unable to start, see $stdout_log and $stderr_log"
                exit 1
            fi
        fi
        ;;
    stop)
        if is_running; then
            echo -n "Stopping $name.."
            kill `get_pid`
            for i in {1..10}
            do
                if ! is_running; then
                    break
                fi
                echo -n "."
                sleep 1
            done
            echo
            if is_running; then
                echo "Not stopped; may still be shutting down or shutdown may have failed"
                exit 1
            else
                echo "Stopped"
                if [ -f "$pid_file" ]; then
                    rm "$pid_file"
                fi
            fi
        else
            echo "Not running"
        fi
        ;;
    restart)
        $0 stop
        if is_running; then
            echo "Unable to stop, will not attempt to start"
            exit 1
        fi
        $0 start
        ;;
    status)
        if is_running; then
            echo "Running"
        else
            echo "Stopped"
            exit 1
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 1
        ;;
esac

exit 0
Make the script executable:
$ chmod +x /etc/init.d/convoy
Enable the service on boot:
$ sudo systemctl enable convoy
Start the service:
$ sudo /etc/init.d/convoy start
This should be done on all the nodes.
Externally Managed Convoy Volumes
One thing to note is that, after you delete a volume, you will still need to delete the directory from the path where it's hosted, as the application does not do that by itself.
Creating the Volume Beforehand:
$ convoy create test1
test1
$ docker volume ls
DRIVER VOLUME NAME
convoy test1
$ cat /mnt/docker/volumes/config/vfs_volume_test1.json
{"Name":"test1","Size":0,"Path":"/mnt/docker/volumes/test1","MountPoint":"","PrepareForVM":false,"CreatedTime":"Mon Feb 05 13:07:05 +0000 2018","Snapshots":{}}
Viewing the volume from another node:
$ docker volume ls
DRIVER VOLUME NAME
convoy test1
Creating a Test Service:
Create a test service to test the data persistence, our docker-compose.yml:
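The compose file itself is not reproduced here; a minimal sketch consistent with the stack name, image and volume used below might look like this (the command keeping the container alive is an assumption):

```yaml
version: "3.4"

services:
  test:
    image: alpine:edge
    command: sh -c "while true; do sleep 30; done"
    volumes:
      - test1:/data

volumes:
  test1:
    external: true
```

Deploying it as the "apps" stack (docker stack deploy -c docker-compose.yml apps) would give the apps_test service seen below.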
$ docker service scale apps_test=2
apps_test scaled to 2
Inspect to see if the new replica is on another node:
$ docker service ps apps_test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
myrq2pc3z26z apps_test.1 alpine:edge scw-docker-1 Running Running 45 seconds ago
ny8t97l2q00c  \_ apps_test.1 alpine:edge scw-docker-1 Shutdown Failed 51 seconds ago "task: non-zero exit (137)"
iojo7fpw8jir  \_ apps_test.1 alpine:edge scw-docker-1 Shutdown Failed about a minute ago "task: non-zero exit (137)"
tt0nuusvgeki  apps_test.2 alpine:edge scw-docker-2 Running Running 15 seconds ago
Logon to the new container and test if the data is persisted:
$ docker exec -it apps_test.2.tt0nuusvgekirw1c5myu720ga sh
/ # cat /data/file.txt
ok
Delete the stack and redeploy it, then have a look at the data we created earlier; you will notice the data is persisted:
We need to set, in the exports file, the clients we would like to allow:
rw: Allows Client R/W Access to the Volume.
sync: This option forces NFS to write changes to disk before replying, which is more stable and consistent. Note that it does reduce the speed of file operations.
no_subtree_check: This prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
On Amazon Web Services with RDS for MySQL or Aurora with MySQL compatibility, you can authenticate to your database instance or cluster using IAM database authentication. The benefit of this authentication method is that you don't need to use a password when you connect to your database; you use an authentication token instead.
Create the database account on the MySQL RDS instance as described from their docs. IAM handles the authentication via AWSAuthenticationPlugin, therefore we do not need to set passwords on the database.
Connect to the database:
$ mysql -u dbadmin -h rbtest.abcdefgh.eu-west-1.rds.amazonaws.com -p
While you are on the database, create two databases (db1 and db2) with some tables, which our user will have read-only access to, and create one database (db3) which the user will not have access to:
IAM Permissions to allow our user to authenticate to our RDS:
First we create the user and configure the awscli tools. My default profile has administrative access, so we will create our db user in its own profile and configure our awscli tools with its new access key and secret key:
$ aws configure --profile dbuser
AWS Access Key ID [None]: xxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxx
Default region name [None]: eu-west-1
Default output format [None]: json
Now we need to create an IAM policy to allow our user to authenticate to our RDS instance via IAM, which we will associate with our user's account.
We need the AWS Account ID, the database identifier's Resource ID, and the user account that we created on MySQL.
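A policy sketch following the documented rds-db:connect resource format; the account ID and the db- resource ID below are placeholders you must replace with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:eu-west-1:123456789012:dbuser:db-ABCDEFGHIJKLMNOPQRSTUVWXYZ/mydbaccount"
    }
  ]
}
```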
The bash script will get the authentication token, which will be used as the password. Note that the authentication token expires 15 minutes after creation; see the AWS docs for more details.
Now that our policies are in place, credentials from the credential provider have been set, and our bash script is set up, let's connect to our database:
./conn-mysql.sh
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| db2                |
+--------------------+
3 rows in set (0.16 sec)

mysql> select * from db2.foo;
+--------------+
| location     |
+--------------+
| south africa |
| new zealand  |
| australia    |
+--------------+

mysql> select * from db3.foo;
ERROR 1044 (42000): Access denied for user 'mydbaccount'@'%' to database 'db3'
mysql> create database test123;
ERROR 1044 (42000): Access denied for user 'mydbaccount'@'%' to database 'test123'
Changing the IAM Policy to revoke access:
./conn-mysql.sh
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'mydbaccount'@'10.0.0.10' (using password: YES)
Creating a MySQL Client Wrapper Script:
Using bash we can create a wrapper script so we can connect to our database like the following:
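A sketch of what such a conn-mysql.sh wrapper could look like, assuming the awscli `aws rds generate-db-auth-token` command; the hostname and username are placeholders from earlier in this post, and the CA bundle filename is an assumption:

```bash
#!/usr/bin/env bash
# conn-mysql.sh -- connect to RDS MySQL using an IAM authentication token

HOST="rbtest.abcdefgh.eu-west-1.rds.amazonaws.com"   # placeholder endpoint
USER="mydbaccount"                                   # the IAM-authenticated db user

# Generate a short-lived (15 minute) auth token to use as the password
TOKEN=$(aws rds generate-db-auth-token \
  --hostname "$HOST" --port 3306 \
  --username "$USER" --region eu-west-1 --profile dbuser)

# Connect over SSL, passing the token as the password
mysql -h "$HOST" -P 3306 -u "$USER" \
  --enable-cleartext-plugin \
  --ssl-ca=rds-combined-ca-bundle.pem \
  --password="$TOKEN"
```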