Allocating 'alpine1.img'|8 GB 00:00:01
Creating domain... |0 B 00:00:00
Connected to domain alpine1
Escape character is ^]
ISOLINUX 6.04 6.04-pre1  Copyright (C) 1994-2015 H. Peter Anvin et al
OpenRC 0.24.1.a941ee4a0b is starting up Linux 4.9.65-1-virthardened (x86_64)

Welcome to Alpine Linux 3.7
Kernel 4.9.65-1-virthardened on an x86_64 (/dev/ttyS0)

localhost login:
Log in with the root user and no password, then set up the VM by running setup-alpine:
localhost login: root
Welcome to Alpine!
After completing the prompts, reboot the VM by running reboot; you will then be dropped out of the console. Check the status of the guest:
$ virsh list
 Id    Name       State
----------------------------
 2     alpine1    running
As we can see, our guest is running. Let's console to our guest and log in with the root user and the password that you provided during the setup phase:
$ virsh console 2
Connected to domain alpine1
Escape character is ^]

alpine1 login: root
Welcome to Alpine!
Set up OpenSSH so that we can SSH to our guest over the network:
$ apk update
$ apk add openssh
Configure SSH to accept root password logins. This is not advisable for production environments, but for testing it is okay; for production servers, rather look at key-based authentication.
$ sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
$ /etc/init.d/sshd restart
This essentially means that we will have an NFS volume; when the service gets created on Docker Swarm, the cluster creates these volumes with path mapping, so when a container gets spawned, restarted, or scaled onto another node, the container that starts on the new node will be aware of the volume and will find the data it expects.
It's also good to note that our NFS server will be a single point of failure, therefore it's worth looking at a distributed volume such as GlusterFS, XtreemFS, or Ceph.
NFS Server (10.8.133.83)
Rancher Convoy Plugin on Each Docker Node in the Swarm (10.8.133.83, 10.8.166.19, 10.8.142.195)
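As a sketch, a stack file for the apps stack used below might look like the following; the service name, image, and /data mount path match the later examples, while everything else is an assumption (Convoy registers itself as a Docker volume driver named convoy):

```yaml
version: "3.4"

services:
  test:
    image: alpine:edge
    command: sleep 3600
    volumes:
      # named volume backed by the convoy plugin, which maps to the NFS export
      - apps:/data

volumes:
  apps:
    driver: convoy
```

Deployed as `docker stack deploy -c docker-compose.yml apps`, this yields the apps_test service seen below.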
Setup the NFS Server
In order for the containers to be able to change permissions, you need to export the volume with (rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0).
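For example, a hypothetical /etc/exports entry on the NFS server using those options (the export path and client subnet below are assumptions):

```
/mnt/docker-volumes 10.8.0.0/16(rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
```

After editing the file, apply the exports with `exportfs -ra`.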
$ docker service scale apps_test=2
apps_test scaled to 2
Inspect to see if the new replica is on another node:
$ docker service ps apps_test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
myrq2pc3z26z apps_test.1 alpine:edge scw-docker-1 Running Running 45 seconds ago
ny8t97l2q00c  \_ apps_test.1  alpine:edge  scw-docker-1  Shutdown  Failed 51 seconds ago       "task: non-zero exit (137)"
iojo7fpw8jir  \_ apps_test.1  alpine:edge  scw-docker-1  Shutdown  Failed about a minute ago   "task: non-zero exit (137)"
tt0nuusvgeki  apps_test.2     alpine:edge  scw-docker-2  Running   Running 15 seconds ago
Log on to the new container and test whether the data is persisted:
$ docker exec -it apps_test.2.tt0nuusvgekirw1c5myu720ga sh
/ # cat /data/file.txt
ok
Delete the stack, redeploy it, and have a look at the data we created earlier; you will notice that it persisted.
In the exports file, we need to set the clients that we would like to allow:
rw: Allows Client R/W Access to the Volume.
sync: This option forces NFS to write changes to disk before replying, which is more stable and consistent. Note that it does reduce the speed of file operations.
no_subtree_check: This prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
On Amazon Web Services with RDS for MySQL, or Aurora with MySQL compatibility, you can authenticate to your database instance or cluster using IAM database authentication. The benefit of this authentication method is that you don't need a password when you connect to your database; you use an authentication token instead.
Create the database account on the MySQL RDS instance as described in the AWS docs. IAM handles the authentication via AWSAuthenticationPlugin, therefore we do not need to set passwords on the database.
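From the AWS documentation, such an account is created with the AWSAuthenticationPlugin; the username here matches the one we connect with below:

```sql
CREATE USER 'dbadmin'@'%' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
```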
Connect to the database:
$ mysql -u dbadmin -h rbtest.abcdefgh.eu-west-1.rds.amazonaws.com -p
While you are on the database, create 2 databases (db1 and db2) with some tables, which we will use for our user to have read only access to, and create one database (db3) which the user will not have access to:
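As a sketch of that setup, the table and column names (foo, location) mirror the query output later in this post, and the read-only grants assume the MySQL account 'mydbaccount' that appears in the session below:

```sql
CREATE DATABASE db1;
CREATE DATABASE db2;
CREATE DATABASE db3;

CREATE TABLE db2.foo (location VARCHAR(255));
INSERT INTO db2.foo VALUES ('south africa'), ('new zealand'), ('australia');

-- read-only access to db1 and db2 only; no grants on db3
GRANT SELECT ON db1.* TO 'mydbaccount'@'%';
GRANT SELECT ON db2.* TO 'mydbaccount'@'%';
```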
IAM Permissions to allow our user to authenticate to our RDS.
First, create the user and configure the awscli tools. My default profile has administrative access, so we will create our db user under its own profile and configure the awscli tools with its new access key and secret key:
$ aws configure --profile dbuser
AWS Access Key ID [None]: xxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxx
Default region name [None]: eu-west-1
Default output format [None]: json
Now we need to create an IAM policy that allows our user to authenticate to our RDS instance via IAM, which we will associate with our user's account.
We need the AWS Account ID, the Database Identifier Resource ID, and the User Account that we created on MySQL.
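The policy grants the rds-db:connect action on the database user resource; the account ID and DB resource ID below are placeholders for the values gathered above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:eu-west-1:123456789012:dbuser:db-ABCDEFGHIJKL123456/mydbaccount"
    }
  ]
}
```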
Now that our policies are in place, the credentials from the credential provider have been set, and our bash script is set up, let's connect to our database:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| db2                |
+--------------------+
3 rows in set (0.16 sec)

mysql> select * from db2.foo;
+--------------+
| location     |
+--------------+
| south africa |
| new zealand  |
| australia    |
+--------------+

mysql> select * from db3.foo;
ERROR 1044 (42000): Access denied for user 'mydbaccount'@'%' to database 'db3'

mysql> create database test123;
ERROR 1044 (42000): Access denied for user 'mydbaccount'@'%' to database 'test123'
Changing the IAM Policy to revoke access:
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'mydbaccount'@'10.0.0.10' (using password: YES)
Creating a MySQL Client Wrapper Script:
Using bash, we can create a wrapper script so that we can connect to our database like the following:
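A minimal sketch of such a wrapper, assuming the dbuser awscli profile configured earlier and the endpoint and username used above; it relies on the documented aws rds generate-db-auth-token command:

```shell
#!/bin/sh
# mysql-iam.sh: connect to RDS using a short-lived IAM auth token
# instead of a static password. Endpoint and user are illustrative.
RDSHOST="rbtest.abcdefgh.eu-west-1.rds.amazonaws.com"
DBUSER="dbadmin"

# The token is generated locally via SigV4 signing and is valid for 15 minutes
TOKEN="$(aws rds generate-db-auth-token \
  --hostname "$RDSHOST" \
  --port 3306 \
  --username "$DBUSER" \
  --region eu-west-1 \
  --profile dbuser)"

# IAM tokens are sent as cleartext passwords, so the plugin must be enabled
exec mysql --host="$RDSHOST" --user="$DBUSER" \
  --enable-cleartext-plugin --password="$TOKEN" "$@"
```

Note that it needs live AWS credentials and network access to the RDS endpoint to actually connect.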
from flask import Flask, Markup, render_template

app = Flask(__name__)

labels = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN',
          'JUL', 'AUG', 'SEP', 'OCT', 'NOV', 'DEC']

values = [967.67, 1190.89, 1079.75, 1349.19, 2328.91, 2504.28,
          2873.83, 4764.87, 4349.29, 6458.30, 9907, 16297]

colors = ["#F7464A", "#46BFBD", "#FDB45C", "#FEDCBA", "#ABCDEF", "#DDDDDD",
          "#ABCABC", "#4169E1", "#C71585", "#FF4500", "#FEDCBA", "#46BFBD"]

@app.route('/bar')
def bar():
    bar_labels = labels
    bar_values = values
    return render_template('bar_chart.html', title='Bitcoin Monthly Price in USD',
                           max=17000, labels=bar_labels, values=bar_values)

@app.route('/line')
def line():
    line_labels = labels
    line_values = values
    return render_template('line_chart.html', title='Bitcoin Monthly Price in USD',
                           max=17000, labels=line_labels, values=line_values)

@app.route('/pie')
def pie():
    pie_labels = labels
    pie_values = values
    return render_template('pie_chart.html', title='Bitcoin Monthly Price in USD',
                           max=17000, set=zip(values, labels, colors))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
Populating the HTML Static Content:
As we are using render_template, we need to populate our HTML files in our templates/ directory. As you can see, we have 3 different HTML files:
Running our Application:
As you can see, we have 3 endpoints, each representing a different chart style: