We will write a simple Python Flask application that requires authentication in order to respond with a 200 HTTP Status code.
Python Flask Application:
Our Python Flask application will require the header x-api-key: dhuejso2dj3d0 in the HTTP request. If the header is present with the correct value, we respond with a 200 HTTP status code; if not, we respond with a 401 Unauthorized response.
To get the headers in Flask, you can use request.headers.get("x-api-key") or request.headers["x-api-key"] (header lookups are case-insensitive).
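A minimal sketch of such an app (app.py); the header name and key value come from the text above, while the route and response bodies are assumptions:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
API_KEY = "dhuejso2dj3d0"

@app.route("/")
def index():
    # request.headers is case-insensitive, so "x-api-key" matches "X-Api-Key"
    if request.headers.get("x-api-key") == API_KEY:
        return jsonify({"message": "ok"}), 200
    return jsonify({"message": "unauthorized"}), 401

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```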
Create a virtual environment, install flask and run the app:
$ virtualenv .venv
$ source .venv/bin/activate
$ pip install flask
$ python app.py
 * Serving Flask app "app" (lazy loading)
 * Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
Requests to our App:
Let’s first make a request with no headers, which should give us a 401 Unauthorized response:
As a best practice, it's not a good idea to hard-code sensitive details in your code. Rather read them from an encrypted data store, inject them into your application's environment variables, and let your application read them from the environment, something like that :D
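For example, the app could read the key from the environment instead of hard-coding it; this is a sketch where APP_API_KEY is an assumed variable name:

```python
import os

def get_api_key(name="APP_API_KEY", default=None):
    """Read the API key from an environment variable instead of hard-coding it."""
    return os.environ.get(name, default)

# The Flask app can then use API_KEY = get_api_key() instead of a literal string.
```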
After some time, your system can run out of disk space when running a lot of containers, volumes, etc. You will often find unused containers, stopped containers, unused images and unused networks just sitting there, consuming disk space on your nodes.
One way to clean them is by using docker system prune.
Check Docker Disk Space
The command below will show the amount of disk space consumed, and how much is reclaimable:
$ docker system df
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          229     125     23.94GB   14.65GB (61%)
Containers      322     16      8.229GB   8.222GB (99%)
Local Volumes   77      41      698MB     19.13MB (2%)
Build Cache                     0B        0B
Removing Unused Data:
By using prune, we can remove the unused resources that are consuming disk space:
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all build cache
Are you sure you want to continue? [y/N] y
Deleted Containers:
a3d7db158e065d0c86160fd5d688875f8b7435848ea91db57ed007
47890dcfea4a105f43e790dd8ad3c6d7c4ad7e738186c034d7a46b
Deleted Networks:
traefik-net
app_appnet
Deleted Images:
deleted: sha256:5b9909c10e93afec
deleted: sha256:d81eesdfihweo3rk
Total reclaimed space: 14.18GB
When dealing with a lot of servers that you need to SSH to, especially when they require authentication with different private SSH keys, it gets annoying having to specify the right private key every time you want to connect.
SSH Config
SSH Config: ~/.ssh/config is powerful!
In this config file, you can specify the remote host, the key, the user and an alias, so that when you want to SSH to it, you don't have to use the fully qualified domain name or IP address.
Let’s take for example our server-a with the following details:
FQDN: host1.eu.compute.domain.com
User: james
PrivateKeyFile: /path/to/key.pem
Disable Strict Host Checking
Without ssh config, you would access that host with ssh -i /path/to/key.pem james@host1.eu.compute.domain.com every time. With ssh config, we define the host once; note that the last two lines below also disable strict host key checking, which is convenient for frequently rebuilt hosts, but less secure:
Host host1
Hostname host1.eu.compute.domain.com
User james
IdentityFile /path/to/key.pem
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Now, if we need to SSH to it, we can do it as simply as:
$ ssh host1
as ssh will pull in the settings from the config file for the host alias that you pass as the argument.
SSH Timeout
Appending to our SSH Config, we can configure either our client or server to prevent SSH Timeouts due to inactivity.
SSH Timeout on our Client:
$ vim ~/.ssh/config
Here we can set how often a null packet is sent to the server to keep the connection alive, in this case every 120 seconds:
ServerAliveInterval 120
SSH Timeout on the Servers:
$ vim /etc/ssh/sshd_config
Below we have two properties: the interval at which the server instructs the connected client to send a null packet to keep the connection alive, and the maximum number of intervals. For an idle connection to time out after 24 hours, we take 86400 seconds (24 hours) and divide it into 120-second intervals, which gives us 720 intervals.
So the config will look like this:
ClientAliveInterval 120
ClientAliveCountMax 720
Then restart the sshd service:
$ /etc/init.d/sshd restart
SSH Agent
Another handy tool is ssh-agent: if your key is password-protected, you will be prompted for the password every time you SSH. A way around this is to use ssh-agent.
We also want to set a TTL on the ssh-agent, as we don't want it to run forever (unless you want it to). In this case I will let the ssh-agent exit after 2 hours. It will also only run in the shell session from where you execute it. Let's start up our ssh-agent:
$ eval $(ssh-agent -t 7200)
Agent pid 88760
Now add the private key to the ssh-agent. If your private key is password protected, it will prompt you for the password and after successful verification the key will be added:
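A sketch of adding the key and verifying it (the key path is an assumption):

```shell
# add the key (prompts for the passphrase if the key is encrypted)
$ ssh-add ~/.ssh/id_rsa
# list the keys currently loaded in the agent
$ ssh-add -l
```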
Letsencrypt supporting wildcard certificates is really awesome. Now we can set up Traefik to listen on 443, acting as a reverse proxy and doing HTTPS termination for the applications running in our Swarm.
Architectural Design:
At the moment we have 3 Manager Nodes, and 5 Worker Nodes:
Using a dummy domain example.com, which is set to the 3 public IPs of our manager nodes
DNS is set for: example.com A Record to: 52.10.1.10, 52.10.1.11, 52.10.1.12
DNS is set for: *.example.com CNAME to example.com
Any application that is spawned into our Swarm will be labeled with a traefik.frontend.rule, routed to the service, and redirected from HTTP to HTTPS
Create the Overlay Network:
Create the overlay network that will be used for our stack:
$ docker network create --driver overlay appnet
Create the Compose Files for our Stacks:
Create the Traefik service compose file. We will deploy it in global mode, constrained to our manager nodes, so that every manager node runs a copy of Traefik.
We have a replicated volume under our /mnt partition, so that all our managers can read from that path. Create the file and set sufficient permissions:
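Something like this, assuming /mnt/traefik as the replicated path and acme.json as the certificate store (Traefik requires its ACME storage file to have 600 permissions):

```shell
$ mkdir -p /mnt/traefik
$ touch /mnt/traefik/acme.json
$ chmod 600 /mnt/traefik/acme.json
```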
Quick demo with Web Forms using the WTForms module in Python Flask.
Requirements:
Install the required dependencies:
$ pip install flask wtforms
Application:
The application code of the Web Forms application. Note that we are also using validation, as we want the user to complete all the fields. I am also including a function that logs to the directory where the application is running, so we can preview the data that was logged.
When generating random characters for whatever reason, such as passwords or secret keys, you could use the uuid module, which looks like this:
Random String with UUID
>>> from uuid import uuid4
>>> print("Your string is: {0}".format(uuid4()))
Your string is: 53a6e1a7-a2c7-488e-bed9-d76662de9c5f
But if you want to be more specific, like digits, letters, capitalization etc, you can use the string and random modules to do so. First we will generate a random string containing only letters:
Random String with letters
>>> from string import ascii_letters, punctuation, digits
>>> from random import choice, randint
>>> min = 12
>>> max = 15
>>> string_format = ascii_letters
>>> generated_string = "".join(choice(string_format) for x in range(randint(min, max)))
>>> print("Your String is: {0}".format(generated_string))
Your String is: zNeUFluvZwED
As you can see, you have a randomized string which will always be at least 12 characters and at most 15 characters, containing lower and upper case letters. You can also use the lower() and upper() functions if you want to lower-case or upper-case your string:
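Continuing from the snippet above, forcing the generated string to one case:

```python
from string import ascii_letters
from random import choice, randint

# generate a 12-15 character string of letters, then normalize the case
generated_string = "".join(choice(ascii_letters) for x in range(randint(12, 15)))
print("Lower: {0}".format(generated_string.lower()))
print("Upper: {0}".format(generated_string.upper()))
```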
Let’s add some logic so that we get a more randomized string with digits, punctuation, etc.:
Random String with Letters, Punctuations and Digits
>>> from string import ascii_letters, punctuation, digits
>>> from random import choice, randint
>>> min = 12
>>> max = 15
>>> string_format = ascii_letters + punctuation + digits
>>> generated_string = "".join(choice(string_format) for x in range(randint(min, max)))
>>> print("Your String is: {0}".format(generated_string))
Your String is: Bu>}x_/-H5)fLAr
Let’s set things straight: I am a command line fanboy. If I can do the things I have to do with a command line interface, I'm happy! And that means automation ftw! :D
Scaleway Command Line Interface:
I have been using Scaleway for about 2 years now, and absolutely loving their services! So I recently found their command line interface utility: scw, which is written in golang and has a very similar feel to docker.
Install the SCW CLI Tool:
A golang environment is needed and I will be using docker to drop myself into a golang environment and then install the scw utility:
$ docker run -it golang:alpine sh
$ apk update
$ apk add openssl git openssh curl
$ go get -u github.com/scaleway/scaleway-cli/cmd/scw
Verify that it was installed:
$ scw --version
scw version v1.16+dev, build
Awesome sauce!
Authentication:
When we authenticate to Scaleway, it will prompt you to upload your public SSH key. As I am doing this in a container, I have no SSH keys, so I will generate one before authenticating.
Generate the SSH Key:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
Now log in to Scaleway using the CLI tool:
$ scw login
Login (cloud.scaleway.com): <youremail@domain.com>
Password:
Do you want to upload an SSH key ?
[0] I don't want to upload a key !
[1] id_rsa.pub
Which [id]: 1
You are now authenticated on Scaleway.com as Ruan.
You can list your existing servers using `scw ps` or create a new one using `scw run ubuntu-xenial`.
You can get a list of all available commands using `scw -h` and get more usage examples on github.com/scaleway/scaleway-cli.
Happy cloud riding.
Get a list of available Images, in my case I am just looking for Ubuntu:
$ scw images | grep -i ubuntu
Ubuntu_Bionic           latest  a21bb700  11 days  [ams1 par1] [x86_64]
Ubuntu_Mini_Xenial_25G  latest  bc75c00b  13 days  [ams1 par1] [x86_64]
List Running Instances:
$ scw ps
SERVER ID IMAGE ZONE CREATED STATUS PORTS NAME COMMERCIAL TYPE
abc123de Ubuntu_Xenial_16_04_lates ams1 5 weeks running xx.xx.xx.xx scw-elasticsearch-01 ARM64-4GB
abc456de ruan-docker-swarm-17_03 par1 10 months running xx.xx.xxx.xxx scw-swarm-manager-01 VC1M
...
List All Instances (Running, Stopped, Started, etc):
$ scw ps -a
SERVER ID IMAGE ZONE CREATED STATUS PORTS NAME COMMERCIAL TYPE
abc123df Ubuntu_Xenial_16_04_lates ams1 5 weeks stopped xx.xx.xx.xx scw-elasticsearch-02 ARM64-4GB
...
List Instances with a filter based on its name:
$ scw ps -f name=scw-swarm-worker-02
SERVER ID IMAGE ZONE CREATED STATUS PORTS NAME COMMERCIAL TYPE
1234abcd Ubuntu_Xenial par1 8 minutes running xx.xx.xxx.xxx scw-swarm-worker-2 START1-XS
List the Latest Instance that was created:
$ scw ps -l
SERVER ID IMAGE ZONE CREATED STATUS PORTS NAME COMMERCIAL TYPE
1234abce Ubuntu_Xenial par1 6 minutes running xx.xx.xxx.xxx scw-swarm-worker-3 START1-XS
Create Instances:
In my scenario, I would like to create an instance named docker-swarm-worker-4 with the instance type START1-XS in the Paris datacenter. I will be using the key that I have uploaded, and the image id that I pass was retrieved when listing the images:
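The create command would be along these lines (a sketch based on scw v1, which supported the --name and --commercial-type flags; the image id is a placeholder):

```shell
$ scw create --name="docker-swarm-worker-4" --commercial-type="START1-XS" <image-id>
```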
Now that the instance is created, we can start it by calling either the name or the id:
$ scw start docker-swarm-worker-4
To verify the status of the instance, we can do:
$ scw ps -l
SERVER ID IMAGE ZONE CREATED STATUS PORTS NAME COMMERCIAL TYPE
102abc34 Ubuntu_Xenial 28 seconds starting docker-swarm-worker-4 START1-XS
At this moment it is still starting, after waiting a minute or so, run it again:
$ scw ps -l
SERVER ID IMAGE ZONE CREATED STATUS PORTS NAME COMMERCIAL TYPE
102abc34 Ubuntu_Xenial par1 About a minute running xx.xx.xx.xx docker-swarm-worker-4 START1-XS
As we can see it's in a running state, so we are good to access our instance. You have two options to access your server: via exec or ssh.
Have a look at the Scaleway-CLI documentation and their website for more info, and check out their new START1-XS instance type, which is only 1.99 Euros, that is insane!
Personally love what they are doing, feel free to head over to their pricing page to see some sweet deals!
From a best-practice perspective, it's good not to pass sensitive information around, and especially not to hard-code it.
Best Practice: Security
One good way is to use SSM with KMS to encrypt/decrypt them, but since EC2 has a Metadata Service available, we can make use of that to retrieve temporary credentials. One requirement, though, is that an IAM Role must be attached to the instance where the code will be executed, and that IAM Role needs sufficient privileges for whatever you need to do.
The 12 Factor methodology states that config should live in environment variables, and from the application's perspective, it's easy to read it from the environment.
Scenario: Applications on AWS EC2
When you run applications on Amazon EC2, the nodes have access to the EC2 Metadata Service. In this case our IAM Role has a policy that authorizes GetItem on our DynamoDB table, so we can write our code with no sensitive information: the code will do all the work to get the temporary credentials and use them to access DynamoDB.
Use Temporary Credentials to Read from DynamoDB using botocore
In this example we will get the temporary credentials from the metadata service, then define the temporary credentials in our session to authorize our request against dynamodb to read from our table:
>>> response.json()
{u'resultCount': 1, u'results': [{u'collectionExplicitness':u'notExplicit',u'releaseDate':u'1987-07-21T07:00:00Z',u'currency':u'USD',u'artistId':106621,u'previewUrl':u'https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music6/v4/f2/7d/73/f27d7346-de92-bdc6-e148-56a3da406005/mzaf_2747902348777129728.plus.aac.p.m4a',u'trackPrice':1.29,u'isStreamable':True,u'trackViewUrl':u'https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4',u'collectionName':u'Greatest Hits',u'collectionId':5669937,u'trackId':5669911,u'collectionViewUrl':u'https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4',u'trackCount':14,u'trackNumber':2,u'discNumber':1,u'collectionPrice':9.99,u'trackCensoredName':u"Sweet Child O' Mine",u'trackName':u"Sweet Child O' Mine",u'trackTimeMillis':355267,u'primaryGenreName':u'Rock',u'artistViewUrl':u'https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4',u'kind':u'song',u'country':u'USA',u'wrapperType':u'track',u'artworkUrl100':u'https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/100x100bb.jpg',u'collectionCensoredName':u'Greatest Hits',u'artistName':u"Guns N' Roses",u'artworkUrl60':u'https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/60x60bb.jpg',u'trackExplicitness':u'notExplicit',u'artworkUrl30':u'https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/30x30bb.jpg',u'discCount':1}]}
View the request headers:
>>> response.headers
{'Content-Length':'650','x-apple-translated-wo-url':'/WebObjects/MZStoreServices.woa/ws/wsSearch?term=guns+and+roses&limit=1&urlDesc=','Access-Control-Allow-Origin':'*','x-webobjects-loadaverage':'0','X-Cache':'TCP_MISS from a2-21-98-60.deploy.akamaitechnologies.com (AkamaiGHost/9.3.0.3-22245996) (-)','x-content-type-options':'nosniff','x-apple-orig-url':'https://itunes.apple.com/search?term=guns+and+roses&limit=1','x-apple-jingle-correlation-key':'GUOFR25MGUUK5J7LUKI6UUFUWM','x-apple-application-site':'ST11','Date':'Tue, 08 May 2018 20:50:39 GMT','apple-tk':'false','content-disposition':'attachment; filename=1.txt','Connection':'keep-alive','apple-seq':'0','x-apple-application-instance':'2001318','X-Apple-Partner':'origin.0','Content-Encoding':'gzip','strict-transport-security':'max-age=31536000','Vary':'Accept-Encoding','apple-timing-app':'109 ms','X-True-Cache-Key':'/L/itunes.apple.com/search ci2=limit=1&term=guns+and+roses__','X-Cache-Remote':'TCP_MISS from a23-57-75-64.deploy.akamaitechnologies.com (AkamaiGHost/9.3.0.3-22245996) (-)','Cache-Control':'max-age=86400','x-apple-request-uuid':'351c58eb-ac35-28ae-a7eb-a291ea50b4b3','Content-Type':'text/javascript; charset=utf-8','apple-originating-system':'MZStoreServices'}
Python Requests and the iTunes API:
Search for the Artist Guns and Roses and limit the output to 1 Song:
>>> import requests
>>> import json
>>> a = 'https://itunes.apple.com/search?term=guns+and+roses&limit=1'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{"resultCount":1,"results":[{"collectionExplicitness":"notExplicit","releaseDate":"1987-07-21T07:00:00Z","currency":"USD","artistId":106621,"previewUrl":"https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music6/v4/f2/7d/73/f27d7346-de92-bdc6-e148-56a3da406005/mzaf_2747902348777129728.plus.aac.p.m4a","trackPrice":1.29,"isStreamable":true,"trackViewUrl":"https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4","collectionName":"Greatest Hits","collectionId":5669937,"trackId":5669911,"collectionViewUrl":"https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4","trackCount":14,"trackNumber":2,"discNumber":1,"collectionPrice":9.99,"trackCensoredName":"Sweet Child O' Mine","trackName":"Sweet Child O' Mine","trackTimeMillis":355267,"primaryGenreName":"Rock","artistViewUrl":"https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4","kind":"song","country":"USA","wrapperType":"track","artworkUrl100":"https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/100x100bb.jpg","collectionCensoredName":"Greatest Hits","artistName":"Guns N' Roses","artworkUrl60":"https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/60x60bb.jpg","trackExplicitness":"notExplicit","artworkUrl30":"https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/30x30bb.jpg","discCount":1}]}
From the response we got "artistId": 106621. Let's query the API with that artistId to get info about the artist:
>>> a = 'https://itunes.apple.com/lookup?id=106621'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{
  "resultCount": 1,
  "results": [
    {
      "artistType": "Artist",
      "amgArtistId": 4416,
      "wrapperType": "artist",
      "artistId": 106621,
      "artistLinkUrl": "https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4",
      "artistName": "Guns N' Roses",
      "primaryGenreId": 21,
      "primaryGenreName": "Rock"
    }
  ]
}
Query all the albums by the artist, using the artistId and setting the entity parameter to album:
>>> a = 'https://itunes.apple.com/lookup?id=106621&entity=album'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{"resultCount":13,"results":[{"artistType":"Artist","amgArtistId":4416,"wrapperType":"artist","artistId":106621,"artistLinkUrl":"https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4","artistName":"Guns N' Roses","primaryGenreId":21,"primaryGenreName":"Rock"},{"artistViewUrl":"https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4","releaseDate":"2004-01-01T08:00:00Z","collectionType":"Compilation","collectionName":"Greatest Hits","amgArtistId":4416,"copyright":"\u2117 2004 Geffen Records","collectionId":5669937,"artworkUrl60":"https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/60x60bb.jpg","wrapperType":"collection","collectionViewUrl":"https://itunes.apple.com/us/album/greatest-hits/5669937?uo=4","artistId":106621,"collectionCensoredName":"Greatest Hits","artworkUrl100":"https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/100x100bb.jpg","trackCount":14,"currency":"USD","artistName":"Guns N' Roses","country":"USA","primaryGenreName":"Rock","collectionExplicitness":"notExplicit","collectionPrice":9.99},
Today we will look at an Elasticsearch logging driver for Docker.
Why a Log Driver?
By default, log output can be retrieved with docker service logs -f service_name, where the log output of that service is shown via stdout. When you have a lot of services in your swarm, it becomes useful to log all output to a database service.
This is not just for Swarm but Docker stand alone as well.
In this tutorial we will use the Elasticsearch log driver to ship the logs of all our docker swarm services to Elasticsearch.
Installing the Elasticsearch Log Driver:
If you are running Docker Swarm, run this on all the nodes:
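The original installation step is missing here; one way this is commonly done is with a third-party logging plugin (the plugin name below, rchicoli/docker-log-elasticsearch, is an assumption):

```shell
$ docker plugin install rchicoli/docker-log-elasticsearch:latest --alias elasticsearch
```

Services can then be created with --log-driver set to the plugin alias, and log options such as the Elasticsearch URL and index name passed via --log-opt (the exact option names depend on the plugin version).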
Have a look at your Elasticsearch indexes, and you will find the index which was specified in the log-options:
$ curl http://192.168.0.235:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open docker-2018.05.01 8FTqWq6nQlSGpYjD9M5qSg 5 1 1 0 8.9kb 8.9kb
Let's have a look at the Elasticsearch document which holds the data of the log entry:
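For example, querying the index for a single document (the index name is taken from the listing above):

```shell
$ curl 'http://192.168.0.235:9200/docker-2018.05.01/_search?pretty&size=1'
```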
Give it some time to launch and have a look at your indexes, and you will find the index which it wrote to:
$ curl http://192.168.0.235:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open docker-2018.05.01        8FTqWq6nQlSGpYjD9M5qSg 5 1 1  0 8.9kb  8.9kb
yellow open docker-whoami-2018.05.01 YebUtKa1RnCy86iP5_ylgg 5 1 11 0 54.4kb 54.4kb