Ruan Bekker's Blog

From a Curious mind to Posts on Github

Add an Authentication Header to Your Python Flask App

We will write a simple Python Flask application that requires authentication in order to respond with a 200 HTTP Status code.

Python Flask Application:

Our Python Flask application will require the header x-api-key with the value asoidewfoef in the HTTP request in order to respond with a 200 HTTP status code; if the header is missing or incorrect, we will respond with a 401 Unauthorized response:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/')
def index():
    headers = request.headers
    auth = headers.get("X-Api-Key")
    if auth == 'asoidewfoef':
        return jsonify({"message": "OK: Authorized"}), 200
    else:
        return jsonify({"message": "ERROR: Unauthorized"}), 401

if __name__ == '__main__':
    app.run()

To get the headers, you can use headers.get("X-Api-Key") or headers["X-Api-Key"]
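
The difference is in how a missing header is handled: headers.get("X-Api-Key") returns None when the header is absent, while headers["X-Api-Key"] raises a KeyError. A small sketch, assuming we are inside a Flask request handler:

# inside a Flask request handler
auth = request.headers.get("X-Api-Key")   # None if the header is missing
# auth = request.headers["X-Api-Key"]     # raises KeyError if the header is missing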

Create a virtual environment, install flask and run the app:

$ virtualenv .venv
$ source .venv/bin/activate
$ pip install flask
$ python app.py
 * Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

Requests to our App:

Let's first make a request with no headers, which should then give us a 401 Unauthorized response:

$ curl -i http://localhost:5000

HTTP/1.0 401 UNAUTHORIZED
Content-Type: application/json
Content-Length: 33
Server: Werkzeug/0.14.1 Python/3.6.5
Date: Fri, 01 Jun 2018 07:26:25 GMT

{"message":"ERROR: Unauthorized"}

Now let’s include the authentication token in our headers. If the string is the same as the one in the code, we should see a 200 HTTP Response:

$ curl -i -H 'x-api-key: asoidewfoef' http://localhost:5000

HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 29
Server: Werkzeug/0.14.1 Python/3.6.5
Date: Fri, 01 Jun 2018 07:27:03 GMT

{"message":"OK: Authorized"}

Note:

As a best practice, it's not a good idea to hard-code sensitive details in your code. Rather read them from an encrypted database, store them in your application's environment variables, and let your application read them from the environment, something like that :D
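
As a minimal sketch of that idea (assuming the key is exposed to the application as an environment variable named API_KEY, a name chosen here for illustration), the route could compare against the environment instead of a hard-coded string:

import os
from flask import Flask, request, jsonify

app = Flask(__name__)
API_KEY = os.environ.get("API_KEY")  # set outside the code, e.g. API_KEY=... python app.py

@app.route('/')
def index():
    if API_KEY and request.headers.get("X-Api-Key") == API_KEY:
        return jsonify({"message": "OK: Authorized"}), 200
    return jsonify({"message": "ERROR: Unauthorized"}), 401

if __name__ == '__main__':
    app.run()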

Clearing Up Disk Space on Docker Swarm by Removing Unused Data With Prune

After some time, your system can run out of disk space when running a lot of containers, volumes, etc. You will find that at times you have a lot of unused containers, stopped containers, unused images and unused networks just sitting there, consuming disk space on your nodes.

One way to clean them is by using docker system prune.

Check Docker Disk Space

The command below will show the amount of disk space consumed, and how much is reclaimable:

$ docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              229                 125                 23.94GB             14.65GB (61%)
Containers          322                 16                  8.229GB             8.222GB (99%)
Local Volumes       77                  41                  698MB               19.13MB (2%)
Build Cache                                                 0B                  0B

Removing Unused Data:

By using prune, we can remove the unused resources that are consuming disk space:

$ docker system prune

WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N] y

Deleted Containers:
a3d7db158e065d0c86160fd5d688875f8b7435848ea91db57ed007
47890dcfea4a105f43e790dd8ad3c6d7c4ad7e738186c034d7a46b

Deleted Networks:
traefik-net
app_appnet

Deleted Images:
deleted: sha256:5b9909c10e93afec
deleted: sha256:d81eesdfihweo3rk

Total reclaimed space: 14.18GB

For more, have a look at other Docker related posts.

SSH Tools That Come in Handy When Dealing With Multiple Servers

When dealing with a lot of servers, especially when they require authentication with different private SSH keys, it gets annoying having to specify the private key every time you want to SSH to them.

SSH Config

SSH Config: ~/.ssh/config is powerful!

In this config file, you can specify the remote host, the key, the user and an alias, so that when you want to SSH to it, you don't have to use the fully qualified domain name or IP address.

Let’s take for example our server-a with the following details:

  • FQDN: host1.eu.compute.domain.com
  • User: james
  • PrivateKeyFile: /path/to/key.pem
  • Disable Strict Host Checking

So to access that host, you would use the following command (without ssh config):

$ ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -i /path/to/key.pem james@host1.eu.compute.domain.com

Now with SSH Config, open up the config file:

$ vim ~/.ssh/config

and declare the host details:

Host host1
  Hostname host1.eu.compute.domain.com
  User james
  IdentityFile /path/to/key.pem
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Now, if we need to SSH to it, we can do it as simply as:

$ ssh host1

as it will pull in the configuration described for the host alias that you pass as the argument to the ssh command.

SSH Timeout

Appending to our SSH Config, we can configure either our client or server to prevent SSH Timeouts due to inactivity.

  • SSH Timeout on our Client:
$ vim ~/.ssh/config

Here we can set how often a NULL Packet is sent to the SSH Connections to keep the connection alive, in this case every 120 seconds:

ServerAliveInterval 120
  • SSH Timeout on the Servers:
$ vim /etc/ssh/sshd_config

Below we have 2 properties: the interval at which the server instructs the connected client to send a NULL packet to keep the connection alive, and the maximum number of intervals. For an idle connection to time out after 24 hours, we take 86400 seconds (24 hours) and divide it into 120 second intervals, which gives us 720 intervals.

So the config will look like this:

ClientAliveInterval 120
ClientAliveCountMax 720

Then restart the sshd service:

$ /etc/init.d/sshd restart

SSH Agent

Another handy tool is ssh-agent. If your key is passphrase protected, you will be prompted for the passphrase every time you SSH. A way to get around this is to use ssh-agent.

We also want to set a TTL on the ssh-agent, as we don't want it to run forever (unless you want it to). In this case I will let the ssh-agent exit after 2 hours. It will also only run in the shell session from where you execute it. Let's start up our ssh-agent:

$ eval $(ssh-agent -t 7200)
Agent pid 88760

Now add the private key to the ssh-agent. If your private key is password protected, it will prompt you for the password and after successful verification the key will be added:

$ ssh-add /path/to/key.pem
Identity added: /path/to/key.pem (/path/to/key.pem)

Multiple Github Accounts:

Here is a great post on how to work with different GitHub accounts: https://gist.github.com/jexchan/2351996

Wildcard SSL Certificate With Letsencrypt on Docker Swarm Using Traefik

Letsencrypt supporting wildcard certificates is really awesome. Now we can set up Traefik to listen on 443, acting as a reverse proxy that does HTTPS termination for the applications running in our Swarm.

Architectural Design:

At the moment we have 3 Manager Nodes, and 5 Worker Nodes:

  • Using a Dummy Domain example.com which is set to the 3 Public IP’s of our Manager Nodes
  • DNS is set for: example.com A Record to: 52.10.1.10, 52.10.1.11, 52.10.1.12
  • DNS is set for: *.example.com CNAME to example.com
  • Any application that is spawned into our Swarm will be labeled with a traefik.frontend.rule, which routes requests to the service and redirects HTTP to HTTPS

Create the Overlay Network:

Create the overlay network that will be used for our stack:

$ docker network create --driver overlay appnet

Create the Compose Files for our Stacks:

Create the Traefik service compose file. We will deploy it in global mode, constrained to our manager nodes, so that every manager node runs a copy of Traefik.

$ cat > traefik-compose.yml << EOF

version: "3.4"
services:
  proxy:
    image: traefik:latest
    command:
      - "--api"
      - "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
      - "--entrypoints=Name:https Address::443 TLS"
      - "--defaultentrypoints=http,https"
      - "--acme"
      - "--acme.storage=/etc/traefik/acme/acme.json"
      - "--acme.entryPoint=https"
      - "--acme.httpChallenge.entryPoint=http"
      - "--acme.onHostRule=true"
      - "--acme.onDemand=false"
      - "--acme.email=me@example.com"
      - "--docker"
      - "--docker.swarmMode"
      - "--docker.domain=example.com"
      - "--docker.watch"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /mnt/traefik/acme.json:/etc/traefik/acme/acme.json
    networks:
      - appnet
    ports:
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
      - target: 8080
        published: 8080
        mode: host
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
networks:
  appnet:
    external: true

EOF

Create the application compose file; in this example we will deploy a Ghost blog:

$ cat > ghost-compose.yml << EOF

version: '3.4'

services:
  blog:
    image: ghost:1.22.7-alpine
    networks:
      - appnet
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: 
          - node.role == worker
      labels:
        - "traefik.backend.loadbalancer.sticky=false"
        - "traefik.backend.loadbalancer.swarm=true"
        - "traefik.backend=blog-1"
        - "traefik.docker.network=appnet"
        - "traefik.entrypoints=https"
        - "traefik.frontend.passHostHeader=true"
        - "traefik.frontend.rule=Host:blog.example.com"
        - "traefik.port=2368"

networks:
  appnet:
    external: true

EOF

Prepare the Path for Traefik:

We have a replicated volume under our /mnt partition so that all our managers can read from that path. Create the file and set sufficient permissions:

$ mkdir -p /mnt/traefik
$ touch /mnt/traefik/acme.json
$ chmod 600 /mnt/traefik/acme.json

Deploy the Stacks:

Deploy the Traefik Stack:

$ docker stack deploy -c traefik-compose.yml traefik

Wait until the services are deployed:

$ docker stack services traefik
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
f8ru5gbcgd2v        traefik_proxy       global              3/3                 traefik:latest

Deploy the Application Stack:

$ docker stack deploy -c ghost-compose.yml apps

Verify that the Application Stack has been deployed:

$ docker stack services apps
ID                  NAME                MODE                REPLICAS            IMAGE                          PORTS
516zlfs2cfdv        apps_blog           replicated          1/1                 ghost:1.22.7-alpine

At the moment we will have 2 stacks in our Swarm:

$ docker stack ls
NAME                SERVICES
apps                1
traefik             1

Test the Application:

Let’s test our blog to see if we get redirected to HTTPS:

$ curl -iL http://blog.example.com
HTTP/1.1 302 Found
Location: https://blog.example.com:443/
Date: Mon, 28 May 2018 22:02:41 GMT
Content-Length: 5
Content-Type: text/plain; charset=utf-8

HTTP/1.1 200 OK
Cache-Control: public, max-age=0
Content-Type: text/html; charset=utf-8
Date: Mon, 28 May 2018 22:02:42 GMT
Etag: W/"4166-J2ooSIa8gtTkYjbnr7vnPUFlRJI"
Vary: Accept-Encoding
X-Powered-By: Express
Transfer-Encoding: chunked

Works like a charm! Traefik FTW!

Web Forms With Python Flask and the WTForms Module With Bootstrap

Quick demo with Web Forms using the WTForms module in Python Flask.

Requirements:

Install the required dependencies:

$ pip install flask wtforms

Application:

The application code of the web forms application. Note that we are also using validation, as we want the user to complete all the fields. I am also including a function that logs to a file in the directory where the application is running, so we can preview the data that was submitted.

app.py
from random import randint
from time import strftime
from flask import Flask, render_template, flash, request
from wtforms import Form, TextField, TextAreaField, validators, StringField, SubmitField

DEBUG = True
app = Flask(__name__)
app.config.from_object(__name__)
app.config['SECRET_KEY'] = 'SjdnUends821Jsdlkvxh391ksdODnejdDw'

class ReusableForm(Form):
    name = TextField('Name:', validators=[validators.required()])
    surname = TextField('Surname:', validators=[validators.required()])
    # the POST handler below also reads email and password, so define those fields as well
    email = TextField('Email:', validators=[validators.required()])
    password = TextField('Password:', validators=[validators.required()])

def get_time():
    time = strftime("%Y-%m-%dT%H:%M")
    return time

def write_to_disk(name, surname, email):
    data = open('file.log', 'a')
    timestamp = get_time()
    data.write('DateStamp={}, Name={}, Surname={}, Email={} \n'.format(timestamp, name, surname, email))
    data.close()

@app.route("/", methods=['GET', 'POST'])
def hello():
    form = ReusableForm(request.form)

    #print(form.errors)
    if request.method == 'POST':
        name=request.form['name']
        surname=request.form['surname']
        email=request.form['email']
        password=request.form['password']

        if form.validate():
            write_to_disk(name, surname, email)
            flash('Hello: {} {}'.format(name, surname))

        else:
            flash('Error: All Fields are Required')

    return render_template('index.html', form=form)

if __name__ == "__main__":
    app.run()

HTML Template:

templates/index.html

This will result in a basic web form like this:

Resources:

Generate Random Characters With Python Using Random and String Modules

When generating random characters for whatever reason (passwords, secret keys, etc.), you could use the uuid module, which looks like this:

Random String with UUID
>>> from uuid import uuid4
>>> print("Your string is: {0}".format(uuid4()) )
Your string is: 53a6e1a7-a2c7-488e-bed9-d76662de9c5f

But if you want to be more specific, like digits, letters, capitalization etc, you can use the string and random modules to do so. First we will generate a random string containing only letters:

Random String with letters
>>> from string import ascii_letters, punctuation, digits
>>> from random import choice, randint
>>> min = 12
>>> max = 15
>>> string_format = ascii_letters
>>> generated_string = "".join(choice(string_format) for x in range(randint(min, max)))

>>> print("Your String is: {0}".format(generated_string))
Your String is: zNeUFluvZwED

As you can see, you have a randomized string that will always be at least 12 and at most 15 characters long, containing lower and upper case letters. You can also use the lower and upper methods if you want to lower case or upper case your string:

>>> generated_string.lower()
'zneufluvzwed'

>>> generated_string.upper()
'ZNEUFLUVZWED'

Let's add some more character classes so that we get a more randomized string with digits, punctuation, etc.:

Random String with Letters, Punctuations and Digits
>>> from string import ascii_letters, punctuation, digits
>>> from random import choice, randint
>>> min = 12
>>> max = 15
>>> string_format = ascii_letters + punctuation + digits
>>> generated_string = "".join(choice(string_format) for x in range(randint(min, max)))
>>> print("Your String is: {0}".format(generated_string))
Your String is: Bu>}x_/-H5)fLAr
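
To make this reusable, the same logic can be wrapped in a small helper function; a minimal sketch (the function name generate_random_string is my own):

from string import ascii_letters, punctuation, digits
from random import choice, randint

def generate_random_string(min_len=12, max_len=15, chars=ascii_letters + punctuation + digits):
    # pick a random length between min_len and max_len (inclusive) and build the string
    return "".join(choice(chars) for _ in range(randint(min_len, max_len)))

print("Your String is: {0}".format(generate_random_string()))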

More Python related blog posts.

Manage Scaleway Instances via Their API Like a Boss With Their Command Line Tool Scw

Let's set things straight: I am a command line fanboy. If I can do the things I have to do with a command line interface, I'm happy! And that means automation, FTW! :D

Scaleway Command Line Interface:

I have been using Scaleway for about 2 years now, and absolutely loving their services! So I recently found their command line interface utility: scw, which is written in golang and has a very similar feel to docker.

Install the SCW CLI Tool:

A golang environment is needed, so I will be using docker to drop myself into a golang environment and then install the scw utility:

$ docker run -it golang:alpine sh
$ apk update
$ apk add openssl git openssh curl
$ go get -u github.com/scaleway/scaleway-cli/cmd/scw

Verify that it was installed:

$ scw --version
scw version v1.16+dev, build

Awesome sauce!

Authentication:

When we authenticate to Scaleway, it will prompt us to upload a public SSH key. As I am doing this in a container, I have no SSH keys, so I will generate one before I authenticate.

Generate the SSH Key:

$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.

Now log in to Scaleway using the CLI tool:

$ scw login
Login (cloud.scaleway.com): <youremail@domain.com>
Password:
Do you want to upload an SSH key ?
[0] I don't want to upload a key !
[1] id_rsa.pub
Which [id]: 1

You are now authenticated on Scaleway.com as Ruan.
You can list your existing servers using `scw ps` or create a new one using `scw run ubuntu-xenial`.
You can get a list of all available commands using `scw -h` and get more usage examples on github.com/scaleway/scaleway-cli.
Happy cloud riding.

Sweeet!

Getting Info from Scaleway

List Instance Types:

$ scw products servers
COMMERCIAL TYPE     ARCH     CPUs      RAM  BAREMETAL
ARM64-128GB        arm64       64   137 GB      false
ARM64-16GB         arm64       16    17 GB      false
ARM64-2GB          arm64        4   2.1 GB      false
ARM64-32GB         arm64       32    34 GB      false
ARM64-4GB          arm64        6   4.3 GB      false
ARM64-64GB         arm64       48    69 GB      false
ARM64-8GB          arm64        8   8.6 GB      false
C1                   arm        4   2.1 GB       true
C2L               x86_64        8    34 GB       true
C2M               x86_64        8    17 GB       true
C2S               x86_64        4   8.6 GB       true
START1-L          x86_64        8   8.6 GB      false
START1-M          x86_64        4   4.3 GB      false
START1-S          x86_64        2   2.1 GB      false
START1-XS         x86_64        1   1.1 GB      false
VC1L              x86_64        6   8.6 GB      false
VC1M              x86_64        4   4.3 GB      false
VC1S              x86_64        2   2.1 GB      false
X64-120GB         x86_64       12   129 GB      false
X64-15GB          x86_64        6    16 GB      false
X64-30GB          x86_64        8    32 GB      false
X64-60GB          x86_64       10    64 GB      false

Get a list of available Images, in my case I am just looking for Ubuntu:

$ scw images | grep -i ubuntu
Ubuntu_Bionic               latest              a21bb700            11 days             [ams1 par1]         [x86_64]
Ubuntu_Mini_Xenial_25G      latest              bc75c00b            13 days             [ams1 par1]         [x86_64]

List Running Instances:

$ scw ps
SERVER ID           IMAGE                       ZONE                CREATED             STATUS              PORTS               NAME                  COMMERCIAL TYPE
abc123de            Ubuntu_Xenial_16_04_lates   ams1                5 weeks             running             xx.xx.xx.xx         scw-elasticsearch-01  ARM64-4GB
abc456de            ruan-docker-swarm-17_03     par1                10 months           running             xx.xx.xxx.xxx       scw-swarm-manager-01  VC1M
...

List All Instances (Running, Stopped, Started, etc):

$ scw ps -a
SERVER ID           IMAGE                       ZONE                CREATED             STATUS              PORTS               NAME                  COMMERCIAL TYPE
abc123df            Ubuntu_Xenial_16_04_lates   ams1                5 weeks             stopped             xx.xx.xx.xx         scw-elasticsearch-02  ARM64-4GB
...

List Instances with a filter based on its name:

$ scw ps -f name=scw-swarm-worker-02
SERVER ID           IMAGE               ZONE                CREATED             STATUS              PORTS               NAME                COMMERCIAL TYPE
1234abcd            Ubuntu_Xenial       par1                8 minutes           running             xx.xx.xxx.xxx       scw-swarm-worker-2  START1-XS

List the Latest Instance that was created:

$ scw ps -l
SERVER ID           IMAGE               ZONE                CREATED             STATUS              PORTS               NAME                COMMERCIAL TYPE
1234abce            Ubuntu_Xenial       par1                6 minutes           running             xx.xx.xxx.xxx       scw-swarm-worker-3  START1-XS

Create Instances:

In my scenario, I would like to create an instance named docker-swarm-worker-4 with the instance type START1-XS in the Paris datacenter, using the key that I have uploaded. The image id that I pass was retrieved earlier when listing the images:

$ scw --region=par1 create --commercial-type=START1-XS --ip-address=dynamic --ipv6=false --name="docker-swarm-worker-4" --tmp-ssh-key=false  bc75c00b
<response: random uuid string>

Now that the instance is created, we can start it by calling either the name or the id:

$ scw start docker-swarm-worker-4

To verify the status of the instance, we can do:

$ scw ps -l
SERVER ID           IMAGE               ZONE                CREATED             STATUS              PORTS               NAME                   COMMERCIAL TYPE
102abc34            Ubuntu_Xenial                           28 seconds          starting                                docker-swarm-worker-4  START1-XS

At this moment it is still starting; after waiting a minute or so, run it again:

$ scw ps -l
SERVER ID           IMAGE               ZONE                CREATED             STATUS              PORTS               NAME                   COMMERCIAL TYPE
102abc34            Ubuntu_Xenial       par1                About a minute      running             xx.xx.xx.xx         docker-swarm-worker-4  START1-XS

As we can see it's in a running state, so we are good to access our instance. You have 2 options to access your server: via exec or ssh.

$ scw exec docker-swarm-worker-4 /bin/bash
root@docker-swarm-worker-4:~

or via SSH:

$ ssh root@xx.xx.xx.xx
root@docker-swarm-worker-4:~

If you would like to access your server without uploading your SSH key to your account, you can pass --tmp-ssh-key=true as in:

$ scw --region=par1 create --commercial-type=START1-XS --ip-address=dynamic --ipv6=false --name="scw-temp-instance" --tmp-ssh-key=true  bc75c00b

Terminating Resources:

This will stop and terminate the instance along with the associated volumes and reserved IP:

$ scw stop --terminate=true scw-temp-instance
scw-temp-instance

If you need to remove an image or snapshot that is not needed or unused:

$ scw rmi test-1-snapshot-<long-string>--2018-04-26_12:42

To logout:

$ scw logout

Resources:

Have a look at the Scaleway-CLI documentation and their website for more info, and have a look at their new START1-XS instance type, which is only 1.99 Euros; that is insane!

I personally love what they are doing; feel free to head over to their pricing page to see some sweet deals!

Temporary IAM Credentials From EC2 Instance Metadata Using Python

From a best practice perspective, it's good not to pass sensitive information around, and especially not to hard-code it.

Best Practice: Security

One good way is to use SSM with KMS to encrypt/decrypt them, but since EC2 has a Metadata Service available, we can make use of it to retrieve temporary credentials. One requirement, though, is that the instance on which the code will be executed requires an IAM Role. The IAM Role also needs sufficient privileges to be able to do whatever you need to do.

The 12 Factor Methodology, however, states that config belongs in environment variables, and from the application logic it's easy to read it from the environment.
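
A minimal sketch of that idea, assuming the table name was exported as an environment variable named DYNAMODB_TABLE (a name chosen here for illustration):

import os

# non-sensitive config is read from the environment instead of being hard-coded
TABLE_NAME = os.environ.get('DYNAMODB_TABLE', 'my-dynamodb-table')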

Scenario: Applications on AWS EC2

When you run applications on Amazon EC2, the nodes have access to the EC2 Metadata Service. In this case our IAM Role has a policy that authorizes GetItem on our DynamoDB table, so we can write our code with no sensitive information in it; the code will do all the work of getting the credentials and using them to access DynamoDB.

Use Temporary Credentials to Read from DynamoDB using botocore

In this example we will get the temporary credentials from the metadata service, then define the temporary credentials in our session to authorize our request against dynamodb to read from our table:

>>> import boto3
>>> from botocore.utils import InstanceMetadataFetcher
>>> from botocore.credentials import InstanceMetadataProvider
>>> provider = InstanceMetadataProvider(iam_role_fetcher=InstanceMetadataFetcher(timeout=1000, num_attempts=2))
>>> creds = provider.load()

>>> session = boto3.Session(
    aws_access_key_id=creds.access_key,
    aws_secret_access_key=creds.secret_key,
    aws_session_token=creds.token
)

>>> ddb = session.client('dynamodb')

>>> response = ddb.get_item(
    TableName='my-dynamodb-table',
    Key={
        'node_type': {
            'S': 'primary_manager'
        }
    }
)

>>> print(response['Item']['ip']['S'])
10.0.0.32

Also, when you are logged onto the EC2 instance, you can use curl to see the temporary credentials information:

$ iam_role_name=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
$ curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/${iam_role_name}
{
  "Code" : "Success",
  "LastUpdated" : "2018-05-09T14:25:48Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "",
  "SecretAccessKey" : "",
  "Token" : "",
  "Expiration" : "2018-05-09T20:46:55Z"
}
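
The same lookup can be done from Python with the requests module; a small sketch (run on the EC2 instance itself, since 169.254.169.254 is only reachable from the instance):

>>> import requests
>>> base = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'
>>> role_name = requests.get(base).text
>>> creds = requests.get(base + role_name).json()
>>> creds['Expiration']
'2018-05-09T20:46:55Z'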

Another method is boto3 Session:

You can also use boto3.Session to achieve this:

>>> session = boto3.Session(region_name='eu-west-1')
>>> credentials = session.get_credentials()
>>> credentials = credentials.get_frozen_credentials()
>>> credentials.access_key
u'ABC...'
>>> credentials.secret_key
u'DEF...'
>>> credentials.token
u'ZXC...'
>>> access_key = credentials.access_key
>>> secret_key = credentials.secret_key
>>> ddb = session.client('dynamodb')

Use Python Requests to Interact With the iTunes API to Search for Music Info

Tutorial on using Python Requests with the Apple iTunes Music API, where we will be doing the following:

  • Basics of using the Requests module
  • Query iTunes API on Songs by Artist
  • Query iTunes API on Artists Info
  • Query iTunes API on All Albums by Artist
  • Query iTunes API on Top 5 Albums
  • Query iTunes API on Multiple Artists

Resources:

Install the Requests Module:

$ virtualenv -p /usr/bin/python .venv
$ source .venv/bin/activate
$ pip install requests

Basic Usage of Requests:

In this demonstration we will only use the GET HTTP Method.

Make the GET Request to the endpoint:

>>> import requests
>>> response = requests.get('https://itunes.apple.com/search?term=guns+and+roses&limit=1')

View the HTTP Status Code of the Response:

>>> response.status_code
200

To view some of the status codes available in the requests library:

>>> requests.codes.ok
200
>>> requests.codes.no_content
204
>>> requests.codes.temporary_redirect
307
>>> requests.codes.permanent_redirect
308
>>> requests.codes.bad
400
>>> requests.codes.not_found
404
>>> requests.codes.bad_gateway
502

Call .ok for a quick status check; the boolean indicates whether the request was successful (any status code below 400), in this case a 200 OK:

>>> response.ok
True

Measure the amount of time the request took:

>>> response.elapsed.total_seconds()
0.706043

View the content of the response:

>>> response.content
'\n\n\n{\n "resultCount":1,\n "results": [\n{"wrapperType":"track", "kind":"song", "artistId":106621, "collectionId":5669937, "trackId":5669911, "artistName":"Guns N\' Roses", "collectionName":"Greatest Hits", "trackName":"Sweet Child O\' Mine", "collectionCensoredName":"Greatest Hits", "trackCensoredName":"Sweet Child O\' Mine", "artistViewUrl":"https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4", "collectionViewUrl":"https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4", "trackViewUrl":"https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4", \n"previewUrl":"https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music6/v4/f2/7d/73/f27d7346-de92-bdc6-e148-56a3da406005/mzaf_2747902348777129728.plus.aac.p.m4a", "artworkUrl30":"https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/30x30bb.jpg", "artworkUrl60":"https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/60x60bb.jpg", "artworkUrl100":"https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/100x100bb.jpg", "collectionPrice":9.99, "trackPrice":1.29, "releaseDate":"1987-07-21T07:00:00Z", "collectionExplicitness":"notExplicit", "trackExplicitness":"notExplicit", "discCount":1, "discNumber":1, "trackCount":14, "trackNumber":2, "trackTimeMillis":355267, "country":"USA", "currency":"USD", "primaryGenreName":"Rock", "isStreamable":true}]\n}\n\n\n'

View the content in json format:

>>> response.json()
{u'resultCount': 1, u'results': [{u'collectionExplicitness': u'notExplicit', u'releaseDate': u'1987-07-21T07:00:00Z', u'currency': u'USD', u'artistId': 106621, u'previewUrl': u'https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music6/v4/f2/7d/73/f27d7346-de92-bdc6-e148-56a3da406005/mzaf_2747902348777129728.plus.aac.p.m4a', u'trackPrice': 1.29, u'isStreamable': True, u'trackViewUrl': u'https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4', u'collectionName': u'Greatest Hits', u'collectionId': 5669937, u'trackId': 5669911, u'collectionViewUrl': u'https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4', u'trackCount': 14, u'trackNumber': 2, u'discNumber': 1, u'collectionPrice': 9.99, u'trackCensoredName': u"Sweet Child O' Mine", u'trackName': u"Sweet Child O' Mine", u'trackTimeMillis': 355267, u'primaryGenreName': u'Rock', u'artistViewUrl': u'https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4', u'kind': u'song', u'country': u'USA', u'wrapperType': u'track', u'artworkUrl100': u'https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/100x100bb.jpg', u'collectionCensoredName': u'Greatest Hits', u'artistName': u"Guns N' Roses", u'artworkUrl60': u'https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/60x60bb.jpg', u'trackExplicitness': u'notExplicit', u'artworkUrl30': u'https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/30x30bb.jpg', u'discCount': 1}]}

View the request headers:

>>> response.headers
{'Content-Length': '650', 'x-apple-translated-wo-url': '/WebObjects/MZStoreServices.woa/ws/wsSearch?term=guns+and+roses&limit=1&urlDesc=', 'Access-Control-Allow-Origin': '*', 'x-webobjects-loadaverage': '0', 'X-Cache': 'TCP_MISS from a2-21-98-60.deploy.akamaitechnologies.com (AkamaiGHost/9.3.0.3-22245996) (-)', 'x-content-type-options': 'nosniff', 'x-apple-orig-url': 'https://itunes.apple.com/search?term=guns+and+roses&limit=1', 'x-apple-jingle-correlation-key': 'GUOFR25MGUUK5J7LUKI6UUFUWM', 'x-apple-application-site': 'ST11', 'Date': 'Tue, 08 May 2018 20:50:39 GMT', 'apple-tk': 'false', 'content-disposition': 'attachment; filename=1.txt', 'Connection': 'keep-alive', 'apple-seq': '0', 'x-apple-application-instance': '2001318', 'X-Apple-Partner': 'origin.0', 'Content-Encoding': 'gzip', 'strict-transport-security': 'max-age=31536000', 'Vary': 'Accept-Encoding', 'apple-timing-app': '109 ms', 'X-True-Cache-Key': '/L/itunes.apple.com/search ci2=limit=1&term=guns+and+roses__', 'X-Cache-Remote': 'TCP_MISS from a23-57-75-64.deploy.akamaitechnologies.com (AkamaiGHost/9.3.0.3-22245996) (-)', 'Cache-Control': 'max-age=86400', 'x-apple-request-uuid': '351c58eb-ac35-28ae-a7eb-a291ea50b4b3', 'Content-Type': 'text/javascript; charset=utf-8', 'apple-originating-system': 'MZStoreServices'}
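
Since response.headers behaves like a case-insensitive dictionary, you can also pull out individual headers from the response above:

>>> response.headers['Content-Type']
'text/javascript; charset=utf-8'
>>> response.headers.get('Cache-Control')
'max-age=86400'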

Python Requests and the iTunes API:

Search for the Artist Guns and Roses and limit the output to 1 Song:

>>> import requests
>>> import json
>>> a = 'https://itunes.apple.com/search?term=guns+and+roses&limit=1'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{
  "resultCount": 1,
  "results": [
    {
      "collectionExplicitness": "notExplicit",
      "releaseDate": "1987-07-21T07:00:00Z",
      "currency": "USD",
      "artistId": 106621,
      "previewUrl": "https://audio-ssl.itunes.apple.com/apple-assets-us-std-000001/Music6/v4/f2/7d/73/f27d7346-de92-bdc6-e148-56a3da406005/mzaf_2747902348777129728.plus.aac.p.m4a",
      "trackPrice": 1.29,
      "isStreamable": true,
      "trackViewUrl": "https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4",
      "collectionName": "Greatest Hits",
      "collectionId": 5669937,
      "trackId": 5669911,
      "collectionViewUrl": "https://itunes.apple.com/us/album/sweet-child-o-mine/5669937?i=5669911&uo=4",
      "trackCount": 14,
      "trackNumber": 2,
      "discNumber": 1,
      "collectionPrice": 9.99,
      "trackCensoredName": "Sweet Child O' Mine",
      "trackName": "Sweet Child O' Mine",
      "trackTimeMillis": 355267,
      "primaryGenreName": "Rock",
      "artistViewUrl": "https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4",
      "kind": "song",
      "country": "USA",
      "wrapperType": "track",
      "artworkUrl100": "https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/100x100bb.jpg",
      "collectionCensoredName": "Greatest Hits",
      "artistName": "Guns N' Roses",
      "artworkUrl60": "https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/60x60bb.jpg",
      "trackExplicitness": "notExplicit",
      "artworkUrl30": "https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/30x30bb.jpg",
      "discCount": 1
    }
  ]
}

From the response we got "artistId": 106621; let's query the API with the artistId to get info about the artist:

>>> a = 'https://itunes.apple.com/lookup?id=106621'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{
  "resultCount": 1,
  "results": [
    {
      "artistType": "Artist",
      "amgArtistId": 4416,
      "wrapperType": "artist",
      "artistId": 106621,
      "artistLinkUrl": "https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4",
      "artistName": "Guns N' Roses",
      "primaryGenreId": 21,
      "primaryGenreName": "Rock"
    }
  ]
}

Query all the Albums by Artist by using the ArtistId and Entity for Album:

>>> a = 'https://itunes.apple.com/lookup?id=106621&entity=album'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{
  "resultCount": 13,
  "results": [
    {
      "artistType": "Artist",
      "amgArtistId": 4416,
      "wrapperType": "artist",
      "artistId": 106621,
      "artistLinkUrl": "https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4",
      "artistName": "Guns N' Roses",
      "primaryGenreId": 21,
      "primaryGenreName": "Rock"
    },
    {
      "artistViewUrl": "https://itunes.apple.com/us/artist/guns-n-roses/106621?uo=4",
      "releaseDate": "2004-01-01T08:00:00Z",
      "collectionType": "Compilation",
      "collectionName": "Greatest Hits",
      "amgArtistId": 4416,
      "copyright": "\u2117 2004 Geffen Records",
      "collectionId": 5669937,
      "artworkUrl60": "https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/60x60bb.jpg",
      "wrapperType": "collection",
      "collectionViewUrl": "https://itunes.apple.com/us/album/greatest-hits/5669937?uo=4",
      "artistId": 106621,
      "collectionCensoredName": "Greatest Hits",
      "artworkUrl100": "https://is3-ssl.mzstatic.com/image/thumb/Music/v4/3c/18/87/3c188735-e462-3c99-92eb-50fb06afa73f/source/100x100bb.jpg",
      "trackCount": 14,
      "currency": "USD",
      "artistName": "Guns N' Roses",
      "country": "USA",
      "primaryGenreName": "Rock",
      "collectionExplicitness": "notExplicit",
      "collectionPrice": 9.99
    },

Get the Top 5 Albums by the Artist:

a = 'https://itunes.apple.com/lookup?id=106621&entity=album&limit=5'
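
A short sketch of what that could look like end to end, keeping only the album entries (wrapperType of collection) from the lookup response and printing their names:

>>> a = 'https://itunes.apple.com/lookup?id=106621&entity=album&limit=5'
>>> b = requests.get(a).json()
>>> for item in b['results']:
...     if item['wrapperType'] == 'collection':
...         print(item['collectionName'])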

How to get the AMG ID (AllMusic ID):

>>> a = 'https://itunes.apple.com/search?term=jack+johnson&limit=2'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{
  "resultCount": 2,
  "results": [
    {
      "collectionExplicitness": "notExplicit",
      "releaseDate": "2005-03-01T08:00:00Z",
      "currency": "USD",
      "artistId": 909253,

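The search results only expose the artistId, so one way to get to the amgArtistId is to take the artistId from the search response and do a lookup on it, as in the examples that follow; a small sketch combining the two calls:

>>> artist_id = b['results'][0]['artistId']
>>> lookup = requests.get('https://itunes.apple.com/lookup?id={0}'.format(artist_id)).json()
>>> lookup['results'][0]['amgArtistId']
468749
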
Query Multiple Artists by using their amgArtistIds:

>>> a = 'https://itunes.apple.com/lookup?amgArtistId=468749,5723'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{
  "resultCount": 2,
  "results": [
    {
      "artistType": "Artist",
      "amgArtistId": 468749,
      "wrapperType": "artist",
      "artistId": 909253,
      "artistLinkUrl": "https://itunes.apple.com/us/artist/jack-johnson/909253?uo=4",
      "artistName": "Jack Johnson",
      "primaryGenreId": 21,
      "primaryGenreName": "Rock"
    },
    {
      "artistType": "Artist",
      "amgArtistId": 5723,
      "wrapperType": "artist",
      "artistId": 78500,
      "artistLinkUrl": "https://itunes.apple.com/us/artist/u2/78500?uo=4",
      "artistName": "U2",
      "primaryGenreId": 21,
      "primaryGenreName": "Rock"
    }
  ]
}

If we Query the ArtistId from the previous response we will get the same artist:

>>> a = 'https://itunes.apple.com/lookup?id=909253'
>>> b = requests.get(a).json()
>>> print(json.dumps(b, indent=2))
{
  "resultCount": 1,
  "results": [
    {
      "artistType": "Artist",
      "amgArtistId": 468749,
      "wrapperType": "artist",
      "artistId": 909253,
      "artistLinkUrl": "https://itunes.apple.com/us/artist/jack-johnson/909253?uo=4",
      "artistName": "Jack Johnson",
      "primaryGenreId": 21,
      "primaryGenreName": "Rock"
    }
  ]
}

Only get the Artist Name:

>>> b
{u'resultCount': 1, u'results': [{u'artistType': u'Artist', u'amgArtistId': 468749, u'wrapperType': u'artist', u'artistId': 909253, u'artistLinkUrl': u'https://itunes.apple.com/us/artist/jack-johnson/909253?uo=4', u'artistName': u'Jack Johnson', u'primaryGenreId': 21, u'primaryGenreName': u'Rock'}]}

>>> b['results'][0]['artistName']
u'Jack Johnson'

Printing out the Artist Name and Genre with String Formatting:

>>> print('Artist: {artist_name}, Genre: {genre_name}'.format(artist_name=b['results'][0]['artistName'], genre_name=b['results'][0]['primaryGenreName']))
Artist: Jack Johnson, Genre: Rock

Setup the Elasticsearch Log Driver on Docker Swarm

Today we will look at an Elasticsearch logging driver for Docker.

Why a Log Driver?

By default the log output can be retrieved using docker service logs -f service_name, where the log output of that service is shown via stdout. When you have a lot of services in your swarm, it becomes useful to ship all of your log output to a central database service.

This applies not just to Swarm but to standalone Docker as well.

In this tutorial we will use the Elasticsearch log driver to ship the logs of all our Docker Swarm services to Elasticsearch.

Installing the Elasticsearch Log Driver:

If you are running Docker Swarm, run this on all the nodes:

$ docker plugin install rchicoli/docker-log-elasticsearch:latest --alias elasticsearch_latest

Verify that the log driver has been installed:

$ docker plugin ls
ID                  NAME                          DESCRIPTION                          ENABLED
eadf06ad3d2a        elasticsearch_latest:latest   Send log messages to elasticsearch   true

Test the Log Driver:

Run a container of Alpine and echo a string of text:

$ docker run --rm -ti \
    --log-driver elasticsearch_latest \
    --log-opt elasticsearch-url=http://192.168.0.235:9200 \
    --log-opt elasticsearch-insecure=false \
    --log-opt elasticsearch-sniff=false \
    --log-opt elasticsearch-index=docker-%F \
    --log-opt elasticsearch-type=log \
    --log-opt elasticsearch-timeout=10 \
    --log-opt elasticsearch-version=5 \
    --log-opt elasticsearch-fields=containerID,containerName,containerImageID,containerImageName,containerCreated \
    --log-opt elasticsearch-bulk-workers=1 \
    --log-opt elasticsearch-bulk-actions=1000 \
    --log-opt elasticsearch-bulk-size=1024 \
    --log-opt elasticsearch-bulk-flush-interval=1s \
    --log-opt elasticsearch-bulk-stats=false \
        alpine echo -n "this is a test logging message"

Have a look at your Elasticsearch indexes, and you will find the index which was specified in the log-options:

$ curl http://192.168.0.235:9200/_cat/indices?v
health status index             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   docker-2018.05.01 8FTqWq6nQlSGpYjD9M5qSg   5   1          1            0      8.9kb          8.9kb

Let's have a look at the Elasticsearch document which holds the data of the log entry:

$ curl http://192.168.0.235:9200/docker-2018.05.01/_search?pretty
{
  "took" : 5,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "docker-2018.05.01",
        "_type" : "log",
        "_id" : "hMTUG2MBIFc8kAgSNkYo",
        "_score" : 1.0,
        "_source" : {
          "containerID" : "cee0dc758528",
          "containerName" : "jolly_goodall",
          "containerImageID" : "sha256:3fd9065eaf02feaf94d68376da52541925650b81698c53c6824d92ff63f98353",
          "containerImageName" : "alpine",
          "containerCreated" : "2018-05-01T13:11:20.819447101Z",
          "message" : "this is a test logging message",
          "source" : "stdout",
          "timestamp" : "2018-05-01T13:11:21.119861767Z",
          "partial" : true
        }
      }
    ]
  }
}

Using Swarm and Docker Compose:

We will deploy a stack with a whoami golang web app, which will use the elasticsearch log driver:

docker-compose.yml
version: '3.4'

services:
  whoami:
    image: rbekker87/golang-whoami:latest
    networks:
      - appnet
    deploy:
      labels:
        - "traefik.port=80"
        - "traefik.backend.loadbalancer.swarm=true"
        - "traefik.docker.network=appnet"
        - "traefik.frontend.rule=Host:whoami.homecloud.mydomain.com"
      mode: replicated
      replicas: 10
      restart_policy:
        condition: any
      update_config:
        parallelism: 1
        delay: 70s
        order: start-first
        failure_action: rollback
      placement:
        constraints:
          - 'node.role==worker'
      resources:
        limits:
          cpus: '0.01'
          memory: 128M
        reservations:
          cpus: '0.001'
          memory: 64M
    logging:
      driver: elasticsearch_latest
      options:
        elasticsearch-url: "http://192.168.0.235:9200"
        elasticsearch-sniff: "false"
        elasticsearch-index: "docker-whoami-%F"
        elasticsearch-type: "log"
        elasticsearch-timeout: "10"
        elasticsearch-version: "6"
        elasticsearch-fields: "containerID,containerName,containerImageID,containerImageName,containerCreated"
        elasticsearch-bulk-workers: "1"
        elasticsearch-bulk-actions: "1000"
        elasticsearch-bulk-size: "1024"
        elasticsearch-bulk-flush-interval: "1s"
        elasticsearch-bulk-stats: "false"
networks:
  appnet:
    external: true

Deploy the Stack:

$ docker stack deploy -c docker-compose.yml web

Give it some time to launch and have a look at your indexes, and you will find the index which it wrote to:

$ curl http://192.168.0.235:9200/_cat/indices?v
health status index                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   docker-2018.05.01         8FTqWq6nQlSGpYjD9M5qSg   5   1          1            0      8.9kb          8.9kb
yellow open   docker-whoami-2018.05.01  YebUtKa1RnCy86iP5_ylgg   5   1         11            0     54.4kb         54.4kb

Having a look at the data:

$ curl 'http://192.168.0.235:9200/docker-whoami-2018.05.01/_search?pretty&size=1'
{
  "took" : 18,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 11,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "docker-whoami-2018.05.01",
        "_type" : "log",
        "_id" : "acbgG2MBIFc8kAgShQa7",
        "_score" : 1.0,
        "_source" : {
          "containerID" : "97c3b337735f",
          "containerName" : "web_whoami.6.t2prjiexkym14isbx3yfxa99w",
          "containerImageID" : "sha256:0f7762d2ce569fc2ccf95fbc4c7191dde727551a180253fac046daecc580c7e9",
          "containerImageName" : "rbekker87/golang-whoami:latest@sha256:5a55c5de9cc16fbdda376791c90efb7c704c81b8dba949dce21199945c14cc88",
          "containerCreated" : "2018-05-01T13:24:43.089365528Z",
          "message" : "Starting up on port 80",
          "source" : "stdout",
          "timestamp" : "2018-05-01T13:24:48.636773709Z",
          "partial" : false
        }
      }
    ]
  }
}

For more info about this, have a look at the referenced documentation below.

Resources: