Ruan Bekker's Blog

From a Curious mind to Posts on Github

Python Flask Tutorial Series: Create a Hello World App

This is post 1 of the Python Flask Tutorial Series

What is Python Flask

Flask is a Micro Web Framework which is written in Python and is based on the Werkzeug Toolkit and the Jinja2 Template Engine.

Flask is super lightweight: you import the modules as you need them. From some research, Flask is often described as better suited for smaller applications, whereas Django is designed for larger applications.

A good read on the [Differences and Performance Comparison]. With that being said, if you are planning for scale, Flask can most likely handle big applications as well, but it depends on what your application is doing. There is a more detailed discussion on Reddit.

Hello World in Python Flask

In this post we will be creating a “Hello, World” application to demonstrate how easy it is to run a Flask Application.

The only requirements to run this app are python and pip, so that we can install the Flask package.

Creating your Traditional Hello World App

We will install flask globally, but a future post will cover how to set up a virtual environment for your application. Install the flask package:

$ pip install flask

The code for the Hello World Flask Application:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000, debug=True)

Save the above code as app.py and then run the application as follows:

$ python app.py
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 282-492-812

It’s Running, What Now?

We can see that our application is running on 127.0.0.1 and listening on port 5000. If you point your browser to this URL, you will be greeted with: Hello, World!

$ curl -i -XGET http://127.0.0.1:5000/
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 13
Server: Werkzeug/0.12.1 Python/2.7.12
Date: Thu, 27 Nov 2018 13:51:15 GMT

Hello, World!

Explaining the Application Code

  • First, we imported the Flask class from the flask module, using: from flask import Flask
  • Then we instantiate our application from the Flask class: app = Flask(__name__), using our module’s name as a parameter, which the app object uses to resolve resources.
  • Next up we have the @app.route('/') decorator. Flask uses decorators for URL Routing.
  • Below our decorator, we have a view function, this function will be executed when the / route gets matched, in this case returning Hello, World!
  • The last line starts our server; in this example it runs locally on 127.0.0.1, port 5000, with debug enabled, so any error details will be shown directly in the browser. This is only recommended for test/dev and not for production, as it can make your service vulnerable to attackers.
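To make the decorator bullet above more concrete, here is a toy sketch in plain Python (not Flask's actual internals) of how a route decorator can map URL paths to view functions:

```python
# Toy illustration of decorator-based routing, similar in spirit to @app.route
class TinyApp:
    def __init__(self):
        self.routes = {}

    def route(self, path):
        def decorator(func):
            self.routes[path] = func  # remember which function handles which path
            return func
        return decorator

    def dispatch(self, path):
        view = self.routes.get(path)  # look up the view function for this path
        return view() if view else '404 Not Found'

app = TinyApp()

@app.route('/')
def index():
    return 'Hello, World!'

print(app.dispatch('/'))      # Hello, World!
print(app.dispatch('/nope'))  # 404 Not Found
```

Flask's real routing (via Werkzeug) adds URL variables, HTTP methods and more, but the registration idea is the same.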

Let’s Extend our Hello World App

We would like to add the route ‘/movie’ which will return a random movie name:

import random
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello, World!'

@app.route('/movie')
def movie():
    movies = ['godfather', 'deadpool', 'toy story', 'top gun', 'forrest gump']
    return random.choice(movies)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000, debug=True)

Making a GET Request on the ‘/movie’ route:

$ curl -XGET http://127.0.0.1:5000/movie
forrest gump

This was just a basic example and will be covering more topics in detail at a further stage.

Next up, setting up our Python Environment, with Virtual Environment (virtualenv)

Related Content

All posts related to this tutorial series will be listed under Python Flask Tutorial Series tag.

Introduction to Python Flask: Tutorial Series

This post is the index for all the posts that will be covered in our Python Flask Tutorial Series:

What will be covered

This is intended for people starting out with Python Flask, and the basics of using Flask will be covered so that you can get familiar with the framework.

The following will be covered:

  • Hello World Basic App
  • Routing in Flask
  • Jinja Templating
  • Static Files
  • etc

More will be posted

Setup a Relayhost With Postfix to Send Mail via Sendgrid

In this post we will setup Postfix to relay mail through SendGrid, and we will also configure the authentication, as SendGrid is not an open relay. You can obtain credentials by signing up with them for a free account to get the username and password which will be used to relay mail through them.

Access Control on Postfix

For this demonstration we can make use of the mynetworks configuration to specify the CIDRs of the sources from which we want to allow clients to relay. This is an acceptable way of controlling which source addresses you would like to authorize to relay mail via your SMTP relay server.

Sendgrid

Sendgrid offers 100 free outbound emails per day. Sign up with them via sendgrid.com/free, create an API Key and save your credentials in a safe place.

You first need to verify your account by sending a mail using their API, but it’s a step-by-step process and won’t take more than 2 minutes to complete.

Setup Postfix

I will be using Ubuntu to set up Postfix, configure SendGrid as the relayhost, and configure the authentication for the destination server in question:

$ apt install postfix libsasl2-modules -y

Configure postfix to relay all outbound mail via SendGrid; enable sasl auth, tls, relayhost etc. via /etc/postfix/main.cf. The settings that need to be set/configured:

smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous
smtp_tls_security_level = encrypt
header_size_limit = 4096000
relayhost = [smtp.sendgrid.net]:587
mynetworks = /etc/postfix/mynetworks

Create the /etc/postfix/mynetworks file where the whitelisted source addresses will be specified, in our case the loopback address and the class C subnet 10.0.1.0/24:

127.0.0.1/32
10.0.1.0/24
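As a quick sanity check of which client addresses those two entries cover, you can test an address against the same CIDRs with Python's stdlib ipaddress module (just an aside, not part of the Postfix setup):

```python
# Check whether a client address falls inside the whitelisted networks
import ipaddress

networks = [ipaddress.ip_network('127.0.0.1/32'),
            ipaddress.ip_network('10.0.1.0/24')]

def is_allowed(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in networks)

print(is_allowed('10.0.1.25'))  # True
print(is_allowed('10.0.2.25'))  # False
```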

Create the credential file where the credentials for the sendgrid service will be stored, in my case it will be in /etc/postfix/sasl_passwd:

[smtp.sendgrid.net]:587 your_username:your_password

Apply permissions and update the postfix hash table for the file in question:

$ chmod 600 /etc/postfix/sasl_passwd
$ postmap /etc/postfix/sasl_passwd

Enable and Start the Service:

$ systemctl enable postfix
$ systemctl restart postfix

Send a Test Mail

From the server you can test your mail delivery by sending a mail:

$ echo "the body of the mail" | mail -r user@authenticated-domain.com -s "my subject" recipient-mail@mydomain.com

or using telnet for a remote system:

$ telnet smtp-server.ip 25
helo admin
mail from: me@mydomain.com
rcpt to: recipient-mail@mydomain.com
DATA
Subject: This is a test
From: James John <me@mydomain.com>
To: Peter Smith <recipient-mail@mydomain.com>

.
ctrl + ]
q

You can monitor /var/log/mail.log to see the log messages for your email.
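The same test mail can also be composed and relayed from Python's standard library; a sketch, where the addresses and the SMTP host are placeholders:

```python
# Compose a test mail with the stdlib; sending is shown commented out,
# as it needs a reachable Postfix server
import smtplib
from email.message import EmailMessage

def build_test_mail():
    msg = EmailMessage()
    msg['Subject'] = 'This is a test'
    msg['From'] = 'me@mydomain.com'
    msg['To'] = 'recipient-mail@mydomain.com'
    msg.set_content('the body of the mail')
    return msg

msg = build_test_mail()
# To relay it through your Postfix server:
# with smtplib.SMTP('smtp-server.ip', 25) as s:
#     s.send_message(msg)
print(msg['Subject'])  # This is a test
```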

Setup a Golang Environment on Ubuntu

In this post I will demonstrate how to setup a golang environment on Ubuntu.

Get the sources:

Get the latest stable golang release tarball from https://golang.org/dl/, download it to the directory of your choice, and extract the archive:

$ cd /tmp
$ wget https://dl.google.com/go/go1.11.2.linux-amd64.tar.gz
$ tar -xf go1.11.2.linux-amd64.tar.gz

Once the archive is extracted, set root permissions and move it to the path where your other executable binaries reside:

$ sudo chown -R root:root ./go
$ sudo mv go /usr/local/

Cleanup the downloaded archive:

$ rm -rf go1.*.tar.gz

Path Variables:

Adjust your path variables in your ~/.profile and append the following:

~/.profile
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

Source your profile, or open a new tab:

$ source ~/.profile

Test if you can return the version:

$ go version
go version go1.11.2 linux/amd64

Create a Golang Application

Create a simple golang app that prints a string to stdout:

$ cd ~/
$ mkdir -p go/src/hello
$ cd go/src/hello
$ vim app.go

Add the following golang code:

package main

import "fmt"

func main() {
    fmt.Printf("Hello!\n")
}

Build the binary:

$ go build

Run it:

$ ./app
Hello!

Golang: Building a Basic Web Server in Go

Continuing with our #golang-tutorial blog series, in this post we will setup a Basic HTTP Server in Go.

Our Web Server:

Our Web Server will respond on 2 Request Paths:

- / -> returns "Hello, World!"
- /cheers -> returns "Goodbye!"

Application Code:

If you have not setup your golang environment, you can do so by visiting @AkyunaAkish’s Post on Setting up a Golang Development Environment on MacOSX.

Create the server.go or any filename of your choice. Note: I used 2 different ways of writing the content of the http response, for demonstration purposes.

package main

import (
  "io"
  "log"
  "net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
  w.Header().Set("Content-Type", "text/plain; charset=utf-8")
  w.WriteHeader(http.StatusOK)
  w.Write([]byte("Hello, World!\n"))
  log.Println("hello function handler was executed")
}

func goodbye(w http.ResponseWriter, r *http.Request) {
  w.Header().Set("Content-Type", "text/plain; charset=utf-8")
  w.WriteHeader(http.StatusOK)
  io.WriteString(w, "Cheers!\n")
  log.Println("goodbye function handler was executed")
}

func main() {
  http.HandleFunc("/", hello)
  http.HandleFunc("/cheers", goodbye)
  http.ListenAndServe(":8000", nil)
}

Explanation of what we are doing:

  • The program runs in the package main
  • We are importing 3 packages: io, log and net/http
  • HandleFunc registers the handler function for the given pattern in the DefaultServeMux, in this case the HandleFunc registers / to the hello handler function and /cheers to the goodbye handler function.
  • In our 2 handler functions, we have two arguments:
    • The first one is http.ResponseWriter and its corresponding response stream, which is actually an interface type.
    • The second is *http.Request and its corresponding HTTP request. io.WriteString is a helper function to let you write a string into a given writable stream, this is named the io.Writer interface in Golang.
  • ListenAndServe starts an HTTP server with a given address and handler. The handler is usually nil, which means to use DefaultServeMux
  • The logging is not a requirement, but is used here for debugging/verbosity

Running our Server:

Run the http server:

$ go run server.go

Client Side Requests:

Run client side http requests to your golang web server:

$ curl -i http://localhost:8000/
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Wed, 21 Nov 2018 21:33:42 GMT
Content-Length: 14

Hello, World!

And another request to /cheers:

$ curl -i http://localhost:8000/cheers
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Wed, 21 Nov 2018 21:29:46 GMT
Content-Length: 8

Cheers!

Server Side Output:

As we used the log package, the logging gets returned to stdout:

$ go run server.go
2018/11/21 23:29:36 hello function handler was executed
2018/11/21 23:29:46 goodbye function handler was executed

Create Read Only Users in MongoDB

In this post I will demonstrate how to setup 2 read only users in MongoDB, one user that will have access to one MongoDB Database and all the Collections, and one user with access to one MongoDB Database and only one Collection.

First Method: Creating and Assigning the User

In the first method we will create the user and assign it the read permissions it needs; in this case, read-only access to the mytest db.

First logon to mongodb and switch to the admin database:

$ mongo -u dbadmin -p --authenticationDatabase admin
> use admin
switched to db admin

Now list the dbs:

> show dbs
admin       0.000GB
mytest      0.000GB

List the collections and read the data from it for demonstration purposes:

> use mytest
> show collections;
col1
col2
> db.col1.find()
{ "_id" : ObjectId("5be3d377b54849bb738e3b6b"), "name" : "ruan" }
> db.col2.find()
{ "_id" : ObjectId("5be3d383b54849bb738e3b6c"), "name" : "stefan" }

Now create the user collectionreader that will have access to read all the collections from the database:

> db.createUser({user: "collectionreader", pwd: "secretpass", roles: [{role: "read", db: "mytest"}]})
Successfully added user: {
  "user" : "collectionreader",
  "roles" : [
    {
      "role" : "read",
      "db" : "mytest"
    }
  ]
}

Log out and log back in with the new user to test the permissions:

$ mongo -u collectionreader -p --authenticationDatabase mytest
> use mytest
switched to db mytest

> show collections
col1
col2

> db.col1.find()
{ "_id" : ObjectId("5be3d377b54849bb738e3b6b"), "name" : "ruan" }

Now let’s try to write to a collection:

> db.col1.insert({"name": "james"})
WriteResult({
  "writeError" : {
    "code" : 13,
    "errmsg" : "not authorized on mytest to execute command { insert: \"col1\", documents: [ { _id: ObjectId('5be3d6c0492818b2c966d61a'), name: \"james\" } ], ordered: true }"
  }
})

So we can see it works as expected.

Second Method: Create Roles and Assign Users to the Roles

In the second method, we will create the role and then assign users to it. In this scenario, we will only grant a user read access to one collection on a database. Login with the admin user:

$ mongo -u dbadmin -p --authenticationDatabase admin
> use admin

First create the read only role myReadOnlyRole:

> db.createRole({ role: "myReadOnlyRole", privileges: [{ resource: { db: "mytest", collection: "col2"}, actions: ["find"]}], roles: []})

Now create the user and assign it to the role:

> db.createUser({ user: "reader", pwd: "secretpass", roles: [{ role: "myReadOnlyRole", db: "mytest"}]})

Similarly, if we had an existing user that we would also like to add to that role, we can do so as follows:

> db.grantRolesToUser("anotheruser", [ { role: "myReadOnlyRole", db: "mytest" } ])

Logout and login with the reader user:

$ mongo -u reader -p --authenticationDatabase mytest
> use mytest

Now try to list the collections:

> show collections
2018-11-08T07:42:39.907+0100 E QUERY    [thread1] Error: listCollections failed: {
  "ok" : 0,
  "errmsg" : "not authorized on mytest to execute command { listCollections: 1.0, filter: {} }",
  "code" : 13,
  "codeName" : "Unauthorized"
}

As we only have read (find) access on col2, let’s try to read data from collection col1:

> db.col1.find()
Error: error: {
  "ok" : 0,
  "errmsg" : "not authorized on mytest to execute command { find: \"col1\", filter: {} }",
  "code" : 13,
  "codeName" : "Unauthorized"
}

And finally try to read data from the collection we are allowed to read from:

> db.col2.find()
{ "_id" : ObjectId("5be3d383b54849bb738e3b6c"), "name" : "stefan" }

And also making sure we can’t write to that collection:

> db.col2.insert({"name": "frank"})
WriteResult({
  "writeError" : {
    "code" : 13,
    "errmsg" : "not authorized on mytest to execute command { insert: \"col2\", documents: [ { _id: ObjectId('5be3db1530a86d900c361465'), name: \"frank\" } ], ordered: true }"
  }
})

Assigning Permissions to Roles

If you later want to add more permissions to the role, this can easily be done using grantPrivilegesToRole():

$ mongo -u dbadmin -p --authenticationDatabase admin
> use mytest
> db.grantPrivilegesToRole("myReadOnlyRole", [{ resource: { db : "mytest", collection : "col1"}, actions : ["find"] }])

To view the permissions for that role:

> db.getRole("myReadOnlyRole", { showPrivileges : true })

IAM Policy to Allow Team Wide and User Level Permissions on AWS Secrets Manager

In this post we will simulate a scenario where a team would like to create secrets under team path names like /security-team/prod/* and /security-team/dev/*, and allow all the users from that team to write and read secrets from those paths. Then each individual user can create and read secrets from their own isolated path, /security-team/personal/aws-username/*, for their personal secrets.

Our Scenario:

  • Create IAM Policy
  • Create 2 IAM Users: jack.smith and steve.adams
  • Create IAM Group, Associate IAM Policy to the Group
  • Attach 2 Users to the Group

The IAM Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1541597166491",
            "Action": [
                "secretsmanager:CreateSecret",
                "secretsmanager:DeleteSecret",
                "secretsmanager:DescribeSecret",
                "secretsmanager:GetRandomPassword",
                "secretsmanager:GetSecretValue",
                "secretsmanager:ListSecretVersionIds",
                "secretsmanager:ListSecrets",
                "secretsmanager:PutSecretValue",
                "secretsmanager:TagResource",
                "secretsmanager:UpdateSecret"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/prod/*",
                "arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/dev/*",
                "arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/personal/${aws:username}/*"
            ]
        }
    ]
}

Either configure the access keys and secret keys in the credential provider using the aws cli, or, as in this demonstration, use them inside the code. But never hardcode your credentials in production.
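One stdlib-only way to keep the keys out of your source is to read them from environment variables; the variable names below are the standard AWS ones, which boto3 will also pick up automatically if you omit the keys from the Session:

```python
# Read AWS credentials from the environment instead of hardcoding them
import os

def get_aws_credentials():
    access_key = os.environ.get('AWS_ACCESS_KEY_ID')
    secret_key = os.environ.get('AWS_SECRET_ACCESS_KEY')
    if not access_key or not secret_key:
        raise RuntimeError('AWS credentials are not set in the environment')
    return access_key, secret_key
```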

Create Secrets with Secrets Manager in AWS using Python Boto3

Instantiate user1 and user2:

>>> import boto3
>>> jack = boto3.Session(aws_access_key_id='ya', aws_secret_access_key='xx', region_name='eu-west-1').client('secretsmanager')
>>> steve = boto3.Session(aws_access_key_id='yb', aws_secret_access_key='xx', region_name='eu-west-1').client('secretsmanager')

Create a team wide secret with jack:

>>> jack.create_secret(Name='/security-team/prod/app1/username', SecretString='appreader')
{'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': 'x', 'HTTPHeaders': {'date': 'Thu, 08 Nov 2018 07:50:35 GMT', 'x-amzn-requestid': 'x', 'content-length': '193', 'content-type': 'application/x-amz-json-1.1', 'connection': 'keep-alive'}}, u'VersionId': u'x', u'Name': u'/security-team/prod/app1/username', u'ARN': u'arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/prod/app1/username-12ABC00'}

Let jack and steve try to read the secret:

>>> jack.get_secret_value(SecretId='/security-team/prod/app1/username')['SecretString']
'appreader'
>>> steve.get_secret_value(SecretId='/security-team/prod/app1/username')['SecretString']
'appreader'

Now let jack create a personal secret, let him read it:

>>> jack.create_secret(Name='/security-team/personal/jack.smith/svc1/password', SecretString='secret')
>>> jack.get_secret_value(SecretId='/security-team/personal/jack.smith/svc1/password')['SecretString']
'secret'

Now let steve try to read the secret and you will see that access is denied:

>>> steve.get_secret_value(SecretId='/security-team/personal/jack.smith/svc1/password')['SecretString']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
...
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the GetSecretValue operation: User: arn:aws:iam::123456789012:user/steve.adams is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/personal/jack.smith/svc1/password-a1234b

That’s it for this post.

Get Application Performance Metrics on Python Flask With Elastic APM on Kibana and Elasticsearch

In this post we will setup a Python Flask application which includes the APM Agent, which will collect metrics that get pushed to the APM Server. If you have not setup the Elastic Stack with the APM Server, you can follow this post to set up what is needed.

Then we will make a bunch of HTTP Requests to our Application and will go through the metrics per request type.

Application Metrics

Our Application will have the following Request Paths:

  • / - Returns static text
  • /delay - random delays to simulate increased response latencies
  • /upstream - gets data from an upstream provider, with if statements to produce dummy 200, 404 and 502 responses to visualize
  • /5xx - request path that will raise an exception so that we can see the error via apm
  • /sql-write - inserts 5 rows into a sqlite database
  • /sql-read - executes a select all from the database
  • /sql-group - sql query to group all the cities and count them

These are just simple request paths to demonstrate the metrics via APM (Application Performance Monitoring) on Kibana.

Install Flask and APM Agent

Create a virtual environment and install the dependencies:

$ apt install python python-setuptools -y
$ easy_install pip
$ pip install virtualenv
$ pip install elastic-apm[flask]
$ pip install flask

For more info, see the APM Configuration documentation.

Instrument a Bare Bones Python Flask app with APM:

A Barebones app with APM Configured will look like this:

from flask import Flask, jsonify
from elasticapm.contrib.flask import ElasticAPM
from elasticapm.handlers.logging import LoggingHandler

app = Flask(__name__)
apm = ElasticAPM(app, server_url='http://localhost:8200', service_name='flask-app-1', logging=True)

@app.route('/')
def index():
    return jsonify({"message": "response ok"}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)

This will provide metrics on the / request path. In order to trace transaction ids from the metrics, we need to configure the index on Kibana. To do this, head over to Kibana, Management, Index Patterns, Add Index Pattern, apm*, select @timestamp as the time filter field name.

This will allow you to see the data when tracing the transaction id’s via the Discover UI.

Create the Python Flask App

Create the Flask App with the request paths as mentioned in the beginning:

import sqlite3, requests, time, logging, random
from flask import Flask, jsonify
from elasticapm.contrib.flask import ElasticAPM
from elasticapm.handlers.logging import LoggingHandler

names = ['ruan', 'stefan', 'philip', 'norman', 'frank', 'pete', 'johnny', 'peter', 'adam']
cities = ['cape town', 'johannesburg', 'pretoria', 'dublin', 'kroonstad', 'bloemfontein', 'port elizabeth', 'auckland', 'sydney']
lastnames = ['smith', 'bekker', 'admams', 'phillips', 'james', 'adamson']

conn = sqlite3.connect('database.db')
conn.execute('CREATE TABLE IF NOT EXISTS people (name STRING, age INTEGER, surname STRING, city STRING)')
#sqlquery_write = conn.execute('INSERT INTO people VALUES("{}", "{}", "{}", "{}")'.format(random.choice(names), random.randint(18,40), random.choice(lastnames), random.choice(cities)))
seconds = [0.002, 0.003, 0.004, 0.01, 0.3, 0.2, 0.009, 0.015, 0.02, 0.225, 0.009, 0.001, 0.25, 0.030, 0.018]

app = Flask(__name__)
apm = ElasticAPM(app, server_url='http://localhost:8200', service_name='my-app-01', logging=False)

@app.route('/')
def index():
    return jsonify({"message": "response ok"})

@app.route('/delay')
def delay():
    time.sleep(random.choice(seconds))
    return jsonify({"message": "response delay"})

@app.route('/upstream')
def upstream():
    r = requests.get('https://api.ruanbekker.com/people').json()
    r.get('country')
    if r.get('country') == 'italy':
        return 'Italalia!', 200
    elif r.get('country') == 'canada':
        return 'Canada!', 502
    else:
        return 'Not Found', 404

@app.route('/5xx')
def fail_with_5xx():
    value = 'a' + 1
    return jsonify({"message": value})

@app.route('/sql-write')
def sqlw():
    conn = sqlite3.connect('database.db')
    conn.execute('INSERT INTO people VALUES("{}", "{}", "{}", "{}")'.format(random.choice(names), random.randint(18,40), random.choice(lastnames), random.choice(cities)))
    conn.execute('INSERT INTO people VALUES("{}", "{}", "{}", "{}")'.format(random.choice(names), random.randint(18,40), random.choice(lastnames), random.choice(cities)))
    conn.execute('INSERT INTO people VALUES("{}", "{}", "{}", "{}")'.format(random.choice(names), random.randint(18,40), random.choice(lastnames), random.choice(cities)))
    conn.execute('INSERT INTO people VALUES("{}", "{}", "{}", "{}")'.format(random.choice(names), random.randint(18,40), random.choice(lastnames), random.choice(cities)))
    conn.execute('INSERT INTO people VALUES("{}", "{}", "{}", "{}")'.format(random.choice(names), random.randint(18,40), random.choice(lastnames), random.choice(cities)))
    conn.commit()
    conn.close()
    return 'ok', 200

@app.route('/sql-read')
def sqlr():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute('select * from people')
    rows = cur.fetchall()
    conn.close()
    return 'ok', 200

@app.route('/sql-group')
def slqg():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute('select count(*) as num, city from people group by city')
    rows = cur.fetchall()
    conn.close()
    return 'ok', 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
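A side note on the /sql-write handler above: it builds its INSERT statements with Python string formatting, which is fine for generating demo traffic but unsafe with user-supplied input. The same insert with sqlite3 parameter substitution looks like this (sketched against an in-memory database):

```python
# Parameterized insert: the ? placeholders let sqlite3 bind values safely
import random
import sqlite3

names = ['ruan', 'stefan', 'philip']
lastnames = ['smith', 'bekker', 'adamson']
cities = ['cape town', 'dublin', 'sydney']

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE IF NOT EXISTS people (name STRING, age INTEGER, surname STRING, city STRING)')

row = (random.choice(names), random.randint(18, 40),
       random.choice(lastnames), random.choice(cities))
conn.execute('INSERT INTO people VALUES (?, ?, ?, ?)', row)
conn.commit()

count = conn.execute('SELECT count(*) FROM people').fetchone()[0]
print(count)  # 1
```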

Run the app:

$ python app.py

At this point, we won’t have any data in APM, as we need to make requests to our application first. Let’s make 10 HTTP GET requests on the / request path:

$ count=0 && while [ $count -lt 10 ]; do curl http://application-routable-address:80/; sleep 1; count=$((count+1)); done

Visualize the Root Request Path

Head over to Kibana, select APM, and you will see something similar to below when setting the timepicker to 15 minutes at the top right corner. This page gives you the overview of all your configured applications, with the average response times over the selected time, transactions per minute, errors per minute etc:

When you select your application, you will find graphs of your response times and requests per minute, plus a breakdown per request path:

When selecting the request path, in this case GET /, you will find a breakdown of metrics only for that request, and also the response time distribution for that request path; if you select a frame from the response time distribution, it will narrow the focus to that specific transaction.

When you scroll a bit down to the Transaction Sample section, you will find data about the request, response, system etc:

From the Transaction Sample, you can select the View Transaction in Discover button, which will trace that transaction id on the Discover UI:

Increasing the number of curl clients running simultaneously from different servers, and extending the time range to 15 minutes to gather more metrics, results in the screenshot below. Notice the 6ms response time can easily be traced by selecting it in the response time distribution and then discovering it in the UI, which gives you the raw data from that request:

Viewing Application Errors in APM

Make a couple of requests to /5xx:

$ curl http://application-routable-endpoint:80/5xx

Navigate to the app and select Errors; you will then see the exception details that were returned. Here we can see that in our code we tried to concatenate an integer with a string:

Furthermore we can select that error and it will provide us a direct view on where in our code the error gets generated:

Pretty cool, right?! You can also further select the library frames, which will take you to the lower-level code that raised the exception. These errors can be drilled down further via the Discover UI, to group by source address, etc.

Simulate Response Latencies:

Make a couple of requests to the /delay request path, and you should see the increased response times from earlier:

Requests where Database Calls are Executed

The while loop to call random request paths:

count=0 && while [ $count -lt 1000 ];
do
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-read;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-group;
  curl -H "Host: my-eu-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-us-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-read;
  curl -H "Host: my-eu-server" -i http://x.x.x.x/sql-group;
  curl -H "Host: my-us-server" -i http://x.x.x.x/sql-group;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-eu-server" -i http://x.x.x.x/sql-group;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-group;
  count=$((count+1));
done

When we look at our application’s performance monitoring overview, we can see the writes have higher latencies than the group-bys:

The /sql-write request overview:

When selecting a transaction sample, we can see the timeline of each database call:

When looking at the /sql-group request overview, we can see that the response times increase over time; as more data is written to the database, it takes longer to read and group all the data:

The transaction details shows the timeline of the database query from that request:

When you select the database select query on the timeline view, it will take you to the exact database query that was executed:

When we include a database call with a external request to a remote http endpoint, we will see something like:

Viewing 4xx and 5xx Response Codes

From the application code we are returning 2xx, 4xx, and 5xx response codes for this demonstration to visualize them:

Configuring more Applications

Once more apps are configured, and they start serving traffic, they will start appearing on the APM UI as below:

APM is available for other languages as well, and provides getting-started snippets from the APM UI. For more information on APM, have a look at their Documentation.

Hope this was useful.

Setup APM Server on Ubuntu for Your Elastic Stack to Get Insights in Your Application Performance Metrics

In this post we will setup the Elastic Stack with Elasticsearch, Kibana and APM. The APM Server (Application Performance Monitoring) will receive the metric data from the application side, which is then pushed to apm indices on Elasticsearch.

This will be a 2-post blog on APM.

What is APM

From their website APM is described as: “Elastic APM is an application performance monitoring system built on the Elastic Stack. It allows you to monitor software services and applications in real time, collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, etc.”

You get metrics like average and p99 response times, and also have insight when errors occur; it even allows you to look at the stacktrace, pinpointing on which line of your code the error occurred.

APM Agents:

The APM Agent is loaded inside your application; application metrics are then pushed to the APM Server (which we will set up in this post), which in turn pushes them to Elasticsearch, where they are consumed by Kibana.

At the time of writing, the APM Agents are supported in the following languages:

  • Node.js
  • Django
  • Flask
  • Ruby on Rails
  • Rack
  • RUM
  • Golang
  • Java
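As a taste of what the Flask agent looks like (covered in detail in the next post), the snippet below is a minimal sketch: it assumes the elastic-apm package with Flask support is installed, and the service name and server URL are illustrative values:

```python
from flask import Flask
from elasticapm.contrib.flask import ElasticAPM

app = Flask(__name__)

# Hypothetical values; SERVER_URL points at the APM Server we set up in this post
app.config['ELASTIC_APM'] = {
    'SERVICE_NAME': 'my-flask-app',
    'SERVER_URL': 'http://localhost:8200',
}

# Instruments the app and ships transaction metrics to the APM Server
apm = ElasticAPM(app)
```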

Setup the Elastic Stack

One thing to note: every service in your Elastic Stack needs to run on the same version. In this post we will set up Elasticsearch, APM and Kibana, all running on version 6.4.3.

Setup the Pre-Requirements:

Elasticsearch depends on Java, so we will go ahead and set up the repositories:

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ apt-get install apt-transport-https -y
$ echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
$ apt update && apt upgrade -y
$ apt install openjdk-8-jdk -y

Verify that Java is installed:

$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1ubuntu0.16.04.1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

Setup Kernel parameters for Elasticsearch:

$ sysctl -w vm.max_map_count=262144
$ echo 'vm.max_map_count=262144' >> /etc/sysctl.conf

Setup Elasticsearch:

Search for the available versions (if you already have Elasticsearch, either upgrade it, or install APM at the same version as Elasticsearch/Kibana):

$ apt-cache madison elasticsearch
elasticsearch |      6.4.3 | https://artifacts.elastic.co/packages/6.x/apt stable/main amd64 Packages
elasticsearch |      6.4.2 | https://artifacts.elastic.co/packages/6.x/apt stable/main amd64 Packages

Install Elasticsearch:

$ apt-get install elasticsearch=6.4.3 -y

Configure Elasticsearch to lock the memory on startup:

$ sed -i 's/#bootstrap.memory_lock: true/bootstrap.memory_lock: true/g' /etc/elasticsearch/elasticsearch.yml

Enable Elasticsearch on startup and start the service:

$ systemctl daemon-reload
$ systemctl enable elasticsearch.service
$ systemctl start elasticsearch.service

Install Kibana:

Install Kibana version 6.4.3:

$ apt install kibana=6.4.3 -y

For demonstration purposes, I will configure Kibana to listen on all interfaces on port 5601, but note that this allows access for everyone; you can [follow this blogpost] to set up an Nginx reverse proxy that upstreams to localhost on port 5601.

In this demonstration we are using Elasticsearch locally; if you have a remote cluster, apply the configuration where needed.

$ sed -i 's/#server.host: "localhost"/server.host: "0.0.0.0"/'g /etc/kibana/kibana.yml
$ sed -i 's/#elasticsearch.url: "http:\/\/localhost:9200"/elasticsearch.url: "http:\/\/localhost:9200"/'g /etc/kibana/kibana.yml

Enable Kibana on startup and start the service:

$ systemctl enable kibana.service
$ systemctl restart kibana.service

Install the APM Server

Install APM Server version 6.4.3:

$ apt install apm-server=6.4.3 -y

Since we have everything running locally, the configuration can be kept as is, but if you need to point to remote Elasticsearch or Kibana hosts, this can be done via /etc/apm-server/apm-server.yml.
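For reference, the relevant keys in /etc/apm-server/apm-server.yml look like the following (the values shown are the defaults; adjust the hosts if Elasticsearch or Kibana are remote):

```yaml
# Address the APM Server listens on for agent traffic
apm-server:
  host: "localhost:8200"

# Where the collected metrics are shipped to
output.elasticsearch:
  hosts: ["localhost:9200"]
```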

Then, once Kibana and Elasticsearch are started, load the mapping templates, and enable and start the service:

$ apm-server setup
$ systemctl enable apm-server.service
$ systemctl restart apm-server.service

Ensure all the services are running with netstat -tulpn; ports 9200, 9300, 5601 and 8200 should be listening.

Access Your Elastic Stack

Access Kibana on your routable endpoint on port 5601 and you should see something like this:

Configuring an Application to Push Metrics to APM

In the next post I will set up a Python Flask application with APM.

Benchmark Website Response Times With CURL

When making requests to websites, curl can give us timing insights such as:

  • Lookup time
  • Connect time
  • AppCon time
  • Redirect time
  • PreXfer time
  • StartXfer time

We will make a request to a website that has caching enabled; the first hit will be a MISS:

$ curl -s -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nAppCon time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null https://user-images.githubusercontent.com/567298/53351889-85572000-392a-11e9-9720-464e9318206e.jpg

Lookup time:  1.524465
Connect time: 1.707561
AppCon time:  0.000000
Redirect time:    0.000000
PreXfer time: 1.707656
StartXfer time:   1.897660

Total time:   2.451824
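Note that the time_* values curl reports are cumulative, measured from the start of the request, so the duration of an individual phase is the difference between consecutive values. Using the numbers above:

```shell
# Per-phase durations derived from the cumulative curl timings above
awk 'BEGIN {
  lookup = 1.524465; connect = 1.707561
  prexfer = 1.707656; startxfer = 1.897660; total = 2.451824
  printf "TCP connect:\t%.6f\n", connect - lookup      # handshake only
  printf "Server wait:\t%.6f\n", startxfer - prexfer   # time to first byte
  printf "Transfer:\t%.6f\n", total - startxfer        # body download
}'
```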

The next hit will be a HIT:

$ curl -s -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nAppCon time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null https://user-images.githubusercontent.com/567298/53351889-85572000-392a-11e9-9720-464e9318206e.jpg

Lookup time:  0.004441
Connect time: 0.188065
AppCon time:  0.000000
Redirect time:    0.000000
PreXfer time: 0.188160
StartXfer time:   0.381344

Total time:   0.926420
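As the -w template gets long, it is worth knowing that curl also accepts @filename for -w, so the format string can live in a reusable file (curl-format.txt is just a name chosen here):

```shell
# Write the -w template to a file once...
cat > curl-format.txt <<'EOF'
Lookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nAppCon time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n
EOF

# ...then reference it on every benchmark run:
# curl -s -w "@curl-format.txt" -o /dev/null https://example.com
```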

Similar Posts: