Ruan Bekker's Blog

From a Curious mind to Posts on Github

Python Flask Tutorial Series: Setup a Python Virtual Environment

In our previous post we wrote a basic Hello World App in Flask. This is post 2 of the Python Flask Tutorial Series.

In this section we will cover our environment setup, where I will show you how to set up a typical Python Flask environment using virtualenv.

What is VirtualEnv?

Virtualenv allows you to have isolated Python environments, where each project can have its own package versions. Some applications need a specific version of a certain package, so if you are running multiple applications on one server, managing each one's dependencies can be a pain: you may run into scenarios where applications depend on conflicting versions, forcing you to upgrade and downgrade packages constantly.

Luckily, with the help of virtualenv, each environment is isolated from the others: system wide you might be running Python 2.7 with minimal packages installed, while a virtual environment runs Python 3 with the packages for the application you are developing.
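
You can verify from inside Python whether you are running in a virtual environment. This is a small illustrative check, not part of the tutorial's app:

```python
import sys

def in_virtualenv():
    # virtualenv and venv point sys.prefix at the environment directory,
    # while the original interpreter prefix is kept in base_prefix
    # (older virtualenv versions used real_prefix instead)
    base = getattr(sys, "real_prefix", None) or getattr(sys, "base_prefix", sys.prefix)
    return sys.prefix != base

print(in_virtualenv())
```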

Setup a Virtual Environment:

We will set up a virtualenv for our project with our default Python version, which in this case is 2.7:

$ mkdir ~/projects/mywebapp
$ cd ~/projects/mywebapp
$ virtualenv .venv

At this moment you should have your virtual environment ready; now activate it:

$ source .venv/bin/activate

To confirm your python version:

$ python --version
Python 2.7.6

If you have multiple versions of python, you can create your virtual environment with a different python version by using the -p flag, as in:

$ virtualenv -p /usr/local/bin/python2.7 .venv

Now that we are in our virtualenv, let's install 2 packages, Flask and Requests:

$ pip install flask
$ pip install requests

With pip we can list the installed packages using pip freeze. Since this is our virtual environment, we will only see the packages that were installed into this environment:

$ pip freeze
click==6.7
Flask==0.12
itsdangerous==0.24
Jinja2==2.9.5.1
MarkupSafe==1.0
requests==2.7.0
six==1.10.0
virtualenv==15.0.1
Werkzeug==0.12.1

We can dump this to a file, which we can later use to install packages from a list so that we don't have to specify them manually:

$ pip freeze > requirements.txt

Now let's say you are on a different host and would like to install the packages from the requirements.txt file; we do this with the following command:

$ pip install -r requirements.txt

To exit your virtualenv, you do the following:

$ deactivate

I hope this was useful. Next up in our Python Flask Tutorial Series: Routing in Flask.

How to Setup a Serverless URL Shortener With API Gateway Lambda and DynamoDB on AWS

image

Today we will set up a Serverless URL Shortener using API Gateway, Lambda with Python, and DynamoDB.

Overview

The service that we will be creating shortens URLs via our API, which creates an entry in DynamoDB. When a GET is performed on the shortened URL, a GetItem is executed against DynamoDB to fetch the long URL, and a 301 redirect sends the client to the intended destination URL.

Note, I am using a domain name which is quite long, but it's only for demonstration; if you can get hold of a short domain like t.co, that will make your shortened URLs really short in character count.

Update: URL Shortener UI available in this post

The Setup

Code has been published to my Github Repository

The following services will be used to create a URL Shortener:

  • AWS API Gateway: ( /create: to create a shortened url and /t/{id} to redirect to long url)
  • AWS IAM: (Role and Policy for Permissions to call DynamoDB from Lambda)
  • AWS Lambda: (Application Logic)
  • AWS DynamoDB: (Persistent Store to save our Data)
  • AWS ACM: (Optional: Certificate for your Domain)
  • AWS Route53: (Optional: DNS for the domain that you want to associate to your API)

The flow will be like the following:

  • A POST request is made to the /create request path with the long_url data in the payload
  • This data is then used by the Lambda function to create a short url and create an entry in DynamoDB
  • In DynamoDB the entry is created with the short id as the hash key and the long url as one of the attributes
  • The response to the client is the short url
  • When a GET method is performed on the id, e.g. /t/{short_id}, a lookup is done on the DynamoDB table to retrieve the long url
  • A 301 redirect is performed on API Gateway and the client is redirected to the intended url
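
The flow above can be sketched in plain Python, with a dict standing in for the DynamoDB table. This is an illustration only, not the Lambda code used later:

```python
from random import choice
from string import ascii_letters, digits

table = {}  # stand-in for the DynamoDB table

def create(long_url, length=8):
    # POST /create: generate a short id and store the mapping
    short_id = "".join(choice(ascii_letters + digits) for _ in range(length))
    table[short_id] = {"long_url": long_url, "hits": 0}
    return short_id

def resolve(short_id):
    # GET /t/{short_id}: look up the long url, bump the hit counter,
    # and answer with a 301 redirect to the destination
    item = table.get(short_id)
    if item is None:
        return {"statusCode": 301, "location": "https://example.com/404"}
    item["hits"] += 1
    return {"statusCode": 301, "location": item["long_url"]}
```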

Creating the URL Shortener

After completing this tutorial you will have your own Serverless URL Shortener using API Gateway, Lambda and DynamoDB.

IAM Permissions

On AWS IAM, create an IAM Policy; in my case the policy name is lambda-dynamodb-url-shortener, and note that I masked out my account number:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:Query",
                "dynamodb:UpdateItem"
            ],
            "Resource": "arn:aws:dynamodb:eu-west-1:xxxxxxxxxxxx:table/url-shortener-table"
        }
    ]
}

Head over to IAM Roles and select Create Role. Select Lambda as the Trusted Entity from the AWS Service section, proceed to permissions, and select the IAM Policy that was created (in my case lambda-dynamodb-url-shortener) along with the AWSLambdaBasicExecutionRole policy. Give your role a name, in my case lambda-dynamodb-url-shortener-role.

DynamoDB Table

Next, head over to DynamoDB and create a table; in my case the table name is url-shortener-table and the primary key short_id is set to String:

image

Lambda Functions

Once the table is created, head over to Lambda and create a function, in my case using Python 3.6. Provide a name (I used url-shortener-create) and select the IAM role we created previously. This function will create the shortened urls:

image

The code for your lambda function, which takes care of creating the short urls and saving them to DynamoDB. Take note of the region and table name to ensure they match your setup:

import os
import json
import boto3
from string import ascii_letters, digits
from random import choice, randint
from time import strftime, time
from urllib import parse

app_url = os.getenv('APP_URL')
min_char = int(os.getenv('MIN_CHAR'))
max_char = int(os.getenv('MAX_CHAR'))
string_format = ascii_letters + digits

ddb = boto3.resource('dynamodb', region_name = 'eu-west-1').Table('url-shortener-table')

def generate_timestamp():
    response = strftime("%Y-%m-%dT%H:%M:%S")
    return response

def expiry_date():
    response = int(time()) + int(604800)
    return response

def check_id(short_id):
    # if the generated id already exists, generate a new one
    if 'Item' in ddb.get_item(Key={'short_id': short_id}):
        return generate_id()
    return short_id

def generate_id():
    short_id = "".join(choice(string_format) for x in range(randint(min_char, max_char)))
    print(short_id)
    response = check_id(short_id)
    return response

def lambda_handler(event, context):
    analytics = {}
    print(event)
    short_id = generate_id()
    short_url = app_url + short_id
    long_url = json.loads(event.get('body')).get('long_url')
    timestamp = generate_timestamp()
    ttl_value = expiry_date()

    analytics['user_agent'] = event.get('headers').get('User-Agent')
    analytics['source_ip'] = event.get('headers').get('X-Forwarded-For')
    analytics['xray_trace_id'] = event.get('headers').get('X-Amzn-Trace-Id')

    if len(parse.urlsplit(long_url).query) > 0:
        url_params = dict(parse.parse_qsl(parse.urlsplit(long_url).query))
        for k in url_params:
            analytics[k] = url_params[k]

    response = ddb.put_item(
        Item={
            'short_id': short_id,
            'created_at': timestamp,
            'ttl': int(ttl_value),
            'short_url': short_url,
            'long_url': long_url,
            'analytics': analytics,
            'hits': int(0)
        }
    )

    return {
        "statusCode": 200,
        "body": short_url
    }

Set a couple of environment variables that will be used in our function. MIN_CHAR and MAX_CHAR from the screenshot below set the bounds on the number of characters used, at random, to make the short id unique. APP_URL will be your domain name, as this is returned to the client with the short id, e.g. https://tiny.myserverlessapp.net/t/3f8Hf38n398t :
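
To see what MIN_CHAR and MAX_CHAR do, here is the id generation in isolation, with the same logic as the function above but the bounds passed in directly:

```python
from random import choice, randint
from string import ascii_letters, digits

def generate_short_id(min_char, max_char):
    # id length is drawn uniformly between min_char and max_char,
    # with each character picked from [a-zA-Z0-9]
    return "".join(choice(ascii_letters + digits) for _ in range(randint(min_char, max_char)))

print(generate_short_id(12, 16))
```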

image

While you are on Lambda, create the function that will retrieve the long url, in my case url-shortener-retrieve:

import os
import json
import boto3

ddb = boto3.resource('dynamodb', region_name = 'eu-west-1').Table('url-shortener-table')

def lambda_handler(event, context):
    short_id = event.get('short_id')

    try:
        item = ddb.get_item(Key={'short_id': short_id})
        long_url = item.get('Item').get('long_url')
        # increase the hit number on the db entry of the url (analytics?)
        ddb.update_item(
            Key={'short_id': short_id},
            UpdateExpression='set hits = hits + :val',
            ExpressionAttributeValues={':val': 1}
        )

    except:
        return {
            'statusCode': 301,
            'location': 'https://objects.ruanbekker.com/assets/images/404-blue.jpg'
        }

    return {
        "statusCode": 301,
        "location": long_url
    }

API Gateway

Head over to API Gateway and create your API, in my case url-shortener-api

image

Head over to Resources:

image

and create a new resource called /create:

image

Once the resource is created, create a POST method on the create resource, select Lambda as the integration type, and tick Lambda proxy integration as seen below:

image

Once you save it, it will ask you to give API Gateway permission to invoke your lambda function, which you can accept by hitting OK as below:

image

When you look at the POST method on your create resource, it should look like this:

image

Select the root resource / and from Actions create a new resource /t:

image

Select the /t resource and create a new resource named shortid and provide {shortid} in the resource path as this will be the data that will be proxied through to our lambda function:

image

Create a GET method on the /t/{shortid} resource and select url-shortener-retrieve lambda function as the function from the lambda integration selection as seen below:

image

Again, grant api gateway permission to invoke your function:

image

When you select the GET method, it should look like this:

image

Select the Integration Request and head over to Mapping Templates:

image

From the Request body passthrough section, add a mapping template for application/json and provide the following mapping template:

{
    "short_id": "$input.params('shortid')"
}

On the Method Response:

image

Delete the 200 HTTP Status response and create a new one via “Add Response”: add a 301 HTTP Status and add a Location header to the response.

Navigate to the Integration Response from the /{shortid} GET method:

image

delete the 200 HTTP response, choose “Add integration response”, set the method response status to 301, and add a header mapping for Location to integration.response.body.location as below:

image

make sure the integration response is selected so that the method response reflects the 301:

image

Navigate to Actions and select “Deploy API”, select your stage, in my case test and deploy:

image

Go to stages, select your stage, select the post request to reveal the API URL:

image

Time to test out the URL Shortener:

curl -XPOST -H "Content-Type: application/json" https://xxxxxx.execute-api.eu-west-1.amazonaws.com/test/create -d '{"long_url": "https://www.google.com/search?q=helloworld"}'
https://tiny.myserverlessapp.net/t/pcnWoCGCr2ad1x

ACM Certificates

At this moment we don't have our domain connected to our API Gateway, and we would also want a certificate on our application; we can use ACM to request a certificate and associate it with our domain. First request a certificate on ACM: select Request a certificate, create a wildcard entry *.yourdomain.com, and select DNS Validation (if you host with Route53, you get the option to create the validation record automatically).

Head back to API Gateway to associate the Domain and ACM Certificate to our API:

From the “Custom Domain Names” section, create a custom domain name. Once you select Regional, it will ask for the target domain name, which resolves to the API endpoint that was created. From the “Base Path Mappings” section, select / as the path to your API stage, in my case url-shortener-api:test:

image

Route 53

The last part is to create a Route53 entry for tiny.yourdomain.com that resolves to the CNAME value of the target domain name provided in the custom domain names section:

image

Demo the URL Shortener Service:

Once everything is set up we can test by creating a shortened URL:

$ curl -XPOST -H "Content-Type: application/json" https://tiny.myserverlessapp.net/create -d '{"long_url": "https://www.google.com/search?q=helloworld"}'
https://tiny.myserverlessapp.net/t/p7ISNcxTByXhN

Testing out the Short URL to redirect to the Destination URL:

$ curl -ivL https://tiny.myserverlessapp.net/t/p7ISNcxTByXhN
*   Trying 34.226.10.0...
* TCP_NODELAY set
* Connected to tiny.myserverlessapp.net (34.226.10.0) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.myserverlessapp.net
* Server certificate: Amazon
* Server certificate: Amazon Root CA 1
* Server certificate: Starfield Services Root Certificate Authority - G2
> GET /t/p7ISNcxTByXhN HTTP/1.1
> Host: tiny.myserverlessapp.net
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Date: Tue, 29 Nov 2018 00:05:02 GMT
Date: Tue, 29 Nov 2018 00:05:02 GMT
< Content-Type: application/json
Content-Type: application/json
< Content-Length: 77
Content-Length: 77
< Connection: keep-alive
Connection: keep-alive
< x-amzn-RequestId: f79048c8-cb56-41e8-b21d-b45fac47453a
x-amzn-RequestId: f79048c8-cb56-41e8-b21d-b45fac47453a
< x-amz-apigw-id: OeKPHH7_DoEFdjg=
x-amz-apigw-id: OeKPHH7_DoEFdjg=
< Location: https://www.google.com/search?q=helloworld
Location: https://www.google.com/search?q=helloworld

At this moment our API is open to the world, which is probably not ideal, as anyone will be able to shorten URLs. Check out the Set Up API Keys Using the API Gateway Console documentation on how to secure your application with an API key, which can be included in your request headers when shortening URLs.

For a bit of housekeeping, you can implement TTL on DynamoDB so that old items expire, which keeps your DynamoDB table from growing to large amounts of storage; have a look at the post Delete Old Items with Amazon's DynamoDB TTL Feature to implement that.
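
The create function earlier already writes such a ttl attribute, set 7 days ahead; the computation is just an epoch offset, sketched here:

```python
from time import time

def expiry_epoch(days=7):
    # DynamoDB TTL expects an attribute holding an epoch timestamp in seconds;
    # items whose ttl value is in the past become eligible for deletion
    return int(time()) + days * 86400
```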

Thank You

Please feel free to show support by sharing this post, making a donation, or subscribing, or reach out to me if you want me to demo and write up any specific tech topic.


Python Flask Tutorial Series: Create a Hello World App

This is post 1 of the Python Flask Tutorial Series

What is Python Flask

Flask is a Micro Web Framework which is written in Python and is based on the Werkzeug Toolkit and the Jinja2 Template Engine.

Flask is super lightweight, and you import modules as you need them. From some research, many say that Flask is better suited for smaller applications, whereas Django is designed for larger applications.

A good read on the Differences and Performance Comparison. With that being said, if you are planning for scale I am pretty sure that Flask can handle big applications, but it probably depends on what your application is doing. More detailed discussion on Reddit.

Hello World in Python Flask

In this post we will be creating a “Hello, World” application to demonstrate how easy it is to run a Flask application.

The only requirement to run this app is to have python and pip installed, so that we can install the required Flask package.

Creating your Traditional Hello World App

We will install flask globally, but a future post will cover how to set up a virtual environment for your application. Install the flask package:

$ pip install flask

The code for the Hello World Flask Application:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000, debug=True)

Save the above code as app.py and then run the application as follows:

$ python app.py
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 282-492-812

It’s Running What Now?

We can see that our application is running on 127.0.0.1 and listening on port 5000. If you point your browser to this URL, you will get back: Hello, World!

$ curl -i -XGET http://127.0.0.1:5000/
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 13
Server: Werkzeug/0.12.1 Python/2.7.12
Date: Thu, 27 Nov 2018 13:51:15 GMT

Hello, World!

Explaining the Application Code

  • First, we imported the Flask class from the flask module, using: from flask import Flask
  • Then we instantiate our application from the Flask class: app = Flask(__name__) using our module’s name as a parameter, where our app object will use this to resolve resources. We are using __name__ , which links our module to our app object.
  • Next up we have the @app.route('/') decorator. Flask uses decorators for URL Routing.
  • Below our decorator, we have a view function, this function will be executed when the / route gets matched, in this case returning Hello, World!
  • The last line starts our server; in this example it runs locally on 127.0.0.1 on port 5000 with debug enabled, so any error details are shown directly in the browser. This is only recommended for test/dev, not for production, as it can make your service vulnerable to attackers.
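
One way to keep debug off unless explicitly requested is to gate it on an environment variable; FLASK_DEBUG here is just an illustrative variable name:

```python
import os

def debug_enabled(env=None):
    # enable the debugger only when FLASK_DEBUG=1 is explicitly set
    env = os.environ if env is None else env
    return env.get("FLASK_DEBUG", "0") == "1"

# app.run(host='127.0.0.1', port=5000, debug=debug_enabled())
```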

Let’s Extend our Hello World App

We would like to add the route ‘/movie’ which will return a random movie name:

import random
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello, World!'

@app.route('/movie')
def movie():
    movies = ['godfather', 'deadpool', 'toy story', 'top gun', 'forrest gump']
    return random.choice(movies)

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000, debug=True)

Making a GET Request on the ‘/movie’ route:

$ curl -XGET http://127.0.0.1:5000/movie
forrest gump

This was just a basic example; more topics will be covered in detail at a later stage.

Next up, setting up our Python Environment, with Virtual Environment (virtualenv)

Related Content

All posts related to this tutorial series will be listed under Python Flask Tutorial Series tag.

Introduction to Python Flask: Tutorial Series

This post is the index for all the posts that will be covered in our Python Flask Tutorial Series:

What will be covered

This is intended for people starting out with Python Flask; the basics of using Flask will be covered so that you can get familiar with the framework.

The following will be covered:

  • Hello World Basic App
  • Routing in Flask
  • Jinja Templating
  • Static Files
  • etc

More will be posted

Setup a Relayhost With Postfix to Send Mail via Sendgrid

In this post we will set up Postfix to relay mail through SendGrid, and we will also configure authentication, as SendGrid is not an open relay; you can obtain credentials by signing up for a free account to get the username and password used to relay mail through them.

Access Control on Postfix

For this demonstration we make use of the mynetworks configuration to specify the CIDRs of the sources from which we want to allow clients to relay. This is an acceptable way of controlling which source addresses are authorized to relay mail via your SMTP relay server.
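
The effect of mynetworks can be illustrated with Python's ipaddress module; the CIDRs here match the ones configured later in this post:

```python
from ipaddress import ip_address, ip_network

# same whitelisted source networks as in /etc/postfix/mynetworks
allowed = [ip_network("127.0.0.1/32"), ip_network("10.0.1.0/24")]

def may_relay(source_ip):
    # a client may relay only when its source address falls in an allowed CIDR
    return any(ip_address(source_ip) in net for net in allowed)
```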

Sendgrid

Sendgrid offers 100 free outbound emails per day; sign up with them via sendgrid.com/free, create an API Key, and save your credentials in a safe place.

You first need to verify your account by sending a mail using their API, but it's step by step and won't take more than 2 minutes to complete.

Setup Postfix

I will be using Ubuntu to set up Postfix, configuring SendGrid as the relayhost along with the authentication for the destination server in question:

$ apt install postfix libsasl2-modules -y

Configure postfix to relay all outbound mail via SendGrid, enabling SASL auth, TLS, the relayhost, etc. via /etc/postfix/main.cf. The settings that need to be configured:

smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous
smtp_tls_security_level = encrypt
header_size_limit = 4096000
relayhost = [smtp.sendgrid.net]:587
mynetworks = /etc/postfix/mynetworks

Create the /etc/postfix/mynetworks file where the whitelisted source addresses are specified, in our case the loopback address and the class C subnet 10.0.1.0:

127.0.0.1/32
10.0.1.0/24

Create the credential file where the credentials for the sendgrid service will be stored, in my case it will be in /etc/postfix/sasl_passwd:

[smtp.sendgrid.net]:587 your_username:your_password

Apply permissions and update the postfix hash tables for the file in question:

$ chmod 600 /etc/postfix/sasl_passwd
$ postmap /etc/postfix/sasl_passwd

Enable and Start the Service:

$ systemctl enable postfix
$ systemctl restart postfix

Send a Test Mail

From the server you can test your mail delivery by sending a mail:

$ echo "the body of the mail" | mail -r user@authenticated-domain.com -s "my subject" recipient-mail@mydomain.com

or using telnet for a remote system:

$ telnet smtp-server.ip 25
helo admin
mail from: me@mydomain.com
rcpt to: recipient-main@mydomain.com
DATA
Subject: This is a test
From: James John <me@mydomain.com>
To: Peter Smith <recipient-mail@mydomain.com>

ctrl + ]
q

You can monitor /var/log/mail.log to see the log messages for your email.

Setup a Golang Environment on Ubuntu

In this post I will demonstrate how to setup a golang environment on Ubuntu.

Get the sources:

Get the latest stable release golang tarball from https://golang.org/dl/ and download to the directory path of choice, and extract the archive:

$ cd /tmp
$ wget https://dl.google.com/go/go1.11.2.linux-amd64.tar.gz
$ tar -xf go1.11.2.linux-amd64.tar.gz

Once the archive is extracted, set root permissions and move it to the path where your other executable binaries reside:

$ sudo chown -R root:root ./go
$ sudo mv go /usr/local/

Clean up the downloaded archive:

$ rm -rf go1.*.tar.gz

Path Variables:

Adjust your path variables in your ~/.profile and append the following:

~/.profile
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

Source your profile, or open a new tab:

$ source ~/.profile

Test if you can return the version:

$ go version
go version go1.11.2 linux/amd64

Create a Golang Application

Create a simple golang app that prints a string to stdout:

$ cd ~/
$ mkdir -p go/src/hello
$ cd go/src/hello
$ vim app.go

Add the following golang code:

package main

import "fmt"

func main() {
    fmt.Printf("Hello!\n")
}

Build the binary:

$ go build

Run it (go build names the binary after the package directory, hello):

$ ./hello
Hello!

Golang: Building a Basic Web Server in Go

Continuing with our #golang-tutorial blog series, in this post we will setup a Basic HTTP Server in Go.

Our Web Server:

Our Web Server will respond on 2 Request Paths:

- / -> returns "Hello, World!"
- /cheers -> returns "Cheers!"

Application Code:

If you have not set up your golang environment, you can do so by visiting @AkyunaAkish’s post on Setting up a Golang Development Environment on MacOSX.

Create server.go or any filename of your choice. Note: I show 2 ways of writing the content of the http response, for demonstration.

package main

import (
  "io"
  "log"
  "net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
  w.Header().Set("Content-Type", "text/plain; charset=utf-8")
  w.WriteHeader(http.StatusOK)
  w.Write([]byte("Hello, World!" + "\n"))
  log.Println("hello function handler was executed")
}

func goodbye(w http.ResponseWriter, r *http.Request) {
  w.Header().Set("Content-Type", "text/plain; charset=utf-8")
  w.WriteHeader(http.StatusOK)
  io.WriteString(w, "Cheers!"+"\n")
  log.Println("goodbye function handler was executed")
}

func main() {
  http.HandleFunc("/", hello)
  http.HandleFunc("/cheers", goodbye)
  http.ListenAndServe(":8000", nil)
}

Explanation of what we are doing:

  • Programs runs in the package main
  • We are importing 3 packages: io, log and net/http
  • HandleFunc registers the handler function for the given pattern in the DefaultServeMux, in this case the HandleFunc registers / to the hello handler function and /cheers to the goodbye handler function.
  • In our 2 handler functions, we have two arguments:
    • The first one is http.ResponseWriter and its corresponding response stream, which is actually an interface type.
    • The second is *http.Request and its corresponding HTTP request. io.WriteString is a helper function to let you write a string into a given writable stream, this is named the io.Writer interface in Golang.
  • ListenAndServe starts an HTTP server with a given address and handler. The handler is usually nil, which means to use DefaultServeMux
  • The logging is not a requirement, but I used it for debugging/verbosity

Running our Server:

Run the http server:

$ go run server.go

Client Side Requests:

Run client side http requests to your golang web server:

$ curl -i http://localhost:8000/
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Wed, 21 Nov 2018 21:33:42 GMT
Content-Length: 14

Hello, World!

And another request to /cheers:

$ curl -i http://localhost:8000/cheers
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Date: Wed, 21 Nov 2018 21:29:46 GMT
Content-Length: 8

Cheers!

Server Side Output:

As we used the log package, the logging gets returned to stdout:

$ go run server.go
2018/11/21 23:29:36 hello function handler was executed
2018/11/21 23:29:46 goodbye function handler was executed


Create Read Only Users in MongoDB

In this post I will demonstrate how to setup 2 read only users in MongoDB, one user that will have access to one MongoDB Database and all the Collections, and one user with access to one MongoDB Database and only one Collection.

First Method: Creating and Assigning the User

In the first method we create the user and assign it the read permissions it needs, in this case read only access to the mytest db.

First log on to MongoDB and switch to the admin database:

$ mongo -u dbadmin -p --authenticationDatabase admin
> use admin
switched to db admin

Now list the dbs:

> show dbs
admin       0.000GB
mytest      0.000GB

List the collections and read the data from it for demonstration purposes:

> use mytest
> show collections;
col1
col2
> db.col1.find()
{ "_id" : ObjectId("5be3d377b54849bb738e3b6b"), "name" : "ruan" }
> db.col2.find()
{ "_id" : ObjectId("5be3d383b54849bb738e3b6c"), "name" : "stefan" }

Now create the user collectionreader, which will have read access to all the collections in the database:

> db.createUser({user: "collectionreader", pwd: "secretpass", roles: [{role: "read", db: "mytest"}]})
Successfully added user: {
  "user" : "collectionreader",
  "roles" : [
    {
      "role" : "read",
      "db" : "mytest"
    }
  ]
}

Exit and log out and log in with the new user to test the permissions:

$ mongo -u collectionreader -p --authenticationDatabase mytest
> use mytest
switched to db mytest

> show collections
col1
col2

> db.col1.find()
{ "_id" : ObjectId("5be3d377b54849bb738e3b6b"), "name" : "ruan" }

Now let's try to write to a collection:

> db.col1.insert({"name": "james"})
WriteResult({
  "writeError" : {
    "code" : 13,
    "errmsg" : "not authorized on mytest to execute command { insert: \"col1\", documents: [ { _id: ObjectId('5be3d6c0492818b2c966d61a'), name: \"james\" } ], ordered: true }"
  }
})

So we can see it works as expected.

Second Method: Create Roles and Assign Users to the Roles

In the second method, we create the roles and then assign users to the roles. In this scenario, we will only grant a user read access to one collection in a database. Log in with the admin user:

$ mongo -u dbadmin -p --authenticationDatabase admin
> use admin

First create the read only role myReadOnlyRole:

> db.createRole({ role: "myReadOnlyRole", privileges: [{ resource: { db: "mytest", collection: "col2"}, actions: ["find"]}], roles: []})

Now create the user and assign it to the role:

> db.createUser({ user: "reader", pwd: "secretpass", roles: [{ role: "myReadOnlyRole", db: "mytest"}]})

Similarly, if we had an existing user that we also would like to add to that role, we can do that by doing this:

> db.grantRolesToUser("anotheruser", [ { role: "myReadOnlyRole", db: "mytest" } ])

Logout and login with the reader user:

$ mongo -u reader -p --authenticationDatabase mytest
> use mytest

Now try to list the collections:

> show collections
2018-11-08T07:42:39.907+0100 E QUERY    [thread1] Error: listCollections failed: {
  "ok" : 0,
  "errmsg" : "not authorized on mytest to execute command { listCollections: 1.0, filter: {} }",
  "code" : 13,
  "codeName" : "Unauthorized"
}

As we only have read (find) access on col2, let's try to read data from collection col1:

> db.col1.find()
Error: error: {
  "ok" : 0,
  "errmsg" : "not authorized on mytest to execute command { find: \"col1\", filter: {} }",
  "code" : 13,
  "codeName" : "Unauthorized"
}

And finally try to read data from the collection we are allowed to read from:

> db.col2.find()
{ "_id" : ObjectId("5be3d383b54849bb738e3b6c"), "name" : "stefan" }

And also make sure we can't write to that collection:

> db.col2.insert({"name": "frank"})
WriteResult({
  "writeError" : {
    "code" : 13,
    "errmsg" : "not authorized on mytest to execute command { insert: \"col2\", documents: [ { _id: ObjectId('5be3db1530a86d900c361465'), name: \"frank\" } ], ordered: true }"
  }
})

Assigning Permissions to Roles

If you later want to add more permissions to the role, this can easily be done using grantPrivilegesToRole():

$ mongo -u dbadmin -p --authenticationDatabase admin
> use mytest
> db.grantPrivilegesToRole("myReadOnlyRole", [{ resource: { db : "mytest", collection : "col1"}, actions : ["find"] }])

To view the permissions for that role:

> db.getRole("myReadOnlyRole", { showPrivileges : true })

Resources:

IAM Policy to Allow Team Wide and User Level Permissions on AWS Secrets Manager

In this post we will simulate a scenario where a team wants to create secrets under team path names like /security-team/prod/* and /security-team/dev/*, allowing every user in the team to write and read secrets from those paths. Each individual user then creates and reads secrets from their own isolated path, /security-team/personal/aws-username/*, for their personal secrets.
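To make the path convention concrete, here is a small pure-Python sketch (the helper names are hypothetical, not part of any AWS SDK) that computes which secret name prefixes a given IAM user should be able to use under this scheme:

```python
# Team-wide prefixes every team member may use
TEAM_PREFIXES = ["/security-team/prod/", "/security-team/dev/"]

def allowed_prefixes(username):
    # Team-wide paths plus the user's own personal path
    return TEAM_PREFIXES + ["/security-team/personal/{}/".format(username)]

def may_access(username, secret_name):
    # True if the secret name falls under one of the user's allowed prefixes
    return any(secret_name.startswith(p) for p in allowed_prefixes(username))
```

For example, may_access('jack.smith', '/security-team/personal/jack.smith/svc1/password') is True, while the same path checked for steve.adams is False — which is exactly the behavior the IAM policy below enforces with the ${aws:username} variable.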

Our Scenario:

  • Create IAM Policy
  • Create 2 IAM Users: jack.smith and steve.adams
  • Create IAM Group, Associate IAM Policy to the Group
  • Attach 2 Users to the Group

The IAM Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1541597166491",
            "Action": [
                "secretsmanager:CreateSecret",
                "secretsmanager:DeleteSecret",
                "secretsmanager:DescribeSecret",
                "secretsmanager:GetRandomPassword",
                "secretsmanager:GetSecretValue",
                "secretsmanager:ListSecretVersionIds",
                "secretsmanager:ListSecrets",
                "secretsmanager:PutSecretValue",
                "secretsmanager:TagResource",
                "secretsmanager:UpdateSecret"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/prod/*",
                "arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/dev/*",
                "arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/personal/${aws:username}/*"
            ]
        }
    ]
}
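If you manage this policy for several accounts or regions, it can be handy to generate it. Below is a sketch that renders the same document for a given account id and region (the function name and the Sid value are my own choices, not from the original post):

```python
import json

def secrets_policy(account_id, region, team="security-team"):
    # Common ARN prefix for this team's secrets
    arn = "arn:aws:secretsmanager:{}:{}:secret:/{}".format(region, account_id, team)
    actions = [
        "secretsmanager:CreateSecret", "secretsmanager:DeleteSecret",
        "secretsmanager:DescribeSecret", "secretsmanager:GetRandomPassword",
        "secretsmanager:GetSecretValue", "secretsmanager:ListSecretVersionIds",
        "secretsmanager:ListSecrets", "secretsmanager:PutSecretValue",
        "secretsmanager:TagResource", "secretsmanager:UpdateSecret",
    ]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "TeamSecretsAccess",
            "Action": actions,
            "Effect": "Allow",
            "Resource": [
                arn + "/prod/*",
                arn + "/dev/*",
                # IAM substitutes the calling user's name at evaluation time
                arn + "/personal/${aws:username}/*",
            ],
        }],
    }

print(json.dumps(secrets_policy("123456789012", "eu-west-1"), indent=4))
```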

Either configure the access and secret keys in the credential provider using the AWS CLI, or, as I will do for this demonstration, pass them inside the code. But never hardcode your credentials in production.

Create Secrets with Secrets Manager in AWS using Python Boto3

Instantiate a client for each of our two users, jack and steve:

>>> import boto3
>>> jack = boto3.Session(aws_access_key_id='ya', aws_secret_access_key='xx', region_name='eu-west-1').client('secretsmanager')
>>> steve = boto3.Session(aws_access_key_id='yb', aws_secret_access_key='xx', region_name='eu-west-1').client('secretsmanager')

Create a team wide secret with jack:

>>> jack.create_secret(Name='/security-team/prod/app1/username', SecretString='appreader')
{'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': 'x', 'HTTPHeaders': {'date': 'Thu, 08 Nov 2018 07:50:35 GMT', 'x-amzn-requestid': 'x', 'content-length': '193', 'content-type': 'application/x-amz-json-1.1', 'connection': 'keep-alive'}}, u'VersionId': u'x', u'Name': u'/security-team/prod/app1/username', u'ARN': u'arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/prod/app1/username-12ABC00'}

Let jack and steve try to read the secret:

>>> jack.get_secret_value(SecretId='/security-team/prod/app1/username')['SecretString']
'appreader'
>>> steve.get_secret_value(SecretId='/security-team/prod/app1/username')['SecretString']
'appreader'

Now let jack create a personal secret, let him read it:

>>> jack.create_secret(Name='/security-team/personal/jack.smith/svc1/password', SecretString='secret')
>>> jack.get_secret_value(SecretId='/security-team/personal/jack.smith/svc1/password')['SecretString']
'secret'

Now let steve try to read the secret and you will see that access is denied:

>>> steve.get_secret_value(SecretId='/security-team/personal/jack.smith/svc1/password')['SecretString']
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
...
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDeniedException) when calling the GetSecretValue operation: User: arn:aws:iam::123456789012:user/steve.adams is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:eu-west-1:123456789012:secret:/security-team/personal/jack.smith/svc1/password-a1234b

That's it for this post.

Get Application Performance Metrics on Python Flask With Elastic APM on Kibana and Elasticsearch

In this post we will set up a Python Flask application that includes the APM agent, which collects metrics that get pushed to the APM Server. If you have not yet set up the Elastic Stack and/or the APM Server, you can follow this post to set up what is needed.

Then we will make a bunch of HTTP Requests to our Application and will go through the metrics per request type.

Application Metrics

Our Application will have the following Request Paths:

  • / - Returns static text
  • /delay - random delays to simulate increased response latencies
  • /upstream - get data from an upstream provider, with if statements to produce dummy 200, 404 and 502 responses to visualize
  • /5xx - request path that raises an exception so that we can see the error via APM
  • /sql-write - inserts 5 rows into a sqlite database
  • /sql-read - executes a select all from the database
  • /sql-group - sql query to group all the cities and count them

These are just simple request paths to demonstrate the metrics via APM (Application Performance Monitoring) on Kibana.

Install Flask and APM Agent

Create a virtual environment and install the dependencies:

$ apt install python python-setuptools -y
$ easy_install pip
$ pip install virtualenv
$ virtualenv .venv
$ source .venv/bin/activate
$ pip install elastic-apm[flask]
$ pip install flask

For more info, have a look at the APM Configuration documentation.

Instrument a Bare Bones Python Flask app with APM:

A barebones app with APM configured will look like this:

from flask import Flask, jsonify
from elasticapm.contrib.flask import ElasticAPM

app = Flask(__name__)
apm = ElasticAPM(app, server_url='http://localhost:8200', service_name='flask-app-1', logging=True)

@app.route('/')
def index():
    return jsonify({"message": "response ok"}), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)

This will provide metrics on the / request path. In order to trace transaction ids from the metrics, we need to configure the index on Kibana. To do this, head over to Kibana, Management, Index Patterns, Add Index Pattern, apm*, select @timestamp as the time filter field name.

This will allow you to see the data when tracing the transaction ids via the Discover UI.

Create the Python Flask App

Create the Flask App with the request paths as mentioned in the beginning:

import sqlite3, requests, time, random
from flask import Flask, jsonify
from elasticapm.contrib.flask import ElasticAPM

names = ['ruan', 'stefan', 'philip', 'norman', 'frank', 'pete', 'johnny', 'peter', 'adam']
cities = ['cape town', 'johannesburg', 'pretoria', 'dublin', 'kroonstad', 'bloemfontein', 'port elizabeth', 'auckland', 'sydney']
lastnames = ['smith', 'bekker', 'admams', 'phillips', 'james', 'adamson']

conn = sqlite3.connect('database.db')
conn.execute('CREATE TABLE IF NOT EXISTS people (name STRING, age INTEGER, surname STRING, city STRING)')
conn.commit()
seconds = [0.002, 0.003, 0.004, 0.01, 0.3, 0.2, 0.009, 0.015, 0.02, 0.225, 0.009, 0.001, 0.25, 0.030, 0.018]

app = Flask(__name__)
apm = ElasticAPM(app, server_url='http://localhost:8200', service_name='my-app-01', logging=False)

@app.route('/')
def index():
    return jsonify({"message": "response ok"})

@app.route('/delay')
def delay():
    time.sleep(random.choice(seconds))
    return jsonify({"message": "response delay"})

@app.route('/upstream')
def upstream():
    # dummy 200, 502 and 404 responses based on the upstream payload
    r = requests.get('https://api.ruanbekker.com/people').json()
    if r.get('country') == 'italy':
        return 'Italia!', 200
    elif r.get('country') == 'canada':
        return 'Canada!', 502
    else:
        return 'Not Found', 404

@app.route('/5xx')
def fail_with_5xx():
    # intentionally raises a TypeError (str + int) so APM captures an error
    value = 'a' + 1
    return jsonify({"message": value})

@app.route('/sql-write')
def sqlw():
    conn = sqlite3.connect('database.db')
    # insert 5 random rows using a parameterized query
    rows = [
        (random.choice(names), random.randint(18, 40), random.choice(lastnames), random.choice(cities))
        for _ in range(5)
    ]
    conn.executemany('INSERT INTO people VALUES (?, ?, ?, ?)', rows)
    conn.commit()
    conn.close()
    return 'ok', 200

@app.route('/sql-read')
def sqlr():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute('select * from people')
    rows = cur.fetchall()
    conn.close()
    return 'ok', 200

@app.route('/sql-group')
def sqlg():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute('select count(*) as num, city from people group by city')
    rows = cur.fetchall()
    conn.close()
    return 'ok', 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)

Run the app:

$ python app.py

At this point, we won't have any data on APM, as we first need to make requests to our application. Let's make 10 HTTP GET requests on the / request path:

$ count=0 && while [ $count -lt 10 ]; do curl http://application-routable-address:80/; sleep 1; count=$((count+1)); done

Visualize the Root Request Path

Head over to Kibana, select APM, and with the time picker at the top right corner set to the last 15 minutes you will see something similar to the screenshot below. This page gives you an overview of all your configured applications: the average response times over the selected window, transactions per minute, errors per minute, etc.:

When you select your application, you will find graphs of your response times and requests per minute, as well as a breakdown per request path:

When selecting a request path, in this case GET /, you will find a breakdown of metrics only for that request, as well as the response time distribution for that request path. If you select a frame from the response time distribution, it will narrow the focus to that specific transaction.

When you scroll a bit down to the Transaction Sample section, you will find data about the request, response, system etc:

From the Transaction Sample, you can select the View Transaction in Discover button, which will trace that transaction id on the Discover UI:

Increasing the number of curl clients running simultaneously from different servers, and extending the time window to 15 minutes to gather more metrics, results in the screenshot below. Notice that the 6ms response time can easily be traced by selecting it in the response time distribution and then viewing it in the Discover UI, which gives you the raw data from that request:

Viewing Application Errors in APM

Make a couple of requests to /5xx:

$ curl http://application-routable-endpoint:80/5xx

Navigate to the app and select Errors, and you will see the exception details that were returned. Here we can see that in our code we tried to concatenate an integer with a string:

Furthermore, we can select that error, and it will give us a direct view of where in our code the error was raised:

Pretty cool, right?! You can also select the library frames, which will take you to the lower-level code that raised the exception. These errors can be drilled down further via the Discover UI, to group by source address, etc.

Simulate Response Latencies:

Make a couple of requests to the /delay request path, and you should see the increased response times from earlier:

Requests where Database Calls are Executed

The while loop to call random request paths:

count=0 && while [ $count -lt 1000 ];
do
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-read;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-group;
  curl -H "Host: my-eu-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-us-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-read;
  curl -H "Host: my-eu-server" -i http://x.x.x.x/sql-group;
  curl -H "Host: my-us-server" -i http://x.x.x.x/sql-group;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-write;
  curl -H "Host: my-eu-server" -i http://x.x.x.x/sql-group;
  curl -H "Host: my-za-server" -i http://x.x.x.x/sql-group;
  count=$((count+1));
done

When we look at our application's performance monitoring overview, we can see that the writes show higher latencies than the group-bys:

The /sql-write request overview:

When selecting a transaction sample, we can see the timeline of each database call:

When looking at the /sql-group request overview, we can see that the response times increase over time: as more data is written to the database, it takes longer to read and group all the data:

The transaction details shows the timeline of the database query from that request:

When you select the database select query on the timeline view, it will take you to the exact database query that was executed:

When we include a database call together with an external request to a remote HTTP endpoint, we will see something like this:

Viewing 4xx and 5xx Response Codes

From the application code we are returning 2xx, 4xx, and 5xx response codes for this demonstration to visualize them:

Configuring more Applications

Once more apps are configured, and they start serving traffic, they will start appearing on the APM UI as below:

APM is available for other languages as well and provides getting-started snippets in the APM UI. For more information on APM, have a look at their documentation.

Hope this was useful.