In this section we will cover our environment setup, where I will show you how to set up a typical Python Flask environment using virtualenv.
What is VirtualEnv?
Virtualenv allows you to have isolated Python environments, where each project or environment can have its own package versions. Some applications need a specific version of a certain package, so if you are running multiple applications on one server, managing each one's dependencies can be a pain: you may run into scenarios where applications depend on specific versions, leaving you to upgrade and downgrade packages like nobody's business.
Luckily, with the help of virtualenv, each environment is isolated from the others: system-wide you might be running Python 2.7 with minimal packages installed, while in a virtual environment you can run Python 3 with the packages needed for the application you are developing.
Setup a Virtual Environment:
We will set up a virtualenv for our project with our default Python version, which in this case is 2.7:
$ mkdir ~/projects/mywebapp
$ cd ~/projects/mywebapp
$ virtualenv .venv
At this moment you should have your virtual environment ready; now to enter and activate our environment:
$ source .venv/bin/activate
To confirm your python version:
$ python --version
Python 2.7.6
If you have multiple versions of python, you can create your virtual environment with a different python version by using the -p flag, as in:
$ virtualenv -p /usr/local/bin/python2.7 .venv
Now that we are in our virtualenv, let's install 2 packages, Flask and Requests:
$ pip install flask
$ pip install requests
With pip we can list the installed packages with pip freeze. Since this is our virtual environment, we will only see the packages that were installed into this environment:

$ pip freeze
We can dump this to a file, which we can later use to install packages from a list so that we don’t have to specify them manually. We can dump them by doing this:
$ pip freeze > requirements.txt
Now let's say you are on a different host and you would like to install the packages from the requirements.txt file; we do this by using the following command:

$ pip install -r requirements.txt
Today we will set up a Serverless URL Shortener using API Gateway, Lambda with Python, and DynamoDB.
Overview
The service that we will be creating will shorten URLs via our API, which will create an entry in DynamoDB. When a GET method is performed on the shortened URL, a GetItem is executed on DynamoDB to fetch the long URL, and a 301 redirect is performed to send the client to the intended destination URL.
Note, I am using a domain name which is quite long, but it's only for demonstration; if you can get hold of a short domain like t.co, that will make your shortened URLs really short in character count.
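The IAM policy creation step is referenced below but not shown in this snippet; a minimal policy granting the Lambda functions access to the table could look something like the following sketch (the account id in the ARN is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/url-shortener-table"
    }
  ]
}
```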
Head over to IAM Roles, select Create Role, select Lambda as the Trusted Entity from the AWS Service section, proceed to permissions and select the IAM policy that was created (in my case lambda-dynamodb-url-shortener) as well as the AWSLambdaBasicExecutionRole policy. Give your role a name, in my case lambda-dynamodb-url-shortener-role.
DynamoDB Table
Next, head over to DynamoDB and create a table, in my case with the table name url-shortener-table and the primary key short_id set to String:
Lambda Functions
Once the table is created, head over to Lambda and create a Lambda function, in my case using Python 3.6. Provide a name (I used url-shortener-create) and select the IAM role that we created previously; this function will be the Lambda function that creates the shortened URLs:
The code for your Lambda function will take care of creating the short URLs and saving them to DynamoDB; take note of the region and table name to ensure that they match your setup:
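The create function's code did not survive in this snippet, so the sketch below is a hypothetical reconstruction based on the description here: the region, table name, attribute names, and the min_char/max_char/app_url environment variables are all assumptions taken from the surrounding text and screenshots, not the original code.

```python
import os
import json
import random
import string

# environment variable names assumed from the screenshots described below
MIN_CHAR = int(os.environ.get('min_char', 12))
MAX_CHAR = int(os.environ.get('max_char', 16))
APP_URL = os.environ.get('app_url', 'https://tiny.myserverlessapp.net/t/')

def generate_short_id():
    # random alphanumeric string of a random length between min and max chars,
    # used as the unique short_id key in DynamoDB
    length = random.randint(MIN_CHAR, MAX_CHAR)
    return ''.join(random.choices(string.ascii_letters + string.digits, k=length))

def lambda_handler(event, context):
    # boto3 ships with the Lambda runtime; imported here so the id-generation
    # logic above can be exercised without AWS access
    import boto3
    ddb = boto3.resource('dynamodb', region_name='eu-west-1').Table('url-shortener-table')
    long_url = json.loads(event.get('body')).get('long_url')
    short_id = generate_short_id()
    ddb.put_item(Item={'short_id': short_id, 'long_url': long_url, 'hits': 0})
    return {
        'statusCode': 200,
        'body': json.dumps({'short_url': APP_URL + short_id})
    }
```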
Set a couple of environment variables that will be used in our function: min and max chars from the screenshot below is the number of characters that will be used, in a random manner, to make the short id unique. The app_url will be your domain name, as this will be returned to the client with the short id, e.g. https://tiny.myserverlessapp.net/t/3f8Hf38n398t :
While you are on Lambda, create the function that will retrieve the long url, in my case url-shortener-retrieve:
import os
import json
import boto3

ddb = boto3.resource('dynamodb', region_name='eu-west-1').Table('url-shortener-table')

def lambda_handler(event, context):
    short_id = event.get('short_id')
    try:
        item = ddb.get_item(Key={'short_id': short_id})
        long_url = item.get('Item').get('long_url')
        # increase the hit number on the db entry of the url (analytics?)
        ddb.update_item(
            Key={'short_id': short_id},
            UpdateExpression='set hits = hits + :val',
            ExpressionAttributeValues={':val': 1}
        )
    except:
        return {
            'statusCode': 301,
            'location': 'https://objects.ruanbekker.com/assets/images/404-blue.jpg'
        }
    return {
        "statusCode": 301,
        "location": long_url
    }
API Gateway
Head over to API Gateway and create your API, in my case url-shortener-api
Head over to Resources:
and create a new resource called /create:
Once the resource is created, create a post method on the create resource and select Lambda as the integration type and lambda proxy integration as seen below:
Once you save it, it will ask you to give API Gateway permission to invoke your Lambda function, which you can accept by hitting OK as below:
When you look at the POST method on your create resource, it should look like this:
Select the root resource / and from Actions create a new resource /t:
Select the /t resource and create a new resource named shortid and provide {shortid} in the resource path as this will be the data that will be proxied through to our lambda function:
Create a GET method on the /t/{shortid} resource and select url-shortener-retrieve lambda function as the function from the lambda integration selection as seen below:
Again, grant api gateway permission to invoke your function:
When you select the GET method, it should look like this:
Select the Integration Request and head over to Mapping Templates:
From the Request body passthrough, add a mapping template application/json and provide the following mapping template:
{
    "short_id": "$input.params('shortid')"
}
On the Method Response:
Delete the 200 HTTP Status Response and create a new response by “Add Response”, add 301 HTTP Status, add Location Header to the response.
Navigate to the Integration Response from the /{shortid} GET method:
delete the 200 HTTP Response, add an "integration response", set the method response status to 301 and add a header mapping for Location to integration.response.body.location as below:
Make sure to select the integration response so that the method response reflects the 301:
Navigate to Actions and select “Deploy API”, select your stage, in my case test and deploy:
Go to stages, select your stage, select the post request to reveal the API URL:
At this moment we don't have our domain connected to our API Gateway, and we would also want a certificate on our application; we can use ACM to request a certificate that can be associated with our domain. To do that, first request a certificate in ACM: select Request a certificate, create a wildcard entry *.yourdomain.com, and select DNS validation (if you host with Route 53, you get the option to create the validation record directly).
Head back to API Gateway to associate the Domain and ACM Certificate to our API:
From the “Custom Domain Names” section, create a custom domain name. Once you select regional, it will provide a target domain name, which resolves to the API endpoint that was created. From the “Base Path Mappings” section, select / as the path to your API stage, in my case url-shortener-api:test:
Route 53
Last part is to create a Route53 entry for tiny.yourdomain.com to resolve to the CNAME value of the target domain name that was provided in the custom domain names section:
Demo the URL Shortener Service:
Once everything is setup we can test, by creating a Shortened URL:
$ curl -ivL https://tiny.myserverlessapp.net/t/p7ISNcxTByXhN
*   Trying 34.226.10.0...
* TCP_NODELAY set
* Connected to tiny.myserverlessapp.net (34.226.10.0) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.myserverlessapp.net
* Server certificate: Amazon
* Server certificate: Amazon Root CA 1
* Server certificate: Starfield Services Root Certificate Authority - G2
> GET /t/p7ISNcxTByXhN HTTP/1.1
> Host: tiny.myserverlessapp.net
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Tue, 29 Nov 2018 00:05:02 GMT
< Content-Type: application/json
< Content-Length: 77
< Connection: keep-alive
< x-amzn-RequestId: f79048c8-cb56-41e8-b21d-b45fac47453a
< x-amz-apigw-id: OeKPHH7_DoEFdjg=
< Location: https://www.google.com/search?q=helloworld
At this moment our API is open to the world, which is probably not the best, as everyone will be able to shorten URLs. You can check out the Set Up API Keys Using the API Gateway Console documentation on how to secure your application by utilizing an API key, which can be included in your request headers when shortening URLs.
For a bit of housekeeping, you can implement TTL on DynamoDB so that old items expire, which helps keep your DynamoDB table from growing to large amounts of storage; have a look at the post on Delete Old Items with Amazon's DynamoDB TTL Feature to implement that.
Thank You
Please feel free to show support by sharing this post, making a donation, or subscribing, or reach out to me if you want me to demo and write up any specific tech topic.
Flask is a Micro Web Framework which is written in Python and is based on the Werkzeug Toolkit and the Jinja2 Template Engine.
Flask is super lightweight and you import the modules as you need them; from some research, some say that Flask is more designed for smaller applications, whereas Django is designed for larger applications.
A good read on the [Differences and Performance Comparison]. With that being said, if you are planning for scale, I am pretty sure that Flask can handle big applications, but it probably depends on what your application is doing. More detailed discussion on Reddit.
Hello World in Python Flask
In this post we will be creating a “Hello, World” application to demonstrate how easy it is to run a Flask application.
The only requirement to run this app is to have Python and pip installed, so that we can install the Flask package.
Creating your Traditional Hello World App
We will install Flask globally, but a future post will cover how to set up a virtual environment for your application. Install the Flask package:

$ pip install flask
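The application code itself did not survive in this snippet, so here is a minimal sketch matching the walkthrough that follows; the view function name is an assumption, everything else follows the description below:

```python
from flask import Flask

# instantiate the application from the Flask class, using our module's name
app = Flask(__name__)

# Flask uses decorators for URL routing: '/' is mapped to the view function below
@app.route('/')
def hello():
    return 'Hello, World!'

# to start the development server, save this as app.py, append the line below
# (uncommented) and run `python app.py`:
# app.run(host='127.0.0.1', port=5000, debug=True)
```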
We can see that our application is running on 127.0.0.1 and listening on port: 5000, if you point your browser to this URL, you will be returned with: Hello, World!
First, we imported the Flask class from the flask module, using: from flask import Flask
Then we instantiate our application from the Flask class: app = Flask(__name__) using our module’s name as a parameter, where our app object will use this to resolve resources. We are using __name__ , which links our module to our app object.
Next up we have the @app.route('/') decorator. Flask uses decorators for URL Routing.
Below our decorator, we have a view function, this function will be executed when the / route gets matched, in this case returning Hello, World!
The last line starts our server; from this example it runs locally on 127.0.0.1 on port 5000 with debug enabled, so any error details are logged directly in the browser. This is only recommended for test/dev and not for production, as it can make your service vulnerable to attackers.
Let’s Extend our Hello World App
We would like to add the route ‘/movie’ which will return a random movie name:
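A sketch of the extended app could look like this; the movie titles in the list are made up for illustration:

```python
import random
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello, World!'

# hypothetical list of movie names to pick from
movies = ['The Matrix', 'Inception', 'Interstellar', 'Gladiator']

# new route: '/movie' returns a random movie name on each request
@app.route('/movie')
def movie():
    return random.choice(movies)

# start the dev server as before with:
# app.run(host='127.0.0.1', port=5000, debug=True)
```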
This post is the index for all the posts that will be covered in our Python Flask Tutorial Series:
What will be covered
This is intended for people starting out with Python Flask; the basics of using Flask will be covered so that you can get familiar with the framework.
In this post we will set up Postfix to relay mail through SendGrid, and we will also configure authentication, as SendGrid is not an open relay; you can obtain credentials by signing up with them for a free account to get the username and password that will be used to relay mail through them.
Access Control on Postfix
For this demonstration we can make use of the mynetworks configuration to specify the CIDRs of the sources from which we want to allow clients to relay. This is an acceptable way of controlling which source addresses you would like to authorize to relay mail via your SMTP relay server.
Sendgrid
SendGrid offers 100 free outbound emails per day; sign up with them via sendgrid.com/free, create an API key, and save your credentials in a safe place.
You first need to verify your account by sending a mail using their API, but it’s step by step so won’t take more than 2 minutes to complete.
Setup Postfix
I will be using Ubuntu to set up Postfix, configure SendGrid as the relayhost, and configure the authentication for the destination server in question:
$ apt install postfix libsasl2-modules -y
Configure Postfix to relay all outbound mail via SendGrid; enable SASL auth, TLS, the relayhost, etc. via /etc/postfix/main.cf. The settings that need to be set/configured:
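The list of settings did not make it into this snippet; based on SendGrid's SMTP relay requirements, the relevant main.cf parameters would look more or less like the following (verify against SendGrid's current documentation before relying on them):

```
relayhost = [smtp.sendgrid.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
header_size_limit = 4096000
mynetworks = /etc/postfix/mynetworks
```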
Create the /etc/postfix/mynetworks file where the whitelisted source addresses will be specified, in our case the loopback address and the Class C subnet 10.0.1.0:
127.0.0.1/32
10.0.1.0/24
Create the credential file where the credentials for the sendgrid service will be stored, in my case it will be in /etc/postfix/sasl_passwd:
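The content of the credentials file is not shown in this snippet; with SendGrid the SMTP username is literally apikey and the password is your API key (a placeholder below):

```
[smtp.sendgrid.net]:587 apikey:your-sendgrid-api-key
```

After creating the file, hash it with postmap /etc/postfix/sasl_passwd so Postfix can read it, and restrict its permissions (chmod 600) so the key is not world-readable.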
From the server you can test your mail delivery by sending a mail:
$ echo "the body of the mail" | mail -r user@authenticated-domain.com -s "my subject" recipient-mail@mydomain.com
or using telnet for a remote system:
$ telnet smtp-server.ip 25
helo admin
mail from: me@mydomain.com
rcpt to: recipient-main@mydomain.com
DATA
Subject: This is a test
From: James John <me@mydomain.com>
To: Peter Smith <recipient-mail@mydomain.com>
.
ctrl + ], then q
You can monitor /var/log/maillog to see log messages of your email.
Create server.go, or any filename of your choice. Note: I used 2 ways of returning the content of the HTTP response, for demonstration.
package main

import (
    "io"
    "log"
    "net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/plain; charset=utf-8")
    w.WriteHeader(http.StatusOK)
    w.Write([]byte("Hello, World!" + "\n"))
    log.Println("hello function handler was executed")
}

func goodbye(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/plain; charset=utf-8")
    w.WriteHeader(http.StatusOK)
    io.WriteString(w, "Cheers!"+"\n")
    log.Println("goodbye function handler was executed")
}

func main() {
    http.HandleFunc("/", hello)
    http.HandleFunc("/cheers", goodbye)
    http.ListenAndServe(":8000", nil)
}
Explanation of what we are doing:
The program runs in the package main
We are importing 3 packages: io, log and net/http
HandleFunc registers the handler function for the given pattern in the DefaultServeMux, in this case the HandleFunc registers / to the hello handler function and /cheers to the goodbye handler function.
In our 2 handler functions, we have two arguments:
The first one is http.ResponseWriter and its corresponding response stream, which is actually an interface type.
The second is *http.Request and its corresponding HTTP request. io.WriteString is a helper function to let you write a string into a given writable stream, this is named the io.Writer interface in Golang.
ListenAndServe starts an HTTP server with a given address and handler. The handler is usually nil, which means to use DefaultServeMux
The logging is not a requirement, but I used it for debugging/verbosity
Running our Server:
Run the http server:
$ go run server.go
Client Side Requests:
Run client side http requests to your golang web server:
$ curl -i http://localhost:8000/
HTTP/1.1 200 OK
Content-Type: text/plain;charset=utf-8
Date: Wed, 21 Nov 2018 21:33:42 GMT
Content-Length: 14
Hello, World!
And another request to /cheers:
$ curl -i http://localhost:8000/cheers
HTTP/1.1 200 OK
Content-Type: text/plain;charset=utf-8
Date: Wed, 21 Nov 2018 21:29:46 GMT
Content-Length: 8
Cheers!
Server Side Output:
As we used the log package, the logging gets returned to stdout:
$ go run server.go
2018/11/21 23:29:36 hello function handler was executed
2018/11/21 23:29:46 goodbye function handler was executed
In this post I will demonstrate how to setup 2 read only users in MongoDB, one user that will have access to one MongoDB Database and all the Collections, and one user with access to one MongoDB Database and only one Collection.
First Method: Creating and Assigning the User
With the first method, we will create the user and assign it the read permissions that it needs; in this case, read-only access to the mytest db.
First logon to mongodb and switch to the admin database:
$ mongo -u dbadmin -p --authenticationDatabase admin
> use admin
switched to db admin
Now list the dbs:
> show dbs
admin 0.000GB
mytest 0.000GB
List the collections and read the data from it for demonstration purposes:
> use mytest
> show collections
col1
col2
> db.col1.find()
{ "_id" : ObjectId("5be3d377b54849bb738e3b6b"), "name" : "ruan" }
> db.col2.find()
{ "_id" : ObjectId("5be3d383b54849bb738e3b6c"), "name" : "stefan" }
Now create the user collectionreader that will have read access to all the collections in the database:
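The createUser command is missing from this snippet; from the mongo shell it would look something along these lines (the password is a placeholder):

```
> use mytest
> db.createUser({user: "collectionreader", pwd: "secret", roles: [{role: "read", db: "mytest"}]})
```

The built-in read role grants find access on all non-system collections of the named database, which covers col1 and col2 above.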
Second Method: Create Roles and Assign Users to the Roles
In the second method, we will create the roles and then assign the users to the roles. In this scenario, we will only grant a user read access to one collection in a database. Login with the admin user:
$ mongo -u dbadmin -p --authenticationDatabase admin
> use admin
In this post we will simulate a scenario where a team would like to have access to create secrets under a team path name like /security-team/prod/* and /security-team/dev/* and allow all the users from that team to be able to write and read secrets from that path. Then have individual users create and read secrets from their own isolated path: /security-team/personal/aws-username/* so they can create their personal secrets.
Our Scenario:
Create IAM Policy
Create 2 IAM Users: jack.smith and steve.adams
Create IAM Group, Associate IAM Policy to the Group
Either configure the access keys and secret keys in the credential provider using the AWS CLI, or, as I do for this demonstration, use them inside the code. But never hardcode your credentials in real applications.
Create Secrets with Secrets Manager in AWS using Python Boto3
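The Boto3 code is not included in this snippet; below is a hedged sketch of creating and reading a secret with Secrets Manager. The secret name, region, and payload shape are assumptions for illustration, and the helper names are my own:

```python
import json

def build_secret_payload(username, password):
    # the SecretString stored in Secrets Manager is just a JSON document
    return json.dumps({'username': username, 'password': password})

def create_and_read_secret(name, payload, region='eu-west-1'):
    # requires valid AWS credentials in your environment - never hardcode them
    import boto3
    client = boto3.client('secretsmanager', region_name=region)
    client.create_secret(Name=name, SecretString=payload)
    response = client.get_secret_value(SecretId=name)
    return json.loads(response['SecretString'])
```

Calling create_and_read_secret('security-team/personal/jack.smith/db', build_secret_payload('jack.smith', 'p@ss')) would store the secret under the personal path described in the scenario above.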
In this post we will set up a Python Flask application which includes the APM agent, which collects metrics that get pushed to the APM server. If you have not set up the Elastic Stack and/or APM Server, you can follow this post to set up what is needed.
Then we will make a bunch of HTTP Requests to our Application and will go through the metrics per request type.
Application Metrics
Our Application will have the following Request Paths:
/ - Returns static text
/delay - random delays to simulate increased response latencies
/upstream - gets data from an upstream provider, with if statements to provide dummy 200, 404 and 502 responses to visualize
/5xx - request path that will raise an exception so that we can see the error via apm
/sql-write - inserts 5 rows into a sqlite database
/sql-read - executes a select all from the database
/sql-group - sql query to group all the cities and count them
These are just simple request paths to demonstrate the metrics via APM (Application Performance Monitoring) on Kibana.
Install Flask and APM Agent
Create a virtual environment and install the dependencies:

$ virtualenv .venv
$ source .venv/bin/activate
$ pip install flask requests elastic-apm[flask]
This will provide metrics on the / request path. In order to trace transaction ids from the metrics, we need to configure the index on Kibana. To do this, head over to Kibana, Management, Index Patterns, Add Index Pattern, apm*, select @timestamp as the time filter field name.
This will allow you to see the data when tracing the transaction id’s via the Discover UI.
Create the Python Flask App
Create the Flask App with the request paths as mentioned in the beginning:
import sqlite3, requests, time, logging, random
from flask import Flask, jsonify
from elasticapm.contrib.flask import ElasticAPM
from elasticapm.handlers.logging import LoggingHandler

names = ['ruan', 'stefan', 'philip', 'norman', 'frank', 'pete', 'johnny', 'peter', 'adam']
cities = ['cape town', 'johannesburg', 'pretoria', 'dublin', 'kroonstad', 'bloemfontein', 'port elizabeth', 'auckland', 'sydney']
lastnames = ['smith', 'bekker', 'admams', 'phillips', 'james', 'adamson']

conn = sqlite3.connect('database.db')
conn.execute('CREATE TABLE IF NOT EXISTS people (name STRING, age INTEGER, surname STRING, city STRING)')

seconds = [0.002, 0.003, 0.004, 0.01, 0.3, 0.2, 0.009, 0.015, 0.02, 0.225, 0.009, 0.001, 0.25, 0.030, 0.018]

app = Flask(__name__)
apm = ElasticAPM(app, server_url='http://localhost:8200', service_name='my-app-01', logging=False)

@app.route('/')
def index():
    return jsonify({"message": "response ok"})

@app.route('/delay')
def delay():
    time.sleep(random.choice(seconds))
    return jsonify({"message": "response delay"})

@app.route('/upstream')
def upstream():
    r = requests.get('https://api.ruanbekker.com/people').json()
    if r.get('country') == 'italy':
        return 'Italalia!', 200
    elif r.get('country') == 'canada':
        return 'Canada!', 502
    else:
        return 'Not Found', 404

@app.route('/5xx')
def fail_with_5xx():
    # intentionally raises a TypeError so the error is visible in APM
    value = 'a' + 1
    return jsonify({"message": value})

@app.route('/sql-write')
def sqlw():
    conn = sqlite3.connect('database.db')
    # insert 5 random rows per request
    for _ in range(5):
        conn.execute('INSERT INTO people VALUES("{}", "{}", "{}", "{}")'.format(random.choice(names), random.randint(18, 40), random.choice(lastnames), random.choice(cities)))
    conn.commit()
    conn.close()
    return 'ok', 200

@app.route('/sql-read')
def sqlr():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute('select * from people')
    rows = cur.fetchall()
    conn.close()
    return 'ok', 200

@app.route('/sql-group')
def slqg():
    conn = sqlite3.connect('database.db')
    conn.row_factory = sqlite3.Row
    cur = conn.cursor()
    cur.execute('select count(*) as num, city from people group by city')
    rows = cur.fetchall()
    conn.close()
    return 'ok', 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
Run the app:
$ python app.py
At this point, we won't have any data in APM, as we need to make requests to our application. Let's make 10 HTTP GET requests to the / request path:

$ for i in {1..10}; do curl -s http://localhost/ > /dev/null; done
Head over to Kibana, select APM, and you will see something similar to the below when setting the time picker to 15 minutes at the top right corner. This page gives you an overview of all your configured applications: the average response times over the selected window, transactions per minute, errors per minute, etc.:
When you select your application, you will find graphs of your response times and requests per minute, as well as a breakdown per request path:
When selecting the request path, in this case GET /, you will find a breakdown of metrics only for that request, as well as the response time distribution for that request path; if you select a frame from the response time distribution, it will filter the focus to that specific transaction.
When you scroll a bit down to the Transaction Sample section, you will find data about the request, response, system etc:
From the Transaction Sample, you can select the View Transaction in Discover button, which will trace that transaction id on the Discover UI:
Increasing the number of HTTP curl clients running simultaneously from different servers, and extending the window to 15 minutes to gather more metrics, results in the screenshot below. Notice the 6ms response time can easily be traced by selecting it in the response time distribution and then discovering it in the UI, which gives you the raw data for that request:
Navigate to the app and select Errors, and you will see the exception details that were returned. Here we can see that in our code we tried to concatenate integers with strings:
Furthermore we can select that error and it will provide us a direct view on where in our code the error gets generated:
Pretty cool, right?! You can also select the library frames, which will take you to the lower-level code that raised the exception. These errors can also be drilled down into via the Discover UI, to group by source address, etc.
Simulate Response Latencies:
Make a couple of requests to the /delay request path, and you should see the increased response times from earlier:
When we look at our application's performance monitoring overview, we can see the writes show higher latencies than the group-by queries:
The /sql-write request overview:
When selecting a transaction sample, we can see the timeline of each database call:
When looking at the /sql-group request overview, we can see that the response times increase over time: as more data is written to the database, it takes longer to read and group all the data:
The transaction details shows the timeline of the database query from that request:
When you select the database select query on the timeline view, it will take you to the exact database query that was executed:
When we include a database call together with an external request to a remote HTTP endpoint, we will see something like:
Viewing 4xx and 5xx Response Codes
From the application code we are returning 2xx, 4xx, and 5xx response codes for this demonstration to visualize them:
Configuring more Applications
Once more apps are configured, and they start serving traffic, they will start appearing on the APM UI as below:
APM is available for other languages as well and provides getting-started snippets from the APM UI. For more information on APM, have a look at the Documentation.