Ruan Bekker's Blog

From a Curious mind to Posts on Github

Setup a Gitlab Runner on Your Own Server to Run Your Jobs That Get Triggered From Gitlab CI

In our previous post, we went through setting up a basic CI pipeline on Gitlab in conjunction with Gitlab CI, which coordinates your jobs; there we used the Shared Runners, which run your jobs on Gitlab’s infrastructure.

In Gitlab you have Shared Runners and your own runners, which are used to run your jobs and send the results back to Gitlab.

In this tutorial we will set up a server with gitlab-runner and Docker on Ubuntu, and then set up a basic pipeline that utilizes your own Gitlab Runner.

Setup Docker

Install Docker:

$ sudo apt update && sudo apt upgrade -y
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common -y
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

$ sudo apt update
$ sudo apt install docker-ce -y
$ docker run hello-world
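
Docker is now installed; it also needs to be enabled on startup, which we will verify later on. With systemd this is done as follows:

$ sudo systemctl enable docker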

Install and Setup Gitlab Runner

This setup is intended for Linux 64-bit; for other distributions, have a look at their docs.

Install the Runner:

$ sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64
$ sudo chmod +x /usr/local/bin/gitlab-runner
$ sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
$ sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
$ sudo gitlab-runner start

Register the Runner. The Gitlab-CI Token is available in your CI/CD Settings panel from the UI: https://gitlab.com/<account>/<repo>/settings/ci_cd

$ gitlab-runner register
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://gitlab.com/

Please enter the gitlab-ci token for this runner:
__masked__

Please enter the gitlab-ci description for this runner:
[my-runner]: my-runner

Please enter the gitlab-ci tags for this runner (comma separated):
my-runner,foobar
Registering runner... succeeded                     runner=66m_339h

Please enter the executor: docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh, virtualbox, docker+machine, kubernetes:
docker

Please enter the default Docker image (e.g. ruby:2.1):
alpine:latest

Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
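
If you prefer to script the registration instead of stepping through the prompts, gitlab-runner also supports a non-interactive mode. A sketch with the same values as above (replace the masked token with your own):

$ gitlab-runner register --non-interactive \
    --url "https://gitlab.com/" \
    --registration-token "__masked__" \
    --description "my-runner" \
    --tag-list "my-runner,foobar" \
    --executor "docker" \
    --docker-image "alpine:latest"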

Verify the status, and check that Docker and the Gitlab Runner are enabled on startup:

$ gitlab-runner status
Runtime platform                                    arch=amd64 os=linux pid=30363 revision=7f00c780 version=11.5.1
gitlab-runner: Service is running!

$ systemctl is-enabled gitlab-runner
enabled

$ systemctl is-enabled docker
enabled
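
You can also verify that the registered runner can authenticate against Gitlab, and list the runners from the local config:

$ gitlab-runner verify
$ gitlab-runner list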

Gitlab-CI Config for Shared Runners

If you would like to use the shared runners that Gitlab offers, the .gitlab-ci.yml config will look like this:

stages:
  - build
  - test

build:
  stage: build
  script:
    - echo "this is building"
    - hostname
    - mkdir builds
    - touch builds/data.txt
    - echo "true" > builds/data.txt
  artifacts:
    paths:
      - builds/

test:
  stage: test
  script:
    - echo "this is testing"
    - hostname
    - test -f builds/data.txt
    - grep "true" builds/data.txt

Gitlab-CI Config for your own Gitlab Runner

Gitlab uses the tags that were specified during registration to determine which runner the jobs get executed on; for more information, have a look at their docs.

The .gitlab-ci.yml config for using your gitlab runner:

stages:
  - build
  - test

build:
  stage: build
  tags:
    - my-runner
  script:
    - echo "this is building"
    - hostname
    - mkdir builds
    - touch builds/data.txt
    - echo "true" > builds/data.txt
  artifacts:
    paths:
      - builds/

test:
  stage: test
  tags:
    - my-runner
  script:
    - echo "this is testing"
    - hostname
    - test -f builds/data.txt
    - grep "true" builds/data.txt

Trigger and Check Docker

Commit the config to master and let your pipeline run its jobs. Upon completion, have a look at Docker on your server for the containers that the jobs ran in:

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                          PORTS               NAMES
04292a78de0b        c04b8be95e1e        "gitlab-runner-cache.."  About a minute ago   Exited (0) About a minute ago                       runner-xx-project-xx-concurrent-0-cache-3cxx0
49b1b3c4adf9        c04b8be95e1e        "gitlab-runner-cache.."  About a minute ago   Exited (0) About a minute ago                       runner-xx-project-xx-concurrent-0-cache-6cxxa
422b23191e8c        hello-world         "/hello"                 24 minutes ago       Exited (0) 24 minutes ago                           wizardly_meninsky

As we know, each job gets executed in a different container; you can see from the output above that there were two different containers for the two jobs that were specified in our pipeline.


Thank You

Please feel free to show support by sharing this post, making a donation or subscribing, or reach out to me if you would like me to demo and write up on any specific tech topic.



Thanks for reading!

Local Dev Environment for Wordpress Using Docker Compose

Let’s set up a local development environment with Wordpress and MySQL using Docker Compose.

Docker Compose File

Let’s look at our docker-compose.yml file:

version: '3.1'

services:

  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
    networks:
      - wordpress

  mysql:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    networks:
      - wordpress

networks:
  wordpress:

The environment variables for the MySQL Docker image are:

- MYSQL_ROOT_PASSWORD
- MYSQL_DATABASE
- MYSQL_USER, MYSQL_PASSWORD
- MYSQL_ALLOW_EMPTY_PASSWORD
- MYSQL_RANDOM_ROOT_PASSWORD
- MYSQL_ONETIME_PASSWORD

More info can be viewed on this resource: hub.docker.com/_/mysql/

Launching our Wordpress Application:

Let’s deploy Wordpress:

$ docker-compose up
Creating network "wordpress_wordpress" with the default driver
Creating wordpress_mysql_1_3e6e3cfe07b1     ... done
Creating wordpress_wordpress_1_a9cb16f277af ... done
Attaching to wordpress_wordpress_1_9227f3d3e587, wordpress_mysql_1_65cc98d222d0
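
Running docker-compose up in the foreground attaches your terminal to the logs; for day-to-day development you may prefer running it detached and tailing the logs only when needed:

$ docker-compose up -d
$ docker-compose logs -f wordpress
$ docker-compose down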

Accessing Wordpress

You should be able to access Wordpress on http://localhost:8080/ (port 8080 on the host maps to port 80 in the container).

Local Dev Environment for Mediawiki Using Docker Compose

Let’s set up a local development environment with Mediawiki and MySQL using Docker Compose.

Docker Compose File

Let’s look at our docker-compose.yml file:

version: "3.4"

services:

  db:
    image: mysql:5.6
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=mw
      - MYSQL_DATABASE=mediawiki
      - MYSQL_PASSWORD=pass
    volumes:
      - /Users/ruan/workspace/docker/mediawiki/mediawiki-mysql-data:/var/lib/mysql
    networks:
      - mediawiki
    ports:
      - 3306:3306

  memcached:
    image: rbekker87/memcached:alpine
    environment:
      - MEMCACHED_USER=memcached
      - MEMCACHED_HOST=0.0.0.0
      - MEMCACHED_PORT=11211
      - MEMCACHED_MEMUSAGE=128
      - MEMCACHED_MAXCONN=1024
    networks:
      - mediawiki

  mediawiki:
    image: benhutchins/mediawiki:latest
    networks:
      - mediawiki
    environment:
      - MEDIAWIKI_DB_TYPE=mysql
      - MEDIAWIKI_DB_HOST=db
      - MEDIAWIKI_DB_USER=mw
      - MEDIAWIKI_DB_PASSWORD=pass
      - MEDIAWIKI_SITE_SERVER=http://localhost
      - MEDIAWIKI_SITE_NAME="My Lekke Wiki"
      - MEDIAWIKI_SITE_LANG=en
      - MEDIAWIKI_ADMIN_USER=admin
      - MEDIAWIKI_ADMIN_PASS=password123
      - MEDIAWIKI_UPDATE=true
      - MEDIAWIKI_ENABLE_SSL=false
    volumes:
      - /Users/ruan/workspace/docker/mediawiki/mediawiki-data:/data
    ports:
      - 80:80
    depends_on:
      - db
      - memcached

networks:
  mediawiki:

Your current working directory in this case: /Users/ruan/workspace/docker/mediawiki

The environment variables for the MySQL Docker image are:

- MYSQL_ROOT_PASSWORD
- MYSQL_DATABASE
- MYSQL_USER, MYSQL_PASSWORD
- MYSQL_ALLOW_EMPTY_PASSWORD
- MYSQL_RANDOM_ROOT_PASSWORD
- MYSQL_ONETIME_PASSWORD

More info can be viewed on this resource: hub.docker.com/_/mysql/

Launching our Mediawiki Application:

Let’s deploy Mediawiki:

$ docker-compose up
Creating network "mediawiki_mediawiki" with the default driver
Creating mediawiki_memcached_1_bbbe8d3fa8b3 ... done
Creating mediawiki_db_1_257775fcf65b        ... done
Creating mediawiki_mediawiki_1_56813d66cbe2 ... done
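
To confirm that all three services are up, and to test the database credentials from the compose file above:

$ docker-compose ps
$ docker-compose exec db mysql -umw -ppass mediawiki -e 'show tables;'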

Accessing Mediawiki

You should be able to access Mediawiki on http://localhost:80/


Setup a Basic CI Pipeline on Gitlab

In this tutorial we will setup a Basic CI (Continuous Integration) Pipeline on Gitlab.

The code for this example is available on gitlab.com/rbekker87/demo-ci-basic-pipeline.

If you would like to read more on Continuous Integration / Continuous Delivery (CI/CD), have a look at the resources section at the end of this post.

What will we be doing?

The aim is that every time a commit is made to the master branch, the jobs defined in .gitlab-ci.yml are executed, and a job only passes if its scripts return exit code 0.

The jobs get executed on Gitlab runners hosted by Gitlab. Important to note is that every job runs independently of the others.

Our Basic Pipeline

In this pipeline we will have 2 basic jobs, where each job executes a set of scripts:

Build:

$ echo "this is building" 
$ hostname
$ mkdir builds
$ touch builds/data.txt
$ echo "true" > builds/data.txt

Test:

$ echo "this is testing"
$ hostname
$ test -f builds/data.txt
$ grep "true" builds/data.txt

Setup the Pipeline:

From a newly created repository which I’ve cloned to my workspace, create the config:

$ touch .gitlab-ci.yml

The config for the above yaml file:

stages:
  - build
  - test

build:
  stage: build
  script:
    - echo "this is building"
    - hostname
    - mkdir builds
    - touch builds/data.txt
    - echo "false" > builds/data.txt
  artifacts:
    paths:
      - builds/

test:
  stage: test
  script:
    - echo "this is testing"
    - hostname
    - test -f builds/data.txt
    - grep "true" builds/data.txt

Config Explained

  • We define 2 stages for this pipeline: build and test
  • We provide context for each job: the stage, the script (commands that will be executed in the lifecycle of the runner) and artifacts (the content that will be transferred between jobs, as each job runs in a different runner/container)

Note that I deliberately made a mistake so that my pipeline will fail: I populated the content “false” into the builds/data.txt file in the build job and grep for the word “true” in the test job, so the test job will fail.
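
The pass/fail behaviour comes down to exit codes, which you can reproduce locally: grep returns a non-zero exit code when the pattern is not found, and any non-zero exit code fails the job:

$ echo "false" > data.txt
$ grep "true" data.txt
$ echo $?
1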

Push to Gitlab

Save the content to the config file, add, commit and push to master:

$ git add .gitlab-ci.yml
$ git commit -m "add gitlab-ci config"
$ git push origin master

Gitlab Pipelines

From the Gitlab UI, if you head over to CI/CD -> Pipelines, you should see your pipeline running:

When you select the Pipeline ID, you should be presented with the jobs available in your pipeline:

Select Jobs, and you should see an overview of your jobs. At this moment we can see that the build job has completed, and that the test job is busy running:

Shortly thereafter the status of the test job should change to failed; select the Job ID and you should see the output:

The output above also gives you a link to create a new issue, which is quite handy.

Fix the Pipeline Config

Let’s go ahead and change the content in the .gitlab-ci.yml config and push to master:

$ vim .gitlab-ci.yml

Change the line - echo "false" > builds/data.txt to - echo "true" > builds/data.txt; the full content of the file:

stages:
  - build
  - test

build:
  stage: build
  script:
    - echo "this is building"
    - hostname
    - mkdir builds
    - touch builds/data.txt
    - echo "true" > builds/data.txt
  artifacts:
    paths:
      - builds/

test:
  stage: test
  script:
    - echo "this is testing"
    - hostname
    - test -f builds/data.txt
    - grep "true" builds/data.txt

Commit and push to master:

$ git add .gitlab-ci.yml
$ git commit -m "change content in script"
$ git push origin master

When you head over to Pipelines, you will see that the pipeline is busy running, and on the right the commit that we just made:

Great Success

Select the Pipeline ID, then select Jobs, you should see both jobs succeeded:

Select the Job ID of the test job, and from the output you will see that the job succeeded:

From this output you can also confirm that each job ran in a different runner, as the hostnames that were returned to stdout were different.

Resources

This was a really basic example to demonstrate Gitlab CI.

Thank You

Please feel free to show support by sharing this post, making a donation or subscribing, or reach out to me if you would like me to demo and write up on any specific tech topic.



Thanks for reading!

Resizing Hetzner Cloud Block Storage Volumes on the Fly

Today we will be looking into Hetzner’s Cloud Storage Volumes and how you can resize volumes on the fly!

What is Hetzner’s Cloud Storage Volumes

Hetzner Cloud offers fast, flexible, and cost-effective SSD-based Block Storage which can be attached to your Hetzner Cloud Server. At this point in time it’s available in the Nuremberg and Helsinki regions.

Resizing of Volumes

Volumes can be resized up to 10TB, and the console allows you to resize in 1GB increments. You can increase a volume’s size, but you cannot decrease it.
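
As a side note, if you manage your infrastructure with the hcloud CLI rather than the console, a resize should look something along these lines (the volume name is a placeholder, so double-check the flags against the CLI help):

$ hcloud volume resize my-volume --size 20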

Demo through Cloud Volumes

Let’s run through a demo, where we will do the following:

  • Provision a Server
  • Provision a Volume (XFS Filesystem; EXT4 is also an option)
  • Inspect the Volume, do some performance testing
  • Resize the Volume via Hetzner Cloud Console
  • Grow the XFS Filesystem

After provisioning a server, which takes less than a minute, you should see that the server is created:

SSH into your server. At this moment, we have not provisioned any volumes, so only the root partition should be mounted. Look at the block allocation:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 19.1G  0 disk
└─sda1   8:1    0 19.1G  0 part /
sr0     11:0    1 1024M  0 rom

Have a look at the fstab:

$ cat /etc/fstab
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
UUID=2f54e8e6-ff9c-497a-88ea-ce159f6cd283 /               ext4    discard,errors=remount-ro 0       1
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0

And have a look at the mounted disks layout:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            959M     0  959M   0% /dev
tmpfs           195M  652K  194M   1% /run
/dev/sda1        19G  1.6G   17G   9% /
tmpfs           973M     0  973M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           973M     0  973M   0% /sys/fs/cgroup
tmpfs           195M     0  195M   0% /run/user/0

Now, time to provision a Volume. Head over to the Volumes section:

I’m going ahead with creating a volume with 10GB of space and assigning it to my server, and yeah that’s right, 10GB of storage is 0,40 EUR per month, epic value for money!

After your volume is created, you should see output similar to the below:

Head back to your server, and have a look at the output when running the same commands from earlier:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 19.1G  0 disk
└─sda1   8:1    0 19.1G  0 part /
sdb      8:16   0   10G  0 disk /mnt/HC_Volume_1497823
sr0     11:0    1 1024M  0 rom

The fstab config:

$ cat /etc/fstab
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
UUID=2f54e8e6-ff9c-497a-88ea-ce159f6cd283 /               ext4    discard,errors=remount-ro 0       1
/dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0
/dev/disk/by-id/scsi-0HC_Volume_1497823 /mnt/HC_Volume_1497823 xfs discard,nofail,defaults 0 0

The disk layout:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            959M     0  959M   0% /dev
tmpfs           195M  660K  194M   1% /run
/dev/sda1        19G  1.6G   17G   9% /
tmpfs           973M     0  973M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           973M     0  973M   0% /sys/fs/cgroup
tmpfs           195M     0  195M   0% /run/user/0
/dev/sdb         10G   43M   10G   1% /mnt/HC_Volume_1497823

We can see from the output above how easy it is to provision a volume to your Hetzner Cloud Server. Everything gets done for you: the disk is mounted and the /etc/fstab configuration is populated.

Time for some performance testing on the volume:

$ dd bs=2M count=256 if=/dev/zero of=/mnt/HC_Volume_1497823/test.dd
256+0 records in
256+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 0.911306 s, 589 MB/s

Pretty neat right? :D

Let’s resize the volume via the Hetzner Cloud Console to 20GB and resize the filesystem. From the Console, head over to the volumes section, open the more options menu and select resize:

After the volume has been resized, head back to your server and resize the filesystem. As we are using the XFS filesystem, we will use xfs_growfs:

$ xfs_growfs /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621440 to 5242880

Have a look at the disk layout and see that the filesystem was resized on the fly. Note that XFS filesystems are grown while mounted; if you have applications writing to and reading from that volume, it’s safest to do this during a quiet period.

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            959M     0  959M   0% /dev
tmpfs           195M  660K  194M   1% /run
/dev/sda1        19G  2.1G   16G  12% /
tmpfs           973M     0  973M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           973M     0  973M   0% /sys/fs/cgroup
tmpfs           195M     0  195M   0% /run/user/0
/dev/sdb         20G  565M   20G   3% /mnt/HC_Volume_1497823
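
If you opted for an EXT4 volume instead of XFS (mentioned as an option earlier), the equivalent grow step would use resize2fs rather than xfs_growfs, something like:

$ sudo resize2fs /dev/sdb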

I must admit, I am really stoked with Hetzner’s offerings and their performance. I’ve been hosting servers with them for the past 5 months and so far they really impressed me.

Have a look at Hetzner Cloud’s offerings; they have great prices, as you can start off with a server from as little as 2.49 EUR per month, which gives you 1 vCPU, 2GB of RAM, 20GB of disk space and 20TB of traffic. I mean, that’s awesome value for money. They also offer Floating IPs, Backups, etc.


Thank You

Please feel free to show support by sharing this post, making a donation or subscribing, or reach out to me if you would like me to demo and write up on any specific tech topic.



Thanks for reading!

Creating a UI in Python Flask and Bootstrap for Our Serverless URL Shortener

From a previous post, we went through the setup of building a Serverless URL Shortener with API Gateway, Lambda, and DynamoDB on AWS. Today we will build a web user interface using Python Flask, Bootstrap and JavaScript that will communicate with our API to shorten URLs.

Note: although Python Flask here is a hosted option, you could also take this example and host it as a web page on Amazon S3, for the complete serverless route.

Dependencies:

We need Flask, Gunicorn (optional) and Requests:

$ pip install flask gunicorn requests

Application Code:

It’s good practice to use an API key for some level of security, but if not, you can just remove the x-api-key section of the headers.

The application relies on 3 environment variables: APP_TITLE, the banner name (defaults to “My URL Shortener” if none is set); TINY_API_URL, the URL to create the shortened URL; and X_API_KEY, the API key for your API.

The content of app.py:

from flask import Flask, render_template, request, url_for
import os
import sys
import socket
import requests
import json
import logging

tiny_api_url = os.getenv('TINY_API_URL', None)
tiny_api_key = os.getenv('X_API_KEY', None)
app_title = os.getenv('APP_TITLE', 'My URL Shortener')

if tiny_api_url is None or tiny_api_key is None:
    logging.error("Failed to load configuration")
    sys.exit(4)

headers = {'Content-Type': 'application/json', 'X-Api-Key': tiny_api_key}

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html', app_title=app_title)

@app.route('/shortened', methods=['GET', 'POST'])
def search_request():
    user_url = request.form["input"]
    response = requests.post(
        tiny_api_url,
        headers=headers,
        data=json.dumps({
            "long_url": user_url
            }
        )
    )
    return render_template('results.html', app_title=app_title, res=response.content )

if __name__ == '__main__':
    app.run(passthrough_errors=False)

JavaScript

We want to copy the value of the shortened URL response to the clipboard when clicking on a button. For that functionality, we need some JavaScript.

$ mkdir -p static/js
$ touch static/js/clipboard.js

The content for our JavaScript function, static/js/clipboard.js:

function copyToClipboard() {
  var copyText = document.getElementById("input");
  copyText.select();
  document.execCommand("Copy");
}

HTML

The content for templates/index.html:

The content for templates/results.html:

Run the Server

Before we run the server, we need to set the environment variables as mentioned earlier:

$ export TINY_API_URL=https://tiny-api.mydomain.com/create
$ export X_API_KEY=someRandomSecretKey09876543210

Run the Server:

$ gunicorn -w 2 -b 0.0.0.0:8080 --access-logfile=/dev/stdout --error-logfile=/dev/stderr app:app

After booting the server, access the server on http://localhost:8080/ and the response should look like:

Dockerizing this Application

The source code for this project is available on my github repository

Thank You

Please feel free to show support by sharing this post, making a donation or subscribing, or reach out to me if you would like me to demo and write up on any specific tech topic.


VULTR Cloud Servers Limited Signup Promotion

It’s promotion time with VULTR! Get a head start with some free credits.

Promotion

VULTR has a promotion running for a limited time: when you sign up with the link below and use the coupon / promo code VULTRMATCH, you will receive double your deposit in credits, up to $100. This applies to new customers only.

I’m not quite sure how long they will be running this promotion, but you can get $100 free in credits when you sign up. That is basically 20 months of free hosting of a Cloud Server with 1 CPU, 1GB RAM and 1TB bandwidth.

Here’s the link:

About VULTR

If you are not familiar with VULTR, they are a cloud hosting company that offers cloud servers, bare-metal servers and storage servers in 16 different regions, and they provide an hourly billing model.

Below are some of their features:

  • 16 Locations: Silicon Valley, Seattle, LA, Dallas, Toronto, Miami, New Jersey, Chicago, Atlanta, London, Paris, Frankfurt, Amsterdam, Tokyo, Singapore, Sydney
  • 100% SLA Guaranteed
  • Solid-State Drives (SSD)
  • Private Networking
  • Reserved IP’s
  • Anti-DDOS Support
  • Backups
  • DNS
  • Startup Scripts
  • Firewalls
  • Pretty Slick User Interface
  • Root Access
  • Hourly Billing
  • Deploy Applications Instantly to your Servers with App Deploys
  • OS Support: Linux, Windows and Custom Uploads
  • API Support
  • Great Documentation and Tutorials

They also allow you to submit articles to them that can earn you up to $300 per article; check it out here.

VULTR Mission

From their website, their about us section states:

“Vultr, founded in 2014, is on a mission to empower developers and businesses by simplifying the deployment of infrastructure via its advanced cloud platform. Vultr is strategically located in 16 datacenters around the globe and provides frictionless provisioning of public cloud, storage and single-tenant bare metal.”

“Vultr has made it our priority to offer a standardized highly reliable high performance cloud compute environment in all of the cities we serve. Launching a cloud server environment in 16 cities around the globe has never been easier!”

Launching a Server

I decided to deploy a server pre-configured with Docker, and in just about a minute I had my server up and running with Docker, ready to go.

Screenshot of the UI:

Screenshot of the root login:

Overall

I’m quite impressed with VULTR and the ease of use. The pricing is really good, and I like the fact that you can deploy servers with pre-configured software on them.

Thank You

Please feel free to show support by sharing this post, making a donation or subscribing, or reach out to me if you would like me to demo and write up on any specific tech topic.


Python Flask Tutorial Series: Routing in Flask

This is post 3 of our Python Flask Tutorial Series where we will go into Views and Routing.

In our previous post we went through the steps to set up a Virtual Environment for our Flask App.

Flask Views and Routing:

Flask routing is essentially mapping a URL, e.g. example.com/pages/test, to a view function within your code; for example, having /contact-us display a page with contact details.

The route() decorator in Flask is used to bind the URL to a function.

Some basic examples:

This is a basic web app that shows on which page you are:

from flask import Flask

app = Flask(__name__)

@app.route('/home')
def home():
    return '<h2>You are on the Home Page</h2>'

@app.route('/about-us')
def about():
    return '<h2>You are on the About Us Page</h2>'

if __name__ == '__main__':
    app.run()

With app.run() we have passed no arguments, so it will use the defaults, which are:

  • Host: 127.0.0.1
  • Port: 5000
  • Debug: False

To set your own values, you could do something like: app.run(host='0.0.0.0', port=8080, debug=True). Note: Never use debug mode in production.

So when you do a GET Request on http://localhost:5000/home you will be presented with the response that you are on the home page.

This is all good and well, but it’s static, so let’s look at how we can set this up in a dynamic way.

URL Variables:

We can use variables in the route() decorator, which get passed through to the function. In this next example we will use a name variable, and whatever name is passed in the GET request will be returned in the response.

from flask import Flask
app = Flask(__name__)

@app.route('/user/<name>')
def user(name):
    return 'Welcome, {}'.format(name)

if __name__ == '__main__':
    app.run()

So with the above example, <name> will be used as a placeholder or variable, passed through to our function and then returned in our response, for example:

$ curl -XGET http://localhost:5000/user/James
Welcome, James

$ curl -XGET http://localhost:5000/user/Frank
Welcome, Frank

So this can be really useful when dealing with dynamic data. You can also go deeper into this, like the following:

from flask import Flask
app = Flask(__name__)

@app.route('/user/<name>/<surname>/<prog_lang>')
def user(name, surname, prog_lang):
    return '{} {} likes {}'.format(name, surname, prog_lang)

if __name__ == '__main__':
    app.run()

This will produce:

$ curl -XGET http://localhost:5000/user/John/Smith/Python
John Smith likes Python

We can also have defaults, so if no values were passed and you only hit the /user endpoint, a default value is returned:

from flask import Flask
app = Flask(__name__)

@app.route('/user', defaults={'name': 'Ruan', 'surname': 'B', 'prog_lang': 'Python'})
@app.route('/user/<name>/<surname>/<prog_lang>')
def user(name, surname, prog_lang):
    return '{} {} likes {}'.format(name, surname, prog_lang)

if __name__ == '__main__':
    app.run()

So then the output would look like this:

$ curl -XGET http://localhost:5000/user
Ruan B likes Python

This is a very simple example, but you could use it in many ways.

Data Types in URL Routing:

You could also explicitly set the data types, like string or int, in your route decorators.

Example for Strings:

from flask import Flask
app = Flask(__name__)

@app.route('/city/<string:cityname>')
def user(cityname):
    return 'Selected City is: {}'.format(cityname)

if __name__ == '__main__':
    app.run()

Example for Integers:

from flask import Flask
app = Flask(__name__)

@app.route('/user/<int:age>')
def user(age):
    return 'Selected age is: {}'.format(age)

if __name__ == '__main__':
    app.run()

And now, because the data type is an integer, the route will not match when you try to pass a string, and Flask will return a 404. So the value that you pass is strictly limited to integers.
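
A quick sketch of what you can expect from the int converter, assuming the app from above is running locally (the exact error page may differ, but the status code will be a 404):

$ curl -XGET http://localhost:5000/user/30
Selected age is: 30

$ curl -sI http://localhost:5000/user/abc | head -1
HTTP/1.0 404 NOT FOUND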

Example with if statements:

You could also use if statements in your functions, like determining the age group, for example:

from flask import Flask
app = Flask(__name__)

@app.route('/user/<int:age>')
def user(age):
    if age >= 28:
        return 'Your selected age is {}, so you are in the 28 and older group'.format(age)
    else:
        return 'Your selected age is {}, so you are in the younger than 28 group'.format(age)

if __name__ == '__main__':
    app.run()

So with the above example:

$ curl -XGET http://127.0.0.1:5000/user/12
Your selected age is 12, so you are in the younger than 28 group

$ curl -XGET http://127.0.0.1:5000/user/30
Your selected age is 30, so you are in the 28 and older group

Example with Floats:

@app.route('/myfloat/<float:floatnum>')

Example with Path Types:

We can also accept the URL path that is passed, by using the path type:

from flask import Flask
app = Flask(__name__)

@app.route('/path/<path:mypath>')
def user(mypath):
    return 'Your selected path is: /{}'.format(mypath)

if __name__ == '__main__':
    app.run()

So with the above example:

$ curl -XGET http://127.0.0.1:5000/path/apps/data/my/app
Your selected path is: /apps/data/my/app

I hope this was useful. Next up in our Python Flask Tutorial Series will be rendering templates in Flask with the Jinja2 templating engine.

Python Flask Tutorial Series: Setup a Python Virtual Environment

In our previous post we wrote a basic Hello World App in Flask. This is post 2 of the Python Flask Tutorial Series.

In this section we will cover our environment setup, where I will show you how to set up a typical Python Flask environment using virtualenv.

What is VirtualEnv?

Virtualenv allows you to have isolated Python environments, where each project or environment can have its own package versions. Some applications may need a specific version of a certain package, so let’s say you are running multiple applications on one server: having to manage each one’s dependencies can be a pain, as you may run into scenarios where they depend on specific versions and you have to upgrade/downgrade packages like no-one’s business.

Luckily, with the help of virtualenv, each environment is isolated from the others; so system-wide you might be running Python 2.7 with minimal packages installed, and then you can create a virtual environment with Python 3 and the packages for the application you are developing.

Setup a Virtual Environment:

We will set up a virtualenv for our project with our default Python version, which in this case is 2.7:

$ mkdir ~/projects/mywebapp
$ cd ~/projects/mywebapp
$ virtualenv .venv

At this moment you should have your virtual environment ready; now to enter and activate our environment:

$ source .venv/bin/activate

To confirm your python version:

$ python --version
Python 2.7.6

If you have multiple versions of python, you can create your virtual environment with a different python version by using the -p flag, as in:

$ virtualenv -p /usr/local/bin/python2.7 .venv

Now that we are in our virtualenv, let’s install 2 packages, Flask and Requests:

$ pip install flask
$ pip install requests

With pip we can list the installed packages with pip freeze. Since this is our virtual environment, we will only see the packages that were installed into this environment:

$ pip freeze
click==6.7
Flask==0.12
itsdangerous==0.24
Jinja2==2.9.5.1
MarkupSafe==1.0
requests==2.7.0
six==1.10.0
virtualenv==15.0.1
Werkzeug==0.12.1

We can dump this to a file, which we can later use to install packages from a list so that we don’t have to specify them manually. We can dump them by doing this:

$ pip freeze > requirements.txt

Now let’s say you are on a different host and you would like to install the packages from the requirements.txt file; we do this by using the following command:

$ pip install -r requirements.txt

To exit your virtualenv, you do the following:

$ deactivate

I hope this was useful. Next up in our Python Flask Tutorial Series will be Routing in Flask.

How to Setup a Serverless URL Shortener With API Gateway Lambda and DynamoDB on AWS

Today we will set up a Serverless URL Shortener using API Gateway, Lambda with Python, and DynamoDB.

Overview

The service that we will be creating will shorten URLs via our API, which will create an entry in DynamoDB. When a GET method is performed on the shortened URL, a GetItem is executed on DynamoDB to get the long URL, and a 301 redirect is performed to send the client to the intended destination URL.

Note, I am using a domain name which is quite long, but it’s only for demonstration; if you can get hold of a short domain like t.co, that will make your shortened URLs really short in character count.

Update: URL Shortener UI available in this post

The Setup

Code has been published to my Github Repository

The following services will be used to create a URL Shortener:

  • AWS API Gateway: ( /create: to create a shortened url and /t/{id} to redirect to long url)
  • AWS IAM: (Role and Policy for Permissions to call DynamoDB from Lambda)
  • AWS Lambda: (Application Logic)
  • AWS DynamoDB: (Persistent Store to save our Data)
  • AWS ACM: (Optional: Certificate for your Domain)
  • AWS Route53: (Optional: DNS for the domain that you want to associate to your API)

The flow will be like the following:

  • POST Request gets made to the /create request path with the long_url data in the payload
  • This data is then used by the Lambda function to create a short url and create a entry in DynamoDB
  • In DynamoDB the entry is created with the short id as the hash key and the long url as one of the attributes
  • The response to the client will be the short url
  • When a GET method is performed on the id eg /t/{short_id}, a lookup gets done on the DynamoDB table, retrieves the long url from the table
  • A 301 redirect gets performed on API Gateway and the client gets redirected to the intended url

Creating the URL Shortener

After completing this tutorial you will have your own Serverless URL Shortener using API Gateway, Lambda and DynamoDB.

IAM Permissions

On AWS IAM, create an IAM Policy; in my case the policy name is lambda-dynamodb-url-shortener, and note that I masked out my account number:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:Query",
                "dynamodb:UpdateItem"
            ],
            "Resource": "arn:aws:dynamodb:eu-west-1:xxxxxxxxxxxx:table/url-shortener-table"
        }
    ]
}

Head over to IAM Roles, select Create Role, select Lambda as the Trusted Entity from the AWS Service section, proceed with the permissions and select the IAM Policy that was created, in my case lambda-dynamodb-url-shortener, together with the AWSLambdaBasicExecutionRole managed policy. Give your Role a name, in my case lambda-dynamodb-url-shortener-role.

DynamoDB Table

Next, head over to DynamoDB and create a table, in my case with the table name url-shortener-table and the primary key short_id set to string:
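
If you prefer the AWS CLI over the console, the equivalent table can be created as follows (this sketch uses provisioned throughput at free-tier levels; adjust to your needs):

$ aws dynamodb create-table \
    --table-name url-shortener-table \
    --attribute-definitions AttributeName=short_id,AttributeType=S \
    --key-schema AttributeName=short_id,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5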

Lambda Functions

Once the table is created, head over to Lambda and create a Lambda function, in my case using Python 3.6. Provide a name, where I used url-shortener-create, and select the IAM role that we created previously. This will be the Lambda function that creates the shortened URLs:

The code for your Lambda function, which will take care of creating the short URLs and saving them to DynamoDB. Take note of the region and table name to ensure that they match your setup:

import os
import json
import boto3
from string import ascii_letters, digits
from random import choice, randint
from time import strftime, time
from urllib import parse

app_url = os.getenv('APP_URL')
min_char = int(os.getenv('MIN_CHAR'))
max_char = int(os.getenv('MAX_CHAR'))
string_format = ascii_letters + digits

ddb = boto3.resource('dynamodb', region_name = 'eu-west-1').Table('url-shortener-table')

def generate_timestamp():
    response = strftime("%Y-%m-%dT%H:%M:%S")
    return response

def expiry_date():
    response = int(time()) + int(604800)
    return response

def check_id(short_id):
    if 'Item' in ddb.get_item(Key={'short_id': short_id}):
        # the short_id already exists, so generate a new one
        return generate_id()
    else:
        return short_id

def generate_id():
    short_id = "".join(choice(string_format) for x in range(randint(min_char, max_char)))
    print(short_id)
    response = check_id(short_id)
    return response

def lambda_handler(event, context):
    analytics = {}
    print(event)
    short_id = generate_id()
    short_url = app_url + short_id
    long_url = json.loads(event.get('body')).get('long_url')
    timestamp = generate_timestamp()
    ttl_value = expiry_date()

    analytics['user_agent'] = event.get('headers').get('User-Agent')
    analytics['source_ip'] = event.get('headers').get('X-Forwarded-For')
    analytics['xray_trace_id'] = event.get('headers').get('X-Amzn-Trace-Id')

    if len(parse.urlsplit(long_url).query) > 0:
        url_params = dict(parse.parse_qsl(parse.urlsplit(long_url).query))
        for k in url_params:
            analytics[k] = url_params[k]

    response = ddb.put_item(
        Item={
            'short_id': short_id,
            'created_at': timestamp,
            'ttl': int(ttl_value),
            'short_url': short_url,
            'long_url': long_url,
            'analytics': analytics,
            'hits': int(0)
        }
    )

    return {
        "statusCode": 200,
        "body": short_url
    }

Set a couple of environment variables that will be used in our function. MIN_CHAR and MAX_CHAR from the screenshot below are the minimum and maximum number of characters that will be used, in a random manner, to make the short id unique. APP_URL will be your domain name, as this will be returned to the client with the short id, e.g. https://tiny.myserverlessapp.net/t/3f8Hf38n398t.
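
These can also be set from the AWS CLI; a sketch where the MIN_CHAR and MAX_CHAR values are placeholders, so substitute your own:

$ aws lambda update-function-configuration \
    --function-name url-shortener-create \
    --environment "Variables={APP_URL=https://tiny.myserverlessapp.net/t/,MIN_CHAR=12,MAX_CHAR=16}"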

While you are in Lambda, create the function that will retrieve the long URL, in my case url-shortener-retrieve:

import os
import json
import boto3

ddb = boto3.resource('dynamodb', region_name = 'eu-west-1').Table('url-shortener-table')

def lambda_handler(event, context):
    short_id = event.get('short_id')

    try:
        item = ddb.get_item(Key={'short_id': short_id})
        long_url = item.get('Item').get('long_url')
        # increase the hit number on the db entry of the url (analytics?)
        ddb.update_item(
            Key={'short_id': short_id},
            UpdateExpression='set hits = hits + :val',
            ExpressionAttributeValues={':val': 1}
        )

    except:
        return {
            'statusCode': 301,
            'location': 'https://objects.ruanbekker.com/assets/images/404-blue.jpg'
        }

    return {
        "statusCode": 301,
        "location": long_url
    }

API Gateway

Head over to API Gateway and create your API, in my case url-shortener-api

Head over to Resources:

and create a new resource called /create:

Once the resource is created, create a POST method on the create resource, select Lambda as the integration type, and enable Lambda proxy integration as seen below:

Once you save it, it will ask you to give API Gateway permission to invoke your Lambda function, which you can accept by hitting OK as below:

When you look at the POST method on your create resource, it should look like this:

Select the root resource / and from Actions create a new resource /t:

Select the /t resource and create a new resource named shortid, providing {shortid} in the resource path, as this will be the data that is proxied through to our Lambda function:

Create a GET method on the /t/{shortid} resource and select the url-shortener-retrieve Lambda function from the Lambda integration selection, as seen below:

Again, grant api gateway permission to invoke your function:

When you select the GET method, it should look like this:

Select the Integration Request and head over to Mapping Templates:

From the Request body passthrough, add a mapping template for application/json and provide the following mapping template:

{
    "short_id": "$input.params('shortid')"
}

On the Method Response:

Delete the 200 HTTP status response and create a new response via “Add Response”; add the 301 HTTP status and add a Location header to the response.

Navigate to the Integration Response from the /{shortid} GET method:

delete the 200 HTTP response, add an “integration response”, set the method response status to 301, and add a header mapping for Location to integration.response.body.location as below:

make sure the integration response is selected so that the method response reflects the 301:

Navigate to Actions and select “Deploy API”, select your stage, in my case test and deploy:

Go to stages, select your stage, select the post request to reveal the API URL:

Time to test out the URL Shortener:

$ curl -XPOST -H "Content-Type: application/json" https://xxxxxx.execute-api.eu-west-1.amazonaws.com/test/create -d '{"long_url": "https://www.google.com/search?q=helloworld"}'
https://tiny.myserverlessapp.net/t/pcnWoCGCr2ad1x

ACM Certificates

At this moment we don’t have our domain connected to our API Gateway, and we would also want a certificate on our application, so we can use ACM to request a certificate that can be associated with our domain. In order to do that, first request a certificate in ACM: select Request a certificate, create a wildcard entry *.yourdomain.com, and select DNS validation (if you host with Route53, they give you the option to create the validation record automatically).
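
Requesting the certificate can also be done from the CLI; a sketch with a placeholder domain:

$ aws acm request-certificate \
    --domain-name "*.yourdomain.com" \
    --validation-method DNS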

Head back to API Gateway to associate the Domain and ACM Certificate to our API:

From the “Custom Domain Names” section, create a custom domain name. Once you have selected regional, you will get a target domain name, which resolves to the API endpoint that was created. From the “Base Path Mappings” section, select / as the path to your API stage, in my case url-shortener-api:test:

Route 53

The last part is to create a Route53 entry for tiny.yourdomain.com that resolves, via a CNAME, to the value of the target domain name provided in the custom domain names section:

Demo the URL Shortener Service:

Once everything is set up, we can test by creating a shortened URL:

$ curl -XPOST -H "Content-Type: application/json" https://tiny.myserverlessapp.net/create -d '{"long_url": "https://www.google.com/search?q=helloworld"}'
https://tiny.myserverlessapp.net/t/p7ISNcxTByXhN

Testing out the Short URL to redirect to the Destination URL:

$ curl -ivL https://tiny.myserverlessapp.net/t/p7ISNcxTByXhN
*   Trying 34.226.10.0...
* TCP_NODELAY set
* Connected to tiny.myserverlessapp.net (34.226.10.0) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.myserverlessapp.net
* Server certificate: Amazon
* Server certificate: Amazon Root CA 1
* Server certificate: Starfield Services Root Certificate Authority - G2
> GET /t/p7ISNcxTByXhN HTTP/1.1
> Host: tiny.myserverlessapp.net
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Date: Tue, 29 Nov 2018 00:05:02 GMT
Date: Tue, 29 Nov 2018 00:05:02 GMT
< Content-Type: application/json
Content-Type: application/json
< Content-Length: 77
Content-Length: 77
< Connection: keep-alive
Connection: keep-alive
< x-amzn-RequestId: f79048c8-cb56-41e8-b21d-b45fac47453a
x-amzn-RequestId: f79048c8-cb56-41e8-b21d-b45fac47453a
< x-amz-apigw-id: OeKPHH7_DoEFdjg=
x-amz-apigw-id: OeKPHH7_DoEFdjg=
< Location: https://www.google.com/search?q=helloworld
Location: https://www.google.com/search?q=helloworld

At this moment our API is open to the world, which is probably not the best, as everyone will be able to shorten URLs. You can check out the Set Up API Keys Using the API Gateway Console documentation on how to secure your application by utilizing an API key, which can be included in your request headers when shortening URLs.

For a bit of housekeeping, you can implement TTL on DynamoDB so that old items expire, which helps keep your DynamoDB table from growing to large amounts of storage; have a look at the post Delete Old Items with Amazon’s DynamoDB TTL Feature to implement that.
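
Enabling TTL on the table is a one-liner from the CLI, and the create function above already writes a ttl attribute with a 7 day (604800 seconds) expiry:

$ aws dynamodb update-time-to-live \
    --table-name url-shortener-table \
    --time-to-live-specification "Enabled=true, AttributeName=ttl"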

Thank You

Please feel free to show support by sharing this post, making a donation or subscribing, or reach out to me if you would like me to demo and write up on any specific tech topic.