Ruan Bekker's Blog

From a Curious mind to Posts on Github

The AWS CLI Cheatsheet for Bash

This is a post for all the AWS CLI oneliners that I stumble upon. Note that they will be updated over time.

RDS

Describe All RDS DB Instances:

$ aws --profile prod rds describe-db-instances --query 'DBInstances[*].[DBInstanceArn,DBInstanceIdentifier,DBInstanceClass,Endpoint]'

Describe an RDS DB Instance with a specific DB instance identifier:

$ aws --profile prod rds describe-db-instances --query 'DBInstances[?DBInstanceIdentifier==`db-staging`].[DBInstanceArn,DBInstanceIdentifier,DBInstanceClass,Endpoint]'
[
    [
        "arn:aws:rds:eu-west-1:<customer_id>:db:db-staging",
        "db-staging",
        "db.t2.micro",
        {
            "HostedZoneId": "ASKDJSAKDJBA",
            "Port": 5432,
            "Address": "db-staging.asdkjahsd.eu-west-1.rds.amazonaws.com"
        }
    ]
]

List all RDS DB Instances and limit output:

$ aws --profile prod rds describe-db-instances --query 'DBInstances[*].[DBInstanceArn,DBInstanceIdentifier,DBInstanceClass,Endpoint]'
[
    [
        "arn:aws:rds:eu-west-1:<customer_id>:db:db-name",
        "db-name",
        "db.t2.micro",
        {
            "HostedZoneId": "ABCDEFGHILKL",
            "Port": 5432,
            "Address": "db-name.abcdefg.eu-west-1.rds.amazonaws.com"
        }
    ],

List all RDS DB Instances that have backups enabled, and limit output:

$ aws --profile prod rds describe-db-instances --query 'DBInstances[?BackupRetentionPeriod>`0`].[DBInstanceArn,DBInstanceIdentifier,DBInstanceClass,Endpoint]'
[
    [
        "arn:aws:rds:eu-west-1:<customer_id>:db:db-name",
        "db-name",
        "db.t2.micro",
        {
            "HostedZoneId": "ABCDEFGHILKL",
            "Port": 5432,
            "Address": "db-name.abcdefg.eu-west-1.rds.amazonaws.com"
        }
    ],

Describe DB Snapshots for DB Instance Name:

$ aws --profile prod rds describe-db-snapshots --db-instance-identifier db --query 'DBSnapshots[?DBInstanceIdentifier==`db`].[DBInstanceIdentifier,DBSnapshotIdentifier,SnapshotCreateTime,Status]'
[
    [
        "db",
        "rds:db-2018-05-16-04-08",
        "2018-05-16T04:08:53.696Z",
        "available"
    ],
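
If you only want the identifier of the most recent snapshot, the JMESPath max_by function can do the selection for you; a sketch against the same instance:

$ aws --profile prod rds describe-db-snapshots --db-instance-identifier db --query 'max_by(DBSnapshots, &SnapshotCreateTime).DBSnapshotIdentifier'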

Events for the last 24 Hours:

$ aws --profile prod rds describe-events --source-identifier "rds:db-2018-05-16-04-08" --source-type db-snapshot --duration 1440 --query 'Events[*]'
[
    {
        "EventCategories": [
            "creation"
        ],
        "SourceType": "db-snapshot",
        "SourceArn": "arn:aws:rds:eu-west-1:<customer_id>:snapshot:rds:db-2018-05-16-04-08",
        "Date": "2018-05-16T04:08:40.264Z",
        "Message": "Creating automated snapshot",
        "SourceIdentifier": "rds:db-2018-05-16-04-08"
    },
    {
        "EventCategories": [
            "creation"
        ],
        "SourceType": "db-snapshot",
        "SourceArn": "arn:aws:rds:eu-west-1:<customer_id>:snapshot:rds:db-2018-05-16-04-08",
        "Date": "2018-05-16T04:32:04.047Z",
        "Message": "Automated snapshot created",
        "SourceIdentifier": "rds:db-2018-05-16-04-08"
    }
]

List Public RDS Instances:

$ aws --profile prod rds describe-db-instances --query 'DBInstances[?PubliclyAccessible==`true`].[DBInstanceIdentifier,Endpoint.Address]'

[
  [
    "name",
    "name.abcdef.eu-west-1.rds.amazonaws.com"
  ]
]
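
If you prefer a more readable layout than JSON, the same query should render fine as a table via the --output flag; adjust the profile and filter to your own:

$ aws --profile prod rds describe-db-instances --query 'DBInstances[?PubliclyAccessible==`true`].[DBInstanceIdentifier,Endpoint.Address]' --output table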

SSM Parameter Store:

List all parameters by path:

$ aws --profile prod ssm get-parameters-by-path --path '/service-a/team-a/my-app-name/' | jq '.Parameters[]' | jq -r '.Name'
/service-a/team-a/my-app-name/db_hostname
/service-a/team-a/my-app-name/db_username
/service-a/team-a/my-app-name/db_password
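
To fetch the decrypted values for everything under a path in one go, get-parameters-by-path also accepts --recursive and --with-decryption; a sketch that prints name=value pairs with jq:

$ aws --profile prod ssm get-parameters-by-path --path '/service-a/team-a/my-app-name/' --recursive --with-decryption | jq -r '.Parameters[] | "\(.Name)=\(.Value)"'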

Get a value from a parameter:

$ aws --profile prod ssm get-parameters --names '/service-a/team-a/my-app-name/db_username' --with-decryption | jq '.Parameters[]' | jq -r '.Value'
my_db_user

Thank You

Please feel free to show support by sharing this post, making a donation, subscribing, or reaching out to me if you want me to demo and write up on any specific tech topic.


Python Multiprocessing Tutorial

I stumbled upon a great Python multiprocessing tutorial when I was looking into spawning multiple processes in parallel on a Lambda function.

In this example I'm getting latencies between regions using tcpping, but instead of running them one at a time, I was looking into spawning them in parallel:

(code made static for demonstration)

import boto3
import os
import json
import multiprocessing as mp
from decimal import Decimal

region_maps = {
    'eu-west-1': {
        'dynamodb': 'dynamodb.eu-west-1.amazonaws.com'
    },
    'us-east-1': {
        'dynamodb': 'dynamodb.us-east-1.amazonaws.com'
    },
    'us-west-1': {
        'dynamodb': 'dynamodb.us-west-1.amazonaws.com'
    },
    'us-west-2': {
        'dynamodb': 'dynamodb.us-west-2.amazonaws.com'
    }
}

def get_results(target_region, target_service, target_endpoint):
    static_results = {
        "address": target_endpoint,
        "attempts": 5,
        "avg": 481.80199999999996,
        "max": 816.25,
        "min": 312.46,
        "port": 443,
        "region": "eu-west-1_{}_{}".format(target_service, target_region),
        "regionTo": target_region,
        "results": [
            {"seq": 1,"time": "816.25"},
            {"seq": 2,"time": "331.50"},
            {"seq": 3,"time": "597.22"},
            {"seq": 4,"time": "312.46"},
            {"seq": 5,"time": "351.58"}
        ],
        "timestamp": "2019-02-05T17:10:32"
    }
    return static_results

def dynamodb_write(data):
    ddb = boto3.Session(profile_name='test', region_name='eu-west-1').resource('dynamodb').Table('mydynamotable')
    ddb_parsed = json.loads(json.dumps(data), parse_float=Decimal)
    response = ddb.put_item(Item=ddb_parsed)
    return response

def spawn_work(region):
    target_region = region
    target_service = 'dynamodb'
    target_endpoint = region_maps[target_region][target_service]
    data = get_results(region, target_service, target_endpoint)
    print("pid: {}, data: {}".format(os.getpid(), data))
    response = dynamodb_write(data)

if __name__ == "__main__":
    pool = mp.Pool(mp.cpu_count())
    result = pool.map(spawn_work, ['eu-west-1', 'us-east-1', 'us-west-1', 'us-west-2'])

When running it locally, I can see that each job ran in its own pid:

$ python foo.py
pid: 31224, data: {'attempts': 5, 'min': 312.46, 'timestamp': '2019-02-05T17:10:32', 'address': 'dynamodb.eu-west-1.amazonaws.com', 'max': 816.25, 'region': 'eu-west-1_dynamodb_eu-west-1', 'avg': 481.80199999999996, 'port': 443, 'regionTo': 'eu-west-1', 'results': [{'seq': 1, 'time': '816.25'}, {'seq': 2, 'time': '331.50'}, {'seq': 3, 'time': '597.22'}, {'seq': 4, 'time': '312.46'}, {'seq': 5, 'time': '351.58'}]}

pid: 31225, data: {'attempts': 5, 'min': 312.46, 'timestamp': '2019-02-05T17:10:32', 'address': 'dynamodb.us-east-1.amazonaws.com', 'max': 816.25, 'region': 'eu-west-1_dynamodb_us-east-1', 'avg': 481.80199999999996, 'port': 443, 'regionTo': 'us-east-1', 'results': [{'seq': 1, 'time': '816.25'}, {'seq': 2, 'time': '331.50'}, {'seq': 3, 'time': '597.22'}, {'seq': 4, 'time': '312.46'}, {'seq': 5, 'time': '351.58'}]}

pid: 31226, data: {'attempts': 5, 'min': 312.46, 'timestamp': '2019-02-05T17:10:32', 'address': 'dynamodb.us-west-1.amazonaws.com', 'max': 816.25, 'region': 'eu-west-1_dynamodb_us-west-1', 'avg': 481.80199999999996, 'port': 443, 'regionTo': 'us-west-1', 'results': [{'seq': 1, 'time': '816.25'}, {'seq': 2, 'time': '331.50'}, {'seq': 3, 'time': '597.22'}, {'seq': 4, 'time': '312.46'}, {'seq': 5, 'time': '351.58'}]}

pid: 31227, data: {'attempts': 5, 'min': 312.46, 'timestamp': '2019-02-05T17:10:32', 'address': 'dynamodb.us-west-2.amazonaws.com', 'max': 816.25, 'region': 'eu-west-1_dynamodb_us-west-2', 'avg': 481.80199999999996, 'port': 443, 'regionTo': 'us-west-2', 'results': [{'seq': 1, 'time': '816.25'}, {'seq': 2, 'time': '331.50'}, {'seq': 3, 'time': '597.22'}, {'seq': 4, 'time': '312.46'}, {'seq': 5, 'time': '351.58'}]}

Quite useful! Have a look at the link shared for more examples.

Thank You

Please feel free to show support by sharing this post, making a donation, subscribing, or reaching out to me if you want me to demo and write up on any specific tech topic.


Convert Float to Decimal Data Types for Boto3 DynamoDB Using Python

A quick post on a workaround when you need to convert float to decimal types.


One thing I really don't like about the AWS SDK for Python, specifically when it comes to DynamoDB, is that float types are not supported and that you should use Decimal types instead.

For example, my payload below:

>>> data
{'attempts': 5, 'min': 180.87, 'timestamp': '2019-02-05T15:48:27', 'address': 'dynamodb.us-east-1.amazonaws.com', 'max': 747.17, 'region': 'eu-west-1_dynamodb', 'avg': 311.32599999999996, 'port': 443, 'regionTo': 'us-east-1', 'results': [{'seq': 1, 'time': '747.17'}, {'seq': 2, 'time': '215.60'}, {'seq': 3, 'time': '230.67'}, {'seq': 4, 'time': '180.87'}, {'seq': 5, 'time': '182.32'}]}

Trying to write that as an item to my DynamoDB table, you will be faced with the exception below:

>>> ddb.put_item(Item=data)
TypeError: Float types are not supported. Use Decimal types instead.

One way around this is to use parse_float in json.loads():

>>> from decimal import Decimal
>>> import json
>>> ddb_data = json.loads(json.dumps(data), parse_float=Decimal)
>>> ddb_data
{u'max': Decimal('747.17'), u'min': Decimal('180.87'), u'timestamp': u'2019-02-05T15:48:27', u'region': u'eu-west-1_dynamodb', u'regionTo': u'us-east-1', u'results': [{u'seq': 1, u'time': u'747.17'}, {u'seq': 2, u'time': u'215.60'}, {u'seq': 3, u'time': u'230.67'}, {u'seq': 4, u'time': u'180.87'}, {u'seq': 5, u'time': u'182.32'}], u'attempts': 5, u'address': u'dynamodb.us-east-1.amazonaws.com', u'avg': Decimal('311.32599999999996'), u'port': 443}

Thank You

Please feel free to show support by sharing this post, making a donation, subscribing, or reaching out to me if you want me to demo and write up on any specific tech topic.


Paginate Through IAM Users on AWS Using Python and Boto3

When listing AWS IAM Users in Boto3, you will find that not all the users are retrieved. This is because they are paginated.

To do a normal list_users api call:

>>> import boto3
>>> iam = boto3.Session(region_name='eu-west-1', profile_name='default').client('iam')
>>> len(iam.list_users()['Users'])
100

However, I know there are more than 200 users, so we need to paginate through them:

>>> import boto3
>>> iam = boto3.Session(region_name='eu-west-1', profile_name='default').client('iam')
>>> paginator = iam.get_paginator('list_users')
>>> users = []
>>> all_users = []
>>> for response in paginator.paginate():
...     users.append(response['Users'])
...
>>> len(users)
3

>>> for iteration in xrange(len(users)):
...     for userobj in xrange(len(users[iteration])):
...         all_users.append((users[iteration][userobj]['UserName']))
...
>>> len(all_users)
210

For more information on this, have a look at AWS Documentation about Pagination
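
As a side note, the AWS CLI handles this pagination for you by default, so a quick count of all the users from the shell (assuming the same default profile) should be as simple as:

$ aws --profile default iam list-users --query 'length(Users)'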

Thank You

Please feel free to show support by sharing this post, making a donation, subscribing, or reaching out to me if you want me to demo and write up on any specific tech topic.


Setup a 3 Node Docker Swarm Cluster on Ubuntu 16.04

Docker Swarm is a Clustering and Orchestration Framework for the Docker ecosystem. Have a look at their official documentation for detailed information.

In this Tutorial we will Setup a 3 Node Docker Swarm Cluster and Demonstrate How Easy it is to Deploy a Web Application with 2 Replicas from a Docker Image.



Overview of What we will be Doing

  • Install Docker on 3 Servers with Ubuntu 16.04
  • Initialize the Swarm and Join the Worker Nodes
  • Create a Nginx Service with 2 Replicas
  • Do some Inspection: View some info on the Service

Prerequisites

3 Freshly Deployed Ubuntu 16.04 Servers (1GB Memory Servers will be good for development).

What is Docker

Docker is an Open Source Technology that allows you to create lightweight, isolated, reproducible application instances, which are called Containers. Docker is built on top of the LXC technology, so it uses Linux Containers and, as mentioned, it's lightweight compared to a traditional VM.

A Container is isolated and uses the Kernel of the Docker host; it also utilizes Kernel features such as cgroups and namespaces to provide that isolation.

Installing Docker Community Edition

Remove any older versions of Docker that might be present and install the dependencies:

$ sudo apt remove docker docker-engine -y
$ sudo apt install linux-image-extra-$(uname -r) linux-image-extra-virtual python-setuptools -y
$ sudo apt install apt-transport-https ca-certificates curl software-properties-common -y

Get the needed repository to setup Docker Community Edition:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the repository index and Install Docker Community Edition:

$ sudo apt update
$ sudo apt install docker-ce -y
$ sudo easy_install pip
$ sudo pip install docker-compose

Enable Docker on Startup and Start the Docker Engine:

$ sudo systemctl enable docker
$ sudo systemctl restart docker

If you would like to execute your docker commands without sudo, add your user to the docker group:

$ sudo usermod -aG docker $(whoami)
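
Note that the group change only applies to new sessions, so either log out and back in, or start a shell with the new group applied:

$ newgrp docker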

Test your Setup by Running a Hello World Container. You will see that if the image is not in the local docker image cache, it will be pulled from Docker Hub (or the respective docker registry); once the image is saved locally, docker will instantiate the container from that image:

$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
78445dd45222: Pull complete
Digest: sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

DNS Configuration

If you have a DNS Server you can configure the A Records for these hosts on DNS, but for simplicity, I will add the noted IP Addresses from the previous step into my /etc/hosts file so we can resolve names to IPs.

Open up the hosts file:

$ sudo vim /etc/hosts

In my example, my IP Addresses are:

192.0.2.41  manager
192.0.2.42  worker-1
192.0.2.43  worker-2

Repeat the above steps on the other 2 Servers and make note of the IP Addresses of each node. You should be able to ping and reach the nodes that were configured. Make sure to allow all traffic between these nodes.

Initialize the Swarm:

Now we will initialize the swarm on the manager node, and as we have more than one network interface, we will specify the --advertise-addr option:

$ docker swarm init --advertise-addr 192.0.2.41
Swarm initialized: current node (siqyf3yricsvjkzvej00a9b8h) is now a manager.

    To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 \
    192.0.2.41:2377

    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

From the response above, we received the join token that allows the workers to register with the manager node. If it's a scenario where you want to have more than one manager node, you can run docker swarm join-token manager to receive the join token for an additional manager.

Let’s add the two worker nodes to the manager:

$ [worker-1] docker swarm join --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 192.0.2.41:2377
This node joined a swarm as a worker.
$ [worker-2] docker swarm join --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 192.0.2.41:2377
This node joined a swarm as a worker.

To see the node status, so that we can determine if the nodes are active/available etc., list all the nodes in the swarm from the manager node:

[manager] $ docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
j14mte3v1jhtbm3pb2qrpgwp6    worker-1  Ready   Active
siqyf3yricsvjkzvej00a9b8h *  master    Ready   Active        Leader
srl5yzme5hxnzxal2t1efmwje    worker-2  Ready   Active

Reobtaining the Join Tokens

If at any time you lose your join token, it can be retrieved by running the following for the manager token:

$ docker swarm join-token manager -q
SWMTKN-1-67chzvi4epx28ii18gizcia8idfar5hokojz660igeavnrltf0-09ijujbnnh4v960b8xel58pmj

And the following to retrieve the worker token:

$ docker swarm join-token worker -q
SWMTKN-1-67chzvi4epx28ii18gizcia8idfar5hokojz660igeavnrltf0-acs21nn28v17uwhw0oqg5ibwx

Swarm Services in Docker use a declarative model, which means that you define the desired state of the service and rely on Docker to maintain this state. More information on this can be found in their Documentation.

At this moment, we will see that we have no services running in our swarm:

[manager] $ docker service ls
ID  NAME  MODE  REPLICAS  IMAGE

Deploying our First Service

Now onto the creation of a standard nginx service with 2 replicas, which means that there will be 2 containers of nginx running in our swarm.

But first, we need to create an overlay network, which is a network driver that creates a distributed network among multiple Docker daemon hosts. Swarm takes care of the routing automatically via the published port mappings, so even if your container sits on worker-2, hitting your manager node on the published port will route the request to the application that resides on the respective container.

To create an overlay network called mynet:

[manager] $ docker network create --driver overlay mynet

Now onto creating the Service. If any of these containers fail, they will be handled by the manager node and will be respawned to maintain the desired number that we set with the replicas option:

[manager] $ docker service create --name my-web --publish 8080:80 --replicas 2 --network mynet nginx

Let’s have a look at our nginx service:

[manager] $ docker service ls
ID            NAME    MODE        REPLICAS  IMAGE
1okycpshfusq  my-web  replicated  2/2       nginx:latest

Once we see that the replica count is 2/2, our service is ready.
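
Since the service is declarative, changing the desired state later is just a matter of declaring a new replica count and letting Swarm converge to it, for example:

[manager] $ docker service scale my-web=3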

To see on which nodes the containers that make up our service are running:

[manager] $ docker service ps my-web
ID            NAME      IMAGE         NODE      DESIRED STATE  CURRENT STATE           ERROR  PORTS
k0qqrh8s0c2d  my-web.1  nginx:latest  worker-1  Running        Running 30 seconds ago
nku9wer6tmll  my-web.2  nginx:latest  worker-2  Running        Running 30 seconds ago

From the above output, we can see that worker-1 and worker-2 are serving the containers for our service. We can also retrieve more information about our service by using the inspect option, which will give you a detailed response about the service in JSON format:

[manager] $ docker service inspect my-web

We can get the Endpoint Port info by using inspect with the --format parameter to filter the output:

[manager] $ docker service inspect --format="{{json .Endpoint.Ports}}" my-web | python -m json.tool

From the output we will find that the PublishedPort is the port that we expose, which will be the listener. Our TargetPort will be the port that the container listens on:

[
    {
        "Protocol": "tcp",
        "PublishMode": "ingress",
        "PublishedPort": 8080,
        "TargetPort": 80
    }
]

Now that we went through the inspection of our service, it's time to test our base nginx service.

Testing Nginx in our Swarm

Make a request against your docker node manager address on the port that was exposed, in this case 8080:

$ curl -I http://docker-node-manager-ip:8080

HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Thu, 10 Jan 2019 14:48:40 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 02 Oct 2018 14:49:27 GMT
Connection: keep-alive
ETag: "5bb38577-264"
Accept-Ranges: bytes
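
Thanks to the routing mesh described earlier, the same request against any other node in the swarm (a worker's address as well) should return the same response:

$ curl -I http://worker-2:8080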

Now we have successfully set up a 3 node docker swarm cluster and deployed a basic nginx service to our swarm. Please have a look at my other Docker Swarm Tutorials for more content.

Thank You

Please feel free to show support by sharing this post, making a donation, subscribing, or reaching out to me if you want me to demo and write up on any specific tech topic.

Thanks for reading!

Fix Mac High Sierra Opendirectoryd Too Many Corpses Being Created Issue

This morning my brother's iMac gave some boot issues. The resolution to the issue was to drop into a terminal, rename the mbr_cache directory and reboot.

Steps to Resolution

When booting, the loading bar got stuck as seen below:

Starting to investigate, he used cmd+s to log on to single user mode, and he noticed the error: crashed: opendirectoryd. Too many corpses being created, as seen from the screenshot below:

After some troubleshooting he had to hard reboot his mac, hit cmd+r repeatedly until he loaded his mac into recovery mode:

From there, select Utilities -> Terminal from the top dropdown, then change into the directory where the cache folder that needs to be moved resides:

$ cd /Volumes/Macintosh\ HD/var/db/caches/opendirectory

List to see if the cache directory is present:

$ ls -la | grep cache
-rw-------- root wheel 28655   Jan 3    22:22 mbr_cache

Rename the cache directory:

$ mv ./mbr_cache ./mbr_cache_old

Once that is done, reboot:

$ reboot

If you experienced a similar issue, you should be able to see the login screen after a successful boot.

Thank You

Please feel free to show support by sharing this post, making a donation, subscribing, or reaching out to me if you want me to demo and write up on any specific tech topic.



Thanks for reading!

Tutorial on Using Gitlab CI/CD Pipelines to Deploy Your Python Flask Restful API With Postgres on Heroku

Today we will build a Restful API using Python Flask and SQLAlchemy with Postgres as our Database, testing with Python Unittest, a CI/CD Pipeline on Gitlab, and Deployment to Heroku.

From our previous post, we demonstrated setting up a Custom Gitlab Runner on Your Own Server for Gitlab CI.

Heroku

If you don’t have an account already, Heroku offer’s 5 free applications in their free tier account. Once you have created your account, create 2 applications. I named mine flask-api-staging and flask-api-prod.

You can create the applications via the cli or the ui; from the ui it will look more or less like this:

Select an app name, check if the name is available, then select create. Note down the name and config as we will use it in our .gitlab-ci.yml config:

Heroku API Key

To allow the deployment of applications to Heroku from Gitlab, we need to generate an API Key on Heroku and save the config in Gitlab.

Head over to your Heroku Dashboard, select Account Settings, scroll to the API Key section and generate an API Key.

Head over to your Gitlab Repository, select Settings -> CI/CD, then under Variables enter the Key HEROKU_API_KEY with the secret of the API Key as the Value, and select Save Variable.

We will reference this variable from our deploy steps.

Heroku Postgres Add-on

Heroku offers a free Postgres Add-On. To activate it: select your application, select Resources, search for the Heroku Postgres Add-on, select the Hobby Dev Free version and select Provision.

Our Application Code

Clone your repository, then let's start by creating our Flask API. Note that this post focuses more on Gitlab CI/CD than on the details of the Flask Application.

Create the files that we will need:

$ touch app.py config.cfg requirements.txt tests.py Procfile

Let’s start by populating our configuration for our flask app: config.cfg

#SQLALCHEMY_DATABASE_URI='sqlite:///database.db'
SQLALCHEMY_TRACK_MODIFICATIONS=False

Our Flask Application: app.py

Note that we are using flask-heroku; with this package Heroku will automatically discover your database configuration using environment variables. So if you have a postgres add-on, you don't need to specify the location of your database.

If you want to use sqlite, you can remove the heroku instantiation and uncomment the SQLALCHEMY_DATABASE_URI property in your config.cfg

from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy
from flask_marshmallow import Marshmallow
from flask_heroku import Heroku
from passlib.hash import sha256_crypt
from datetime import datetime

app = Flask(__name__)
app.config.from_pyfile('config.cfg')
heroku = Heroku(app)
db = SQLAlchemy(app)
ma = Marshmallow(app)

## --Database Models--
class Member(db.Model):
    __tablename__ = 'members'

    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    email = db.Column(db.String(255), unique=True, nullable=False)
    username = db.Column(db.String(50), unique=True)
    password_hash = db.Column(db.String(100))
    firstname = db.Column(db.String(50), unique=False)
    lastname = db.Column(db.String(50), unique=False)
    registered_on = db.Column(db.DateTime, nullable=False)

class MemberSchema(ma.ModelSchema):
    class Meta:
        model = Member
        fields = ('id', 'username', 'email')

member_schema = MemberSchema(strict=True, only=('id', 'username'))
members_schema = MemberSchema(strict=True, many=True)

## --Views--
@app.route('/')
def index():
    return jsonify({'message': 'ok'}), 200

# list users
@app.route('/api/user', methods=['GET'])
def list_users():
    all_users = Member.query.all()
    result = members_schema.dump(all_users)
    return jsonify(result.data)

# get user
@app.route('/api/user/<int:id>', methods=['GET'])
def get_user(id):
    user = Member.query.get(id)
    result = member_schema.dump(user)
    return jsonify(result.data)

# add user
@app.route('/api/user', methods=['POST'])
def add_user():
    email = request.json['email']
    username = request.json['username']
    password_hash = sha256_crypt.encrypt(request.json['password'])
    firstname = request.json['firstname']
    lastname = request.json['lastname']
    new_user = Member(email=email, username=username, password_hash=password_hash, firstname=firstname, lastname=lastname, registered_on=datetime.utcnow())
    try:
        db.session.add(new_user)
        db.session.commit()
        result = member_schema.dump(Member.query.get(new_user.id))
        return jsonify({'member': result.data})
    except:
        db.session.rollback()
        result = {'message': 'error'}
        return jsonify(result)

# update user
@app.route('/api/user/<int:id>', methods=['PUT'])
def update_user(id):
    user = Member.query.get(id)
    username = request.json['username']
    email = request.json['email']
    user.email = email
    user.username = username
    db.session.commit()
    return member_schema.jsonify(user)

# delete user
@app.route('/api/user/<int:id>', methods=['DELETE'])
def delete_user(id):
    user = Member.query.get(id)
    db.session.delete(user)
    db.session.commit()
    return jsonify({'message': '{} has been deleted'.format(user.username)})

if __name__ == '__main__':
    app.run()

Our tests: tests.py

import unittest
import app as myapi
import json
import sys

class TestFlaskApi(unittest.TestCase):
    def setUp(self):
        self.app = myapi.app.test_client()

    def test_hello_world(self):
        response = self.app.get('/')
        self.assertEqual(
            json.loads(response.get_data().decode(sys.getdefaultencoding())),
            {"message": "ok"}
        )

if __name__ == '__main__':
    unittest.main()

Our requirements file: requirements.txt

Click==7.0
Flask==1.0.2
flask-heroku==0.1.9
flask-marshmallow==0.9.0
Flask-SQLAlchemy==2.3.2
gunicorn==19.9.0
itsdangerous==1.1.0
Jinja2==2.10
MarkupSafe==1.1.0
marshmallow==2.17.0
marshmallow-sqlalchemy==0.15.0
passlib==1.7.1
psycopg2-binary==2.7.6.1
six==1.12.0
SQLAlchemy==1.2.15
Werkzeug==0.14.1

Our Procfile for Heroku: Procfile

web: gunicorn app:app

And lastly, our gitlab-ci configuration which will include our build, test and deploy steps. As soon as a commit to master is received, the pipeline will be activated. Note that our production deploy step is a manual trigger.

Our config for .gitlab-ci.yml. Remember to replace the Heroku app names with your own.

image: rbekker87/build-tools:latest

stages:
  - ver
  - init
  - tests
  - deploy

ver:
  stage: ver
  script:
    - python --version
    - whoami

init:
  stage: init
  script:
    - apk add postgresql-dev --no-cache
    - pip install psycopg2-binary
    - pip install -r requirements.txt

run_tests:
  stage: tests
  script:
    - apk add postgresql-dev --no-cache
    - pip install psycopg2-binary
    - pip install -r requirements.txt
    - python tests.py

deploy_staging:
  stage: deploy
  script:
    - git remote add heroku https://heroku:$HEROKU_API_KEY@git.heroku.com/flask-api-staging.git
    - git push heroku master
    - echo "Deployed to Staging Server https://flask-api-staging.herokuapp.com"
  environment:
    name: staging
    url: https://flask-api-staging.herokuapp.com/
  only:
    - master

deploy_production:
  stage: deploy
  script:
    - git remote add heroku https://heroku:$HEROKU_API_KEY@git.heroku.com/flask-api-prod.git
    - git push heroku master
    - echo "Deployed to Production Server https://flask-api-prod.herokuapp.com"
  environment:
    name: production
    url: https://flask-api-prod.herokuapp.com/
  when: manual
  only:
    - master

Send to Gitlab:

Once everything is populated, stage your changes, commit your work and push to master:

$ git add .
$ git commit -m "blogpost demo commit"
$ git push origin master

Once the code has been pushed to master, gitlab will pick it up and trigger the pipeline to run.

Gitlab Pipelines

Head over to Gitlab and select CI/CD -> Pipelines. You should see a running pipeline; select it, and you should see the overview of all your jobs:

If everything has passed you should see the Passed status as shown above.

You will notice that the staging environment has been deployed. Now you can do some testing, and when you are happy with it, you can select the play button on the pipelines dashboard, which will deploy to production.

Creating the Tables on Postgres

Before we can interact with our API, we need to provision the postgres tables from the database models that we wrote in our application.

Open up a Python shell on Heroku and initialize the tables:

$ heroku run python -a flask-api-prod
>>> from app import db
>>> db.create_all()
>>> exit()

Testing the API:

Now that everything is up and running, it's time to test our API.

List the users:

$ curl https://flask-api-staging.herokuapp.com/api/user
[]

Create a User:

$ curl -H 'Content-Type: application/json' -XPOST https://flask-api-staging.herokuapp.com/api/user -d '{"username": "ruanb", "password": "pass", "email": "r@r.com", "firstname": "ruan", "lastname": "bekker"}'
{
  "member": {
    "id": 1,
    "username": "ruanb"
  }
}

List Users:

$ curl -H 'Content-Type: application/json' -XGET https://flask-api-staging.herokuapp.com/api/user
[
  {
    "email": "ruan@r.com",
    "id": 1,
    "username": "ruanb"
  }
]

Update a User’s email address:

$ curl -H 'Content-Type: application/json' -XPUT https://flask-api-staging.herokuapp.com/api/user/1 -d '{"username": "ruanb", "email": "ruan@r.com"}'
{
  "id": 1,
  "username": "ruanb"
}

Retrieve a single user:

$ curl -H 'Content-Type: application/json' -XGET https://flask-api-staging.herokuapp.com/api/user/1
{
  "email": "ruan@r.com",
  "id": 1,
  "username": "ruanb"
}

Delete User:

$ curl -H 'Content-Type: application/json' -XDELETE https://flask-api-staging.herokuapp.com/api/user/1
{
  "message": "ruanb has been deleted"
}

Troubleshooting

I had some issues with Heroku; one was that after I deployed, I received this error in Heroku's logs:

code=H14 desc="No web processes running" method=GET path="/"
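
You can tail the logs yourself with the Heroku CLI, using the staging app name from earlier as an example:

$ heroku logs --tail -a flask-api-staging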

I just had to scale my web dyno to 1:

$ heroku ps:scale web=1 -a flask-api-staging
Scaling dynos... done, now running web at 1:Free

Have a look at their documentation if you need help with the heroku cli.

And to troubleshoot within the dyno, you can exec into it by running this:

$ heroku ps:exec -a flask-api-staging

I seriously dig Gitlab-CI, and with this demonstration you can see how easy it is to set up a CI/CD Pipeline on Gitlab and deploy it to Heroku.

Resources:

The code for this demo is available at: gitlab.com/rbekker87/demo-cicd-flask-heroku

For more blog posts on Gitlab, have a look at my gitlab category on blog.ruanbekker.com

Thank You

Please feel free to show support by sharing this post, making a donation, subscribing, or reaching out to me if you want me to demo and write up on any specific tech topic.



Thanks for reading!

Setup a Gitlab Runner on Your Own Server to Run Your Jobs That Gets Triggered From Gitlab CI

In our previous post, we went through setting up a Basic CI Pipeline on Gitlab, in conjunction with Gitlab CI which coordinates your jobs, and we used the Shared Runners, which run your jobs on Gitlab's Infrastructure.

In Gitlab, you have Shared Runners and your Own Runners, which are used to run your jobs and send the results back to GitLab.

In this tutorial we will Setup a Server with gitlab-runner and Docker on Ubuntu and then Setup a Basic Pipeline to Utilize your Gitlab Runner.

Setup Docker

Install Docker:

$ sudo apt update && sudo apt upgrade -y
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common -y
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

$ sudo apt update
$ sudo apt install docker-ce -y
$ docker run hello-world

Install and Setup Gitlab Runner

This setup is intended for Linux 64-bit; for other distributions, have a look at their docs.

Install the Runner:

$ wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64
$ chmod +x /usr/local/bin/gitlab-runner
$ useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
$ gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
$ gitlab-runner start

Register the Runner. The Gitlab-CI Token is available in your CI/CD Settings panel from the UI: https://gitlab.com/<account>/<repo>/settings/ci_cd

$ gitlab-runner register
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://gitlab.com/

Please enter the gitlab-ci token for this runner:
__masked__

Please enter the gitlab-ci description for this runner:
[my-runner]: my-runner

Please enter the gitlab-ci tags for this runner (comma separated):
my-runner,foobar
Registering runner... succeeded                     runner=66m_339h

Please enter the executor: docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh, virtualbox, docker+machine, kubernetes:
docker

Please enter the default Docker image (e.g. ruby:2.1):
alpine:latest

Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

Verify the Status and check that Docker and Gitlab Runner are enabled on startup:

$ gitlab-runner status
Runtime platform                                    arch=amd64 os=linux pid=30363 revision=7f00c780 version=11.5.1
gitlab-runner: Service is running!

$ systemctl is-enabled gitlab-runner
enabled

$ systemctl is-enabled docker
enabled

Gitlab-CI Config for Shared Runners

If you would like to use the shared runners that Gitlab Offers, the .gitlab-ci.yml config will look like this:

stages:
  - build
  - test

build:
  stage: build
  script:
    - echo "this is building"
    - hostname
    - mkdir builds
    - touch builds/data.txt
    - echo "true" > builds/data.txt
  artifacts:
    paths:
      - builds/

test:
  stage: test
  script:
    - echo "this is testing"
    - hostname
    - test -f builds/data.txt
    - grep "true" builds/data.txt

Gitlab-CI Config for your own Gitlab Runner

Gitlab utilizes the tags that were specified on registration to determine where the jobs get executed. For more information on this, have a look at their docs.

The .gitlab-ci.yml config for using your gitlab runner:

stages:
  - build
  - test

build:
  stage: build
  tags:
    - my-runner
  script:
    - echo "this is building"
    - hostname
    - mkdir builds
    - touch builds/data.txt
    - echo "true" > builds/data.txt
  artifacts:
    paths:
      - builds/

test:
  stage: test
  tags:
    - my-runner
  script:
    - echo "this is testing"
    - hostname
    - test -f builds/data.txt
    - grep "true" builds/data.txt

Trigger and Check Docker

Commit the config to master and let your pipeline run its jobs. Upon completion, have a look at docker on your server for the containers that the jobs ran in:

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                          PORTS               NAMES
04292a78de0b        c04b8be95e1e        "gitlab-runner-cache.."  About a minute ago   Exited (0) About a minute ago                       runner-xx-project-xx-concurrent-0-cache-3cxx0
49b1b3c4adf9        c04b8be95e1e        "gitlab-runner-cache.."  About a minute ago   Exited (0) About a minute ago                       runner-xx-project-xx-concurrent-0-cache-6cxxa
422b23191e8c        hello-world         "/hello"                 24 minutes ago       Exited (0) 24 minutes ago                           wizardly_meninsky

As we know, each job gets executed in a different container; you can see from the output above that there were 2 different containers for the 2 jobs that were specified in our pipeline.


Thank You

Please feel free to show support by sharing this post, making a donation, subscribing, or reaching out to me if you want me to demo and write up on any specific tech topic.



Thanks for reading!

Local Dev Environment for Wordpress Using Docker Compose

Let’s setup a local development environment with Docker, Wordpress, MySQL using Docker Compose

Docker Compose File

Let’s look at our docker-compose.yml file:

version: '3.1'

services:

  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
    networks:
      - wordpress

  mysql:
    image: mysql:5.7
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    networks:
      - wordpress

networks:
  wordpress:

Environment Variables for the MySQL Docker image are:

- MYSQL_ROOT_PASSWORD
- MYSQL_DATABASE
- MYSQL_USER, MYSQL_PASSWORD
- MYSQL_ALLOW_EMPTY_PASSWORD
- MYSQL_RANDOM_ROOT_PASSWORD
- MYSQL_ONETIME_PASSWORD

More info can be viewed on this resource: hub.docker.com/_/mysql/

Launching our Wordpress Application:

Let's deploy wordpress:

$ docker-compose up
Creating network "wordpress_wordpress" with the default driver
Creating wordpress_mysql_1_3e6e3cfe07b1     ... done
Creating wordpress_wordpress_1_a9cb16f277af ... done
Attaching to wordpress_wordpress_1_9227f3d3e587, wordpress_mysql_1_65cc98d222d0

Accessing Wordpress

You should be able to access Wordpress on http://localhost:8080/ (port 8080 is the published port in the compose file).
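
A quick way to verify that the container is serving, assuming the stack from above is running:

$ curl -I http://localhost:8080/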

Local Dev Environment for Mediawiki Using Docker Compose

Let’s setup a local development environment with Docker, Mediawiki, MySQL using Docker Compose

Docker Compose File

Let’s look at our docker-compose.yml file:

version: "3.4"

services:

  db:
    image: mysql:5.6
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=mw
      - MYSQL_DATABASE=mediawiki
      - MYSQL_PASSWORD=pass
    volumes:
      - /Users/ruan/workspace/docker/mediawiki/mediawiki-mysql-data:/var/lib/mysql
    networks:
      - mediawiki
    ports:
      - 3306:3306

  memcached:
    image: rbekker87/memcached:alpine
    environment:
      - MEMCACHED_USER=memcached
      - MEMCACHED_HOST=0.0.0.0
      - MEMCACHED_PORT=11211
      - MEMCACHED_MEMUSAGE=128
      - MEMCACHED_MAXCONN=1024
    networks:
      - mediawiki

  mediawiki:
    image: benhutchins/mediawiki:latest
    networks:
      - mediawiki
    environment:
      - MEDIAWIKI_DB_TYPE=mysql
      - MEDIAWIKI_DB_HOST=db
      - MEDIAWIKI_DB_USER=mw
      - MEDIAWIKI_DB_PASSWORD=pass
      - MEDIAWIKI_SITE_SERVER=http://localhost
      - MEDIAWIKI_SITE_NAME="My Lekke Wiki"
      - MEDIAWIKI_SITE_LANG=en
      - MEDIAWIKI_ADMIN_USER=admin
      - MEDIAWIKI_ADMIN_PASS=password123
      - MEDIAWIKI_UPDATE=true
      - MEDIAWIKI_ENABLE_SSL=false
    volumes:
      - /Users/ruan/workspace/docker/mediawiki/mediawiki-data:/data
    ports:
      - 80:80
    depends_on:
      - db
      - memcached

networks:
  mediawiki:

Your current working directory in this case: /Users/ruan/workspace/docker/mediawiki

Environment Variables for the MySQL Docker image are:

- MYSQL_ROOT_PASSWORD
- MYSQL_DATABASE
- MYSQL_USER, MYSQL_PASSWORD
- MYSQL_ALLOW_EMPTY_PASSWORD
- MYSQL_RANDOM_ROOT_PASSWORD
- MYSQL_ONETIME_PASSWORD

More info can be viewed on this resource: hub.docker.com/_/mysql/

Launching our Mediawiki Application:

Let's deploy mediawiki:

$ docker-compose up
Creating network "mediawiki_mediawiki" with the default driver
Creating mediawiki_memcached_1_bbbe8d3fa8b3 ... done
Creating mediawiki_db_1_257775fcf65b        ... done
Creating mediawiki_mediawiki_1_56813d66cbe2 ... done

Accessing Mediawiki

You should be able to access Mediawiki on http://localhost:80/
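
Since the db service is part of the stack, you can also hop onto MySQL to poke around, using the credentials from the compose file; a sketch:

$ docker-compose exec db mysql -u mw -ppass mediawiki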

Resources: