Ruan Bekker's Blog

From a Curious mind to Posts on Github

Using Python Boto3 and DreamHost's DreamObjects to Interact With Their Object Storage Offering

In this post I will demonstrate how to interact with DreamHost's Object Storage Service, called DreamObjects, using the Python Boto3 library. DreamHost offers Object Storage at great pricing; for more information, have a look at their Documentation.

What's on the Menu:

We will do the following:

  • List Buckets
  • List Objects
  • Put Object
  • Get Object
  • Upload Object
  • Download Object
  • Delete Object(s)

Configuration

First we need to configure credentials by providing the access key and secret access key provided by DreamHost:

$ pip install awscli
$ aws configure --profile dreamhost

After your credentials are set in your profile, we need to import boto3 and instantiate the S3 client with our profile name, region name and endpoint URL:

>>> import boto3
>>> session = boto3.Session(region_name='us-west-2', profile_name='dreamhost')
>>> s3 = session.client('s3', endpoint_url='https://objects-us-west-1.dream.io')

List Buckets:

To list our Buckets:

>>> response = s3.list_buckets()
>>> print(response)
{u'Owner': {u'DisplayName': 'foobar', u'ID': 'foobar'}, u'Buckets': [{u'CreationDate': datetime.datetime(2017, 4, 15, 21, 51, 3, 921000, tzinfo=tzutc()), u'Name': 'ruanbucket'}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': '', 'RequestId': 'tx00000000000000003cd88-005ac361f5-foobar-default', 'HTTPHeaders': {'date': 'Tue, 03 Apr 2018 11:13:57 GMT', 'content-length': '306', 'x-amz-request-id': 'tx00000000000000003cd88-005ac361f5-foobar-default', 'content-type': 'application/xml'}}}

>>> for bucket in response['Buckets']:
...     print(bucket['Name'])
...
ruanbucket

List Objects:

List all the Objects under the given prefix:

>>> response = s3.list_objects(Bucket='ruanbucket', Prefix='logs/sysadmins.co.za/access/')
>>> for obj in response['Contents']:
...     print obj['Key']
...
logs/sysadmins.co.za/access/access.log-2017-10-10.gz
logs/sysadmins.co.za/access/access.log-2017-10-11.gz
logs/sysadmins.co.za/access/access.log-2017-10-12.gz
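Note that list_objects returns at most 1000 keys per response, so to walk an entire prefix you can use a paginator. Below is a minimal sketch (the `iter_keys` helper is my own, not part of Boto3; bucket and prefix are the same as above):

```python
def iter_keys(client, bucket, prefix=''):
    """Yield every key under a prefix, following list_objects pagination."""
    paginator = client.get_paginator('list_objects')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        # 'Contents' is absent when a page (or the prefix) is empty
        for obj in page.get('Contents', []):
            yield obj['Key']

if __name__ == '__main__':
    import boto3  # deferred so the helper can be read without credentials
    session = boto3.Session(region_name='us-west-2', profile_name='dreamhost')
    s3 = session.client('s3', endpoint_url='https://objects-us-west-1.dream.io')
    for key in iter_keys(s3, 'ruanbucket', 'logs/sysadmins.co.za/access/'):
        print(key)
```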

Put Object:

Write text as the body to the destination key on the Bucket:

>>> response = s3.put_object(Bucket='ruanbucket', Body='My Name is Ruan\n', Key='uploads/docs/file.txt')
>>> print(response)
{u'Body': <botocore.response.StreamingBody object at 0x13cde10>, u'AcceptRanges': 'bytes', u'ContentType': 'binary/octet-stream', 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': '', 'RequestId': 'tx0000000000000000053f2-005ac3e0db-foobar-default', 'HTTPHeaders': {'content-length': '16', 'accept-ranges': 'bytes', 'last-modified': 'Tue, 03 Apr 2018 20:14:54 GMT', 'etag': '"292edceea84d1234465f725c3921fc2a"', 'x-amz-request-id': 'tx0000000000000000053f2-005ac3e0db-foobar-default', 'date': 'Tue, 03 Apr 2018 20:15:23 GMT', 'content-type': 'binary/octet-stream'}}, u'LastModified': datetime.datetime(2018, 4, 3, 20, 14, 54, tzinfo=tzutc()), u'ContentLength': 16, u'ETag': '"292edceea84d1234465f725c3921fc2a"', u'Metadata': {}}

List the Object that we have created in the Bucket:

>>> response = s3.list_objects(Bucket='ruanbucket', Prefix='uploads/')
>>> for obj in response['Contents']:
...     print obj['Key']
...
uploads/docs/file.txt

Get Object:

Read the value from the key that was uploaded:

>>> response = s3.get_object(Bucket='ruanbucket', Key='uploads/docs/file.txt')
>>> print(response['Body'].read())
My Name is Ruan

Upload Files:

Upload the file from disk to the Bucket:

>>> with open('myfile.txt', 'rb') as data:
...     s3.upload_fileobj(Fileobj=data, Bucket='ruanbucket', Key='uploads/docs/uploadobj.txt')
...

Read the contents from the uploaded file:

>>> response = s3.get_object(Bucket='ruanbucket', Key='uploads/docs/uploadobj.txt')
>>> print(response['Body'].read())
This is some text

Download File:

Download the file from the Bucket to the local disk:

>>> with open('downloaded.txt', 'wb') as data:
...     s3.download_fileobj(Bucket='ruanbucket', Key='uploads/docs/uploadobj.txt', Fileobj=data)
...

Read the file’s content from disk:

>>> print(open('downloaded.txt').read())
This is some text

Delete Object:

Delete one object:

>>> response = s3.delete_object(Bucket='ruanbucket', Key='uploads/docs/uploadobj.txt')
>>> print(response)
{'ResponseMetadata': {'HTTPStatusCode': 204, 'RetryAttempts': 0, 'HostId': '', 'RequestId': 'tx00000000000000000be5a-005ac3e61a-foobar-default', 'HTTPHeaders': {'date': 'Tue, 03 Apr 2018 20:37:46 GMT', 'x-amz-request-id': 'tx00000000000000000be5a-005ac3e61a-foobar-default'}}}

Delete Objects:

Delete more than one object with a single API call:

>>> response = s3.delete_objects(Bucket='ruanbucket', Delete={'Objects': [{'Key': 'uploads/docs/file.txt'}, {'Key': 'uploads/docs/file2.txt'}, {'Key': 'uploads/docs/file3.txt'}]})
>>> print(response)
{u'Deleted': [{u'Key': 'uploads/docs/file.txt'}, {u'Key': 'uploads/docs/file2.txt'}, {u'Key': 'uploads/docs/file3.txt'}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': '', 'RequestId': 'tx000000000000000011008-005ac3e951-foobar-default', 'HTTPHeaders': {'date': 'Tue, 03 Apr 2018 20:51:29 GMT', 'content-length': '270', 'x-amz-request-id': 'tx000000000000000011008-005ac3e951-217c0ac5-default', 'content-type': 'application/xml'}}}
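Since delete_objects accepts at most 1000 keys per call, deleting everything under a prefix can be sketched by batching the keys. Both helpers below are my own, not part of the Boto3 API, and this sketch ignores list_objects pagination for brevity:

```python
def chunks(items, size=1000):
    """Split a list into batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def delete_prefix(client, bucket, prefix):
    """Delete every object under a prefix, up to 1000 keys per API call."""
    response = client.list_objects(Bucket=bucket, Prefix=prefix)
    keys = [obj['Key'] for obj in response.get('Contents', [])]
    for batch in chunks(keys):
        client.delete_objects(
            Bucket=bucket,
            Delete={'Objects': [{'Key': k} for k in batch]}
        )
```

Usage would then be `delete_prefix(s3, 'ruanbucket', 'uploads/docs/')`.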

For more information on the above, have a look at Boto’s Documentation and DreamHost’s Website

Setup MongoDB Server on ARM64 Using Scaleway

I’ve been using Scaleway for the past 18 months and I must admit, I love hosting my Applications on their Infrastructure. They have expanded rapidly recently, and are currently deploying more infrastructure due to the high demand.

Scaleway is a Cloud Division of Online.net. They provide Baremetal and Cloud SSD Virtual Servers. I'm currently hosting a Docker Swarm Cluster, Blogs, Payara Java Application Servers, Elasticsearch and MongoDB Clusters with them, and I am really happy with the performance and stability of their services.

What will we be doing today:

Today I will be deploying a MongoDB Server on an ARM64-2GB Instance, which costs you 2.99 Euros per month, absolutely awesome pricing! After we install MongoDB we will set up authentication, and then go through a few basic examples on writing and reading from MongoDB.

Getting Started:

Logon to cloud.scaleway.com then launch an instance, which will look like the following:

After you have deployed your instance, SSH to it, and it should look like this:

Dependencies:

Get the repository and install MongoDB:

$ apt update
$ apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
$ echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
$ apt update && apt upgrade -y
$ apt install mongodb-org -y

Enable MongoDB on Boot:

$ systemctl enable mongod

Configuration:

Your configuration might look different from mine, so I recommend backing up your config first, as the following command will overwrite it with the configuration that I will be using for this demonstration:

$ cat > /etc/mongod.conf << EOF
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: false
  mmapv1:
    smallFiles: true

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

net:
  port: 27017
  bindIp: 0.0.0.0

processManagement:
  timeZoneInfo: /usr/share/zoneinfo

security:
  authorization: enabled
EOF

Restart MongoDB for the config changes to take effect:

$ systemctl restart mongod

Authentication:

Create the Authentication:

$ mongo
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.

> use admin
> db.createUser({user: "ruan", pwd: "pass123", roles:[{role: "root", db: "admin"}]})
Successfully added user: {
        "user" : "ruan",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}

> exit

Restart MongoDB and logon with your credentials:

$ systemctl restart mongod

$ mongo --authenticationDatabase admin --host localhost --port 27017 -u ruan -p
MongoDB shell version v3.6.3
Enter password:
connecting to: mongodb://localhost:27017/
MongoDB server version: 3.6.3
>

Write and Read from MongoDB

While you are on the MongoDB Shell, we will insert a couple of documents. First, switch to the database that you would like to write to:

> use testdb
switched to db testdb

Now we will write to the collection: collection1:

> db.collection1.insert({"name": "ruan", "surname": "bekker", "age": 31, "country": "south africa"})
WriteResult({ "nInserted" : 1 })

> db.collection1.insert({"name": "stefan", "surname": "bester", "age": 30, "country": "south africa"})
WriteResult({ "nInserted" : 1 })

To find all the documents in our collection:

> db.collection1.find()
{ "_id" : ObjectId("5ac15ff0f4a5500484defd23"), "name" : "ruan", "surname" : "bekker", "age" : 31, "country" : "south africa" }
{ "_id" : ObjectId("5ac16003f4a5500484defd24"), "name" : "stefan", "surname" : "bester", "age" : 30, "country" : "south africa" }

To prettify the output:

> db.collection1.find().pretty()
{
        "_id" : ObjectId("5ac15ff0f4a5500484defd23"),
        "name" : "ruan",
        "surname" : "bekker",
        "age" : 31,
        "country" : "south africa"
}
{
        "_id" : ObjectId("5ac16003f4a5500484defd24"),
        "name" : "stefan",
        "surname" : "bester",
        "age" : 30,
        "country" : "south africa"
}

To find a document with the key/value of name: ruan:

> db.collection1.find({"name": "ruan"}).pretty()
{
        "_id" : ObjectId("5ac15ff0f4a5500484defd23"),
        "name" : "ruan",
        "surname" : "bekker",
        "age" : 31,
        "country" : "south africa"
}

To view the database that you are currently switched to:

> db
testdb

To view all the databases:

> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
testdb  0.000GB

To view the collections in the database:

> show collections
collection1

> exit
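The same writes and reads can be done from Python. A minimal pymongo sketch (an assumption on my side, pymongo is not covered in this post; it needs `pip install pymongo` and the credentials created earlier):

```python
def mongo_uri(user, password, host='localhost', port=27017, auth_db='admin'):
    """Build a MongoDB connection URI that authenticates against auth_db."""
    return 'mongodb://{0}:{1}@{2}:{3}/?authSource={4}'.format(
        user, password, host, port, auth_db)

if __name__ == '__main__':
    from pymongo import MongoClient  # deferred so the helper reads standalone
    client = MongoClient(mongo_uri('ruan', 'pass123'))
    db = client['testdb']
    # equivalent of db.collection1.insert(...) in the mongo shell
    db.collection1.insert_one({'name': 'ruan', 'surname': 'bekker',
                               'age': 31, 'country': 'south africa'})
    # equivalent of db.collection1.find({"name": "ruan"})
    for doc in db.collection1.find({'name': 'ruan'}):
        print(doc)
```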

That was just a quick post on installing MongoDB on ARM64 using Scaleway. Try them out, and they are also hiring: careers.scaleway.com

Create a Logical Volume Using LVM on Ubuntu

Logical Volume Manager (LVM) adds an extra layer between the physical disks and the file system, which allows you to resize your storage on the fly, use multiple disks instead of one, and more.

Concepts:

Physical Volume: - Physical Volume represents the actual disk / block device.

Volume Group: - Volume Groups combines the collection of Logical Volumes and Physical Volumes into one administrative unit.

Logical Volume: - A Logical Volume is the conceptual equivalent of a disk partition in a non-LVM system.

File Systems: - File systems are built on top of logical volumes.

What we are doing today:

We have a 150GB disk installed on our server, located at /dev/vdb, which we will manage via LVM and mount under /mnt.

Dependencies:

Update and Install LVM:

$ apt update && apt upgrade -y
$ apt install lvm2 -y
$ systemctl enable lvm2-lvmetad
$ systemctl start lvm2-lvmetad

Create the Logical Volume:

Initialize the Physical Volume to be managed by LVM, create the Volume Group, then create the Logical Volume:

$ pvcreate /dev/vdb
$ vgcreate vg1 /dev/vdb
$ lvcreate -l 100%FREE -n vol1 vg1

Build an ext4 filesystem on the logical volume and mount it under /mnt:

$ mkfs.ext4 /dev/vg1/vol1
$ mount /dev/vg1/vol1 /mnt
$ echo '/dev/mapper/vg1-vol1 /mnt ext4 defaults,nofail 0 0' >> /etc/fstab

Other useful commands:

To list Physical Volume Info:

$ pvs
PV         VG   Fmt  Attr PSize   PFree
/dev/vdb   vg1  lvm2 a--  139.70g    0

To list Volume Group Info:

$ vgs
VG   #PV #LV #SN Attr   VSize   VFree
vg1    1   1   0 wz--n- 139.70g    0

And viewing the logical volume size from the volume group:

$ vgs -o +lv_size,lv_name
VG   #PV #LV #SN Attr   VSize   VFree LSize   LV
vg1    1   1   0 wz--n- 139.70g    0  139.70g vol1

Information about Logical Volumes:

$ lvs
LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
vol1 vg1  -wi-ao---- 139.70g

Setup Payara Application Server on Ubuntu 16.04

Today we will setup Payara 5 on Ubuntu 16.04

About:

Payara is an Open Source Java Application Server.

Pre-Requirements:

Update and Install Java 8:

$ apt update && apt upgrade -y
$ apt-get install wget curl unzip software-properties-common python-software-properties -y
$ add-apt-repository ppa:webupd8team/java
$ apt-get update
$ apt-get install oracle-java8-installer -y
$ source /etc/profile.d/jdk.sh

Install Payara:

Download and Install Payara 5:

$ cd /usr/local
$ wget --content-disposition 'https://info.payara.fish/cs/c/?cta_guid=b9609f35-f630-492f-b3c0-238fc55f489b&placement_guid=7cca6202-06a3-4c29-aee0-ca58af60528a&portal_id=334594&redirect_url=APefjpGt1aFvHUflpzz7Lec8jDz7CbeIIHZmgORmDSpteTCT2XjiMvjEzeY8yte3kiHi7Ph9mWDB7qUDEr96P0JS8Ev2ZFqahif2huSBfQV6lt4S6YUQpzPMrpHgf_n4VPV62NjKe8vLZBLnYkUALyR2mkrU3vWe7ME9XjHJqYPsHtxkHn-W7bYPFgY2LjEzKIYrdUsCviMgGrUh_LIbLxCESBa0N90vzaWKjK5EwZT021VaPP0jgfgvt0gF2UdtBQGcsTHrAlrb&hsutk=c279766888b67917a591ec4e209cb29a&canon=https%3A%2F%2Fwww.payara.fish%2Fall_downloads&click=5bad781c-f4f5-422d-ba2b-5e0c2bff7098&utm_referrer=https%3A%2F%2Fwww.google.co.za%2F&__hstc=229474563.c279766888b67917a591ec4e209cb29a.1519832301251.1521408251653.1521485598794.4&__hssc=229474563.7.1521485598794&__hsfp=2442083907'

$ unzip payara-5.181.zip
$ mv payara5 payara
$ rm -rf payara-5.181.zip

Permissions:

Create the Payara user and Grant Permissions:

$ echo 'export PATH=/usr/local/payara/glassfish/bin:$PATH' > /etc/profile.d/payara.sh
$ addgroup --system payara
$ adduser --system --shell /bin/bash --ingroup payara payara
$ echo 'payara soft nofile 32768' >> /etc/security/limits.conf
$ echo 'payara hard nofile 65536' >> /etc/security/limits.conf
$ chown -R payara:payara /usr/local/payara

Setup the Payara Domain:

Switch to the Payara user, delete the default domain and start the production domain. It is useful to configure the JVM options under the domain's config directory according to your server's resources.

$ su - payara

$ asadmin delete-domain domain1
$ asadmin change-admin-password --domain_name production # default blank pass for admin
$ asadmin --port 4848 enable-secure-admin production

$ asadmin start-domain production
$ asadmin stop-domain production

$ exit

SystemD Unit File:

Create the SystemD Unit File to be able to manage the state of the Payara Server via SystemD:

$ cat > /etc/systemd/system/payara.service << EOF
[Unit]
Description=Payara Server
After=network.target remote-fs.target
 
[Service]
User=payara
WorkingDirectory=/usr/local/payara/glassfish
Environment=PATH=/usr/local/payara/glassfish/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/payara/glassfish/bin/asadmin start-domain production
ExecReload=/usr/local/payara/glassfish/bin/asadmin restart-domain production
ExecStop=/usr/local/payara/glassfish/bin/asadmin stop-domain production
TimeoutStartSec=300
TimeoutStopSec=30
 
[Install]
WantedBy = multi-user.target
EOF

Reload the systemd daemon:

$ systemctl daemon-reload

Start the Payara Service:

$ systemctl enable payara
$ systemctl start payara

Verify that ports 4848, 8080 and 8181 are listening:

$ netstat -tulpn | grep java
tcp        0      0 :::8080                     :::*                        LISTEN      24542/java
tcp        0      0 :::4848                     :::*                        LISTEN      24542/java
tcp        0      0 :::8181                     :::*                        LISTEN      24542/java
...

Access Payara Admin UI:

Access the Payara DAS via https://ip-of-payara-server:4848

Expanding the Size of Your EBS Volume on AWS EC2 for Linux

Resizing your EBS Volume that is attached to your EC2 Linux instance on the fly, on Amazon Web Services.

We want to resize our EBS Volume from 100GB to 1000GB. At the moment the EBS Volume is 100GB, as you can see:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G   60K  7.9G   1% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
/dev/xvda1       99G   32G   67G  32% /

Now we want to resize the volume to 1000GB, without shutting down our EC2 instance.

Go to your EC2 Management Console, select your EC2 instance, scroll down to the EBS volume, click the EBS Volume ID, then select Actions, Modify Volume, and resize the disk to the needed size. As you can see, the disk is now 1000GB:

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0 1000G  0 disk
└─xvda1 202:1    0 1000G  0 part /

But our partition is still 100GB:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G   60K  7.9G   1% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
/dev/xvda1       99G   32G   67G  32% /

We need to use growpart and resize2fs to resize our partition:

$ sudo growpart /dev/xvda 1
CHANGED: disk=/dev/xvda partition=1: start=4096 old: size=209711070,end=209715166 new: size=2097147870,end=2097151966
$ sudo resize2fs /dev/xvda1
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 7, new_desc_blocks = 63
The filesystem on /dev/xvda1 is now 262143483 (4k) blocks long.

Note: If you are using XFS as your filesystem type, you will need to use xfs_growfs instead of resize2fs. (Thanks Donovan).

Example using XFS shown below:

$ sudo xfs_growfs /dev/xvda1

Now we have a partition resized to 1000GB:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G   60K  7.9G   1% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
/dev/xvda1      985G   33G  952G   4% /
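As an aside, the console resize step can also be driven from the API. A hedged boto3 sketch (the volume ID and region are placeholders, and it assumes AWS credentials are configured); you still need growpart and resize2fs (or xfs_growfs) inside the instance afterwards:

```python
def resize_volume(ec2, volume_id, size_gb):
    """Request an EBS volume resize via the EC2 ModifyVolume API call."""
    return ec2.modify_volume(VolumeId=volume_id, Size=size_gb)

if __name__ == '__main__':
    import boto3  # deferred so the helper reads standalone
    ec2 = boto3.client('ec2', region_name='eu-west-1')
    # 'vol-xxxxxxxx' is a placeholder for your EBS Volume ID
    print(resize_volume(ec2, 'vol-xxxxxxxx', 1000))
```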

Nginx Basic Authentication With Source IP Whitelisting

Quick post on how to set up HTTP Basic Authentication and whitelist IP-based sources so that they do not get prompted for authentication.

This could be useful for systems interacting with Nginx, so that they don’t have to provide authentication.

Dependencies:

Install nginx and the package required to create the auth file:

$ apt install nginx apache2-utils -y

Create the Password file:

$ htpasswd -c /etc/nginx/secrets admin

Configuration:

Create the site config:

$ rm -rf /etc/nginx/conf.d/*.conf
$ vim /etc/nginx/conf.d/default.conf
server {
    listen       80;
    server_name  localhost;

    location / {
        satisfy any;
        allow 127.0.0.1;
        deny all;

        auth_basic "restricted";
        auth_basic_user_file /etc/nginx/secrets;
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

Reload the Changes:

$ nginx -s reload

Testing:

Testing from our Whitelisted location (localhost):

curl -i http://127.0.0.1 
HTTP/1.1 200 OK

Testing from remote location:

$ curl -i http://localhost
HTTP/1.1 401 Unauthorized

$ curl -i http://admin:password@localhost
HTTP/1.1 200 OK
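The same checks can be done from Python. The helper below builds the Authorization header that `curl http://user:pass@host` sends (the admin/password credentials are the hypothetical ones from the curl test above, and the requests only work against a host running this nginx config):

```python
import base64
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def basic_auth_header(user, password):
    """Build the HTTP Basic Authorization header value for user:password."""
    token = base64.b64encode('{0}:{1}'.format(user, password).encode()).decode()
    return 'Basic ' + token

if __name__ == '__main__':
    # without credentials nginx answers 401
    try:
        print(urlopen('http://localhost').status)
    except HTTPError as e:
        print(e.code)
    # with basic auth we expect 200
    req = Request('http://localhost',
                  headers={'Authorization': basic_auth_header('admin', 'password')})
    print(urlopen(req).status)
```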

Populate Environment Variables From Docker Secrets With a Flask Demo App

In this post we will create a basic Python Flask web app on Docker Swarm. We will read our Flask host and Flask port from environment variables, which are populated from Docker Secrets via a Python script.

Our Directory Setup:

This can be retrieved from github.com/ruanbekker/docker-swarm-apps/tool-secrets-env-exporter, but I will place the code in here as well.

Dockerfile:
FROM alpine:edge
RUN apk add --no-cache python2 py2-pip && pip install flask
ADD exporter.py /exporter.py
ADD boot.sh /boot.sh
ADD app.py /app.py
CMD ["/bin/sh", "/boot.sh"]
exporter.py
import os
from glob import glob

for var in glob('/run/secrets/*'):
    k=var.split('/')[-1]
    v=open(var).read().rstrip('\n')
    os.environ[k] = v
    print("export {key}={value}".format(key=k,value=v))
app.py
import os
from flask import Flask

flask_host = str(os.environ['flask_host'])
flask_port = int(os.environ['flask_port'])

app = Flask(__name__)

@app.route('/')
def index():
    return 'ok\n'

if __name__ == '__main__':
    app.run(host=flask_host, port=flask_port)
boot.sh
#!/bin/sh
set -e
eval $(python /exporter.py)
python /app.py

Flow Information:

The exporter script reads all the secrets mounted to the container, formats each secret as a key/value pair, and exports them as environment variables to the current shell, where they are then read by the Flask application.
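An alternative to the eval/export step (a sketch of my own, not part of the original app) is to read the secret files directly from Python, with a fallback when a secret is missing:

```python
import os

def read_secret(name, secrets_dir='/run/secrets', default=None):
    """Read a Docker secret file and return its value, or a default."""
    path = os.path.join(secrets_dir, name)
    if not os.path.exists(path):
        return default
    with open(path) as f:
        # secrets created via `echo ... | docker secret create` end in a newline
        return f.read().rstrip('\n')

# in app.py this could replace the environment variable lookup, e.g.:
# flask_port = int(read_secret('flask_port', default='5001'))
```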

Usage:

Create Docker Secrets:

$ echo 5001 | docker secret create flask_port -
$ echo 0.0.0.0 | docker secret create flask_host -

Build and Push the Image:

$ docker build -t registry.gitlab.com/<user>/<repo>/<image>:<tag> .
$ docker push registry.gitlab.com/<user>/<repo>/<image>:<tag>

Create the Service, and specify the secrets that we created earlier:

$ docker service create --name webapp \
--secret source=flask_host,target=flask_host \
--secret source=flask_port,target=flask_port \
registry.gitlab.com/<user>/<repo>/<image>:<tag>

Exec into the container and list the directory where the secrets got populated:

$ ls /run/secrets/
flask_host  flask_port

Run netstat to see that the application is listening on the port from the created secret:

$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5001            0.0.0.0:*               LISTEN      7/python

Do a GET request on the Flask Application:

$ curl http://0.0.0.0:5001/
ok

Send SMS Messages With Python and Twilio via Their API

This post will guide you through the steps on how to send SMS messages with Python and Twilio. We will use the talaikis.com API to get a random quote that we will include in the body of the SMS.

Sign Up for a Trial Account:

Sign up for a trial account at Twilio, then create a number, which I will refer to as the sender number, and take note of your account ID and token.

Create the Config:

Create the config that will keep the account ID, token, sender number and recipient number:

config.py
secrets = {
    'account': 'xxxxxxxx',
    'token': 'xxxxxxx',
    'sender': '+1234567890',
    'receiver': '+0987654321'
}

Create the Client:

We will get a random quote via talaikis.com’s API which we will be using for the body of our text message, and then use twilio’s API to send the text message:

sms_client.py
from config import secrets
from twilio.rest import Client
import requests

twilio_accountid = secrets['account']
twilio_token = secrets['token']
twilio_receiver = secrets['receiver']
twilio_sender = secrets['sender']

quote_response = requests.get('https://talaikis.com/api/quotes/random').json()

client = Client(
    twilio_accountid,
    twilio_token
)

message = client.messages.create(
    to=twilio_receiver,
    from_=twilio_sender,
    body=quote_response['quote']
)

Message Preview:

Then within a couple of seconds your message should look more or less like this:

For more info, have a look at their docs: https://www.twilio.com/docs/

Golang: Reading From Files and Writing to Disk With Arguments

In our previous post we wrote a basic Golang app that reads the contents of a file and writes it back to disk, but in a static way, as we defined the source and destination filenames in the code.

Today we will use arguments to specify what the source and destination filenames should be instead of hardcoding it.

Our Golang Application:

We will be using if statements to determine whether the number of arguments provided is as expected; if not, a usage string is printed to stdout. Then we loop through the list of arguments to determine the values for our source and destination files.

Once it completes, it prints out the filenames that were used:

app.go
package main

import (
    "io/ioutil"
    "os"
    "fmt"
)

var (
    input_filename string
    output_filename string
)

func main() {

    if len(os.Args) < 5 {
        fmt.Printf("Usage: (-i/--input) 'input_filename' (-o/--output) 'output_filename' \n")
        os.Exit(0)
    }

    for i, arg := range os.Args {
        if arg == "-i" || arg == "--input" {
            input_filename = os.Args[i+1]
            }
        if arg == "-o" || arg == "--output" {
            output_filename = os.Args[i+1]
            }
        }

    input_file_content, error := ioutil.ReadFile(input_filename)

    if error != nil {
        panic(error)
    }

    fmt.Println("File used for reading:", input_filename)

    ioutil.WriteFile(output_filename, input_file_content, 0644)
    fmt.Println("File used for writing:", output_filename)
}

Build your application:

$ go build app.go

Run your application with no additional arguments to determine the expected behaviour:

$ ./app
Usage: (-i/--input) 'input_filename' (-o/--output) 'output_filename'

It works as expected. Now create a source file:

$ echo $RANDOM > myfile.txt

Run the application, and in this run, we will set the destination file as newfile.txt:

$ ./app -i myfile.txt -o newfile.txt
File used for reading: myfile.txt
File used for writing: newfile.txt

Checking out the new file:

$ cat newfile.txt
8568

Golang: Reading From Files and Writing to Disk With Golang

Today we will create a very basic application to read content from a file, and write the content from the file back to disk, but to another filename.

Basically, doing a copy of the file to another filename.

Golang Environment: Golang Docker Image

Dropping into a Golang Environment using Docker:

$ docker run -it golang:alpine sh

Our Golang Application

After we are in our container, let's write our app:

app.go
package main

import (
    "io/ioutil"
)

func main() {

    content, error := ioutil.ReadFile("source-data.txt")
    if error != nil {
        panic(error)
    }

    error = ioutil.WriteFile("destination-data.txt", content, 0644)
    if error != nil {
        panic(error)
    }
}

Building our application to a binary:

$ go build app.go

Creating our source-data.txt:

$ echo "foo" > source-data.txt

Running the Golang App:

When we run this app, it will read the content of source-data.txt and write it to destination-data.txt:

$ ./app

We can see that the file has been written to disk:

$ ls | grep data
destination-data.txt
source-data.txt

To make sure the data is the same, we can run an md5sum hash function on them:

$ md5sum source-data.txt
d3b07384d113edec49eaa6238ad5ff00  source-data.txt

$ md5sum destination-data.txt
d3b07384d113edec49eaa6238ad5ff00  destination-data.txt

Next:

This was a very static way of doing it, as you need to hardcode the filenames. In the next post I will show how to use arguments to make it more dynamic.