Ruan Bekker's Blog

From a Curious mind to Posts on Github

Experimenting With Python and TinyMongo, a MongoDB Wrapper for TinyDB

TinyMongo is a wrapper that gives you a MongoDB-like interface on top of TinyDB.

This is awesome for testing, where you need a local document-oriented database which is backed by a flat file. It feels just like using MongoDB, except that it's local, lightweight and uses TinyDB as the backend.

Installing Dependencies:

$ pip install tinymongo

Usage Examples:

Initialize tinymongo and create the database and collection:

>>> from tinymongo import TinyMongoClient
>>> connection = TinyMongoClient('foo')
>>> db_init = connection.mydb
>>> db = db_init.users

Insert a document, capture the document id, and search for that document:

>>> record_id = db.insert_one({'username': 'ruanb', 'name': 'ruan', 'age': 31, 'gender': 'male', 'location': 'south africa'}).inserted_id
>>> user_info = db.find_one({"_id": record_id})
>>> print(user_info)
{u'username': u'ruanb', u'name': u'ruan', u'gender': u'male', u'age': 31, u'_id': u'8d2ce01140ec11e888110242ac110004', u'location': u'south africa'}

Update a document: update the age attribute from 31 to 32:

>>> db.update_one({'_id': '8d2ce01140ec11e888110242ac110004'}, {'$set': {'age': 32}})
>>> user_info = db.find_one({'_id': '8d2ce01140ec11e888110242ac110004'})
>>> print(user_info)
{u'username': u'ruanb', u'name': u'ruan', u'gender': u'male', u'age': 32, u'_id': u'8d2ce01140ec11e888110242ac110004', u'location': u'south africa'}

Insert some more data:

>>> record_id = db.insert_one({'username': 'stefanb', 'name': 'stefan', 'age': 30, 'gender': 'male', 'location': 'south africa'}).inserted_id
>>> record_id = db.insert_one({'username': 'alexa', 'name': 'alex', 'age': 34, 'gender': 'male', 'location': 'south africa'}).inserted_id

Find all the users, sorted by descending age, oldest to youngest:

>>> response = db.find(sort=[('age', -1)])
>>> for doc in response:
...     print(doc)
...
{u'username': u'alexa', u'name': u'alex', u'gender': u'male', u'age': 34, u'_id': u'66b1cc3d40ee11e892980242ac110004', u'location': u'south africa'}
{u'username': u'ruanb', u'name': u'ruan', u'gender': u'male', u'age': 32, u'_id': u'8d2ce01140ec11e888110242ac110004', u'location': u'south africa'}
{u'username': u'stefanb', u'name': u'stefan', u'gender': u'male', u'age': 30, u'_id': u'fbe9da8540ed11e88c5e0242ac110004', u'location': u'south africa'}

Find the number of documents in the collection:

>>> db.find().count()
3
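
Since TinyDB persists everything to a flat file, you should also find the data on disk. On my side, TinyMongoClient('foo') created a foo directory with the database stored as JSON; note that the exact file layout below is an assumption based on TinyDB's default JSONStorage, so treat this as a sketch:

>>> import json
>>> with open('foo/mydb.json') as f:   # assumed path: <connection-folder>/<database>.json
...     data = json.load(f)
...
>>> list(data.keys())                  # each collection should appear as a TinyDB table
['users']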


Experimenting With Python and Flata, the Lightweight Document-Oriented Database

Flata is a lightweight document-oriented database, inspired by TinyDB and LowDB.

Why Flata?

Most of the time my mind gets into its curious state and I think about alternative ways of doing things, especially for testing lightweight apps. Today I wondered if there's any NoSQL-like software out there that is easy to spin up and is backed by a flat file, something like SQLite is for SQL-like services, but this time for something NoSQL-like.

So I stumbled upon TinyDB and Flata, which are really easy to use and awesome!

What will we be doing today:

  • Create Database / Table
  • Write to the Table
  • Update Documents from the Table
  • Scan the Table
  • Query the Table
  • Delete Documents from the Table
  • Purge the Table

Getting the Dependencies:

Flata is written in Python, so no external dependencies are needed. To install it:

$ pip install flata

Usage Examples:

My home working directory:

$ pwd
/home/ruan

This will be the directory where we will save our database in .json format.

Import the Dependencies:

>>> from flata import Flata, Query, where
>>> from flata.storages import JSONStorage

Create the Database file where all the data will be persisted:

>>> db_init = Flata('mydb.json', storage=JSONStorage)

Create the collection / table, with a custom id field. If the resource already exists, it will simply be retrieved:

>>> db_init.table('collection1', id_field = 'uid')

List the tables:

>>> db_init.all()
{u'collection1': {}}

The get method can only be used if the resource exists; we will assign the returned table to the db object:

>>> db = db_init.get('collection1')

Insert some data into our table:

>>> db.insert({'username': 'ruanb', 'name': 'ruan', 'age': 31, 'gender': 'male', 'location': 'south africa'})
{'username': 'ruanb', 'uid': 1, 'gender': 'male', 'age': 31, 'location': 'south africa', 'name': 'ruan'}

>>> db.insert({'username': 'stefanb', 'name': 'stefan', 'age': 30, 'gender': 'male', 'location': 'south africa'})
{'username': 'stefanb', 'uid': 2, 'gender': 'male', 'age': 30, 'location': 'south africa', 'name': 'stefan'}

>>> db.insert({'username': 'mikec', 'name': 'mike', 'age': 28, 'gender': 'male', 'location': 'south africa'})
{'username': 'mikec', 'uid': 3, 'gender': 'male', 'age': 28, 'location': 'south africa', 'name': 'mike'}

>>> db.insert({'username': 'sam', 'name': 'samantha', 'age': 24, 'gender': 'female', 'location': 'south africa'})
{'username': 'sam', 'uid': 4, 'gender': 'female', 'age': 24, 'location': 'south africa', 'name': 'samantha'}

>>> db.insert({'username': 'michellek', 'name': 'michelle', 'age': 32, 'gender': 'female', 'location': 'south africa'})
{'username': 'michellek', 'uid': 5, 'gender': 'female', 'age': 32, 'location': 'south africa', 'name': 'michelle'}

Scan the whole table:

>>> db.all()
[{u'username': u'ruanb', u'uid': 1, u'name': u'ruan', u'gender': u'male', u'age': 31, u'location': u'south africa'}, {u'username': u'stefanb', u'uid': 2, u'name': u'stefan', u'gender': u'male', u'age': 30, u'location': u'south africa'}, {u'username': u'mikec', u'uid': 3, u'name': u'mike', u'gender': u'male', u'age': 28, u'location': u'south africa'}, {u'username': u'sam', u'uid': 4, u'name': u'samantha', u'gender': u'female', u'age': 24, u'location': u'south africa'}, {u'username': u'michellek', u'uid': 5, u'name': u'michelle', u'gender': u'female', u'age': 32, u'location': u'south africa'}]

Query data from the table.

Query the table for the username => ruanb:

>>> import json
>>> q = Query()

>>> response = db.search(q.username == 'ruanb')
>>> print(json.dumps(response, indent=2))
[
  {
    "username": "ruanb",
    "uid": 1,
    "name": "ruan",
    "gender": "male",
    "age": 31,
    "location": "south africa"
  }
]

Query the table for all males aged 29 or older:

>>> db.search(( q.gender == 'male' ) & (q.age >= 29 ))
[
  {
    u'username': u'ruanb',
    u'uid': 1,
    u'name': u'ruan',
    u'gender': u'male',
    u'age': 31,
    u'location': u'south africa'
  },
  {
    u'username': u'stefanb',
    u'uid': 2,
    u'name': u'stefan',
    u'gender': u'male',
    u'age': 30,
    u'location': u'south africa'
  }
]

Query the table for everyone that is younger than 25, or male:

>>> db.search(( q.age < 25 ) | (q.gender == 'male' ) )
[
  {
    "username": "ruanb",
    "uid": 1,
    "name": "ruan",
    "gender": "male",
    "age": 31,
    "location": "south africa"
  },
  {
    "username": "stefanb",
    "uid": 2,
    "name": "stefan",
    "gender": "male",
    "age": 30,
    "location": "south africa"
  },
  {
    "username": "mikec",
    "uid": 3,
    "name": "mike",
    "gender": "male",
    "age": 28,
    "location": "south africa"
  },
  {
    "username": "sam",
    "uid": 4,
    "name": "samantha",
    "gender": "female",
    "age": 24,
    "location": "south africa"
  }
]
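
As a side note, the where helper that we imported earlier can express the same conditions as Query; a quick sketch which should return the same two documents as the males-29-or-older query above:

>>> db.search(( where('gender') == 'male' ) & ( where('age') >= 29 ))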

Update the location value: let's say Samantha relocated to New Zealand, and we need to update her location from South Africa to New Zealand:

>>> db.update({'location': 'new zealand'}, where('username') == 'sam' )
([4], [{u'username': u'sam', u'uid': 4, u'name': u'samantha', u'gender': u'female', u'age': 24, u'location': 'new zealand'}])

>>> db.search(q.username == 'sam')
[{u'username': u'sam', u'uid': 4, u'name': u'samantha', u'gender': u'female', u'age': 24, u'location': u'new zealand'}]

Delete a document by its id:

>>> db.remove(ids=[4])
([4], [])

Delete all documents matching a query, for this example, all people with the gender: male:

>>> db.remove(q.gender == 'male')
([1, 2, 3], [])

Delete all the data in the table:

>>> db.purge()

When we exit, we will find the database file that we created:

$ ls
mydb.json


Set Docker Environment Variables During Build Time

When using the ARG instruction in your Dockerfile, you can pass the --build-arg option to define the value for the key that you specify in your Dockerfile, and use it, for example, as an environment variable.

Today we will use ARG and ENV to set environment variables at build time.

The Dockerfile:

Our Dockerfile:

FROM alpine:edge
ARG NAME
ENV OWNER=${NAME:-NOT_DEFINED}
CMD ["sh", "-c", "echo env var: ${OWNER}"]

Building our Image, we will pass the value to our NAME argument:

$ docker build --build-arg NAME=james -t ruan:test .

Now when we run our container, we will notice that the build-time argument has been passed through to our environment variable in the running container:

$ docker run -it ruan:test
env var: james

When we build the image without specifying build arguments and then run the container:

$ docker build -t ruan:test .
$ docker run -it ruan:test
env var: NOT_DEFINED


Docker Environment Substitution With Dockerfile

The 12 Factor methodology is a general guideline that provides best practices for building applications. One of them is using environment variables to store application configuration.

What will we be doing:

In this post we will build a simple Docker application that returns the environment variable's value to standard out. We are using environment substitution, so if the environment variable is not provided, we will set a default value of NOT_DEFINED.

We will have the environment variable OWNER, and when no value is set for it, the NOT_DEFINED value will be returned.
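
The same default-value pattern is easy to mirror inside application code. As a hedged Python sketch (not part of the image we build below), the stdlib equivalent of ${OWNER:-NOT_DEFINED} looks like this:

import os

# falls back to NOT_DEFINED when the variable is absent,
# mirroring the ${OWNER:-NOT_DEFINED} shell substitution
owner = os.environ.get('OWNER', 'NOT_DEFINED')
print('env var: {}'.format(owner))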

The Dockerfile

Our Dockerfile:

FROM alpine:edge
ENV OWNER=${OWNER:-NOT_DEFINED}
CMD ["sh", "-c", "echo env var: ${OWNER}"]

Building the image:

$ docker build -t test:envs .

Putting it to action:

Now we will run a container and pass the OWNER environment variable as an option:

$ docker run -it -e OWNER=ruan test:envs
env var: ruan

When we run a container without specifying the environment variable:

$ docker run -it test:envs
env var: NOT_DEFINED



Using AWS SSM Parameter Store to Retrieve Secrets Encrypted by KMS Using Python

Today we will use Amazon Web Services' SSM service to store secrets in the Parameter Store, which we will encrypt using KMS.

Then we will read the data from SSM and decrypt it using our KMS key. We will end off by writing a Python script that reads the AWS credentials, authenticates with SSM and then reads the secret values that we stored.

The Do List:

We will break up this post in the following topics:

  • Create a KMS Key which we will use to Encrypt/Decrypt the Parameter in SSM
  • Create the IAM Policy which will be used to authorize the Encrypt/Decrypt by the KMS ID
  • Create the KMS Alias
  • Create the Parameter using PutParameter as a SecureString to use Encryption with KMS
  • Describe the Parameters
  • Read the Parameter with and without Decryption to determine the difference using GetParameter
  • Read the Parameters using GetParameters
  • Environment Variable Example

Create the KMS Key:

As the administrator, or root account, create the KMS Key:

>>> import boto3
>>> session = boto3.Session(region_name='eu-west-1', profile_name='personal')
>>> iam = session.client('iam')
>>> kms = session.client('kms')
>>> response = kms.create_key(
    Description='Ruan Test Key',
    KeyUsage='ENCRYPT_DECRYPT',
    Origin='AWS_KMS',
    BypassPolicyLockoutSafetyCheck=False,
    Tags=[{'TagKey': 'Name', 'TagValue': 'RuanTestKey'}]
)

>>> print(response['KeyMetadata']['KeyId'])
foobar-2162-4363-ba02-a953729e5ce6

Create the IAM Policy:

>>> policy_doc = """{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Stmt1517212478199",
        "Action": [
            "kms:Decrypt",
            "kms:Encrypt"
        ],
        "Effect": "Allow",
        "Resource": "arn:aws:kms:eu-west-1:0123456789012:key/foobar-2162-4363-ba02-a953729e5ce6"
    }]
}"""
>>> response = iam.create_policy(
    PolicyName='ruan-kms-test-policy',
    PolicyDocument=policy_doc,
    Description='Ruan KMS Test Policy'
)
>>> print(response['Policy']['Arn'])
arn:aws:iam::0123456789012:policy/ruan-kms-test-policy

Create the KMS Alias:

>>> response = kms.create_alias(AliasName='alias/ruan-test-kms', TargetKeyId='foobar-2162-4363-ba02-a953729e5ce6')
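
To verify that the alias was created, we can list the aliases and filter on the one we expect (the response shape follows the KMS ListAliases API):

>>> response = kms.list_aliases()
>>> print([a['AliasName'] for a in response['Aliases'] if a['AliasName'] == 'alias/ruan-test-kms'])
['alias/ruan-test-kms']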

Publish the Secrets to SSM:

As the administrator, write the secret values to the Parameter Store in SSM. We will publish a secret with the parameter name /test/ruan/mysql/db01/mysql_hostname and the value db01.eu-west-1.mycompany.com:

>>> from getpass import getpass
>>> secretvalue = getpass()
Password:

>>> print(secretvalue)
db01.eu-west-1.mycompany.com

>>> ssm = session.client('ssm')
>>> response = ssm.put_parameter(
    Name='/test/ruan/mysql/db01/mysql_hostname',
    Description='RuanTest MySQL Hostname',
    Value=secretvalue,
    Type='SecureString',
    KeyId='foobar-2162-4363-ba02-a953729e5ce6',
    Overwrite=False
)

Describe Parameters

Describe the Parameter that we wrote to SSM:

>>> response = ssm.describe_parameters(
    Filters=[{'Key': 'Name', 'Values': ['/test/ruan/mysql/db01/mysql_hostname']}]
)
>>> print(response['Parameters'][0]['Name'])
/test/ruan/mysql/db01/mysql_hostname

Reading from SSM:

Read the Parameter value from SSM without using decryption via KMS:

>>> response = ssm.get_parameter(Name='/test/ruan/mysql/db01/mysql_hostname')
>>> print(response['Parameter']['Value'])
AQICAHh7jazUUBgNxMQbYFeve2/p+UWTuyAd5F3ZJkZkf9+hwgF+H+kSABfPCTEarjXqYBaJAAAAejB4BgkqhkiG9w0BBwagazBpAgEAMGQGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMJUEuT8wDGCQ3zRBmAgEQgDc8LhLgFe+Rutgi0hOKnjTEVQa2lKTy3MmTDZEeLy3Tlr5VUl6AVJNBpd4IWJTbj5YuqrrAAWWJ

As you can see, the value is encrypted. This time, read the parameter value specifying decryption via KMS:

>>> response = ssm.get_parameter(Name='/test/ruan/mysql/db01/mysql_hostname', WithDecryption=True)
>>> print(response['Parameter']['Value'])
db01.eu-west-1.mycompany.com
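
Since all our parameters will live under the path /test/ruan/mysql/db01/, we can also fetch everything under that prefix in one call with GetParametersByPath, which becomes handy once more than one parameter is stored there:

>>> response = ssm.get_parameters_by_path(
    Path='/test/ruan/mysql/db01',
    Recursive=True,
    WithDecryption=True
)
>>> for param in response['Parameters']:
...     print(param['Name'], param['Value'])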

Grant Permissions to Instance Profile:

Now we will create a policy that can only decrypt and read values from SSM that match the path /test/ruan/mysql/db01/mysql_*. This policy will be associated with an instance profile role, which will be used by EC2, where our application will read the values from.

Our policy will look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1517398919242",
      "Action": [
        "kms:Decrypt"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:kms:eu-west-1:0123456789012:key/foobar-2162-4363-ba02-a953729e5ce6"
    },
    {
      "Sid": "Stmt1517399021096",
      "Action": [
        "ssm:GetParameter"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ssm:eu-west-1:0123456789012:parameter/test/ruan/mysql/db01/mysql_*"
    }
  ]
}

Create the Policy:

>>> pol = '{"Version": "2012-10-17","Statement": [{"Sid": "Stmt1517398919242","Action": ["kms:Decrypt"],"Effect": "Allow","Resource": "arn:aws:kms:eu-west-1:0123456789012:key/foobar-2162-4363-ba02-a953729e5ce6"},{"Sid": "Stmt1517399021096","Action": ["ssm:GetParameter"],"Effect": "Allow","Resource": "arn:aws:ssm:eu-west-1:0123456789012:parameter/test/ruan/mysql/db01/mysql_*"}]}'
>>> response = iam.create_policy(PolicyName='RuanGetSSM-Policy', PolicyDocument=pol, Description='Test Policy to Get SSM Parameters')

Create the instance profile:

>>> response = iam.create_instance_profile(InstanceProfileName='RuanTestSSMInstanceProfile')

Create the Role:

>>> response = iam.create_role(RoleName='RuanTestGetSSM-Role', AssumeRolePolicyDocument='{"Version": "2012-10-17","Statement": [{"Sid": "","Effect": "Allow","Principal": {"Service": "ec2.amazonaws.com"},"Action": "sts:AssumeRole"}]}')

Associate the Role and Instance Profile:

>>> response = iam.add_role_to_instance_profile(InstanceProfileName='RuanTestSSMInstanceProfile', RoleName='RuanTestGetSSM-Role')

Attach the Policy to the Role:

>>> response = iam.put_role_policy(RoleName='RuanTestGetSSM-Role', PolicyName='RuanTestGetSSMPolicy1', PolicyDocument=pol)

Launch the EC2 instance with the above-mentioned role. Create get_ssm.py and run it to decrypt and read the value from SSM:

get_ssm.py
import boto3
session = boto3.Session(region_name='eu-west-1')
ssm = session.client('ssm')
hostname = ssm.get_parameter(Name='/test/ruan/mysql/db01/mysql_hostname', WithDecryption=True)
print(hostname['Parameter']['Value'])

Run it:

$ python get_ssm.py
db01.eu-west-1.mycompany.com

Reading with GetParameters:

So, say we created more than one parameter under the path that we allowed; let's use GetParameters to read more than one parameter:

get_parameters.py
import boto3
session = boto3.Session(region_name='eu-west-1')
ssm = session.client('ssm')
response = ssm.get_parameters(
    Names=[
        '/test/ruan/mysql/db01/mysql_hostname',
        '/test/ruan/mysql/db01/mysql_user'
    ],
    WithDecryption=True
)

for secrets in response['Parameters']:
    if secrets['Name'] == '/test/ruan/mysql/db01/mysql_hostname':
        print("Hostname: {}".format(secrets['Value']))
    if secrets['Name'] == '/test/ruan/mysql/db01/mysql_user':
        print("Username: {}".format(secrets['Value']))

Run it:

$ python get_parameters.py
Hostname: db01.eu-west-1.mycompany.com
Username: super_dba

Environment Variable Example from an Application:

Set the Environment Variable value to the SSM key:

$ export MYSQL_HOSTNAME="/test/ruan/mysql/db01/mysql_hostname"
$ export MYSQL_USERNAME="/test/ruan/mysql/db01/mysql_user"

The application code:

import os
import boto3

session = boto3.Session(region_name='eu-west-1')
ssm = session.client('ssm')

MYSQL_HOSTNAME = os.environ.get('MYSQL_HOSTNAME')
MYSQL_USERNAME = os.environ.get('MYSQL_USERNAME')

hostname = ssm.get_parameter(Name=MYSQL_HOSTNAME, WithDecryption=True)
username = ssm.get_parameter(Name=MYSQL_USERNAME, WithDecryption=True)

print("Hostname: {}".format(hostname['Parameter']['Value']))
print("Username: {}".format(username['Parameter']['Value']))

Let the application resolve the keys to their SSM values:

$ python app.py
Hostname: db01.eu-west-1.mycompany.com
Username: super_dba


Using Python Boto3 and DreamHosts DreamObjects to Interact With Their Object Storage Offering

In this post I will demonstrate how to interact with DreamHost's Object Storage service offering, called DreamObjects, using the Python Boto3 library. DreamHost offers Object Storage at great pricing; for more information have a look at their documentation.

What's on the Menu:

We will do the following:

  • List Buckets
  • List Objects
  • Put Object
  • Get Object
  • Upload Object
  • Download Object
  • Delete Object(s)

Configuration

First we need to configure credentials by providing the access key and secret access key that is provided by DreamHost:

$ pip install awscli
$ aws configure --profile dreamhost

After your credentials are set in your profile, we need to import boto3 and instantiate the s3 client with our profile name, region name and endpoint url:

>>> import boto3
>>> session = boto3.Session(region_name='us-west-2', profile_name='dreamhost')
>>> s3 = session.client('s3', endpoint_url='https://objects-us-west-1.dream.io')

List Buckets:

To list our Buckets:

>>> response = s3.list_buckets()
>>> print(response)
{u'Owner': {u'DisplayName': 'foobar', u'ID': 'foobar'}, u'Buckets': [{u'CreationDate': datetime.datetime(2017, 4, 15, 21, 51, 3, 921000, tzinfo=tzutc()), u'Name': 'ruanbucket'}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': '', 'RequestId': 'tx00000000000000003cd88-005ac361f5-foobar-default', 'HTTPHeaders': {'date': 'Tue, 03 Apr 2018 11:13:57 GMT', 'content-length': '306', 'x-amz-request-id': 'tx00000000000000003cd88-005ac361f5-foobar-default', 'content-type': 'application/xml'}}}

>>> for bucket in response['Buckets']:
...     print(bucket['Name'])
...
ruanbucket

List Objects:

List all the Objects under the given prefix:

>>> response = s3.list_objects(Bucket='ruanbucket', Prefix='logs/sysadmins.co.za/access/')
>>> for obj in response['Contents']:
...     print(obj['Key'])
...
logs/sysadmins.co.za/access/access.log-2017-10-10.gz
logs/sysadmins.co.za/access/access.log-2017-10-11.gz
logs/sysadmins.co.za/access/access.log-2017-10-12.gz
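
Note that list_objects returns at most 1000 keys per call, so for larger prefixes it is safer to let a paginator handle the continuation markers for you:

>>> paginator = s3.get_paginator('list_objects')
>>> for page in paginator.paginate(Bucket='ruanbucket', Prefix='logs/sysadmins.co.za/access/'):
...     for obj in page.get('Contents', []):
...         print(obj['Key'])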

Put Object:

Write text as the body to the destination key on the Bucket:

>>> response = s3.put_object(Bucket='ruanbucket', Body='My Name is Ruan\n', Key='uploads/docs/file.txt')
>>> print(response)
{u'Body': <botocore.response.StreamingBody object at 0x13cde10>, u'AcceptRanges': 'bytes', u'ContentType': 'binary/octet-stream', 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': '', 'RequestId': 'tx0000000000000000053f2-005ac3e0db-foobar-default', 'HTTPHeaders': {'content-length': '16', 'accept-ranges': 'bytes', 'last-modified': 'Tue, 03 Apr 2018 20:14:54 GMT', 'etag': '"292edceea84d1234465f725c3921fc2a"', 'x-amz-request-id': 'tx0000000000000000053f2-005ac3e0db-foobar-default', 'date': 'Tue, 03 Apr 2018 20:15:23 GMT', 'content-type': 'binary/octet-stream'}}, u'LastModified': datetime.datetime(2018, 4, 3, 20, 14, 54, tzinfo=tzutc()), u'ContentLength': 16, u'ETag': '"292edceea84d1234465f725c3921fc2a"', u'Metadata': {}}

List the Object that we have created in the Bucket:

>>> response = s3.list_objects(Bucket='ruanbucket', Prefix='uploads/')
>>> for obj in response['Contents']:
...     print(obj['Key'])
...
uploads/docs/file.txt

Get Object:

Read the value from the key that was uploaded:

>>> response = s3.get_object(Bucket='ruanbucket', Key='uploads/docs/file.txt')
>>> print(response['Body'].read())
My Name is Ruan
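
If you need to hand the object to someone without making it public, the same client can generate a time-limited presigned URL (expiry in seconds); the URL will point at the DreamObjects endpoint we configured:

>>> url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'ruanbucket', 'Key': 'uploads/docs/file.txt'},
    ExpiresIn=3600
)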

Upload Files:

Upload the file from disk to the Bucket:

>>> with open('myfile.txt', 'rb') as data:
...     s3.upload_fileobj(Fileobj=data, Bucket='ruanbucket', Key='uploads/docs/uploadobj.txt')
...

Read the contents from the uploaded file:

>>> response = s3.get_object(Bucket='ruanbucket', Key='uploads/docs/uploadobj.txt')
>>> print(response['Body'].read())
This is some text

Download File:

Download the file from the Bucket to the local disk:

>>> with open('downloaded.txt', 'wb') as data:
...     s3.download_fileobj(Bucket='ruanbucket', Key='uploads/docs/uploadobj.txt', Fileobj=data)
...

Read the file’s content from disk:

>>> print(open('downloaded.txt').read())
This is some text

Delete Object:

Delete one object:

>>> response = s3.delete_object(Bucket='ruanbucket', Key='uploads/docs/uploadobj.txt')
>>> print(response)
{'ResponseMetadata': {'HTTPStatusCode': 204, 'RetryAttempts': 0, 'HostId': '', 'RequestId': 'tx00000000000000000be5a-005ac3e61a-foobar-default', 'HTTPHeaders': {'date': 'Tue, 03 Apr 2018 20:37:46 GMT', 'x-amz-request-id': 'tx00000000000000000be5a-005ac3e61a-foobar-default'}}}

Delete Objects:

Delete more than one object with a single API call:

>>> response = s3.delete_objects(Bucket='ruanbucket', Delete={'Objects': [{'Key': 'uploads/docs/file.txt'}, {'Key': 'uploads/docs/file2.txt'}, {'Key': 'uploads/docs/file3.txt'}]})
>>> print(response)
{u'Deleted': [{u'Key': 'uploads/docs/file.txt'}, {u'Key': 'uploads/docs/file2.txt'}, {u'Key': 'uploads/docs/file3.txt'}], 'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HostId': '', 'RequestId': 'tx000000000000000011008-005ac3e951-foobar-default', 'HTTPHeaders': {'date': 'Tue, 03 Apr 2018 20:51:29 GMT', 'content-length': '270', 'x-amz-request-id': 'tx000000000000000011008-005ac3e951-217c0ac5-default', 'content-type': 'application/xml'}}}
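
If any of the keys failed to delete, the response would contain an Errors list alongside Deleted, so it is worth checking for it:

>>> for err in response.get('Errors', []):
...     print(err['Key'], err['Code'], err['Message'])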

For more information on the above, have a look at Boto’s Documentation and DreamHost’s Website

Setup MongoDB Server on ARM64 Using Scaleway

I've been using Scaleway for the past 18 months and I must admit, I love hosting my applications on their infrastructure. They have expanded rapidly recently, and are currently deploying more infrastructure due to the high demand.

Scaleway is the cloud division of Online.net. They provide baremetal and cloud SSD virtual servers. I'm currently hosting a Docker Swarm cluster, blogs, Payara Java application servers, Elasticsearch and MongoDB clusters with them, and I'm really happy with the performance and stability of their services.

What will we be doing today:

Today I will be deploying a MongoDB server on an ARM64-2GB instance, which costs 2.99 Euros per month, absolutely awesome pricing! After we install MongoDB we will set up authentication, and then run through a few basic examples of writing to and reading from MongoDB.

Getting Started:

Log on to cloud.scaleway.com, then launch an instance, which will look like the following:

After you have deployed your instance, SSH to it, and it should look like this:

Dependencies:

Get the repository and install MongoDB:

$ apt update
$ apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
$ echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
$ apt update && apt upgrade -y
$ apt install mongodb-org -y

Enable MongoDB on Boot:

$ systemctl enable mongod

Configuration:

Your configuration might look different from mine, so I recommend backing up your config first, as the following command will overwrite it with the configuration that I will be using for this demonstration:

$ cat > /etc/mongod.conf << EOF
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: false
  mmapv1:
    smallFiles: true

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

net:
  port: 27017
  bindIp: 0.0.0.0

processManagement:
  timeZoneInfo: /usr/share/zoneinfo

security:
  authorization: enabled
EOF

Restart MongoDB for the config changes to take effect:

$ systemctl restart mongod

Authentication:

Create the Authentication:

$ mongo
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.

> use admin
> db.createUser({user: "ruan", pwd: "pass123", roles:[{role: "root", db: "admin"}]})
Successfully added user: {
        "user" : "ruan",
        "roles" : [
                {
                        "role" : "root",
                        "db" : "admin"
                }
        ]
}

> exit

Restart MongoDB and logon with your credentials:

$ systemctl restart mongod

$ mongo --authenticationDatabase admin --host localhost --port 27017 -u ruan -p
MongoDB shell version v3.6.3
Enter password:
connecting to: mongodb://localhost:27017/
MongoDB server version: 3.6.3
>

Write and Read from MongoDB

While you are in the MongoDB shell, we will insert a couple of documents. First, switch to the database that you would like to write to:

> use testdb
switched to db testdb

Now we will write to the collection: collection1:

> db.collection1.insert({"name": "ruan", "surname": "bekker", "age": 31, "country": "south africa"})
WriteResult({ "nInserted" : 1 })

> db.collection1.insert({"name": "stefan", "surname": "bester", "age": 30, "country": "south africa"})
WriteResult({ "nInserted" : 1 })

To find all the documents in our collection:

> db.collection1.find()
{ "_id" : ObjectId("5ac15ff0f4a5500484defd23"), "name" : "ruan", "surname" : "bekker", "age" : 31, "country" : "south africa" }
{ "_id" : ObjectId("5ac16003f4a5500484defd24"), "name" : "stefan", "surname" : "bester", "age" : 30, "country" : "south africa" }

To prettify the output:

> db.collection1.find().pretty()
{
        "_id" : ObjectId("5ac15ff0f4a5500484defd23"),
        "name" : "ruan",
        "surname" : "bekker",
        "age" : 31,
        "country" : "south africa"
}
{
        "_id" : ObjectId("5ac16003f4a5500484defd24"),
        "name" : "stefan",
        "surname" : "bester",
        "age" : 30,
        "country" : "south africa"
}

To find a document with the key/value of name: ruan:

> db.collection1.find({"name": "ruan"}).pretty()
{
        "_id" : ObjectId("5ac15ff0f4a5500484defd23"),
        "name" : "ruan",
        "surname" : "bekker",
        "age" : 31,
        "country" : "south africa"
}

To view the database that you are currently switched to:

> db
testdb

To view all the databases:

> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
testdb  0.000GB

To view the collections in the database:

> show collections
collection1

> exit
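
To connect to the same server from Python, here is a minimal pymongo sketch (assuming pip install pymongo, and using the credentials that we created earlier):

>>> from pymongo import MongoClient
>>> client = MongoClient('mongodb://ruan:pass123@localhost:27017/?authSource=admin')
>>> client.testdb.collection1.find_one({'name': 'ruan'})   # returns the document we inserted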

That was just a quick post on installing MongoDB on ARM64 using Scaleway. Try them out; they are also hiring: careers.scaleway.com

Create a Logical Volume Using LVM on Ubuntu

Logical Volume Manager (LVM) adds an extra layer between the physical disks and the file system, which allows you to resize your storage on the fly, use multiple disks instead of one, and more.

Concepts:

Physical Volume: represents the actual disk or block device.

Volume Group: combines one or more Physical Volumes into one administrative unit, from which Logical Volumes can be allocated.

Logical Volume: the conceptual equivalent of a disk partition in a non-LVM system.

File Systems: file systems are built on top of Logical Volumes.

What we are doing today:

We have a 150GB disk installed on our server, located at /dev/vdb, which we will manage via LVM and mount under /mnt.

Dependencies:

Update and Install LVM:

$ apt update && apt upgrade -y
$ apt install lvm2 -y
$ systemctl enable lvm2-lvmetad
$ systemctl start lvm2-lvmetad

Create the Logical Volume:

Initialize the Physical Volume to be managed by LVM, then create the Volume Group, and then create the Logical Volume:

$ pvcreate /dev/vdb
$ vgcreate vg1 /dev/vdb
$ lvcreate -l 100%FREE -n vol1 vg1

Build the ext4 Linux filesystem on the Logical Volume and mount it under /mnt:

$ mkfs.ext4 /dev/vg1/vol1
$ mount /dev/vg1/vol1 /mnt
$ echo '/dev/mapper/vg1-vol1 /mnt ext4 defaults,nofail 0 0' >> /etc/fstab

Other useful commands:

To list Physical Volume Info:

$ pvs
PV         VG   Fmt  Attr PSize   PFree
/dev/vdb   vg1  lvm2 a--  139.70g    0

To list Volume Group Info:

$ vgs
VG   #PV #LV #SN Attr   VSize   VFree
vg1    1   1   0 wz--n- 139.70g    0

And viewing the logical volume size from the volume group:

$ vgs -o +lv_size,lv_name
VG   #PV #LV #SN Attr   VSize   VFree LSize   LV
vg1    1   1   0 wz--n- 139.70g    0  139.70g vol1

Information about Logical Volumes:

$ lvs
LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
vol1 vg1  -wi-ao---- 139.70g


Setup Payara Application Server on Ubuntu 16.04

Today we will setup Payara 5 on Ubuntu 16.04

About:

Payara is an Open Source Java Application Server.

Prerequisites:

Update and Install Java 8:

$ apt update && apt upgrade -y
$ apt-get install wget curl unzip software-properties-common python-software-properties -y
$ add-apt-repository ppa:webupd8team/java
$ apt-get update
$ apt-get install oracle-java8-installer -y
$ source /etc/profile.d/jdk.sh

Install Payara:

Download and Install Payara 5:

$ cd /usr/local
$ wget --content-disposition 'https://info.payara.fish/cs/c/?cta_guid=b9609f35-f630-492f-b3c0-238fc55f489b&placement_guid=7cca6202-06a3-4c29-aee0-ca58af60528a&portal_id=334594&redirect_url=APefjpGt1aFvHUflpzz7Lec8jDz7CbeIIHZmgORmDSpteTCT2XjiMvjEzeY8yte3kiHi7Ph9mWDB7qUDEr96P0JS8Ev2ZFqahif2huSBfQV6lt4S6YUQpzPMrpHgf_n4VPV62NjKe8vLZBLnYkUALyR2mkrU3vWe7ME9XjHJqYPsHtxkHn-W7bYPFgY2LjEzKIYrdUsCviMgGrUh_LIbLxCESBa0N90vzaWKjK5EwZT021VaPP0jgfgvt0gF2UdtBQGcsTHrAlrb&hsutk=c279766888b67917a591ec4e209cb29a&canon=https%3A%2F%2Fwww.payara.fish%2Fall_downloads&click=5bad781c-f4f5-422d-ba2b-5e0c2bff7098&utm_referrer=https%3A%2F%2Fwww.google.co.za%2F&__hstc=229474563.c279766888b67917a591ec4e209cb29a.1519832301251.1521408251653.1521485598794.4&__hssc=229474563.7.1521485598794&__hsfp=2442083907'

$ unzip payara-5.181.zip
$ mv payara5 payara
$ rm -rf payara-5.181.zip

Permissions:

Create the Payara user and Grant Permissions:

$ echo 'export PATH=/usr/local/payara/glassfish/bin:$PATH' > /etc/profile.d/payara.sh
$ addgroup --system payara
$ adduser --system --shell /bin/bash --ingroup payara payara
$ echo 'payara soft nofile 32768' >> /etc/security/limits.conf
$ echo 'payara hard nofile 65536' >> /etc/security/limits.conf
$ chown -R payara:payara /usr/local/payara

Setup the Payara Domain:

Switch to the Payara user, delete the default domain, then create the production domain, set its admin password and enable secure admin before handing it over to SystemD. It is useful to configure the JVM options under the domain's config directory according to your server's resources.

$ su - payara

$ asadmin delete-domain domain1
$ asadmin create-domain production
$ asadmin change-admin-password --domain_name production # default blank pass for admin

$ asadmin start-domain production
$ asadmin --port 4848 enable-secure-admin production
$ asadmin stop-domain production

$ exit

SystemD Unit File:

Create the SystemD Unit File to be able to manage the state of the Payara Server via SystemD:

$ cat > /etc/systemd/system/payara.service << EOF
[Unit]
Description=Payara Server
After=network.target remote-fs.target
 
[Service]
User=payara
WorkingDirectory=/usr/local/payara/glassfish
Environment=PATH=/usr/local/payara/glassfish/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/payara/glassfish/bin/asadmin start-domain production
ExecReload=/usr/local/payara/glassfish/bin/asadmin restart-domain production
ExecStop=/usr/local/payara/glassfish/bin/asadmin stop-domain production
TimeoutStartSec=300
TimeoutStopSec=30
 
[Install]
WantedBy = multi-user.target
EOF

Reload the systemd daemon:

$ systemctl daemon-reload

Start the Payara Service:

$ systemctl enable payara
$ systemctl start payara

Verify that ports 4848, 8080 and 8181 are listening:

$ netstat -tulpn | grep java
tcp        0      0 :::8080                     :::*                        LISTEN      24542/java
tcp        0      0 :::4848                     :::*                        LISTEN      24542/java
tcp        0      0 :::8181                     :::*                        LISTEN      24542/java
...

Access Payara Admin UI:

Access the Payara DAS via https://ip-of-payara-server:4848

Expanding the Size of Your EBS Volume on AWS EC2 for Linux

Resizing the EBS volume that is attached to your EC2 Linux instance, on the fly, on Amazon Web Services.

We want to resize our EBS volume from 100GB to 1000GB. At the moment the volume is 100GB, as you can see:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G   60K  7.9G   1% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
/dev/xvda1       99G   32G   67G  32% /

Now we want to resize the volume to 1000GB, without shutting down our EC2 instance.
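
You can do this from the console (shown next) or programmatically; as a hedged boto3 sketch, the EC2 ModifyVolume API does the same thing (the volume id below is hypothetical):

>>> import boto3
>>> ec2 = boto3.Session(region_name='eu-west-1').client('ec2')
>>> # resize the (hypothetical) volume to 1000GB, no downtime needed
>>> ec2.modify_volume(VolumeId='vol-0123456789abcdef0', Size=1000)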

Go to your EC2 Management Console, select your EC2 instance, scroll down to the EBS volume and click on its Volume ID; from there select Actions, Modify Volume, and resize the disk to the needed size. As you can see the disk is now 1000GB:

$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0 1000G  0 disk
└─xvda1 202:1    0  100G  0 part /

But our partition is still 100GB:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G   60K  7.9G   1% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
/dev/xvda1       99G   32G   67G  32% /

We need to use growpart to grow the partition, and resize2fs to grow the filesystem:

$ sudo growpart /dev/xvda 1
CHANGED: disk=/dev/xvda partition=1: start=4096 old: size=209711070,end=209715166 new: size=2097147870,end=2097151966
$ sudo resize2fs /dev/xvda1
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 7, new_desc_blocks = 63
The filesystem on /dev/xvda1 is now 262143483 (4k) blocks long.

Note: If you are using XFS as your filesystem type, you will need to use xfs_growfs instead of resize2fs. (Thanks Donovan).

Example using XFS shown below:

$ sudo xfs_growfs /dev/xvda1

Note: If you are using nvme, it will look like this:

$ sudo lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme1n1     259:0    0  160G  0 disk
└─nvme1n1p1  259:1    0   80G  0 part /data

$ sudo growpart /dev/nvme1n1 1
CHANGED: partition=1 start=2048 old: size=167770112 end=167772160 new: size=335542239 end=335544287

$ resize2fs /dev/nvme1n1p1
resize2fs 1.45.5 (07-Jan-2020)

$ sudo lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme1n1     259:0    0  160G  0 disk
└─nvme1n1p1  259:1    0  160G  0 part /data

Now we have our partition and filesystem resized to 1000GB:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.9G   60K  7.9G   1% /dev
tmpfs           7.9G     0  7.9G   0% /dev/shm
/dev/xvda1      985G   33G  952G   4% /
