TinyMongo is a MongoDB-style wrapper built on top of TinyDB.
This is awesome for testing, where you need a local document-oriented database backed by a flat file. It feels just like using MongoDB, except that it's local, lightweight, and uses TinyDB as the backend.
Installing Dependencies:
$ pip install tinymongo
Usage Examples:
Initialize tinymongo and create the database and collection:
Flata is a lightweight document orientated database, which was inspired by TinyDB and LowDB.
Why Flata?
Most of the time my mind gets into one of its curious states and I think about alternative ways of doing things, especially for testing lightweight apps. Today I wondered whether there is any NoSQL-like software out there that is easy to spin up and backed by a flat file, something like SQLite is for SQL services, but for NoSQL.
So I stumbled upon TinyDB and Flata, which are really easy to use and awesome!
What will we be doing today:
Create Database / Table
Write to the Table
Update Documents from the Table
Scan the Table
Query the Table
Delete Documents from the Table
Purge the Table
Getting the Dependencies:
Flata is written in Python, so no external dependencies are needed. To install it:
$ pip install flata
Usage Examples:
My home working directory:
$ pwd
/home/ruan
This will be the directory where we will save our database in .json format.
When using the ARG instruction in your Dockerfile, you can specify the --build-arg option at build time to set the value of the key you defined in your Dockerfile, and use it to populate an environment variable, for example.
Today we will use ARG and ENV to set environment variables at build time.
The Dockerfile:
Our Dockerfile
FROM alpine:edge
ARG NAME
ENV OWNER=${NAME:-NOT_DEFINED}
CMD ["sh", "-c", "echo env var: ${OWNER}"]
Building our Image, we will pass the value to our NAME argument:
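The build command itself is not shown in this excerpt; it would be along the lines of `docker build --build-arg NAME=ruan -t test:args .` (the image tag is an assumption). The `${NAME:-NOT_DEFINED}` defaulting used in the Dockerfile above is ordinary shell parameter expansion, which can be sketched on its own:

```shell
# With NAME set, the expansion uses the provided value
NAME=ruan
echo "env var: ${NAME:-NOT_DEFINED}"

# With NAME unset, it falls back to the default
unset NAME
echo "env var: ${NAME:-NOT_DEFINED}"
```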
Please feel free to show support by sharing this post, making a donation or subscribing, or reach out to me if you want me to demo and write up any specific tech topic.
The 12 Factor way is a set of general guidelines that provide best practices for building applications. One of them is using environment variables to store application configuration.
What will we be doing:
In this post we will build a simple Docker application that prints the environment variable's value to standard out. We are using environment substitution, so if the environment variable is not provided, we will set a default value of NOT_DEFINED.
We will have the environment variable OWNER, and when no value is set for it, the NOT_DEFINED value will be returned.
The Dockerfile
Our Dockerfile:
FROM alpine:edge
ENV OWNER=${OWNER:-NOT_DEFINED}
CMD ["sh", "-c", "echo env var: ${OWNER}"]
Building the image:
$ docker build -t test:envs .
Putting it to action:
Now we will run a container and pass the OWNER environment variable as an option:
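The run command is cut off in this excerpt; hypothetically it would be `docker run -e OWNER=ruan test:envs`, with a plain `docker run test:envs` printing the baked-in default. Since the image's CMD is just `sh -c "echo env var: ${OWNER}"`, the runtime behaviour can be simulated with `env`:

```shell
# Simulates: docker run -e OWNER=ruan test:envs
env OWNER=ruan sh -c 'echo "env var: ${OWNER}"'

# Simulates: docker run test:envs (the ENV default baked in at build time)
env OWNER=NOT_DEFINED sh -c 'echo "env var: ${OWNER}"'
```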
Today we will use the Amazon Web Services SSM service to store secrets in its Parameter Store, which we will encrypt using KMS.
Then we will read the data from SSM and decrypt it using our KMS key. We will end off by writing a Python script that reads the AWS credentials, authenticates with SSM, and then reads the secret values that we stored.
The Do List:
We will break up this post in the following topics:
Create a KMS Key which we will use to Encrypt/Decrypt the Parameter in SSM
Create the IAM Policy which will be used to authorize the Encrypt/Decrypt by the KMS ID
Create the KMS Alias
Create the Parameter using PutParameter as a SecureString to use Encryption with KMS
Describe the Parameters
Read the Parameter with and without Decryption to determine the difference using GetParameter
Read the Parameters using GetParameters
Environment Variable Example
Create the KMS Key:
As the administrator, or root account, create the KMS Key:
>>> import boto3
>>> session = boto3.Session(region_name='eu-west-1', profile_name='personal')
>>> iam = session.client('iam')
>>> kms = session.client('kms')
>>> response = kms.create_key(
...     Description='Ruan Test Key',
...     KeyUsage='ENCRYPT_DECRYPT',
...     Origin='AWS_KMS',
...     BypassPolicyLockoutSafetyCheck=False,
...     Tags=[{'TagKey': 'Name', 'TagValue': 'RuanTestKey'}]
... )
>>> print(response['KeyMetadata']['KeyId'])
foobar-2162-4363-ba02-a953729e5ce6
Create the IAM Policy:
>>> response = iam.create_policy(
...     PolicyName='ruan-kms-test-policy',
...     PolicyDocument='{"Version": "2012-10-17", "Statement": [{"Sid": "Stmt1517212478199", "Action": ["kms:Decrypt", "kms:Encrypt"], "Effect": "Allow", "Resource": "arn:aws:kms:eu-west-1:0123456789012:key/foobar-2162-4363-ba02-a953729e5ce6"}]}',
...     Description='Ruan KMS Test Policy'
... )
>>> print(response['Policy']['Arn'])
arn:aws:iam::0123456789012:policy/ruan-kms-test-policy
As the administrator, write the secret values to the parameter store in SSM. We will publish a secret with the Parameter: /test/ruan/mysql/db01/mysql_hostname and the Value: db01.eu-west-1.mycompany.com:
>>> from getpass import getpass
>>> ssm = session.client('ssm')
>>> secretvalue = getpass()
Password:
>>> print(secretvalue)
db01.eu-west-1.mycompany.com
>>> response = ssm.put_parameter(
...     Name='/test/ruan/mysql/db01/mysql_hostname',
...     Description='RuanTest MySQL Hostname',
...     Value=secretvalue,
...     Type='SecureString',
...     KeyId='foobar-2162-4363-ba02-a953729e5ce6',
...     Overwrite=False
... )
Now we will create a policy that can only decrypt and read values from SSM that match the path: /test/ruan/mysql/db01/mysql_*. This policy will be attached to an instance profile role, which will be used by EC2, where our application will read the values from.
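As a sketch of what that path-scoped policy document could look like (the account ID and key ID are the placeholders from the earlier examples, and the statement IDs are made up):

```python
import json

# Placeholders; substitute your real account ID and KMS key ID
account_id = '0123456789012'
kms_key_id = 'foobar-2162-4363-ba02-a953729e5ce6'

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DecryptSecrets",
            "Effect": "Allow",
            "Action": ["kms:Decrypt"],
            "Resource": "arn:aws:kms:eu-west-1:{}:key/{}".format(account_id, kms_key_id)
        },
        {
            "Sid": "ReadParameterStore",
            "Effect": "Allow",
            "Action": ["ssm:GetParameter", "ssm:GetParameters"],
            "Resource": "arn:aws:ssm:eu-west-1:{}:parameter/test/ruan/mysql/db01/mysql_*".format(account_id)
        }
    ]
}

# This JSON string is what you would pass as PolicyDocument to iam.create_policy
print(json.dumps(policy, indent=2))
```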
In this post I will demonstrate how to interact with DreamHost's object storage service offering, called DreamObjects, using the Python Boto3 library. DreamHost offers object storage at great pricing; for more information have a look at their documentation.
What's on the Menu:
We will do the following:
List Buckets
List Objects
Put Object
Get Object
Upload Object
Download Object
Delete Object(s)
Configuration
First we need to configure credentials by providing the access key and secret access key that are provided by DreamHost:
After your credentials are set in your profile, we will need to import boto3 and instantiate the s3 client with our profile name, region name, and endpoint URL:
I’ve been using Scaleway for the past 18 months and I must admit, I love hosting my applications on their infrastructure. They have expanded rapidly recently, and are currently deploying more infrastructure due to the high demand.
Scaleway is the cloud division of Online.net. They provide baremetal and cloud SSD virtual servers. I’m currently hosting a Docker Swarm cluster, blogs, Payara Java application servers, and Elasticsearch and MongoDB clusters with them, and I am really happy with the performance and stability of their services.
What will we be doing today:
Today I will be deploying MongoDB Server on a ARM64-2GB Instance, which costs you 2.99 Euros per month, absolutely awesome pricing! After we install MongoDB we will setup authentication, and then just a few basic examples on writing and reading from MongoDB.
Getting Started:
Logon to cloud.scaleway.com then launch an instance, which will look like the following:
After you deployed your instance, SSH to your instance, and it should look like this:
Your configuration might look different from mine, so I recommend to backup your config first, as the following command will overwrite the config to the configuration that I will be using for this demonstration:
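The exact command is not shown in this excerpt, but a minimal MongoDB configuration with authentication enabled might look something like this (a sketch; the paths and bind address vary by distribution and by how exposed you want the server to be):

```yaml
storage:
  dbPath: /var/lib/mongodb
net:
  port: 27017
  bindIp: 0.0.0.0
security:
  authorization: enabled
```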
Logical Volume Manager (LVM) - adds an extra layer between the physical disks and the file system, which allows you to resize your storage on the fly, use multiple disks, instead of one, etc.
Concepts:
Physical Volume:
- Physical Volume represents the actual disk / block device.
Volume Group:
- Volume Groups combines the collection of Logical Volumes and Physical Volumes into one administrative unit.
Logical Volume:
- A Logical Volume is the conceptual equivalent of a disk partition in a non-LVM system.
File Systems:
- File systems are built on top of logical volumes.
What we are doing today:
We have a 150GB disk installed on our server at /dev/vdb, which we will manage via LVM and mount under /mnt.
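The steps are roughly the following (a sketch; the volume group and logical volume names vg-data and lv-data are made up, and these commands are destructive, so double-check the device name first):

```shell
sudo pvcreate /dev/vdb                        # register the disk as a physical volume
sudo vgcreate vg-data /dev/vdb                # create a volume group containing it
sudo lvcreate -n lv-data -l 100%FREE vg-data  # one logical volume using all free space
sudo mkfs.ext4 /dev/vg-data/lv-data           # build a filesystem on the logical volume
sudo mount /dev/vg-data/lv-data /mnt          # mount it under /mnt
```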
Switch to the Payara user, delete the default domain, and start the production domain. It is useful to configure the JVM options under the domain's config directory according to your server's resources.
$ su - payara
$ asadmin delete-domain domain1
$ asadmin change-admin-password --domain_name production  # default blank password for admin
$ asadmin --port 4848 enable-secure-admin production
$ asadmin start-domain production
$ asadmin stop-domain production
$ exit
SystemD Unit File:
Create the SystemD Unit File to be able to manage the state of the Payara Server via SystemD:
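A sketch of such a unit file, assuming Payara is installed under /opt/payara and the unit is saved as /etc/systemd/system/payara.service (adjust the paths to your installation):

```
[Unit]
Description=Payara Application Server
After=network.target

[Service]
Type=forking
User=payara
ExecStart=/opt/payara/bin/asadmin start-domain production
ExecStop=/opt/payara/bin/asadmin stop-domain production
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then reload systemd and enable it with `systemctl daemon-reload` and `systemctl enable --now payara`.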
Now we want to resize the volume to 1000GB, without shutting down our EC2 instance.
Go to your EC2 Management Console, select your EC2 instance, scroll down to the EBS volume, click the EBS Volume ID, and from there select Actions, Modify Volume, and resize the disk to the needed size. As you can see, the disk is now 1000GB:
$ lsblk
NAME  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda  202:0    0 1000G  0 disk
xvda1 202:1    0 1000G  0 part /
$ sudo resize2fs /dev/xvda1
resize2fs 1.42.12 (29-Aug-2014)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 7, new_desc_blocks = 63
The filesystem on /dev/xvda1 is now 262143483 (4k) blocks long.
Note: If you are using XFS as your filesystem type, you will need to use xfs_growfs instead of resize2fs. (Thanks Donovan).
Example using XFS shown below:
$ sudo xfs_growfs /dev/xvda1
Note: If you are using nvme, it will look like this:
$ sudo lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme1n1     259:0    0  160G  0 disk
└─nvme1n1p1 259:1    0   80G  0 part /data
$ sudo growpart /dev/nvme1n1 1
CHANGED: partition=1 start=2048 old: size=167770112 end=167772160 new: size=335542239 end=335544287
$ resize2fs /dev/nvme1n1p1
resize2fs 1.45.5 (07-Jan-2020)
$ sudo lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme1n1     259:0    0  160G  0 disk
└─nvme1n1p1 259:1    0  160G  0 part /data