Ruan Bekker's Blog

From a Curious mind to Posts on Github

Publish and Use Your Ansible Role From Git

In this tutorial we will create an Ansible role, publish it to GitHub, install the role locally, and then create an Ansible playbook that uses the role.

The source code for this blog post will be available on my github repository.

Ansible Installation

Create a virtual environment with Python:

$ virtualenv .venv -p python3
$ source .venv/bin/activate

Install ansible with pip:

$ pip install ansible==4.4.0

Now that we have ansible installed, we can create our role.

Initialize Ansible Role

An Ansible role consists of a couple of files, and using ansible-galaxy makes it easy to initialize a boilerplate structure to begin with:

$ ansible-galaxy init --init-path roles ssh_config
- Role ssh_config was created successfully

The role that we created is named ssh_config and is placed in the roles directory under our current working directory.

Define Role Tasks

Create the dummy task under roles/ssh_config/tasks/main.yml:
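
A minimal sketch using Ansible's debug module is shown below; the exact task in the original repository may differ, but this matches the playbook output later in this post:

---
# tasks file for ssh_config
- name: Dummy task
  debug:
    msg: "This is a dummy task changing ssh port to {{ ssh_port }}."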

Then define the default values in the file roles/ssh_config/defaults/main.yml:

---
# defaults file for ssh_config
ssh_port: 22

The value of ssh_port will default to 22 if we don’t define it in our variables.

Commit to Git

The assumption is made here that you already created a git repository and that your access is sorted. Add the files and commit it to git:

$ git add .
$ git commit -m "Your message"
$ git push origin main

Now your ansible role should be committed and visible in git.

SSH Config Client Side

I will be referencing the git source URL via SSH, and since I am using my default SSH key, the SSH config isn't really needed. But if you are using a different version control system, with different ports or different SSH keys, the following SSH config snippet may be useful:

$ cat ~/.ssh/config
Host github.com
    User git
    Port 22
    IdentityFile ~/.ssh/id_rsa

If you won’t be using SSH as the source url in your ansible setup for your role, you can skip the SSH setup.

Installing the Ansible Role from Git

When installing roles, ansible installs them by default under: ~/.ansible/roles, /usr/share/ansible/roles or /etc/ansible/roles.
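
If you want to control where roles get installed, ansible supports a roles_path setting in ansible.cfg; a minimal sketch (the paths shown are just an example):

[defaults]
roles_path = ./roles:~/.ansible/roles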

From our previous steps, we still have the ansible role content locally (not under the default installed directory), so talking about installing the role may sound like we are doing double the work. But the intention is that your ansible role is centralized and versioned on git, and on new servers or workstations where you want to consume the role, that role won't be present yet.

To install the role from Git, first create a project directory:

$ mkdir ~/my-project
$ cd ~/my-project

The requirements file defines where our role is located, which version to use and the type of version control. Create the requirements.yml:

---
roles:
  - name: ssh_config
    src: ssh://git@github.com/ruanbekker/ansible-demo-role.git
    version: main
    scm: git

For other variations of using the requirements file, you can have a look at their documentation
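
For example, assuming the repository has a v1.0.0 tag (a hypothetical tag for illustration), you could pin the role to that tag instead of a branch:

---
roles:
  - name: ssh_config
    src: ssh://git@github.com/ruanbekker/ansible-demo-role.git
    version: v1.0.0
    scm: git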

Then install the ansible role from our requirements file (I have used --force to overwrite my current one while testing):

$ ansible-galaxy install -r requirements.yml --force
Starting galaxy role install process
- changing role ssh_config from main to main
- extracting ssh_config to /Users/ruan/.ansible/roles/ssh_config
- ssh_config (main) was installed successfully

Ansible Playbook

Define the ansible playbook to use the role that we installed from git, in a file called playbook.yml:

---
- hosts: localhost
  roles:
    - ssh_config
  vars:
    ssh_port: 2202

Run the ansible playbook:

$ ansible-playbook playbook.yml
PLAY [localhost] *********************************************************************************************

TASK [Gathering Facts] ***************************************************************************************
ok: [localhost]

TASK [ssh_config : Dummy task] *******************************************************************************
ok: [localhost] => {
    "msg": "This is a dummy task changing ssh port to 2202."
}

PLAY RECAP ***************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Provision an AWS EC2 Instance With Terraform

In this tutorial I will demonstrate how to use Terraform (an Infrastructure as Code tool) to provision an AWS EC2 instance. The source code that we will be using in this tutorial is published to my terraformfiles github repository.

Requirements

To follow along with this tutorial, you will need an AWS account and Terraform installed.

Terraform

To install Terraform for your operating system, you can follow the Terraform Installation Documentation. I am using macOS, so for me it will be:

brew tap hashicorp/tap
brew install hashicorp/tap/terraform

To verify the installation, we can run terraform version and my output shows:

Terraform v1.1.8
on darwin_amd64

Terraform Project Structure

Create the directory:

mkdir terraform-aws-ec2
cd terraform-aws-ec2

Create the following files: main.tf, providers.tf, variables.tf, outputs.tf, locals.tf and terraform.tfvars:

touch main.tf providers.tf variables.tf outputs.tf locals.tf terraform.tfvars

Define Terraform Configuration Code

First we need to define the aws provider, which we will do in providers.tf:

terraform {
  required_providers {
    aws = {
      version = "~> 3.27"
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region  = "eu-west-1"
  profile = "default"
  shared_credentials_file = "~/.aws/credentials"
}

You will notice that I am defining my profile name default from the ~/.aws/credentials credential provider in order for terraform to authenticate with AWS.
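
For reference, a default profile in ~/.aws/credentials looks something like this (placeholder values):

[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx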

Next I am defining the main.tf which will be the file where we define our aws resources:

data "aws_ami" "latest_ubuntu" {
  most_recent = true
  owners = ["099720109477"]

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-*-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

}

data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

data "aws_iam_policy" "ec2_read_only_access" {
  arn = "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess"
}

resource "aws_iam_role" "ec2_access_role" {
  name               = "${local.project_name}-ec2-role"
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}

resource "aws_iam_policy_attachment" "readonly_role_policy_attach" {
  name       = "${local.project_name}-ec2-role-attachment"
  roles      = [aws_iam_role.ec2_access_role.name]
  policy_arn = data.aws_iam_policy.ec2_read_only_access.arn
}

resource "aws_iam_instance_profile" "instance_profile" {
  name  = "${local.project_name}-ec2-instance-profile"
  role = aws_iam_role.ec2_access_role.name
}

resource "aws_security_group" "ec2" {
    name        = "${local.project_name}-ec2-sg"
    description = "${local.project_name}-ec2-sg"
    vpc_id      = var.vpc_id

    tags = merge(
      var.default_tags,
      {
       Name = "${local.project_name}-ec2-sg"
      },
    )
}

resource "aws_security_group_rule" "ssh" {
    description       = "allows public ssh access to ec2"
    security_group_id = aws_security_group.ec2.id
    type              = "ingress"
    protocol          = "tcp"
    from_port         = 22
    to_port           = 22
    cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_security_group_rule" "egress" {
    description       = "allows egress"
    security_group_id = aws_security_group.ec2.id
    type              = "egress"
    protocol          = "-1"
    from_port         = 0
    to_port           = 0
    cidr_blocks       = ["0.0.0.0/0"]
}

resource "aws_instance" "ec2" {
  ami                         = data.aws_ami.latest_ubuntu.id
  instance_type               = var.instance_type
  subnet_id                   = var.subnet_id
  key_name                    = var.ssh_keyname
  vpc_security_group_ids      = [aws_security_group.ec2.id]
  associate_public_ip_address = true
  monitoring                  = true
  iam_instance_profile        = aws_iam_instance_profile.instance_profile.name

  lifecycle {
    ignore_changes            = [subnet_id, ami]
  }

  root_block_device {
      volume_type           = "gp2"
      volume_size           = var.ebs_root_size_in_gb
      encrypted             = false
      delete_on_termination = true
  }

  tags = merge(
    var.default_tags,
    {
     Name = "${local.project_name}"
    },
  )

}

A couple of things are defined here:

  • A data resource to fetch the latest Ubuntu 20.04 AMI
  • The IAM Role and Policy that we will use to associate to our EC2 Instance Profile
  • The EC2 Security Group
  • The EC2 Instance
  • The VPC ID and Subnet ID are required variables which we will set in terraform.tfvars

The next file will be our variables.tf file where we will define all our variable definitions:

variable "default_tags" {
  default = {
    Environment = "test"
    Owner       = "ruan.bekker"
    Project     = "terraform-blogpost"
    CostCenter  = "engineering"
    ManagedBy   = "terraform"
  }
}

variable "aws_region" {
  type        = string
  default     = "eu-west-1"
  description = "the region to use in aws"
}

variable "vpc_id" {
  type        = string
  description = "the vpc to use"
}

variable "ssh_keyname" {
  type        = string
  description = "ssh key to use"
}

variable "subnet_id" {
  type        = string
  description = "the subnet id where the ec2 instance needs to be placed in"
}

variable "instance_type" {
  type        = string
  default     = "t3.nano"
  description = "the instance type to use"
}

variable "project_id" {
  type        = string
  default     = "terraform-blogpost"
  description = "the project name"
}

variable "ebs_root_size_in_gb" {
  type        = number
  default     = 10
  description = "the size in GB for the root disk"
}

variable "environment_name" {
  type        = string
  default     = "dev"
  description = "the environment this resource will go to (assumption being made there's one account)"
}

The next file is our locals.tf which just concatenates our project id and environment name:

locals {
  project_name = "${var.project_id}-${var.environment_name}"
}
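
With the defaults above, local.project_name resolves to terraform-blogpost-dev, which is the prefix used for the resource names in main.tf.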

Then our outputs.tf for the values that terraform should output:

output "id" {
  description = "The ec2 instance id"
  value       = aws_instance.ec2.id
  sensitive   = false
}

output "ip" {
  description = "The ec2 instance public ip address"
  value       = aws_instance.ec2.public_ip
  sensitive   = false
}

output "subnet_id" {
  description = "the subnet id which will be used"
  value       = var.subnet_id
  sensitive   = false
}

Then lastly our terraform.tfvars, in which you will need to supply your own values to match your AWS account:

# required
vpc_id = "vpc-063d7xxxxxxxxxxxx"
ssh_keyname = "ireland-key"
subnet_id = "subnet-04b3xxxxxxxxxxxxx"

Deploy EC2 Instance

Now that all our configuration is in place, we need to initialize terraform by downloading the providers:

terraform init

Once terraform init has completed, we can run a terraform plan, which will show us what terraform will do. Since terraform.tfvars is the default file for variables, we don't have to specify the name of the file, but since I want to be explicit, I will include it (should you want to change the file name):

terraform plan -var-file="terraform.tfvars"

Now is a good time to review what terraform wants to action by viewing the plan output; once you are happy, you can deploy the changes by running a terraform apply:

terraform apply -var-file="terraform.tfvars"

Optional: You can override variables by either updating terraform.tfvars or appending them in-line, for example terraform apply -var-file="terraform.tfvars" -var="ssh_keyname=default_key". A successful run should show something like this:

Outputs:
id = "i-0dgacxxxxxxxxxxxx"
ip = "18.26.xxx.92"
subnet_id = "subnet-04b3xxxxxxxxxxxxx"

Access your EC2 Instance

You can access the instance by SSH'ing to the IP that was returned in the output, using the SSH key that you provided, or you can make use of terraform output to access the output value:

ssh -i ~/.ssh/id_rsa ubuntu@$(terraform output -raw ip)

Cleanup

To delete the infrastructure that Terraform provisioned:

terraform destroy

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Matrix Bot Using SimpleMatrixBotlib in Python

In this tutorial we will setup a python bot for our matrix chat server. We will only do a couple of basic commands, so that you have a solid base to build from.

Matrix Server

In our previous post we set up a matrix and element server, so if you are following along, head over to that post to set up your matrix server before continuing.

Matrix Python Bot

We will be using simple-matrix-bot-lib as our bot, so first we need to install it:

python3 -m pip install simplematrixbotlib
python3 -m pip install requests

We will need to authenticate with a user, so I will create a dedicated bot user:

$ docker exec -it matrix_synapse_1 bash
> register_new_matrix_user -c /data/homeserver.yaml http://localhost:8008

New user localpart [root]: bot
Password:
Confirm password:
Make admin [no]: no
Sending registration request...
Success!

The most basic bot is the echo bot, which just returns your message:

import simplematrixbotlib as botlib

MATRIX_URL="https://matrix.foodmain.co.za"
MATRIX_USER="@foobot:matrix.foodmain.co.za"
MATRIX_PASS="foo"

creds = botlib.Creds(MATRIX_URL, MATRIX_USER, MATRIX_PASS)
bot = botlib.Bot(creds)

PREFIX = '!'

# Help
@bot.listener.on_message_event
async def help(room, message):
    match = botlib.MessageMatch(room, message, bot, PREFIX)
    if match.is_not_from_this_bot() and match.prefix() and match.command("help"):
        help_message = """
        Help:
         - !help
        Echo
         - !echo your message
        """
        await bot.api.send_markdown_message(room.room_id, help_message)

# Echo
@bot.listener.on_message_event
async def echo(room, message):
    """
    Example function that "echoes" arguments.
    Usage:
    user:  !echo say something
    bot:   say something
    """
    match = botlib.MessageMatch(room, message, bot, PREFIX)
    if match.is_not_from_this_bot() and match.prefix() and match.command("echo"):
        print("Room: {r}, User: {u}, Message: {m}".format(r=room.room_id, u=str(message).split(':')[0], m=str(message).split(':')[-1].strip()))
        await bot.api.send_text_message(room.room_id, " ".join(arg for arg in match.args()))

bot.run()

Run the bot, invite the bot user to a room, and test it with !echo hi.
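
Assuming you saved the script above as bot.py (the filename is arbitrary), you can run it with:

python3 bot.py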

For a bot that needs the requests library, such as one getting a quote from an API, we can use the following:

import subprocess

import requests
import simplematrixbotlib as botlib

MATRIX_URL="https://matrix.foodmain.co.za"
MATRIX_USER="@foobot:matrix.foodmain.co.za"
MATRIX_PASS="foo"

creds = botlib.Creds(MATRIX_URL, MATRIX_USER, MATRIX_PASS)
bot = botlib.Bot(creds)

PREFIX = '!'

# Help
@bot.listener.on_message_event
async def help(room, message):
    match = botlib.MessageMatch(room, message, bot, PREFIX)
    if match.is_not_from_this_bot() and match.prefix() and match.command("help"):
        help_message = """
        Help:
         - !help
        Echo
         - !echo msg
        Fortune:
         - !fortune
        Quote:
         - !quote
        """
        await bot.api.send_markdown_message(room.room_id, help_message)

# Echo
@bot.listener.on_message_event
async def echo(room, message):
    """
    Example function that "echoes" arguments.
    Usage:
    user: !echo say something
    bot:  say something
    """
    match = botlib.MessageMatch(room, message, bot, PREFIX)
    if match.is_not_from_this_bot() and match.prefix() and match.command("echo"):
        print("Room: {r}, User: {u}, Message: {m}".format(r=room.room_id, u=str(message).split(':')[0], m=str(message).split(':')[-1].strip()))
        await bot.api.send_text_message(room.room_id, " ".join(arg for arg in match.args()))

# Fortune
@bot.listener.on_message_event
async def fortune(room, message):
    match = botlib.MessageMatch(room, message, bot, PREFIX)
    if match.is_not_from_this_bot() and match.prefix() and match.command("fortune"):
        fortune = subprocess.run(['/usr/games/fortune'], capture_output=True).stdout.decode('UTF-8')
        print(fortune)
        await bot.api.send_text_message(room.room_id, fortune)

# Quotes
@bot.listener.on_message_event
async def quote(room, message):
    match = botlib.MessageMatch(room, message, bot, PREFIX)
    if match.is_not_from_this_bot() and match.prefix() and (
            match.command("quote") or match.command("q")):

        response = requests.get('https://goquotes-api.herokuapp.com/api/v1/random?count=1').json()['quotes'][0]
        quote = response['text']
        author = response['author']
        tag = response['tag']
        formatted_message = f"""{quote}
        - {author}
        """
        #await bot.api.send_text_message(room.room_id, formatted_message)
        await bot.api.send_markdown_message(room.room_id,  formatted_message)

bot.run()

Resources

For more information, have a look at their documentation

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Setup Matrix and Element Chat Server

In this tutorial we will setup a Matrix and Element Chat Server using Docker on Ubuntu.

What is Matrix?

Matrix is an open standard and communication protocol for secure, decentralised, real-time communication. For more information on Matrix, see their website

Install Docker

I will assume that docker and docker compose are installed; if not, follow this resource to install them: - https://docs.docker.com/get-docker/

Install Matrix Server

Create the docker network and the directory structure:

$ docker network create --driver=bridge --subnet=10.10.10.0/24 --gateway=10.10.10.1 matrix_net
$ mkdir matrix
$ cd matrix/

The docker-compose.yml:

version: '3.8'

services:
  element:
    image: vectorim/element-web:latest
    restart: unless-stopped
    volumes:
      - ./element-config.json:/app/config.json
    networks:
      default:
        ipv4_address: 10.10.10.3

  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 10.10.10.4
    volumes:
     - ./synapse:/data

  postgres:
    image: postgres:11
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 10.10.10.2
    volumes:
     - ./postgresdata:/var/lib/postgresql/data
    environment:
     - POSTGRES_DB=synapse
     - POSTGRES_USER=synapse
     - POSTGRES_PASSWORD=STRONGPASSWORD
     - POSTGRES_INITDB_ARGS=--lc-collate C --lc-ctype C --encoding UTF8

networks:
  default:
    external:
      name: matrix_net

Download a sample config:

$ wget https://develop.element.io/config.json
$ mv config.json element-config.json

And adjust the bits where needed in element-config.json:

{
    "default_server_config": {
        "m.homeserver": {
            "base_url": "https://matrix.domain.co.za",
            "server_name": "matrix.domain.co.za"
        },
        "m.identity_server": {
            "base_url": "https://vector.im"
        }
    },
    "brand": "Element",
    "integrations_ui_url": "https://scalar.vector.im/",
    "integrations_rest_url": "https://scalar.vector.im/api",
    "integrations_widgets_urls": [
        "https://scalar.vector.im/_matrix/integrations/v1",
        "https://scalar.vector.im/api",
        "https://scalar-staging.vector.im/_matrix/integrations/v1",
        "https://scalar-staging.vector.im/api",
        "https://scalar-staging.riot.im/scalar/api"
    ],
    "hosting_signup_link": "https://element.io/matrix-services?utm_source=element-web&utm_medium=web",
    "bug_report_endpoint_url": "https://element.io/bugreports/submit",
    "uisi_autorageshake_app": "element-auto-uisi",
    "showLabsSettings": true,
    "piwik": {
        "url": "https://piwik.riot.im/",
        "siteId": 1,
        "policyUrl": "https://element.io/cookie-policy"
    },
    "roomDirectory": {
        "servers": [
            "matrix.org",
            "gitter.im",
            "libera.chat"
        ]
    },
    "enable_presence_by_hs_url": {
        "https://matrix.org": false,
        "https://matrix-client.matrix.org": false
    },
    "terms_and_conditions_links": [
        {
            "url": "https://element.io/privacy",
            "text": "Privacy Policy"
        },
        {
            "url": "https://element.io/cookie-policy",
            "text": "Cookie Policy"
        }
    ],
    "hostSignup": {
      "brand": "Element Home",
      "cookiePolicyUrl": "https://element.io/cookie-policy",
      "domains": [
          "matrix.org"
      ],
      "privacyPolicyUrl": "https://element.io/privacy",
      "termsOfServiceUrl": "https://element.io/terms-of-service",
      "url": "https://ems.element.io/element-home/in-app-loader"
    },
    "sentry": {
        "dsn": "https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxc5@sentry.matrix.org/6",
        "environment": "develop"
    },
    "posthog": {
        "projectApiKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "apiHost": "https://posthog.hss.element.io"
    },
    "features": {},
    "map_style_url": "https://api.maptiler.com/maps/streets/style.json?key=xxxxxxxxxxxxx"
}

Generate the homeserver config:

$ docker run -it --rm -v "$HOME/matrix/synapse:/data" -e SYNAPSE_SERVER_NAME=matrix.domain.co.za -e SYNAPSE_REPORT_STATS=yes matrixdotorg/synapse:latest generate

Verify the generated config in synapse/homeserver.yaml (I only changed server name and database):

server_name: "matrix.domain.co.za"
database:
  name: psycopg2
  args:
    user: synapse
    password: STRONGPASSWORD
    database: synapse
    host: postgres
    cp_min: 5
    cp_max: 10

Boot the stack:

$ docker-compose up -d

Caddy Reverse Proxy

Install caddy as a reverse proxy (includes letsencrypt out of the box):

$ apt install -y debian-keyring debian-archive-keyring apt-transport-https
$ curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo tee /etc/apt/trusted.gpg.d/caddy-stable.asc
$ curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
$ apt update
$ apt install caddy -y

Create the /etc/caddy/Caddyfile with the following content:

matrix.domain.co.za {
        reverse_proxy /_matrix/* 10.10.10.4:8008
        reverse_proxy /_synapse/client/* 10.10.10.4:8008

        header {
                X-Content-Type-Options nosniff
                Referrer-Policy strict-origin-when-cross-origin
                Strict-Transport-Security "max-age=63072000; includeSubDomains;"
                Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
                X-Frame-Options SAMEORIGIN
                X-XSS-Protection 1
                X-Robots-Tag none
                -server
        }
}

element.domain.co.za {
        encode zstd gzip
        reverse_proxy 10.10.10.3:80

        header {
                X-Content-Type-Options nosniff
                Referrer-Policy strict-origin-when-cross-origin
                Strict-Transport-Security "max-age=63072000; includeSubDomains;"
                Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=(), interest-cohort=()"
                X-Frame-Options SAMEORIGIN
                X-XSS-Protection 1
                X-Robots-Tag none
                -server
        }
}

Change to the /etc/caddy directory then reload:

$ pushd /etc/caddy
$ caddy fmt
$ caddy reload
$ popd

Wait a couple of minutes and visit element on https://element.domain.co.za/
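
To verify that the homeserver is reachable through the proxy, you can query the standard Matrix client versions endpoint (adjust the domain to your own):

curl https://matrix.domain.co.za/_matrix/client/versions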

Admin Element User

Create your admin user on the docker container:

$ docker exec -it matrix_synapse_1 bash
> register_new_matrix_user -c /data/homeserver.yaml http://localhost:8008

New user localpart [root]: ruan
Password:
Confirm password:
Make admin [no]: yes
Sending registration request...
Success!

Resources

Thanks to cyberhost.uk for helping me with this post.

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Load Environment Variables From File in Python

In this quick tutorial we will demonstrate how to load additional environment variables from file into your python application.

It loads key-value pairs from a file and appends them to the application's runtime environment variables, so your current environment is unaffected.

python-dotenv

We will make use of the package python-dotenv so we will need to install the python package with pip:

python3 -m pip install python-dotenv

The env file

I will create the .env file in my current working directory with the content:

APPLICATION_NAME=foo
APPLICATION_OWNER=bar

The application

This is a basic demonstration of a python application which loads the additional environment variables from file; we then use json.dumps(..., indent=2) so that we can get a pretty print of all our environment variables:

import os
import json
from dotenv import load_dotenv

load_dotenv('.env')

print(json.dumps(dict(os.environ), indent=2))

When we run the application the output will look something like this:

{
  "SHELL": "/bin/bash",
  "PWD": "/home/ubuntu/env-vars",
  "LOGNAME": "ubuntu",
  "HOME": "/home/ubuntu",
  "LANG": "C.UTF-8",
  "TERM": "xterm-256color",
  "USER": "ubuntu",
  "LC_CTYPE": "C.UTF-8",
  "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin",
  "SSH_TTY": "/dev/pts/0",
  "OLDPWD": "/home/ubuntu",
  "APPLICATION_NAME": "foo",
  "APPLICATION_OWNER": "bar"
}

As we can see, our two environment variables were added to the environment. If you would like to access your two environment variables, we can do the following:

import os
from dotenv import load_dotenv

load_dotenv('.env')

APPLICATION_NAME = os.getenv('APPLICATION_NAME')
APPLICATION_OWNER = os.getenv('APPLICATION_OWNER')

print('Name: {0}, Owner: {1}'.format(APPLICATION_NAME, APPLICATION_OWNER))

And when we run that, the output should be the following:

Name: foo, Owner: bar
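
One thing worth noting: by default load_dotenv() will not overwrite a variable that is already set in the process environment; if you want values from the file to take precedence, you can pass override=True:

from dotenv import load_dotenv

load_dotenv('.env', override=True)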

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Run a Basic Python Flask Restful API

In this tutorial we will run a basic API using flask-restful; it will only have a GET and a POST method on a single resource, for the purpose of demonstration.

What is Flask Restful

Flask-RESTful is an extension for Flask that adds support for quickly building REST APIs. It is a lightweight abstraction that works with your existing ORM/libraries. Flask-RESTful encourages best practices with minimal setup.

If you want to see a basic Flask API post, you can follow the link below: - https://blog.ruanbekker.com/blog/2018/11/27/python-flask-tutorial-series-create-a-hello-world-app-p1/

Installation

Install Flask and Flask Restful:

python3 -m pip install flask
python3 -m pip install flask-restful

Code

The basic code that we have exposes the two methods (get and post):

import flask
import flask_restful
from flask import request, jsonify

app = flask.Flask(__name__)
api = flask_restful.Api(app)

class HelloWorld(flask_restful.Resource):
    def get(self):
        return {'hello': 'world'}

    def post(self):
        json_data = request.get_json(force=True)
        firstname = json_data['firstname']
        lastname = json_data['lastname']
        return jsonify(firstname=firstname, lastname=lastname)

api.add_resource(HelloWorld, '/')

if __name__ == "__main__":
    app.run(debug=True)

Run the Server

Run the server:

python api.py

Then make a get request:

curl http://localhost:5000/

The response should be the following:

{
    "hello": "world"
}

Then make a post request:

curl -XPOST http://localhost:5000/ -d '{"firstname": "ruan", "lastname": "bekker"}'

The response should look something like this:

{
  "firstname": "ruan",
  "lastname": "bekker"
}

Integration Tests

We can set up integration tests with unittest by creating test_app.py:

import json
import unittest

import api

class TestFlaskApi(unittest.TestCase):
    def setUp(self):
        self.app = api.app.test_client()

    def test_get_request(self):
        response = self.app.get("/")
        self.assertEqual(
            response.get_json(),
            {"hello": "world"},
        )

    def test_post_request(self):
        # request payload
        payload = json.dumps({
            "firstname": "ruan",
            "lastname": "bekker"
        })

        # make request
        response = self.app.post("/", data=payload, headers={"Content-Type": "application/json"})

        # assert
        self.assertEqual(str, type(response.json['lastname']))
        self.assertEqual(200, response.status_code)

    def tearDown(self):
        # delete if anything was created
        pass

if __name__ == '__main__':
    unittest.main()

Then we can run our test with:

python -m unittest discover -p test_app.py -v

Since our first test expects {"hello": "world"}, it will pass; in our second test we validate that our post request returns a 200 response code and that our lastname field is of string type.

The output of our tests will show something like this:

test_get_request (test_app.TestFlaskApi) ... ok
test_post_request (test_app.TestFlaskApi) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.009s

OK

More on Flask-Restful

This was a very basic example and their documentation provides a great tutorial on how to extend from this example. This is also a great blogpost on testing REST APIs.

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Run OpenLDAP With a UI on Docker

In this tutorial we will setup two containers: openldap, and an openldap ui to manage our users on openldap.

What is OpenLDAP

OpenLDAP is an open source implementation of the Lightweight Directory Access Protocol, which makes it possible for organizations to use centralized authentication and directory access services over a network.

Configuration

This stack will boot an openldap and an openldap-ui container with the following docker-compose.yml:

version: "3.8"

services:
  openldap:
    image: osixia/openldap:1.5.0
    container_name: openldap
    volumes:
      - ./storage/ldap_db:/var/lib/ldap
      - ./storage/ldap_config:/etc/ldap/slapd.d
    environment:
      - LDAP_ORGANISATION=example-org
      - LDAP_DOMAIN=example.org
      - LDAP_ADMIN_PASSWORD=admin
      - LDAP_CONFIG_PASSWORD=config
      - LDAP_RFC2307BIS_SCHEMA=true
      - LDAP_REMOVE_CONFIG_AFTER_SETUP=true
      - LDAP_TLS_VERIFY_CLIENT=never
    networks:
      - openldap

  openldap-ui:
    image: wheelybird/ldap-user-manager:v1.5
    container_name: openldap-ui
    environment:
      - LDAP_URI=ldap://openldap
      - LDAP_BASE_DN=dc=example,dc=org
      - LDAP_REQUIRE_STARTTLS=FALSE
      - LDAP_ADMINS_GROUP=admins
      - LDAP_ADMIN_BIND_DN=cn=admin,dc=example,dc=org
      - LDAP_ADMIN_BIND_PWD=admin
      - LDAP_IGNORE_CERT_ERRORS=true
      - NO_HTTPS=TRUE
      - PASSWORD_HASH=SSHA
      - SERVER_HOSTNAME=localhost:18080
    depends_on:
      - openldap
    ports:
      - 18080:80
    networks:
      - openldap

networks:
  openldap:
    name: openldap

Boot

Boot the stack with docker-compose:

docker-compose up -d

You can access OpenLDAP-UI on port 18080 and the admin password will be admin. You will have admin access to create users.

Verify Users

Access the openldap container:

docker-compose exec openldap bash

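The commands below assume that you created a user (for example ruan) through the UI, and that the user's password has been exported into your shell; the variable name PASSWORD is just the convention used here:

export PASSWORD='the-users-password'
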
You can use ldapsearch to verify the user:

ldapsearch -x -h openldap -D "uid=ruan,ou=people,dc=example,dc=org" -b "ou=people,dc=example,dc=org" -w "$PASSWORD" -s base 'uid=ruan'

Or we can use ldapwhoami:

ldapwhoami -vvv -H ldap://openldap:389 -D 'uid=ruan,ou=people,dc=example,dc=org' -x -w "$PASSWORD"

Which will provide an output with something like:

ldap_initialize( <DEFAULT> )
dn:uid=ruan,ou=people,dc=example,dc=org
Result: Success (0)

Thank You

Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.

Create DNS Records With Terraform on Cloudflare

In this tutorial we will use Terraform to create DNS records on Cloudflare.

Installing Terraform

I will be installing terraform for linux, but you can follow terraform’s documentation if you are using a different operating system: - https://learn.hashicorp.com/tutorials/terraform/install-cli

> curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
> sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
> sudo apt update && sudo apt install terraform -y

Verify that terraform was installed:

> terraform version
Terraform v1.1.6
on linux_amd64

Cloudflare Authentication

We need to create an API Token in order to authenticate terraform to make the required API calls to create the DNS Record.

They have a great post on this, which you can follow below: - https://developers.cloudflare.com/api/tokens/create

You will need access to “Edit DNS Zones” and also include the Domain that you would like to edit.

Ensure that you save the API Token in a safe place.

Terraform Code

First we will create a project directory:

> mkdir terraform-cloudflare-dns
> cd terraform-cloudflare-dns

Next we will create providers.tf, in which we define our provider and the required parameters for the provider:

terraform {
  required_providers {
    cloudflare = {
      source = "cloudflare/cloudflare"
      version = "~> 3.0"
    }
  }
}

provider "cloudflare" {
  email   = var.cloudflare_email
  api_token = var.cloudflare_api_token
}

As you can see, we are referencing email and api_token as variables; therefore we need to define those variables in variables.tf:

variable "cloudflare_email" {
  type        = string
  description = "cloudflare email address"
}

variable "cloudflare_api_token" {
  type        = string
  description = "cloudflare api token"
}

In our main.tf, we first use a data resource to query cloudflare for our domain rbkr.xyz, and then access the attribute id, which we use in our cloudflare_record resource so that it knows which domain to add the DNS record to.

Then we are going to create the A record foobar and provide the value of 127.0.0.1:

data "cloudflare_zone" "this" {
  name = "rbkr.xyz"
}

resource "cloudflare_record" "foobar" {
  zone_id = data.cloudflare_zone.this.id
  name    = "foobar"
  value   = "127.0.0.1"
  type    = "A"
  proxied = false
}

Then we are defining our outputs in outputs.tf:

output "record" {
  value = cloudflare_record.foobar.hostname
}

output "metadata" {
  value       = cloudflare_record.foobar.metadata
  sensitive   = true
}

Creating the Record

Once our configuration code is in place we can run terraform init, which will download the providers:

> terraform init

Once that is done, we can run a plan so we can see what will be deployed, but since our variables.tf has no default values, we will either have to define them in terraform.tfvars or pass them in-line.
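
If you prefer the terraform.tfvars route, the file would look something like this (placeholder values):

cloudflare_email     = "you@example.com"
cloudflare_api_token = "xxxxxxxxxxxxxxxxxxxxxxxxx"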

I will be using it in-line for this demonstration:

> terraform plan -var "cloudflare_email=$EMAIL" -var "cloudflare_api_token=$API_TOKEN"

Once you are happy, you can run an apply, which will deploy the changes:

> terraform apply -var "cloudflare_email=$EMAIL" -var "cloudflare_api_token=$API_TOKEN"

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cloudflare_record.foobar will be created
  + resource "cloudflare_record" "foobar" {
      + allow_overwrite = false
      + created_on      = (known after apply)
      + hostname        = (known after apply)
      + id              = (known after apply)
      + metadata        = (known after apply)
      + modified_on     = (known after apply)
      + name            = "foobar"
      + proxiable       = (known after apply)
      + proxied         = false
      + ttl             = (known after apply)
      + type            = "A"
      + value           = "127.0.0.1"
      + zone_id         = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + metadata = (sensitive value)
  + record   = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

cloudflare_record.foobar: Creating...
cloudflare_record.foobar: Creation complete after 4s [id=xxxxxxxxxxxxxxxxxxxxx]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

metadata = <sensitive>
record = "foobar.rbkr.xyz"

Test DNS

We can now test if this is working as expected with a dns utility like dig:

> dig foobar.rbkr.xyz

; <<>> DiG 9.10.6 <<>> foobar.rbkr.xyz
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20800
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;foobar.rbkr.xyz.       IN      A

;; ANSWER SECTION:
foobar.rbkr.xyz. 300    IN      A       127.0.0.1

;; Query time: 262 msec
;; SERVER: 172.31.0.2#53(172.31.0.2)
;; WHEN: Wed Feb 02 13:57:59 SAST 2022
;; MSG SIZE  rcvd: 68

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Blockchain Basics

In this tutorial, we will cover the basics of blockchain and why you would want to run a full-node such as bitcoin, ethereum, etc.

Blockchain Basics

Before we start setting up our bitcoin full-node, we first need to get through some blockchain basics; if you are already aware of them, you can skip to the setup section of this post.

Block

Transaction data is permanently recorded into files called blocks. You can think of it as a transaction ledger. Blocks are organised into a linear sequence over time.

New transactions are constantly being processed by miners into new blocks which are added to the end of the chain. As blocks are buried deeper and deeper into the blockchain they become harder and harder to change or remove, this gives rise of Bitcoin’s Irreversible Transactions.

The first block added to the blockchain is referred to as the genesis block

Blockchain

A blockchain is a transaction database shared by all nodes participating in a system based on the bitcoin protocol. A full copy of a currency’s blockchain contains every transaction ever executed in the currency. With this information, one can find out how much value belonged to each address at any point in history.

Every block contains a hash of the previous block. This has the effect of creating a chain of blocks from the genesis block to the current block. Each block is guaranteed to come after the previous block chronologically because the previous block’s hash would otherwise not be known. Each block is also computationally impractical to modify once it has been in the chain for a while because every block after it would also have to be regenerated. These properties are what make bitcoins transactions irreversible. The blockchain is the main innovation of Bitcoin.
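
As a toy illustration (this is not the real Bitcoin block format), the hash chaining described above can be sketched in a few lines of Python; changing the data in any earlier block changes its hash, which breaks the previous-hash link of every block after it:

import hashlib

def block_hash(previous_hash, transactions):
    # a block's hash commits to the previous block's hash and its own data
    return hashlib.sha256((previous_hash + transactions).encode()).hexdigest()

genesis = block_hash("0" * 64, "genesis")
block_1 = block_hash(genesis, "alice->bob:1.0")
block_2 = block_hash(block_1, "bob->carol:0.5")
print(genesis, block_1, block_2, sep="\n")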

Mining

Mining is the process of adding transaction records to bitcoin's public ledger of past transactions. The term "mining rig" refers to a single computer system that performs the necessary computations for "mining".

The blockchain serves to confirm transactions to the rest of the network as having taken place. Bitcoin nodes use the blockchain to distinguish legitimate Bitcoin transactions from attempts to re-spend coins that have already been spent elsewhere.

Node

Any computer that connects to the bitcoin network is called a node. Nodes that fully verify all of the rules of bitcoin are called full nodes. The most popular software implementation of full nodes is called bitcoin-core, its releases can be found on their github page

What is a Full Node

A full node is a node (a computer system with bitcoin-core running on it) which downloads every block and transaction and checks them against bitcoin's consensus rules, fully validating transactions and blocks. Almost all full nodes also help the network by accepting transactions and blocks from other full nodes, validating those transactions and blocks, and then relaying them to further full nodes.

Some examples of consensus rules:

  • Blocks may only create a certain number of bitcoins. (Currently 6.25 BTC per block.)
  • Transactions must have correct signatures for the bitcoins being spent.
  • Transactions/blocks must be in the correct data format.
  • Within a single blockchain, a transaction output cannot be double-spent.

At minimum, a full node must download every transaction that has ever taken place, all new transactions, and all block headers. Additionally, full nodes must store information about every unspent transaction output until it is spent.

By default full nodes are inefficient in that they download each new transaction at least twice, and they store the entire block chain (more than 165 GB as of 20180214) forever, even though only the unspent transaction outputs (<2 GB) are required. Performance can be improved by enabling -blocksonly mode and enabling pruning.
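
For example, in bitcoin-core both of these can be set in bitcoin.conf (prune is specified in MiB, with 550 being the minimum):

blocksonly=1
prune=550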

Archival Nodes

A subset of full nodes also accept incoming connections and upload old blocks to other peers on the network. This happens if the software is run with -listen=1 as is default.

Contrary to some popular misconceptions, being an archival node is not necessary to being a full node. If a user’s bandwidth is constrained then they can use -listen=0, if their disk space is constrained they can use pruning, all the while still being a fully-validating node that enforces bitcoin’s consensus rules and contributing to bitcoin’s overall security.

Most information was referenced from this wiki.

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Run a GETH Ethereum Light Node

ethereum

In this tutorial we will install the Geth implementation of Ethereum on Linux, and we will be using the light sync mode, which will get you up and running in minutes as it only downloads a couple of GBs.

Once we have our node setup we will be using the API and Web3 to interact with our ethereum node.

To understand the basics of blockchain better, you can read my post: - The Basics of Blockchain

Environment Setup

We require go to be installed on our server, so from golang’s releases page get the latest version of Go and extract it:

cd /tmp
wget https://go.dev/dl/go1.17.4.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.17.4.linux-amd64.tar.gz

Setup environment for Go in /etc/profile.d/go.sh:

export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin

While you are in your session, source the file:

source /etc/profile.d/go.sh

And verify that Go is installed:

go version
go version go1.17.4 linux/amd64

Download Geth

From the geth releases page, get the latest version, extract and setup a symlink to the latest version:

cd /tmp
wget https://gethstore.blob.core.windows.net/builds/geth-linux-amd64-1.10.13-7a0c19f8.tar.gz
tar -xvf geth-linux-amd64-1.10.13-7a0c19f8.tar.gz
sudo mkdir -p /usr/local/geth/1.10.13/bin
sudo mv geth-linux-amd64-1.10.13-7a0c19f8/geth /usr/local/geth/1.10.13/bin/geth
sudo ln -s /usr/local/geth/1.10.13 /usr/local/geth/current

Setup the environment for geth on /etc/profile.d/geth.sh:

export PATH=$PATH:/usr/local/geth/current/bin

Then source the file while you are still in your session:

source /etc/profile.d/geth.sh

You should be able to verify that geth is installed by running:

geth version
Geth
Version: 1.10.13-stable
Git Commit: eae3b1946a276ac099e0018fc792d9e8c3bfda6d
Git Commit Date: 20210929
Architecture: amd64
Go Version: go1.17
Operating System: linux
GOPATH=/home/ubuntu/go
GOROOT=/usr/local/go

Setup Geth

Create the data directory for geth and change the ownership of the directory to our user:

sudo mkdir -p /blockchain/ethereum/data
sudo chown -R ubuntu:ubuntu /blockchain/ethereum

Run geth in the foreground to test:

geth --ropsten \
  --datadir /blockchain/ethereum/data --datadir.minfreedisk 1024 \
  --cache 128 --syncmode "light" \
  --http --http.addr 0.0.0.0 --http.port 8545 \
  --metrics --metrics.addr 0.0.0.0 --metrics.port 6060

If everything goes okay, you can stop the process, and remove the ropsten testnet blockchain and state databases:

geth --ropsten removedb

Create the systemd unit file in /etc/systemd/system/geth.service:

[Unit]
Description=Geth Node
After=network.target auditd.service
Wants=network.target

[Service]
WorkingDirectory=/home/ubuntu
ExecStart=/usr/local/geth/current/bin/geth \
  --ropsten \
  --datadir /blockchain/ethereum/data \
  --datadir.minfreedisk 1024 \
  --cache 128 \
  --syncmode "light" \
  --http --http.addr 0.0.0.0 --http.port 8545 \
  --http.api "admin,db,debug,eth,miner,net,personal,txpool,web3" \
  --http.corsdomain "*" \
  --metrics --metrics.addr 0.0.0.0 --metrics.port 6060 \
  --whitelist 10920274=0xfd652086d220d506ae5b7cb80fde97d2f3f7028d346cc7d9d384a83d3d638532
User=ubuntu
Group=ubuntu
Restart=on-failure
RestartSec=120s
KillMode=process
KillSignal=SIGINT
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target
Alias=geth.service

The value for --whitelist can be retrieved from this issue or this post, and extracted from the post:

“due to the London upgrade you’ll probably end up on the chain that isn’t tracked by Etherscan and Metamask. To ensure you only retrieve blocks from peers on that chain, include the following string in your geth start command”

Since we created a new systemd unit file, reload the systemd daemon:

sudo systemctl daemon-reload

Enable and start geth:

sudo systemctl enable geth
sudo systemctl restart geth

You can tail the logs to ensure everything runs as it should:

sudo journalctl -fu geth

API

Following the JSON-RPC documentation, create your account:

curl -H "Content-Type: application/json" -XPOST http://localhost:8545/ -d '{"jsonrpc":"2.0","method":"personal_newAccount","params":["password"],"id":1}'

The response should provide your ropsten testnet address:

{"jsonrpc":"2.0","id":1,"result":"0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8"}

We can list all our ethereum addresses by calling the eth_accounts method:

curl -H "Content-Type: application/json" -XPOST http://localhost:8545/ -d '{"jsonrpc":"2.0","method":"eth_accounts","params":[],"id":1}'

We can then check our balance with eth_getBalance, where we pass the ethereum address (in hex format), followed by the block number, for which we will use "latest":

curl -H "Content-Type: application/json" -XPOST http://localhost:8545/ -d '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8", "latest"],"id":1}'

You can use the following faucets to send testnet funds to your account: - https://faucet.dimensions.network/ - https://faucet.ropsten.be/

After sending funds to your account, we can check our balance again:

curl -H "Content-Type: application/json" -XPOST http://localhost:8545/ -d '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8", "latest"],"id":1}'

And our balance should reflect in our account:

{"jsonrpc":"2.0","id":1,"result":"0x429d069189e0000"}

Hexadecimal and Wei

As you can see, the value of the balance for our ethereum address is in hexadecimal format; we can convert it to decimal format:

echo $((0x429d069189e0000))
300000000000000000

We can use python to convert to decimal using the int() function, passing the hexadecimal value and its base to convert it into an integer; the base for hexadecimal is 16:

>>> print(int('0x429d069189e0000', 16))
300000000000000000

The decimal value that was returned is the value in Wei, and the value of 1 ETH equals to 1,000,000,000,000,000,000 Wei.

Using gwei.io the conversions from 1 ETH are:

Wei: 1000000000000000000
Kwei: 1000000000000000
Mwei: 1000000000000
Gwei: 1000000000
Twei: 1000000
Pwei: 1000
ETH: 1

So now we can convert our balance from wei to ethereum:

  • your_balance_in_wei / unit_value_of_wei
  • 300000000000000000 / 1000000000000000000

python3 -c "print(300000000000000000 / 1000000000000000000)"
0.3

We can use this converter to make sure the math is correct.

To get the current gas price in wei:

curl -H "Content-Type: application/json" -XPOST http://localhost:8545/ -d '{"jsonrpc":"2.0","method":"eth_gasPrice","params":[],"id":1}'
{"jsonrpc":"2.0","id":1,"result":"0x73a20d04"}

CLI - Accounts

Create a password file in /tmp/.pass, then:

geth --datadir /blockchain/ethereum/data --keystore /blockchain/ethereum/data/keystore account new --password /tmp/.pass

Your new key was generated

Public address of the key:   0x5814D945EC909eb1307be4F133AaAB3dEF3572f0
Path of the secret key file: /blockchain/ethereum/data/keystore/UTC--2021-10-06T15-43-23.679655564Z--5814d945ec909eb1307be4f133aaab3def3572f0

- You can share your public address with anyone. Others need it to interact with you.
- You must NEVER share the secret key with anyone! The key controls access to your funds!
- You must BACKUP your key file! Without the key, it's impossible to access account funds!
- You must REMEMBER your password! Without the password, it's impossible to decrypt the key!

Then when you attach your console session, you will be able to see the address that we created:

geth attach /blockchain/ethereum/data/geth.ipc
> eth.accounts[0]
"0x5814d945ec909eb1307be4f133aaab3def3572f0"

CLI - Attach

Run the geth console:

geth attach /blockchain/ethereum/data/geth.ipc
Welcome to the Geth JavaScript console!

instance: Geth/v1.10.13-stable-eae3b194/linux-amd64/go1.17
at block: 11173667 (Wed Oct 06 2021 08:00:44 GMT+0200 (SAST))
 datadir: /blockchain/ethereum/data
 modules: admin:1.0 debug:1.0 eth:1.0 ethash:1.0 les:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 vflux:1.0 web3:1.0

To exit, press ctrl-d or type exit
>

We can run net to show us the peer count:

> net
{
  listening: true,
  peerCount: 1,
  version: "3",
  getListening: function(callback),
  getPeerCount: function(callback),
  getVersion: function(callback)
}

Or if we just want to access the peerCount value:

> net.peerCount
1

To view the peers that are connected:

> admin.peers
[{
    caps: ["eth/66", "les/2", "les/3", "les/4", "snap/1"],
    enode: "enode://3b76ec5359e59048721de8b6ff97a064ea280233d37433222ce7efcdac700c987326734983c9b65f8f1914c40e1efd6b43999912a3bca208fcbb540a678db110@93.75.22.22:30308",
    enr: "enr:-KO4QD2mp_FKB4ZDpfOAD_ziVnMXo1Mcd-FQl9Abj__EJKr9As6UE0frpdaiOnWjqzuGLGaabaAkG7e2CvfY8LulI9ENg2V0aMfGhHEZtrOAgmlkgnY0gmlwhF1LFhaDbGVzwQGJc2VjcDI1NmsxoQI7duxTWeWQSHId6Lb_l6Bk6igCM9N0MyIs5-_NrHAMmIRzbmFwwIN0Y3CCdmSDdWRwgnZk",
    id: "a95433e1bcbcc64f5d1ad8bd2535557d1f5ed2191a760f704d42a925656bb8de",
    name: "Geth/v1.10.9-stable-eae3b194/linux-amd64/go1.17",
    network: {
      inbound: false,
      localAddress: "192.168.0.120:55166",
      remoteAddress: "93.75.22.22:30308",
      static: false,
      trusted: false
    },
    protocols: {
      les: {
        difficulty: 35015228630523840,
        head: "1aa1db0e6810f504f1542e8c3c49cecf17b0c3246b41f45bede42723d22b7c0c",
        version: 4
      }
    }
}]

Check if the node is syncing:

> eth.syncing
{
  currentBlock: 11176044,
  highestBlock: 11176158,
  knownStates: 40043719,
  pulledStates: 39904521,
  startingBlock: 0
}

We can view our accounts:

> eth.accounts
["0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8"]

And get the balance:

> eth.getBalance(eth.accounts[0])
300000000000000000

CLI - SendTransaction

The account that will be receiving the funds (host-a):

$ geth attach /blockchain/ethereum/data/geth.ipc
> eth.accounts[0]
"0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8"

Its current balance:

> eth.getBalance(eth.accounts[0])
20485608293038927543

On the account that we will be sending from (host-b):

$ geth attach /blockchain/ethereum/data/geth.ipc
> eth.accounts[0]
"0xd490fb53c0e7d3c80153112a4bd135e2cf897282"

Its current balance:

> eth.getBalance(eth.accounts[0])
2001712477998186788

When we attempt to send 1 ETH to the recipient address:

> eth.sendTransaction({from: "0xd490fb53c0e7d3c80153112a4bd135e2cf897282", to: "0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8", value: "1000000000000000000"})
Error: authentication needed: password or unlock
  at web3.js:6357:37(47)
  at web3.js:5091:62(37)
  at <eval>:1:20(10)

You will notice that we need to unlock our account first:

> web3.personal.unlockAccount(web3.personal.listAccounts[0], null, 60)
Unlock account 0xd490fb53c0e7d3c80153112a4bd135e2cf897282
Passphrase:
true

Now we can send the transaction:

> eth.sendTransaction({from: "0xd490fb53c0e7d3c80153112a4bd135e2cf897282", to: "0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8", value: "1000000000000000000"})
"0x4bffabf28b71e6f83a48f0accb850b232dc3f482e30d942be3a2eb53d639d4bd"

And the transaction id can be looked up on the ropsten blockexplorer: - https://ropsten.etherscan.io/tx/0x4bffabf28b71e6f83a48f0accb850b232dc3f482e30d942be3a2eb53d639d4bd

And after the transaction has been confirmed, we can see that the funds were received in our receiving account:

> eth.getBalance(eth.accounts[0])
21485608293038927543

Web3

Run a python environment:

docker run -it python:3.8.5-slim bash

Install dependencies:

apt update
apt install python3-dev gcc -y
pip install web3[tester]

The examples I will be following were extracted from the documentation: - https://ethereum.org/ml/developers/tutorials/a-developers-guide-to-ethereum-part-one/

Instantiate a client and connect to your geth node; the documentation provides different methods of connecting, but I will be using the HTTPProvider to connect over the network:

>>> from web3 import Web3
>>> w3 = Web3(Web3.HTTPProvider('http://192.168.0.120:8545'))
>>> w3.isConnected()
True

List the accounts:

>>> w3.eth.accounts
['0x2b1718CdF7dBcc381267CCF43D320C6e194D6aa8']

Get the balance in Wei:

>>> account = w3.eth.accounts[0]
>>> w3.eth.get_balance(account)
300000000000000000

Convert from Wei to ETH:

>>> balance_wei = w3.eth.get_balance(account)
>>> w3.fromWei(balance_wei, 'ether')
Decimal('0.3')

Get the information about the latest block:

>>> w3.eth.get_block('latest')
AttributeDict({'baseFeePerGas': 9, 'difficulty': 2073277081, 'extraData': HexBytes('0x63732e7064782e65647520676574682076312e31302e38'), 'gasLimit': 8000000, 'gasUsed': 3361330, 'hash': HexBytes('0xd06a7a734413bcffa4d56617b7efb9ebd8e684c5fcc7fd4f3ec85b8b809b1a0b'), 'logsBloom': HexBytes('0x00000004080000000001000000000000000000000000000000000000000000860000000000000400000000010000000000000000001000000000000080240000000000000000000000000018000000000000000000040000000000000000000002000000020000000000000000000800000000000000200000000010000000800000000000000002000000020000000040000000000000000001000000000000020800000000000000000000000000000000000000000000000000000000800000004002008000000001000000800000000000000000000000000000000062004010202800004000000000000000000000000040000000000002200000000000'), 'miner': '0xe9e7034AeD5CE7f5b0D281CFE347B8a5c2c53504', 'mixHash': HexBytes('0x42641ef2d13826f9cb070516f81515464af9c5c0a36edaa7c250fec62d18a193'), 'nonce': HexBytes('0x670ce792aed73895'), 'number': 11173874, 'parentHash': HexBytes('0xe26e265b264e5158a46ee213d39150d90b532db061497027f35ad36e98458895'), 'receiptsRoot': HexBytes('0x08c15b7365caa993a3047a3093ae641d5b97c51aff058952ab48a56bdee9240b'), 'sha3Uncles': HexBytes('0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347'), 'size': 16130, 'stateRoot': HexBytes('0x9e2d99e11b4c71b93f14a563a87945c0ed89e577eefc860a7909bd6f3b8e669f'), 'timestamp': 1633502988, 'totalDifficulty': 35015626783987398, 'transactions': [HexBytes('0x9013555031ca4510e619968814b75ff428c595488c46a387a6b57774313f4686'), HexBytes('0x899f3dfa8cc0ce7eac397500a014dd624d50c0024e112fa3989403da5669b838'), HexBytes('0xf0c08c7e6849be5d23e0c603b405012a1baa4252884f8efac3244d3ed77b8622'), HexBytes('0xb6e03e10e0d6ced0a791f3a9474d760d7248301dc489c5b191aa82b1ef23e677'), HexBytes('0xb424a4da501df145346027c9c839ae9bf9a25f3672bf13fe097c39f46eda5028'), HexBytes('0xcb74ac5580485542ca532f5dc46798b84cb26d34ebc127871d6e2ffead6c32c7'), HexBytes('0xb61cf0eb92798885e4a6309d228e8a31e892e4353810593ba14a2737c1fcd53a'), HexBytes('0x20b27640c1b674be98d3051fac5dcf5ae50d5b7e957defc2336f07b99053fb2c'), HexBytes('0x2929a7384e5b47c4e414142623911a2deca95996e761bc10ccedf607342156af'), HexBytes('0x698af438f73bf384b7c35d4448c0e098d7744b4ce58327dc258a3d5706421c7e')], 'transactionsRoot': HexBytes('0x33cca53eabc2aed8cb0c8a5a7b771b9f14fd2e2aa2561195250411f0714ec768'), 'uncles': []})

Mining

Note: Light clients do not support mining

From another node, I'm running the fast sync mode on ropsten:

--syncmode "fast"

> eth.hashrate
43949

> eth.coinbase
"0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8"
> eth.getBalance(eth.coinbase).toNumber();
145000000000000000000

Delete Data

If we look at a fully synced ropsten “fast” node:

du -h /blockchain/ethereum/
4.0K  /blockchain/ethereum/data/keystore
856K  /blockchain/ethereum/data/geth/triecache
58G   /blockchain/ethereum/data/geth/chaindata/ancient
69G   /blockchain/ethereum/data/geth/chaindata
2.2M  /blockchain/ethereum/data/geth/nodes
188M  /blockchain/ethereum/data/geth/ethash
69G   /blockchain/ethereum/data/geth
69G   /blockchain/ethereum/data
69G   /blockchain/ethereum/

Remove the data with removedb:

geth --datadir /blockchain/ethereum/data removedb
INFO [10-06|20:01:52.061] Maximum peer count                       ETH=50 LES=0 total=50
INFO [10-06|20:01:52.061] Smartcard socket not found, disabling    err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [10-06|20:01:52.062] Set global gas cap                       cap=50,000,000
Remove full node state database (/blockchain/ethereum/data/geth/chaindata)? [y/n] y
Remove full node state database (/blockchain/ethereum/data/geth/chaindata)? [y/n] y
INFO [10-06|20:01:57.141] Database successfully deleted            path=/blockchain/ethereum/data/geth/chaindata elapsed=2.482s
Remove full node ancient database (/blockchain/ethereum/data/geth/chaindata/ancient)? [y/n] y
Remove full node ancient database (/blockchain/ethereum/data/geth/chaindata/ancient)? [y/n] y
INFO [10-06|20:02:05.645] Database successfully deleted            path=/blockchain/ethereum/data/geth/chaindata/ancient elapsed=589.737ms
INFO [10-06|20:02:05.645] Light node database missing              path=/blockchain/ethereum/data/geth/lightchaindata

Now when we list the data directory, we can see the data was removed:

du -h /blockchain/ethereum/
8.0K  /blockchain/ethereum/data/keystore
868K  /blockchain/ethereum/data/geth/triecache
4.0K  /blockchain/ethereum/data/geth/chaindata/ancient
180K  /blockchain/ethereum/data/geth/chaindata
1.4M  /blockchain/ethereum/data/geth/nodes
188M  /blockchain/ethereum/data/geth/ethash
190M  /blockchain/ethereum/data/geth
190M  /blockchain/ethereum/data
190M  /blockchain/ethereum/

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.