In this tutorial we will create an ansible role, publish it to github, then install the role locally and create an ansible playbook that uses it.
The source code for this blog post will be available on my github repository.
Now your ansible role should be committed and visible in git.
SSH Config Client Side
I will be referencing the git source url via SSH, and since I am using my default ssh key, the ssh config isn’t really needed, but if you are using a different version control system, with different ports or different ssh keys, the following ssh config snippet may be useful:
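For example, a snippet like the following in ~/.ssh/config (the hostname, port and key path are placeholders you would adjust for your own setup):

Host git.example.com
  HostName git.example.com
  User git
  Port 2222
  IdentityFile ~/.ssh/my_git_key
  IdentitiesOnly yes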
If you won’t be using SSH as the source url in your ansible setup for your role, you can skip the SSH setup.
Installing the Ansible Role from Git
When installing roles, ansible installs them by default under: ~/.ansible/roles, /usr/share/ansible/roles or /etc/ansible/roles.
From our previous steps, we still have the ansible role content locally (not under the default installed directory), so saying that we are "installing" the role may sound like we are doing double the work. The intention, however, is that your ansible role is centralized and versioned on git, so that on new servers or workstations where you want to consume the role, and where it won't already be present, you can install it from git.
To install the role from Git, we need to populate a requirements.yml file. First create a project directory:
$ mkdir ~/my-project
$ cd ~/my-project
The requirements file is used to define where our role is located, which version and the type of version control, the requirements.yml:
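The original file isn't shown in this excerpt, but a minimal sketch could look like this (the repository URL is a placeholder for wherever your role is hosted):

- src: git@github.com:your-user/ansible-role-ssh-config.git
  scm: git
  version: main
  name: ssh_config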
For other variations of using the requirements file, you can have a look at their documentation
Then install the ansible role from our requirements file (I have used --force to overwrite my current one while testing):
$ ansible-galaxy install -r requirements.yml --force
Starting galaxy role install process
- changing role ssh_config from main to main
- extracting ssh_config to /Users/ruan/.ansible/roles/ssh_config
- ssh_config (main) was installed successfully
Ansible Playbook
Define the ansible playbook to use the role that we installed from git, in a file called playbook.yml:
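A minimal sketch of such a playbook (the target hosts are an assumption, adjust for your inventory):

- hosts: localhost
  roles:
    - ssh_config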
In this tutorial I will demonstrate how to use Terraform (an Infrastructure as Code tool) to provision an AWS EC2 instance, and the source code that we will be using in this tutorial will be published to my terraformfiles github repository.
Requirements
To follow along this tutorial, you will need an AWS Account and Terraform installed
You will notice that I am defining my profile name default from the ~/.aws/credentials credential provider in order for terraform to authenticate with AWS.
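For reference, a provider configuration along those lines might look like this (a sketch; the exact file isn't shown in this excerpt):

provider "aws" {
  region  = var.aws_region
  profile = "default"
}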
Next I am defining main.tf, which is the file where we define our aws resources:
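The aws_instance resource itself isn't included in this excerpt, but since the outputs below reference aws_instance.ec2, a minimal sketch could look like the following (the AMI ID is a placeholder):

resource "aws_instance" "ec2" {
  ami           = "ami-xxxxxxxxxxxxxxxxx" # replace with an AMI ID for your region
  instance_type = var.instance_type
  key_name      = var.ssh_keyname
  subnet_id     = var.subnet_id

  root_block_device {
    volume_size = var.ebs_root_size_in_gb
  }

  tags = var.default_tags
}

The variables referenced above are defined as follows: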
variable "default_tags" {
  default = {
    Environment = "test"
    Owner       = "ruan.bekker"
    Project     = "terraform-blogpost"
    CostCenter  = "engineering"
    ManagedBy   = "terraform"
  }
}

variable "aws_region" {
  type        = string
  default     = "eu-west-1"
  description = "the region to use in aws"
}

variable "vpc_id" {
  type        = string
  description = "the vpc to use"
}

variable "ssh_keyname" {
  type        = string
  description = "ssh key to use"
}

variable "subnet_id" {
  type        = string
  description = "the subnet id where the ec2 instance needs to be placed in"
}

variable "instance_type" {
  type        = string
  default     = "t3.nano"
  description = "the instance type to use"
}

variable "project_id" {
  type        = string
  default     = "terraform-blogpost"
  description = "the project name"
}

variable "ebs_root_size_in_gb" {
  type        = number
  default     = 10
  description = "the size in GB for the root disk"
}

variable "environment_name" {
  type        = string
  default     = "dev"
  description = "the environment this resource will go to (assumption being made theres one account)"
}
The next file is our locals.tf which just concatenates our project id and environment name:
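The original locals.tf isn't shown here, but based on that description it could be a sketch like:

locals {
  name = "${var.project_id}-${var.environment_name}"
}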
Then our outputs.tf for the values that terraform should output:
output "id" {
  description = "The ec2 instance id"
  value       = aws_instance.ec2.id
  sensitive   = false
}

output "ip" {
  description = "The ec2 instance public ip address"
  value       = aws_instance.ec2.public_ip
  sensitive   = false
}

output "subnet_id" {
  description = "the subnet id which will be used"
  value       = var.subnet_id
  sensitive   = false
}
Then lastly our terraform.tfvars, to which you will need to supply your own values to match your AWS account:
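A sketch with placeholder values (replace them with values from your own account):

aws_region       = "eu-west-1"
vpc_id           = "vpc-xxxxxxxx"
subnet_id        = "subnet-xxxxxxxx"
ssh_keyname      = "my-ssh-key"
instance_type    = "t3.nano"
environment_name = "dev"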
Now that all our configuration is in place, we need to initialize terraform by downloading the providers:
terraform init
Once terraform init has completed, we can run a terraform plan, which will show us what terraform will do. Since terraform.tfvars is the default variables file, we don't have to specify the file name, but since I want to be explicit, I will include it (useful should you want to change the file name):
terraform plan -var-file="terraform.tfvars"
Now is a good time to review what terraform wants to do by viewing the plan output. Once you are happy, you can deploy the changes by running a terraform apply:
terraform apply -var-file="terraform.tfvars"
Optional: you can override variables by either updating terraform.tfvars or by appending them in-line, for example terraform apply -var-file="terraform.tfvars" -var="ssh_keyname=default_key". A successful apply should show something like this:
You can access the instance by SSHing to the IP address returned in the output, using the SSH key that you provided, or you can use terraform output to retrieve the output value:
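For example (the SSH username and key path depend on the AMI and key you used, so treat them as placeholders):

$ terraform output ip
$ ssh -i ~/.ssh/my-ssh-key.pem ec2-user@<ip-from-the-output>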
In this tutorial we will set up a python bot for our matrix chat server. We will only do a couple of basic commands, so that you have a solid base to build from.
Matrix Server
In our previous post we set up a matrix and element server, so if you are following along, head over to that post to set up your matrix server before continuing.
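To give an idea of where this is heading, a minimal bot could look something like the sketch below. It uses the matrix-nio client library, which may differ from the library used in the full post, and the homeserver URL, user and password are placeholders:

import asyncio
from nio import AsyncClient, RoomMessageText


async def main():
    # placeholders: point these at your own homeserver and bot account
    client = AsyncClient("https://matrix.example.com", "@bot:example.com")
    await client.login("SUPER-SECRET-PASSWORD")

    async def on_message(room, event):
        # respond to a basic !ping command
        if event.body.strip() == "!ping":
            await client.room_send(
                room_id=room.room_id,
                message_type="m.room.message",
                content={"msgtype": "m.text", "body": "pong"},
            )

    client.add_event_callback(on_message, RoomMessageText)
    await client.sync_forever(timeout=30000)


asyncio.run(main())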
In this tutorial we will set up a Matrix and Element Chat Server using Docker on Ubuntu.
What is Matrix?
Matrix is an open standard and communication protocol for secure, decentralised, real-time communication. For more information on Matrix, see their website
Install Docker
I will assume that docker and docker compose are installed; if not, follow this resource to install them:
- https://docs.docker.com/get-docker/
Create the following docker-compose.yml for the element, synapse and postgres services:

version: '3.8'

services:
  element:
    image: vectorim/element-web:latest
    restart: unless-stopped
    volumes:
      - ./element-config.json:/app/config.json
    networks:
      default:
        ipv4_address: 10.10.10.3

  synapse:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 10.10.10.4
    volumes:
      - ./synapse:/data

  postgres:
    image: postgres:11
    restart: unless-stopped
    networks:
      default:
        ipv4_address: 10.10.10.2
    volumes:
      - ./postgresdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=synapse
      - POSTGRES_USER=synapse
      - POSTGRES_PASSWORD=STRONGPASSWORD
      - POSTGRES_INITDB_ARGS=--lc-collate C --lc-ctype C --encoding UTF8

networks:
  default:
    external:
      name: matrix
In this quick tutorial we will demonstrate how to load additional environment variables from file into your python application.
It loads key-value pairs from a file and appends them to the application's runtime environment variables, so the environment outside of your process is unaffected.
python-dotenv
We will make use of the package python-dotenv so we will need to install the python package with pip:
python3 -m pip install python-dotenv
The env file
I will create the .env in my current working directory with the content:
APPLICATION_NAME=foo
APPLICATION_OWNER=bar
The application
This is a basic demonstration of a python application which loads the additional environment variables from file, then we will use json.dumps(.., indent=2) so that we can get a pretty print of all our environment variables:
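A minimal sketch of such an application, assuming the .env file from above sits in the working directory and the script is saved as app.py:

import json
import os

from dotenv import load_dotenv

# load the key/value pairs from .env into this process' environment
load_dotenv()

# pretty print all environment variables
print(json.dumps(dict(os.environ), indent=2))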
As we can see, our two environment variables were added to the environment. If you would like to access them, we can do the following:
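For example, using os.environ:

import os

from dotenv import load_dotenv

load_dotenv()

print(os.environ.get("APPLICATION_NAME"))   # foo
print(os.environ.get("APPLICATION_OWNER"))  # bar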
In this tutorial we will run a basic api using flask-restful. It will only have two methods, a GET and a POST, for the purpose of demonstration.
What is Flask Restful
Flask-RESTful is an extension for Flask that adds support for quickly building REST APIs. It is a lightweight abstraction that works with your existing ORM/libraries. Flask-RESTful encourages best practices with minimal setup.
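The application code itself isn't included in this excerpt, so here is a sketch of an app.py that is consistent with the tests below:

from flask import Flask, request
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)


class HelloWorld(Resource):
    def get(self):
        return {"hello": "world"}

    def post(self):
        payload = request.get_json()
        # echo back the posted fields
        return {
            "firstname": payload.get("firstname"),
            "lastname": payload.get("lastname"),
        }


api.add_resource(HelloWorld, "/")

if __name__ == "__main__":
    app.run(debug=True)

With the app in place, the unit tests in test_app.py look like this: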
import json
import unittest

import app as api


class TestFlaskApi(unittest.TestCase):
    def setUp(self):
        self.app = api.app.test_client()

    def test_get_method(self):
        response = self.app.get("/")
        self.assertEqual(
            response.get_json(),
            {"hello": "world"},
        )

    def test_post_method(self):
        # request payload
        payload = json.dumps({"firstname": "ruan", "lastname": "bekker"})
        # make request
        response = self.app.post(
            "/",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        # assert
        self.assertEqual(str, type(response.json['lastname']))
        self.assertEqual(200, response.status_code)

    def tearDown(self):
        # delete if anything was created
        pass


if __name__ == '__main__':
    unittest.main()
Then we can run our test with:
python -m unittest discover -p test_app.py -v
Since our first test expects {"hello": "world"}, it will pass, and in our second test we validate that our POST request returns a 200 response code and that the lastname field is of string type.
The output of our tests will show something like this:
test_get_method (test_app.TestFlaskApi) ... ok
test_post_method (test_app.TestFlaskApi) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.009s
OK
More on Flask-Restful
This was a very basic example, and their documentation provides a great tutorial on how to extend from this example. This is also a great blogpost on testing REST APIs.
Thank You
Thanks for reading, if you like my content, check out my website, read my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial we will set up two containers: openldap and an openldap-ui to manage our users on openldap.
What is OpenLDAP
OpenLDAP is an open source implementation of the Lightweight Directory Access Protocol, which makes it possible for organizations to use centralized authentication and directory access services over a network.
Configuration
This stack will boot an openldap and an openldap-ui container with the following docker-compose.yml:
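The original compose file isn't shown in this excerpt; a minimal sketch, assuming the commonly used osixia/openldap image and phpLDAPadmin as the UI (swap these for the images you prefer):

version: '3.8'

services:
  openldap:
    image: osixia/openldap:latest
    restart: unless-stopped
    environment:
      - LDAP_ORGANISATION=example
      - LDAP_DOMAIN=example.org
      - LDAP_ADMIN_PASSWORD=STRONGPASSWORD
    volumes:
      - ./ldap/data:/var/lib/ldap
      - ./ldap/config:/etc/ldap/slapd.d
    ports:
      - 389:389

  openldap-ui:
    image: osixia/phpldapadmin:latest
    restart: unless-stopped
    environment:
      - PHPLDAPADMIN_LDAP_HOSTS=openldap
      - PHPLDAPADMIN_HTTPS=false
    ports:
      - 8080:80
    depends_on:
      - openldap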
In our main.tf, we are first using a data resource to query cloudflare for our domain rbkr.xyz and then access the attribute id which we will be using in our cloudflare_record resource so that it knows which domain to add the DNS record for.
Then we are going to create the A record foobar and provide the value of 127.0.0.1:
data "cloudflare_zone" "this" {
  name = "rbkr.xyz"
}

resource "cloudflare_record" "foobar" {
  zone_id = data.cloudflare_zone.this.id
  name    = "foobar"
  value   = "127.0.0.1"
  type    = "A"
  proxied = false
}
Once our configuration code is in place, we can run an init which will download the providers:
> terraform init
Once that is done, we can run a plan so we can see what will be deployed, but since our variables.tf has no default values, we will either have to define them in terraform.tfvars or pass them in-line.
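For reference, a sketch of what those variables might look like (the original variables.tf isn't shown here):

variable "cloudflare_email" {
  type        = string
  description = "the email address for your cloudflare account"
}

variable "cloudflare_api_token" {
  type        = string
  sensitive   = true
  description = "the cloudflare api token"
}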
I will be using it in-line for this demonstration:
> terraform plan -var "cloudflare_email=$EMAIL" -var "cloudflare_api_token=$API_TOKEN"
Once you are happy, you can run an apply which will deploy the changes:
> terraform apply -var "cloudflare_email=$EMAIL" -var "cloudflare_api_token=$API_TOKEN"

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # cloudflare_record.foobar will be created
  + resource "cloudflare_record" "foobar" {
      + allow_overwrite = false
      + created_on      = (known after apply)
      + hostname        = (known after apply)
      + id              = (known after apply)
      + metadata        = (known after apply)
      + modified_on     = (known after apply)
      + name            = "foobar"
      + proxiable       = (known after apply)
      + proxied         = false
      + ttl             = (known after apply)
      + type            = "A"
      + value           = "127.0.0.1"
      + zone_id         = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + metadata = (sensitive value)
  + record   = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

cloudflare_record.foobar: Creating...
cloudflare_record.foobar: Creation complete after 4s [id=xxxxxxxxxxxxxxxxxxxxx]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

metadata = <sensitive>
record = "foobar.rbkr.xyz"
Test DNS
We can now test if this is working as expected with a dns utility like dig:
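For example, dig with +short should return the value we configured for the record:

$ dig +short foobar.rbkr.xyz
127.0.0.1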
In this tutorial, we will cover the basics of blockchain and why you would want to run a full-node such as bitcoin, ethereum, etc.
Blockchain Basics
Before we start setting up our bitcoin full-node, we first need to get through some blockchain basics. If you are already aware of them, you can skip to the setup section of this post.
Block
Transaction data is permanently recorded into files called blocks. You can think of it as a transaction ledger. Blocks are organised into a linear sequence over time.
New transactions are constantly being processed by miners into new blocks which are added to the end of the chain. As blocks are buried deeper and deeper into the blockchain they become harder and harder to change or remove, which gives rise to Bitcoin’s irreversible transactions.
The first block added to the blockchain is referred to as the genesis block
Blockchain
A blockchain is a transaction database shared by all nodes participating in a system based on the bitcoin protocol. A full copy of a currency’s blockchain contains every transaction ever executed in the currency. With this information, one can find out how much value belonged to each address at any point in history.
Every block contains a hash of the previous block. This has the effect of creating a chain of blocks from the genesis block to the current block. Each block is guaranteed to come after the previous block chronologically because the previous block’s hash would otherwise not be known. Each block is also computationally impractical to modify once it has been in the chain for a while because every block after it would also have to be regenerated. These properties are what make bitcoins transactions irreversible. The blockchain is the main innovation of Bitcoin.
Mining
Mining is the process of adding transaction records to bitcoin’s public ledger of past transactions. The term “mining rig” refers to a single computer system that performs the necessary computations for “mining”.
The blockchain serves to confirm transactions to the rest of the network as having taken place. Bitcoin nodes use the blockchain to distinguish legitimate Bitcoin transactions from attempts to re-spend coins that have already been spent elsewhere.
Node
Any computer that connects to the bitcoin network is called a node. Nodes that fully verify all of the rules of bitcoin are called full nodes. The most popular software implementation of full nodes is called bitcoin-core, its releases can be found on their github page
What is a Full Node
A full node is a node (a computer system running bitcoin-core) which downloads every block and transaction and checks them against bitcoin’s consensus rules, fully validating transactions and blocks. Almost all full nodes also help the network by accepting transactions and blocks from other full nodes, validating those transactions and blocks, and then relaying them to further full nodes.
Some examples of consensus rules:
Blocks may only create a certain number of bitcoins. (Currently 6.25 BTC per block.)
Transactions must have correct signatures for the bitcoins being spent.
Transactions/blocks must be in the correct data format.
Within a single blockchain, a transaction output cannot be double-spent.
At minimum, a full node must download every transaction that has ever taken place, all new transactions, and all block headers. Additionally, full nodes must store information about every unspent transaction output until it is spent.
By default full nodes are inefficient in that they download each new transaction at least twice, and they store the entire block chain (more than 165 GB as of 20180214) forever, even though only the unspent transaction outputs (<2 GB) are required. Performance can be improved by enabling -blocksonly mode and enabling pruning.
Archival Nodes
A subset of full nodes also accept incoming connections and upload old blocks to other peers on the network. This happens if the software is run with -listen=1 as is default.
Contrary to some popular misconceptions, being an archival node is not necessary to being a full node. If a user’s bandwidth is constrained then they can use -listen=0, if their disk space is constrained they can use pruning, all the while still being a fully-validating node that enforces bitcoin’s consensus rules and contributing to bitcoin’s overall security.
In this tutorial we will install the Geth implementation of Ethereum on Linux, using light sync mode, which only downloads a couple of GB and will get you up and running in minutes.
Once we have our node set up, we will be using the API and Web3 to interact with our ethereum node.
The values such as --whitelist can be retrieved from this issue or this post and extracted from the post:
“due to the London upgrade you’ll probably end up on the chain that isn’t tracked by Etherscan and Metamask. To ensure you only retrieve blocks from peers on that chain, include the following string in your geth start command”
Since we created a new systemd unit file, reload the systemd daemon:
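For example (the unit name geth.service is an assumption based on this setup):

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now geth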
We can then check our balance with eth_getBalance, where we pass the ethereum address in hex format, followed by the block number (we will use "latest"):
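A sketch of the JSON-RPC call using curl, assuming the node's HTTP API is enabled on localhost port 8545 and using a placeholder address:

$ curl -s -X POST -H 'Content-Type: application/json' \
    --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8", "latest"],"id":1}' \
    http://localhost:8545
{"jsonrpc":"2.0","id":1,"result":"0x429d069189e0000"}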
As you can notice the value of our balance for our ethereum address is in hexadecimal format, we can convert it to decimal format:
$ echo $((0x429d069189e0000))
300000000000000000
We can also use python to convert to decimal using the int() function, passing the hexadecimal value and its base (16 for hexadecimal):
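For example:

$ python3 -c 'print(int("0x429d069189e0000", 16))'
300000000000000000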
Next, we create a new account (the password is read from /tmp/.pass):

geth --datadir /blockchain/ethereum/data --keystore /blockchain/ethereum/data/keystore account new --password /tmp/.pass

Your new key was generated

Public address of the key:   0x5814D945EC909eb1307be4F133AaAB3dEF3572f0
Path of the secret key file: /blockchain/ethereum/data/keystore/UTC--2021-10-06T15-43-23.679655564Z--5814d945ec909eb1307be4f133aaab3def3572f0

- You can share your public address with anyone. Others need it to interact with you.
- You must NEVER share the secret key with anyone! The key controls access to your funds!
- You must BACKUP your key file! Without the key, it's impossible to access account funds!
- You must REMEMBER your password! Without the password, it's impossible to decrypt the key!
Then when you attach your console session, you will be able to see the address that we created:
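For example, attaching over IPC (the socket path follows from the --datadir used above) and listing the accounts:

$ geth attach /blockchain/ethereum/data/geth.ipc
> eth.accounts
["0x5814d945ec909eb1307be4f133aaab3def3572f0"]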
When we attempt to send 1 ETH to the recipient address:
> eth.sendTransaction({from: "0xd490fb53c0e7d3c80153112a4bd135e2cf897282", to: "0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8", value: "1000000000000000000"})
Error: authentication needed: password or unlock
	at web3.js:6357:37(47)
	at web3.js:5091:62(37)
	at <eval>:1:20(10)
You will notice that we need to unlock our account first:
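For example, from the geth console (this will prompt for the account's passphrase):

> personal.unlockAccount("0xd490fb53c0e7d3c80153112a4bd135e2cf897282")
Unlock account 0xd490fb53c0e7d3c80153112a4bd135e2cf897282
Passphrase:
true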
Instantiate a client and connect to your geth node. [this documentation] provides different methods of connecting, but I will be using the HTTPProvider to connect over the network:
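A minimal sketch, assuming web3.py version 5 and that the node's HTTP endpoint is exposed on localhost port 8545:

from web3 import Web3

# connect to the geth node over http
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
print(w3.isConnected())

# check the balance (in wei) of an address
address = Web3.toChecksumAddress("0x2b1718cdf7dbcc381267ccf43d320c6e194d6aa8")
print(w3.eth.getBalance(address))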
To remove the local chain data, we can run geth removedb:

geth --datadir /blockchain/ethereum/data removedb

INFO [10-06|20:01:52.061] Maximum peer count                       ETH=50 LES=0 total=50
INFO [10-06|20:01:52.061] Smartcard socket not found, disabling    err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [10-06|20:01:52.062] Set global gas cap                       cap=50,000,000
Remove full node state database (/blockchain/ethereum/data/geth/chaindata)? [y/n] y
Remove full node state database (/blockchain/ethereum/data/geth/chaindata)? [y/n] y
INFO [10-06|20:01:57.141] Database successfully deleted path=/blockchain/ethereum/data/geth/chaindata elapsed=2.482s
Remove full node ancient database (/blockchain/ethereum/data/geth/chaindata/ancient)? [y/n] y
Remove full node ancient database (/blockchain/ethereum/data/geth/chaindata/ancient)? [y/n] y
INFO [10-06|20:02:05.645] Database successfully deleted path=/blockchain/ethereum/data/geth/chaindata/ancient elapsed=589.737ms
INFO [10-06|20:02:05.645] Light node database missing path=/blockchain/ethereum/data/geth/lightchaindata
Now when we list the data directory, we can see the data was removed:
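For example (the chaindata directory should no longer be present):

$ ls /blockchain/ethereum/data/geth/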