We would like to have Let's Encrypt certificates on our web application, issued by cert-manager using the DNS challenge against Cloudflare. I am using the DNS challenge instead of the HTTP challenge because my Kubernetes environment runs locally on my laptop: there is no direct HTTP route into it from the internet, and I would prefer not to expose the endpoints to the public internet.
Our ingress controller will be ingress-nginx, and our endpoints will be private since they resolve to private IP addresses, hence DNS validation instead of HTTP.
To follow along in this tutorial you will need the following:
https://blog.ruanbekker.com/blog/2022/09/20/kind-for-local-kubernetes-clusters/
Cloudflare Account
Patience (just kidding, I will try my best to make it easy)
If you already have a Kubernetes Cluster, you can skip this step.
Define the kind-config.yaml:
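(A sketch of such a config; the cluster name and port mappings are my assumptions, adjust to your needs.)

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cert-manager-demo
nodes:
  - role: control-plane
    extraPortMappings:
      # expose the ingress controller's ports on the host
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
```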
Then create the cluster with kind:
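(Assuming the config file above.)

```bash
kind create cluster --name cert-manager-demo --config kind-config.yaml
```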
First we need to install an ingress controller, and I am opting to use ingress-nginx, so first we need to add the helm repository to our local repositories:
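```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
```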
Then we need to update our repositories:
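```bash
helm repo update
```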
Then we can install the helm release:
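(A sketch of the install; the release name and namespace are assumptions.)

```bash
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```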
You can view all the default values from their GitHub repository where the chart is hosted.
Once the release has been deployed, you should see the ingress-nginx pod running in the ingress-nginx namespace:
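```bash
kubectl get pods -n ingress-nginx
```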
The next step is to install cert-manager using helm; first add the repository:
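```bash
helm repo add jetstack https://charts.jetstack.io
```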
Update the repositories:
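```bash
helm repo update
```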
Then install the cert-manager release:
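(A sketch; installCRDs=true ensures the cert-manager CRDs are installed with the release.)

```bash
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true
```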
We need to grant Cert-Manager access to make DNS changes on our Cloudflare account for DNS validation on our behalf, and in order to do that, we need to create a Cloudflare API Token.
As per the cert-manager documentation, from your profile select API Tokens, create an API Token, and select the Edit Zone DNS template.
Then select the following:
Permissions:
Zone: DNS -> Edit
Zone: Zone -> Read
Zone Resources:
Then create the token and save the value somewhere safe, as we will be using it in the next step.
First, we need to create a Kubernetes secret with the API Token that we created in the previous step.
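(A sketch; the secret name and key are assumptions and must match what the ClusterIssuer below references.)

```bash
kubectl -n cert-manager create secret generic cloudflare-api-token-secret \
  --from-literal=api-token='<your-cloudflare-api-token>'
```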
Then create the clusterissuer.yaml:
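(A sketch of the ClusterIssuer; the issuer name and email address are assumptions.)

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
```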
Then create the cluster issuer:
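```bash
kubectl apply -f clusterissuer.yaml
```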
Now that we have our ClusterIssuer created, we can request a certificate. In my scenario I have a domain example.com hosted on Cloudflare, and I would like to create a wildcard certificate for the sub-domain *.workshop.example.com.
Certificates are scoped to a namespace, while ClusterIssuers are cluster-wide, therefore I am prefixing my certificate with the namespace (just my personal preference).
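(A sketch of the certificate; the resource and secret names are assumptions, following the namespace-prefix convention mentioned above.)

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: default-wildcard-workshop-certificate
  namespace: default
spec:
  secretName: wildcard-workshop-example-com-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - "*.workshop.example.com"
```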
Before creating the certificate, I created private DNS entries for the names mentioned in the manifest above, like the following:
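(Illustrated in zone-file style.)

```
workshop.example.com      A      10.2.24.254
*.workshop.example.com    CNAME  workshop.example.com
```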
To explain why I am creating 2 entries in the DNS configuration above:
10.2.24.254 is my LoadBalancer IP address. I have a static A record for workshop.example.com pointing to it, so if my LoadBalancer IP address ever changes, I can just change this one record.
I am creating a wildcard DNS entry for *.workshop.example.com as a CNAME record that resolves to workshop.example.com, so it will essentially respond with the LoadBalancer IP.
So let's say I create test1.workshop.example.com and test2.workshop.example.com: they will resolve to the LoadBalancer IP behind workshop.example.com, and as mentioned before, if the LoadBalancer IP ever changes, I only have to update the A record of workshop.example.com.
Then after DNS was created, I went ahead and created the certificate:
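(Assuming the manifest above is saved as certificate.yaml.)

```bash
kubectl apply -f certificate.yaml
```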
You can follow the progress by checking the certificate status:
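(The certificate name is an assumption, matching the sketch above.)

```bash
kubectl -n default describe certificate default-wildcard-workshop-certificate
```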
Let's deploy an nginx web server. I have concatenated the following into one manifest called deployment.yaml:
Deployment
Service
Ingress
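(A sketch of such a manifest; the names are assumptions, but the ingress host and TLS secret follow the certificate above.)

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: default
spec:
  ingressClassName: nginx
  rules:
    - host: nginx.workshop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
  tls:
    - hosts:
        - nginx.workshop.example.com
      secretName: wildcard-workshop-example-com-tls
```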
A few important things to notice on the ingress resource:
host - the host needs to match the certificate
secretName - the secret needs to match the secret defined in the certificate
Then create the deployment:
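```bash
kubectl apply -f deployment.yaml
```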
Ensure that cert-manager can set the DNS-01 challenge records correctly; if you encounter issues, you can inspect the cert-manager pod logs.
To view the pods for cert-manager:
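```bash
kubectl get pods -n cert-manager
```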
Then view the logs using:
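(The label selector is an assumption based on the chart's standard labels.)

```bash
kubectl -n cert-manager logs -l app.kubernetes.io/name=cert-manager -f
```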
You can open up a browser and access the ingress, in my case https://nginx.workshop.example.com, and verify that you have a certificate issued by Let's Encrypt.
Thanks for reading, if you enjoy my content please feel free to follow me on Twitter - @ruanbekker or visit me on my website - ruan.dev
I will be using kind to run a Kubernetes cluster locally. If you want to follow along, have a look at my previous post on how to install kubectl and kind and the basic usage of kind:
You will also need helm to deploy the ingress-nginx release from their helm charts; you can see their documentation on how to install it:
First we will define the kind configuration, which will expose port 80 locally, in a file named kind-config.yaml:
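(A sketch; the cluster name is an assumption.)

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: local-kind
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
```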
Then go ahead and create the kubernetes cluster:
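```bash
kind create cluster --name local-kind --config kind-config.yaml
```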
Install the ingress-nginx helm chart, by first adding the repository:
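```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
```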
Then update your local repositories:
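```bash
helm repo update
```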
Then install the helm release and set a couple of overrides. The reason we use NodePort is that our Kubernetes cluster runs in Docker containers, and in our kind config we exposed port 80 locally; the NodePort service lets an HTTP request to port 80 traverse to the port of the service:
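(A sketch of the overrides described above; the exact values are assumptions.)

```bash
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.hostPort.enabled=true \
  --set controller.ingressClassResource.name=nginx
```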
You can view all the default values from their GitHub repository where the chart is hosted.
Once the release has been deployed, you should see the ingress-nginx pod running in the ingress-nginx namespace:
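```bash
kubectl get pods -n ingress-nginx
```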
We will create 3 files:
example/deployment.yaml
example/service.yaml
example/ingress.yaml
Create the example directory:
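```bash
mkdir example
```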
Our example/deployment.yaml:
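(A sketch; the image and command are assumptions, any container listening on port 5000 will do.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          # assumption: a simple web server listening on port 5000
          image: python:3-alpine
          command: ["python", "-m", "http.server", "5000"]
          ports:
            - containerPort: 5000
```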
Our example/service.yaml:
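(A sketch, forwarding service port 80 to the container port 5000 as described below.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 5000
```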
Our example/ingress.yaml:
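(A sketch; the hostname matches the nip.io name used later in the post.)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  rules:
    - host: example.127.0.0.1.nip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 80
```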
In summary, we are creating a deployment with a pod that listens on port 5000, and then we are creating a service with port 80 that will forward its connections to the container port of 5000.
Then we define our ingress that will match our hostname and forward its connections to our service on port 80, and also notice that we are defining our ingress class name, which we have set in our helm values.
Deploy this example with kubectl:
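```bash
kubectl apply -f example/
```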
Now you can access the web application at http://example.127.0.0.1.nip.io
You can delete the resources that we’ve created using:
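```bash
kubectl delete -f example/
```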
Delete the cluster using:
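```bash
kind delete cluster --name local-kind
```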
Thanks for reading, if you enjoy my content please feel free to follow me on Twitter - @ruanbekker or visit me on my website - ruan.dev
We will also use CloudWatch Events to trigger this lambda function every two hours.
First you will need Terraform installed, as well as authentication configured for Terraform to interact with your AWS account; I have written a post about this that you can follow: "How to use the AWS Terraform Provider".
The following code will be available on my github repository, but if you would like to follow along we will create everything step by step.
First create the project directory and change into it, then create the directories for our modules, our environment, and our function code, along with the empty files that we will populate:
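(A sketch; the project directory name is an assumption.)

```bash
mkdir terraform-scheduled-lambda && cd terraform-scheduled-lambda
mkdir -p modules/lambda-function/functions
mkdir -p environment/test
touch modules/lambda-function/functions/demo.py
touch modules/lambda-function/{main.tf,variables.tf,outputs.tf}
touch environment/test/{main.tf,output.tf,provider.tf}
```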
Then in summary our project structure should look more or less like this:
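```
.
├── environment
│   └── test
│       ├── main.tf
│       ├── output.tf
│       └── provider.tf
└── modules
    └── lambda-function
        ├── functions
        │   └── demo.py
        ├── main.tf
        ├── outputs.tf
        └── variables.tf
```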
We will first start populating the modules bit, starting with modules/lambda-function/main.tf:
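(The original module is much longer; a condensed sketch of the essential pieces, the function, its IAM role, and the two-hourly schedule, could look like this. The runtime and naming are assumptions, and the archive_file data source requires the hashicorp/archive provider.)

```hcl
# package the function code; a changed demo.py changes the hash and triggers a redeploy
data "archive_file" "lambda" {
  type        = "zip"
  source_file = "${path.module}/functions/demo.py"
  output_path = "${path.module}/functions/demo.zip"
}

resource "aws_iam_role" "lambda" {
  name = "${var.function_name}-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "logs" {
  role       = aws_iam_role.lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_lambda_function" "this" {
  function_name    = var.function_name
  role             = aws_iam_role.lambda.arn
  runtime          = "python3.9"
  handler          = "demo.lambda_handler"
  filename         = data.archive_file.lambda.output_path
  source_code_hash = data.archive_file.lambda.output_base64sha256
}

# trigger the function every two hours
resource "aws_cloudwatch_event_rule" "schedule" {
  name                = "${var.function_name}-schedule"
  schedule_expression = "rate(2 hours)"
}

resource "aws_cloudwatch_event_target" "lambda" {
  rule = aws_cloudwatch_event_rule.schedule.name
  arn  = aws_lambda_function.this.arn
}

resource "aws_lambda_permission" "events" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.this.function_name
  principal    = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.schedule.arn
}
```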
The next one will be the modules/lambda-function/variables.tf:
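(A sketch; the variable name and default are assumptions.)

```hcl
variable "function_name" {
  description = "The name of the lambda function"
  type        = string
  default     = "demo-function"
}
```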
Then define the module's outputs in modules/lambda-function/outputs.tf:
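```hcl
output "lambda_function_arn" {
  value = aws_lambda_function.this.arn
}
```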
Then we define our python function code in modules/lambda-function/functions/demo.py:
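(An illustrative handler; the original function body was not preserved.)

```python
import json

def lambda_handler(event, context):
    # log the incoming event and return a simple response
    print(json.dumps(event))
    return {"statusCode": 200, "body": json.dumps({"message": "hello from lambda"})}
```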
For our environment we want to specify the source as our module in environment/test/main.tf:
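```hcl
module "lambda_function" {
  source        = "../../modules/lambda-function"
  function_name = "demo-function"
}
```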
Our outputs in environment/test/output.tf:
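```hcl
output "lambda_function_arn" {
  value = module.lambda_function.lambda_function_arn
}
```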
And since we are using AWS, we need to define our providers and the profile that we will use to authenticate against AWS; in my case, I'm using the default profile in environment/test/provider.tf:
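(A sketch; the region and provider version are assumptions.)

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region  = "eu-west-1" # assumption: use your own region
  profile = "default"
}
```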
Now that we have defined our terraform code we can run:
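```bash
terraform init
terraform plan
```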
And it should return something more or less like the following:
If you are happy with the plan you can go ahead and run:
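```bash
terraform apply
```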
Which will create the resources in AWS. Upon creation we should see something like this:
Since we have our aws cli configured with a profile we can also test our lambda function:
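(A sketch; the function name and output file are assumptions.)

```bash
aws --profile default lambda invoke \
  --function-name demo-function \
  --cli-binary-format raw-in-base64-out \
  --payload '{}' response.json
```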
And the response from the invocation can be seen in the file we defined:
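```bash
cat response.json
```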
If we want to redeploy our function with updated code, we can change the content of functions/demo.py and then run:
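```bash
terraform plan
```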
Our terraform code defines that if the hash of the function source code changes, it will trigger a redeploy, and from the computed plan we can see that it will redeploy our function code:
After entering "yes", terraform will update our function code.
If we logon to the AWS Console and head to Lambda we can inspect our function code:
If we manually want to trigger the function, select “Test”, then enter the “Event name” with something like “testing” then click “Test”:
If we follow the CloudWatch log link we can view the logs in CloudWatch:
If you followed along and would like to destroy the created infrastructure:
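```bash
terraform destroy
```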
Terraform Examples
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
Terraform is super powerful and can do a lot of things. And it shines when it provisions Infrastructure. So in a scenario where we use Terraform to provision RDS MySQL Database Instances, we might still want to provision extra MySQL Users, or Database Schemas and the respective MySQL Grants.
Usually you would log on to the database and create them manually with SQL syntax. But in this tutorial we want to make use of Docker to provision our MySQL server, and we would like to make use of Terraform to provision the MySQL database schemas, grants and users.
Instead of using AWS RDS, I will be provisioning a MySQL Server on Docker so that we can keep the costs free, for those who are following along.
We will also go through the steps on how to rotate the database password that we will be provisioning for our user.
First we will provision a MySQL server on Docker containers. I have a docker-compose.yaml which is available in my quick-starts github repository:
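(A sketch; the container name and credentials are assumptions used in the commands that follow.)

```yaml
version: "3.8"
services:
  mysql:
    image: mysql:8.0
    container_name: mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
    ports:
      - "3306:3306"
```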
Once you have saved that in your current working directory, you can start the container with docker compose:
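```bash
docker compose up -d
```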
You can test the mysql container by logging onto the mysql server with the correct auth:
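(Using the credentials from the compose sketch above.)

```bash
docker exec -it mysql mysql -uroot -prootpassword
```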
This should be more or less the output:
If you don’t have Terraform installed, you can install it from their documentation.
If you want the source code of this example, it's available in my terraform-mysql/petoju-provider repository, which you can clone; then jump into the terraform/mysql/petoju-provider directory.
First we will define the providers.tf:
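(A sketch; the endpoint and credentials match the compose sketch and are assumptions.)

```hcl
terraform {
  required_providers {
    mysql = {
      source  = "petoju/mysql"
      version = "~> 3.0"
    }
    random = {
      source = "hashicorp/random"
    }
  }
}

provider "mysql" {
  endpoint = "127.0.0.1:3306"
  username = "root"
  password = "rootpassword"
}
```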
Then the main.tf:
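(A condensed sketch of the database, user and grants; the original also encrypts the password output with a keybase PGP key, which this sketch omits. The privileges list is an assumption.)

```hcl
resource "random_password" "user" {
  length  = 24
  special = false

  # bumping password_version changes the keepers map, which regenerates the password
  keepers = {
    password_version = var.password_version
  }
}

resource "mysql_database" "this" {
  name = var.database_name
}

resource "mysql_user" "this" {
  user               = var.username
  host               = "%"
  plaintext_password = random_password.user.result
}

resource "mysql_grant" "this" {
  user       = mysql_user.this.user
  host       = mysql_user.this.host
  database   = mysql_database.this.name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}
```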
Then the variables.tf:
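(Variable names and defaults are assumptions.)

```hcl
variable "database_name" {
  type    = string
  default = "mydb"
}

variable "username" {
  type    = string
  default = "myuser"
}

variable "password_version" {
  description = "Bump this value to rotate the user's password"
  type        = number
  default     = 0
}
```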
Then our outputs.tf:
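```hcl
output "user" {
  value = mysql_user.this.user
}

output "password" {
  value     = random_password.user.result
  sensitive = true
}
```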
Our terraform.tfvars that defines the values of our variables:
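```hcl
database_name    = "mydb"
username         = "myuser"
password_version = 0
```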
Now we are ready to run our terraform code, which will ultimately create a database, user and grants, and output the encrypted string of your password, encrypted with your keybase_username.
Initialise Terraform:
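```bash
terraform init
```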
Run the plan to see what terraform wants to provision:
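```bash
terraform plan
```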
And we can see the following resources will be created:
Run the apply which will create the database, the user, sets the password and applies the grants:
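```bash
terraform apply
```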
Then our returned output should show something like this:
As our password is marked as sensitive, we can access the value with terraform output -raw password; let's assign the password to a variable:
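```bash
password=$(terraform output -raw password)
```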
Then we can exec into the mysql container and logon to the mysql server with our new credentials:
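(Using the names from the sketches above.)

```bash
docker exec -it mysql mysql -umyuser -p"$password" mydb
```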
And we can see we are logged onto the mysql server:
If we run show databases; we should see the following:
If we want to rotate the mysql password for the user, we can update the password_version variable either in our terraform.tfvars or via the cli. Let's pass the variable on the cli and do a terraform plan to verify the changes:
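```bash
terraform plan -var password_version=1
```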
And because the value of the random resource's keepers parameter was updated, it will trigger the value of our password to be changed, which lets terraform update our mysql user's password:
Let’s go ahead by updating our password:
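```bash
terraform apply -var password_version=1
```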
To validate that the password has changed, we can try to logon to mysql by using the password variable that was created initially:
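```bash
docker exec -it mysql mysql -umyuser -p"$password" mydb
```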
And as you can see authentication failed:
Set the new password to the variable again:
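```bash
password=$(terraform output -raw password)
```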
Then try to logon again:
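```bash
docker exec -it mysql mysql -umyuser -p"$password" mydb
```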
And we can see we are logged on again:
The terraform mysql provider: https://registry.terraform.io/providers/petoju/mysql/latest/docs
The quick-starts repository: https://github.com/ruanbekker/quick-starts
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In order to authenticate against AWS's APIs, we need to create an AWS IAM user and create access keys for Terraform to use to authenticate.
From https://aws.amazon.com/ logon to your account, then search for IAM:
Select IAM, then select “Users” on the left hand side and select “Create User”, then provide the username for your AWS IAM User:
Now we need to assign permissions to our new AWS IAM user. For this scenario I will be assigning an IAM policy directly to the user, and I will be selecting the "AdministratorAccess" policy. Keep in mind that this allows admin access to your whole AWS account:
Once you select the policy, select “Next” and select “Create User”. Once the user has been created, select “Users” on the left hand side, search for your user that we created, in my case “medium-terraform”.
Select the user and click on “Security credentials”. If you scroll down to the “Access keys” section, you will notice we don’t have any access keys for this user:
In order to allow Terraform access to our AWS Account, we need to create access keys that Terraform will use, and because we assigned full admin access to the user, Terraform will be able to manage resources in our AWS Account.
Click “Create access key”, then select the “CLI” option and select the confirmation at the bottom:
Select “Next” and then select “Create access key”. I am providing a screenshot of the Access Key and Secret Access Key that has been provided, but by the time this post has been published, the key will be deleted.
Store your Access Key and Secret Access Key in a secure place and treat this like your passwords. If someone gets access to these keys they can manage your whole AWS Account.
I will be using the AWS CLI to configure my Access Key and Secret Access Key, as I will configure Terraform later to read my Access Keys from the Credential Provider config.
First we need to configure the AWS CLI by passing the profile name, for which I have chosen medium for this demonstration:
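```bash
aws configure --profile medium
```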
We will be asked to provide the access key, secret access key, aws region and the default output:
To verify if everything works as expected we can use the following command to verify:
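```bash
aws --profile medium sts get-caller-identity
```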
The response should look something similar to the following:
Now that we have our AWS IAM User configured, we can install Terraform, if you don’t have Terraform installed yet, you can follow their Installation Documentation.
Once you have Terraform installed, we can set up our workspace where we will ultimately deploy an EC2 instance, but before we get there we need to create our project directory and change to that directory:
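(The directory name is an assumption.)

```bash
mkdir terraform-aws-ec2
cd terraform-aws-ec2
```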
Then we will create 4 files with .tf extensions:
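```bash
touch main.tf providers.tf variables.tf outputs.tf
```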
We will define our Terraform definitions of how we want our desired infrastructure to look. We will get to the content of the files soon.
I personally love Terraform’s documentation as they are rich in examples and really easy to use.
Head over to the Terraform AWS Provider documentation and you scroll a bit down, you can see the Authentication and Configuration section where they outline the order in how Terraform will look for credentials and we will be making use of the shared credentials file as that is where our access key and secret access key is stored.
If you look at the top right corner of the Terraform AWS Provider documentation, they show you how to use the AWS Provider:
We can copy that code snippet and paste it into our providers.tf file, and configure the aws provider section with the medium profile that we created earlier.
This will tell Terraform where to look for credentials in order to authenticate with AWS.
Open providers.tf with your editor of choice:
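(A sketch; the region and provider version are assumptions.)

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region                   = "eu-west-1" # assumption: use your own region
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "medium"
}
```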
Then we can open main.tf and populate the following to define the EC2 instance that we want to provision:
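(A sketch of the data source and instance described below; the AMI name filter targets Ubuntu 22.04 amd64 and 099720109477 is Canonical's owner id.)

```hcl
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_instance" "this" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type

  tags = {
    Name = var.instance_name
  }
}
```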
In the above example we are filtering for the latest Ubuntu 22.04 64-bit AMI, then we are defining an EC2 instance and specifying the AMI ID that we filtered from our data source.
Note that we haven't specified an SSH keypair, as we are just focusing on how to provision an EC2 instance.
As you can see we are also referencing variables, which we need to define in variables.tf:
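(Variable names and defaults are assumptions.)

```hcl
variable "instance_name" {
  type    = string
  default = "terraform-demo"
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}
```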
And then lastly we need to define our outputs.tf, which will be used to output the instance id and ip address:
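```hcl
output "instance_id" {
  value = aws_instance.this.id
}

output "public_ip_address" {
  value = aws_instance.this.public_ip
}
```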
Now that our infrastructure has been defined as code, we can first initialise terraform, which will initialise the backend and download all the providers that have been defined:
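```bash
terraform init
```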
Once that is done we can run a "plan", which will show us what Terraform will deploy:
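```bash
terraform plan
```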
Now terraform will show us the difference between what we have defined and what is actually in AWS; as we know it's a new account with zero infrastructure, the diff should show us that it needs to create an EC2 instance.
The response from the terraform plan shows us the following:
As you can see terraform has looked up the AMI ID using the data source, and we can see that terraform will provision 1 resource, which is an EC2 instance. Once we are happy with the plan, we can run an apply, which will show us the same but this time prompt us if we want to proceed:
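```bash
terraform apply
```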
And now we can see our EC2 instance was provisioned and our outputs returned the instance id as well as the public ip address.
We can also confirm this by looking at the AWS EC2 Console:
Note that Terraform configuration is idempotent: when we run a terraform apply again, terraform will compare what we have defined as our desired infrastructure with what we actually have in our AWS account, and since we haven't made any changes, there should be no changes.
We can run a terraform apply to validate that:
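```bash
terraform apply
```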
And we can see the response shows:
Destroy the infrastructure that we provisioned:
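```bash
terraform destroy
```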
It will show us what terraform will destroy, then upon confirming we should see the following output:
If you followed along and you also want to clean up the AWS IAM user, head over to the AWS IAM Console and delete the “medium-terraform” IAM User.
I hope you enjoyed this post, I will be posting more terraform related content.
Should you want to reach out to me, you can follow me on Twitter at @ruanbekker or check out my website at https://ruan.dev
In this post we will have a look at FerretDB, an open-source proxy that translates MongoDB queries to SQL, with PostgreSQL as the database engine.
From FerretDB website, they describe FerretDB as:
Initially built as open-source software, MongoDB was a game-changer for many developers, enabling them to build fast and robust applications. Its ease of use and extensive documentation made it a top choice for many developers looking for an open-source database. However, all this changed when they switched to an SSPL license, moving away from their open-source roots.
In light of this, FerretDB was founded to become the true open-source alternative to MongoDB, making it the go-to choice for most MongoDB users looking for an open-source alternative to MongoDB. With FerretDB, users can run the same MongoDB protocol queries without needing to learn a new language or command.
We will be doing the following:
run FerretDB and PostgreSQL with Docker Compose
use mongosh as a client to log on to FerretDB using the FerretDB endpoint
The following docker-compose.yaml defines a postgres container, which will be used as the database engine for ferretdb, and then we define the ferretdb container, which connects to postgres via the environment variable FERRETDB_POSTGRESQL_URL.
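(A condensed sketch; image tags, credentials and the network name are assumptions, and port 8080 is published for the metrics endpoint used later.)

```yaml
version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: username
      POSTGRES_PASSWORD: password
      POSTGRES_DB: ferretdb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - ferretdb

  ferretdb:
    image: ghcr.io/ferretdb/ferretdb:latest
    environment:
      FERRETDB_POSTGRESQL_URL: postgres://username:password@postgres:5432/ferretdb
    ports:
      - "27017:27017"
      - "8080:8080"
    networks:
      - ferretdb

networks:
  ferretdb:
    name: ferretdb

volumes:
  postgres_data: {}
```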
Once you have the content above saved in docker-compose.yaml, you can run the following to run the containers in detached mode:
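```bash
docker compose up -d
```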
Once the containers have started, we can connect to our ferretdb server using mongosh, which is a shell utility to connect to the database. I will make use of a container to do this, where I will reference the network which we defined in our docker compose file and set the endpoint that mongosh needs to connect to:
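(A sketch; the image, credentials and connection string follow the compose sketch above.)

```bash
docker run -it --rm --network ferretdb mongo:6 \
  mongosh "mongodb://username:password@ferretdb:27017/ferretdb?authMechanism=PLAIN"
```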
Once it successfully connects to ferretdb, we should see the following prompt:
If you are familiar with MongoDB, you will find the following identical to MongoDB.
First we show the current databases:
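```javascript
show dbs
```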
Then we create and use the database named mydb:
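```javascript
use mydb
```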
To see which database we are currently connected to:
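```javascript
db.getName()
```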
Now we can create collections named mycol1 and mycol2:
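```javascript
db.createCollection("mycol1")
db.createCollection("mycol2")
```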
We can view our collections by running the following:
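```javascript
show collections
```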
To write one document into our collection named mycol1, we can execute:
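(The document values are illustrative assumptions; later queries in the post filter on age 32.)

```javascript
db.mycol1.insertOne({ "name": "ruan", "age": 32, "hobbies": ["golf", "programming"] })
```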
And we can insert another document:
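(Again with illustrative values.)

```javascript
db.mycol1.insertOne({ "name": "michelle", "age": 28, "hobbies": ["reading"] })
```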
We can then use countDocuments() to view the number of documents in our collection named mycol1:
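```javascript
db.mycol1.countDocuments()
```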
If we want to find all our documents in our mycol1 collection:
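```javascript
db.mycol1.find()
```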
If we want to only display specific fields in our response, such as name and age, we can project fields to return from our query:
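```javascript
db.mycol1.find({}, { "name": 1, "age": 1 })
```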
We can also suppress the _id field by setting its value to 0:
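```javascript
db.mycol1.find({}, { "_id": 0, "name": 1, "age": 1 })
```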
Next we can return the name and age fields from our collection where the age field equals 32:
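```javascript
db.mycol1.find({ "age": 32 }, { "_id": 0, "name": 1, "age": 1 })
```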
We can also find a specific document by its id as example, and return only the field value, like name:
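(The id below is a placeholder; use one returned by your own inserts.)

```javascript
db.mycol1.find({ "_id": ObjectId("<id>") }, { "name": 1 })
```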
Next we will find all documents where the age is greater than 30:
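```javascript
db.mycol1.find({ "age": { "$gt": 30 } })
```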
Let's explore how to insert many documents at once using insertMany(); first create a new collection:
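(The collection name is an assumption, reused in the queries below.)

```javascript
db.createCollection("mycol3")
```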
We can then define the docs variable, and assign it an array with 2 json documents:
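(Illustrative values; a "peter" document is included since the post searches for it below.)

```javascript
var docs = [
  { "name": "james", "age": 24, "hobbies": ["cricket"] },
  { "name": "peter", "age": 36, "hobbies": ["rugby"] }
]
```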
Now we can insert our documents to ferretdb using insertMany():
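```javascript
db.mycol3.insertMany(docs)
```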
We can count the documents inside our collection using:
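```javascript
db.mycol3.countDocuments()
```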
And we can search for all the documents inside the collection:
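```javascript
db.mycol3.find()
```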
And searching for any documents with the name peter:
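```javascript
db.mycol3.find({ "name": "peter" })
```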
We will create a script so that we can generate data that we want to write into FerretDB.
Create the following script, write.js:
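(A sketch; the field names come from the post, the value lists and collection name are assumptions.)

```javascript
// write.js: generate 1000 documents with random values
const transaction_types = ["credit", "debit"];
const store_names = ["picknpay", "checkers", "woolworths"];

for (let i = 0; i < 1000; i++) {
  const random_transaction_type = transaction_types[Math.floor(Math.random() * transaction_types.length)];
  const random_store_name = store_names[Math.floor(Math.random() * store_names.length)];
  const random_age = Math.floor(Math.random() * 40) + 20;

  db.mycollection.insertOne({
    transaction: i,
    transaction_type: random_transaction_type,
    store_name: random_store_name,
    age: random_age
  });
}
```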
The script will loop 1000 times and create documents that include the fields transaction_types, store_names, random_transaction_type, random_store_name and random_age.
Use docker, mount the file inside the container, point the database endpoint to ferretdb and load the file that we want to execute:
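(A sketch, following the earlier mongosh container pattern.)

```bash
docker run -it --rm --network ferretdb -v "$(pwd)/write.js:/write.js" mongo:6 \
  mongosh "mongodb://username:password@ferretdb:27017/ferretdb?authMechanism=PLAIN" /write.js
```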
Now when we run a mongosh client:
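```bash
docker run -it --rm --network ferretdb mongo:6 \
  mongosh "mongodb://username:password@ferretdb:27017/ferretdb?authMechanism=PLAIN"
```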
And we query for the store_name picknpay, and only show the transaction_type and transaction fields:
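```javascript
db.mycollection.find(
  { "store_name": "picknpay" },
  { "_id": 0, "transaction_type": 1, "transaction": 1 }
)
```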
We can also use the --eval flag with the mongosh container to run ad-hoc queries, such as counting documents for a collection:
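```bash
docker run -it --rm --network ferretdb mongo:6 \
  mongosh "mongodb://username:password@ferretdb:27017/ferretdb?authMechanism=PLAIN" \
  --eval 'db.mycollection.countDocuments()'
```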
FerretDB provides prometheus metrics out of the box on the :8080/debug/metrics endpoint:
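(Assuming port 8080 is published as in the compose sketch above.)

```bash
curl http://localhost:8080/debug/metrics
```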
Which will output metrics more or less like the following:
Please see the following resources for FerretDB:
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
Head over to their documentation and download the UTM.dmg file and install it. Once it is installed and you have opened UTM, you should see this screen:
In my case I would like to run an Ubuntu VM, so head over to the Ubuntu Server download page and download the version of your choice. I will be downloading Ubuntu Server 22.04; once you have your ISO image downloaded, you can head over to the next step, which is to "Create a New Virtual Machine":
I will select "Emulate" as I want to run an amd64 (64-bit) architecture, then select "Linux":
In the next step we want to select the Ubuntu ISO image that we downloaded, which we want to use to boot our VM from:
Browse and select the image that you downloaded, once you selected it, it should show something like this:
Select continue, then set the architecture to x86_64; I kept the system on defaults and set the memory to 2048MB and cores to 2, but that is just my preference:
The next screen is to configure storage; as this is for testing, I am setting mine to 8GB:
The next screen is shared directories, this is purely optional, I have created a directory for this:
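(The directory path is an assumption.)

```bash
mkdir -p ~/utm/shared
```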
I then defined this as the shared directory, but whether you need it depends on whether you want to share directories from your local workstation.
The next screen is a summary of your choices and you can name your vm here:
Once you are happy select save, and you should see something like this:
You can then select the play button to start your VM.
The console should appear and you can select install or try this vm:
This will start the installation process of a Linux Server:
Here you can select the options that you would like; I would just recommend ensuring that you select Install OpenSSH Server so that you can connect to your VM via SSH.
Once you get to this screen:
The installation process is busy and you will have to wait a couple of minutes for it to complete. Once you see the following screen the installation is complete:
On the right hand side select the circle, then select CD/DVD and select the ubuntu iso and select eject:
Then power off the guest and power on again, then you should get a console login, then you can proceed to login, and view the ip address:
Now from your terminal you should be able to ssh to the VM:
We can also verify that we are running a 64-bit vm, by running uname --processor:
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
Kafka is a distributed event store and stream processing platform. Kafka is used to build real-time streaming data pipelines and real-time streaming applications.
This is a fantastic resource if you want to understand the components better in detail: apache-kafka-architecture-what-you-need-to-know
But on a high level, the components of a typical Kafka setup:
For great in detail information about kafka and its components, I encourage you to visit the mentioned post from above.
This is the docker-compose.yaml that we will be using to run a kafka cluster with 3 broker containers, 1 zookeeper container, 1 producer, 1 consumer and a kafka-ui.
All the source code is available on my quick-starts github repository.
Note: This docker-compose yaml can be found in my kafka quick-starts repository.
In our compose file we defined our core stack:
Then we have our clients:
We can boot the stack with:
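```bash
docker compose up -d
```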
You can verify that the brokers are passing their health checks with:
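(docker compose ps shows a STATUS column, with healthy containers marked as such.)

```bash
docker compose ps
```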
The producer generates random data and sends it to a topic, where the consumer will listen on the same topic and read messages from that topic.
To view the output of what the producer is doing, you can tail the logs:
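(The container name is an assumption.)

```bash
docker logs -f producer
```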
And to view the output of what the consumer is doing, you can tail the logs:
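```bash
docker logs -f consumer
```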
The Kafka UI will be available on http://localhost:8080
Where we can view lots of information, but in the below screenshot we can see our topics:
And when we look at my-topic, we can see an overview dashboard of our topic information:
We can also look at the messages in our topic, and also search for messages:
And we can also look at the current consumers:
My Quick-Starts Github Repository:
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this post we will use terraform to deploy a helm release to kubernetes.
For this demonstration I will be using kind to deploy a local Kubernetes cluster to the operating system that I am running this on, which will be Ubuntu Linux. For a more in-depth tutorial on Kind, you can see my post on Kind for Local Kubernetes Clusters.
We will be installing terraform, docker, kind and kubectl on Linux.
Install terraform:
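(A sketch; the version is an assumption.)

```bash
wget https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
unzip terraform_1.5.7_linux_amd64.zip
sudo mv terraform /usr/local/bin/terraform
```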
Verify that terraform has been installed:
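```bash
terraform version
```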
Which in my case returns:
Install Docker on Linux (be careful to curl pipe bash - trust the scripts that you are running):
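```bash
curl -fsSL https://get.docker.com | bash
```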
Then running docker ps should return:
Install kind on Linux:
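(A sketch; the kind version is an assumption.)

```bash
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```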
Then verifying that kind was installed with kind --version should return:
Create a kubernetes cluster using kind:
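(The cluster name is an assumption.)

```bash
kind create cluster --name terraform-helm
```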
Now install kubectl:
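```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
```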
Then to verify that kubectl was installed:
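```bash
kubectl version --client
```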
Which in my case returns:
Now we can test if kubectl can communicate with the kubernetes api server:
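```bash
kubectl get nodes
```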
In my case it returns:
Now that our pre-requirements are sorted we can configure terraform to communicate with kubernetes. For that to happen, we need to consult the terraform kubernetes provider’s documentation.
As per their documentation they provide us with this snippet:
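(A sketch of the provider skeleton from their docs.)

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0"
    }
  }
}

provider "kubernetes" {
  # configuration options
}
```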
And from their main page, it gives us a couple of options to configure the provider, and the easiest is probably to read the ~/.kube/config configuration file.
But in cases where you have multiple configurations in your kube config file, this might not be ideal, and I like to be precise, so I will extract the client certificate, client key, cluster CA certificate and endpoint from our ~/.kube/config file.
If we run cat ~/.kube/config we will see something like this:
First we will create a directory for our certificates:
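```bash
mkdir certs
```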
I have truncated my kube config for readability, but for our first file certs/client-cert.pem we will copy the value of client-certificate-data:, which will look something like this:
Then we will copy the contents of client-key-data: into certs/client-key.pem, and lastly the content of certificate-authority-data: into certs/cluster-ca-cert.pem.
So then we should have the following files inside our certs/ directory:
Now make them read only:
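```bash
chmod 400 certs/*.pem
```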
Now that we have that, we can start writing our terraform configuration. In providers.tf:
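(A sketch; the host is an assumption, and this assumes the kubeconfig values were base64-decoded when saved to the pem files.)

```hcl
provider "kubernetes" {
  host                   = "https://127.0.0.1:40963" # assumption: your kind api server endpoint
  client_certificate     = file("${path.module}/certs/client-cert.pem")
  client_key             = file("${path.module}/certs/client-key.pem")
  cluster_ca_certificate = file("${path.module}/certs/cluster-ca-cert.pem")
}
```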
Your host might look different to mine, but you can find your host endpoint in ~/.kube/config.
For a simple test we can list all our namespaces to ensure that our configuration is working. In a file called namespaces.tf, we can populate the following:
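(A sketch using the provider's all-namespaces data source.)

```hcl
data "kubernetes_all_namespaces" "all" {}

output "all_namespaces" {
  value = data.kubernetes_all_namespaces.all.namespaces
}
```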
Now we need to initialize terraform so that it can download the providers:
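```bash
terraform init
```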
Then we can run a plan which will reveal our namespaces:
We can now remove our namespaces.tf as our test worked:
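```bash
rm namespaces.tf
```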
We will need two things, we need to consult the terraform helm release provider documentation and we also need to consult the helm chart documentation which we are interested in.
In my previous post I wrote about Everything you need to know about Helm and I used the Bitnami Nginx Helm Chart, so we will use that one again.
As we are working with helm releases, we need to configure the helm provider; I will just extend the configuration from my previous provider config in providers.tf:
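(The added helm provider block, reusing the same certificates; the host is again an assumption.)

```hcl
provider "helm" {
  kubernetes {
    host                   = "https://127.0.0.1:40963"
    client_certificate     = file("${path.module}/certs/client-cert.pem")
    client_key             = file("${path.module}/certs/client-key.pem")
    cluster_ca_certificate = file("${path.module}/certs/cluster-ca-cert.pem")
  }
}
```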
We will create three terraform files:
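```bash
touch main.tf variables.tf outputs.tf
```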
And our values yaml in helm-chart/nginx/values.yaml:
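```bash
mkdir -p helm-chart/nginx
touch helm-chart/nginx/values.yaml
```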
Then you can copy the values file from https://artifacthub.io/packages/helm/bitnami/nginx?modal=values into helm-chart/nginx/values.yaml.
In our main.tf I will use two ways to override values in our values.yaml: set and templatefile. The reason for the templatefile is for when we want to fetch a value and substitute it into our values file, for example when we retrieve a value from a data source. In my example I'm just using a variable.
We will have the following:
We will have the following:
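(A sketch of the release; the chart version, namespace and set override are assumptions.)

```hcl
resource "helm_release" "nginx" {
  name             = var.release_name
  repository       = "https://charts.bitnami.com/bitnami"
  chart            = "nginx"
  version          = var.chart_version
  namespace        = var.namespace
  create_namespace = true

  # one way to override a value
  set {
    name  = "service.type"
    value = "ClusterIP"
  }

  # another way: render the values file, substituting NAME_OVERRIDE
  values = [
    templatefile("${path.module}/helm-chart/nginx/values.yaml", {
      NAME_OVERRIDE = var.name_override
    })
  ]
}
```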
As you can see we are referencing a NAME_OVERRIDE in our values.yaml; I have cleaned up the values file to the following:
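(A sketch; the extra keys are assumptions.)

```yaml
nameOverride: "${NAME_OVERRIDE}"
commonLabels:
  env: test
```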
The NAME_OVERRIDE must be in a ${} format.
In our variables.tf we will have the following:
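(Variable names and defaults are assumptions.)

```hcl
variable "release_name" {
  type    = string
  default = "nginx"
}

variable "chart_version" {
  type    = string
  default = "13.2.20"
}

variable "namespace" {
  type    = string
  default = "apps"
}

variable "name_override" {
  type    = string
  default = "nginx-terraform"
}
```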
And lastly our outputs.tf:
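```hcl
output "metadata" {
  value = helm_release.nginx.metadata
}
```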
Now that we have all our configuration ready, we can initialize terraform:
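```bash
terraform init
```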
Then we can run a plan to see what terraform wants to deploy:
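```bash
terraform plan
```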
The plan output shows the following:
Once we are happy with our plan, we can run an apply:
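```bash
terraform apply
```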
Then we can verify if the pod is running:
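(Using the namespace from the variables sketch above.)

```bash
kubectl get pods -n apps
```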
If you have an existing helm release that was deployed with helm and you want to transfer the ownership to terraform, you first need to write the terraform code, then import the resources into terraform state using:
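(The namespace and release name are assumptions.)

```bash
terraform import helm_release.nginx apps/nginx
```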
Where the last argument is <namespace>/<release-name>. Once that is imported you can run terraform plan and apply.
If you want to discover all helm releases managed by helm you can use:
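```bash
helm list --all-namespaces
```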
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial we will demonstrate how to persist your terraform state in gitlab managed terraform state, using the terraform http backend.
For detailed information about this consult their documentation
We will create a terraform pipeline which will run the plan step automatically and a manual step to run the apply step.
During these steps and different pipelines we need to persist our terraform state remotely so that new pipelines can read from our state what we last stored.
Gitlab offers a remote backend for our terraform state which we can use, and we will use a basic example of using the random resource.
If you don’t see the “Infrastructure” menu on your left, you need to enable it at “Settings”, “General”, “Visibility”, “Project features”, “Permissions” and under “Operations”, turn on the toggle.
For more information on this see their documentation
For this demonstration I created a token which is only scoped for this one project; for this we need to create a token under "Settings", "Access Tokens":
Select the api under scope:
Store the token name and token value as TF_USERNAME and TF_PASSWORD as CI/CD variables under "Settings", "CI/CD", "Variables".
We will use a basic random_uuid resource for this demonstration; our main.tf:
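```hcl
resource "random_uuid" "uuid" {}

output "uuid" {
  value = random_uuid.uuid.result
}
```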
In our providers.tf, you will notice the backend "http" {} block, which is what is required for our gitlab remote state:
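```hcl
terraform {
  backend "http" {}

  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}
```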
Push that up to gitlab for now.
Our .gitlab-ci.yml consists of a plan step and an apply step, the latter being a manual step as we first want to review our plan before we apply.
Our pipeline will only run on the default branch, which in my case is main:
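(A condensed sketch following GitLab's http backend pattern; the image tag and job layout are assumptions.)

```yaml
stages:
  - plan
  - apply

variables:
  TF_ADDRESS: "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/default-terraform.tfstate"

.terraform:
  image:
    name: hashicorp/terraform:latest
    entrypoint: [""]
  before_script:
    - terraform init
      -backend-config="address=${TF_ADDRESS}"
      -backend-config="lock_address=${TF_ADDRESS}/lock"
      -backend-config="unlock_address=${TF_ADDRESS}/lock"
      -backend-config="username=${TF_USERNAME}"
      -backend-config="password=${TF_PASSWORD}"
      -backend-config="lock_method=POST"
      -backend-config="unlock_method=DELETE"

plan:
  extends: .terraform
  stage: plan
  script:
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

apply:
  extends: .terraform
  stage: apply
  script:
    - terraform apply plan.tfplan
  when: manual
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```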
Where the magic happens is in the terraform init step: that is where we initialize the terraform state in gitlab, and as you can see we use the TF_ADDRESS variable to define the path of our state, in this case a state file named default-terraform.tfstate.
If you were deploying multiple environments, you could use something like ${ENVIRONMENT}-terraform.tfstate.
When we run our pipeline, we can look at our plan step:
Once we are happy with this we can run the manual step and do the apply step, then our pipeline should look like this:
When we inspect our terraform state in the infrastructure menu, we can see the state file was created:
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
Helm, it's one amazing piece of software that I use multiple times per day!
You can think of helm as a package manager for kubernetes, but in fact its much more than that.
Think about it in the following way:
Helm uses your kubernetes config to connect to your kubernetes cluster. In most cases it utilises the config defined by the KUBECONFIG environment variable, which usually points to ~/.kube/config.
If you want to follow along, you can view the following blog post to provision a kubernetes cluster locally:
Once you have provisioned your kubernetes cluster locally, you can proceed to install helm, I will make the assumption that you are using Mac:
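```bash
brew install helm
```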
Once helm has been installed, you can test the installation by listing any helm releases, by running:
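```bash
helm list
```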
Helm uses a packaging format called charts, which is a collection of files that describes a related set of kubernetes resources. A single helm chart might be used to deploy something simple such as a deployment, or something complex that deploys a deployment, ingress, horizontal pod autoscaler, etc.
So let’s assume that we have our kubernetes cluster deployed, and now we are ready to deploy some applications to kubernetes, but we are unsure on how we would do that.
Let’s assume we want to install Nginx.
First we would navigate to artifacthub.io, which is a repository that holds a bunch of helm charts and the information on how to deploy helm charts to our cluster.
Then we would search for Nginx, which would ultimately let us land on the chart's page.
On this view, we have super useful information such as how to use this helm chart, the default values, etc.
Now that we have identified the chart that we want to install, we can have a look at their readme, which will indicate how to install the chart:
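(The standard snippet from the Bitnami readme; the release name is an assumption.)

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/nginx
```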
But before we do that, if we think about it: once we add a repository, we could first find information such as the release versions before we install a release.
So the way I would do it, is to first add the repository:
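```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
```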
Then since we have added the repository, we can update our repository to ensure that we have the latest release versions:
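```bash
helm repo update
```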
Now that we have updated our local repositories, we want to find the release versions, and we can do that by listing the repository in question. For example, if we don’t know the application name, we can search by the repository name:
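```bash
helm search repo bitnami
```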
In this case we will get an output of all the applications that are currently hosted by Bitnami.
If we know the repository and the release name, we can extend our search by using:
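```bash
helm search repo bitnami/nginx
```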
In this case we get an output of all the Nginx release versions that are currently hosted by Bitnami.
Now that we have received a response from helm search repo, we can see that we have different release versions, for example:
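(Illustrative output; the app version shown is an assumption.)

```
NAME            CHART VERSION   APP VERSION   DESCRIPTION
bitnami/nginx   13.2.22         1.23.3        NGINX Open Source is a web server...
```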
Each helm chart has default values, which means that when we install the helm release it will use the default values defined by the chart.
We have the concept of overriding the default values with a yaml configuration file, usually referred to as values.yaml, in which we define the values that we want to override.
To get the current default values, we can use helm show values, which will look like the following:
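```bash
helm show values bitnami/nginx
```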
That will output to standard out, but we can redirect the output to a file using the following:
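```bash
helm show values bitnami/nginx > nginx-values.yaml
```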
Now that we have redirected the output to nginx-values.yaml, we can inspect the default values using cat nginx-values.yaml, and for any values that we want to override, we can edit the yaml file and save it once we are done.
Now that we have our override values, we can install a release to our kubernetes cluster.
Let's assume we want to install nginx to our cluster under the name my-nginx, and we want to deploy it to the namespace called web-servers:
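(Reconstructed from the flag list explained below.)

```bash
helm upgrade --install my-nginx bitnami/nginx \
  --values nginx-values.yaml \
  --namespace web-servers --create-namespace \
  --version 13.2.22
```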
In the example above, we defined the following:
upgrade --install - we are installing a release; if the release already exists, do an upgrade
my-nginx - use the release name my-nginx
bitnami/nginx - use the repository and chart named nginx
--values nginx-values.yaml - define the values file with the overrides
--namespace web-servers --create-namespace - define the namespace where the release will be installed to, and create the namespace if it does not exist
--version 13.2.22 - specify the version of the chart to be installed
We can view information about our release by running:
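(helm status is one way to do this.)

```bash
helm status my-nginx -n web-servers
```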
It’s very common to create your own helm charts when you follow a common pattern in a microservice architecture or something else, where you only want to override specific values such as the container image, etc.
In this case we can create our own helm chart using:
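(The directory and chart names are assumptions.)

```bash
mkdir helm-charts
cd helm-charts
helm create mychart
```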
This will create a scaffolding project with the required information that we need to create our own helm chart. If we look at a tree view, it will look like the following:
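```
mychart/
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
```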
This example chart can already be used; to see what this chart will produce when running it with helm, we can use the helm template command:
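```bash
helm template my-release ./mychart
```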
The output will be something like the following:
In our example it will create a service account, service, deployment, etc.
As you can see the spec.template.spec.containers[].image is set to nginx:1.16.0, and to see how that was computed, we can have a look at templates/deployment.yaml:
As you can see, in the image: section we have .Values.image.repository and .Values.image.tag, and those values are being retrieved from the values.yaml file. When we look at the values.yaml file:
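(The relevant section of the default values generated by helm create.)

```yaml
image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: ""
```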
If we want to override the image repository and image tag, we can update the values.yaml file to, let's say:
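(The repository and tag values are illustrative assumptions.)

```yaml
image:
  repository: myrepo/nginx
  pullPolicy: IfNotPresent
  tag: "1.25"
```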
When we run our helm template command again, we can see that the computed values changed to what we want:
Another way is to use --set:
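```bash
helm template my-release ./mychart \
  --set image.repository=myrepo/nginx \
  --set image.tag=1.25
```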
The template subcommand provides a great way to debug your charts. To learn more about helm charts, view their documentation.
ChartMuseum is an open-source Helm Chart Repository server written in Go.
This chartmuseum demonstration will be done locally on my workstation using Docker. To run the server:
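(A sketch; the version and storage path are assumptions.)

```bash
docker run -d \
  -p 8080:8080 \
  -e DEBUG=1 \
  -e STORAGE=local \
  -e STORAGE_LOCAL_ROOTDIR=/charts \
  -v "$(pwd)/charts:/charts" \
  ghcr.io/helm/chartmuseum:v0.14.0
```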
Now that ChartMuseum is running, we will need to install a helm plugin called helm-push, which helps to push charts to our chartmuseum repository:
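```bash
helm plugin install https://github.com/chartmuseum/helm-push
```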
We can verify if our plugin was installed:
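```bash
helm plugin list
```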
Now we add our chartmuseum helm chart repository, which we will call cm-local:
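```bash
helm repo add cm-local http://localhost:8080/
```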
We can list our helm repository:
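```bash
helm repo list
```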
Now that our helm repository has been added, we can push our helm chart to it. Ensure that you are in the chart's directory, with the Chart.yaml file in the current directory; we need this file as it holds metadata about our chart.
We can view the Chart.yaml:
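(The default Chart.yaml generated by helm create.)

```yaml
apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
```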
Push the helm chart to chartmuseum:
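```bash
helm cm-push . cm-local
```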
Now we should update our repositories so that we can get the latest changes:
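```bash
helm repo update
```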
Now we can list the charts under our repository:
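```bash
helm search repo cm-local
```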
We can now get the values for our helm chart by running:
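```bash
helm show values cm-local/mychart
```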
This returns the values yaml that we can use for our chart; so if you want to write the values yaml to a file that we can use to deploy a release, we can do:
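```bash
helm show values cm-local/mychart > values.yaml
```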
Now when we want to deploy a release, we can do:
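(The release name is an assumption.)

```bash
helm upgrade --install my-release cm-local/mychart --values values.yaml
```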
After the release was deployed, we can list the releases by running:
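```bash
helm list
```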
And to view the release history:
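```bash
helm history my-release
```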
Please find the following information with regards to Helm documentation: helm docs, helm chart template guide.
If you need a kubernetes cluster and you would like to run this locally, find the following documentation in order to do that: using kind for local kubernetes clusters.
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
Wiremock is a tool for building mock APIs, which enables us to build stable development environments.
Run a wiremock instance with docker:
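(The image tag is an assumption.)

```bash
docker run -it --rm -p 8080:8080 wiremock/wiremock:2.35.0
```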
Our wiremock instance will be exposed on port 8080 locally, which we can use to make a request against to create an api mapping:
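(A sketch using wiremock's admin API; the mapping path and response body are assumptions.)

```bash
curl -XPOST http://localhost:8080/__admin/mappings -d '
{
  "request": {"method": "GET", "url": "/api/hello"},
  "response": {"status": 200, "jsonBody": {"message": "hello"}}
}'
```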
The response should be something like this:
If we make a GET request against our API:
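(Using the path from the mapping sketch above.)

```bash
curl http://localhost:8080/api/hello
```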
Our response should be:
We can export our mappings to a local file named stubs.json with:
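```bash
curl -s http://localhost:8080/__admin/mappings > stubs.json
```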
We can import our mappings from our stubs.json file with:
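```bash
curl -XPOST http://localhost:8080/__admin/mappings/import -d @stubs.json
```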
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this post we will use Grafana Promtail to collect all our logs and ship it to Grafana Loki.
We will be using Docker Compose and mount the docker socket into Grafana Promtail so that it is aware of all the docker events, and configure it so that only containers with the docker label logging=promtail are enabled for logging; it will then scrape those logs and send them to Grafana Loki, where we will visualize them in Grafana.
In our promtail configuration config/promtail.yaml:
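(A sketch of the config described below; the ports and label names follow the prose.)

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 15s
        # only scrape containers labelled logging=promtail
        filters:
          - name: label
            values: ["logging=promtail"]
    relabel_configs:
      - source_labels: ["__meta_docker_container_name"]
        regex: "/(.*)"
        target_label: container
      - source_labels: ["__meta_docker_container_log_stream"]
        target_label: log_stream
      - source_labels: ["__meta_docker_container_label_logging_jobname"]
        target_label: job
```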
You can see we are using the docker_sd_configs provider and filter only docker containers with the docker label logging=promtail; once we have those logs we relabel our labels to have the container name, and we also use docker labels like log_stream and logging_jobname to add labels to our logs.
We would like to auto-configure our datasources for Grafana, and in config/grafana-datasources.yml we have:
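```yaml
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
```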
Then lastly we have our docker-compose.yml that wires up all our containers:
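(A condensed sketch; image tags and ports are assumptions.)

```yaml
version: "3.8"
services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    labels:
      logging: "promtail"
      logging_jobname: "containerlogs"

  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"

  promtail:
    image: grafana/promtail:latest
    volumes:
      - ./config/promtail.yaml:/etc/promtail/config.yml
      - /var/run/docker.sock:/var/run/docker.sock
    command: -config.file=/etc/promtail/config.yml

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - ./config/grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yaml
```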
As you can see with our nginx container we define our labels:
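```yaml
    labels:
      logging: "promtail"
      logging_jobname: "containerlogs"
```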
This uses logging: "promtail" to let promtail know this container's logs should be scraped, and logging_jobname: "containerlogs" to assign containerlogs to the job label.
If you are following along, all this configuration is available in my github repository https://github.com/ruanbekker/docker-promtail-loki.
Once you have everything in place you can start it with:
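```bash
docker compose up -d
```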
Access nginx on http://localhost:8080
Then navigate to grafana on http://localhost:3000 and select explore on the left and select the container:
And you will see the logs:
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial we will demonstrate how to use KinD (Kubernetes in Docker) to provision local kubernetes clusters for local development.
Updated at: 2023-12-22
KinD uses container images to run as “nodes”, so spinning up and tearing down clusters becomes really easy or running multiple or different versions, is as easy as pointing to a different container image.
Configuration such as node count, ports, volumes, image versions can either be controlled via the command line or via configuration, more information on that can be found on their documentation:
Follow the docs for more information, but for mac:
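```bash
brew install kind
```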
To verify if kind was installed, you can run:
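```bash
kind --version
```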
Create the cluster with command line arguments, such as cluster name, the container image:
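(The cluster name is an assumption.)

```bash
kind create cluster --name workshop --image kindest/node:v1.24.0
```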
And the output will look something like this:
Then you can interact with the cluster using:
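```bash
kubectl cluster-info --context kind-workshop
```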
Then delete the cluster using:
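```bash
kind delete cluster --name workshop
```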
I highly recommend installing kubectx, which makes it easy to switch between kubernetes contexts.
If you would like to define your cluster configuration as config, you can create a file default-config.yaml with the following as a 2 node cluster, specifying version 1.24.0:
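```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: workshop
nodes:
  - role: control-plane
    image: kindest/node:v1.24.0
  - role: worker
    image: kindest/node:v1.24.0
```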
Then create the cluster and point the config:
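```bash
kind create cluster --config default-config.yaml
```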
View the cluster info:
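```bash
kubectl cluster-info
```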
View cluster contexts:
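```bash
kubectl config get-contexts
```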
Use context:
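```bash
kubectl config use-context kind-workshop
```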
View nodes:
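```bash
kubectl get nodes
```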
We will create a deployment, a service and port-forward to our service to access our application. You can also specify port configuration to your cluster so that you don’t need to port-forward, which you can find in their port mappings documentation
I will be using the following commands to generate the manifests, but will also add them to this post:
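(The deployment name is an assumption; kubectl expose reads the deployment, so create it first or adjust accordingly.)

```bash
kubectl create deployment hostname --image=nginx --dry-run=client -o yaml > deployment.yaml
kubectl expose deployment hostname --port=80 --dry-run=client -o yaml > service.yaml
```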
The manifest:
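(A condensed sketch of the generated manifests.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname
  labels:
    app: hostname
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hostname
  template:
    metadata:
      labels:
        app: hostname
    spec:
      containers:
        - name: hostname
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hostname
spec:
  selector:
    app: hostname
  ports:
    - port: 80
      targetPort: 80
```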
Then apply them with:
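```bash
kubectl apply -f deployment.yaml -f service.yaml
```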
Or if you used kubectl to create them:
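```bash
kubectl create deployment hostname --image=nginx
kubectl expose deployment hostname --port=80
```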
You can then view your resources with:
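```bash
kubectl get deployments,pods,services
```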
Port forward to your service:
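(The local port is an assumption.)

```bash
kubectl port-forward svc/hostname 8080:80
```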
Then access your application:
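```bash
curl -I http://localhost:8080
```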
View the clusters:
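```bash
kind get clusters
```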
Delete a cluster:
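```bash
kind delete cluster --name workshop
```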
If you want more configuration options, you can look at their documentation:
But one more example that I like using is to define the port mappings:
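(A sketch following kind's ingress-ready pattern; the node labels are assumptions.)

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: workshop
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
  - role: worker
```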
I highly recommend using kubectx to switch contexts and kubens to set the default namespace, and aliases:
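(The alias names are assumptions.)

```bash
alias k=kubectl
alias kx=kubectx
alias kns=kubens
```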
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial I will demonstrate how to use Ansible for Homebrew Configuration Management. The aim for using Ansible to manage your homebrew packages helps you to have a consistent list of packages on your macbook.
For me personally, when I get a new laptop it’s always a mission to get the same packages installed as what I had before, and ansible solves that for us to have all our packages defined in configuration management.
Install ansible with python and pip:
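```bash
python3 -m pip install ansible
```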
Create the ansible.cfg configuration file:
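```ini
[defaults]
inventory = inventory.ini
```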
Our inventory.ini will define the information about our target host, which will be localhost, as we are using ansible to run against our local target, which is our macbook:
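(The group name is an assumption.)

```ini
[localhost]
127.0.0.1 ansible_connection=local
```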
Our playbook homebrew.yaml will define the tasks to add the homebrew taps, cask packages and homebrew packages. You can change the packages as you desire, but these are the ones that I use:
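(A condensed sketch using the community.general homebrew modules; the package lists are illustrative assumptions.)

```yaml
---
- name: Macbook workstation setup
  hosts: localhost
  connection: local

  vars:
    homebrew_taps:
      - hashicorp/tap
    homebrew_cask_packages:
      - iterm2
    homebrew_packages:
      - kubectl
      - helm
      - jq

  tasks:
    - name: Add homebrew taps
      community.general.homebrew_tap:
        name: "{{ homebrew_taps }}"
        state: present

    - name: Install cask packages
      community.general.homebrew_cask:
        name: "{{ homebrew_cask_packages }}"
        state: present

    - name: Install homebrew packages
      community.general.homebrew:
        name: "{{ homebrew_packages }}"
        state: present
```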
Now you can run the playbook using:
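```bash
ansible-playbook homebrew.yaml
```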
The code can be found in my github repository: https://github.com/ruanbekker/ansible-macbook-setup
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial I will demonstrate how to keep your docker container images nice and slim with the use of multistage builds for a hugo documentation project.
Hugo is a static content generator so essentially that means that it will generate your markdown files into html. Therefore we don’t need to include all the content from our project repository as we only need the static content (html, css, javascript) to reside on our final container image.
We will use the DOKS Modern Documentation theme for Hugo as our project example, where we will build and run our documentation website on a docker container, but more importantly make use of multistage builds to optimize the size of our container image.
Since hugo is a static content generator, we will use a node container image as our base. We will then build and generate the content using npm run build, which will generate the static content to /src/public in our build stage.
Since we then have static content, we can utilize a second stage using a nginx container image with the purpose of a web server to host our static content. We will copy the static content from our build stage into our second stage and place it under the path defined in our nginx config.
This way we only include the required content on our final container image.
First clone the docs github repository and change to the directory:
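(Assuming the DOKS theme repository.)

```bash
git clone https://github.com/h-enk/doks
cd doks
```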
Now create a Dockerfile in the root path with the following content:
1 2 3 4 5 6 7 8 9 10 11 |
|
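```dockerfile
# build stage: install dependencies and generate the static content
FROM node:16-alpine AS build
WORKDIR /src
COPY . .
RUN npm install
RUN npm run build

# final stage: serve the generated static content with nginx
FROM nginx:stable-alpine
COPY --from=build /src/public /usr/share/nginx/html
COPY nginx/config/nginx.conf /etc/nginx/nginx.conf
COPY nginx/config/app.conf /etc/nginx/conf.d/app.conf
```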
As we can see we are copying two nginx config files to our final image, which we will need to create.
Create the nginx config directory:
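```bash
mkdir -p nginx/config
```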
The content for our main nginx config nginx/config/nginx.conf could look something like this minimal sketch (the worker, gzip and keepalive settings are assumptions):
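```nginx
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile          on;
    keepalive_timeout 65;

    gzip on;
    gzip_types text/plain text/css application/javascript application/json;

    # include virtual host configs, such as app.conf
    include /etc/nginx/conf.d/*.conf;
}
```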
And in our main nginx config we are including a virtual host config app.conf, which we will create locally. A sketch of the content of nginx/config/app.conf:
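```nginx
server {
    listen 80;
    server_name _;

    # the root path matches where the Dockerfile above copies the static content
    root  /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```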
Now that we have our docker config in place, we can build our container image:
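```bash
# the local image name and tag are assumptions
docker build -t hashnode-docs-blogpost:latest .
```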
Then we can review the size of our container image, which is only 27.4MB in size. Pretty neat, right?
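```bash
# the image name is an assumption
docker images hashnode-docs-blogpost
```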
Now that we've built our container image, we can run our documentation site, by specifying our host port on the left to map to our container port on the right in 80:80:
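```bash
docker run -it -p 80:80 hashnode-docs-blogpost:latest
```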
Provided nothing else was already listening on port 80 before you ran the previous command, when you head to http://localhost (if you are running this locally), you should see our documentation site up and running.
I have published this container image to ruanbekker/hashnode-docs-blogpost.
Thanks for reading. Feel free to check out my website, subscribe to my newsletter, or follow me at @ruanbekker on Twitter.
Often you want to save some battery life when you are doing docker builds and leverage a remote host to do the intensive work, and we can utilise docker context over ssh to do just that.
In this tutorial I will show you how to use a remote docker engine to do docker builds, so you still run the docker client locally, but the context of your build will be sent to a remote docker engine via ssh.
We will set up password-less ssh, configure our ssh config, create the remote docker context, and then use the remote docker context.
I will be copying my public key to the remote host:
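```bash
# the user is an assumption; the host IP comes from this example
ssh-copy-id ruan@192.168.2.18
```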
Set up my ssh config in ~/.ssh/config:
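```
# the user and key path are assumptions
Host home-server
    HostName 192.168.2.18
    User ruan
    IdentityFile ~/.ssh/id_rsa
    ServerAliveInterval 60
    ServerAliveCountMax 3
```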
Test:
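```bash
ssh home-server whoami
```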
On the target host (192.168.2.18) we can verify that docker is installed:
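```bash
docker version
```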
On the client (my laptop in this example), we will create a docker context called “home-server” and point it to our target host:
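```bash
# the ssh user and description are assumptions
docker context create home-server \
  --description "home server" \
  --docker "host=ssh://ruan@192.168.2.18"
```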
Now we can list our contexts:
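```bash
docker context ls
```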
We can verify if this works by listing our cached docker images locally and on our remote host:
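```bash
docker images
```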
And listing the remote images by specifying the context:
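```bash
docker --context home-server images
```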
We can set the default context to our target host:
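```bash
docker context use home-server
```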
So running containers with remote contexts essentially becomes running containers on remote hosts. In the past, I had to set up an ssh tunnel, point the docker host environment variable to that endpoint, then run containers on the remote host.
That's a thing of the past: we can just point our docker context to our remote host and run the container. If you haven't set the default context, you can specify the context explicitly, running a docker container on a remote host with your docker client locally:
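```bash
# the image and port mapping are assumptions
docker --context home-server run -itd -p 8080:80 nginx:latest
```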
Now from our client (laptop), we can test our container on our remote host:
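```bash
# the published port matches the run example above (an assumption)
curl http://192.168.2.18:8080
```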
The same approach can be used for remote docker builds: you have your Dockerfile locally, but when you build, your build context (the Dockerfile and the files it references) is sent to the remote host where the build runs. This way you can save a lot of battery life, as the computation is done on the remote docker engine.
Thanks for reading. Feel free to check out my website, subscribe to my newsletter, or follow me at @ruanbekker on Twitter.
In this tutorial we will set up a RAID5 array, which stripes data across multiple drives with distributed parity, which is good for redundancy. We will be using Ubuntu for our Linux Distribution, but the technique applies to other Linux Distributions as well.
We will run a server with one root disk and six extra disks, where we will first create our raid5 array with three disks, and then I will show you how to expand the array by adding the other three disks.
Things fail all the time, and it's not fun when hard drives break, therefore we want to do our best to prevent our applications from going down due to hardware failures. To achieve data redundancy, we want to use three hard drives in a raid configuration that will provide us with redundancy.
This is how a RAID5 array looks (image from diskpart.com):
We will have a Linux server with one root disk and six extra disks:
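```bash
lsblk
# expect the root disk plus the six extra disks, /dev/xvdb through /dev/xvdg
```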
We require mdadm to create our raid configuration:
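```bash
sudo apt update
sudo apt install mdadm -y
```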
First we will format and partition the following disks: /dev/xvdb, /dev/xvdc and /dev/xvdd. I will demonstrate the process for one disk, but repeat it for the others as well:
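```bash
sudo fdisk /dev/xvdb
# inside the interactive session:
#   n - create a new partition (primary, accept the defaults)
#   t - set the partition type to 'fd' (Linux raid autodetect)
#   w - write the changes and exit
```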
Using mdadm, create the /dev/md0 device, by specifying the raid level and the disks that we want to add to the array:
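```bash
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
  /dev/xvdb1 /dev/xvdc1 /dev/xvdd1
```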
Now that our device has been added, we can monitor the process:
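```bash
cat /proc/mdstat
# a recovery line shows the rebuild progress for md0
```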
As you can see, it's currently at 11.5%; give it some time to complete. You should treat the following as a completed state:
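```bash
cat /proc/mdstat
# completed: the recovery line is gone and the status shows [UUU]
```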
We can also inspect devices with mdadm:
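```bash
sudo mdadm --examine /dev/xvdb1
```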
To get information about your raid5 device:
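```bash
sudo mdadm --detail /dev/md0
```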
We will use our /dev/md0 device and create an ext4 filesystem:
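```bash
sudo mkfs.ext4 /dev/md0
```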
We can then verify that by looking at our block devices using lsblk:
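```bash
lsblk
# md0 should appear below each of the member partitions
```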
Now we can mount our device to /mnt:
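```bash
sudo mount /dev/md0 /mnt
```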
We can verify that the device is mounted by using df:
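```bash
df -h /mnt
```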
To persist the device across reboots, add an entry like the following to the /etc/fstab file (the mount options are an assumption):
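```
/dev/md0    /mnt    ext4    defaults    0    0
```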
Now our filesystem, which is mounted at /mnt, is ready to be used.
By default RAID doesn't have a config file, therefore we need to save it manually; if this step is not followed, the RAID device may not come back as md0 after a reboot, but as something else. So we must save the configuration to persist across reboots: when the system reboots, the configuration gets loaded into the kernel and the RAID array will also get assembled.
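```bash
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```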
Note: saving the configuration keeps the array assembled as the md0 device across reboots.
Earlier I mentioned that we have spare disks that we can use to expand our raid device. After they have been formatted we can add them as spare devices to our raid setup:
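```bash
# the device names for the three spare disks are assumptions
sudo mdadm --add /dev/md0 /dev/xvde1
sudo mdadm --add /dev/md0 /dev/xvdf1
sudo mdadm --add /dev/md0 /dev/xvdg1
```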
Verify our change by viewing the detail of our device:
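```bash
sudo mdadm --detail /dev/md0
# the new devices should be listed as spares
```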
As you can see they are only spares at this moment; we can use the spares for data storage by growing our device:
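```bash
sudo mdadm --grow /dev/md0 --raid-devices=6
```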
Verify:
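```bash
sudo mdadm --detail /dev/md0
```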
Wait for the raid to rebuild, by viewing the mdstat:
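```bash
watch cat /proc/mdstat
```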
Once we've added the spares and grown our device, we need to run integrity checks, and then we can resize the volume. But first, we need to unmount our filesystem:
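```bash
sudo umount /mnt
```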
Run an integrity check:
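```bash
sudo e2fsck -f /dev/md0
```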
Once that has passed, resize the file system:
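```bash
sudo resize2fs /dev/md0
```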
Then we remount our filesystem:
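```bash
sudo mount /dev/md0 /mnt
```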
After the filesystem has been mounted, we can view the disk size and confirm that the size increased:
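```bash
df -h /mnt
```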
Thanks for reading. Feel free to check out my website, subscribe to my newsletter, or follow me at @ruanbekker on Twitter.
In this short tutorial, I will demonstrate how to install a specific version of Python on Ubuntu Linux.
Update the apt repositories:
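```bash
sudo apt update
```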
Then install the required dependencies:
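```bash
# a typical build dependency set for compiling python; adjust as needed
sudo apt install -y build-essential zlib1g-dev libncurses5-dev libgdbm-dev \
  libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev libbz2-dev wget
```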
Head over to the Python Downloads section and select the version of your choice; in my case I will be using Python 3.8.13. Once you have the download link, download it:
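```bash
wget https://www.python.org/ftp/python/3.8.13/Python-3.8.13.tgz
```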
Then extract the tarball:
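```bash
tar -xf Python-3.8.13.tgz
```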
Once it completes, change to the directory:
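```bash
cd Python-3.8.13
```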
Run the configure script and add the --enable-optimizations flag as an argument:
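```bash
./configure --enable-optimizations
```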
Run make and make install:
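```bash
make -j "$(nproc)"
sudo make install
```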
Once it completes, you can symlink the python binary so that it's detected by your PATH. If you have no installed python versions, or want to use this one as the default, you can force overwriting the symlink:
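```bash
# the link name and paths are assumptions; adjust for your setup
sudo ln -sf /usr/local/bin/python3.8 /usr/local/bin/python
```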
Then we can test it by running:
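```bash
python --version
# Python 3.8.13
```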
Thanks for reading. Feel free to check out my website, subscribe to my newsletter, or follow me at @ruanbekker on Twitter.
In this tutorial we will demonstrate how to persist iptables rules across reboots.
By default, when you create iptables rules they are active, but as soon as you restart your server, the rules will be gone. Therefore we need to persist these rules across reboots.
We require the package iptables-persistent, and since I will install it on a debian system, I will be using apt:
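```bash
sudo apt update
sudo apt install iptables-persistent -y
```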
Ensure that the service is enabled to start on boot:
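```bash
sudo systemctl enable netfilter-persistent
```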
In this case I will allow port 80 on TCP from all sources:
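```bash
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```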
To persist our current rules, we need to save them to /etc/iptables/rules.v4 with iptables-save:
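```bash
sudo iptables-save | sudo tee /etc/iptables/rules.v4
```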
Now when we restart, our rules will be loaded and our previously defined rules will be active.
Thanks for reading. Feel free to check out my website, subscribe to my newsletter, or follow me at @ruanbekker on Twitter.