In this tutorial we will provision a MySQL Server with Docker and then use Terraform to provision MySQL Users, Database Schemas and MySQL Grants with the MySQL Terraform Provider.
About
Terraform is a powerful tool and it shines when provisioning infrastructure. In a scenario where we use Terraform to provision RDS MySQL database instances, we might still want to provision extra MySQL users, database schemas and the respective MySQL grants.
Usually you would log on to the database and create them manually with SQL. In this tutorial we want to use Docker to provision our MySQL server and Terraform to provision the MySQL database schemas, grants and users.
Instead of using AWS RDS, I will be provisioning a MySQL server on Docker so that following along stays free of cost.
We will also go through the steps on how to rotate the database password that we will be provisioning for our user.
MySQL Server
First we will provision a MySQL server in a Docker container. I have a docker-compose.yaml which is available in my quick-starts github repository:
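If you don't want to clone the repository, a minimal sketch of such a compose file could look like the following (the image tag and credentials are assumptions, adjust them to your own setup):

version: "3.8"

services:
  mysql:
    image: mysql:8.0
    container_name: mysql
    environment:
      # assumed root password for local testing only
      MYSQL_ROOT_PASSWORD: rootpassword
    ports:
      - 3306:3306

Bring it up with docker-compose up -d before continuing.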
If you don’t have Terraform installed, you can install it from their documentation.
If you want the source code of this example, it's available in my terraform-mysql/petoju-provider repository, which you can clone before jumping into the terraform/mysql/petoju-provider directory. In variables.tf we define the following variables:
variable "database_name"{description="The name of the database that you want created."type= string
default= null
}variable "database_username"{description="The name of the database username that you want created."type= string
default= null
}variable "password_version"{description="The password rotates when this value gets updated."type= number
default= 0
}
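The resources themselves live in the repository's main.tf; a minimal sketch based on the plan output below could look like this (the provider source, endpoint and root credentials are assumptions, check the repository for the authoritative version):

terraform {
  required_providers {
    mysql = {
      source  = "petoju/mysql"
      version = "~> 3.0"
    }
    random = {
      source = "hashicorp/random"
    }
  }
}

provider "mysql" {
  # assumed local endpoint for the dockerised MySQL server
  endpoint = "127.0.0.1:3306"
  username = "root"
  password = "rootpassword"
}

resource "random_password" "user_password" {
  length      = 24
  special     = true
  min_special = 2
  # changing password_version forces a new password to be generated
  keepers = {
    password_version = var.password_version
  }
}

resource "mysql_database" "user_db" {
  name = var.database_name
}

resource "mysql_user" "user_id" {
  user               = var.database_username
  host               = "%"
  plaintext_password = random_password.user_password.result
}

resource "mysql_grant" "user_id" {
  user       = mysql_user.user_id.user
  host       = "%"
  database   = mysql_database.user_db.name
  privileges = ["SELECT", "UPDATE"]
}

output "user" {
  value = mysql_user.user_id.user
}

output "password" {
  value     = random_password.user_password.result
  sensitive = true
}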
Now we are ready to run our Terraform code, which will ultimately create the database, the user and the grants, and output the generated password as a sensitive value.
Initialise Terraform:
terraform init
Run the plan to see what terraform wants to provision:
terraform plan
And we can see the following resources will be created:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # mysql_database.user_db will be created
  + resource "mysql_database" "user_db" {
      + default_character_set = "utf8mb4"
      + default_collation     = "utf8mb4_general_ci"
      + id                    = (known after apply)
      + name                  = "foobar"
    }

  # mysql_grant.user_id will be created
  + resource "mysql_grant" "user_id" {
      + database   = "foobar"
      + grant      = false
      + host       = "%"
      + id         = (known after apply)
      + privileges = [
          + "SELECT",
          + "UPDATE",
        ]
      + table      = "*"
      + tls_option = "NONE"
      + user       = "ruanb"
    }

  # mysql_user.user_id will be created
  + resource "mysql_user" "user_id" {
      + host               = "%"
      + id                 = (known after apply)
      + plaintext_password = (sensitive value)
      + tls_option         = "NONE"
      + user               = "ruanb"
    }

  # random_password.user_password will be created
  + resource "random_password" "user_password" {
      + bcrypt_hash      = (sensitive value)
      + id               = (known after apply)
      + keepers          = {
          + "password_version" = "0"
        }
      + length           = 24
      + lower            = true
      + min_lower        = 0
      + min_numeric      = 0
      + min_special      = 2
      + min_upper        = 0
      + number           = true
      + numeric          = true
      + override_special = "!#$%^&*()-_=+[]{}<>:?"
      + result           = (sensitive value)
      + special          = true
      + upper            = true
    }

Plan: 4 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + password = (sensitive value)
  + user     = "ruanb"
Run the apply, which will create the database and the user, set the password and apply the grants:
terraform apply
Then our returned output should show something like this:
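Based on the plan above, the apply output should end with something like the following (the password is marked sensitive, so it is not printed):

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

password = <sensitive>
user = "ruanb"

We can store the password in a shell variable for the login tests further down:

DBPASS=$(terraform output -raw password)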
If we want to rotate the mysql password for the user, we can update the password_version variable either in our terraform.tfvars or via the cli. Let’s pass the variable in the cli and do a terraform plan to verify the changes:
terraform plan -var password_version=1
Because the value of the random resource's keepers parameter has been updated, the password value will be regenerated, and that will let terraform update our mysql user's password:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
-/+ destroy and then create replacement
Terraform will perform the following actions:
  # mysql_user.user_id will be updated in-place
  ~ resource "mysql_user" "user_id" {
        id                 = "ruanb@%"
      ~ plaintext_password = (sensitive value)
        # (5 unchanged attributes hidden)
    }

  # random_password.user_password must be replaced
-/+ resource "random_password" "user_password" {
      ~ bcrypt_hash = (sensitive value)
      ~ id          = "none" -> (known after apply)
      ~ keepers     = { # forces replacement
          ~ "password_version" = "0" -> "1"
        }
      ~ result      = (sensitive value)
        # (11 unchanged attributes hidden)
    }

Plan: 1 to add, 1 to change, 1 to destroy.
To validate that the password has changed, we can try to logon to mysql by using the password variable that was created initially:
docker exec -it mysql mysql -u ruanb -p$DBPASS
And as you can see authentication failed:
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1045 (28000): Access denied for user 'ruanb'@'localhost' (using password: YES)
Set the new password to the variable again:
DBPASS=$(terraform output -raw password)
Then try to logon again:
docker exec -it mysql mysql -u ruanb -p$DBPASS
And we can see we are logged on again:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 22
Server version: 8.0.33 MySQL Community Server - GPL
mysql>
In this post we will be using the AWS Terraform provider: from installing Terraform and creating an AWS IAM user, to configuring the AWS provider and deploying an EC2 instance with Terraform.
AWS IAM User
In order to authenticate against AWS's APIs, we need to create an AWS IAM user and access keys that Terraform will use to authenticate.
Select IAM, then select “Users” on the left hand side and select “Create User”, then provide the username for your AWS IAM User:
Now we need to assign permissions to our new AWS IAM user. For this scenario I will be assigning an IAM policy directly to the user and I will be selecting the "AdministratorAccess" policy. Keep in mind that this allows admin access to your whole AWS account:
Once you select the policy, select “Next” and select “Create User”. Once the user has been created, select “Users” on the left hand side, search for your user that we created, in my case “medium-terraform”.
Select the user and click on “Security credentials”. If you scroll down to the “Access keys” section, you will notice we don’t have any access keys for this user:
In order to allow Terraform access to our AWS Account, we need to create access keys that Terraform will use, and because we assigned full admin access to the user, Terraform will be able to manage resources in our AWS Account.
Click “Create access key”, then select the “CLI” option and select the confirmation at the bottom:
Select “Next” and then select “Create access key”. I am providing a screenshot of the Access Key and Secret Access Key that has been provided, but by the time this post has been published, the key will be deleted.
Store your Access Key and Secret Access Key in a secure place and treat them like passwords. If someone gets access to these keys they can manage your whole AWS account.
I will be using the AWS CLI to configure my Access Key and Secret Access Key, as I will configure Terraform later to read my Access Keys from the Credential Provider config.
First we need to configure the AWS CLI by passing the profile name, which I have chosen medium for this demonstration:
aws --profile medium configure
We will be asked to provide the access key, secret access key, aws region and the default output:
AWS Access Key ID [None]: AKIATPRT2G4SGXLAC3HJ
AWS Secret Access Key [None]: KODnR[............]nYTYbd
Default region name [None]: eu-west-1
Default output format [None]: json
To verify that everything works as expected, we can run the following command:
aws --profile medium sts get-caller-identity
The response should look something similar to the following:
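The exact values will differ, but the shape of the response is (account ID and user ID are placeholders here):

{
    "UserId": "AIDA................",
    "Account": "000000000000",
    "Arn": "arn:aws:iam::000000000000:user/medium-terraform"
}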
Now that we have our AWS IAM User configured, we can install Terraform, if you don’t have Terraform installed yet, you can follow their Installation Documentation.
Once you have Terraform installed, we can set up our workspace where we will ultimately deploy an EC2 instance, but before we get there we need to create our project directory and change into that directory:
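The directory name is an arbitrary choice, for example:

mkdir terraform-aws-ec2
cd terraform-aws-ec2

Inside it we will create providers.tf, main.tf, variables.tf and outputs.tf.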
We will define our Terraform configuration describing how we want our desired infrastructure to look. We will get to the content of the files soon.
I personally love Terraform’s documentation as they are rich in examples and really easy to use.
Head over to the Terraform AWS Provider documentation and scroll down a bit to the Authentication and Configuration section, where they outline the order in which Terraform looks for credentials. We will be making use of the shared credentials file, as that is where our access key and secret access key are stored.
If you look at the top right corner of the Terraform AWS Provider documentation, they show you how to use the AWS Provider:
We can copy that code snippet and paste it into our providers.tf file and configure the aws provider section with the medium profile that we’ve created earlier.
This will tell Terraform where to look for credentials in order to authenticate with AWS.
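A sketch of what providers.tf could look like after pasting that snippet and adding the medium profile (the provider version pin is an assumption):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region  = "eu-west-1"
  profile = "medium"
}

And in main.tf, a hedged reconstruction of the data source and EC2 instance that the plan further down creates (the Canonical owner ID is the published one for official Ubuntu AMIs, the name filter is an assumption):

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_instance" "ec2" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type

  tags = {
    Name = "${var.instance_name}-ec2-instance"
  }
}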
In the above example we are filtering for the latest Ubuntu 22.04 64-bit AMI, then we are defining an EC2 instance and specifying the AMI ID that we looked up with our data source.
Note that we haven't specified an SSH key pair, as we are just focusing on how to provision an EC2 instance.
As you can see we are also referencing variables, which we need to define in variables.tf :
variable "instance_name" {
  description = "Instance Name for EC2."
  type        = string
  default     = "test"
}

variable "instance_type" {
  description = "Instance Type for EC2."
  type        = string
  default     = "t2.micro"
}
And then lastly we need to define our outputs.tf which will be used to output the instance id and ip address:
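A sketch matching the two outputs that appear in the plan below:

output "instance_id" {
  description = "The ID of the EC2 instance."
  value       = aws_instance.ec2.id
}

output "ip" {
  description = "The public IP address of the EC2 instance."
  value       = aws_instance.ec2.public_ip
}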
Now that our infrastructure has been defined as code, we can first initialise terraform, which will initialise the backend and download all the providers that have been defined:
terraform init
Once that is done we can run a "plan", which will show us what Terraform will deploy:
terraform plan
Terraform will now show us the difference between what we have defined and what actually exists in AWS. Since we know it's a new account with zero infrastructure, the diff should show us that it needs to create an EC2 instance.
The response from the terraform plan shows us the following:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # aws_instance.ec2 will be created
  + resource "aws_instance" "ec2" {
      + ami                    = "ami-0f56955469757e5aa"
      + arn                    = (known after apply)
      + id                     = (known after apply)
      + instance_type          = "t2.micro"
      + key_name               = (known after apply)
      + private_ip             = (known after apply)
      + public_ip              = (known after apply)
      + security_groups        = (known after apply)
      + subnet_id              = (known after apply)
      + tags                   = {
          + "Name" = "test-ec2-instance"
        }
      + tags_all               = {
          + "Name" = "test-ec2-instance"
        }
      + vpc_security_group_ids = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
  + instance_id = (known after apply)
  + ip          = (known after apply)
As you can see, terraform has looked up the AMI ID using the data source, and terraform will provision 1 resource, which is an EC2 instance. Once we are happy with the plan, we can run an apply, which will show us the same but this time prompt us whether we want to proceed:
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_instance.ec2: Creating...
aws_instance.ec2: Still creating... [10s elapsed]
aws_instance.ec2: Still creating... [20s elapsed]
aws_instance.ec2: Still creating... [30s elapsed]
aws_instance.ec2: Creation complete after 35s [id=i-005c08b899229fff0]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
instance_id = "i-005c08b899229fff0"
ip = "34.253.196.167"
And now we can see our EC2 instance was provisioned and our outputs returned the instance id as well as the public ip address.
We can also confirm this by looking at the AWS EC2 Console:
Note that Terraform configuration is idempotent: when we run a terraform apply again, terraform compares what we have defined as our desired infrastructure with what we actually have in our AWS account, and since we haven't made any changes there should be no changes.
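To clean up, we can run a destroy:

terraform destroy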
It will show us what terraform will destroy, then upon confirming we should see the following output:
Plan: 0 to add, 0 to change, 1 to destroy.
Changes to Outputs:
- instance_id="i-005c08b899229fff0" -> null
- ip="34.253.196.167" -> null
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_instance.ec2: Destroying... [id=i-005c08b899229fff0]
aws_instance.ec2: Still destroying... [id=i-005c08b899229fff0, 10s elapsed]
aws_instance.ec2: Still destroying... [id=i-005c08b899229fff0, 20s elapsed]
aws_instance.ec2: Still destroying... [id=i-005c08b899229fff0, 30s elapsed]
aws_instance.ec2: Destruction complete after 31s
Destroy complete! Resources: 1 destroyed.
If you followed along and you also want to clean up the AWS IAM user, head over to the AWS IAM Console and delete the “medium-terraform” IAM User.
Thank You
I hope you enjoyed this post, I will be posting more terraform related content.
Should you want to reach out to me, you can follow me on Twitter at @ruanbekker or check out my website at https://ruan.dev
In this post we will have a look at FerretDB, an open-source proxy that translates MongoDB wire-protocol queries to SQL, with PostgreSQL as the database engine.
Initially built as open-source software, MongoDB was a game-changer for many developers, enabling them to build fast and robust applications. Its ease of use and extensive documentation made it a top choice for many developers looking for an open-source database. However, all this changed when they switched to an SSPL license, moving away from their open-source roots.
In light of this, FerretDB was founded to become the true open-source alternative to MongoDB, making it the go-to choice for most MongoDB users looking for an open-source alternative to MongoDB. With FerretDB, users can run the same MongoDB protocol queries without needing to learn a new language or command.
What can you expect from this tutorial
We will be doing the following:
deploying ferretdb and postgres on docker containers using docker compose
then use mongosh as a client to logon to ferretdb using the ferretdb endpoint
explore some example queries to insert and read data from ferretdb
use scripting to generate data into ferretdb
explore the embedded prometheus endpoint for metrics
Deploy FerretDB
The following docker-compose.yaml defines a postgres container which will be used as the database engine for ferretdb, and then we define the ferretdb container, which connects to postgres via the environment variable FERRETDB_POSTGRESQL_URL.
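The full file is in the repository; a minimal sketch of such a compose file, assuming the image tags and the ferret/password credentials used in the connection string later on, could look like this:

version: "3.8"

networks:
  ferretdb:
    name: ferretdb

services:
  postgres:
    image: postgres:15
    container_name: postgres
    environment:
      POSTGRES_USER: ferret
      POSTGRES_PASSWORD: password
      POSTGRES_DB: ferretdb
    networks:
      - ferretdb

  ferretdb:
    image: ghcr.io/ferretdb/ferretdb:latest
    container_name: ferretdb
    environment:
      # ferretdb connects to postgres via this URL
      FERRETDB_POSTGRESQL_URL: postgres://ferret:password@postgres:5432/ferretdb
    networks:
      - ferretdb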
Once you have the content above saved in docker-compose.yaml you can run the following to run the containers in a detached mode:
docker-compose up -d
Connect to FerretDB
Once the containers have started, we can connect to our ferretdb server using mongosh, which is a shell utility to connect to the database. I will make use of a container to do this, where I will reference the network which we defined in our docker compose file, and set the endpoint that mongosh needs to connect to:
docker run --rm -it --network=ferretdb --entrypoint=mongosh mongo:6.0 "mongodb://ferret:password@ferretdb/ferretdb?authMechanism=PLAIN"
Once it successfully connects to ferretdb, we should see the following prompt:
Current Mongosh Log ID: 64626c5c259916d1a68b7dad
Connecting to: mongodb://<credentials>@ferretdb/ferretdb?authMechanism=PLAIN&directConnection=true&appName=mongosh+1.8.2
Using MongoDB: 6.0.42
Using Mongosh: 1.8.2
ferretdb>
Run example queries on FerretDB
If you are familiar with MongoDB, you will find the following identical to MongoDB.
First we show the current databases:
ferretdb> show dbs;
public  0 B
Then we create and use the database named mydb:
ferretdb> use mydb
switched to db mydb
To see which database we are currently connected to:
mydb> db
mydb
Now we can create collections named mycol1 and mycol2:
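The original snippet is not shown here; creating them from the mongosh prompt would look like this:

mydb> db.createCollection("mycol1")
{ ok: 1 }
mydb> db.createCollection("mycol2")
{ ok: 1 }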
We will create a script so that we can generate data that we want to write into FerretDB.
Create the following script, write.js:
var txs = []

for (var x = 0; x < 1000; x++) {
  var transaction_types = ["credit card", "cash", "account"];
  var store_names = ["edgards", "cna", "makro", "picknpay", "checkers"];
  var random_transaction_type = Math.floor(Math.random() * (2 - 0 + 1)) + 0;
  var random_store_name = Math.floor(Math.random() * (4 - 0 + 1)) + 0;
  var random_age = Math.floor(Math.random() * (80 - 18) + 18)

  txs.push({
    transaction: 'tx_' + x,
    transaction_price: Math.round(Math.random() * 1000),
    transaction_type: transaction_types[random_transaction_type],
    store_name: store_names[random_store_name],
    age: random_age
  });
}

console.log("drop and recreate the collection")
db.mycollection1.drop()
db.createCollection("mycollection1")

console.log("insert documents into collection")
db.mycollection1.insertMany(txs)
The script loops 1000 times and creates documents with the fields transaction, transaction_price, transaction_type, store_name and age.
Use docker, mount the file inside the container, point the database endpoint to ferretdb and load the file that we want to execute:
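Assuming write.js is in the current directory, the invocation could look similar to the earlier connection command, with the script mounted into the container and passed as the file for mongosh to execute (the mydb database name is an assumption based on the database created above):

docker run --rm -it --network=ferretdb -v $PWD/write.js:/write.js --entrypoint=mongosh mongo:6.0 "mongodb://ferret:password@ferretdb/mydb?authMechanism=PLAIN" /write.js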
This tutorial will show you how you can run 64-bit Ubuntu Linux virtual machines on an Apple M1 (arm64) MacBook using UTM.
Installation
Head over to their documentation and download the UTM.dmg file and install it, once it is installed and you have opened UTM, you should see this screen:
Creating a Virtual Machine
In my case I would like to run an Ubuntu VM, so head over to the Ubuntu Server download page and download the version of your choice; I will be downloading Ubuntu Server 22.04. Once you have your ISO image downloaded, you can head over to the next step, which is to "Create a New Virtual Machine":
I will select "Emulate" as I want to run the amd64 architecture, then select "Linux":
In the next step we want to select the Ubuntu ISO image that we downloaded, which we want to use to boot our VM from:
Browse and select the image that you downloaded, once you selected it, it should show something like this:
Select continue, then set the architecture to x86_64. I kept the system on defaults, set the memory to 2048MB and the cores to 2, but that is just my preference:
The next screen is to configure storage, as this is for testing I am setting mine to 8GB:
The next screen is shared directories, this is purely optional, I have created a directory for this:
mkdir ~/utm
I then defined that as a shared directory, but this depends on whether you need shared directories from your local workstation.
The next screen is a summary of your choices and you can name your vm here:
Once you are happy select save, and you should see something like this:
You can then select the play button to start your VM.
The console should appear and you can select install or try this vm:
This will start the installation process of a Linux Server:
Here you can select the options that you would like, I would just recommend to ensure that you select Install OpenSSH Server so that you can connect to your VM via SSH.
Once you get to this screen:
The installation process is busy and you will have to wait a couple of minutes for it to complete. Once you see the following screen the installation is complete:
On the right hand side select the circle, then select CD/DVD and select the ubuntu iso and select eject:
Starting your VM
Then power off the guest and power it on again. You should get a console login, where you can log in and view the IP address:
SSH to your VM
Now from your terminal you should be able to ssh to the VM:
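Assuming the VM received the IP address 192.168.64.5 and you created a user called ubuntu during the installation (both values are examples, use your own):

ssh ubuntu@192.168.64.5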
We can also verify that we are running a 64bit vm, by running uname --processor:
Thank You
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this post we will run a Kafka cluster with 3 Kafka brokers on Docker Compose, use a producer to send messages to our topics and a consumer that will receive the messages from those topics, both of which we will develop in Python, and explore the kafka-ui.
What is Kafka?
Kafka is a distributed event store and stream processing platform. Kafka is used to build real-time streaming data pipelines and real-time streaming applications.
But on a high level, the components of a typical Kafka setup:
Zookeeper: Kafka relies on Zookeeper to do leadership election of Kafka Brokers and Topic Partitions.
Broker: a Kafka server that receives messages from producers, assigns them offsets and commits the messages to disk storage. An offset is used for data consistency in the event of a failure, so that consumers know from where to resume consuming after their last message.
Topic: a topic can be thought of as a category to organise messages. Producers write messages to topics, consumers read from those topics.
Partitions: a topic is split into multiple partitions. This improves scalability through parallelism (not just one broker). Kafka also replicates partitions across brokers for fault tolerance.
For great in detail information about kafka and its components, I encourage you to visit the mentioned post from above.
Launch Kafka
This is the docker-compose.yaml that we will be using to run a kafka cluster with 3 broker containers, 1 zookeeper container, 1 producer, 1 consumer and a kafka-ui.
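The full file is in the repository; a heavily trimmed sketch of its shape could look like the following, with broker-2 and broker-3 following the same pattern on ports 9092 and 9093 (the listener values and ports here are assumptions, not the exact configuration from the repository):

version: "3.8"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000

  broker-1:
    image: confluentinc/cp-kafka:7.4.0
    container_name: broker-1
    depends_on:
      - zookeeper
    ports:
      - 9091:9091
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9091
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-1:9091
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: kafka-ui
    ports:
      - 8080:8080
    environment:
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: broker-1:9091,broker-2:9092,broker-3:9093

The producer and consumer services use the ruanbekker/kafka-producer-consumer:2023-05-17 image shown in the container listing below.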
You can verify that the brokers are passing their health checks with:
docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
broker-1 confluentinc/cp-kafka:7.4.0 "/etc/confluent/dock…" broker-1 5 minutes ago Up 4 minutes (healthy) 0.0.0.0:9091->9091/tcp, :::9091->9091/tcp, 9092/tcp
broker-2 confluentinc/cp-kafka:7.4.0 "/etc/confluent/dock…" broker-2 5 minutes ago Up 4 minutes (healthy) 0.0.0.0:9092->9092/tcp, :::9092->9092/tcp
broker-3 confluentinc/cp-kafka:7.4.0 "/etc/confluent/dock…" broker-3 5 minutes ago Up 4 minutes (healthy) 9092/tcp, 0.0.0.0:9093->9093/tcp, :::9093->9093/tcp
consumer ruanbekker/kafka-producer-consumer:2023-05-17 "sh /src/run.sh $ACT…" consumer 5 minutes ago Up 4 minutes
kafka-ui provectuslabs/kafka-ui:latest "/bin/sh -c 'java --…" kafka-ui 5 minutes ago Up 4 minutes 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp
producer ruanbekker/kafka-producer-consumer:2023-05-17 "sh /src/run.sh $ACT…" producer 5 minutes ago Up 4 minutes
zookeeper confluentinc/cp-zookeeper:7.4.0 "/etc/confluent/dock…" zookeeper 5 minutes ago Up 5 minutes (healthy) 0.0.0.0:2888->2888/tcp, :::2888->2888/tcp, 0.0.0.0:3888->3888/tcp, :::3888->3888/tcp, 2181/tcp, 0.0.0.0:32181->32181/tcp, :::32181->32181/tcp
Producers and Consumers
The producer generates random data and sends it to a topic, where the consumer will listen on the same topic and read messages from that topic.
To view the output of what the producer is doing, you can tail the logs:
docker logs -f producer
setting up producer, checking if brokers are available
brokers not available yet
brokers are available and ready to produce messages
message sent to kafka with squence id of 1
message sent to kafka with squence id of 2
message sent to kafka with squence id of 3
And to view the output of what the consumer is doing, you can tail the logs:
docker logs -f consumer
starting consumer, checks if brokers are availabe
brokers not availbe yet
brokers are available and ready to consume messages
{'sequence_id': 10, 'user_id': '20520', 'transaction_id': '4026fd10-2aca-4d2e-8bd2-8ef0201af2dd', 'product_id': '17974', 'address': '71741 Lopez Throughway | South John | BT', 'signup_at': '2023-05-11 06:54:52', 'platform_id': 'Tablet', 'message': 'transaction made by userid 119740995334901'}
{'sequence_id': 11, 'user_id': '78172', 'transaction_id': '4089cee1-0a58-4d9b-9489-97b6bc4b768f', 'product_id': '21477', 'address': '735 Jasmine Village Apt. 009 | South Deniseland | BN', 'signup_at': '2023-05-17 09:54:10', 'platform_id': 'Tablet', 'message': 'transaction made by userid 159204336307945'}
In this post we will use terraform to deploy a helm release to kubernetes.
Kubernetes
For this demonstration I will be using kind to deploy a local Kubernetes cluster to the operating system that I am running this on, which will be Ubuntu Linux. For a more in-depth tutorial on Kind, you can see my post on Kind for Local Kubernetes Clusters.
Installing the Pre-Requirements
We will be installing terraform, docker, kind and kubectl on Linux.
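The original post lists the individual installation steps; a condensed sketch for Ubuntu could look like this (the pinned versions and URLs are assumptions, check each project's install documentation for the current ones):

# terraform
wget https://releases.hashicorp.com/terraform/1.5.7/terraform_1.5.7_linux_amd64.zip
unzip terraform_1.5.7_linux_amd64.zip && sudo mv terraform /usr/local/bin/

# docker
curl -fsSL https://get.docker.com | sudo bash

# kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind

# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# create a local cluster named rbkr (matches the node name further down)
kind create cluster --name rbkr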
Now we can test if kubectl can communicate with the kubernetes api server:
kubectl get nodes
In my case it returns:
NAME STATUS ROLES AGE VERSION
rbkr-control-plane Ready control-plane 6m20s v1.24.0
Terraform
Now that our pre-requirements are sorted we can configure terraform to communicate with kubernetes. For that to happen, we need to consult the terraform kubernetes provider’s documentation.
As per their documentation they provide us with this snippet:
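Their basic example points the provider at your kube config, something along these lines:

provider "kubernetes" {
  config_path = "~/.kube/config"
}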
And from their main page, it gives us a couple of options to configure the provider and the easiest is probably to read the ~/.kube/config configuration file.
But in cases where you have multiple configurations in your kube config file, this might not be ideal, and I like to be precise, so I will extract the client certificate, client key and cluster ca certificate and endpoint from our ~/.kube/config file.
If we run cat ~/.kube/config we will see something like this:
First we will create a directory for our certificates:
mkdir ~/certs
I have truncated my kube config for readability, but for our first file certs/client-cert.pem we will copy the value of client-certificate-data:, which will look something like this:
Then we will copy the contents of client-key-data: into certs/client-key.pem and then lastly the content of certificate-authority-data: into certs/cluster-ca-cert.pem.
So then we should have the following files inside our certs/ directory:
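In other words, the certs/ directory now contains:

certs/client-cert.pem
certs/client-key.pem
certs/cluster-ca-cert.pem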
Your host might look different to mine, but you can find your host endpoint in ~/.kube/config.
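A sketch of the resulting provider configuration, assuming the certificates were saved under ~/certs and the kind API server is listening on 127.0.0.1:40443 (your endpoint will differ):

provider "kubernetes" {
  host                   = "https://127.0.0.1:40443"
  client_certificate     = file("~/certs/client-cert.pem")
  client_key             = file("~/certs/client-key.pem")
  cluster_ca_certificate = file("~/certs/cluster-ca-cert.pem")
}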
For a simple test we can list all our namespaces to ensure that our configuration is working. In a file called namespaces.tf, we can populate the following:
data "kubernetes_all_namespaces" "allns" {}

output "all-ns" {
  value = data.kubernetes_all_namespaces.allns.namespaces
}
Now we need to initialize terraform so that it can download the providers:
terraform init
Then we can run a plan which will reveal our namespaces:
terraform plan

data.kubernetes_all_namespaces.allns: Reading...
data.kubernetes_all_namespaces.allns: Read complete after 0s [id=a0ff7e83ffd7b2d9953abcac9f14370e842bdc8f126db1b65a18fd09faa3347b]

Changes to Outputs:
  + all-ns = [
      + "default",
      + "kube-node-lease",
      + "kube-public",
      + "kube-system",
      + "local-path-storage",
    ]
We can now remove our namespaces.tf as our test worked:
rm namespaces.tf
Helm Releases with Terraform
We will need two things: the terraform helm release provider documentation, and the documentation of the helm chart that we are interested in.
As we are working with helm releases, we need to configure the helm provider, I will just extend my configuration from my previous provider config in providers.tf:
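A sketch of what that extension could look like, reusing the same certificates (the helm provider takes a nested kubernetes block):

provider "helm" {
  kubernetes {
    host                   = "https://127.0.0.1:40443"
    client_certificate     = file("~/certs/client-cert.pem")
    client_key             = file("~/certs/client-key.pem")
    cluster_ca_certificate = file("~/certs/cluster-ca-cert.pem")
  }
}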
In our main.tf I will use two ways to override values in our values.yaml: using set and using templatefile. The reason for the templatefile is for when we want to fetch a value and substitute it into our values file, for example when we retrieve the value from a data source. In my example I am just using a variable.
variable "release_name" {type = stringdefault = "nginx"description = "The name of our release."}variable "chart_repository_url" {type = stringdefault = "https://charts.bitnami.com/bitnami"description = "The chart repository url."}variable "chart_name" {type = stringdefault = "nginx"description = "The name of of our chart that we want to install from the repository."}variable "chart_version" {type = stringdefault = "13.2.20"description = "The version of our chart."}variable "namespace" {type = stringdefault = "apps"description = "The namespace where our release should be deployed into."}variable "create_namespace" {type = booldefault = truedescription = "If it should create the namespace if it doesnt exist."}variable "atomic" {type = booldefault = falsedescription = "If it should wait until release is deployed."}
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # helm_release.nginx will be created
  + resource "helm_release" "nginx" {
      + atomic                     = false
      + chart                      = "nginx"
      + cleanup_on_fail            = false
      + create_namespace           = true
      + dependency_update          = false
      + disable_crd_hooks          = false
      + disable_openapi_validation = false
      + disable_webhooks           = false
      + force_update               = false
      + id                         = (known after apply)
      + lint                       = false
      + manifest                   = (known after apply)
      + max_history                = 0
      + metadata                   = (known after apply)
      + name                       = "nginx"
      + namespace                  = "apps"
      + pass_credentials           = false
      + recreate_pods              = false
      + render_subchart_notes      = true
      + replace                    = false
      + repository                 = "https://charts.bitnami.com/bitnami"
      + reset_values               = false
      + reuse_values               = false
      + skip_crds                  = false
      + status                     = "deployed"
      + timeout                    = 300
      + values                     = [
          + <<-EOT
                nameOverride: "nginx"
                ## ref: https://hub.docker.com/r/bitnami/nginx/tags/
                image:
                  registry: docker.io
                  repository: bitnami/nginx
                  tag: 1.23.3-debian-11-r3
            EOT,
        ]
      + verify                     = false
      + version                    = "13.2.20"
      + wait                       = false
      + wait_for_jobs              = false

      + set {
          + name  = "image.tag"
          + value = "1.23.3-debian-11-r3"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + metadata = (known after apply)
Once we are happy with our plan, we can run an apply:
terraform apply
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ metadata=(known after apply)Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
helm_release.nginx: Creating...
helm_release.nginx: Still creating... [10s elapsed]

metadata = tolist([
  {
    "app_version" = "1.23.3"
    "chart"       = "nginx"
    "name"        = "nginx"
    "namespace"   = "apps"
    "revision"    = 1
    "values"      = "{\"image\":{\"registry\":\"docker.io\",\"repository\":\"bitnami/nginx\",\"tag\":\"1.23.3-debian-11-r3\"},\"nameOverride\":\"nginx\"}"
    "version"     = "13.2.20"
  },
])
Then we can verify if the pod is running:
kubectl get pods -n apps
NAME READY STATUS RESTARTS AGE
nginx-59bdc6465-xdbfh 1/1 Running 0 2m35s
Importing Helm Releases into Terraform State
If you have an existing helm release that was deployed with helm and you want to transfer the ownership to terraform, you first need to write the terraform code, then import the resources into terraform state using:
terraform import helm_release.nginx apps/nginx
Where the last argument is <namespace>/<release-name>. Once that is imported you can run terraform plan and apply.
If you want to discover all helm releases managed by helm you can use:
kubectl get all -A -l app.kubernetes.io/managed-by=Helm
Thank You
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
We will create a terraform pipeline which will run the plan step automatically and a manual step to run the apply step.
During these steps and different pipelines we need to persist our terraform state remotely so that new pipelines can read from our state what we last stored.
Gitlab offers a remote backend for our terraform state which we can use, and we will use a basic example of using the random resource.
Prerequisites
If you don’t see the “Infrastructure” menu on your left, you need to enable it at “Settings”, “General”, “Visibility”, “Project features”, “Permissions” and under “Operations”, turn on the toggle.
For more information on this see their documentation
Authentication
For this demonstration I created a token which is only scoped to this one project; for this we need to create a token under "Settings", "Access Tokens":
Select the api under scope:
Store the token name and token value as TF_USERNAME and TF_PASSWORD as a CICD variable under “Settings”, “CI/CD”, “Variables”.
Terraform Code
We will use a basic random_uuid resource for this demonstration, our main.tf:
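A minimal sketch of such a main.tf (the empty http backend block is what lets the GitLab-provided backend settings from the pipeline take effect; the resource and output names are assumptions):

terraform {
  backend "http" {}
}

resource "random_uuid" "uuid" {}

output "uuid" {
  value = random_uuid.uuid.result
}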
Where the magic happens is in the terraform init step: that is where we initialize the terraform state in GitLab. We use the TF_ADDRESS variable to define the path of our state, and in this case our state file will be named default-terraform.tfstate (see the pipeline sketch below).
If it was a case where you are deploying multiple environments, you can use something like ${ENVIRONMENT}-terraform.tfstate.
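A sketch of how the plan and apply jobs could wire this up in .gitlab-ci.yml, using GitLab's documented HTTP backend settings (the image, state name and job layout are assumptions):

image:
  name: hashicorp/terraform:light
  entrypoint: [""]

variables:
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/default-terraform

# reusable init command pointing terraform at the GitLab-managed state
.init_cmd: &init_cmd >
  terraform init
  -backend-config=address=${TF_ADDRESS}
  -backend-config=lock_address=${TF_ADDRESS}/lock
  -backend-config=unlock_address=${TF_ADDRESS}/lock
  -backend-config=username=${TF_USERNAME}
  -backend-config=password=${TF_PASSWORD}
  -backend-config=lock_method=POST
  -backend-config=unlock_method=DELETE
  -backend-config=retry_wait_min=5

stages:
  - plan
  - apply

plan:
  stage: plan
  script:
    - *init_cmd
    - terraform plan

apply:
  stage: apply
  when: manual
  script:
    - *init_cmd
    - terraform apply -auto-approve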
When we run our pipeline, we can look at our plan step:
Once we are happy with this we can run the manual step and do the apply step, then our pipeline should look like this:
When we inspect our terraform state in the infrastructure menu, we can see the state file was created:
Thank You
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
Helm, it's one amazing piece of software that I use multiple times per day!
What is Helm?
You can think of helm as a package manager for kubernetes, but in fact it's much more than that.
Think about it in the following way:
Kubernetes Package Manager
Way to templatize your applications (this is the part I'm super excited about)
Easy way to install applications to your kubernetes cluster
Easy way to do upgrades to your applications
Websites such as artifacthub.io provide a nice interface to look up any application and how to install or upgrade that application.
How does Helm work?
Helm uses your kubernetes config to connect to your kubernetes cluster. In most cases it utilises the config defined by the KUBECONFIG environment variable, which in most cases points to ~/.kube/config.
If you want to follow along, you can view the following blog post to provision a kubernetes cluster locally:
Once you have provisioned your kubernetes cluster locally, you can proceed to install helm, I will make the assumption that you are using Mac:
brew install helm
Once helm has been installed, you can test the installation by listing any helm releases, by running:
helm list
Helm Charts
Helm uses a packaging format called charts, which is a collection of files that describes a related set of kubernetes resources. A single helm chart might be used to deploy something simple, such as a deployment, or something complex that deploys a deployment, ingress, horizontal pod autoscaler, etc.
Using Helm to deploy applications
So let’s assume that we have our kubernetes cluster deployed, and now we are ready to deploy some applications to kubernetes, but we are unsure on how we would do that.
Let’s assume we want to install Nginx.
First we would navigate to artifacthub.io, which is a repository that holds a bunch of helm charts and the information on how to deploy helm charts to our cluster.
Then we would search for Nginx, which would ultimately let us land on:
But before we do that: once we have added a repository, we can first find information such as the available release versions before we install a release.
So the way I would do it, is to first add the repository:
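Assuming we want the Bitnami repository that hosts the nginx chart used below:

$ helm repo add bitnami https://charts.bitnami.com/bitnami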
Then since we have added the repository, we can update our repository to ensure that we have the latest release versions:
$ helm repo update
Now that we have updated our local repositories, we want to find the release versions, and we can do that by listing the repository in question. For example, if we don’t know the application name, we can search by the repository name:
$ helm search repo bitnami/ --versions
In this case we will get an output of all the applications that are currently hosted by Bitnami.
If we know the repository and the release name, we can extend our search by using:
$ helm search repo bitnami/nginx --versions
In this case we get an output of all the Nginx release versions that are currently hosted by Bitnami.
Installing a Helm Release
Now that we have received a response from helm search repo, we can see that we have different release versions, as example:
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/nginx 13.2.22 1.23.3 NGINX Open Source is a web server that can be a...
bitnami/nginx 13.2.21 1.23.3 NGINX Open Source is a web server that can be a...
Each helm chart has default values, which means that when we install the helm release it will use the default values defined by the chart.
We can override the default values with a yaml configuration file, usually referred to as values.yaml, in which we define the values that we want to override.
To get the current default values, we can use helm show values, which will look like the following:
$ helm show values bitnami/nginx --version 13.2.22
That will output to standard out, but we can redirect the output to a file using the following:
$ helm show values bitnami/nginx --version 13.2.22 > nginx-values.yaml
Now that we have redirected the output to nginx-values.yaml, we can inspect the default values using cat nginx-values.yaml, and any values that we see that we want to override, we can edit the yaml file and once we are done we can save it.
Now that we have our override values, we can install a release to our kubernetes cluster.
Let's assume we want to install nginx to our cluster under the name my-nginx, and we want to deploy it to the namespace called web-servers. The command is broken down below, with the full command after the list:
upgrade --install - meaning we are installing a release, if already exists, do an upgrade
my-nginx - use the release name my-nginx
bitnami/nginx - use the repository and chart named nginx
--values nginx-values.yaml - define the values file with the overrides
--namespace web-servers --create-namespace - define the namespace where the release will be installed to, and create the namespace if not exists
--version 13.2.22 - specify the version of the chart to be installed
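Putting the flags above together:

$ helm upgrade --install my-nginx bitnami/nginx --values nginx-values.yaml --namespace web-servers --create-namespace --version 13.2.22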
Information about the release
We can view information about our release by running:
$ helm list -n web-servers
Creating your own helm charts
It’s very common to create your own helm charts when you follow a common pattern in a microservice architecture or something else, where you only want to override specific values such as the container image, etc.
In this case we can create our own helm chart using:
$ mkdir ~/charts
$ cd ~/charts
$ helm create my-chart
This will create a scaffolding project with the required information that we need to create our own helm chart. If we look at a tree view, it will look like the following:
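Roughly, the generated layout looks like this (file list from a recent helm version, yours may differ slightly):

my-chart/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml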
In our example it will create a service account, service, deployment, etc.
As you can see the spec.template.spec.containers[].image is set to nginx:1.16.0, and to see how that was computed, we can have a look at templates/deployment.yaml:
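The relevant snippet from the scaffolded templates/deployment.yaml looks roughly like this (trimmed to the image section):

      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}

Since tag is empty by default, the image tag falls back to the chart's appVersion (1.16.0), which is how nginx:1.16.0 was computed.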
As you can see, in the image: section we have .Values.image.repository and .Values.image.tag, and those values are retrieved from the values.yaml file. When we look at the values.yaml file:
image:
  repository: nginx
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
If we want to override the image repository and image tag, we can update the values.yaml file to, let's say:
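For example, pinning a specific tag (the value here is an arbitrary assumption):

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: "1.23.3"

Next we will push our chart to a ChartMuseum repository. The ChartMuseum setup itself is assumed here: it is already running locally on port 8080, and the cm-push plugin has been installed with helm plugin install https://github.com/chartmuseum/helm-push. We can verify the plugin: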
$ helm plugin list
NAME VERSION DESCRIPTION
cm-push 0.10.3 Push chart package to ChartMuseum
Now we add our chartmuseum helm chart repository, which we will call cm-local:
$ helm repo add cm-local http://localhost:8080/
We can list our helm repository:
$ helm repo list
NAME URL
cm-local http://localhost:8080/
Now that our helm repository has been added, we can push our helm chart to our helm chart repository. Ensure that we are in our chart repository directory, where the Chart.yaml file should be in our current directory. We need this file as it holds metadata about our chart.
We can view the Chart.yaml:
apiVersion: v2
name: my-chart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
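Pushing the chart to our cm-local repository with the cm-push plugin could then look like this (run from the directory containing Chart.yaml):

$ helm cm-push . cm-local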
Now we should update our repositories so that we can get the latest changes:
$ helm repo update
Now we can list the charts under our repository:
$ helm search repo cm-local/
NAME CHART VERSION APP VERSION DESCRIPTION
cm-local/my-chart 0.0.1 1.16.0 A Helm chart for Kubernetes
We can now get the values for our helm chart by running:
$ helm show values cm-local/my-chart
This returns the values yaml that we can use for our chart. So let's say we want to output the values yaml to a file that we can use to deploy a release, we can do:
$ helm show values cm-local/my-chart > my-values.yaml
If you need a kubernetes cluster and you would like to run this locally, find the following documentation in order to do that:
- using kind for local kubernetes clusters
Thank You
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this post we will use Grafana Promtail to collect all our logs and ship it to Grafana Loki.
About
We will be using Docker Compose and mount the Docker socket into Grafana Promtail so that it is aware of all the Docker events. We will configure it so that only containers with the Docker label logging=promtail are enabled for logging; Promtail will then scrape those logs and send them to Grafana Loki, where we will visualise them in Grafana.
Promtail
In our promtail configuration config/promtail.yaml:
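The full configuration is in the repository; a sketch of the relevant scrape configuration, assuming Loki is reachable at http://loki:3100 inside the compose network, could look like this:

server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 15s
        # only containers carrying the logging=promtail label are scraped
        filters:
          - name: label
            values: ["logging=promtail"]
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'logstream'
      - source_labels: ['__meta_docker_container_label_logging_jobname']
        target_label: 'job'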
You can see we are using the docker_sd_configs provider and filter only Docker containers with the Docker label logging=promtail. Once we have those logs, we relabel our labels to include the container name, and we also use Docker labels like log_stream and logging_jobname to add labels to our logs.
Grafana Config
We would like to auto configure our datasources for Grafana and in config/grafana-datasources.yml we have:
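A sketch of that provisioning file, assuming the Loki container is reachable as loki on port 3100:

apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true

Each application container that should be logged then carries the two Docker labels in its compose service definition, for example (the nginx service here is just an assumed example):

  app:
    image: nginx:latest
    labels:
      logging: "promtail"
      logging_jobname: "containerlogs"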
Which uses logging: "promtail" to let Promtail know that this container's logs should be scraped, and logging_jobname: "containerlogs", which assigns containerlogs to the job label.