In this tutorial we will demonstrate how to use KinD (Kubernetes in Docker) to provision local Kubernetes clusters for development.
About
KinD runs container images as “nodes”, so spinning up and tearing down clusters becomes really easy, and running multiple clusters or different Kubernetes versions is as simple as pointing to a different node image.
Configuration such as node count, ports, volumes and image versions can be controlled either via the command line or via a configuration file; more information on that can be found in their documentation:
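For example, creating a single-node cluster on a specific Kubernetes version is a one-liner (the cluster name and node image match the output that follows):

```bash
kind create cluster --name cluster-1 --image kindest/node:v1.24.0
```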
Creating cluster "cluster-1" ...
 ✓ Ensuring node image (kindest/node:v1.24.0)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-cluster-1"
You can now use your cluster with:

kubectl cluster-info --context kind-cluster-1

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community
I highly recommend installing kubectx, which makes it easy to switch between kubernetes contexts.
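If you use Homebrew, installing it and switching to the new context could look like this (the context name is taken from the output above):

```bash
brew install kubectx
kubectx kind-cluster-1
```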
Create a Cluster with Config
If you would like to define your cluster configuration as a config file, you can create a file default-config.yaml describing a 2 node cluster running version 1.24.0:
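A minimal sketch of what that default-config.yaml could look like; the cluster name kind-cluster is an assumption based on the node names in the output below:

```yaml
# default-config.yaml - 2 node cluster on Kubernetes v1.24.0
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kind-cluster
nodes:
  - role: control-plane
    image: kindest/node:v1.24.0
  - role: worker
    image: kindest/node:v1.24.0
```

Then create the cluster from the config file:

```bash
kind create cluster --config default-config.yaml
```

Once the cluster is up, we can verify the nodes: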
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-cluster-control-plane Ready control-plane 2m11s v1.24.0 172.20.0.5 <none> Ubuntu 21.10 5.10.104-linuxkit containerd://1.6.4
kind-cluster-worker Ready <none> 108s v1.24.0 172.20.0.4 <none> Ubuntu 21.10 5.10.104-linuxkit containerd://1.6.4
Deploy Sample Application
We will create a deployment and a service, and port-forward to our service to access our application. You can also specify port configuration for your cluster so that you don’t need to port-forward, which you can find in their port mappings documentation.
I will be using the following commands to generate the manifests, but will also add them to this post:
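A hedged sketch of what generating and applying the manifests could look like; the image ruanbekker/hostname and container port 8080 are assumptions based on the container used later in this post:

```bash
# generate and apply the deployment manifest (2 replicas)
kubectl create deployment hostname --image=ruanbekker/hostname --replicas=2 --port=8080 \
  --dry-run=client -o yaml > deployment.yaml
kubectl apply -f deployment.yaml

# generate and apply the service manifest, exposing the deployment on port 80
kubectl expose deployment hostname --name=hostname-http --port=80 --target-port=8080 \
  --dry-run=client -o yaml > service.yaml
kubectl apply -f service.yaml
```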
kubectl get deployment,pod,service
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hostname 2/2 2 2 9m27s
NAME READY STATUS RESTARTS AGE
pod/hostname-7ff58c5644-67vhq 1/1 Running 0 9m27s
pod/hostname-7ff58c5644-wjjbw 1/1 Running 0 9m27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hostname-http ClusterIP 10.96.218.58 <none> 80/TCP 5m48s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 24m
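To reach the application without configuring port mappings on the cluster, we can port-forward to the service (the local port 8080 is an arbitrary choice):

```bash
kubectl port-forward service/hostname-http 8080:80
# in another terminal
curl http://localhost:8080
```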
In this tutorial I will demonstrate how to use Ansible for Homebrew configuration management. Using Ansible to manage your Homebrew packages helps you keep a consistent list of packages on your MacBook.
For me personally, when I get a new laptop it’s always a mission to get the same packages installed as I had before, and Ansible solves that by having all our packages defined in configuration management.
Our inventory.ini will define the information about our target host, which will be localhost, as we are running Ansible against our local machine, our MacBook:
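A minimal sketch of what that inventory.ini could contain, assuming a local connection:

```ini
localhost ansible_connection=local
```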
Our playbook homebrew.yaml will define the tasks to add the homebrew taps, cask packages and homebrew packages. You can change the packages as you desire, but these are the ones that I use:
- hosts: localhost
  name: Macbook Playbook
  gather_facts: False
  vars:
    TFENV_ARCH: amd64
  tasks:
    - name: Ensures taps are present via homebrew
      community.general.homebrew_tap:
        name: "{{ item }}"
        state: present
      with_items:
        - hashicorp/tap

    - name: Ensures packages are present via homebrew cask
      community.general.homebrew_cask:
        name: "{{ item }}"
        state: present
        install_options: 'appdir=/Applications'
      with_items:
        - visual-studio-code
        - multipass
        - spotify

    - name: Ensures packages are present via homebrew
      community.general.homebrew:
        name: "{{ item }}"
        path: "/Applications"
        state: present
      with_items:
        - openssl
        - readline
        - sqlite3
        - xz
        - zlib
        - jq
        - yq
        - wget
        - go
        - kubernetes-cli
        - fzf
        - sshuttle
        - hugo
        - helm
        - kind
        - awscli
        - gnupg
        - kubectx
        - helm
        - stern
        - terraform
        - tfenv
        - pyenv
        - jsonnet
      ignore_errors: yes
      tags:
        - packages
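The playbook uses modules from the community.general collection, so make sure it is installed, then run the playbook against the inventory (file names as defined above):

```bash
ansible-galaxy collection install community.general
ansible-playbook -i inventory.ini homebrew.yaml
```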
In this tutorial I will demonstrate how to keep your docker container images nice and slim with the use of multistage builds for a hugo documentation project.
Hugo is a static content generator, so essentially that means it will generate your markdown files into HTML. Therefore we don’t need to include all the content from our project repository, as we only need the static content (HTML, CSS, JavaScript) to reside on our final container image.
What are we doing today
We will use the DOKS Modern Documentation theme for Hugo as our project example, where we will build and run our documentation website on a docker container, but more importantly make use of multistage builds to optimize the size of our container image.
Our Build Strategy
Since Hugo is a static content generator, we will use a node container image as our base. We will then build the site using npm run build, which generates the static content to /src/public in our build stage.
Since we then have static content, we can utilize a second stage using a nginx container image with the purpose of a web server to host our static content. We will copy the static content from our build stage into our second stage and place it under our defined path in our nginx config.
This way we only include the required content on our final container image.
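A minimal sketch of what such a multistage Dockerfile could look like; the node and nginx image tags, and the default nginx html path, are assumptions:

```dockerfile
# --- build stage: install dependencies and generate the static site ---
FROM node:18-alpine AS build
WORKDIR /src
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
# generates the static content into /src/public
RUN npm run build

# --- final stage: serve the generated static content with nginx ---
FROM nginx:stable-alpine
COPY --from=build /src/public /usr/share/nginx/html
EXPOSE 80
```

We can then build the image with a tag matching the one listed below:

```bash
docker build -t ruanbekker/hashnode-docs-blogpost:latest .
```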
Then we can review the size of our container image, which is only 27.4MB in size. Pretty neat, right?
docker images --filter reference=ruanbekker/hashnode-docs-blogpost
REPOSITORY TAG IMAGE ID CREATED SIZE
ruanbekker/hashnode-docs-blogpost latest 5b60f30f40e6 21 minutes ago 27.4MB
Running our Container
Now that we’ve built our container image, we can run our documentation site, by specifying our host port on the left to map to our container port on the right in 80:80:
docker run -it -p 80:80 ruanbekker/hashnode-docs-blogpost:latest
Provided nothing else was already listening on port 80 before running the previous command, when you head to http://localhost (if you are running this locally), you should see our documentation site up and running:
Often you want to save some battery life when doing docker builds and leverage a remote host to do the intensive work; we can utilise docker context over ssh to do just that.
About
In this tutorial I will show you how to use a remote docker engine to do docker builds, so you still run the docker client locally, but the context of your build will be sent to a remote docker engine via ssh.
We will setup password-less ssh, configure our ssh config, create the remote docker context, then use the remote docker context.
Password-less SSH
I will be copying my public key to the remote host:
$ ssh-copy-id ruan@192.168.2.18
Setup my ssh config:
$ cat ~/.ssh/config
Host home-server
Hostname 192.168.2.18
User ruan
IdentityFile ~/.ssh/id_rsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Test:
$ ssh home-server whoami
ruan
Docker Context
On the target host (192.168.2.18) we can verify that docker is installed:
$ docker version
Client: Docker Engine - Community
Version: 20.10.12
API version: 1.41
Go version: go1.16.12
Git commit: e91ed57
Built: Mon Dec 13 11:45:37 2021
OS/Arch: linux/amd64
Context: default
 Experimental: true

Server: Docker Engine - Community
Engine:
Version: 20.10.12
  API version: 1.41 (minimum version 1.12)
  Go version: go1.16.12
Git commit: 459d0df
Built: Mon Dec 13 11:43:46 2021
OS/Arch: linux/amd64
  Experimental: false
 containerd:
Version: 1.4.12
GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
On the client (my laptop in this example), we will create a docker context called “home-server” and point it to our target host:
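Creating the context and pointing its endpoint at the ssh host we defined in our ssh config:

```bash
docker context create home-server --docker "host=ssh://home-server"
```

We can then list our contexts: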
docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock https://k3d-master.127.0.0.1.nip.io:6445 (default) swarm
home-server moby ssh://home-server
Using Contexts
We can verify if this works by listing our cached docker images locally and on our remote host:
$ docker --context=default images | wc -l
16
And listing the remote images by specifying the context:
$ docker --context=home-server images | wc -l
70
We can set the default context to our target host:
$ docker context use home-server
home-server
Running Containers over Contexts
So running containers with remote contexts essentially becomes running containers on remote hosts. In the past, I had to set up an ssh tunnel, point the docker host environment variable to that endpoint, and then run containers on the remote host.
That’s a thing of the past: we can just point our docker context to our remote host and run the container. If you haven’t set the default context, you can specify it explicitly, so to run a docker container on a remote host with your docker client locally:
$ docker --context=home-server run -it -p 8002:8080 ruanbekker/hostname
2022/07/14 05:44:04 Server listening on port 8080
Now from our client (laptop), we can test our container on our remote host:
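Using curl against the remote host’s IP and the published port from the run command above:

```bash
curl http://192.168.2.18:8002
```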
The same approach can be used for remote docker builds: you keep your Dockerfile locally, but when you build, you point the docker context to the remote host, and your build context (the Dockerfile and the files it references) is sent to the remote engine. This way you can save a lot of battery life, as the computation is done on the remote docker engine.
Thank You
Thanks for reading. Feel free to check out my website, subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial we will set up a RAID5 array, which stripes data across multiple drives with distributed parity, which is good for redundancy. We will be using Ubuntu as our Linux distribution, but the technique applies to other Linux distributions as well.
What are we trying to achieve
We will run a server with one root disk and 6 extra disks, where we will first create our raid5 array with three disks, and then I will show you how to expand the array by adding the other three disks.
Things fail all the time, and it’s not fun when hard drives break, therefore we want to do our best to prevent our applications from going down due to hardware failures. To achieve data redundancy, we want to use three hard drives, which we want to add into a raid configuration that will provide us with:
striping, which is the technique of segmenting logically sequential data, so that consecutive segments are stored on different physical storage devices.
distributed parity, where parity data is distributed across the physical disks, with one parity block per stripe; this provides protection against a single physical disk failure, and the minimum number of disks is three.
This is what a RAID5 array looks like (image from diskpart.com):
Hardware Overview
We will have a Linux server with one root disk and six extra disks:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 10G 0 disk
xvdc 202:32 0 10G 0 disk
xvdd 202:48 0 10G 0 disk
xvde 202:64 0 10G 0 disk
xvdf 202:80 0 10G 0 disk
xvdg 202:96 0 10G 0 disk
Dependencies
We require mdadm to create our raid configuration:
$ sudo apt update
$ sudo apt install mdadm -y
Format Disks
First we will format and partition the following disks: /dev/xvdb, /dev/xvdc, /dev/xvdd. I will demonstrate the process for one disk, but repeat it for the others as well:
$ fdisk /dev/xvdc
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
The old ext4 signature will be removed by a write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x26a2d2f6.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P}(2048-20971519, default 20971519):
Created a new partition 1 of type 'Linux' and of size 10 GiB.
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Create RAID5 Array
Using mdadm, create the /dev/md0 device, by specifying the raid level and the disks that we want to add to the array:
$ mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/xvdb1 /dev/xvdc1 /dev/xvdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Now that our device has been added, we can monitor the process:
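The exact commands aren’t shown in this extract, but a minimal sketch of monitoring the sync and then creating and mounting a filesystem on the array (ext4 and /mnt are taken from the fstab entry that follows):

```bash
# watch the raid sync/rebuild progress
cat /proc/mdstat
mdadm --detail /dev/md0

# once the array is ready, create a filesystem and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt
```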
To persist the device across reboots, add it to the /etc/fstab file:
$ cat /etc/fstab
/dev/md0 /mnt ext4 defaults 0 0
Now our filesystem which is mounted at /mnt is ready to be used.
RAID Configuration (across reboots)
By default RAID doesn’t have a config file, therefore we need to save it manually. If this step is not followed, the RAID device may not come back as md0 after a reboot, but perhaps as something else.
So we have to save the configuration for it to persist across reboots; when the server reboots, the configuration gets loaded by the kernel and the RAID array will be assembled again.
Note: Saving the configuration will keep the RAID level stable in the md0 device.
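A minimal sketch of saving the configuration on Ubuntu (the mdadm.conf path is the one used by the Debian/Ubuntu mdadm package):

```bash
# append the current array definition to the mdadm config
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# rebuild the initramfs so the array is assembled at boot
sudo update-initramfs -u
```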
Adding Spare Devices
Earlier I mentioned that we have spare disks that we can use to expand our raid device. After they have been formatted we can add them as spare devices to our raid setup:
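A hedged sketch, assuming the remaining disks were partitioned as /dev/xvde1, /dev/xvdf1 and /dev/xvdg1 like the first three:

```bash
# add the new partitions as spare devices
mdadm --add /dev/md0 /dev/xvde1 /dev/xvdf1 /dev/xvdg1

# grow the array so all six devices are active
mdadm --grow /dev/md0 --raid-devices=6
```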
Once we have added the spares and grown our device, we need to run integrity checks, and then we can resize the volume. But first, we need to unmount our filesystem:
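That could look like this (resize2fs follows in the output below):

```bash
umount /mnt
# run an integrity check before resizing
e2fsck -f /dev/md0
```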
$ resize2fs /dev/md0
resize2fs 1.45.5 (07-Jan-2020)
Resizing the filesystem on /dev/md0 to 13094400 (4k) blocks.
The filesystem on /dev/md0 is now 13094400 (4k) blocks long.
Then we remount our filesystem:
$ mount /dev/md0 /mnt
After the filesystem has been mounted, we can view the disk size and confirm that the size increased:
$ df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/md0 50G 52M 47G 1% /mnt
Thank You
Thanks for reading. Feel free to check out my website, subscribe to my newsletter or follow me at @ruanbekker on Twitter.
Head over to the Python Downloads section and select the version of your choice; in my case I will be using Python 3.8.13. Once you have the download link, download and extract it:
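Assuming the 3.8.13 source tarball from python.org:

```bash
wget https://www.python.org/ftp/python/3.8.13/Python-3.8.13.tgz
tar -xzf Python-3.8.13.tgz
cd Python-3.8.13
```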
Configure the build and add the --enable-optimizations flag as an argument:
$ ./configure --enable-optimizations
Run make and make install:
$ make
$ sudo make install
Once it completes, you can symlink the python binary so that it’s detected by your PATH. If you have no installed python versions, or want to use this one as the default, you can force overwriting the symlink:
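A sketch of what that could look like, assuming make install placed the binary in /usr/local/bin:

```bash
sudo ln -sf /usr/local/bin/python3.8 /usr/local/bin/python3
python3 --version
```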
In this tutorial we will demonstrate how to persist iptables rules across reboots.
Rules Persistence
By default, when you create iptables rules they are active, but as soon as you restart your server, the rules will be gone. Therefore we need to persist these rules across reboots.
Dependencies
We require the package iptables-persistent. I will install it on a Debian-based system, so I will be using apt:
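Installing it with apt, and then saving the current rules so they are restored at boot (netfilter-persistent is the command shipped with the package):

```bash
sudo apt update
sudo apt install iptables-persistent -y

# after changing your iptables rules, save them so they survive a reboot
sudo netfilter-persistent save
```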
I’ve stumbled upon a great bookmarks manager service called Linkding. What I really like about it is that it allows you to save your bookmarks, assign tags to them so you can search for them later, it has Chrome and Firefox browser extensions, and it comes with an API.
Installing Linkding
We will be using Traefik to do SSL termination and host based routing. If you don’t have Traefik running already, you can follow this post to get that set up:
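A minimal sketch of running Linkding with docker-compose behind Traefik; the image, container port and data path come from the Linkding project, but the network name, hostname and Traefik labels are assumptions that depend on how your Traefik instance is configured:

```yaml
version: "3.8"

services:
  linkding:
    image: sissbruecker/linkding:latest
    container_name: linkding
    restart: unless-stopped
    volumes:
      - ./data:/etc/linkding/data
    networks:
      - public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.linkding.rule=Host(`linkding.example.com`)"
      - "traefik.http.routers.linkding.entrypoints=websecure"
      - "traefik.http.routers.linkding.tls=true"
      - "traefik.http.services.linkding.loadbalancer.server.port=9090"

networks:
  public:
    external: true
```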
Once you head over to the linkding url that you provided and you logon, you should be able to see something like this:
Creating Bookmarks
When you select “Add Bookmark” and you provide the URL, linkding will retrieve the title and the description and populate them for you, and you can provide the tags (separated by spaces):
Browser Extensions
To add a browser extension, select “Settings”, then “Integrations”, then you will find the link to the browser extension for Chrome and Firefox:
After you install the browser extension and click on it for the first time, it will ask you to set the Linkding Base URL and API Authentication Token:
You can find that at the bottom of the “Integrations” section:
REST API
You can follow the API Docs for more information, using an example to search for bookmarks with the term “docker”:
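For example, searching over the API with curl; the hostname is a placeholder, and the token comes from the “Integrations” section mentioned earlier:

```bash
curl -s -H "Authorization: Token ${LINKDING_API_TOKEN}" \
  "https://linkding.example.com/api/bookmarks/?q=docker"
```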
{"count":1,"next":null,"previous":null,"results":[{"id":6,"url":"https://www.docker.com/blog/deploying-web-applications-quicker-and-easier-with-caddy-2/","title":"","description":"","website_title":"Deploying Web Applications Quicker and Easier with Caddy 2 - Docker","website_description":"Deploying web apps can be tough, even with leading server technologies. Learn how you can use Caddy 2 and Docker simplify this process.","is_archived":false,"tag_names":["caddy","docker"],"date_added":"2022-05-31T19:11:53.739002Z","date_modified":"2022-05-31T19:11:53.739016Z"}]}
Thank You
Thanks for reading, feel free to check out my website, read my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial, we will demonstrate how to use Python Flask and render_template to use Jinja templating with our form. The example is just a UI that accepts a first name, last name and email address, and when we submit the form data, it is rendered in a table.
Install Flask
Create a virtual environment and install Python Flask:
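Something like the following (the environment name is arbitrary):

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install flask
```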
Our first route / will render the template in form.html. In our second route /result, a couple of things are happening (a sketch of the app follows the list below):
If we received a POST method, we will capture the form data
We are then casting it to a dictionary data type
Print the results out of our form data (for debugging)
Then we are passing the result object and the app_version variable to our template where it will be parsed.
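The application code isn’t included in this extract, but a minimal sketch matching the behaviour described above could look like this; the file name app.py and the app_version value are assumptions:

```python
# app.py
from flask import Flask, render_template, request

app = Flask(__name__)
app_version = "v0.0.1"  # assumed example value passed to the template

@app.route("/")
def form():
    # first route: render the form template
    return render_template("form.html")

@app.route("/result", methods=["POST", "GET"])
def result():
    if request.method == "POST":
        # capture the form data and cast it to a dictionary
        result = dict(request.form)
        # print the results of our form data (for debugging)
        print(result)
        # pass the result object and the app_version variable to the template
        return render_template("result.html", result=result, app_version=app_version)
    # fall back to the form for GET requests
    return render_template("form.html")

if __name__ == "__main__":
    app.run(debug=True)
```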
When using render_template, all HTML files reside under the templates directory, so let’s first create the directory and our base.html file that we will use as a starting point in templates/base.html:
mkdir templates
Then in your templates/base.html:
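A minimal sketch of a base template with blocks the other templates can override (the structure is an assumption):

```html
<!-- templates/base.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>{% block title %}Flask Form Example{% endblock %}</title>
  </head>
  <body>
    {% block content %}{% endblock %}
  </body>
</html>
```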
In our templates/form.html we have our form template, and you can see we are referencing our base.html in our template to include the first bit:
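A sketch of what the form could look like, extending the base template and posting to the /result route; the field names firstname, lastname and email are assumptions based on the intro:

```html
<!-- templates/form.html -->
{% extends "base.html" %}
{% block content %}
<form action="/result" method="POST">
  <input type="text" name="firstname" placeholder="First Name">
  <input type="text" name="lastname" placeholder="Last Name">
  <input type="email" name="email" placeholder="Email Address">
  <button type="submit">Submit</button>
</form>
{% endblock %}
```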
Then our last template, templates/result.html, is used when we click on submit, and the form data is displayed in our table:
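And a sketch of the result template, rendering the submitted values in a table (the keys match the form field names assumed above):

```html
<!-- templates/result.html -->
{% extends "base.html" %}
{% block content %}
<table>
  <tr><th>Key</th><th>Value</th></tr>
  {% for key, value in result.items() %}
  <tr><td>{{ key }}</td><td>{{ value }}</td></tr>
  {% endfor %}
</table>
<p>App version: {{ app_version }}</p>
{% endblock %}
```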