Often you want to save some battery life when doing docker builds, and leverage a remote host to do the intensive work. We can utilise docker context over ssh to do just that.
About
In this tutorial I will show you how to use a remote docker engine to do docker builds, so you still run the docker client locally, but the context of your build will be sent to a remote docker engine via ssh.
We will set up password-less ssh, configure our ssh config, create the remote docker context, and then use the remote docker context.
Password-less SSH
I will be copying my public key to the remote host:
$ ssh-copy-id ruan@192.168.2.18
Set up my ssh config:
$ cat ~/.ssh/config
Host home-server
Hostname 192.168.2.18
User ruan
IdentityFile ~/.ssh/id_rsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
Test:
$ ssh home-server whoami
ruan
Docker Context
On the target host (192.168.2.18) we can verify that docker is installed:
$ docker version
Client: Docker Engine - Community
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:45:37 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:46 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
On the client (my laptop in this example), we will create a docker context called “home-server” and point it to our target host:
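The create command is along these lines, using the ssh alias from our ssh config:

$ docker context create home-server --docker "host=ssh://home-server"

Then we can list our contexts: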
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock https://k3d-master.127.0.0.1.nip.io:6445 (default) swarm
home-server moby ssh://home-server
Using Contexts
We can verify that this works by listing our cached docker images locally and on our remote host:
$ docker --context=default images | wc -l
16
And listing the remote images by specifying the context:
$ docker --context=home-server images | wc -l
70
We can set the default context to our target host:
$ docker context use home-server
home-server
Running Containers over Contexts
So running containers with remote contexts essentially becomes running containers on remote hosts. In the past, I had to set up an ssh tunnel, point the DOCKER_HOST environment variable to that endpoint, and then run containers on the remote host.
That's a thing of the past: we can just point our docker context to our remote host and run the container. If you haven't set the default context, you can specify it explicitly, so to run a docker container on a remote host with your docker client locally:
$ docker --context=home-server run -it -p 8002:8080 ruanbekker/hostname
2022/07/14 05:44:04 Server listening on port 8080
Now from our client (laptop), we can test our container on our remote host:
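Something like the following, using the remote host's IP and the published port from above:

$ curl http://192.168.2.18:8002/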
The same approach can be used to do remote docker builds: you have your Dockerfile locally, but when you build, you point the context to the remote host, and your build context (the Dockerfile and the files it references) will be sent to the remote host. This way you can save a lot of battery life, as the computation is done on the remote docker engine.
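As a sketch, with an assumed image name, a remote build looks like this:

$ docker --context=home-server build -t myapp:latest .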
Thank You
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial we will set up a RAID5 array, which is striping across multiple drives with distributed parity, which is good for redundancy. We will be using Ubuntu as our Linux distribution, but the technique applies to other Linux distributions as well.
What are we trying to achieve
We will run a server with one root disk and 6 extra disks, where we will first create our raid5 array with three disks, then I will show you how to expand your raid5 array by adding three other disks.
Things fail all the time, and it's not fun when hard drives break, therefore we want to do our best to prevent our applications from going down due to hardware failures. To achieve data redundancy, we want to use three hard drives, which we add into a raid configuration that will provide us:
striping, which is the technique of segmenting logically sequential data, so that consecutive segments are stored on different physical storage devices.
distributed parity, where parity data is distributed across the physical disks, with one parity block per stripe; this provides protection against one physical disk failure, and the minimum number of disks is three.
This is what a RAID5 array looks like (image from diskpart.com):
Hardware Overview
We will have a Linux server with one root disk and six extra disks:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 10G 0 disk
xvdc 202:32 0 10G 0 disk
xvdd 202:48 0 10G 0 disk
xvde 202:64 0 10G 0 disk
xvdf 202:80 0 10G 0 disk
xvdg 202:96 0 10G 0 disk
Dependencies
We require mdadm to create our raid configuration:
$ sudo apt update
$ sudo apt install mdadm -y
Format Disks
First we will format and partition the following disks: /dev/xvdb, /dev/xvdc, /dev/xvdd. I will demonstrate the process for one disk, but repeat it for the others as well:
$ fdisk /dev/xvdc
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
The old ext4 signature will be removed by a write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x26a2d2f6.
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-20971519, default 20971519):

Created a new partition 1 of type 'Linux' and of size 10 GiB.

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Create RAID5 Array
Using mdadm, create the /dev/md0 device, by specifying the raid level and the disks that we want to add to the array:
$ mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/xvdb1 /dev/xvdc1 /dev/xvdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Now that our device has been added, we can monitor the process:
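$ watch -n1 cat /proc/mdstat

Once the array has finished building, we create a filesystem on it and mount it; a minimal sketch (ext4 to match the fstab entry below):

$ mkfs.ext4 /dev/md0
$ mount /dev/md0 /mnt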
To persist the device across reboots, add it to the /etc/fstab file:
$ cat /etc/fstab
/dev/md0 /mnt ext4 defaults 0 0
Now our filesystem which is mounted at /mnt is ready to be used.
RAID Configuration (across reboots)
By default RAID doesn't have a config file, therefore we need to save it manually. If this step is not followed, the RAID device may not come back as md0 after a reboot, but perhaps as something else.
So we must save the configuration so that it persists across reboots: when the server reboots, the configuration gets loaded into the kernel and the RAID array is assembled again.
Note: Saving the configuration keeps the array stable at the /dev/md0 device name.
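A minimal sketch for doing that on Ubuntu, appending the scan output to mdadm's config and rebuilding the initramfs:

$ mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
$ update-initramfs -u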
Adding Spare Devices
Earlier I mentioned that we have spare disks that we can use to expand our raid device. After they have been formatted we can add them as spare devices to our raid setup:
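A sketch, assuming the new disks were partitioned the same way as before (/dev/xvde1, /dev/xvdf1 and /dev/xvdg1):

$ mdadm --add /dev/md0 /dev/xvde1 /dev/xvdf1 /dev/xvdg1

Then we grow the array from three to six devices:

$ mdadm --grow --raid-devices=6 /dev/md0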
Once we have added the spares and grown our device, we need to run an integrity check, then we can resize the volume. But first, we need to unmount our filesystem and run the filesystem check, along these lines:
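$ umount /mnt
$ e2fsck -f /dev/md0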
$ resize2fs /dev/md0
resize2fs 1.45.5 (07-Jan-2020)
Resizing the filesystem on /dev/md0 to 13094400 (4k) blocks.
The filesystem on /dev/md0 is now 13094400 (4k) blocks long.
Then we remount our filesystem:
$ mount /dev/md0 /mnt
After the filesystem has been mounted, we can view the disk size and confirm that the size increased:
$ df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/md0 50G 52M 47G 1% /mnt
Thank You
Thanks for reading, feel free to check out my website, feel free to subscribe to my newsletter or follow me at @ruanbekker on Twitter.
Head over to the Python Downloads section and select the version of your choice, in my case I will be using Python 3.8.13, once you have the download link, download it:
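For 3.8.13 that looks like this:

$ wget https://www.python.org/ftp/python/3.8.13/Python-3.8.13.tgz
$ tar -xf Python-3.8.13.tgz
$ cd Python-3.8.13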
Configure the build and add the --enable-optimizations flag as an argument:
$ ./configure --enable-optimizations
Run make and make install:
$ make
$ sudo make install
Once it completes, you can symlink the python binary so that it's detected by your PATH. If you have no other installed python versions, or want to use this one as the default, you can force overwriting the symlink:
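A sketch, assuming make install placed the binary under /usr/local/bin:

$ sudo ln -sf /usr/local/bin/python3.8 /usr/local/bin/python3
$ python3 --version
Python 3.8.13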
In this tutorial we will demonstrate how to persist iptables rules across reboots.
Rules Persistence
By default, when you create iptables rules they are active immediately, but as soon as you restart your server, the rules will be gone. Therefore we need to persist these rules across reboots.
Dependencies
We require the package iptables-persistent and I will install it on a debian system so I will be using apt:
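$ sudo apt update
$ sudo apt install iptables-persistent -y

During the installation you will be prompted to save the current rules. Afterwards, whenever you change your rules, you can persist them with:

$ sudo netfilter-persistent save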
I’ve stumbled upon a great bookmarks manager service called Linkding. What I really like about it: it allows you to save your bookmarks and assign tags to them to search for later, it has Chrome and Firefox browser extensions, and it comes with an API.
Installing Linkding
We will be using Traefik to do SSL termination and host based routing, if you don’t have Traefik running already, you can follow this post to get that set up:
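If you just want to try linkding without Traefik in front of it, a minimal sketch using what I believe is the official image (9090 is the container's default port, and the volume holds its data):

$ docker run -d --name linkding -p 9090:9090 -v $(pwd)/data:/etc/linkding/data sissbruecker/linkding:latest

Then create a user with a management command inside the container (username and email are just examples):

$ docker exec -it linkding python manage.py createsuperuser --username=admin --email=admin@example.com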
Once you head over to the linkding url that you provided and you logon, you should be able to see something like this:
Creating Bookmarks
When you select “Add Bookmark” and you provide the URL, linkding will retrieve the title and the description and populate it for you, and you can provide the tags (separated by spaces):
Browser Extensions
To add a browser extension, select “Settings”, then “Integrations”, then you will find the link to the browser extension for Chrome and Firefox:
After you install the browser extension and click on it for the first time, it will ask you to set the Linkding Base URL and API Authentication Token:
You can find that at the bottom of the “Integrations” section:
REST API
You can follow the API Docs for more information, using an example to search for bookmarks with the term “docker”:
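Something along these lines, where linkding.example.com is a placeholder for your own linkding URL:

$ curl -s -H "Authorization: Token <your-api-token>" \
    "https://linkding.example.com/api/bookmarks/?q=docker"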
{"count":1,"next":null,"previous":null,"results":[{"id":6,"url":"https://www.docker.com/blog/deploying-web-applications-quicker-and-easier-with-caddy-2/","title":"","description":"","website_title":"Deploying Web Applications Quicker and Easier with Caddy 2 - Docker","website_description":"Deploying web apps can be tough, even with leading server technologies. Learn how you can use Caddy 2 and Docker simplify this process.","is_archived":false,"tag_names":["caddy","docker"],"date_added":"2022-05-31T19:11:53.739002Z","date_modified":"2022-05-31T19:11:53.739016Z"}]}
Thank You
Thanks for reading, feel free to check out my website, read my newsletter or follow me at @ruanbekker on Twitter.
In this tutorial, we will demonstrate how to use Python Flask and render_template to use Jinja Templating with our Form. The example is just a ui that accepts a firstname, lastname and email address and when we submit the form data, it renders on a table.
Install Flask
Create a virtual environment and install Python Flask:
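$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install flask

Then, as a minimal sketch of app.py that matches the description below (app_version is an assumed variable name):

from flask import Flask, render_template, request

app = Flask(__name__)
app_version = '1.0.0'  # assumption: a version string that we pass to our template

@app.route('/')
def form():
    # our first route renders the form template
    return render_template('form.html')

@app.route('/result', methods=['POST', 'GET'])
def result():
    if request.method == 'POST':
        # capture the form data and cast it to a dictionary
        result = request.form.to_dict()
        # print the results of our form data (for debugging)
        print(result)
        return render_template('result.html', result=result, app_version=app_version)

if __name__ == '__main__':
    app.run(debug=True)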
As you can see, our first route / will render the template in form.html. On our second route /result, a couple of things are happening:
If we received a POST method, we will capture the form data
We are then casting it to a dictionary data type
Print the results out of our form data (for debugging)
Then we are passing the result object and the app_version variable to our template where it will be parsed.
When using render_template all html files resides under the templates directory, so let’s first create our base.html file that we will use as a starting point in templates/base.html:
$ mkdir templates
Then in your templates/base.html:
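A bare-bones sketch of what it could contain (styling omitted), with a content block for child templates:

<!DOCTYPE html>
<html>
  <head>
    <title>Flask Form Example</title>
  </head>
  <body>
    {% block content %}{% endblock %}
  </body>
</html>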
In our templates/form.html we have our form template, and you can see we are referencing our base.html in our template to include the first bit:
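A sketch, assuming input names that match the fields we capture (firstname, lastname, email):

{% extends "base.html" %}
{% block content %}
<form action="/result" method="POST">
  <input type="text" name="firstname" placeholder="Firstname">
  <input type="text" name="lastname" placeholder="Lastname">
  <input type="email" name="email" placeholder="Email">
  <button type="submit">Submit</button>
</form>
{% endblock %}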
Then our last template templates/result.html is used when we click on submit, when the form data is displayed in our table:
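A sketch that loops over the submitted form data and renders each key and value as a table row:

{% extends "base.html" %}
{% block content %}
<table>
  {% for key, value in result.items() %}
  <tr>
    <th>{{ key }}</th>
    <td>{{ value }}</td>
  </tr>
  {% endfor %}
</table>
<p>app version: {{ app_version }}</p>
{% endblock %}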
This is a quick demonstration on how to use prometheus relabel configs, for scenarios where, for example, you want to use a part of your hostname and assign it to a prometheus label.
Prometheus Relabeling
Using a standard prometheus config to scrape two targets:
- ip-192-168-64-29.multipass:9100
- ip-192-168-64-30.multipass:9100
We want to relabel one of the prometheus internal source labels, __address__, which is the given target including the port. We apply regex: (.*) to catch everything from the source label, and since there is only one capture group, we use ${1}-randomtext as the replacement and apply that value to the given target_label, which in this case is randomlabel.
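A sketch of the scrape config (the job name is an assumption):

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets:
          - ip-192-168-64-29.multipass:9100
          - ip-192-168-64-30.multipass:9100
    relabel_configs:
      - source_labels: [__address__]
        regex: (.*)
        target_label: randomlabel
        replacement: ${1}-randomtext

This results in a label like randomlabel="ip-192-168-64-29.multipass:9100-randomtext" on the scraped metrics.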
In this case we want to relabel the __address__ and apply the value to the instance label, but we want to exclude the :9100 from the __address__ label:
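A sketch of just the relabel_configs section for that:

    relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):9100'
        target_label: instance
        replacement: ${1}

The regex captures everything before :9100 into the first group, so the instance label becomes ip-192-168-64-29.multipass.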
On AWS EC2 you can make use of the ec2_sd_config where you can make use of EC2 Tags, to set the values of your tags to prometheus label values.
In this scenario, on my EC2 instances I have 3 tags:
- Key: PrometheusScrape, Value: Enabled
- Key: Name, Value: pdn-server-1
- Key: Environment, Value: dev
In our config, we only apply a node-exporter scrape config to instances which are tagged PrometheusScrape=Enabled, then we use the Name tag and assign its value to the instance label, and similarly we assign the Environment tag value to the environment prometheus label.
Because this prometheus instance resides in the same VPC, I am using the __meta_ec2_private_ip which is the private ip address of the EC2 instance to assign it to the address where it needs to scrape the node exporter metrics endpoint:
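A sketch of what that looks like (the region is an assumption, and credentials are omitted):

scrape_configs:
  - job_name: 'node'
    ec2_sd_configs:
      - region: eu-west-1
        port: 9100
    relabel_configs:
      # only keep instances that are tagged PrometheusScrape=Enabled
      - source_labels: [__meta_ec2_tag_PrometheusScrape]
        regex: Enabled
        action: keep
      # assign the value of the Name tag to the instance label
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
      # assign the value of the Environment tag to the environment label
      - source_labels: [__meta_ec2_tag_Environment]
        target_label: environment
      # scrape the node exporter on the instance's private ip
      - source_labels: [__meta_ec2_private_ip]
        regex: (.*)
        target_label: __address__
        replacement: ${1}:9100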
In this tutorial we will be creating an AWS Lambda Python Layer that will include the Python Requests package, and we will compile the package with Docker and the LambCI image.
Getting Started
First we will create the directory where we will store the intermediate data:
$ mkdir lambda-layers
$ cd lambda-layers
Then we will create the directory structure, as you can see I will be using the python 3.8 runtime:
$ mkdir -p requests/python/lib/python3.8
$ cd requests
Write the dependencies to the requirements file:
$ echo"requests" > requirements.txt
Install the dependencies locally using docker, where we will be using the lambci/lambda:build-python3.8 image. We mount our current working directory to /var/task inside the container, and then run the command pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit inside the container, which essentially dumps the installed packages into our working directory:
$ docker run -v $PWD:/var/task \
    lambci/lambda:build-python3.8 \
    sh -c "pip install -r requirements.txt -t python/lib/python3.8/site-packages/; exit"
Zip up the deployment package that we will push to AWS Lambda Layers:
$ zip -r package.zip python > /dev/null
Publish the layer using the aws cli tools, by specifying the deployment package, the compatible runtime and an identifier:
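A sketch, with python-requests as an assumed layer name:

$ aws lambda publish-layer-version \
    --layer-name python-requests \
    --description "python requests layer" \
    --zip-file fileb://package.zip \
    --compatible-runtimes python3.8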
In this tutorial we will customize the vim editor by adding the molokai color scheme, changing a couple of basic settings (more suited to my preference - not too much) and adding a couple of plugins that will change the look to something like this:
About Vim
vim has always been my favorite linux text editor; it is super powerful and highly customizable.
Install Vim
Update indexes:
sudo apt update
Install vim:
sudo apt install vim -y
Color Scheme
To find all existing vim color schemes installed:
find /usr/share/vim/vim*/colors/ -type f -name "*.vim"
By default our color scheme will look like this when we create foo.py:
When we hit the “esc” button, and enter :colorscheme molokai we can change the colorscheme to molokai, and then we should have the following:
To persist these changes, open up ~/.vimrc and paste the following as a starter:
colorscheme molokai
syntax on
Now when we open up foo.py we will see that it defaults to the molokai color scheme.
Vim Configuration
Everyone has their own personal preference on vim configs, but I like to keep mine basic, and this is the content of my ~/.vimrc:
colorscheme molokai
syntax on
set mouse-=a
filetype on
filetype indent plugin on
set noexpandtab   " tabs ftw
set smarttab      " tab respects 'tabstop', 'shiftwidth', and 'softtabstop'
set tabstop=4     " the visible width of tabs
set softtabstop=4 " edit as if the tabs are 4 characters wide
set shiftwidth=4  " number of spaces to use for indent and unindent
set shiftround    " round indent to a multiple of 'shiftwidth'
autocmd FileType yml setlocal ts=2 sts=2 sw=2 expandtab
autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab
"" https://github.com/VundleVim/Vundle.vim
set nocompatible
filetype off
" set the runtime path to include Vundle and initializeset rtp+=~/.vim/bundle/Vundle.vimcall vundle#begin()" alternatively, pass a path where Vundle should install plugins
"call vundle#begin('~/some/path/here')"let Vundle manage Vundle, required
Plugin 'VundleVim/Vundle.vim'" The following are examples of different formats supported." Keep Plugin commands between vundle#begin/end.
" plugin on GitHub repoPlugin 'tpope/vim-fugitive'" plugin from http://vim-scripts.org/vim/scripts.html
" Plugin 'L9'" Git plugin not hosted on GitHub
Plugin 'git://git.wincent.com/command-t.git'" git repos on your local machine (i.e. when working on your own plugin)" Plugin 'file:///home/gmarik/path/to/plugin'" The sparkup vim script is in a subdirectory of this repo called vim." Pass the path to set the runtimepath properly.
Plugin 'rstacruz/sparkup', {'rtp': 'vim/'}" Install L9 and avoid a Naming conflict if you've already installed a" different version somewhere else.
" Plugin 'ascenator/L9', {'name': 'newL9'}" All of your Plugins must be added before the following line
call vundle#end()" requiredfiletype plugin indent on " required
" To ignore plugin indent changes, instead use:"filetype plugin on
"" Brief help" :PluginList - lists configured plugins" :PluginInstall - installs plugins; append `!` to update or just :PluginUpdate
" :PluginSearch foo - searches for foo; append `!` to refresh local cache" :PluginClean - confirms removal of unused plugins; append `!` to auto-approve removal
"" see :h vundle for more details or wiki for FAQ
" Put your non-Plugin stuff after this line" colorscheme duo-mini
" sets color themescolorscheme molokaisyntax on" sets the filename at the bottom
set laststatus=2
" https://github.com/itchyny/lightline.vimPlugin 'itchyny/lightline.vim'" https://github.com/Shougo/neobundle.vim
" Note: Skip initialization for vim-tiny or vim-small.if 0 | endifif &compatible set nocompatible " Be iMproved
endif
" Required:set runtimepath+=~/.vim/bundle/neobundle.vim/" Required:
call neobundle#begin(expand('~/.vim/bundle/'))" Let NeoBundle manage NeoBundle" Required:
NeoBundleFetch 'Shougo/neobundle.vim'" My Bundles here:" Refer to |:NeoBundle-examples|.
" Note: You don't set neobundle setting in .gvimrc!NeoBundle 'itchyny/lightline.vim'call neobundle#end()" Required:
filetype plugin indent on
" If there are uninstalled bundles found on startup," this will conveniently prompt you to install them.
NeoBundleCheck
" https://github.com/junegunn/vim-plug" Specify a directory for plugins
" - For Neovim: stdpath('data') . '/plugged'" - Avoid using standard Vim directory names like 'plugin'call plug#begin('~/.vim/plugged')" Make sure you use single quotes" Shorthand notation; fetches https://github.com/junegunn/vim-easy-align
Plug 'junegunn/vim-easy-align'" Any valid git URL is allowedPlug 'https://github.com/junegunn/vim-github-dashboard.git'" Multiple Plug commands can be written in a single line using | separators
"Plug 'SirVer/ultisnips' | Plug 'honza/vim-snippets'" On-demand loading
Plug 'scrooloose/nerdtree', {'on': 'NERDTreeToggle'}Plug 'tpope/vim-fireplace', {'for': 'clojure'}" Using a non-master branchPlug 'rdnetto/YCM-Generator', { 'branch': 'stable' }" Using a tagged release; wildcard allowed (requires git 1.9.2 or above)Plug 'fatih/vim-go', {'tag': '*'}" Plugin optionsPlug 'nsf/gocode', { 'tag': 'v.20150303', 'rtp': 'vim' }" Plugin outside ~/.vim/plugged with post-update hook
Plug 'junegunn/fzf', {'dir': '~/.fzf', 'do': './install --all'}" Unmanaged plugin (manually installed and updated)Plug '~/my-prototype-plugin'Plug 'itchyny/lightline.vim'" Initialize plugin system
call plug#end()" sets the filename as the title up top"set title
" let g:airline#extensions#tabline#enabled = 1set noexpandtab " tabs ftw
set smarttab " tab respects 'tabstop', 'shiftwidth', and 'softtabstop'set tabstop=4 " the visible width of tabs
set softtabstop=4" edit as if the tabs are 4 characters wideset shiftwidth=4 " number of spaces to use for indent and unindent
set shiftround " round indent to a multiple of 'shiftwidth'autocmd FileType yml setlocal ts=2sts=2sw=2 expandtab
autocmd FileType yaml setlocal ts=2sts=2sw=2 expandtab