I had a public VPS that I wanted to scrape node-exporter metrics from, but my Prometheus instance was behind a dynamic IP address. Allowing only my Prometheus instance to scrape my Node Exporter instance was difficult, since the IP kept changing and I had to keep updating my iptables firewall rules.
In this tutorial I will show you how to set up TLS and Basic Authentication on Node Exporter, and how to configure Prometheus to pass the authentication to successfully scrape the Node Exporter metrics endpoint.
Install Node Exporter
On the node-exporter host, set the environment variables for the version, user and directory path where node exporter will be installed:
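For example (the values below are assumptions for this walkthrough; check the node_exporter releases page for the latest version):

```shell
# Assumed values -- adjust the version to the latest node_exporter
# release, and the user/directory to your own conventions.
export NODE_EXPORTER_VERSION="1.3.1"
export NODE_EXPORTER_USER="node_exporter"
export NODE_EXPORTER_DIR="/opt/node_exporter"
```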
Install htpasswd so that we can generate a bcrypt password hash; it will prompt you for the password that we are setting for the prometheus user:
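With the generated hash in hand, the node_exporter web-config file would look something like this sketch; the certificate paths and the hash value are placeholders, not values from this setup:

```yaml
tls_server_config:
  cert_file: node_exporter.crt   # placeholder: your TLS certificate
  key_file: node_exporter.key    # placeholder: your TLS private key
basic_auth_users:
  # placeholder: paste the bcrypt hash that htpasswd generated
  prometheus: "$2y$10$replace-with-your-generated-hash"
```

You then point node_exporter at this file with its web config flag (the flag name depends on your node_exporter version).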
Concourse CI is a Continuous Integration platform. Concourse enables you to construct pipelines with a YAML configuration that consists of three core concepts: tasks, resources, and the jobs that compose them. For more information, have a look at their docs.
What will we be doing today
We will set up a Concourse CI server v6.7.6 (web and worker) on Ubuntu 20.04 and run the traditional Hello, World pipeline.
Create SystemD Unit Files, first for the Web Service:
$ cat > /etc/systemd/system/concourse-web.service << EOF
[Unit]
Description=Concourse CI web process (ATC and TSA)
After=postgresql.service

[Service]
User=concourse
Restart=on-failure
EnvironmentFile=/etc/concourse/web_environment
ExecStart=/usr/local/concourse/bin/concourse web

[Install]
WantedBy=multi-user.target
EOF
Then the SystemD Unit File for the Worker Service:
$ cat > /etc/systemd/system/concourse-worker.service << EOF
[Unit]
Description=Concourse CI worker process
After=concourse-web.service

[Service]
User=root
Restart=on-failure
EnvironmentFile=/etc/concourse/worker_environment
ExecStart=/usr/local/concourse/bin/concourse worker

[Install]
WantedBy=multi-user.target
EOF
Create a postgres password for the concourse user:
$ cd /home/concourse/
$ sudo -u concourse psql atc
atc=> ALTER USER concourse WITH PASSWORD 'concourse';
atc=> \q
Start and Enable the Services:
$ systemctl start concourse-web concourse-worker
$ systemctl enable concourse-web concourse-worker postgresql
$ systemctl status concourse-web concourse-worker
$ systemctl is-active concourse-worker concourse-web
active
active
The listening ports should more or less look like the following:
Next, we need to set up our Concourse target by authenticating against our Concourse endpoint. Let's set up our target with the name ci, and make sure to replace the IP address with the IP of your Concourse server:
$ fly -t ci login -c http://${IP_ADDRESS}:8080
logging in to team 'main'

navigate to the following URL in your browser:

  http://${IP_ADDRESS}:8080/login?fly_port=42181

or enter token manually (input hidden):
target saved
Let's list our targets:
$ fly targets
name  url                  team  expiry
ci    http://x.x.x.x:8080  main  Wed, 08 Nov 2021 15:32:59 UTC
Listing Registered Workers:
$ fly -t ci workers
name     containers  platform  tags  team  state    version
x.x.x.x  0           linux     none  none  running  1.2
Listing Active Containers:
$ fly -t ci containers
handle  worker  pipeline  job  build #  build id  type  name  attempt
Hello World Pipeline:
Let’s create a basic pipeline that will print out Hello, World!:
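The hello-world.yml used here is reconstructed from the set-pipeline diff shown in the output below:

```yaml
jobs:
- name: my-job
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: alpine
          tag: edge
      run:
        path: /bin/sh
        args:
        - -c
        - |
          echo "============="
          echo "Hello, World!"
          echo "============="
```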
$ fly -t ci set-pipeline -p yeeehaa -c hello-world.yml
jobs:
  job my-job has been added:
    name: my-job
    plan:
    - task: say-hello
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: alpine
            tag: edge
        run:
          path: /bin/sh
          args:
          - -c
          - |
            echo "============="
            echo "Hello, World!"
            echo "============="

apply configuration? [yN]: y
pipeline created!
you can view your pipeline here: http://x.x.x.x:8080/teams/main/pipelines/yeeehaa
the pipeline is currently paused. to unpause, either:
  - run the unpause-pipeline command
  - click play next to the pipeline in the web ui
We can browse to the WebUI to unpause the pipeline, but since I like to do as much as possible on the CLI, I will unpause the pipeline via the CLI:
$ fly -t ci unpause-pipeline -p yeeehaa
unpaused 'yeeehaa'
Now our pipeline is unpaused, but since we did not specify any triggers, we need to trigger the pipeline manually. You can do this via the WebUI: select your pipeline (in this case named yeeehaa), select the job (my-job), then hit the + sign to trigger the pipeline.
I will be using the cli:
$ fly -t ci trigger-job --job yeeehaa/my-job
started yeeehaa/my-job #1
Via the WebUI on http://x.x.x.x:8080/teams/main/pipelines/yeeehaa/jobs/my-job/builds/1 you should see the Hello, World! output, or via the cli, we also have the option to see the output, so let’s trigger it again, but this time passing the --watch flag:
$ fly -t ci trigger-job --job yeeehaa/my-job --watch
started yeeehaa/my-job #2

initializing
running /bin/sh -c echo "============="
echo "Hello, World!"
echo "============="

=============
Hello, World!
=============
succeeded
Listing our Workers and Containers again:
$ fly -t ci workers
name     containers  platform  tags  team  state    version
x.x.x.x  2           linux     none  none  running  1.2
$ fly -t ci containers
handle                                worker   pipeline  job     build #  build id  type  name       attempt
46282555-64cd-5h1b-67b8-316486h58eb8  x.x.x.x  yeeehaa   my-job  2        729       task  say-hello  n/a
Thank You
Thanks for reading. If you like my content, check out my website or follow me at @ruanbekker on Twitter.
Vagrant makes it really easy to provision virtual servers for local development (though it is not limited to that), which they refer to as “boxes”, enabling developers to run their jobs/tasks/applications in a really easy and fast way. Vagrant utilizes a declarative configuration model, so you can describe which OS you want, bootstrap it with installation instructions as soon as it boots, and so on.
What are we doing today?
When completing this tutorial, you will have Vagrant and VirtualBox installed on your Mac and should be able to launch an Ubuntu virtual server locally with Vagrant, using the VirtualBox provider, which will be responsible for running our VMs.
We will also look at different configuration options for the VM and at bootstrapping software using the shell, docker and ansible provisioners.
For this demonstration I am using macOS, but you can run this on Mac, Windows or Linux. First we will use Homebrew to install VirtualBox, then Vagrant, then we will provision an Ubuntu box. I will also show how to inject shell commands into your Vagrantfile so that you can provision software to your VM, and how to forward traffic from the host to a web server on the guest.
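A minimal Vagrantfile sketch for this setup; the box name and the inline Nginx install are assumptions based on what is provisioned later in this post:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"   # assumed Ubuntu box
  # forward port 8080 on the host to port 80 on the guest
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # bootstrap a web server with the shell provisioner (assumption: nginx)
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx
  SHELL
end
```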
You will also notice that we are forwarding port 8080 from our host, to port 80 on the VM so that we can access the webserver on port 8080 from our laptop. Then boot the VM:
$ vagrant up
Once the VM has booted and installed our software, we should be able to access the index document served by Nginx on our VM:
Let’s say you want to map your local directory to your VM, in a scenario where you want to store your index.html on your laptop and map it to the VM, we can use config.vm.synced_folder.
On our laptop, create a html directory where we will store our index.html:
$ mkdir html
Now create the content in the index.html under the html directory:
$ echo "Hello, World" > html/index.html
Now we need to make Vagrant aware of the folder that we are mapping to the VM, so we need to edit the Vagrantfile, which will now look like this:
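A sketch of the relevant mapping; the guest path /var/www/html is an assumption based on Nginx's default web root:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  # map the local ./html directory to the web root on the VM
  config.vm.synced_folder "html/", "/var/www/html"
end
```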
To reload the VM with our changes, we use vagrant provision to update the VM when changes to provisioners are made, and vagrant reload when we have config changes such as config.vm.network. But to restart the VM and force the provisioners to run, we can run vagrant reload with the --provision flag.
Since I don’t have port 3306 listening locally, I have mapped port 3306 on my laptop to port 3306 on the VM, and I am using the mysql:8.0 container image from Docker Hub, passing the arguments which are specific to the container.
The convenient thing about the docker provisioner, is that it will install docker onto the VM for you.
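A sketch of the docker provisioner configuration; the MYSQL_ROOT_PASSWORD value is an assumption for illustration:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  # expose mysql to the host
  config.vm.network "forwarded_port", guest: 3306, host: 3306
  # the docker provisioner installs docker on the VM and runs the container
  config.vm.provision "docker" do |d|
    d.run "mysql",
      image: "mysql:8.0",
      args: "-p 3306:3306 -e MYSQL_ROOT_PASSWORD=password"  # assumed password
  end
end
```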
Once the config has been set in your Vagrantfile do a reload:
From our laptop we should be able to communicate with our mysql server:
$ nc -vz localhost 3306
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src 127.0.0.1 port 58745
dst 127.0.0.1 port 3306
rank info not available
TCP aux info available
Connection to localhost port 3306 [tcp/mysql] succeeded!
We can also SSH to our VM and verify if the container is running:
$ vagrant ssh
And then list the containers:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
30a843a486ae mysql:8.0 "docker-entrypoint.sh 2 minutes ago Up 2 minutes 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp mysql
Then the content for our provisioning/playbook.yml playbook:
---
- hosts: all
  become: yes
  tasks:
  - name: ensure ntpd is at the latest version
    apt:
      pkg: ntp
      state: "{{ desired_state }}"
    notify:
    - restart ntpd
  handlers:
  - name: restart ntpd
    service:
      name: ntp
      state: restarted
Our provisioning/group_vars/all file that will contain the variables for the all group:
desired_state: "latest"
In our Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end
When using Ansible with Vagrant, the inventory is auto-generated when an inventory is not specified. Vagrant will store the inventory on the host at .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
To execute playbooks with Ansible, we need Ansible installed on our host machine. For this demonstration I will be using a virtual environment and then install Ansible using pip:
With bitcoin-core, you get a configuration option called walletnotify which allows you to invoke a command whenever you receive a payment, on the first confirmation of a payment, or when you send a payment.
You can specify %s as an argument which will be used to parse the transaction id.
Bitcoind WalletNotify TransactionID Example
To see what walletnotify does: in my bitcoin.conf I had a basic script to write an entry every time I receive a payment:
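The script itself did not survive in this post, so here is a hypothetical stand-in that logs the transaction id; the log path and script location are assumptions. It would be wired up in bitcoin.conf as walletnotify=/usr/local/bin/walletnotify.sh %s:

```shell
#!/bin/sh
# bitcoind substitutes %s with the transaction id and passes it as $1
TXID="$1"
echo "$(date -u) walletnotify for txid: ${TXID}" >> /tmp/walletnotify.log
```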
I had to figure out how to capture the wallet name as well as the transaction id; I thought it was not possible until I stumbled upon a post which mentioned that from bitcoind 0.20:
The -walletnotify configuration parameter will now replace any %w in its argument with the name of the wallet generating the notification.
Task (aka Taskfile) is a task runner written in Go, which is similar to GNU Make, but in my opinion is a lot easier to use as you specify your tasks in yaml.
What to expect
In this post we will go through a quick demonstration using Task, how to install Task, as well as a couple of basic examples to get you up and running with Task.
Install
For Mac, installing Task:
$ brew install go-task/tap/go-task
For linux, installing task:
$ sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin
To run both tasks with one command, you can specify dependencies, so if we define a task with zero commands but just dependencies, it will call those tasks and execute them:
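A hypothetical Taskfile.yml sketch; the task names hello, world and all are assumptions for illustration:

```yaml
version: '3'

tasks:
  hello:
    cmds:
      - echo "Hello"
  world:
    cmds:
      - echo "World"
  # no commands of its own -- running `task all` executes both dependencies
  all:
    deps: [hello, world]
```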
I’m trying to force myself to move away from the print() function, as I’m pretty much using print all the time to cater for logging, and to use the logging package instead.
This is a basic example of using logging in a basic python app:
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s %(message)s",
    handlers=[logging.StreamHandler()]
)

messagestring = {
    'info': 'info message',
    'warn': 'this is a warning',
    'err': 'this is a error'
}

logger = logging.getLogger('thisapp')
logger.info('message: {}'.format(messagestring['info']))
logger.warning('message: {}'.format(messagestring['warn']))
logger.error('message: {}'.format(messagestring['err']))
When running this example, this is the output that you will see:
$ python app.py
2021-07-19 13:07:43,647 [INFO] thisapp message: info message
2021-07-19 13:07:43,647 [WARNING] thisapp message: this is a warning
2021-07-19 13:07:43,647 [ERROR] thisapp message: this is a error
The ECS Execution Role is used by the ecs-agent which runs on ECS and is responsible for:
- Pulling down docker images from ECR
- Fetching the SSM Parameters from SSM for your Task (Secrets and LogConfigurations)
- Writing Logs to CloudWatch
The IAM Role is configured so that the trusted identity is ECS, meaning only ECS is allowed to assume credentials through the IAM Policy that is associated with the Role. So only the ECS tasks using the role are allowed to assume credentials from the IAM Role, and the policy associated with the role can look something like this:
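A sketch of what such an execution role policy could contain; the exact actions and the SSM parameter path are assumptions, so scope them down for your own setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameters"],
      "Resource": "arn:aws:ssm:*:*:parameter/myapp/*"
    }
  ]
}
```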
In your application you don’t need to reference any AWS access keys, as the SDK will assume credentials from the role for you. With Python, a short example would be:
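A hypothetical sketch that reads a secret from SSM Parameter Store; the parameter name is made up, and boto3 resolves the role credentials for you:

```python
def get_ssm_parameter(name):
    # no access keys referenced: boto3 picks up credentials from the
    # ECS task role via the container credentials endpoint
    import boto3
    ssm = boto3.client("ssm")
    response = ssm.get_parameter(Name=name, WithDecryption=True)
    return response["Parameter"]["Value"]

# usage (requires running inside the ECS task, so not executed here):
#   password = get_ssm_parameter("/myapp/database/password")
```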
In this tutorial I will show you how to set up a DigiByte (DGB) full node on Linux and how to interact with your wallet and the blockchain.
What is a Full Node
By running a Full Node, you contribute by helping to fully validate transactions and blocks. Almost all full nodes also help the network by accepting transactions and blocks from other full nodes, validating those transactions and blocks and then relaying them to other full nodes. Therefore you are contributing to maintaining the consensus of the blockchain.
Hardware Requirements
In order to run a full DigiByte node you will need a server that is preferably online 24/7, with an uncapped connection, as at the time of writing the DigiByte blockchain is about 25GB in size and increases over time. I used a server with 2 vCPUs and 4GB of memory.
Let’s make a transaction to my wallet node from a cryptocurrency exchange where I have DigiByte. First, get the wallet address where we would like to deposit the cryptocurrency:
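This goes over the JSON-RPC interface; the call would look something like the sketch below, reusing the endpoint and credentials from the send example later in this post:

```shell
$ curl -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id": "curl", "method": "getnewaddress", "params": []}'
```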
From an exchange where you have DGB, withdraw to the address DN8RMAUz2yHGW1PuuLtiSkiTZARzMJ4L2A, which is your wallet on the node (ensure you have enough to cover the transaction fee).
Once the transaction has enough confirmations, have a look at your wallet balance, and you will see the 5 DGB that I sent to my wallet:
I’ve set up a software wallet on my PC, and in the wallet I selected receive and copied my DGB software wallet address. Now I would like to transfer my funds from my node wallet to my software wallet:
$ curl -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id":"curl", "method": "sendtoaddress", "params": ["DTqHG9KA3oQAywq18gpBknxHXHZviyYdvS", 5.0, "donation", "happy bday"] }'
{"result":null,"error":{"code":-4,"message":"Error: This transaction requires a transaction fee of at least 0.0004324"},"id":"curl"}
As you can see, I don’t have enough in my node’s wallet to make the transaction, therefore I need to take the transaction cost into consideration:
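One way to account for the fee is to let the node subtract it from the amount being sent, using sendtoaddress's subtractfeefromamount argument; a sketch of the adjusted call:

```shell
$ curl -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id": "curl", "method": "sendtoaddress", "params": ["DTqHG9KA3oQAywq18gpBknxHXHZviyYdvS", 5.0, "donation", "happy bday", true] }'
```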
As you can see, we get back a transaction id which we can use later to check up on the transaction. A couple of seconds later I received a notification on my software wallet that my funds were received: