Ruan Bekker's Blog

From a Curious mind to Posts on Github

Setup TLS and Basic Authentication on Node Exporter for Prometheus

I had a public VPS server that I wanted to scrape node-exporter metrics from, but my Prometheus instance was behind a dynamic IP address. Allowing only my Prometheus instance to scrape my Node Exporter instance was a bit difficult, since the IP kept changing and I had to keep updating my iptables firewall rules.

In this tutorial I will show you how to setup TLS and Basic Authentication on Node Exporter, and how to configure Prometheus to pass the authentication so that it can successfully scrape the node exporter metrics endpoint.

Install Node Exporter

On the node-exporter host, set the environment variables for the version, the user and the directory path where node exporter will be installed:

$ NODE_EXPORTER_VERSION="1.1.2"
$ NODE_EXPORTER_USER="node_exporter"
$ BIN_DIRECTORY="/usr/local/bin"
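
The commands below chown the binary to ${NODE_EXPORTER_USER}, so the user needs to exist. If it does not exist yet, a minimal sketch to create it as a system account without a login shell (adjust the shell path to your distribution):

$ useradd --system --no-create-home --shell /usr/sbin/nologin ${NODE_EXPORTER_USER}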

Download the node-exporter tarball and put the binary in place:

$ wget https://github.com/prometheus/node_exporter/releases/download/v${NODE_EXPORTER_VERSION}/node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64.tar.gz
$ tar -xf node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64.tar.gz
$ cp node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64/node_exporter ${BIN_DIRECTORY}/
$ chown ${NODE_EXPORTER_USER}:${NODE_EXPORTER_USER} ${BIN_DIRECTORY}/node_exporter
$ rm -rf node_exporter-${NODE_EXPORTER_VERSION}.linux-amd64*
$ mkdir /etc/node-exporter

Configuration

Create a self-signed cert for node-exporter:

$ openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout node_exporter.key -out node_exporter.crt -subj "/C=ZA/ST=CT/L=SA/O=VPN/CN=localhost" -addext "subjectAltName = DNS:localhost"
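
To double-check the certificate before moving it into place, you can inspect the subject and SAN with openssl (an optional sanity check):

$ openssl x509 -in node_exporter.crt -noout -text | grep -A1 'Subject Alternative Name'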

Move the certs into the directory we created:

$ mv node_exporter.* /etc/node-exporter/

Install apache2-utils so that we can use htpasswd to generate a password hash with bcrypt; the command below will prompt you for the password that we are setting for the prometheus user:

$ apt install apache2-utils
$ htpasswd -nBC 10 "" | tr -d ':\n'; echo

Now populate the config for node-exporter:

$ cat /etc/node-exporter/config.yml
tls_server_config:
  cert_file: /etc/node-exporter/node_exporter.crt
  key_file: /etc/node-exporter/node_exporter.key
basic_auth_users:
  prometheus: <the-output-value-of-htpasswd>

Change the ownership of the node exporter directory:

$ chown -R ${NODE_EXPORTER_USER}:${NODE_EXPORTER_USER} /etc/node-exporter

Then create the systemd unit file:

$ cat > /etc/systemd/system/node_exporter.service << EOF
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
User=${NODE_EXPORTER_USER}
Group=${NODE_EXPORTER_USER}
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=${BIN_DIRECTORY}/node_exporter --web.config=/etc/node-exporter/config.yml
[Install]
WantedBy=multi-user.target
EOF

Reload systemd and start node-exporter:

$ systemctl daemon-reload
$ systemctl enable node_exporter
$ systemctl restart node_exporter
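
To verify that TLS and basic authentication work before we point Prometheus at it, you can curl the metrics endpoint from the node itself; -k skips certificate verification since the certificate is self-signed, and the password is the plain-text one you fed to htpasswd:

$ curl -k -u prometheus:<the-plain-text-password> https://localhost:9100/metrics | head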

Prometheus Config

Copy the /etc/node-exporter/node_exporter.crt from the node-exporter host to the prometheus host, then add the following to the /etc/prometheus/prometheus.yml config:

scrape_configs:
  - job_name: 'node-exporter-tls'
    scheme: https
    basic_auth:
      username: prometheus
      password: <the-plain-text-password>
    tls_config:
      ca_file: node_exporter.crt
      insecure_skip_verify: true
    static_configs:
    - targets: ['node-exporter-ip:9100']
      labels:
        instance: friendly-instance-name

After you restart prometheus, you should see the metrics of the node exporter target that we are scraping in prometheus' tsdb.
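
To confirm from the command line that the scrape is healthy, you can also query Prometheus' targets API (a quick sketch, assuming Prometheus listens on localhost:9090):

$ curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'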

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Install Concourse CI v7.4 on Ubuntu Linux

Concourse is a pipeline-based Continuous Integration system written in Go

What is Concourse CI:

Concourse CI is a Continuous Integration platform. Concourse enables you to construct pipelines with a yaml configuration, which consists of 3 core concepts: tasks, resources, and jobs that compose them. For more information about this have a look at their docs

What will we be doing today

We will setup a Concourse CI Server v7.4.0 (web and worker) on Ubuntu 20.04 and run the traditional Hello, World pipeline

Setup the Server:

Concourse needs a PostgreSQL server:

$ apt update && apt upgrade -y
$ apt install postgresql postgresql-contrib -y
$ systemctl enable postgresql

Create the Database and User for Concourse on Postgres:

$ sudo -u postgres createuser concourse
$ sudo -u postgres createdb --owner=concourse atc

Download the Concourse Binary:

$ export CONCOURSE_VERSION=7.4.0
$ wget https://github.com/concourse/concourse/releases/download/v${CONCOURSE_VERSION}/concourse-${CONCOURSE_VERSION}-linux-amd64.tgz
$ tar -xvf concourse-${CONCOURSE_VERSION}-linux-amd64.tgz -C /usr/local/
$ rm -rf concourse-*-linux-amd64.tgz

Create the Encryption Keys:

$ mkdir /etc/concourse
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/tsa_host_key -m pem
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/worker_key -m pem
$ ssh-keygen -t rsa -q -N '' -f /etc/concourse/session_signing_key -m pem
$ cp /etc/concourse/worker_key.pub /etc/concourse/authorized_worker_keys

Set the IP Address:

$ export IP_ADDRESS=$(ifconfig $(route -n | grep '0.0.0.0' | head -1 | rev | awk '{print $1}' | rev) | grep -w 'inet' | awk '{print $2}')

Concourse Web Process Configuration:

$ cat > /etc/concourse/web_environment << EOF
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/concourse/bin
CONCOURSE_ADD_LOCAL_USER=ruan:$(openssl rand -hex 14)
CONCOURSE_SESSION_SIGNING_KEY=/etc/concourse/session_signing_key
CONCOURSE_TSA_HOST_KEY=/etc/concourse/tsa_host_key
CONCOURSE_TSA_AUTHORIZED_KEYS=/etc/concourse/authorized_worker_keys
CONCOURSE_POSTGRES_HOST=127.0.0.1
CONCOURSE_POSTGRES_USER=concourse
CONCOURSE_POSTGRES_PASSWORD=concourse
CONCOURSE_POSTGRES_DATABASE=atc
CONCOURSE_MAIN_TEAM_LOCAL_USER=ruan
CONCOURSE_EXTERNAL_URL=http://$IP_ADDRESS:8080
EOF

Concourse Worker Process Configuration:

$ cat > /etc/concourse/worker_environment << EOF
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/concourse/bin
CONCOURSE_WORK_DIR=/var/lib/concourse
CONCOURSE_TSA_HOST=127.0.0.1:2222
CONCOURSE_TSA_PUBLIC_KEY=/etc/concourse/tsa_host_key.pub
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/etc/concourse/worker_key
CONCOURSE_GARDEN_DNS_SERVER=8.8.8.8
EOF

Create a Concourse user:

$ mkdir /var/lib/concourse
$ sudo adduser --system --group concourse
$ sudo chown -R concourse:concourse /etc/concourse /var/lib/concourse
$ sudo chmod 600 /etc/concourse/*_environment

Create SystemD Unit Files, first for the Web Service:

$ cat > /etc/systemd/system/concourse-web.service << EOF
[Unit]
Description=Concourse CI web process (ATC and TSA)
After=postgresql.service

[Service]
User=concourse
Restart=on-failure
EnvironmentFile=/etc/concourse/web_environment
ExecStart=/usr/local/concourse/bin/concourse web

[Install]
WantedBy=multi-user.target
EOF

Then the SystemD Unit File for the Worker Service:

$ cat > /etc/systemd/system/concourse-worker.service << EOF
[Unit]
Description=Concourse CI worker process
After=concourse-web.service

[Service]
User=root
Restart=on-failure
EnvironmentFile=/etc/concourse/worker_environment
ExecStart=/usr/local/concourse/bin/concourse worker

[Install]
WantedBy=multi-user.target
EOF

Create a postgres password for the concourse user:

$ cd /home/concourse/
$ sudo -u concourse psql atc
atc=> ALTER USER concourse WITH PASSWORD 'concourse';
atc=> \q
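
To verify that the concourse user can authenticate with the new password over TCP, which is how the web process will connect, here is a quick test (assuming the psql client is in your path):

$ PGPASSWORD=concourse psql -h 127.0.0.1 -U concourse -d atc -c 'SELECT 1;'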

Start and Enable the Services:

$ systemctl start concourse-web concourse-worker
$ systemctl enable concourse-web concourse-worker postgresql
$ systemctl status concourse-web concourse-worker

$ systemctl is-active concourse-worker concourse-web
active
active

The listening ports should more or less look like the following:

$ netstat -tulpn

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:7777          0.0.0.0:*               LISTEN      4530/concourse
tcp        0      0 127.0.0.1:7788          0.0.0.0:*               LISTEN      4530/concourse
tcp        0      0 127.0.0.1:8079          0.0.0.0:*               LISTEN      4525/concourse
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1283/sshd
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      4047/postgres
tcp6       0      0 :::36159                :::*                    LISTEN      4525/concourse
tcp6       0      0 :::46829                :::*                    LISTEN      4525/concourse
tcp6       0      0 :::2222                 :::*                    LISTEN      4525/concourse
tcp6       0      0 :::8080                 :::*                    LISTEN      4525/concourse
tcp6       0      0 :::22                   :::*                    LISTEN      1283/sshd
udp        0      0 0.0.0.0:68              0.0.0.0:*                           918/dhclient
udp        0      0 0.0.0.0:42165           0.0.0.0:*                           4530/concourse

You can check the logs like this:

$ sudo journalctl -fu concourse-web
$ sudo journalctl -fu concourse-worker

Make a request using the API:

$ curl http://${IP_ADDRESS}:8080/api/v1/info
{"version":"7.4.0","worker_version":"2.3","feature_flags":{"across_step":false,"build_rerun":false,"cache_streamed_volumes":false,"global_resources":false,"pipeline_instances":false,"redact_secrets":false,"resource_causality":false},"external_url":"http://x.x.x.x:8080"}

Client Side:

I will be using the Fly cli from a Mac, so first we need to download the fly-cli for Mac:

$ export CONCOURSE_VERSION=7.4.0
$ wget https://github.com/concourse/concourse/releases/download/v${CONCOURSE_VERSION}/fly-${CONCOURSE_VERSION}-darwin-amd64.tgz
$ tar -xvf fly-${CONCOURSE_VERSION}-darwin-amd64.tgz
$ sudo mv fly /usr/local/bin/fly
$ rm -rf fly-${CONCOURSE_VERSION}-darwin-amd64.tgz

Next, we need to setup our Concourse Target by authenticating against our Concourse endpoint. Let's setup our target with the name ci, and make sure to replace the ip address with the ip of your concourse server:

$ fly -t ci login -c http://${IP_ADDRESS}:8080
logging in to team 'main'

navigate to the following URL in your browser:

  http://${IP_ADDRESS}:8080/login?fly_port=42181

or enter token manually (input hidden):
target saved

Let's list our targets:

$ fly targets
name  url                        team  expiry
ci    http://x.x.x.x:8080        main  Wed, 08 Nov 2021 15:32:59 UTC

Listing Registered Workers:

$ fly -t ci workers
name              containers  platform  tags  team  state    version
x.x.x.x           0           linux     none  none  running  1.2

Listing Active Containers:

$ fly -t ci containers
handle                                worker            pipeline     job            build #  build id  type   name                  attempt

Hello World Pipeline:

Let’s create a basic pipeline, that will print out Hello, World!:

Our hello-world.yml:

jobs:
- name: my-job
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: alpine
          tag: edge
      run:
        path: /bin/sh
        args:
        - -c
        - |
          echo "============="
          echo "Hello, World!"
          echo "============="

Applying the configuration to our pipeline:

$ fly -t ci set-pipeline -p yeeehaa -c hello-world.yml
jobs:
  job my-job has been added:
    name: my-job
    plan:
    - task: say-hello
      config:
        platform: linux
        image_resource:
          type: docker-image
          source:
            repository: alpine
            tag: edge
        run:
          path: /bin/sh
          args:
          - -c
          - |
            echo "============="
            echo "Hello, World!"
            echo "============="

apply configuration? [yN]: y
pipeline created!
you can view your pipeline here: http://x.x.x.x:8080/teams/main/pipelines/yeeehaa

the pipeline is currently paused. to unpause, either:
  - run the unpause-pipeline command
  - click play next to the pipeline in the web ui

We can browse to the WebUI to unpause the pipeline, but since I like to do everything on the cli as far as possible, I will unpause the pipeline via the cli:

$ fly -t ci unpause-pipeline -p yeeehaa
unpaused 'yeeehaa'

Now our pipeline is unpaused, but since we did not specify any triggers, we need to manually trigger the pipeline to run. You can do this via the WebUI: select your pipeline, which in this case will be named yeeehaa, then select the job, which will be my-job, then hit the + sign, which will trigger the pipeline.

I will be using the cli:

$ fly -t ci trigger-job --job yeeehaa/my-job
started yeeehaa/my-job #1

Via the WebUI on http://x.x.x.x:8080/teams/main/pipelines/yeeehaa/jobs/my-job/builds/1 you should see the Hello, World! output, or via the cli, we also have the option to see the output, so let’s trigger it again, but this time passing the --watch flag:

$ fly -t ci trigger-job --job yeeehaa/my-job --watch
started yeeehaa/my-job #2

initializing
running /bin/sh -c echo "============="
echo "Hello, World!"
echo "============="

=============
Hello, World!
=============
succeeded

Listing our Workers and Containers again:

$ fly -t ci workers
name              containers  platform  tags  team  state    version
x.x.x.x            2           linux     none  none  running  1.2

$ fly -t ci containers
handle                                worker            pipeline     job         build #  build id  type   name           attempt
46282555-64cd-5h1b-67b8-316486h58eb8  x.x.x.x           yeeehaa      my-job      2        729       task   say-hello      n/a

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

A Tour With Vagrant and Virtualbox on Mac

Vagrant, yet another amazing product from Hashicorp.

Vagrant makes it really easy to provision virtual servers for local development (but is not limited to that), which they refer to as “boxes”, enabling developers to run their jobs/tasks/applications in a really easy and fast way. Vagrant utilizes a declarative configuration model, so you can describe which OS you want, bootstrap it with installation instructions as soon as it boots, etc.

What are we doing today?

When completing this tutorial, you will have Vagrant and Virtualbox installed on your Mac, and you should be able to launch an Ubuntu virtual server locally with Vagrant, using the Virtualbox provider which will be responsible for running our VMs.

We will also look at different configuration options to configure the VM and at bootstrapping software, using the shell, docker and ansible provisioners.

For this demonstration I am using macOS, but you can run this on Mac, Windows or Linux. First we will use Homebrew to install Virtualbox, then Vagrant, then we will provision an Ubuntu box. I will also show how to inject shell commands into your Vagrantfile so that you can provision software to your VM, and how to forward traffic to a web server from the host to the guest.

If you are looking for a Linux version instead of Mac, you can look at this post: Use Vagrant to Setup a Local Development Environment on Linux

Pre-Requisites

I will be installing Vagrant and Virtualbox with Homebrew. If you do not have homebrew installed, you can install it with:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once homebrew is installed, it’s a good thing to update the indexes:

$ brew update

Virtualbox

Install VirtualBox using homebrew:

$ brew install --cask virtualbox

Vagrant

Install Vagrant using homebrew:

$ brew install --cask vagrant

Install the virtualbox guest additions plugin for vagrant:

$ vagrant plugin install vagrant-vbguest

If you would like a vagrant manager utility to help you manage your vagrant boxes, you can install vagrant-manager using homebrew:

$ brew install --cask vagrant-manager

Create your first Vagrant Box

From app.vagrantup.com/boxes/search you can search for any box, such as ubuntu, centos, alpine etc., and for this demonstration I am going with ubuntu/focal64.

I am creating a new directory for my devbox:

$ mkdir devbox
$ cd devbox

Then initialize the Vagrantfile by running:

$ vagrant init ubuntu/focal64

A Vagrantfile has been created in the current working directory:

$ cat Vagrantfile | grep -v "#"

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
end

Boot the VM:

$ vagrant up

The box should now be in a started state, and we can verify that by running:

$ vagrant status
Current machine states:

default                   running (virtualbox)

We can now SSH to our VM by running:

$ vagrant ssh
vagrant@ubuntu-focal:~$

Installing Software with Vagrant

First let’s destroy the VM that we created:

$ vagrant destroy --force

Then edit the Vagrantfile and add the commands that we want to be executed when the VM boots, in our case, installing Nginx:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
     apt update
     apt install nginx -y
  SHELL
end

You will also notice that we are forwarding port 8080 from our host to port 80 on the VM, so that we can access the webserver on port 8080 from our laptop. Then boot the VM:

$ vagrant up

Once the VM has booted and installed our software, we should be able to access the index document served by Nginx on our VM:

$ curl -I http://localhost:8080/

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sat, 14 Aug 2021 18:11:59 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Sat, 14 Aug 2021 18:11:10 GMT
Connection: keep-alive
ETag: "6118073e-264"
Accept-Ranges: bytes

Shared Folders

Let’s say you want to map your local directory to your VM, in a scenario where you want to store your index.html on your laptop and map it to the VM, we can use config.vm.synced_folder.

On our laptop, create a html directory where we will store our index.html:

$ mkdir html

Now create the content in the index.html under the html directory:

$ echo "Hello, World" > html/index.html

Now we need to make vagrant aware of the folder that we are mapping to the VM, so we need to edit the Vagrantfile, which will now look like this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
     apt update
     apt install nginx -y
  SHELL
  config.vm.synced_folder "html", "/var/www/html"
end

To reload the VM with our changes, we use vagrant provision to update our VM when changes to provisioners are made, and vagrant reload when we have config changes such as config.vm.network. But to restart the VM and force provisioners to run, we can use the following:

Thanks @joshva_jebaraj

$ vagrant reload --provision

Once the VM is up, we can verify the changes:

$ curl http://localhost:8080/
Hello, World

Now we can edit our content locally which is synced to our VM.

Setting Hostname and Configure Memory

We can also configure the hostname of our VM and configure the amount of memory that we want to allocate to our VM using:

  • config.vm.hostname
  • vb.memory

An example of that will look like the following:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
     apt update
     apt install nginx -y
  SHELL
  config.vm.synced_folder "html", "/var/www/html"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"
  end
end

In this example our VM’s hostname is mydevbox and we assigned 1024MB of memory to our VM.

Provisioners: Shell

We can also run scripts from the local directory on our laptop against our VM, using the shell provisioner.

First we need to create the script on our local directory:

$ cat bootstrap.sh
#!/usr/bin/env bash
set -x
echo "my hostname is $(hostname)"

Then in our Vagrantfile we inform vagrant to execute the shell script:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.provision :shell, :path => "bootstrap.sh"
end

Since my VM is already running, I will be doing a reload:

$ vagrant reload --provision
...
==> default: Running provisioner: shell...
    default: Running: /var/folders/04/r10yvb8d5dgfvd167jz5z23w0000gn/T/vagrant-shell20210814-70233-1p9dump.sh
    default: ++ hostname
    default: my hostname is mydevbox
    default: + echo 'my hostname is mydevbox'

As you can see, the shell script from our local directory was executed on our VM; you can use this method to automate installations as well.

Provisioners: Docker

Vagrant offers a docker provisioner, and for this example we will be hosting a mysql server using a docker container in our VM.

Our Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.network "forwarded_port", guest: 3306, host: 3306
  config.vm.provision "docker" do |d|
    d.run "mysql", image: "mysql:8.0",
      args: "-p 3306:3306 -e MYSQL_ROOT_PASSWORD=password"
  end
end

Since I don’t have port 3306 listening locally, I have mapped port 3306 from my laptop to port 3306 on my VM, and I am using the mysql:8.0 container image from docker hub, passing the arguments which are specific to the container.

The convenient thing about the docker provisioner, is that it will install docker onto the VM for you.

Once the config has been set in your Vagrantfile do a reload:

$ vagrant reload --provision
...
    default: /vagrant => /Users/ruanbekker/workspace/vagrant/devbox
==> default: Running provisioner: docker...
    default: Installing Docker onto machine...
==> default: Starting Docker containers...
==> default: -- Container: mysql

From our laptop we should be able to communicate with our mysql server:

$ nc -vz localhost 3306
found 0 associations
found 1 connections:
     1:   flags=82<CONNECTED,PREFERRED>
  outif lo0
  src 127.0.0.1 port 58745
  dst 127.0.0.1 port 3306
  rank info not available
  TCP aux info available

Connection to localhost port 3306 [tcp/mysql] succeeded!

We can also SSH to our VM and verify if the container is running:

$ vagrant ssh

And then list the containers:

$  docker ps
CONTAINER ID   IMAGE       COMMAND                  CREATED         STATUS         PORTS                                                  NAMES
30a843a486ae   mysql:8.0   "docker-entrypoint.sh    2 minutes ago   Up 2 minutes   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp   mysql
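
From the laptop we can also test an actual login through the forwarded port, a small sketch assuming you have the mysql client installed locally, using the MYSQL_ROOT_PASSWORD value from the Vagrantfile:

$ mysql -h 127.0.0.1 -P 3306 -u root -ppassword -e 'SELECT VERSION();'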

Provisioners: Ansible

We can also execute Ansible playbooks on our VM using the Ansible Provisioner.

Something to note is that we use the ansible provisioner to execute the playbook from the host, and ansible_local to execute the playbook on the VM itself.

First we will create our project structure for ansible, so that we have the following in place:

.
Vagrantfile
provisioning/playbook.yml
provisioning/group_vars/all

Create the provisioning directory:

$ mkdir provisioning

Then the content for our provisioning/playbook.yml playbook:

---
- hosts: all
  become: yes
  tasks:
    - name: ensure ntpd is at the latest version
      apt:
        pkg: ntp
        state: "{{ desired_state }}"
      notify:
      - restart ntpd
  handlers:
    - name: restart ntpd
      service:
        name: ntp
        state: restarted

Our provisioning/group_vars/all file that will contain the variables for the all group:

desired_state: "latest"

In our Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.hostname = "mydevbox"
  config.vm.provision :ansible do |ansible|
    ansible.playbook = "provisioning/playbook.yml"
  end
end

When using ansible with vagrant, the inventory is auto-generated when an inventory is not specified. Vagrant will store the inventory on the host at .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
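
You can inspect the generated inventory to see how vagrant addresses the VM:

$ cat .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory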

To execute playbooks with ansible, we need ansible installed on our host machine, for this demonstration I will be using virtualenv and then install ansible using pip:

$ python3 -m pip install virtualenv
$ virtualenv -p $(which python3) .venv
$ source .venv/bin/activate
$ pip install ansible

Now that we have ansible installed, reload the VM to execute the playbook on our VM:

$ vagrant reload --provision
...
==> default: Running provisioner: ansible...
    default: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [ensure ntpd is at the latest version] ************************************
ok: [default]

PLAY RECAP *********************************************************************
default                    : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Pretty neat right?

Tear Down

To destroy the VM:

$ vagrant destroy --force

Resources

For more information on vagrant and its provisioners, check out their documentation at https://www.vagrantup.com/docs.

I also have a couple of example Vagrantfiles available on my github repository.

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

How to Specify Wallet Name in Bitcoin Core Walletnotify

With bitcoin-core, you get a configuration option called walletnotify, which allows you to invoke a command whenever you receive a payment, a payment gets its first confirmation, or you send a payment.

You can specify %s as an argument, which will be replaced with the transaction id.

Bitcoind WalletNotify TransactionID Example

To see what walletnotify does, in my bitcoin.conf I had a basic script to write an entry every time I receive a payment:

$ cat ~/.bitcoin/bitcoin.conf
...
walletnotify=/bin/notify.sh %s

And in my /bin/notify.sh script I have this:

#!/usr/bin/env bash
transaction_id=$1

# writing to log
echo "[$(date +%FT%T)] event for txid $transaction_id" >> /var/log/bitcoin-notify.log

Ensure the script has executable permissions:

$ chmod +x /bin/notify.sh

When a payment was made, my logfile showed the following:

[2021-08-04T12:21:43] event for txid xxxxxx5d92f729ed77xxxxxx2cbccedxxxxa7a03a801xxxxxxx33a41c1xxxxxd2 

Capturing the wallet name in walletnotify

In bitcoin-core we have wallets, and in a wallet we have one or more bitcoin addresses, as can be seen below for wallets:

$ curl -s -u "bitcoin:${bpass}" -d '{"jsonrpc": "1.0", "id": "curl", "method": "listwallets", "params": []}' -H 'content-type: text/plain;' http://127.0.0.1:18332/
{"result":["rpi01-main", "rpi01-secondary"],"error":null,"id":"curl"}

and to get the addresses for that wallet:

$ curl -s -u "bitcoin:${bpass}" -d '{"jsonrpc": "1.0", "id": "curl", "method": "getaddressesbylabel", "params": [""]}' -H 'content-type: text/plain;' http://127.0.0.1:18332/wallet/rpi01-main
{"result":{"txxxxxmefmcpq98xxxxxxx80gvug2fe97xxxxxx8yv":{"purpose":"receive"}},"error":null,"id":"curl"}

I had to figure out how to capture the wallet name as well as the transaction id. I thought it was not possible, until I stumbled upon a post which mentioned that from bitcoind 0.20:

The -walletnotify configuration parameter will now replace any %w in its argument with the name of the wallet generating the notification.

Which was merged by this PR: https://github.com/bitcoin/bitcoin/pull/13339

So first, verify that bitcoind is newer than the version mentioned:

$ /usr/local/bin/bitcoind -version
Bitcoin Core version v0.21.1

Updated the walletnotify config in bitcoin.conf to include %w:

$ cat /home/bitcoin/.bitcoin/bitcoin.conf | grep wallet
walletnotify=/bin/notify.sh %s %w

Then in the notify.sh script:

#!/usr/bin/env bash
transaction_id=$1
wallet_name=$2

echo "[$(date +%FT%T)] $transaction_id $wallet_name" >> /var/log/bitcoin-notify.log

And then restart bitcoind:

$ sudo systemctl restart bitcoind

When a transaction occurred, I could see the transaction id with the corresponding wallet name:

$ tail -f /var/log/bitcoin-notify.log
[2021-08-04T12:31:20] fxxxxxxxxxxxxxxxxxxxxxxx2cbcced28ea26fhkxxxxhjn01f33a41c12f8xxx8 rpi01-main

Thanks

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

AWS EC2 Linux - Warning: Setlocale: LC_CTYPE: Cannot Change Locale UTF-8

On Amazon Linux EC2 instances, I noticed the following error when I SSH onto them:

-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

To resolve this, add the following to the /etc/environment file:

$ cat /etc/environment
LANG=en_US.utf-8
LC_ALL=en_US.utf-8

Log out and log back in, and it should be resolved.

Task Runner With YAML Config Written in Go

Task (aka Taskfile) is a task runner written in Go, which is similar to GNU Make, but in my opinion it is a lot easier to use as you specify your tasks in yaml.

What to expect

In this post we will go through a quick demonstration using Task, how to install Task, as well as a couple of basic examples to get you up and running with Task.

Install

For mac, install task:

$ brew install go-task/tap/go-task

For linux, install task:

$ sh -c "$(curl --location https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin

Or do a manual installation, for arm as an example:

$ pushd /tmp
$ wget https://github.com/go-task/task/releases/download/v3.7.0/task_linux_arm.tar.gz
$ tar -xvf task_linux_arm.tar.gz
$ sudo mv task /usr/local/bin/task
$ sudo chmod +x /usr/local/bin/task
$ popd

Verify that task is installed:

$ task --version
Task version: v3.7.0

For more information check the installation page: https://taskfile.dev/#/installation

Usage

Task uses a default config file, Taskfile.yml, in the current working directory, where you provide context on what your tasks should do.

To generate a Taskfile.yml with example config, task gives us an --init flag to generate a sample.

For a basic hello-world example, our task helloworld will echo out hello, world!. To generate the sample code, run:

$ task --init

Then update the config to the following:

version: '3'

tasks:
  helloworld:
    desc: prints out hello world message
    cmds:
      - echo "hello, world!"

To demonstrate what the config means:

  • tasks: refers to the list of tasks
  • helloworld: is the task name
  • desc: describes the task, useful for listing tasks
  • cmds: the commands that the task will execute

To list all our tasks for our taskfile:

$ task --list
task: Available tasks for this project:
* helloworld:     prints out hello world message

We call it using the task application with the task name as the argument:

$ task helloworld
task: [helloworld] echo "hello, world!"
hello, world!

We can also reduce the output verbosity using silent:

version: '3'

tasks:
  helloworld:
    desc: prints out hello world message
    cmds:
      - echo "hello, world!"
    silent: true

Which will result in:

$ task helloworld
hello, world!

For an example using environment variables, we can use them in two ways:

  • per task
  • globally, across all tasks

For using environment variables per task:

version: '3'

tasks:
  helloworld:
    cmds:
      - echo "hello, $WORD!"
    env:
      WORD: world

Results in:

$ task helloworld
task: [helloworld] echo "hello, $WORD!"
hello, world!

For using environment variables globally across all tasks:

version: '3'

env:
  WORD: world

tasks:
  helloworld:
    cmds:
      - echo "hello, $WORD!"
    env:
      GREETING: hello

  byeworld:
    cmds:
      - echo "$GREETING, $WORD!"
    env:
      GREETING: bye

Running our first task:

$ task helloworld
task: [helloworld] echo "hello, $WORD!"
hello, world!

And running our second task:

$ task byeworld
task: [byeworld] echo "$GREETING, $WORD!"
bye, world!

To store your environment variables in a .env file, you can specify it as the following in your Taskfile.yml:

version: '3'

dotenv: ['.env']

tasks:
  helloworld:
    cmds:
      - echo "hello, $WORD!"
    env:
      GREETING: hello

  byeworld:
    cmds:
      - echo "$GREETING, $WORD!"
    env:
      GREETING: bye

And in your .env:

WORD=world

Then you should see your environment variables referenced from the .env file:

$ task helloworld
task: [helloworld] echo "hello, $WORD!"
hello, world!

We can also reference config using vars:

version: '3'

vars:
  GREETING: Hello, World!

tasks:
  default:
    desc: prints out a message
    cmds:
      - echo "{{.GREETING}}"

In this case our task name is default, therefore we can just run task without any arguments, as default will be the default task:

$ task
task: [default] echo "Hello, World!"
Hello, World!

To run both tasks with one command, you can specify dependencies, so if we define a task with zero commands but just dependencies, it will call those tasks and execute them:

version: '3'

env:
  WORD: world

tasks:
  helloworld:
    cmds:
      - echo "hello, $WORD!"
    env:
      GREETING: hello

  byeworld:
    cmds:
      - echo "$GREETING, $WORD!"
    env:
      GREETING: bye

  all:
    deps: [helloworld, byeworld]

So when we run the all task:

$ task all
task: [helloworld] echo "hello, $WORD!"
hello, world!
task: [byeworld] echo "$GREETING, $WORD!"
bye, world!

For more usage examples, have a look at their documentation: https://taskfile.dev/#/usage

Thanks

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Basic Logging With Python

I’m trying to force myself to move away from using the print() function, as I’m pretty much using print all the time to cater for logging, and to use the logging package instead.

This is a basic example of using logging in a basic python app:

import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s %(message)s",
    handlers=[
        logging.StreamHandler()
    ]
)

messagestring = {'info': 'info message', 'warn': 'this is a warning', 'err': 'this is a error'}

logger = logging.getLogger('thisapp')
logger.info('message: {}'.format(messagestring['info']))
logger.warning('message: {}'.format(messagestring['warn']))
logger.error('message: {}'.format(messagestring['err']))

When running this example, this is the output that you will see:

$ python app.py
2021-07-19 13:07:43,647 [INFO] thisapp message: info message
2021-07-19 13:07:43,647 [WARNING] thisapp message: this is a warning
2021-07-19 13:07:43,647 [ERROR] thisapp message: this is a error
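
Note that logging.StreamHandler writes to standard error by default, so if you want to capture this output in a file from the shell, redirect stderr (a small sketch):

$ python app.py 2> app.log
$ cat app.log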

For more info on this package, see its documentation: https://docs.python.org/3/library/logging.html

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Difference With ECS Task and Execution IAM Roles on AWS

In this post we will look at the difference between the AWS ECS Task Execution IAM Role and the IAM Role for Tasks, and give an example policy to demonstrate.

ECS Task Execution Role

The ECS Execution Role is used by the ecs-agent which runs on ECS and is responsible for:

  • Pulling down docker images from ECR
  • Fetching the SSM Parameters from SSM for your Task (Secrets and LogConfigurations)
  • Writing Logs to CloudWatch

The IAM Role has been configured so that the trusted identity is ecs-tasks, meaning only ECS tasks are allowed to assume credentials from the IAM Policy that is associated to the Role.

The trusted identity in the IAM Role is set to ecs-tasks:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

and for an example service the policy will look more or less like this; I am demonstrating my-dev-service:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SSMGetParameters",
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter"
            ],
            "Resource": "arn:aws:ssm:eu-west-1:*:parameter/my-service/dev/*"
        },
        {
            "Sid": "KMSDecryptParametersWithKey",
            "Effect": "Allow",
            "Action": [
                "kms:GetPublicKey",
                "kms:Decrypt",
                "kms:GenerateDataKey",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        }
    ]
}

In the ECS Task Definition the role arn is specified as "executionRoleArn" in:

{
  "family": "my-dev-service",
  "executionRoleArn":"arn:aws:iam::000000000000:role/ecs-exec-role",
  "taskRoleArn":"arn:aws:iam::000000000000:role/ecs-task-role",
  "containerDefinitions": []
}

ECS Task Role

The ECS Task Role is used by the service that is deployed to ECS, so this will be your application requiring access to SQS, as an example.

Same as before, we set the trusted identity in the IAM Role to be ecs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

So only the ECS tasks using the role are allowed to assume credentials from the IAM Role, and the policy associated to the role can look something like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDevSQS",
            "Effect": "Allow",
            "Action": [
                "sqs:GetQueueUrl",
                "sqs:ReceiveMessage",
                "sqs:SendMessage",
                "sqs:ChangeMessageVisibility"
            ],
            "Resource": [
                "arn:aws:sqs:eu-west-1:000000000000:dev-pending-queue",
                "arn:aws:sqs:eu-west-1:000000000000:dev-confirmed-queue"
            ]
        }
    ]
}

The role arn will be specified in "taskRoleArn" in the ECS Task Definition:

{
  "family": "my-dev-service",
  "executionRoleArn":"arn:aws:iam::000000000000:role/ecs-exec-role",
  "taskRoleArn":"arn:aws:iam::000000000000:role/ecs-task-role",
  "containerDefinitions": []
}

Application Code

In your application you don’t need to reference any aws access keys, as the SDK will assume credentials for you via the role; with python a short example will be:

import boto3
sqs = boto3.Session(region_name='eu-west-1').client('sqs')
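
If you want to see the task role in action from inside a running container, the SDK fetches its temporary credentials from the ECS credentials endpoint, using the relative URI that the ECS agent injects as an environment variable:

$ curl -s 169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}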

Thanks

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Install Java 11 and Maven on Ubuntu Linux

In this short tutorial I will show you how to prepare your environment for Java 11 and Maven on Ubuntu Linux.

Install

Update your package manager and install OpenJDK 11:

sudo apt update
sudo apt install openjdk-11-jdk -y

Verify that Java is installed:

$ java -version
openjdk version "11.0.11" 2021-04-20
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.20.04, mixed mode, sharing)

Once Java is installed, we can install Maven, first switch to the root user:

$ sudo su

I will be using maven version 3.6.3, so adjust accordingly:

$ MAVEN_HOME="/opt/maven"
$ MAVEN_VERSION=3.6.3
$ MAVEN_CONFIG_HOME="/root/.m2"

Create the directories, then download maven and extract:

$ mkdir -p $MAVEN_HOME
$ curl -LSso /var/tmp/apache-maven-$MAVEN_VERSION-bin.tar.gz https://apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz
$ tar xzvf /var/tmp/apache-maven-$MAVEN_VERSION-bin.tar.gz -C $MAVEN_HOME --strip-components=1
$ rm /var/tmp/apache-maven-$MAVEN_VERSION-bin.tar.gz
$ update-alternatives --install /usr/bin/mvn mvn /opt/maven/bin/mvn 10000
$ mkdir -p $MAVEN_CONFIG_HOME

Set the environment variables for maven:

$ cat /etc/profile.d/custom.sh
#!/bin/bash
export MAVEN_HOME="/opt/maven"
export MAVEN_VERSION=3.6.3
export MAVEN_CONFIG_HOME="/root/.m2"

Then make the file executable:

$ chmod +x /etc/profile.d/custom.sh
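
To pick up the new environment variables in your current shell without logging out and back in, source the file:

$ source /etc/profile.d/custom.sh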

Verify that maven is installed:

$ mvn -version
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /opt/maven
Java version: 11.0.11, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64
Default locale: en, platform encoding: UTF-8
OS name: "linux", version: "5.4.0-1041-aws", arch: "amd64", family: "unix"

Thank You

Thanks for reading, if you like my content, check out my website or follow me at @ruanbekker on Twitter.

Setup a Crypto Digibyte Full Node on Linux

In this tutorial I will show you how to setup a digibyte (DGB) Full Node on Linux and show you how to interact with your wallet and the blockchain.

What is a Full Node

By running a Full Node, you contribute by helping to fully validate transactions and blocks. Almost all full nodes also help the network by accepting transactions and blocks from other full nodes, validating those transactions and blocks and then relaying them to other full nodes. Therefore you are contributing to maintaining the consensus of the blockchain.

Hardware Requirements

In order to run a full digibyte node you will need a server that is preferably online 24/7 and that has an uncapped connection, as at the time of writing the digibyte blockchain is about 25GB in size, but it increases over time. I also used a server with 2 vCPUs and 4GB of memory.

Setup the Pre-Requisites

First create the user:

$ useradd -G sudo digibyte -m -s /bin/bash
$ echo "digibyte ALL=(ALL:ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/no-sudo-password-for-digibyte

Create the configuration directory:

$ mkdir -p /etc/digibyte /var/lib/digibyte

Create the digibyte configuration file:

$ cat <<EOF > /etc/digibyte/digibyte.conf
daemon=1
maxconnections=300
disablewallet=0
rpcuser=jsonrpc
rpcpassword=$(openssl rand -base64 18)
EOF

Download the Software

Get the latest release; at the time of writing v7.17.2 is the latest:

$ wget https://github.com/DigiByte-Core/digibyte/releases/download/v7.17.2/digibyte-7.17.2-x86_64-linux-gnu.tar.gz
$ tar -xf digibyte-7.17.2-x86_64-linux-gnu.tar.gz
$ mv digibyte-7.17.2 /usr/local/digibyte-7.17.2

Then symbolic link the version directory to digibyte:

$ ln -s /usr/local/digibyte-7.17.2 /usr/local/digibyte
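
As a quick sanity check that the binary runs (assuming digibyted supports the same -version flag as bitcoind, since it is a bitcoin fork):

$ /usr/local/digibyte/bin/digibyted -version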

SystemD

Create the systemd unit file:

$ cat <<EOF > /etc/systemd/system/digibyted.service
[Unit]
Description=DigiByte's distributed currency daemon
After=network.target

[Service]
User=digibyte
Group=digibyte

Type=forking
PIDFile=/etc/digibyte/digibyted.pid
ExecStart=/usr/local/digibyte/bin/digibyted -daemon -pid=/etc/digibyte/digibyted.pid \
  -conf=/etc/digibyte/digibyte.conf -datadir=/var/lib/digibyte -deprecatedrpc=accounts 

Restart=always
PrivateTmp=true
TimeoutStopSec=60s
TimeoutStartSec=2s
StartLimitInterval=120s
StartLimitBurst=5

[Install]
WantedBy=multi-user.target
EOF

Change the ownerships to digibyte:

$ chown -R digibyte:digibyte /etc/digibyte /var/lib/digibyte

Enable and start the service:

$ systemctl enable digibyted.service
$ systemctl start digibyted.service

Check the log:

$ tail -f /var/lib/digibyte/debug.log

Interact with the Node
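
The examples below reference $PASSWORD; since we generated the rpcpassword with openssl earlier, a small sketch to read it back from the config file into a variable:

$ PASSWORD=$(grep rpcpassword /etc/digibyte/digibyte.conf | cut -d'=' -f2)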

Check the uptime:

$ curl -XPOST -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id": "curl", "method": "uptime", "params": []}'

Check the wallet address:

$ curl -XPOST -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id": "curl", "method": "getaccountaddress", "params": []}'
{"result":"D7ZznMe4NyEkXd6zA6MB3GYXiAURo64hNs","error":null,"id":"curl"}

Get the account balance:

$ curl -XPOST -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id": "curl", "method": "getbalance", "params": []}'
{"result":0.00000000,"error":null,"id":"curl"}

Using the digibyte-cli:

$ /usr/local/digibyte/bin/digibyte-cli -getinfo
{
  "version": 7170200,
  "protocolversion": 70017,
  "walletversion": 169900,
  "balance": 0.00000000,
  "blocks": 183019,
  "timeoffset": 0,
  "connections": 8,
  "proxy": "",
  "difficulty": null,
  "testnet": false,
  "keypoololdest": 1619558662,
  "keypoolsize": 1000,
  "paytxfee": 0.00000000,
  "relayfee": 0.00001000,
  "warnings": ""
}

Making a Transaction to my Wallet

Let’s make a transaction to my wallet node from a crypto currency exchange where I have digibyte, so first get the wallet address where we would like to deposit the crypto currency:

$ curl -XPOST -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id": "curl", "method": "getaccountaddress", "params": []}'
{"result":"D7ZznMe4NyEkXd6zA6MB3GYXiAURo64hNs","error":null,"id":"curl"}

From an exchange where you have DGB, withdraw to the address D7ZznMe4NyEkXd6zA6MB3GYXiAURo64hNs, which is your wallet on the node (ensure you have enough to cover the transaction fee).

Once the transaction has enough confirmations, have a look at your wallet balance, and the 5 DGB that I sent to my wallet can be seen:

$ curl -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id": "curl", "method": "getbalance", "params": [""]}'
{"result":5.00000000,"error":null,"id":"curl"}

I’ve setup a software wallet on my pc, and in the wallet I selected receive and copied my DGB software wallet address; now I would like to transfer my funds from my node wallet to my software wallet:

$ curl -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id":"curl", "method": "sendtoaddress", "params": ["DTqHG9KA3oQAywq18gpBknxHXHZviyYdvS", 5.0, "donation", "happy bday"] }'
{"result":null,"error":{"code":-4,"message":"Error: This transaction requires a transaction fee of at least 0.0004324"},"id":"curl"}

As you can see I don’t have enough in my node's wallet to make the transaction, therefore I need to take the transaction cost into consideration:

$ python3 -c 'print(5.0-0.0004324)'
4.9995676

So let’s send 4.998:

$ curl -H 'Content-Type: application/json' -u "jsonrpc:$PASSWORD" http://localhost:14022 -d '{"jsonrpc": "1.0", "id":"curl", "method": "sendtoaddress", "params": ["DTqHG9KA3oQAywq18gpBknxHXHZviyYdvS", 4.998, "donation", "happy bday"] }'
{"result":"260e49b72f17f42f5a6c858e5403e23b5382000650997292e7e79f1535f5c4d0","error":null,"id":"curl"}

As you can see we get back a transaction id which we can use later to check up on the transaction. A couple of seconds later I received a notification on my software wallet that my funds were received.

First, using our software wallet's address, we can look it up on the explorer: https://digiexplorer.info/address/DTqHG9KA3oQAywq18gpBknxHXHZviyYdvS

We can also look up the transaction id: https://digiexplorer.info/tx/260e49b72f17f42f5a6c858e5403e23b5382000650997292e7e79f1535f5c4d0

Resources

RPC Docs:

  • https://developer.bitcoin.org/reference/rpc/index.html
  • https://chainquery.com/bitcoin-cli

Digibyte Config:

  • https://github.com/digibyte/digibyte/blob/master/contrib/debian/examples/digibyte.conf

REST Config:

  • https://github.com/digibyte/digibyte/blob/master/doc/REST-interface.md

Resources:

  • https://digibytewallets.com/