In this tutorial I will demonstrate how to run Loki v2.0.0 behind an Nginx reverse proxy with HTTP basic authentication enabled on Nginx, and how to configure Nginx for websockets, which is required when you want to use tail in logcli via Nginx.
Assumptions
My environment consists of an AWS Application Load Balancer with a Host entry and a Target Group associated with port 80 of my Nginx/Loki EC2 instance.
Health checks against my EC2 instance are performed on instance:80/ready.
I have an S3 bucket and a DynamoDB table already running in my account which Loki will use. But NOTE that boltdb-shipper is production ready as of v2.0.0, which is awesome, because now you only require an object store such as S3, so you don't need DynamoDB.
More information on this topic can be found in their changelog.
What you can expect from this blog post
We will go through the following topics:
Install Loki v2.0.0 and Nginx
Configure HTTP Basic Authentication to Loki’s API Endpoints
Bypass HTTP Basic Authentication to the /ready endpoint for our Load Balancer to perform healthchecks
Enable Nginx to upgrade websocket connections so that we can use logcli --tail
Test out access to Loki via our Nginx Reverse Proxy
Install and use LogCLI
Install Software
First we will install nginx and apache2-utils. In my use case I will be using Ubuntu 20.04 as my operating system:
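A minimal sketch of the install step, assuming Ubuntu 20.04 with apt:

$ sudo apt update
$ sudo apt install -y nginx apache2-utils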
Next we will install Loki v2.0.0. If you are upgrading from a previous version of Loki, I would recommend checking out the upgrade guide mentioned on their releases page.
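If you are installing from scratch, a sketch of fetching the v2.0.0 binary, assuming the linux-amd64 release asset from GitHub:

$ curl -fL -o /tmp/loki.zip https://github.com/grafana/loki/releases/download/v2.0.0/loki-linux-amd64.zip
$ unzip /tmp/loki.zip -d /tmp
$ sudo mv /tmp/loki-linux-amd64 /usr/local/bin/loki
$ sudo chmod +x /usr/local/bin/loki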
As you've noticed, we are providing an auth_basic_user_file at /etc/nginx/passwords, so let's create a user that we will be using to authenticate against Loki:
$ htpasswd -c /etc/nginx/passwords lokiisamazing
Enable and Start Services
Because we created a systemd unit file, we need to reload the systemd daemon:
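A sketch of those steps, assuming our Loki unit file is named loki.service and that nginx shipped with its own unit:

$ sudo systemctl daemon-reload
$ sudo systemctl enable loki nginx
$ sudo systemctl restart loki nginx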
You will notice that I have a /ready endpoint that I am proxy passing to Loki, which bypasses authentication; this has been set up so that my AWS Application Load Balancer's Target Group can perform health checks against it.
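A minimal sketch of the relevant Nginx server block, assuming Loki listens on its default localhost:3100 and the config lives in /etc/nginx/sites-enabled/loki.conf (both assumptions); note auth_basic off for /ready and the websocket upgrade headers for everything else:

$ cat /etc/nginx/sites-enabled/loki.conf
server {
    listen 80;

    auth_basic "loki";
    auth_basic_user_file /etc/nginx/passwords;

    # the load balancer health checks bypass authentication
    location /ready {
        proxy_pass http://localhost:3100;
        auth_basic off;
    }

    # everything else requires basic auth and supports websocket upgrades
    location / {
        proxy_pass http://localhost:3100;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}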
We can verify if we are getting a 200 response code without passing authentication:
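For example, assuming a hypothetical hostname loki.mydomain.com pointing at the load balancer:

$ curl -s -o /dev/null -w "%{http_code}\n" http://loki.mydomain.com/ready
200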
So let's access the labels API endpoint by passing our basic auth credentials. To avoid leaking passwords into your shell history, create a file and save your password in it:
$ vim /tmp/.pass
-> then enter your password and save the file <-
Expose the content as an environment variable:
$ pass=$(cat /tmp/.pass)
Now make a request to Loki’s labels endpoint by passing authentication:
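A sketch of that request, assuming the hypothetical hostname loki.mydomain.com and the lokiisamazing user we created earlier; the labels in the response will depend on what you have pushed to Loki:

$ curl -u "lokiisamazing:${pass}" http://loki.mydomain.com/loki/api/v1/labels
{"status":"success","data":["__name__","job"]}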
And unset your pass environment variable, to clean up your tracks:
$ unset pass
LogCLI
Now for my favorite part: using logcli to interact with Loki, and more specifically using --tail, as it requires websockets, which nginx will now be able to upgrade:
Install logcli. In my case I am using a Mac, so I will be using the darwin build:
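A sketch of the install, assuming the darwin-amd64 release asset, followed by a tail example (the job="varlogs" selector is just a placeholder label):

$ curl -fL -o /tmp/logcli.zip https://github.com/grafana/loki/releases/download/v2.0.0/logcli-darwin-amd64.zip
$ unzip /tmp/logcli.zip -d /tmp
$ sudo mv /tmp/logcli-darwin-amd64 /usr/local/bin/logcli

$ export LOKI_ADDR=http://loki.mydomain.com
$ export LOKI_USERNAME=lokiisamazing
$ export LOKI_PASSWORD="${pass}"
$ logcli query --tail '{job="varlogs"}'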
In this post I will demonstrate how you can use ansible to automate the task of adding one or more ssh public keys to multiple servers' authorized_keys files.
This will be focused on a scenario where we have 5 new ssh public keys that we want to copy to our bastion host's authorized_keys files.
The User Accounts
We have our bastion server named bastion.mydomain.com where we would like to create the following accounts: john, bob, sarah, sam, adam, and also upload their personal ssh public keys to those accounts so that they can log on with their ssh private keys.
In my local directory, I have their ssh public keys as:
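A hypothetical listing, assuming the keys live in the current working directory:

$ ls *.pub
adam.pub bob.pub john.pub sam.pub sarah.pub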
They will be referenced in our playbook as key: "{{ lookup('file', item + '.pub') }}", but if they were on GitHub we could reference them as key: https://github.com/{{ item }}.keys; more info on that can be found in the authorized_key module documentation.
The Target Server
Our inventory for the target server only includes one host, but we can add as many as we want. Our inventory will look like this:
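A sketch of such an inventory, assuming we connect as a user that can sudo (the ansible_user and key path are assumptions):

$ cat inventory.ini
[bastion]
bastion.mydomain.com ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/id_rsa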
In this playbook, we reference the users that we want to create; it will loop through those users, creating them on the target server, and also use those names to match the ssh public key files on our laptop:
$ cat playbook.yml
---
- hosts: bastion
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: create local user account on the target server
      user:
        name: "{{ item }}"
        comment: "{{ item }}"
        shell: /bin/bash
        append: yes
        groups: sudo
        generate_ssh_key: yes
        ssh_key_type: rsa
      with_items:
        - john
        - bob
        - sarah
        - sam
        - adam
    - name: upload ssh public key to users authorized keys file
      authorized_key:
        user: "{{ item }}"
        state: present
        manage_dir: yes
        key: "{{ lookup('file', item + '.pub') }}"
      with_items:
        - john
        - bob
        - sarah
        - sam
        - adam
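Then run the playbook against our inventory (file names assumed from above):

$ ansible-playbook -i inventory.ini playbook.yml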
There's a utility called sshuttle which allows you to VPN over an SSH connection, which is really handy when you quickly want to be able to reach a private range that is accessible from a publicly reachable server such as a bastion host.
In this tutorial, I will demonstrate how to install sshuttle on a Mac (if you are using a different OS you can see their documentation), and then we will use the VPN connection to reach a "prod" and a "dev" environment.
SSH Config
We will declare 2 jump-boxes / bastion hosts in our ssh config:
dev-jump-host is a public server that has network access to our private endpoints in 172.31.0.0/16
prod-jump-host is a public server that has network access to our private endpoints in 172.31.0.0/16
In this case, the above example is 2 AWS accounts with the same CIDRs, and I wanted to demonstrate sshuttle for this reason, as if we had different CIDRs we could set up a dedicated VPN and route them respectively.
$ cat ~/.ssh/config
Host *
    Port 22
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    ServerAliveInterval 60
    ServerAliveCountMax 30

Host dev-jump-host
    HostName dev-bastion.mydomain.com
    User bastion
    IdentityFile ~/.ssh/id_rsa

Host prod-jump-host
    HostName prod-bastion.mydomain.com
    User bastion
    IdentityFile ~/.ssh/id_rsa
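The vpn_dev and vpn_prod commands referenced below aren't shown above; a sketch of defining them as shell aliases, assuming sshuttle should route the 172.31.0.0/16 range through each jump host:

$ cat ~/.bash_aliases
alias vpn_dev='sshuttle -r dev-jump-host 172.31.0.0/16'
alias vpn_prod='sshuttle -r prod-jump-host 172.31.0.0/16'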
Then you should be able to use vpn_dev and vpn_prod from your terminal:
$ vpn_prod
[local sudo] Password:
Warning: Permanently added 'xx,xx' (ECDSA) to the list of known hosts.
client: Connected.
And in a new terminal we can connect to a RDS MySQL Database sitting in a private network:
$ mysql -h my-prod-db.pvt.mydomain.com -u dbadmin -p$pass
mysql>
Sshuttle as a Service
You can create a systemd unit file to run a sshuttle VPN as a service. In this scenario I provided 2 different VPN routes, dev and prod, so you can create 2 separate systemd unit files, but in my case I will only create one for prod:
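A minimal sketch of such a unit, assuming sshuttle is installed at /usr/local/bin/sshuttle and that root can ssh to prod-jump-host (both assumptions):

$ cat /etc/systemd/system/vpn-prod.service
[Unit]
Description=sshuttle vpn to the prod environment
After=network-online.target

[Service]
ExecStart=/usr/local/bin/sshuttle -r prod-jump-host 172.31.0.0/16
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now vpn-prod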
In this post we will demonstrate how to use a SSH Bastion or Jump Host with Ansible to reach the target server.
In some scenarios, the target server might be in a private range which is only accessible via a bastion host, and the same applies to ansible, as ansible uses SSH to reach the target servers.
SSH Config
Our bastion host is configured as bastion and the config under ~/.ssh/config looks like this:
Host *
    Port 22
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    ServerAliveInterval 60
    ServerAliveCountMax 30

Host bastion
    HostName bastion.mydomain.com
    User bastion
    IdentityFile ~/.ssh/id_rsa
To verify that our config is working, you should be able to use:
$ ssh bastion
Using a Bastion with Ansible
In order to reach our target server we need to use the bastion, so to test the SSH connection we can use this SSH one-liner. Our target server has an IP address of 172.31.81.94 and expects us to provide an ansible.pem private key, and we need to authenticate with the ubuntu user:
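A sketch of that one-liner, using ProxyJump through the bastion host from our ssh config:

$ ssh -o ProxyJump=bastion -i ~/.ssh/ansible.pem ubuntu@172.31.81.94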
Our inventory.ini includes the hosts that we will be using; in this case I will be defining a group named rpifleet with all the hosts nested under that group, and I'm using the user pi and my private ssh key ~/.ssh/id_rsa:
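A sketch of that inventory, with hypothetical host IPs, routing all hosts in the group through the bastion:

$ cat inventory.ini
[rpifleet]
rpi-01 ansible_host=172.31.0.10
rpi-02 ansible_host=172.31.0.11

[rpifleet:vars]
ansible_user=pi
ansible_ssh_private_key_file=~/.ssh/id_rsa
ansible_ssh_common_args='-o ProxyJump=bastion'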
In this post we will use the libvirt provider with Terraform to deploy a KVM Virtual Machine on a Remote KVM Host using SSH and use Ansible to deploy Nginx on our VM.
In my previous post I demonstrated how I provisioned my KVM Host and created a dedicated user for Terraform to authenticate to our KVM host to provision VMs.
Once you have KVM installed and your SSH access is sorted, we can start by installing our dependencies.
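A sketch of installing Terraform itself, assuming version 0.13.5 on linux-amd64:

$ wget https://releases.hashicorp.com/terraform/0.13.5/terraform_0.13.5_linux_amd64.zip
$ unzip terraform_0.13.5_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/terraform
$ terraform version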
Create the main.tf; you will notice that we are using ssh to connect to KVM, and because the private range of our VMs is not routable via the internet, I'm using a bastion host to reach them.
The bastion host (ssh config from the prerequisites section) is the KVM host, and you will see that ansible is also using that host as a jump box to get to the VM. I am also using cloud-init to bootstrap the node with SSH, etc.
The reason why I'm using remote-exec before the ansible deployment is to ensure that we can establish a connection via SSH before Ansible starts.
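To give an idea of the connection side, a partial sketch of the provider block in main.tf, assuming the deploys user and a hypothetical KVM host address:

$ grep -A2 'provider "libvirt"' main.tf
provider "libvirt" {
  uri = "qemu+ssh://deploys@kvm.mydomain.com/system"
}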
And lastly, our outputs.tf which will display our IP address of our VM:
output "ip" {
value = libvirt_domain.domain-ubuntu.network_interface[0].addresses[0]
}
output "url" {
value = "http://${libvirt_domain.domain-ubuntu.network_interface[0].addresses[0]}"
}
Deploy our Terraform Deployment
It’s time to deploy a KVM instance with Terraform and deploy Nginx to our VM with Ansible using the local-exec provisioner.
Initialize terraform to download all the plugins:
$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/template...
- Finding dmacvicar/libvirt versions matching "0.6.2"...
- Installing hashicorp/template v2.1.2...
- Installed hashicorp/template v2.1.2 (signed by HashiCorp)
- Installing dmacvicar/libvirt v0.6.2...
- Installed dmacvicar/libvirt v0.6.2 (unauthenticated)
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, we recommend adding version constraints in a required_providers block
in your configuration, with the constraint strings suggested below.
* hashicorp/template: version = "~> 2.1.2"
Terraform has been successfully initialized!
Run a plan, to see what will be done:
$ terraform plan
...
Plan: 4 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ ip = (known after apply)
+ url = (known after apply)
...
I’ve been on the hunt for a hobby dedicated server for a terraform project, where I’m intending to use the libvirt provider and found one awesome provider that offers amazingly great prices.
At oneprovider.com, they offer dedicated servers for great prices and they offer a huge number of locations. So I decided to give them a go and ordered a dedicated server in Amsterdam, Netherlands:
I went for 4GB DDR3 RAM, an Atom C2350 2-core CPU with a 128GB SSD and 1Gbps unmetered bandwidth for $7.30 a month, which is super cheap and more than enough for a hobby project:
I've been using them for the last week and I am super impressed.
What are we doing
As part of my Terraform project, I would like to experiment with the libvirt provider to provision KVM instances. For that I need a dedicated server with KVM installed, so in this guide we will install KVM and create a dedicated user that we will use with Terraform.
Install KVM
Once your server is provisioned, SSH to your dedicated server and install cpu-checker to ensure that we are able to install KVM:
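A sketch of those steps, including the KVM packages and the dedicated deploys user (package names assume Ubuntu; membership of the libvirt group lets the user talk to libvirtd):

$ sudo apt update
$ sudo apt install -y cpu-checker
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

$ sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
$ sudo useradd -m -s /bin/bash deploys
$ sudo usermod -aG libvirt deploys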
Create the directory where we will store our VMs' disks:
$ mkdir -p /opt/kvm
And apply ownership permissions for our user and group:
$ chown -R deploys:libvirt /opt/kvm
I ran into a permission denied issue using terraform with the dedicated user, and to resolve it I had to ensure that the security_driver is set to none in /etc/libvirt/qemu.conf:
$ vim /etc/libvirt/qemu.conf
and update the following:
security_driver = "none"
Then restart libvirtd:
$ sudo systemctl restart libvirtd
Test KVM
Switch to the deploys user:
$ sudo su - deploys
And list domains using virsh:
$ virsh list
Id Name State
----------------------------------------------------
Thank You
That's it, now we have a KVM host that allows us to provision VMs. In the next post we will install terraform and the terraform libvirt provider to provision a VM and use ansible to deploy software to it.
Thanks for reading, check out my website or follow me at @ruanbekker on Twitter.
As you can see, our local-exec provisioner is issuing the echo command to write the owner variable's value to a file on disk, where the file name is file_ plus the null resource's id.
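The original snippet isn't shown above, but a minimal sketch of such a resource could look like this (the resource name is hypothetical; self.id refers to the null resource's own id):

$ cat main.tf
resource "null_resource" "this" {
  provisioner "local-exec" {
    command = "echo ${var.owner} > file_${self.id}"
  }
}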
As we are referencing a variable, we need to define the variable, I will define it in variables.tf:
variable "owner" {}
As you can see, I am not defining the value, as I will provide it at runtime.
Initialize
When we initialize terraform, terraform builds up a dependency tree from all the .tf files and downloads any dependencies it requires:
$ terraform init
Apply
Run our deployment and pass our variable at runtime:
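For example, assuming we want to set owner to ruan:

$ terraform apply -var 'owner=ruan' -auto-approve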
In this tutorial we will set up an NFS server using Docker for our development environment.
Host Storage Path
In this example we will be using the host path /data/nfs-storage, which will host the storage for our NFS server and which we will mount into the container:
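A sketch using the itsthenetwork/nfs-server-alpine image (the image choice and share name are assumptions):

$ sudo mkdir -p /data/nfs-storage
$ docker run -d --name nfs-server --privileged \
  -v /data/nfs-storage:/nfsshare \
  -e SHARED_DIRECTORY=/nfsshare \
  -p 2049:2049 \
  itsthenetwork/nfs-server-alpine:latest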
This is a quick post to demonstrate how to use if statements in bash to check that we have the required environment variables in our environment before a script continues.
Let’s say we require FOO and BAR in our environment before we can continue, we can do this:
#!/usr/bin/env bash

if [ -z "${FOO}" ] || [ -z "${BAR}" ]; then
  echo "required environment variables does not exist"
  exit 1
else
  echo "required environment variables are set"
  # do things
  exit 0
fi
So now if FOO or BAR is not set in our environment, the script will exit with return code 1.
To test it, when we pass no environment variables:
$ chmod +x ./update.sh
$ ./update.sh
required environment variables does not exist
If we only pass one environment variable:
$ FOO=1 ./update.sh
required environment variables does not exist
And as the result we want, when we pass both required environment variables, we have success:
$ FOO=1 BAR=2 ./update.sh
required environment variables are set