This is a quick post on how to do port forwarding with iptables on Linux.
What would we like to achieve
We have an LXC container running a Redis server, and we would like to set up port forwarding so that we can reach the server over the internet.
LXC Host
On the host that runs our LXC containers, we want to forward host port 5379 to port 6379 of the container (10.37.117.37), so that we can connect to Redis on a non-standard port:
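A sketch of what those rules could look like, assuming IP forwarding is enabled on the host and that 10.37.117.37 is reachable from the host (adjust interfaces and firewall policies to your setup):

$ sysctl -w net.ipv4.ip_forward=1
# rewrite traffic arriving on host port 5379 to the container's redis port
$ iptables -t nat -A PREROUTING -p tcp --dport 5379 -j DNAT --to-destination 10.37.117.37:6379
# masquerade so return traffic flows back via the host
$ iptables -t nat -A POSTROUTING -d 10.37.117.37 -p tcp --dport 6379 -j MASQUERADE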
I stumbled upon a GitHub repository that stores time-series data of coronavirus / COVID-19 statistics in JSON format, which gets updated daily.
I was curious to see the data about my country and how the metrics would look after our lockdown started, so I decided to consume that data with Python and the requests library, ingest the COVID-19 data into Elasticsearch, and then visualize it with Grafana.
Sample of the Data
Let’s have a peek at the data to determine how we will use it to write to Elasticsearch. Let’s consume the data with python:
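A minimal sketch of peeking at the data (the country key here is just an example; the response is a dictionary keyed by country name, each holding a list of daily entries):

import requests

url = 'https://pomber.github.io/covid19/timeseries.json'
data = requests.get(url).json()

# the response is keyed per country, each value a list of daily stats
print(list(data.keys())[:3])
print(data['South Africa'][0])
# {'date': '2020-1-22', 'confirmed': 0, 'deaths': 0, 'recovered': 0}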
As you can see, the date is displayed as 2020-1-22 instead of 2020-01-22. I want to make it consistent, as I will be ingesting the data with a @timestamp key that uses the date from the returned data, so we first need to convert that before we ingest the data.
The other thing I thought about is that if, for some reason, we need to ingest this data again, we don't want to end up with duplicates (the same document with different _id's). For that I decided to generate a hash value consisting of the date and the country, so whenever the ingest script runs it uses the same id for a given document and simply overwrites it, and we won't end up with duplicates.
So the idea is to ingest a document to elasticsearch like this:
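Something along these lines, where the document id is the sha1 hash of the country and date (the field values are purely illustrative):

PUT /coronastats/coronastats/<sha1(country + date)>
{
  "day": "2020-3-26",
  "timestamp": "2020-03-26 00:00:00",
  "country": "South Africa",
  "confirmed": 927,
  "recovered": 12,
  "deaths": 0
}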
The first run will load all the data and ingest everything up to the current day into Elasticsearch. Once that is done, we will add code to our script so that it only ingests the most recent day's data into Elasticsearch, which we will control with a cronjob.
Create an index with a mapping to let Elasticsearch know that timestamp will be a date field:
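A sketch of the index creation, assuming Elasticsearch 6.x (which still uses mapping types, matching the /coronastats/coronastats/ document path used by the script) and the yyyy-MM-dd HH:mm:ss format that the script produces:

$ curl -XPUT "https://es.domain.com/coronastats" \
    -u "username:password" \
    -H 'Content-Type: application/json' -d '
  {
    "mappings": {
      "coronastats": {
        "properties": {
          "day":       {"type": "keyword"},
          "timestamp": {"type": "date", "format": "yyyy-MM-dd HH:mm:ss"},
          "country":   {"type": "keyword"},
          "confirmed": {"type": "long"},
          "recovered": {"type": "long"},
          "deaths":    {"type": "long"}
        }
      }
    }
  }'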
Run the script to ingest all the data into Elasticsearch. Now we will create the script that will run daily to ingest only the previous day's data, so that we only ingest the latest data and not everything from scratch again.
I will create this file in /opt/scripts/corona_covid19_ingest.py:
#!/usr/bin/python
import requests
import datetime as dt
import json
import hashlib

url = 'https://pomber.github.io/covid19/timeseries.json'
elasticsearch_url = "https://es.domain.com"
elasticsearch_username = ""
elasticsearch_password = ""

api_response = requests.get(url).json()
yesterdays_date = dt.date.today() - dt.timedelta(days=1)

def convert_datestamp(day):
    return str(dt.datetime.strptime(day, '%Y-%m-%d'))

def hash_function(country, date):
    string_to_hash = country + date
    hash_obj = hashlib.sha1(string_to_hash.encode('utf-8'))
    hash_value = hash_obj.hexdigest()
    return hash_value

def map_es_doc(payload, country):
    doc = {
        "day": payload['date'],
        "timestamp": convert_datestamp(payload['date']),
        "country": country,
        "confirmed": payload['confirmed'],
        "recovered": payload['recovered'],
        "deaths": payload['deaths']
    }
    return doc

def ingest(doc_id, payload):
    response = requests.put(
        elasticsearch_url + '/coronastats/coronastats/' + doc_id,
        auth=(elasticsearch_username, elasticsearch_password),
        headers={'content-type': 'application/json'},
        json=payload
    )
    return response.status_code

for country in api_response.keys():
    try:
        for each_payload in api_response[country]:
            if convert_datestamp(each_payload['date']).split()[0] == str(yesterdays_date):
                print("ingesting latest data for {country}".format(country=country))
                doc_id = hash_function(country, each_payload['date'])
                doc = map_es_doc(each_payload, country)
                response = ingest(doc_id, doc)
                print(response)
    except Exception as e:
        print(e)
The only difference with this script is that it checks whether the date is equal to yesterday's date, and if so the document is prepared and ingested into Elasticsearch. We will create a cronjob that runs this script every morning at 08:45.
First make the file executable:
$ chmod +x /opt/scripts/corona_covid19_ingest.py
Run crontab -e and add the following:
45 8 * * * /opt/scripts/corona_covid19_ingest.py
Visualize the Data with Grafana
We will create this dashboard:
We need an Elasticsearch datasource that points to the index we ingest our data into. Head over to Datasources, add an Elasticsearch datasource, set the index to coronastats and set the time field to timestamp.
We want to make the dashboard dynamic with a “country” dropdown selector. For that, go to the dashboard settings, select Variables and add a country variable:
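With the Elasticsearch datasource, a terms lookup on the country field should do the trick; something along these lines (the exact field name depends on your mapping):

Name: country
Type: Query
Data source: your Elasticsearch datasource
Query: {"find": "terms", "field": "country"}
Multi-value: enabled
Include All option: enabled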
First panel: “Reported Cases per Day”:
Second panel: “Confirmed Cases”:
Third panel: “Recovered Cases”:
Now, if we select Italy, Spain and France as an example, we will see something like this:
Thank You
Although it's pretty cool visualizing data, the situation we are in at the moment with coronavirus / COVID-19 is really scary, and we should all do our part: try to stay home, sanitize, and try not to spread the virus. Together we can do great things by reducing the spread of this virus.
In this post we will set up a nginx log exporter for Prometheus to get metrics from our nginx web server, such as the number of requests per method, status codes, processed bytes, etc. Then we will configure Prometheus to scrape our nginx metrics endpoint and also create a basic dashboard to visualize our data.
If you follow along with this tutorial, it assumes that you have Prometheus and Grafana up and running. If not, the embedded links will take you to the blog posts to set them up.
Nginx Webserver
Install nginx:
$ apt update
$ apt install nginx -y
Configure your nginx server's log format to match the nginx log exporter's expected format; we will name it custom:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
# remove the escape char if you are going to use this config
include /etc/nginx/modules-enabled/\*.conf;
events {
worker_connections 768;
}
http {
# basic config
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# ssl config
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
# logging config
log_format custom '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log custom;
error_log /var/log/nginx/error.log;
# gzip
gzip on;
# virtual host config
include /etc/nginx/conf.d/myapp.conf;
}
I will delete the default host config:
$ rm -rf /etc/nginx/sites-enabled/default
And then create my /etc/nginx/conf.d/myapp.conf as referenced in my main config, with the following:
server {
listen 80 default_server;
# remove the escape char if you are going to use this config
server_name \_;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
location / {
try_files $uri $uri/ =404;
}
}
When you make a GET request to your server, you should see something like this in your access log:
10x.1x.2x.1x - - [25/Apr/2020:00:31:11 +0000] "GET / HTTP/1.1" 200 396 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1 Safari/605.1.15" "-"
Create the directory where we will place our config for our exporter:
$ mkdir /etc/prometheus
Create the config file:
$ vim /etc/prometheus/nginxlog_exporter.yml
You can follow the instructions from github.com/prometheus-nginxlog-exporter for more information on configuration, but I will be using the following config:
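My config looks roughly like the one below; it is a sketch based on the exporter's documented format, so adjust it to the version you installed (I assume the release binary was placed at /usr/bin/prometheus-nginxlog-exporter). The format line must match the custom nginx log format defined earlier, and the labels appear on every exported metric:

listen:
  port: 4040
  address: "0.0.0.0"

namespaces:
  - name: myapp
    format: "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" \"$http_x_forwarded_for\""
    source:
      files:
        - /var/log/nginx/access.log
    labels:
      service: "myapp"
      environment: "production"
      hostname: "myapp.example.com"

To run it under systemd, a minimal unit at /etc/systemd/system/nginxlog_exporter.service could look like this:

[Unit]
Description=Prometheus Log Exporter
After=network.target

[Service]
ExecStart=/usr/bin/prometheus-nginxlog-exporter -config-file /etc/prometheus/nginxlog_exporter.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload systemd, start the exporter and check its status:

$ systemctl daemon-reload
$ systemctl start nginxlog_exporter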
$ systemctl status nginxlog_exporter
● nginxlog_exporter.service - Prometheus Log Exporter
Loaded: loaded (/etc/systemd/system/nginxlog_exporter.service; disabled; vendor preset: enabled)
Active: active (running) since Sat 2020-04-25 00:50:06 UTC; 5s ago
Main PID: 4561 (prometheus-ngin)
Tasks: 7 (limit: 2317)
CGroup: /system.slice/nginxlog_exporter.service
└─4561 /usr/bin/prometheus-nginxlog-exporter -config-file /etc/prometheus/nginxlog_exporter.yml
Apr 25 00:50:06 nginx-log-exporter systemd[1]: Started Prometheus Log Exporter.
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: loading configuration file /etc/prometheus/nginxlog_exporter.yml
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: using configuration {Listen:{Port:4040 Address:0.0.0.0} Consul:{Enable:false Address: Datacenter: Scheme: Toke
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: starting listener for namespace myapp
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: running HTTP server on address 0.0.0.0:4040
Apr 25 00:50:06 nginx-log-exporter prometheus-nginxlog-exporter[4561]: 2020/04/25 00:50:06 Seeked /var/log/nginx/access.log - &{Offset:0 Whence:2}
Test the exporter
Make a couple of requests against your webserver:
$ for each in {1..10}; do curl http://78.141.211.49 ; done
So Prometheus will now scrape the exporter's HTTP endpoint (:4040/metrics) and store the returned values. But to get a feel for what the metrics look like, make a request to the metrics endpoint yourself:
$ curl http://localhost:4040/metrics
...
# HELP myapp_http_response_count_total Amount of processed HTTP requests
# TYPE myapp_http_response_count_total counter
myapp_http_response_count_total{environment="production",hostname="myapp.example.com",method="GET",service="myapp",status="200"} 10
myapp_http_response_count_total{environment="production",hostname="myapp.example.com",method="POST",service="myapp",status="404"} 1
# HELP myapp_http_response_size_bytes Total amount of transferred bytes
# TYPE myapp_http_response_size_bytes counter
myapp_http_response_size_bytes{environment="production",hostname="myapp.example.com",method="GET",service="myapp",status="200"} 6120
myapp_http_response_size_bytes{environment="production",hostname="myapp.example.com",method="POST",service="myapp",status="404"} 152
# HELP myapp_parse_errors_total Total number of log file lines that could not be parsed
# TYPE myapp_parse_errors_total counter
myapp_parse_errors_total 0
...
As you can see, we are getting metrics such as the total response count, response size, parse errors, etc.
Configure Prometheus
Let’s configure prometheus to scrape this endpoint. Head over to your prometheus instance, and edit your prometheus config:
$ vim /etc/prometheus/prometheus.yml
Note that in my config I am scraping two endpoints: the Prometheus endpoint, which already exists, and the nginx exporter endpoint, which I am adding. In full, this is how my config will look:
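A sketch of the full config (the nginx target is a placeholder; point it at the host running the exporter on port 4040):

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'nginx'
    static_configs:
      - targets: ['nginx-host:4040']

Restart or reload Prometheus after saving the config so the new scrape job is picked up.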
To verify that the exporter is working as expected, head over to your prometheus ui on port 9090, and query up{} to see if your exporters are returning 1:
We can then query prometheus with myapp_http_response_count_total{service="myapp"} to see the response counts:
Dashboarding in Grafana
If you don’t have Grafana installed, you can look at my Grafana Installation post to get that up and running.
If you have not created the Prometheus datasource in Grafana yet, head over to the configuration section on the left, select Datasources, add a Prometheus datasource and add the following (this assumes Grafana runs on the Prometheus node, which is fine for testing):
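Something along these lines should do:

Name: Prometheus
Type: Prometheus
URL: http://localhost:9090
Access: Server (default)

Save and test the datasource before moving on.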
Create a new dashboard and add a new panel:
Let’s query our data to show us HTTP Method and Status code per 30s: rate(myapp_http_response_count_total{service="myapp"}[$__interval])
Thank You
Hope you found this helpful, if you haven’t seen my other posts on Prometheus, have a look at the following:
In this tutorial we will set up a site-to-site IPsec VPN with strongSwan, and we will enable each server to discover the other VPN server via dynamic DNS. We will also extend our config with road warrior support, so that you will be able to connect to your homelab from any mobile or laptop device, from any remote location.
Some background
One of my friends and I decided to build a site-to-site VPN with strongSwan so that our homelabs could reach each other over private networks.
One challenge is that neither of our internet providers supports static IP addressing, so each VPN server needs to know where to connect whenever the other side's IP address changes.
What we will be doing
We will set up the strongSwan VPN on both servers and make the private LAN ranges reachable from both sides. As I have a domain hosted on Cloudflare, I will be using Cloudflare's API to update the A record of each side's DNS whenever the IP changes.
Environment
On my side, which I will be referring to as Side-A:
Public DNS Name: side-a.example.com
Private Range: 192.168.0.0/24
VPN Server IP: 192.168.0.2
On my friend’s side, which I will be referring to as Side-B:
Public DNS Name: side-b.example.com
Private Range: 192.168.1.0/24
VPN Server IP: 192.168.1.2
Cloudflare Dynamic DNS
You don't need to use Cloudflare; there are services such as dyndns.com and no-ip.com. But for this tutorial I will be using Cloudflare to utilize my own domain.
First we need to create an API token. Head over to your dashboard at dash.cloudflare.com, go to “My Profile”, select “API Tokens”, allow “Read Zones” and “Edit DNS”, then select “Create Token”. Keep the returned token value in a safe place.
Install the pre-requirements:
$ apt install python python-dev python-pip make curl build-essential -y
Get the source and install:
$ git clone https://github.com/LINKIWI/cloudflare-ddns-client.git
$ cd cloudflare-ddns-client
$ make install
We will now configure the Cloudflare dynamic DNS client. This needs to be done on both sides, but I will only demonstrate it for side-a:
$ cloudflare-ddns --configure
Use API token or API key to authenticate?
Choose [T]oken or [K]ey: T
Enter the API token you created at https://dash.cloudflare.com/profile/api-tokens.
Required permissions are READ Account.Access: Organizations, Identity Providers, and Groups; READ Zone.Zone; EDIT Zone.DNS
CloudFlare API token: [redacted]
Enter the domains for which you would like to automatically update the DNS records, delimited by a single comma.
Comma-delimited domains: side-a.example.com
Testing it out to ensure the A record can be updated:
$ cloudflare-ddns --update-now
Found external IPv4: "1.x.x.x"
Listing all zones.
Finding all DNS records.
Updating the A record (ID x) of (sub)domain side-a.example.com (ID x) to 1.x.x.x.
DNS record updated successfully!
We could run the command above in a cron, but I will use a bash script that only runs the update when the public IP has changed: /opt/scripts/detect_ip_change.sh:
#!/bin/bash
set -ex
MY_DDNS_HOST="side-a.example.com"
if [ "$(dig ${MY_DDNS_HOST} +short)" == "$(curl -s icanhazip.com)" ];
then exit 0;
else /usr/local/bin/cloudflare-ddns --update-now;
fi
Make the file executable: chmod +x /opt/scripts/detect_ip_change.sh then edit your cronjobs: crontab -e and add the script:
* * * * * /opt/scripts/detect_ip_change.sh
This will keep your DNS updated. It needs to be done on both sides if you want to use dynamic DNS.
Port Forwarding
We will need to forward UDP traffic from the router to the VPN server, on both sides:
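IPsec with IKEv2 uses UDP port 500 for IKE and UDP port 4500 for NAT traversal, so on each router forward those two ports to the VPN server on the LAN:

Side-A router: UDP 500 and 4500 -> 192.168.0.2
Side-B router: UDP 500 and 4500 -> 192.168.1.2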
$ ip route add 192.168.1.0/24 via 192.168.0.2 dev eth0
We want to persist the iptables and static route across reboots, so edit the /etc/rc.local file, if it’s not there create it with the following values:
#!/bin/bash
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.0.0/24 -j MASQUERADE
iptables -A FORWARD -s 192.168.1.0/24 -d 192.168.0.0/24 -j ACCEPT
ip route add 192.168.1.0/24 via 192.168.0.2 dev eth0
exit 0
If you created the file, make sure to apply executable permissions:
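For example:

$ chmod +x /etc/rc.local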
$ ip route add 192.168.0.0/24 via 192.168.1.2 dev eth0
We want to persist the iptables and static route across reboots, so edit the /etc/rc.local file, if it’s not there create it with the following values:
#!/bin/bash
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.1.0/24 -j MASQUERADE
iptables -A FORWARD -s 192.168.0.0/24 -d 192.168.1.0/24 -j ACCEPT
ip route add 192.168.0.0/24 via 192.168.1.2 dev eth0
exit 0
If you created the file, make sure to apply executable permissions, as shown for side-a.
From side-a (192.168.0.2) ping the gateway on side-b (192.168.1.1):
$ ping -c2 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=62 time=11.9 ms
If you want to be able to reach the private range of the other side of the vpn from any device on your network, you should add a static route on your router to inform your default gateway where to route traffic to.
In this case, on side-a (192.168.0.0/24) we want to inform our default gateway to route traffic destined for 192.168.1.0/24 to the VPN server, as it knows how to route that destination over the VPN.
This step is optional, but since we can access each others homelabs, we thought it would be nice to be able to access the resources from mobile devices or laptops when we are on remote locations.
We set it up so that each VPN owner connects to their own endpoint (for road warriors), so side-a (which will be me) connects to its own DNS endpoint when away from home.
I will only demonstrate how to extend your config to add the ability of a road warrior VPN connection; append the following to /etc/ipsec.conf:
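A rough sketch of what that road warrior conn block could look like on side-a, based on the client settings below; the virtual IP pool is a placeholder and the exact options depend on the rest of your ipsec.conf:

conn roadwarrior
    auto=add
    keyexchange=ikev2
    authby=secret
    left=%any
    leftid=side-a.roadwarrior
    leftsubnet=0.0.0.0/0
    right=%any
    rightid=my-laptop
    rightsourceip=10.200.200.0/24

And the matching pre-shared key in /etc/ipsec.secrets:

side-a.roadwarrior my-laptop : PSK "MySuperSecureSecret123"

Reload the configuration with ipsec reload (or restart strongSwan) after making the change.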
To connect your VPN client (I will be using my laptop), use the following details:
VPN Type: IKEv2
Description: Home VPN
Server: side-a.example.com
Remote ID: side-a.roadwarrior
Local ID: my-laptop
User Authentication: None
Secret: MySuperSecureSecret123
Thank You
In this tutorial I demonstrated how to set up a site-to-site IPsec VPN between two sides whose internet connections have dynamic IPs, and how to append a road warrior config so that you can connect to your homelab from anywhere in the world.
With k3d we can mount a host path into the node containers, and with persistent volumes we can set a hostPath for our persistent volumes. With k3d, all the nodes use the same volume mapping, which maps back to the host.
We will test the data persistence by writing a file inside a container, killing the pod, then exec-ing into the new pod and checking whether the data persisted.
The k3d Cluster
Create the directory on the host where we will persist the data:
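For example, assuming /tmp/k3dvol as the host path and the k3d v1 syntax for creating the cluster (adjust the flags for newer k3d releases):

$ mkdir -p /tmp/k3dvol
$ k3d create --name demo --workers 2 --volume /tmp/k3dvol:/tmp/k3dvol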
In this post we will explore how to use asynchronous functions in OpenFaas.
What are we doing
A synchronous request blocks the client until the operation completes, whereas an asynchronous request doesn't block the client, which is nice for long-running tasks or for function invocations that run in the background through the use of NATS Streaming.
We will be building a Python Flask API server which will act as our webhook service. When we invoke our function by making an HTTP request, we also include a callback URL as a header, which will be the address where the queue worker will post its results.
Then we will make an HTTP request to the synchronous function, where we get the response directly from the function, and an HTTP request to the asynchronous function, where we will see the response in the webhook service's logs.
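As a rough illustration (the function name and webhook address are placeholders; OpenFaas exposes asynchronous invocation under the /async-function/ path and the queue worker posts the result to the X-Callback-Url header):

# synchronous: blocks until the function returns the response body
$ curl http://openfaas.localdns.xyz/function/fn \
    -d '{"data": "test"}'

# asynchronous: returns 202 Accepted immediately,
# the result is posted to the callback url
$ curl http://openfaas.localdns.xyz/async-function/fn \
    -d '{"data": "test"}' \
    -H "X-Callback-Url: http://my-flask-api:5000/webhook"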
Deploy OpenFaas
Deploy OpenFaas on a k3d Kubernetes cluster if you want to follow along on your laptop. You can follow this post to deploy a Kubernetes cluster and OpenFaas:
In this post we will deploy OpenFaas on Kubernetes locally using k3sup and k3d, then deploy a Traefik Ingress so that we can access the OpenFaas Gateway over HTTP on the standard port 80.
k3d is an amazing wrapper that deploys a k3s cluster on docker, and k3sup makes it very easy to provision OpenFaas to your Kubernetes cluster.
Deploy a Kubernetes Cluster
If you have not installed k3d, you can install k3d on mac with brew:
$ brew install k3d
We will deploy our cluster with 2 worker nodes and publish port 80 to the containers' port 80:
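Something like this with k3d v1 syntax (newer releases use a slightly different command structure):

$ k3d create --name demo --workers 2 --publish 80:80
$ export KUBECONFIG="$(k3d get-kubeconfig --name='demo')"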
In my scenario, I am using openfaas.localdns.xyz, which resolves to 127.0.0.1. Next we need to know which service to route the traffic to, which we can find with:
$ kubectl get svc/gateway -n openfaas
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gateway ClusterIP 10.43.174.57 <none> 8080/TCP 23m
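We can then route the openfaas.localdns.xyz host to the gateway service with an ingress; a sketch using the Traefik ingress class that ships with k3s (extensions/v1beta1 was the Ingress API of that Kubernetes era):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: openfaas-gateway
  namespace: openfaas
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: openfaas.localdns.xyz
    http:
      paths:
      - path: /
        backend:
          serviceName: gateway
          servicePort: 8080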
In this post we will deploy OpenFaas on Kubernetes (k3d).
Kubernetes on k3d
k3d is a helper tool that provisions a Kubernetes distribution, called k3s, on docker. To deploy a Kubernetes cluster with k3d, you can follow this blog post.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-demo-server Ready master 102s v1.14.6-k3s.1
k3d-demo-worker-0 Ready worker 102s v1.14.6-k3s.1
k3d-demo-worker-1 Ready worker 102s v1.14.6-k3s.1
That was easy right?
Deploy a Sample App
We will deploy a simple golang web application that returns the container name upon an HTTP request. We will also make use of the Traefik ingress for demonstration.
Our deployment manifest, which I will save as app.yml:
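A sketch of what app.yml could contain; the image (containous/whoami, which simply echoes the container hostname) and the host app.localdns.xyz are stand-ins for the actual golang application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostname-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hostname-app
  template:
    metadata:
      labels:
        app: hostname-app
    spec:
      containers:
      - name: hostname-app
        image: containous/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hostname-app
spec:
  selector:
    app: hostname-app
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hostname-app
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: app.localdns.xyz
    http:
      paths:
      - path: /
        backend:
          serviceName: hostname-app
          servicePort: 80

Apply it with kubectl apply -f app.yml, then curl http://app.localdns.xyz a few times to see the responses alternate between the two replicas.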