In this post we will focus on hardening our security by adjusting our SSH configuration and applying some iptables firewall rules.
This will be the list of things that we will do:
- Change the SSH Port
- Don't allow root to SSH
- Disable password based authentication
- Enable key-based authentication, and only for a single user
- Allow our user to sudo
- Use iptables to block sources trying to DDoS your server
Packages
First let’s install the packages that we need. I’m using Debian, so I will be using the apt package manager:
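The install command itself did not survive in this extract, but on Debian it could look something like this (the exact package list is my assumption; iptables-persistent is optional and only there to persist firewall rules across reboots):

```shell
sudo apt update
sudo apt install -y openssh-server iptables iptables-persistent
```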
We will be using SSH port 2914. Replace your SSH config with the following and make adjustments where you need to:
# /etc/ssh/sshd_config
Port 2914
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
LoginGraceTime 1m
PermitRootLogin no
MaxAuthTries 3
MaxSessions 5
AuthenticationMethods publickey
PubkeyAuthentication yes
AuthorizedKeysFile /home/james/.ssh/authorized_keys
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
UsePAM yes
AllowUsers james
DenyUsers root
X11Forwarding yes
PrintMotd no
UseDNS no
PidFile /var/run/sshd.pid
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
Then save the file and restart SSH:
$ systemctl restart sshd
While you are still connected to the shell session, open up a new terminal and try to connect with your new user and private SSH key to ensure that you can connect to your server.
Iptables
We want to drop incoming connections from sources that make more than 10 connection attempts to SSH within 60 seconds.
New connections that pass this check are then rate limited with a token bucket: tokens are refilled at a rate of 3 per minute, and the bucket holds a maximum (burst) of 3 tokens.
In our script we will place the following content:
#!/usr/bin/env bash
INTERFACE=eth0 # check ifconfig to determine the correct interface
SSH_PORT=2914
CONNECTION_ATTEMPTS=10
CONNECTION_TIME=60
#WHITELIST_IP=x.x.x.x/32 # replace ip and uncomment if you want to whitelist a ip
#iptables -I INPUT -s ${WHITELIST_IP} -p tcp --dport ${SSH_PORT} -i ${INTERFACE} -j ACCEPT # uncomment if you want to use whitelisting
iptables -A INPUT -p tcp --dport ${SSH_PORT} -i ${INTERFACE} -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport ${SSH_PORT} -i ${INTERFACE} -m state --state NEW -m recent --update --seconds ${CONNECTION_TIME} --hitcount ${CONNECTION_ATTEMPTS} -j DROP
iptables -A INPUT -i ${INTERFACE} -p tcp --dport ${SSH_PORT} -m state --state NEW -m limit --limit 3/min --limit-burst 3 -j ACCEPT
iptables -A INPUT -i ${INTERFACE} -p tcp --dport ${SSH_PORT} -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o ${INTERFACE} -p tcp --sport ${SSH_PORT} -m state --state ESTABLISHED -j ACCEPT
Now we want to execute this script whenever the server boots. Open up /etc/rc.local and append the line that runs our script, so that the file looks more or less like:
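As a sketch, assuming the iptables script was saved as /opt/scripts/ssh-firewall.sh and marked executable (both the path and filename here are my own choice), /etc/rc.local could look like:

```
#!/bin/sh -e
#
# rc.local - executed at the end of each multiuser runlevel

/opt/scripts/ssh-firewall.sh

exit 0
```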
This is a quick post to demonstrate how to encrypt and decrypt files with ccrypt
About
Ccrypt’s description from its project page:
Encryption and decryption depends on a keyword (or key phrase) supplied by the user. By default, the user is prompted to enter a keyword from the terminal. Keywords can consist of any number of characters, and all characters are significant (although ccrypt internally hashes the key to 256 bits). Longer keywords provide better security than short ones, since they are less likely to be discovered by exhaustive search.
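The actual commands did not make it into this extract, but the basic encrypt/decrypt workflow is roughly the following (the -K flag is convenient for demos but unsafe on shared systems, since the keyword is visible in the process list; normally you would omit it and type the keyword at the prompt):

```shell
# encrypt: replaces secret.txt with secret.txt.cpt
ccrypt -e -K 'my long keyword' secret.txt

# decrypt: replaces secret.txt.cpt with secret.txt
ccrypt -d -K 'my long keyword' secret.txt.cpt
```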
In this post I will demonstrate how to deploy Grafana Labs’s Loki on Multipass using cloud-init so that you can run your own dev environment and run a couple of queries to get you started.
About
If you haven’t heard of Multipass, it allows you to run Ubuntu VMs on your Mac or Windows workstation.
If you haven’t heard of Loki, as described by Grafana Labs: “Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus.”
Install Multipass
Head over to multipass.run to get the installer for your operating system, and if you are curious about Multipass, I wrote a beginner's guide on Multipass which can be found here
Cloud Init for Loki
We will be making use of cloud-init to bootstrap Loki v2.0.0 to our multipass instance.
V2.0.0 is the current release at the time of writing, so depending on when you read this, have a look at the Loki Releases page for the latest version and adjust the cloud-init.yml accordingly if it differs from the one I'm mentioning.
(Optional) If you want to SSH to your Multipass VM, you can use your existing SSH key or generate a new one. If you want to create a new key, you can follow this post.
Copy your public key, in my case ~/.ssh/id_rsa.pub and paste it under the ssh authorized_keys section.
Our cloud-init.yml has a couple of sections, but to break it down it will do the following:
- We provide it our public SSH key so that we can SSH in with our private key
- It updates the package index repository
- It installs the packages unzip and wget
- It creates the loki systemd unit file and places it under /etc/systemd/system/
- When the VM boots it creates the loki user and the loki etc directory
- Once that completes, it downloads the loki, logcli and promtail binaries from GitHub
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA...Ha9 your-comment
package_update: true
packages:
  - unzip
  - wget
write_files:
  - content: |-
      [Unit]
      Description=Loki
      Wants=network-online.target
      After=network-online.target

      [Service]
      Type=simple
      User=loki
      Group=loki
      Restart=on-failure
      ExecStart=/usr/local/bin/loki -config.file /etc/loki/loki-local-config.yaml

      [Install]
      WantedBy=multi-user.target
    owner: root:root
    path: /etc/systemd/system/loki.service
    permissions: '0644'
bootcmd:
  - useradd --no-create-home --shell /bin/false loki
  - mkdir /etc/loki
  - chown -R loki:loki /etc/loki
runcmd:
  - for app in loki logcli promtail; do wget "https://github.com/grafana/loki/releases/download/v2.0.0/${app}-linux-amd64.zip"; done
  - for app in loki logcli promtail; do unzip "${app}-linux-amd64.zip"; done
  - for app in loki logcli promtail; do mv "${app}-linux-amd64" /usr/local/bin/${app}; done
  - for app in loki logcli promtail; do rm -f "${app}-linux-amd64.zip"; done
  - wget https://raw.githubusercontent.com/grafana/loki/v2.0.0/cmd/loki/loki-local-config.yaml
  - mv ./loki-local-config.yaml /etc/loki/loki-local-config.yaml
  - chown loki:loki /etc/loki/loki-local-config.yaml
  - systemctl daemon-reload
  - systemctl start loki
  - sleep 5
  - echo "this is a test" | promtail --stdin --client.url http://localhost:3100/loki/api/v1/push --client.external-labels=app=cli -server.disable
You will notice that the VM will have loki, logcli and promtail available on it, so you will have an environment to use all of them together.
As you can see, once we start Loki we pipe "this is a test" to Loki using Promtail, so that we can verify that the data is visible in Loki. That step is not required; I just added it to this demo.
Deploy Loki on Multipass
We will provision a Multipass VM using the Ubuntu Focal distribution, spec our VM with 1 CPU, 512MB of memory and 1GB of disk, and then bootstrap our installation of Loki using cloud-init:
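The launch command itself is missing from this extract; it could look something like the following (the VM name loki and the file name cloud-init.yml are my choices, and the memory/disk flags may be spelled --memory on newer Multipass releases):

```shell
multipass launch focal --name loki \
  --cpus 1 --mem 512M --disk 1G \
  --cloud-init cloud-init.yml
```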
$ multipass list
Name State IPv4 Image
loki Running 192.168.64.19 Ubuntu 20.04 LTS
Test Loki inside the VM
First we will exec into the VM (or SSH), then we will test out Loki inside the VM since we already have logcli available:
$ multipass exec loki -- bash
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@loki:~$
Remember, in our cloud-init we instructed this command to run:
echo "this is a test" | promtail --stdin --client.url http://localhost:3100/loki/api/v1/push --client.external-labels=app=cli -server.disable
So if we use logcli, we can inspect our visible labels:
$ logcli --quiet labels
__name__
app
hostname
job
And as we expect, we will see the app label from the --client.external-labels=app=cli argument that we passed. We can also look at the values for a given label:
$ logcli --quiet labels app
cli
Now let’s query our logs using the label selector: {app="cli"}:
$ logcli --quiet --output raw query '{app="cli"}'
this is a test
If we remove the extra arguments, we will see more verbose output like the following:
$ logcli query '{app="cli"}'
http://localhost:3100/loki/api/v1/query_range?direction=BACKWARD&end=1605092055756745122&limit=30&query=%7Bapp%3D%22cli%22%7D&start=1605088455756745122
Common labels: {app="cli", hostname="loki", job="stdin"}
2020-11-11T12:45:20+02:00 {} this is a test
http://localhost:3100/loki/api/v1/query_range?direction=BACKWARD&end=1605091520778438972&limit=30&query=%7Bapp%3D%22cli%22%7D&start=1605088455756745122
Common labels: {app="cli", hostname="loki", job="stdin"}
We can pipe some more output to Loki:
$ echo "this is another test" | promtail --stdin --client.url http://localhost:3100/loki/api/v1/push --client.external-labels=app=cli -server.disable
And querying our logs:
$ logcli --quiet --output raw query '{app="cli"}'
this is another test
this is a test
Testing Loki Outside our VM
Let’s exit the VM and test Loki from our local workstation. First you will need to get logcli for your OS: head over to the releases page and get the binary of your choice.
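On macOS, for example, fetching the v2.0.0 build and pointing it at the VM could look like this (the IP comes from the multipass list output earlier; LOKI_ADDR is the environment variable logcli reads for the server address):

```shell
wget https://github.com/grafana/loki/releases/download/v2.0.0/logcli-darwin-amd64.zip
unzip logcli-darwin-amd64.zip
sudo mv logcli-darwin-amd64 /usr/local/bin/logcli

# tell logcli where Loki lives
export LOKI_ADDR=http://192.168.64.19:3100
```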
So every time we run loki_echo {string} we will run a docker container from the busybox image and pass the {string} as an argument to the echo command inside the container, which will be sent to the loki log driver and land up in Loki.
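The loki_echo helper itself is not shown in this extract; it could be a small shell function along these lines (assuming the Docker Loki logging driver plugin is installed and Loki is reachable on localhost:3100; the label app=echo-container matches the query used later):

```shell
# run a throwaway busybox container whose stdout is shipped
# to Loki via the loki log driver
loki_echo() {
  docker run --rm \
    --log-driver=loki \
    --log-opt loki-url="http://localhost:3100/loki/api/v1/push" \
    --log-opt loki-external-labels="app=echo-container" \
    busybox echo "$@"
}
```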
Let’s push 100 log events to Loki:
$ count=0
$ while [ ${count} != 100 ]; do
    for color in red blue white silver green; do
      loki_echo "there are ${RANDOM} items of ${color} available"
      count=$((count+1))
    done
  done
there are 26890 items of green available
there are 14856 items of red available
there are 31162 items of blue available
there are 23993 items of white available
there are 22310 items of silver available
there are 10700 items of green available
there are 14077 items of red available
there are 20642 items of blue available
there are 31576 items of white available
there are 26053 items of silver available
there are 2973 items of green available
there are 2203 items of red available
there are 8557 items of blue available
...
Let’s look for logs with blue or green and limit the results to 5:
$ logcli query '{app="echo-container"} |~ "items of (blue|green)"' --quiet --limit 5 --output raw
there are 28985 items of green available
there are 10289 items of blue available
there are 12316 items of green available
there are 23775 items of blue available
there are 20 items of green available
Teardown
If you followed along, you can terminate your Multipass VM with:
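Something along these lines should do it (purge permanently removes the deleted instance):

```shell
multipass stop loki
multipass delete loki
multipass purge
```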
Recently Grafana Labs announced Loki v2 and it's awesome! Definitely check out their blog post for more details.
Loki has an index option called boltdb-shipper, which allows you to run Loki with only an object store, so you no longer need a dedicated index store such as DynamoDB. You can also extract labels from log lines at query time, which is CRAZY! And I really like how they've implemented it: you can parse, filter and format like mad.
And then there's generating alerts from any query, which we will go into today. Definitely check out this blogpost and this video for more details on the features of Loki v2.
What will we be doing today
In this tutorial we will set up an alert using the Loki local ruler to alert us when we have a high number of log events coming in. For example, let's say someone has debug logging enabled in their application, and we want to send an alert to Slack when it breaches the threshold.
I will simulate this with an http-client container which runs curl in a while loop to fire a bunch of HTTP requests against the nginx container, which logs to Loki, so we can see how the alerting works; in this scenario we will alert to Slack.
And after that we will stop our http-client container to see how the alarm resolves when the log rate comes down again.
All the components are available in the docker-compose.yml on my github repository
Components
Let’s break it down and start with the loki config:
In this section of the Loki config, I make use of the local ruler, map my alert rules under /etc/loki/rules/, and define the Alertmanager instance where these alerts should be shipped to.
In my rule definition /etc/loki/rules/demo/rules.yml:
In my expression, I am using LogQL to return the per-second rate of all my docker logs within the last minute, per compose service, for my dockerlogs job, and we specify that it should alert when the threshold is above 60.
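A minimal sketch of what such a rule could look like (the group name, severity label and annotation values here are illustrative, not the exact contents of my rules.yml):

```
groups:
  - name: rate-alerting
    rules:
      - alert: HighLogRate
        expr: sum by (compose_service) (rate({job="dockerlogs"}[1m])) > 60
        for: 1m
        labels:
          severity: warning
          team: devops
        annotations:
          summary: high log throughput
          dashboard: https://grafana.example.local/d/logging
```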
As you can see I have a couple of labels and annotations, which become very useful when you have dashboard links, runbooks etc. that you would like to map to your alert. I am doing the mapping in my alertmanager.yml config:
As you can see, when my alert matches nothing it will go to my catchall receiver, but when my label contains devops it will route the alert to my warning-devops-slack receiver, and then we parse our labels and annotations to include the values in our alarm on Slack.
Demo
Enough with the background details; it's time to get into the action.
The docker-compose will have containers for grafana, alertmanager, loki, nginx and an http-client.
The http-client is curl in a while loop that will just make a bunch of http requests against the nginx container, which will be logging to loki.
Get the source
Get the code from my github repository:
$ git clone https://github.com/ruanbekker/loki-alerts-docker
$ cd loki-alerts-docker
You will need to replace the Slack webhook URL and the Slack channel where you want your alerts to be sent. This will take the environment variables and replace the values in config/alertmanager.yml (always check out a script first, before executing it).
And in slack, we should see that the alarm recovered and we should see the notification:
Then you can terminate your stack:
$ docker-compose down
Pretty epic stuff, right? I really love how cost-effective Loki is, as logging used to be so expensive to run and especially to maintain. Grafana Labs are really doing some epic work and my hat goes off to them.
Thanks
I hope you found this useful, feel free to reach out to me on Twitter @ruanbekker or visit me on my website ruan.dev
In this post I will demonstrate how to send messages to slack using python based on the status of an event.
We will keep it basic: when something is down or up, it should send a Slack message with the status, message and color, and embed your Grafana dashboard links inside the alert (or any links that you would like).
Create a Webhook
From a previous post on how to use curl to send slack messages I showed how to create your webhook, so you can just follow that post if you want to follow along.
Once you have a webhook, which will look like https://hooks.slack.com/services/xx/yy/zz, you are good to follow to the next step.
Creating the Script
First we need requests:
$ pip install requests
Then we will create slack_notifier.py; just ensure that you replace the Slack webhook URL and Slack channel with yours:
import requests
import sys
import os

SLACK_WEBHOOK_URL = 'https://hooks.slack.com/<your>/<slack>/<webhook>'
SLACK_CHANNEL = "#your-slack-channel"
ALERT_STATE = sys.argv[1]

alert_map = {
    "emoji": {"up": ":white_check_mark:", "down": ":fire:"},
    "text": {"up": "RESOLVED", "down": "FIRING"},
    "message": {"up": "Everything is good!", "down": "Stuff is burning!"},
    "color": {"up": "#32a852", "down": "#ad1721"}
}

def alert_to_slack(status, log_url, metric_url):
    data = {
        "text": "AlertManager",
        "username": "Notifications",
        "channel": SLACK_CHANNEL,
        "attachments": [
            {
                "text": "{emoji} [*{state}*] Status Checker\n {message}".format(
                    emoji=alert_map["emoji"][status],
                    state=alert_map["text"][status],
                    message=alert_map["message"][status]
                ),
                "color": alert_map["color"][status],
                "attachment_type": "default",
                "actions": [
                    {"name": "Logs", "text": "Logs", "type": "button", "style": "primary", "url": log_url},
                    {"name": "Metrics", "text": "Metrics", "type": "button", "style": "primary", "url": metric_url}
                ]
            }
        ]
    }
    r = requests.post(SLACK_WEBHOOK_URL, json=data)
    return r.status_code

alert_to_slack(ALERT_STATE, "https://grafana-logs.dashboard.local", "https://grafana-metrics.dashboard.local")
Testing it out
Time to test it out, so let’s assume something is down, then we can react on that event and action the following:
$ python slack_notifier.py down
Which will look like the following on slack:
And when recovery is in place, we can action the following:
$ python slack_notifier.py up
Which will look like this:
Thanks
That was a basic example on how you can use python to send slack messages.
In this quick post I will demonstrate how to discover an EC2 instance's private IP address via the AWS API by using tags, and then use Paramiko in Python to SSH to the EC2 instance and run commands on the target instance.
I have my development profile for AWS configured under dev, as can be seen below:
$ aws --profile dev configure list
Name Value Type Location
---- ----- ---- --------
profile dev manual --profile
access_key ****************xxxx assume-role
secret_key ****************xxxx assume-role
region eu-west-1 config-file ~/.aws/config
First we need to discover the private ip address from the api by referencing tags, and in this example we will use the Name tag:
import boto3
ec2 = boto3.Session(profile_name='dev', region_name='eu-west-1').client('ec2')
target_instances = ec2.describe_instances(
Filters=[{'Name':'tag:Name','Values':['my-demo-ec2-instance']}]
)
ec2_instances = []
for each_instance in target_instances['Reservations']:
for found_instance in each_instance['Instances']:
ec2_instances.append(found_instance['PrivateIpAddress'])
# ec2_instances
# ['172.31.2.89']
So we are instantiating an EC2 client with our configured dev profile, then we describe all instances with the tag key Name and value my-demo-ec2-instance, and append each private IP address to our ec2_instances list.
Next we want to define the commands that we want to run on the target ec2 instance:
commands = [
"echo hi",
"whoami",
"hostname"
]
In my case I only have 1 ec2 instance with the name my-demo-ec2-instance, but if you have more you can just loop through the list and perform the actions.
Next we want to establish the SSH connection:
import paramiko

k = paramiko.RSAKey.from_private_key_file("/Users/ruan/.ssh/id_rsa")
c = paramiko.SSHClient()
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
c.connect(hostname=ec2_instances[0], username="ruan", pkey=k, allow_agent=False, look_for_keys=False)
Once our SSH connection has been established, we can loop through our commands and execute them:
for command in commands:
    print("running command: {}".format(command))
    stdin, stdout, stderr = c.exec_command(command)
    print(stdout.read())
    print(stderr.read())

# close the connection when we are done
c.close()
In this tutorial I will demonstrate how to run Loki v2.0.0 behind an Nginx reverse proxy with basic HTTP authentication enabled on Nginx, and how to configure Nginx for websockets, which is required when you want to use tail in logcli via Nginx.
Assumptions
My environment consists of an AWS Application Load Balancer with a host entry and a target group associated to port 80 of my Nginx/Loki EC2 instance.
Health checks to my EC2 instance are being performed to instance:80/ready
I have an S3 bucket and a DynamoDB table already running in my account, which Loki will use. But NOTE that boltdb-shipper is production-ready since v2.0.0, which is awesome, because now you only require an object store such as S3, so you don't need DynamoDB.
More information on this topic can be found under their changelog
What can you expect from this blogpost
We will go through the following topics:
- Install Loki v2.0.0 and Nginx
- Configure HTTP Basic Authentication to Loki’s API endpoints
- Bypass HTTP Basic Authentication to the /ready endpoint for our Load Balancer to perform healthchecks
- Enable Nginx to upgrade websocket connections so that we can use logcli --tail
- Test out access to Loki via our Nginx Reverse Proxy
- Install and use LogCLI
Install Software
First we will install nginx and apache2-utils. In my use-case I will be using Ubuntu 20 as my operating system:
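That boils down to the following (assuming Ubuntu 20.04's default repositories):

```shell
sudo apt update
sudo apt install -y nginx apache2-utils
```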
Next we will install Loki v2.0.0. If you are upgrading from a previous version of Loki, I recommend checking out the upgrade guide mentioned on their releases page.
As you’ve noticed, we are providing an auth_basic_user_file at /etc/nginx/passwords, so let’s create a user that we will use to authenticate against Loki:
$ htpasswd -c /etc/nginx/passwords lokiisamazing
Enable and Start Services
Because we created a systemd unit file, we need to reload the systemd daemon:
You will notice that I have a /ready endpoint that I am proxy-passing to Loki, which bypasses authentication; this has been set up for my AWS Application Load Balancer's target group to perform health checks against.
We can verify if we are getting a 200 response code without passing authentication:
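A quick check could look like this (expecting a 200, since /ready bypasses basic auth in the Nginx config; adjust the host to your setup):

```shell
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/ready
```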
So let’s access the labels API endpoint by passing our basic auth credentials. To avoid leaking the password into your shell history, create a file and save your password in it:
$ vim /tmp/.pass
-> then enter your password and save the file <-
Expose the content as an environment variable:
$ pass=$(cat /tmp/.pass)
Now make a request to Loki’s labels endpoint by passing authentication:
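For example (the lokiisamazing user is the one created with htpasswd earlier, and the host is a placeholder for your setup):

```shell
curl -u "lokiisamazing:${pass}" http://localhost/loki/api/v1/labels
```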
And unset your pass environment variable, to clean up your tracks:
$ unset pass
LogCLI
Now for my favorite part: using logcli to interact with Loki, and more specifically using --tail, as it requires websockets; Nginx will now be able to upgrade those connections.
Install logcli; in my case I am using a Mac, so I will be using the darwin build:
In this post I will demonstrate how you can use Ansible to automate the task of adding one or more SSH public keys to multiple servers' authorized_keys files.
This will be focused on a scenario where you have 5 new SSH keys that we want to copy to our bastion host's authorized_keys file.
The User Accounts
We have our bastion server named bastion.mydomain.com where we would like to create the following accounts: john, bob, sarah, sam, adam, and also upload their personal SSH public keys to those accounts so that they can log on with their SSH private keys.
On my local directory, I have their ssh public keys as:
They will be referenced in our playbook as key: "{{ lookup('file', './' + item + '.pub') }}", but if they were on GitHub we could reference them as key: https://github.com/{{ item }}.keys; more info on that can be found in the authorized_key module documentation.
The Target Server
Our inventory for the target server only includes one host, but we can add as many as we want; it will look like this:
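A minimal sketch of such an inventory (the filename and hostname are placeholders for your environment):

```
$ cat inventory.ini
[bastion]
bastion.mydomain.com
```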
In this playbook, we reference the users that we want to create; it loops through those users, creating them on the target server, and also uses those names to match the SSH public key files on our laptop:
$ cat playbook.yml
---
- hosts: bastion
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: create local user account on the target server
      user:
        name: '{{ item }}'
        comment: '{{ item }}'
        shell: /bin/bash
        append: yes
        groups: sudo
        generate_ssh_key: yes
        ssh_key_type: rsa
      with_items:
        - john
        - bob
        - sarah
        - sam
        - adam
    - name: upload ssh public key to users authorized keys file
      authorized_key:
        user: '{{ item }}'
        state: present
        manage_dir: yes
        key: "{{ lookup('file', './' + item + '.pub') }}"
      with_items:
        - john
        - bob
        - sarah
        - sam
        - adam
There's a utility called sshuttle which allows you to VPN via an SSH connection, which is really handy when you quickly want to reach a private range that is accessible from a publicly reachable server such as a bastion host.
In this tutorial, I will demonstrate how to install sshuttle on a Mac (if you are using a different OS you can see their documentation), and then we will use the VPN connection to reach a "prod" and a "dev" environment.
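On macOS, the install could be as simple as the following, assuming you have Homebrew available:

```shell
brew install sshuttle
```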
SSH Config
We will declare 2 jump-boxes / bastion hosts in our ssh config:
dev-jump-host is a public server that has network access to our private endpoints in 172.31.0.0/16
prod-jump-host is a public server that has network access to our private endpoints in 172.31.0.0/16
In this case, the above example is 2 AWS accounts with the same CIDRs; I wanted to demonstrate using sshuttle for this reason, as if we had different CIDRs we could set up a dedicated VPN and route them respectively.
$ cat ~/.ssh/config
Host *
Port 22
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
ServerAliveInterval 60
ServerAliveCountMax 30
Host dev-jump-host
HostName dev-bastion.mydomain.com
User bastion
IdentityFile ~/.ssh/id_rsa
Host prod-jump-host
HostName prod-bastion.mydomain.com
User bastion
IdentityFile ~/.ssh/id_rsa
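The vpn_dev and vpn_prod helpers used below are not part of the ssh config itself; they could be shell functions in your ~/.bashrc or ~/.zshrc along these lines (assuming, as above, that both environments sit behind 172.31.0.0/16):

```shell
# route the private CIDR over the respective jump host with sshuttle
vpn_dev()  { sshuttle -r dev-jump-host 172.31.0.0/16; }
vpn_prod() { sshuttle -r prod-jump-host 172.31.0.0/16; }
```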
Then you should be able to use vpn_dev and vpn_prod from your terminal:
$ vpn_prod
[local sudo] Password:
Warning: Permanently added 'xx,xx' (ECDSA) to the list of known hosts.
client: Connected.
And in a new terminal we can connect to a RDS MySQL Database sitting in a private network:
$ mysql -h my-prod-db.pvt.mydomain.com -u dbadmin -p$pass
mysql>
Sshuttle as a Service
You can create a systemd unit file to run a sshuttle VPN as a service. In this scenario I provided 2 different VPN routes, dev and prod, so you can create 2 separate systemd unit files, but in my case I will only create one for prod:
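A sketch of such a unit file (the binary path /usr/local/bin/sshuttle and the unit name are assumptions; note that the service runs as root, so root's ssh config and keys must know about prod-jump-host):

```
# /etc/systemd/system/sshuttle-prod.service
[Unit]
Description=sshuttle vpn to the prod environment
After=network-online.target

[Service]
ExecStart=/usr/local/bin/sshuttle -r prod-jump-host 172.31.0.0/16
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```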