Ruan Bekker's Blog

From a Curious mind to Posts on Github

Environment Variables With Ansible

This is a quick post on how to use environment variables in Ansible.

Inventory

Our inventory.ini file looks like this:

[localhost]
localhost

Across Tasks

You can set environment variables across tasks, and let your tasks inherit the variables:

- hosts: localhost
  vars:
    var_mysecret: secret123

  tasks:
    - name: echo my env var
      environment:
        MYNAME: "{{ var_mysecret }}"
      shell: "echo hello $MYNAME > /tmp/bla.txt"
      args:
        creates: /tmp/bla.txt

When we run the task:

$ ansible-playbook -i inventory.ini -u ruan task.yml

Check the output:

$ cat /tmp/bla.txt
hello secret123

Environment Variables Per Task

You can set environment variables per task:

- hosts: localhost
  tasks:
    - name: echo my env var
      environment:
        MYNAME: "RUAN"
      shell: "echo hello $MYNAME > /tmp/bla2.txt"
      args:
        creates: /tmp/bla2.txt

Running the task:

$ ansible-playbook -i inventory.ini -u ruan task.yml

Checking the output:

$ cat /tmp/bla2.txt
hello RUAN
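
You can also read environment variables from the control host with Ansible's env lookup plugin. A minimal hedged sketch using an ad-hoc debug task, where MYGREETING is just a hypothetical variable for illustration:

$ MYGREETING=hello ansible localhost -i inventory.ini -m debug -a "msg={{ lookup('env', 'MYGREETING') }}"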

Read More

Read more on environment variables in Ansible in their documentation.

Setup a WireGuard VPN Server on Linux

Installation

I will be installing my WireGuard VPN server on an Ubuntu 18.04 server; for other distributions you can have a look at their docs.

$ sudo add-apt-repository ppa:wireguard/wireguard
$ sudo apt update
$ sudo apt install wireguard -y

Configuration

On the server, create the keys directory where we will save our keys:

$ mkdir -p /etc/wireguard/keys

Change into the keys directory and create the private and public key:

$ cd /etc/wireguard/keys
$ wg genkey | tee privatekey | wg pubkey > publickey
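
The private key is sensitive, so you may optionally want to restrict its file permissions:

$ chmod 600 privatekey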

Generate the pre-shared key:

$ wg genpsk > client.psk

On the client, create the keys directory:

$ mkdir -p ~/wireguard/keys

Create the private and public keys:

$ cd ~/wireguard/keys
$ wg genkey | tee privatekey | wg pubkey > publickey

Populate the server config:

$ cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = <output-of-server.privatekey>
Address = 192.168.199.1/32
ListenPort = 8999
PostUp = sysctl -w net.ipv4.ip_forward=1; iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o ens3 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o ens3 -j MASQUERADE

[Peer]
PublicKey = <output-of-client.publickey>
PresharedKey = <output-of-client.psk>
AllowedIPs = 192.168.199.2/32
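
The PostUp command above enables IP forwarding at runtime with sysctl -w, which does not persist across reboots. If you want forwarding to survive a reboot, you could optionally persist it, for example:

$ echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p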

Populate the client config:

$ cat ~/wireguard/wg0.conf
[Interface]
PrivateKey = <output-of-client.privatekey>
Address = 192.168.199.2/24
DNS = 1.1.1.1

[Peer]
PublicKey = <output-of-server.publickey>
PresharedKey = <output-of-client.psk>
Endpoint = <server-public-ip>:8999
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25

Start the Server

On the server, enable and start the service:

$ systemctl enable wg-quick@wg0.service
$ wg-quick up wg0
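
If you are running a firewall such as ufw on the server, remember to allow the WireGuard listen port (8999/udp in this config), for example:

$ sudo ufw allow 8999/udp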

On the client, connect to the VPN:

$ wg-quick up ~/wireguard/wg0.conf

Verify the status:

$ wg show
interface: wg0
  public key: +Giwk8Y5KS5wx9mw0nEIdQODI+DsR+3TcbMxjJqfZys=
  private key: (hidden)
  listening port: 8999

peer: Q8LGMj6CeCYQJp+sTu74mLMRoPFAprV8PsnS0cu9fDI=
  preshared key: (hidden)
  endpoint: 102.132.208.80:57800
  allowed ips: 192.168.199.2/32
  latest handshake: 22 seconds ago
  transfer: 292.00 KiB received, 322.15 KiB sent

Check if you can ping the private IP address of the VPN peer:

$ ping 192.168.199.2
PING 192.168.199.2 (192.168.199.2): 56 data bytes
64 bytes from 192.168.199.2: icmp_seq=0 ttl=63 time=304.844 ms

Managing Background Processes With Screen

image

This is a quick post on how to create, manage and delete background processes with screen

About

Screen allows you to run processes in a different session, so when you exit your terminal the process will still be running.

Install

Install screen on your operating system of choice; for Debian-based systems it will be:

$ sudo apt install screen -y

Working with Screen

To create a screen session you can just run screen, or you can pass the -S argument to give the session a name:

$ screen -S my-screen-session

Now you will be dropped into a screen session, run a ping:

$ ping 8.8.8.8

Now, to allow the ping process to keep running in the background, use the key sequence to detach from the screen session:

Ctrl + a, then press d

To view the screen session:

$ screen -ls
There is a screen on:
  45916.my-screen-session (Detached)
1 Socket in /var/folders/jr/dld7mjhn0sx6881xs_0s7rtc0000gn/T/.screen.

To resume the screen session, pass the screen id or screen name as an argument:

$ screen -r my-screen-session
64 bytes from 8.8.8.8: icmp_seq=297 ttl=55 time=7.845 ms
64 bytes from 8.8.8.8: icmp_seq=298 ttl=55 time=6.339 ms

Scripting

To start a process in a detached screen session with a one-liner (useful for scripting), you can do it like this:

$ screen -S ping-process -m -d sh -c "ping 8.8.8.8"
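
If you also want to capture the output of the detached process, screen can log the session with the -L flag; a hedged example, where the log file usually ends up as screenlog.0 in the directory where screen was started:

$ screen -S ping-process -L -m -d sh -c "ping 8.8.8.8"
$ cat screenlog.0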

Listing the screen session:

$ screen -ls
There is a screen on:
  46051.ping-process  (Detached)

Terminating the screen session:

$ screen -S ping-process -X quit

Thank You

Let me know what you think. If you liked my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


2019: My Personal Highlights for the Year

image

2019 was a great year! I met some awesome people from Civo, Traefik, Rancher, OpenFaas, Docker, Elastic, AWS and the list goes on.

Thank you to every one of you that helped me during this year, to the ones who inspired me, and for all the great motivation, support and shout outs! There are so many people to thank, and even if you are not mentioned: if you ever interacted with me, helped me or supported me, thank you to each and every one of you!

Below is a list of some of my personal highlights:

Number of Blogposts per Website:

Learnings

  • You cannot be good at everything
  • You need to switch off every now and then
  • Work / Life balance is important
  • A hobby other than work does wonders to help switch off every now and then

Contributions on Github

Contributions for 2019:

image

Most Starred Github Repository:

image

Most Starred Gist:

image

Analytics

Some analytics for my blog posts:

blog.ruanbekker.com

Analytics for blog.ruanbekker.com:

image

Top 10 Most Viewed Pages:

image

Most Viewed by Country:

image

sysadmins.co.za

Analytics for sysadmins.co.za:

image

Top 10 Most Viewed Pages:

image

Most Viewed by Country:

image

Authors on Blogposts:

A list of places where I blog:

Proud Moments

Some of my proud moments on Twitter:

2019.06.11 - Scaleway Tweet on Kapsule

2019.06.11 - Traefik Tweet on Kubernetes

2019.07.13 - Mention from OpenFaas on VSCode Demo

2019.07.14 - Elasticsearch Tweet from Devconnected

2019.08.14 - Rancher’s Tweet on my Rpi K3s Blogpost

2019.08.19 - Civo Learn Guide

2019.10.09 - Civo Marketplace MongoDB

2019.10.23 - Civo Marketplace Jenkins

2019.11.05 - Traefik Swag

2019.11.14 - Mentions on Civo Blog for KUBE100

Some proud moments from mentions on blog posts:

2019.08.06 - VPNCloud Peer to Peer Docs

image

2019.08.06 - MarkHeath Blog Post Mention

image

2019.08.08 - Civo Docker Swarm Blogpost

2019.08.13 - Raspberry Pi Post (teamserverless)

2019.10.11 - Serverless Email - Migration OpenFaas Blog post:

Certifications:

MongoDB Basics:

image

MongoDB Cluster Administration:

image

Thank You

Thank you for reading. If you like my content, feel free to visit me at ruan.dev or follow me on twitter at @ruanbekker


Setting the Correct Service Name in Datadog Logging for Docker Swarm

For some reason, when logging to Datadog from applications running on Docker Swarm, the service names in Datadog appear as the name of the docker image. The application talks to the Datadog agent, which runs in global mode on Swarm.

Setting DATADOG_SERVICE_NAME or DD_SERVICE_NAME as environment variables on the swarm service has zero effect, as the logs keep showing the service name as the docker image name, for example:

image

If we inspect the tags, we can see that the docker image shows up as the source and is mapped through as the service name. The swarm service name is what we actually want as the service name (not alpine):

image

One way to fix this is to set up a pipeline processor. Head over to Logs -> Configuration:

image

Select “Pipelines” and add a new pipeline, select the filter source:alpine to narrow the results down to the alpine image, and name your pipeline:

image

Next, add a new processor, set the type to remapper, select “swarm_service” as the tag group, set the attribute to service, and name the processor:

image

Add a new processor:

image

Select a service remapper, set the attribute to service and name the processor:

image

Now when you go back to logs, you will find that the service name is being set to the correct service name in datadog:

image

When you inspect one of the logs, you will see that the service attribute is set on the log:

image
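
Another approach, depending on your agent version and setup, is to set the service at the source using Datadog's autodiscovery log label on the swarm service, so the agent tags the logs before they reach Datadog. A hedged sketch, where my-swarm-service is just an example name:

$ docker service create \
  --name my-swarm-service \
  --container-label com.datadoghq.ad.logs='[{"source": "alpine", "service": "my-swarm-service"}]' \
  alpine ping 127.0.0.1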

Setup AWS S3 Cross Account Access


In this tutorial I will demonstrate how to set up cross-account access for S3.

Scenario

We will have 2 AWS Accounts:

  1. a Green AWS Account, which will host the IAM Users; this account will only be used for our IAM users.

  2. a Blue AWS Account which will be the account that hosts our AWS Resources, S3 in this scenario.

We will then allow the Green Account to access the Blue Account’s S3 Bucket.

Setup the Blue Account

In the Blue Account, we will set up the S3 Bucket, as well as the Trust Relationship and the Policy, which is where we will define what we want to allow for the Green Account.

image

Now we need to set up the IAM Role, which will trust the Green Account and also define what it is allowed to do.

Go ahead to your IAM Console and create an IAM Policy (just remember to replace the bucket name if you are following along):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PutGetListAccessOnS3",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::ruanbekker-prod-s3-bucket",
                "arn:aws:s3:::ruanbekker-prod-s3-bucket/*"
            ]
        }
    ]
}
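
If you prefer the CLI over the console, you could create the same policy with the AWS CLI; a hedged sketch, assuming the JSON above is saved as policy.json:

$ aws iam create-policy --policy-name CrossAccountS3Access --policy-document file://policy.json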

In my case I have named my IAM Policy CrossAccountS3Access. After you have created your IAM Policy, go ahead and create an IAM Role. Here we need the source account that we want to allow as a trusted entity, which will be the AWS AccountId of the Green Account:

image

Associate the IAM Policy that you created earlier:

image

After you have done that, you should see a summary screen:

image

Make note of your IAM Role ARN, it will look something like this: arn:aws:iam::xxxxxxxxxxxx:role/CrossAccountS3Access-Role
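
As a hedged CLI alternative for the role, you could create the trust policy, the role, and the policy attachment like this (the account-id placeholders are just illustrations):

$ cat > trust-policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::<green-account-id>:root"},
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
$ aws iam create-role --role-name CrossAccountS3Access-Role --assume-role-policy-document file://trust-policy.json
$ aws iam attach-role-policy --role-name CrossAccountS3Access-Role --policy-arn arn:aws:iam::<blue-account-id>:policy/CrossAccountS3Access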

Setup the Green Account

In the Green Account we will create the IAM User, and the credentials will be provided to the user that requires access to the S3 Bucket.

Let’s create an IAM Group; I will name mine prod-s3-users. I will just create the group, as I will attach the policy later:

image

From the IAM Group, select the Permissions tab and create a New Inline Policy:

image

Select the “STS” service, select the “AssumeRole” action, and provide the Role ARN of the Blue Account that we created earlier:

image

This will allow users in the Green account to assume the role in the Blue account, and they will only obtain permissions to access the resources that we have defined in the policy document of the Blue Account. In summary, it should look like this:

image

Select the Users tab on the left hand side, create a New IAM User (I will name mine s3-prod-user) and select the “Programmatic Access” check box, as we need API keys since we will be using the CLI to access S3:

image

Then from the next window, add the user to the group that we have created earlier:

image

Test Cross Account Access

Let’s configure our AWS CLI with the API Keys that we received. Our credential provider will consist of 2 profiles: the Green profile, which holds the API Keys of the Green Account:

$ aws configure --profile green
AWS Access Key ID [None]: AKIATPRT2G4SAHA7ZQU2
AWS Secret Access Key [None]: x
Default region name [None]: eu-west-1
Default output format [None]: json

And configure the Blue profile that will reference the Green account as a source profile and also specify the IAM Role ARN of the Blue Account:

$ vim ~/.aws/credentials
[blue]
role_arn=arn:aws:iam::xxxxxxxxxxxx:role/CrossAccountS3Access-Role
source_profile=green
region=eu-west-1

Now we can test if we can authenticate with our Green AWS Account:

$ aws --profile green sts get-caller-identity
{
    "UserId": "AKIATPRT2G4SAHA7ZQU2",
    "Account": "xxxxxxxxxxxx",
    "Arn": "arn:aws:iam:: xxxxxxxxxxxx:user/s3-prod-user"
}
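
You can also verify that the blue profile correctly assumes the role; the returned ARN should reference an assumed-role session rather than the IAM user:

$ aws --profile blue sts get-caller-identity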

Now let’s upload an object to S3 using our blue profile:

$ aws --profile blue s3 cp foo s3://ruanbekker-prod-s3-bucket/
upload: ./foo to s3://ruanbekker-prod-s3-bucket/foo

Let’s verify if we can see the object:

$ aws --profile blue s3 ls s3://ruanbekker-prod-s3-bucket/
2019-10-03 22:13:30      14582 foo

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on twitter at @ruanbekker


I’ve recently started a range of developer t-shirts, let me know what you think:

How to Setup VPC Peering on AWS


In this tutorial I will demonstrate how to create a VPC Peering Connection between two AWS Accounts, how to route traffic between them, and then show you how to create two EC2 Instances and SSH from one to the other via its Private IP Address.

Scenario Information

We will have Two AWS Accounts in this demonstration, a “Green AWS Account” and a “Blue AWS Account”.

In this scenario we have two teams; both teams manage their own account, and the two teams need to be able to communicate with each other. To keep it simple, each team has an EC2 instance and the two EC2 instances need to be able to communicate with each other.

Therefore we will set up a VPC Peering Connection between the two accounts. Both accounts will be operating in the eu-west-2 (London) region.

Account, CIDR
green: 10.1.0.0/16
blue:  10.2.0.0/16

Getting Started

This will be our Green AWS Account:

image

This will be our Blue AWS Account:

image

Creating the VPCs

From our green account, head over to VPC and create a new VPC with a CIDR of 10.1.0.0/16:

image

Then head over to the blue account, head over to VPC and create a new VPC with CIDR of 10.2.0.0/16:

image

So in summary we have the following resources:

Green: vpc-0af4b247a1353b78b | 10.1.0.0/16
Blue: vpc-031c4ce3f56660c30 | 10.2.0.0/16
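
If you prefer to script it, the same VPCs could be created with the AWS CLI; a hedged sketch, assuming green and blue CLI profiles are configured:

$ aws --profile green --region eu-west-2 ec2 create-vpc --cidr-block 10.1.0.0/16
$ aws --profile blue --region eu-west-2 ec2 create-vpc --cidr-block 10.2.0.0/16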

Creating the Subnets

Now we need to create subnets for the VPCs that we created. We will create the following subnets in our VPC, each subnet in its own availability zone:

10.1.0.0/20 (az-2a)
10.1.16.0/20 (az-2b)
10.1.32.0/20 (az-2c)
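
As a hedged CLI example, the first of these subnets could be created like this (using the green VPC id from the summary above):

$ aws --profile green --region eu-west-2 ec2 create-subnet --vpc-id vpc-0af4b247a1353b78b --cidr-block 10.1.0.0/20 --availability-zone eu-west-2a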

Let’s go ahead and do this. Head over to your green account and, from the VPC section, select “Subnets”:

image

Go ahead and create a subnet, where you will need to specify the VPC that you created, select the first CIDR block, in my case 10.1.0.0/20, and select the first AZ:

image

Do this for the other two subnets as well and then when you are done, it may look more or less like this:

image

Repeat this process so that you have three subnets for your blue account as well:

image

Setup VPC Peering Connection

Now that we’ve created our VPCs and their subnets, we want to peer the two VPCs with each other so that we have a direct connection between them, and so that the EC2 instances in our green account are able to connect with the EC2 instances in our blue account.

Head over to your green account’s VPC section and select “Peering Connections”:

image

Create a new peering connection. We will first need to name our peering connection and select the source VPC, which will be our green account’s VPC. Since the VPC that we want to peer with is in another account, get the AWS Account ID from the blue account, select “Another account” and provide the account id that we want to peer with, then select the AWS Region and provide the VPC ID of the blue account:

image

Once you create the peering connection, you will find the peering request details:

image

Now let’s head over to our blue Account, head over to VPC, select Peering connections and you will find the peering request from our green account:

image

From the top, hit “Actions” and accept the request:

image

You should see that the VPC Peering connection has been established:

image

From the blue account you should see that the VPC Peering Connection is active:

image

If you head back to the green account, you will see under Peering Connections that the connection has been established:

image
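
For reference, the equivalent peering request and acceptance could also be done from the CLI; a hedged sketch, where the blue account id and pcx id are placeholders:

$ aws --profile green --region eu-west-2 ec2 create-vpc-peering-connection --vpc-id vpc-0af4b247a1353b78b --peer-owner-id <blue-account-id> --peer-vpc-id vpc-031c4ce3f56660c30
$ aws --profile blue --region eu-west-2 ec2 accept-vpc-peering-connection --vpc-peering-connection-id <pcx-id>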

We have now successfully created our VPC peering connection, and the two VPCs from the different accounts have been peered. Next we would like to launch our EC2 instances in our VPCs; we will connect to the EC2 instance in our green account via the internet and then SSH to the EC2 instance in our blue account over the VPC peering connection, using its Private IP Address.

Setup Internet Gateway

In order to connect to a Public Elastic IP, we first need to create an Internet Gateway on our VPC and add a route that routes all public traffic via our Internet Gateway. This allows the resources in that VPC to connect to the Internet.

Head over to “Internet Gateways”, and create a new Internet Gateway:

image

Our IGW (Internet Gateway) will now be in a detached state, so we need to attach it to our VPC. Hit “Actions”, then select “Attach to VPC”, and select your VPC:

image

You should now see that your IGW has been attached to your VPC:

image

Now that we have created an IGW and associated it with our VPC, we need to configure our routing table so that it knows how to route non-local traffic via the IGW.

Configure Routing Table

Head over to VPC, select your VPC, select the “Route Tables” section from the left and you should see the following when you select the “Routes” section:

image

Select “Edit Routes” and add a route with the destination 0.0.0.0/0. Select Internet Gateway as the target; it will filter through your available IGWs, so select the IGW that you created earlier, then select save. (If your blue account needs internet access, repeat these steps on the blue account as well.)

image

While we are at our routing table configuration, we should also inform our VPC how to reach the network of the VPC in the other account, so that our green app (10.1.0.0/16) can reach our blue app (10.2.0.0/16) via the Peering Connection.

We do this by adding a route to our routing table. From the green account’s VPC’s routing table add a new route with the destination of 10.2.0.0/16, select “Peering Connection” as the target and it should resolve to the peering connection resource that we created, then select save:

image

Now our green Account knows how to route traffic to our blue account and also knows which network traffic to route. But we also need to route traffic back. Head over to your blue Account and add a route 10.1.0.0/16 to the peering connection so that we can route traffic back to our green Account:

image
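
The same routes could be added from the CLI; a hedged sketch with placeholder route table and peering connection ids:

$ aws --profile green --region eu-west-2 ec2 create-route --route-table-id <green-rtb-id> --destination-cidr-block 10.2.0.0/16 --vpc-peering-connection-id <pcx-id>
$ aws --profile blue --region eu-west-2 ec2 create-route --route-table-id <blue-rtb-id> --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id <pcx-id>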

Launch EC2 Instances

Now we want to launch an EC2 instance in each account, making sure to launch them into the VPCs that we created. I will also be creating two new SSH keys (blue-keypair + green-keypair), and I have created a Security Group that allows ICMP and SSH from anywhere; this is purely for demonstration (always review the sources that you want to allow).
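
For reference, the SSH rule of such a security group could be added from the CLI like this (the group id is a placeholder, the ICMP rule can be added in the same way or via the console, and you should restrict the CIDR ranges in a real environment):

$ aws --profile green --region eu-west-2 ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0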

For our green account:

image

For our blue account:

image

Once the EC2 instances are deployed, you should see something like this. For my green account:

image

And for my blue account:

image

Public IP Addressing

Now that our EC2 instances are provisioned, we will be connecting to our green EC2 instance using a Public IP, therefore we need to create an Elastic IP. From EC2, select Elastic IPs and allocate a New Address:

image

Select the IP, hit “Actions” and select “Associate Address”, then select the EC2 instance to which you want to associate the Elastic IP to:

image

You should now see that the EC2 instance has a Public IP assigned to it:

image

Test Network Connectivity

From the downloaded SSH keypairs:

$ ls | grep keyp
blue-keypair.pem.txt
green-keypair.pem.txt

Apply the correct permissions to our keypairs so that we can use them to SSH:

$ chmod 0400 blue-keypair.pem.txt green-keypair.pem.txt

We will want to add both SSH keys to our agent so we can include them when we SSH:

$ eval $(ssh-agent -t 36000)
Agent pid 6613

Add both keys to your ssh-agent:

$ ssh-add blue-keypair.pem.txt
Identity added: blue-keypair.pem.txt (blue-keypair.pem.txt)

$ ssh-add green-keypair.pem.txt
Identity added: green-keypair.pem.txt (green-keypair.pem.txt)

SSH to our Green EC2 instance:

$ ssh -A ec2-user@3.11.6.171

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-10-1-1-190 ~]$

Now let’s ping our Blue EC2 Instance, which will be accessible via our VPC Peering Connection:

[ec2-user@ip-10-1-1-190 ~]$ ping 10.2.1.167
PING 10.2.1.167 (10.2.1.167) 56(84) bytes of data.
64 bytes from 10.2.1.167: icmp_seq=1 ttl=255 time=0.754 ms
64 bytes from 10.2.1.167: icmp_seq=2 ttl=255 time=0.854 ms

And since we’ve allowed SSH traffic, we should be able to SSH to our instance via its Private IP Address:

[ec2-user@ip-10-1-1-190 ~]$ ssh 10.2.1.167

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-10-2-1-167 ~]$

Now we have successfully created a VPC Peering Connection between two AWS Accounts and demonstrated how to communicate to and from resources in those VPCs.

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on twitter at @ruanbekker


Feel free to have a look at my Developer T-Shirt Range:

How to Deploy a Webapp on a AWS EKS Kubernetes Cluster

kubernetes-eks-deploy-webapp


In our previous post, Part 1 - Setup a EKS Cluster, we went through the steps of setting up an EKS Cluster.

What are we doing today

In this post, we will deploy a sample web application to EKS and access our application using an ELB that EKS provides us.

Deployment Manifests

We will have two manifests that we will deploy to Kubernetes, a deployment manifest that will hold the information about our application and a service manifest that will hold the information about the service load balancer.

In the deployment manifest you will notice that we are specifying that we want 3 replicas, we are using labels so that our service and deployment can find each other, and we are using a basic HTTP web application that listens on port 8000 inside the container:

$ cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-hostname-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app-container
          image: ruanbekker/hostname
          ports:
          - name: http
            containerPort: 8000

In the service manifest you will notice that we are specifying type: LoadBalancer; this will tell EKS to provision an ELB for your application so that we can access our application from the internet.

You will see that the selector specifies my-app, which we also provided in our deployment.yml, so that our service knows where to find our backend application. We are also stating that the service is listening on port 80, and will forward its traffic to our deployment on port 8000:

$ cat service.yml
apiVersion: v1
kind: Service
metadata:
  name: my-hostname-app-service
  labels:
    app: my-app
spec:
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: my-app
  type: LoadBalancer

Deployment Time

Deploy our application:

$ kubectl apply -f deployment.yml
deployment.apps/my-hostname-app created

Deploy our service:

$ kubectl apply -f service.yml
service/my-hostname-app-service created

Now when we look at our deployment, we should see that 3 replicas of our application are running:

$ kubectl get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
my-hostname-app   3/3     3            3           4m38s

To see the pods of that deployment:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
my-hostname-app-5dcd48dfc5-2j8zm   1/1     Running   0          24s
my-hostname-app-5dcd48dfc5-58vkc   1/1     Running   0          24s
my-hostname-app-5dcd48dfc5-cmjwj   1/1     Running   0          24s

As we have more than one service in our EKS cluster, we can specify the labels that we have applied on our manifests to filter what we want to see (app: my-app):

$ kubectl get service --selector app=my-app
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE
my-hostname-app-service   LoadBalancer   10.100.114.166   a460661ce089b11ea97cd06dd7513db6-669054126.eu-west-1.elb.amazonaws.com   80:30648/TCP   2m29s

As we can see, EKS provisioned an ELB for us, and we can access the application by making an HTTP request:

$ curl -i http://a460661ce089b11ea97cd06dd7513db6-669054126.eu-west-1.elb.amazonaws.com
HTTP/1.1 200 OK
Date: Sat, 16 Nov 2019 18:05:27 GMT
Content-Length: 43
Content-Type: text/plain; charset=utf-8

Hostname: my-hostname-app-5dcd48dfc5-2j8zm
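
Since the service load balances across the replicas, repeating the request should return different pod hostnames; a quick hedged check:

$ for i in $(seq 1 5); do curl -s http://a460661ce089b11ea97cd06dd7513db6-669054126.eu-west-1.elb.amazonaws.com; done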

Scaling our Deployment

Let’s scale our deployment to 5 replicas:

$ kubectl scale deployment/my-hostname-app --replicas 5
deployment.extensions/my-hostname-app scaled

After all the pods have been deployed, you should see that the 5 out of 5 pods that we provisioned are running:

$ kubectl get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
my-hostname-app   5/5     5            5           5m7s

We can then also see the pods that our deployment is referencing:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
my-hostname-app-5dcd48dfc5-2j8zm   1/1     Running   0          6m8s
my-hostname-app-5dcd48dfc5-58vkc   1/1     Running   0          6m8s
my-hostname-app-5dcd48dfc5-cmjwj   1/1     Running   0          6m8s
my-hostname-app-5dcd48dfc5-m4xcq   1/1     Running   0          67s
my-hostname-app-5dcd48dfc5-zf6xl   1/1     Running   0          68s
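
When you are done testing, you can clean up the resources we created; deleting the service also removes the ELB that EKS provisioned for it:

$ kubectl delete -f service.yml
$ kubectl delete -f deployment.yml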

Further Reading on Kubernetes

This is one amazing resource that covers a lot of kubernetes topics and will help you throughout your EKS journey: - EKSWorkshop

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on twitter at @ruanbekker

How to Setup a AWS EKS Kubernetes Cluster

kubernetes-eks-aws-cluster


This will be a tutorial split up into two posts, where I will show you how to provision an EKS Cluster (Elastic Kubernetes Service) on AWS, and in the next post, how to deploy a web application to your cluster (Part 2 - Deploy a Web App to EKS).

And then came EKS

As some of you may know, I’m a massive AWS fan boy, and since AWS released their managed Kubernetes service, I was quite excited to test it out. A couple of months passed and I got the opportunity to test it out on the job as we moved to Kubernetes.

A couple of months have passed, we are serving multiple production workloads on EKS, and I am really impressed with the service.

Amazon provides a vanilla Kubernetes version; they manage the master nodes, and they have an extra component called the cloud controller that runs on the master nodes, which is the AWS-native component that talks to other AWS services (as far as I can recall).

What are we doing today

In this post we will cover the following topics:

  • Deploy an EKS Cluster
  • View the resources to see what was provisioned on AWS
  • Interact with Kubernetes using kubectl
  • Terminate a node and verify that the ASG replaces it
  • Scale down your worker nodes
  • Run a pod on your cluster

In the next post we will deploy a web service to our EKS cluster.

Install Pre-Requirements

We require awscli, eksctl and kubectl before we continue. I will be installing this on MacOS, but you can have a look at the following links if you are using a different operating system:

Install awscli:

$ pip install awscli

Install kubectl:

$ brew update
$ brew install kubernetes-cli

Install eksctl:

$ brew tap weaveworks/tap
$ brew install weaveworks/tap/eksctl

Deploy EKS

Create a SSH key if you would like to SSH to your worker nodes:

$ ssh-keygen -b 2048 -f ~/.ssh/eks -t rsa -q -N ""

Now we need to import our public key to EC2, note that I am referencing --profile dev which is my dev AWS profile. If you only have one default profile, you can use --profile default:

$ aws --profile dev --region eu-west-1 ec2 import-key-pair --key-name "eks" --public-key-material file://~/.ssh/eks.pub

Provision your cluster using eksctl. This will deploy two cloudformation stacks, one for the kubernetes cluster, and one for the node group.

I am creating a Kubernetes cluster with 3 nodes of instance type t2.small, using version 1.14:

$ eksctl --profile dev --region eu-west-1 create cluster --name my-eks-cluster --version 1.14 --nodes 3 --node-type t2.small --ssh-public-key eks

[]  eksctl version 0.9.0
[]  using region eu-west-1
[]  setting availability zones to [eu-west-1a eu-west-1b eu-west-1c]
[]  subnets for eu-west-1a - public:192.168.0.0/19 private:192.168.96.0/19
[]  subnets for eu-west-1b - public:192.168.32.0/19 private:192.168.128.0/19
[]  subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
[]  nodegroup "ng-f27f560e" will use "ami-059c6874350e63ca9" [AmazonLinux2/1.14]
[]  using Kubernetes version 1.14
[]  creating EKS cluster "my-eks-cluster" in "eu-west-1" region
[]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=my-eks-cluster'
[]  CloudWatch logging will not be enabled for cluster "my-eks-cluster" in "eu-west-1"
[]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-1 --cluster=my-eks-cluster'
[]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "my-eks-cluster" in "eu-west-1"
[]  2 sequential tasks: { create cluster control plane "my-eks-cluster", create nodegroup "ng-f27f560e" }
[]  building cluster stack "eksctl-my-eks-cluster-cluster"
[]  deploying stack "eksctl-my-eks-cluster-cluster"
[]  building nodegroup stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[]  --nodes-min=3 was set automatically for nodegroup ng-f27f560e
[]  --nodes-max=3 was set automatically for nodegroup ng-f27f560e
[]  deploying stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[+]  all EKS cluster resources for "my-eks-cluster" have been created
[+]  saved kubeconfig as "/Users/ruan/.kube/config"
[]  adding identity "arn:aws:iam::000000000000:role/eksctl-my-eks-cluster-nodegroup-n-NodeInstanceRole-SNVIW5C3J3SM" to auth ConfigMap
[]  nodegroup "ng-f27f560e" has 0 node(s)
[]  waiting for at least 3 node(s) to become ready in "ng-f27f560e"
[]  nodegroup "ng-f27f560e" has 3 node(s)
[]  node "ip-192-168-42-186.eu-west-1.compute.internal" is ready
[]  node "ip-192-168-75-87.eu-west-1.compute.internal" is ready
[]  node "ip-192-168-8-167.eu-west-1.compute.internal" is ready
[]  kubectl command should work with "/Users/ruan/.kube/config", try 'kubectl get nodes'
[+]  EKS cluster "my-eks-cluster" in "eu-west-1" region is ready

Now that our EKS cluster has been provisioned, let’s browse through our AWS Management Console to understand what was provisioned.

View the Provisioned Resources

If we have a look at the Cloudformation stacks, we can see the two stacks that I mentioned previously:

image

Navigating to our EC2 Instances dashboard, we can see the three worker nodes that we provisioned. Remember that AWS manages the master nodes, so we can’t see them.

image

We have an ASG (Auto Scaling Group) associated with our worker node group. We can make use of autoscaling and a desired capacity, so we will test this out later by deleting a worker node and verifying that it gets replaced:

image

Navigate using Kubectl:

Eksctl already applied the kubeconfig to ~/.kube/config, so we can start using kubectl. Let’s start by viewing the nodes:

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-42-186.eu-west-1.compute.internal   Ready    <none>   8m50s   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal    Ready    <none>   8m55s   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal    Ready    <none>   8m54s   v1.14.7-eks-1861c5

Viewing our pods from our kube-system namespace (we don’t have any pods in our default namespace at the moment):

$ kubectl get pods --namespace kube-system
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-btfbk             1/1     Running   0          11m
aws-node-c6ktk             1/1     Running   0          11m
aws-node-wf8mc             1/1     Running   0          11m
coredns-759d6fc95f-ljxzf   1/1     Running   0          17m
coredns-759d6fc95f-s6lg6   1/1     Running   0          17m
kube-proxy-db46b           1/1     Running   0          11m
kube-proxy-ft4mc           1/1     Running   0          11m
kube-proxy-s5q2w           1/1     Running   0          11m

And our services from all our namespaces:

$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP         19m
kube-system   kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   19m

Testing the ASG

Let’s view our current nodes in our cluster, then select the first node, delete it and verify if the ASG replaces that node.

First, view the nodes and select one node’s address:

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-42-186.eu-west-1.compute.internal   Ready    <none>   37m   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal    Ready    <none>   37m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal    Ready    <none>   37m   v1.14.7-eks-1861c5

Use the awscli to lookup the EC2 instance id, as we will need this id to delete the node:

$ aws --profile dev ec2 describe-instances --query 'Reservations[*].Instances[?PrivateDnsName==`ip-192-168-42-186.eu-west-1.compute.internal`].[InstanceId][]' --output text
i-0d016de17a46d5178

Now that we have the EC2 instance id, delete the node:

$ aws --profile dev ec2 terminate-instances --instance-id i-0d016de17a46d5178
{
    "TerminatingInstances": [
        {
            "CurrentState": {
                "Code": 32,
                "Name": "shutting-down"
            },
            "InstanceId": "i-0d016de17a46d5178",
            "PreviousState": {
                "Code": 16,
                "Name": "running"
            }
        }
    ]
}

Now that we have deleted the EC2 instance, view the nodes and you will see the node has been terminated:

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-75-87.eu-west-1.compute.internal   Ready    <none>   41m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   41m   v1.14.7-eks-1861c5

Allow about a minute so that the ASG can replace the node, and when you list again you will see that the ASG replaced the node:

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-42-61.eu-west-1.compute.internal   Ready    <none>   50s   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal   Ready    <none>   42m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   42m   v1.14.7-eks-1861c5

Run a Pod

Run a busybox pod on your EKS cluster:

$ kubectl run --rm -it --generator run-pod/v1 my-busybox-pod --image busybox -- /bin/sh

You will be dropped into a shell:

/ # busybox | head -1
BusyBox v1.31.1 (2019-10-28 18:40:01 UTC) multi-call binary.

And exit the shell:

/ # exit
Session ended, resume using 'kubectl attach my-busybox-pod -c my-busybox-pod -i -t' command when the pod is running
pod "my-busybox-pod" deleted

Scaling Nodes

While I will not be covering auto-scaling in this post, we can manually scale the worker node count. Let’s scale it down to 1 node.

First we need to get the EKS cluster name:

$ eksctl --profile dev --region eu-west-1 get clusters
NAME      REGION
my-eks-cluster    eu-west-1

Then we need the node group id:

$ eksctl --profile dev --region eu-west-1 get nodegroup --cluster my-eks-cluster
CLUSTER       NODEGROUP   CREATED         MIN SIZE    MAX SIZE    DESIRED CAPACITY    INSTANCE TYPE   IMAGE ID
my-eks-cluster    ng-f27f560e 2019-11-16T16:55:41Z    3       3       3           t2.small    ami-059c6874350e63ca9

Now that we have the node group id, we can scale the node count:

$ eksctl --profile dev --region eu-west-1 scale nodegroup --cluster my-eks-cluster --nodes 1 ng-f27f560e

[]  scaling nodegroup stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e" in cluster eksctl-my-eks-cluster-cluster
[]  scaling nodegroup, desired capacity from 3 to 1, min size from 3 to 1

Now when we use kubectl to view the nodes, we will see we only have 1 worker node:

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   73m   v1.14.7-eks-1861c5

Clean Up

If you want to follow along and deploy a web application to your EKS cluster, have a look at Part 2 - EKS Tutorial before terminating the cluster.

Once you are ready to terminate your EKS cluster, you can go ahead and terminate the cluster:

$ eksctl --profile dev --region eu-west-1 delete cluster --name my-eks-cluster

[]  eksctl version 0.9.0
[]  using region eu-west-1
[]  deleting EKS cluster "my-eks-cluster"
[+]  kubeconfig has been updated
[]  cleaning up LoadBalancer services
[]  2 sequential tasks: { delete nodegroup "ng-f27f560e", delete cluster control plane "my-eks-cluster" [async] }
[]  will delete stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[]  waiting for stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e" to get deleted
[]  will delete stack "eksctl-my-eks-cluster-cluster"
[+]  all cluster resources were deleted

Further Reading on Kubernetes

This is one amazing resource that covers a lot of kubernetes topics and will help you throughout your EKS journey: - EKSWorkshop

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on twitter at @ruanbekker

Testing AWS Lambda Functions Locally on Docker With LambCi


I discovered a Docker image called LambCi that allows you to test Lambda functions locally on Docker, and I wanted to share how it works.

Python Lambda Function

We will create a basic lambda function to demonstrate how it works.

$ mkdir task
$ cat > task/lambda_function.py << EOF
def lambda_handler(event, context):
    # use the name from the event payload if it was provided
    try:
        name = event['name']
        output_string = 'My name is {}'.format(name.capitalize())
    except KeyError:
        output_string = 'A name was not defined in the event payload'

    return output_string
EOF

Now that we’ve created the function, run the docker container with the function’s handler method and the event payload as parameters:

$ docker run --rm -v "$PWD/task":/var/task lambci/lambda:python3.7 lambda_function.lambda_handler '{"name": "ruan"}'
START RequestId: 70025895-1233-1362-8006-c2784b5d80b6 Version: $LATEST
END RequestId: 70025895-1233-1362-8006-c2784b5d80b6
REPORT RequestId: 70025895-1233-1362-8006-c2784b5d80b6    Duration: 7.51 ms   Billed Duration: 100 ms Memory Size: 1536 MB    Max Memory Used: 23 MB
"My name is Ruan"

And another call:

$ docker run --rm -v "$PWD/task":/var/task lambci/lambda:python3.7 lambda_function.lambda_handler '{"nam": "ruan"}'
START RequestId: f7ab2e97-05db-1184-a009-11b92638534f Version: $LATEST
END RequestId: f7ab2e97-05db-1184-a009-11b92638534f
REPORT RequestId: f7ab2e97-05db-1184-a009-11b92638534f    Duration: 5.32 ms   Billed Duration: 100 ms Memory Size: 1536 MB    Max Memory Used: 23 MB
"A name was not defined in the event payload"

Check out the Docker Hub page for more info: https://hub.docker.com/r/lambci/lambda/

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on twitter at @ruanbekker