Ruan Bekker's Blog

From a Curious mind to Posts on Github

Setup AWS S3 Cross Account Access


In this tutorial I will demonstrate how to set up cross-account access to S3.


Scenario

We will have 2 AWS Accounts:

  1. a Green AWS Account, which will host the IAM Users; this account will only be used for our IAM identities.

  2. a Blue AWS Account which will be the account that hosts our AWS Resources, S3 in this scenario.

We will then allow the Green Account to access the Blue Account's S3 Bucket.

Setup the Blue Account

In the Blue Account, we will set up the S3 Bucket, as well as the Trust Relationship and the Policy, which is where we will define what we want to allow for the Green Account.


Now we need to set up the IAM Role, which will trust the Green Account and also define what it is allowed to do.

Head over to your IAM Console and create an IAM Policy (just remember to replace the bucket name if you are following along):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PutGetListAccessOnS3",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::ruanbekker-prod-s3-bucket",
                "arn:aws:s3:::ruanbekker-prod-s3-bucket/*"
            ]
        }
    ]
}

In my case I have named my IAM Policy CrossAccountS3Access. After you have created your IAM Policy, go ahead and create an IAM Role. Here we need to add the source account that we want to allow as a trusted entity, which will be the AWS AccountId of the Green Account:


Associate the IAM Policy that you created earlier:


After you have done that, you should see a summary screen:


Make note of your IAM Role ARN, it will look something like this: arn:aws:iam::xxxxxxxxxxxx:role/CrossAccountS3Access-Role
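If you prefer the CLI over the console, a rough equivalent of the role setup in the Blue Account could look like the sketch below; the account ID 111111111111 stands in for the Green Account ID and the file names are placeholders, so adjust them to your environment:

$ cat > trust-policy.json << 'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

$ aws iam create-role \
    --role-name CrossAccountS3Access-Role \
    --assume-role-policy-document file://trust-policy.json
$ aws iam attach-role-policy \
    --role-name CrossAccountS3Access-Role \
    --policy-arn arn:aws:iam::<blue-account-id>:policy/CrossAccountS3Access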

Setup the Green Account

In the Green Account we will create the IAM User, and the credentials will be provided to the user that requires access to the S3 Bucket.

Let's create an IAM Group; I will name mine prod-s3-users. I will just create the group, as I will attach the policy later:


From the IAM Group, select the Permissions tab and create a New Inline Policy:


Select the “STS” service, select the “AssumeRole” action, and provide the Role ARN of the Blue Account that we created earlier:

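For reference, the inline policy that the visual editor generates boils down to a single sts:AssumeRole statement pointing at the Blue Account's role. A sketch of doing the same from the CLI (the file name is a placeholder) could be:

$ cat > assume-role-policy.json << 'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeCrossAccountS3Role",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::xxxxxxxxxxxx:role/CrossAccountS3Access-Role"
        }
    ]
}
EOF

$ aws iam put-group-policy \
    --group-name prod-s3-users \
    --policy-name AssumeCrossAccountS3Role \
    --policy-document file://assume-role-policy.json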

This allows users in the Green account to assume the role in the Blue account, and they will only obtain permissions to access the resources that we have defined in the policy document of the Blue Account. In summary, it should look like this:


Select the Users tab on the left-hand side, create a New IAM User (I will name mine s3-prod-user) and select the “Programmatic Access” checkbox, as we need API keys because we will be using the CLI to access S3:


Then from the next window, add the user to the group that we have created earlier:


Test Cross Account Access

Let's configure our AWS CLI with the API Keys that we received. Our credentials file will consist of two profiles: the green profile, which holds the API Keys of the Green Account:

$ aws configure --profile green
AWS Access Key ID [None]: AKIATPRT2G4SAHA7ZQU2
AWS Secret Access Key [None]: x
Default region name [None]: eu-west-1
Default output format [None]: json

Then configure the blue profile, which references the green profile as its source profile and also specifies the IAM Role ARN of the Blue Account:

$ vim ~/.aws/credentials
[blue]
role_arn=arn:aws:iam::xxxxxxxxxxxx:role/CrossAccountS3Access-Role
source_profile=green
region=eu-west-1

Now we can test if we can authenticate with our Green AWS Account:

$ aws --profile green sts get-caller-identity
{
    "UserId": "AKIATPRT2G4SAHA7ZQU2",
    "Account": "xxxxxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxxxxx:user/s3-prod-user"
}
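Before touching S3, you can also confirm that the role assumption itself works: with the blue profile, sts get-caller-identity should return an assumed-role ARN from the Blue Account rather than an IAM user ARN:

$ aws --profile blue sts get-caller-identity
# the Arn field should look like arn:aws:sts::<blue-account-id>:assumed-role/CrossAccountS3Access-Role/<session-name>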

Now let’s upload an object to S3 using our blue profile:

$ aws --profile blue s3 cp foo s3://ruanbekker-prod-s3-bucket/
upload: ./foo to s3://ruanbekker-prod-s3-bucket/foo

Let’s verify if we can see the object:

$ aws --profile blue s3 ls s3://ruanbekker-prod-s3-bucket/
2019-10-03 22:13:30      14582 foo

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on Twitter at @ruanbekker


How to Setup VPC Peering on AWS


In this tutorial I will demonstrate how to create a VPC Peering Connection between two AWS Accounts, how to route traffic between them, and then how to create two EC2 Instances and SSH from the one to the other via its Private IP Address.

Scenario Information

We will have Two AWS Accounts in this demonstration, a “Green AWS Account” and a “Blue AWS Account”.

In this scenario we have two teams; both teams manage their own account, and the two teams need to be able to communicate with each other. To keep it simple, each team has an EC2 instance, and the two EC2 instances need to be able to communicate with each other.

Therefore we will setup a VPC Peering Connection between the two accounts. Both accounts will be operating in the eu-west-2 (London) region.

Account, CIDR
green: 10.1.0.0/16
blue:  10.2.0.0/16

Getting Started

This will be our Green AWS Account:


This will be our Blue AWS Account:


Creating the VPCs

From our green account, head over to VPC and create a new VPC with a CIDR of 10.1.0.0/16:


Then head over to the blue account, head over to VPC and create a new VPC with CIDR of 10.2.0.0/16:


So in summary we have the following resources:

Green: vpc-0af4b247a1353b78b | 10.1.0.0/16
Blue: vpc-031c4ce3f56660c30 | 10.2.0.0/16
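As a side note, the same VPCs could also be created from the CLI; a minimal sketch (the green and blue profile names are assumptions for illustration) would be:

$ aws --profile green --region eu-west-2 ec2 create-vpc --cidr-block 10.1.0.0/16
$ aws --profile blue  --region eu-west-2 ec2 create-vpc --cidr-block 10.2.0.0/16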

Creating the Subnets

Now we need to create subnets for the VPCs that we created. We will create the following subnets in our VPC, each subnet in its own availability zone:

10.1.0.0/20 (az-2a)
10.1.16.0/20 (az-2b)
10.1.32.0/20 (az-2c)

Let’s go ahead and do this, head over to your green account, from the VPC section select “Subnets”:


Go ahead and create a subnet: specify the VPC that you created, select the first CIDR block, in my case 10.1.0.0/20, and select the first AZ:


Do this for the other two subnets as well and then when you are done, it may look more or less like this:


Repeat this process so that you have three subnets for your blue account as well:

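If you would rather script the subnets, the CLI equivalent for the green account would be along these lines (a sketch, using the VPC ID from the summary above); the blue account follows the same pattern with its own VPC ID and CIDRs:

$ aws --profile green --region eu-west-2 ec2 create-subnet --vpc-id vpc-0af4b247a1353b78b --cidr-block 10.1.0.0/20  --availability-zone eu-west-2a
$ aws --profile green --region eu-west-2 ec2 create-subnet --vpc-id vpc-0af4b247a1353b78b --cidr-block 10.1.16.0/20 --availability-zone eu-west-2b
$ aws --profile green --region eu-west-2 ec2 create-subnet --vpc-id vpc-0af4b247a1353b78b --cidr-block 10.1.32.0/20 --availability-zone eu-west-2c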

Setup VPC Peering Connection

Now that we've created our VPCs and subnets for each VPC, we want to peer the two VPCs with each other so that we have a direct connection between them, and so that the EC2 instances in our green account are able to connect with the EC2 instances in our blue account.

Head over to your green account’s VPC section and select “Peering Connections”:


Create a new peering connection: we first need to name our peering connection and select the source VPC, which will be our green account's VPC. Since the VPC that we want to peer with is in another account, get the AWS Account ID from the blue account, select “Another account” and provide the account ID that we want to peer with, then select the AWS Region and provide the VPC ID of the blue account:


Once you create the peering connection, you will find the peering request details:


Now let’s head over to our blue Account, head over to VPC, select Peering connections and you will find the peering request from our green account:


From the top, hit “Actions” and accept the request:


You should see that the VPC Peering connection has been established:


From the blue account you should see that the VPC Peering Connection is active:


If you head back to the green account, you will see under Peering Connections that the connection has been established:

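For completeness, the peering request and acceptance can also be driven from the CLI; a rough sketch (the blue account ID and the peering connection ID are placeholders) looks like this:

# from the green account: request the peering connection
$ aws --profile green --region eu-west-2 ec2 create-vpc-peering-connection \
    --vpc-id vpc-0af4b247a1353b78b \
    --peer-vpc-id vpc-031c4ce3f56660c30 \
    --peer-owner-id <blue-account-id> \
    --peer-region eu-west-2

# from the blue account: accept the request
$ aws --profile blue --region eu-west-2 ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-xxxxxxxxxxxxxxxxx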

We have now successfully created our VPC peering connection and the two VPCs from different accounts have been peered. Next we would like to launch our EC2 instances into our VPCs: we will connect to the EC2 instance in our green account via the internet, and then SSH to the EC2 instance in our blue account over the VPC peering connection, using its Private IP Address.

Setup Internet Gateway

In order to connect to a Public Elastic IP, we first need to create an Internet Gateway in our VPC and add a route that sends all public traffic via the Internet Gateway. This allows the resources in that VPC to connect to the Internet.

Head over to “Internet Gateways”, and create a new Internet Gateway:


Our IGW (Internet Gateway) will now be in a detached state, so we need to attach our IGW to our VPC. Hit “Actions”, then select “Attach to VPC”, and select your VPC:


You should now see that your IGW has been attached to your VPC:


Now that we have created an IGW and associated it with our VPC, we need to configure our routing table so that it knows how to route non-local traffic via the IGW.

Configure Routing Table

Head over to VPC, select your VPC, select the “Route Tables” section from the left and you should see the following when you select the “Routes” section:


Select “Edit Routes” and add a route with the destination 0.0.0.0/0, select Internet Gateway as the target (it will filter through your available IGWs), select the IGW that you created earlier, then select save. (If your blue account needs internet access, repeat these steps on the blue account as well.)


While we are busy with the routing table configuration, we should also inform our VPC how to reach the VPC of the other account, so that our green app (10.1.0.0/16) can reach our blue app (10.2.0.0/16) via the Peering Connection.

We do this by adding a route to our routing table. From the green account’s VPC’s routing table add a new route with the destination of 10.2.0.0/16, select “Peering Connection” as the target and it should resolve to the peering connection resource that we created, then select save:


Now our green Account knows how to route traffic to our blue account and also knows which network traffic to route. But we also need to route traffic back. Head over to your blue Account and add a route 10.1.0.0/16 to the peering connection so that we can route traffic back to our green Account:

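If you would rather script the routes, the equivalent CLI calls would look roughly like this (the route table and peering connection IDs are placeholders):

# green account: route the blue CIDR over the peering connection
$ aws --profile green --region eu-west-2 ec2 create-route \
    --route-table-id rtb-xxxxxxxxxxxxxxxxx \
    --destination-cidr-block 10.2.0.0/16 \
    --vpc-peering-connection-id pcx-xxxxxxxxxxxxxxxxx

# blue account: route the green CIDR back over the same peering connection
$ aws --profile blue --region eu-west-2 ec2 create-route \
    --route-table-id rtb-yyyyyyyyyyyyyyyyy \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-xxxxxxxxxxxxxxxxx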

Launch EC2 Instances

Now we want to launch an EC2 instance in each account, making sure to launch them into the VPCs that we created. I will also be creating two new SSH keys (blue-keypair and green-keypair), and I have created a Security Group that allows ICMP and SSH from anywhere; this is purely for demonstration (always review the sources that you want to allow).

For our green account:


For our blue account:


Once the EC2 instances are deployed, you should see something like this. For my green account:


And for my blue account:


Public IP Addressing

Now that our EC2 instances are provisioned, we will be connecting to our green EC2 instance using a Public IP, therefore we need to create an Elastic IP. From EC2, select Elastic IPs and allocate a New Address:


Select the IP, hit “Actions” and select “Associate Address”, then select the EC2 instance to which you want to associate the Elastic IP:


You should now see that the EC2 instance has a Public IP assigned to it:


Test Network Connectivity

From the downloaded SSH keypairs:

$ ls | grep keyp
blue-keypair.pem.txt
green-keypair.pem.txt

Apply the correct permissions to our keypairs so that we can use them to SSH:

$ chmod 0400 blue-keypair.pem.txt green-keypair.pem.txt

We will want to add both SSH keys to our agent so we can include them when we SSH:

$ eval $(ssh-agent -t 36000)
Agent pid 6613

Add both keys to your ssh-agent:

$ ssh-add blue-keypair.pem.txt
Identity added: blue-keypair.pem.txt (blue-keypair.pem.txt)

$ ssh-add green-keypair.pem.txt
Identity added: green-keypair.pem.txt (green-keypair.pem.txt)

SSH to our Green EC2 instance:

$ ssh -A ec2-user@3.11.6.171

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-10-1-1-190 ~]$

Now let's ping our Blue EC2 Instance, which will be accessible via our VPC Peering Connection:

[ec2-user@ip-10-1-1-190 ~]$ ping 10.2.1.167
PING 10.2.1.167 (10.2.1.167) 56(84) bytes of data.
64 bytes from 10.2.1.167: icmp_seq=1 ttl=255 time=0.754 ms
64 bytes from 10.2.1.167: icmp_seq=2 ttl=255 time=0.854 ms

And since we’ve allowed SSH traffic, we should be able to SSH to our instance via its Private IP Address:

[ec2-user@ip-10-1-1-190 ~]$ ssh 10.2.1.167

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-10-2-1-167 ~]$

Now we have successfully created a VPC Peering Connection between two AWS Accounts and demonstrated how to communicate to and from resources in those VPCs.

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on Twitter at @ruanbekker



How to Deploy a Webapp on a AWS EKS Kubernetes Cluster


In our previous post, Part 1 - Setup a EKS Cluster, we went through the steps to set up an EKS Cluster.

What are we doing today

In this post, we will deploy a sample web application to EKS and access our application using an ELB that EKS provides us.

Deployment Manifests

We will have two manifests that we will deploy to Kubernetes, a deployment manifest that will hold the information about our application and a service manifest that will hold the information about the service load balancer.

In the deployment manifest, you will notice that we are specifying that we want 3 replicas, we are using labels so that our service and deployment can find each other, and we are using a basic HTTP web application that will listen on port 8000 inside the container:

$ cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-hostname-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app-container
          image: ruanbekker/hostname
          ports:
          - name: http
            containerPort: 8000

In the service manifest, you will notice that we are specifying type: LoadBalancer; this will tell EKS to provision an ELB for your application so that we can access our application from the internet.

You will see that the selector is specifying my-app, which we also provided in our deployment.yml, so that our service knows where to find our backend application. We are also stating that the service is listening on port 80, and will forward its traffic to our deployment on port 8000:

$ cat service.yml
apiVersion: v1
kind: Service
metadata:
  name: my-hostname-app-service
  labels:
    app: my-app
spec:
  ports:
  - port: 80
    targetPort: 8000
  selector:
    app: my-app
  type: LoadBalancer

Deployment Time

Deploy our application:

$ kubectl apply -f deployment.yml
deployment.apps/my-hostname-app created

Deploy our service:

$ kubectl apply -f service.yml
service/my-hostname-app-service created

Now when we look at our deployment, we should see that 3 replicas of our application are running:

$ kubectl get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
my-hostname-app   3/3     3            3           4m38s

To see the pods of that deployment, list the pods:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
my-hostname-app-5dcd48dfc5-2j8zm   1/1     Running   0          24s
my-hostname-app-5dcd48dfc5-58vkc   1/1     Running   0          24s
my-hostname-app-5dcd48dfc5-cmjwj   1/1     Running   0          24s

As we have more than one service in our EKS cluster, we can specify the labels that we have applied on our manifests to filter what we want to see (app: my-app):

$ kubectl get service --selector app=my-app
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE
my-hostname-app-service   LoadBalancer   10.100.114.166   a460661ce089b11ea97cd06dd7513db6-669054126.eu-west-1.elb.amazonaws.com   80:30648/TCP   2m29s

As we can see EKS provisioned an ELB for us, and we can access the application by making an HTTP request:

$ curl -i http://a460661ce089b11ea97cd06dd7513db6-669054126.eu-west-1.elb.amazonaws.com
HTTP/1.1 200 OK
Date: Sat, 16 Nov 2019 18:05:27 GMT
Content-Length: 43
Content-Type: text/plain; charset=utf-8

Hostname: my-hostname-app-5dcd48dfc5-2j8zm
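Since the ELB spreads requests across the replicas, a few repeated requests should return different pod hostnames; a quick loop against the same endpoint demonstrates this:

$ for i in $(seq 1 5); do curl -s http://a460661ce089b11ea97cd06dd7513db6-669054126.eu-west-1.elb.amazonaws.com; done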

Scaling our Deployment

Let’s scale our deployment to 5 replicas:

$ kubectl scale deployment/my-hostname-app --replicas 5
deployment.extensions/my-hostname-app scaled

After all the pods have been deployed, you should see that the 5 out of 5 pods that we provisioned are running:

$ kubectl get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
my-hostname-app   5/5     5            5           5m7s

We can then also see the pods that our deployment is referencing:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
my-hostname-app-5dcd48dfc5-2j8zm   1/1     Running   0          6m8s
my-hostname-app-5dcd48dfc5-58vkc   1/1     Running   0          6m8s
my-hostname-app-5dcd48dfc5-cmjwj   1/1     Running   0          6m8s
my-hostname-app-5dcd48dfc5-m4xcq   1/1     Running   0          67s
my-hostname-app-5dcd48dfc5-zf6xl   1/1     Running   0          68s

Further Reading on Kubernetes

This is one amazing resource that covers a lot of kubernetes topics and will help you throughout your EKS journey: - EKSWorkshop

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on Twitter at @ruanbekker

How to Setup a AWS EKS Kubernetes Cluster


This will be a tutorial split up into two posts, where I will show you how to provision an EKS Cluster (Elastic Kubernetes Service) on AWS, and in the next post, how to deploy a web application to your cluster (Part 2 - Deploy a Web App to EKS).

And then came EKS

As some of you may know, I'm a massive AWS fan boy, and since AWS released their managed Kubernetes service, I was quite excited to test it out. A couple of months passed and I got the opportunity to test it out on the job as we moved to Kubernetes.

A couple of months have passed, we are serving multiple production workloads on EKS, and I am really impressed with the service.

Amazon provides a vanilla Kubernetes version; they manage the master nodes, and they have an extra component called the cloud controller that runs on the master nodes, which is the AWS-native component that talks to other AWS services (as far as I can recall).

What are we doing today

We will cover the following in this post:

  • Deploy an EKS Cluster
  • View the resources to see what was provisioned on AWS
  • Interact with Kubernetes using kubectl
  • Terminate a Node and verify that the ASG replaces the node
  • Scale down your worker nodes
  • Run a pod on your cluster

In the next post we will deploy a web service to our EKS cluster.

Install Pre-Requirements

We require awscli, eksctl and kubectl before we continue. I will be installing these on macOS, but you can have a look at each tool's installation documentation if you are using a different operating system.

Install awscli:

$ pip install awscli

Install kubectl:

$ brew update
$ brew install kubernetes-cli

Install eksctl:

$ brew tap weaveworks/tap
$ brew install weaveworks/tap/eksctl

Deploy EKS

Create an SSH key if you would like to SSH to your worker nodes:

$ ssh-keygen -b 2048 -f ~/.ssh/eks -t rsa -q -N ""

Now we need to import our public key to EC2. Note that I am referencing --profile dev, which is my dev AWS profile. If you only have one default profile, you can use --profile default:

$ aws --profile dev --region eu-west-1 ec2 import-key-pair --key-name "eks" --public-key-material file://~/.ssh/eks.pub

Provision your cluster using eksctl. This will deploy two cloudformation stacks, one for the kubernetes cluster, and one for the node group.

I am creating a Kubernetes cluster with 3 nodes of instance type t2.small, using version 1.14:

$ eksctl --profile dev --region eu-west-1 create cluster --name my-eks-cluster --version 1.14 --nodes 3 --node-type t2.small --ssh-public-key eks

[]  eksctl version 0.9.0
[]  using region eu-west-1
[]  setting availability zones to [eu-west-1a eu-west-1b eu-west-1c]
[]  subnets for eu-west-1a - public:192.168.0.0/19 private:192.168.96.0/19
[]  subnets for eu-west-1b - public:192.168.32.0/19 private:192.168.128.0/19
[]  subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
[]  nodegroup "ng-f27f560e" will use "ami-059c6874350e63ca9" [AmazonLinux2/1.14]
[]  using Kubernetes version 1.14
[]  creating EKS cluster "my-eks-cluster" in "eu-west-1" region
[]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=my-eks-cluster'
[]  CloudWatch logging will not be enabled for cluster "my-eks-cluster" in "eu-west-1"
[]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-1 --cluster=my-eks-cluster'
[]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "my-eks-cluster" in "eu-west-1"
[]  2 sequential tasks: { create cluster control plane "my-eks-cluster", create nodegroup "ng-f27f560e" }
[]  building cluster stack "eksctl-my-eks-cluster-cluster"
[]  deploying stack "eksctl-my-eks-cluster-cluster"
[]  building nodegroup stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[]  --nodes-min=3 was set automatically for nodegroup ng-f27f560e
[]  --nodes-max=3 was set automatically for nodegroup ng-f27f560e
[]  deploying stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[+]  all EKS cluster resources for "my-eks-cluster" have been created
[+]  saved kubeconfig as "/Users/ruan/.kube/config"
[]  adding identity "arn:aws:iam::000000000000:role/eksctl-my-eks-cluster-nodegroup-n-NodeInstanceRole-SNVIW5C3J3SM" to auth ConfigMap
[]  nodegroup "ng-f27f560e" has 0 node(s)
[]  waiting for at least 3 node(s) to become ready in "ng-f27f560e"
[]  nodegroup "ng-f27f560e" has 3 node(s)
[]  node "ip-192-168-42-186.eu-west-1.compute.internal" is ready
[]  node "ip-192-168-75-87.eu-west-1.compute.internal" is ready
[]  node "ip-192-168-8-167.eu-west-1.compute.internal" is ready
[]  kubectl command should work with "/Users/ruan/.kube/config", try 'kubectl get nodes'
[+]  EKS cluster "my-eks-cluster" in "eu-west-1" region is ready

Now that our EKS cluster has been provisioned, let’s browse through our AWS Management Console to understand what was provisioned.

View the Provisioned Resources

If we have a look at the Cloudformation stacks, we can see the two stacks that I mentioned previously:


Navigating to our EC2 Instances dashboard, we can see the three worker nodes that we provisioned. Remember that AWS manages the master nodes and we can't see them.


We have an ASG (Auto Scaling Group) associated with our worker node group. We can make use of autoscaling and also have a desired state, so we will test this out later, where we will delete a worker node and verify that it gets replaced:


Navigate using Kubectl:

Eksctl already applied the kubeconfig to ~/.kube/config, so we can start using kubectl. Let’s start by viewing the nodes:

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-192-168-42-186.eu-west-1.compute.internal   Ready    <none>   8m50s   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal    Ready    <none>   8m55s   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal    Ready    <none>   8m54s   v1.14.7-eks-1861c5

Viewing our pods from our kube-system namespace (we don't have any pods in our default namespace at the moment):

$ kubectl get pods --namespace kube-system
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-btfbk             1/1     Running   0          11m
aws-node-c6ktk             1/1     Running   0          11m
aws-node-wf8mc             1/1     Running   0          11m
coredns-759d6fc95f-ljxzf   1/1     Running   0          17m
coredns-759d6fc95f-s6lg6   1/1     Running   0          17m
kube-proxy-db46b           1/1     Running   0          11m
kube-proxy-ft4mc           1/1     Running   0          11m
kube-proxy-s5q2w           1/1     Running   0          11m

And our services from all our namespaces:

$ kubectl get services --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.100.0.1    <none>        443/TCP         19m
kube-system   kube-dns     ClusterIP   10.100.0.10   <none>        53/UDP,53/TCP   19m

Testing the ASG

Let’s view our current nodes in our cluster, then select the first node, delete it and verify if the ASG replaces that node.

First, view the nodes and select one node’s address:

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-42-186.eu-west-1.compute.internal   Ready    <none>   37m   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal    Ready    <none>   37m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal    Ready    <none>   37m   v1.14.7-eks-1861c5

Use the awscli to look up the EC2 instance id, as we will need this id to delete the node:

$ aws --profile dev ec2 describe-instances --query 'Reservations[*].Instances[?PrivateDnsName==`ip-192-168-42-186.eu-west-1.compute.internal`].[InstanceId][]' --output text
i-0d016de17a46d5178

Now that we have the EC2 instance id, delete the node:

$ aws --profile dev ec2 terminate-instances --instance-ids i-0d016de17a46d5178
{
    "TerminatingInstances": [
        {
            "CurrentState": {
                "Code": 32,
                "Name": "shutting-down"
            },
            "InstanceId": "i-0d016de17a46d5178",
            "PreviousState": {
                "Code": 16,
                "Name": "running"
            }
        }
    ]
}

Now that we have deleted the EC2 instance, view the nodes and you will see the node has been terminated:

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-75-87.eu-west-1.compute.internal   Ready    <none>   41m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   41m   v1.14.7-eks-1861c5

Allow about a minute so that the ASG can replace the node, and when you list again you will see that the ASG replaced the node:

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-42-61.eu-west-1.compute.internal   Ready    <none>   50s   v1.14.7-eks-1861c5
ip-192-168-75-87.eu-west-1.compute.internal   Ready    <none>   42m   v1.14.7-eks-1861c5
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   42m   v1.14.7-eks-1861c5

Run a Pod

Run a busybox pod on your EKS cluster:

$ kubectl run --rm -it --generator run-pod/v1 my-busybox-pod --image busybox -- /bin/sh

You will be dropped into a shell:

/ # busybox | head -1
BusyBox v1.31.1 (2019-10-28 18:40:01 UTC) multi-call binary.

And exit the shell:

/ # exit
Session ended, resume using 'kubectl attach my-busybox-pod -c my-busybox-pod -i -t' command when the pod is running
pod "my-busybox-pod" deleted

Scaling Nodes

While I will not be covering auto-scaling in this post, we can manually scale the worker node count. Let’s scale it down to 1 node.

First we need to get the EKS cluster name:

$ eksctl --profile dev --region eu-west-1 get clusters
NAME      REGION
my-eks-cluster    eu-west-1

Then we need the node group id:

$ eksctl --profile dev --region eu-west-1 get nodegroup --cluster my-eks-cluster
CLUSTER       NODEGROUP   CREATED         MIN SIZE    MAX SIZE    DESIRED CAPACITY    INSTANCE TYPE   IMAGE ID
my-eks-cluster    ng-f27f560e 2019-11-16T16:55:41Z    3       3       3           t2.small    ami-059c6874350e63ca9

Now that we have the node group id, we can scale the node count:

$ eksctl --profile dev --region eu-west-1 scale nodegroup --cluster my-eks-cluster --nodes 1 ng-f27f560e

[]  scaling nodegroup stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e" in cluster eksctl-my-eks-cluster-cluster
[]  scaling nodegroup, desired capacity from 3 to 1, min size from 3 to 1

Now when we use kubectl to view the nodes, we will see we only have 1 worker node:

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-8-167.eu-west-1.compute.internal   Ready    <none>   73m   v1.14.7-eks-1861c5
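Scaling back up works exactly the same way; to go back to the original three nodes, you could run the same command with a higher node count:

$ eksctl --profile dev --region eu-west-1 scale nodegroup --cluster my-eks-cluster --nodes 3 ng-f27f560e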

Clean Up

If you want to follow along deploying a web application to your EKS cluster before we terminate the cluster, have a look at Part 2 - EKS Tutorial before continuing.

Once you are ready, you can go ahead and terminate the EKS cluster:

$ eksctl --profile dev --region eu-west-1 delete cluster --name my-eks-cluster

[]  eksctl version 0.9.0
[]  using region eu-west-1
[]  deleting EKS cluster "my-eks-cluster"
[+]  kubeconfig has been updated
[]  cleaning up LoadBalancer services
[]  2 sequential tasks: { delete nodegroup "ng-f27f560e", delete cluster control plane "my-eks-cluster" [async] }
[]  will delete stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e"
[]  waiting for stack "eksctl-my-eks-cluster-nodegroup-ng-f27f560e" to get deleted
[]  will delete stack "eksctl-my-eks-cluster-cluster"
[+]  all cluster resources were deleted

Further Reading on Kubernetes

This is one amazing resource that covers a lot of kubernetes topics and will help you throughout your EKS journey: - EKSWorkshop

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on Twitter at @ruanbekker

Testing AWS Lambda Functions Locally on Docker With LambCi


I discovered a Docker image called LambCi that allows you to test Lambda functions locally on Docker, and I wanted to share how it works.

Python Lambda Function

We will create a basic lambda function to demonstrate how it works.

$ mkdir task
$ cat > task/lambda_function.py << EOF
import json

def lambda_handler(event, context):
    if event:

        try:
            name = event['name']
            output_string = 'My name is {}'.format(name.capitalize())

        except KeyError:
            output_string = 'A name was not defined in the event payload'

    return output_string
EOF

Now that we've created the function, run the docker container, passing the function's handler method and the event payload as parameters:

$ docker run --rm -v "$PWD/task":/var/task lambci/lambda:python3.7 lambda_function.lambda_handler '{"name": "ruan"}'
START RequestId: 70025895-1233-1362-8006-c2784b5d80b6 Version: $LATEST
END RequestId: 70025895-1233-1362-8006-c2784b5d80b6
REPORT RequestId: 70025895-1233-1362-8006-c2784b5d80b6    Duration: 7.51 ms   Billed Duration: 100 ms Memory Size: 1536 MB    Max Memory Used: 23 MB
"My name is Ruan"

And another call:

$ docker run --rm -v "$PWD/task":/var/task lambci/lambda:python3.7 lambda_function.lambda_handler '{"nam": "ruan"}'
START RequestId: f7ab2e97-05db-1184-a009-11b92638534f Version: $LATEST
END RequestId: f7ab2e97-05db-1184-a009-11b92638534f
REPORT RequestId: f7ab2e97-05db-1184-a009-11b92638534f    Duration: 5.32 ms   Billed Duration: 100 ms Memory Size: 1536 MB    Max Memory Used: 23 MB
"A name was not defined in the event payload"
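If your test event grows, it can be handier to keep it in a file and pass it in with command substitution; a small sketch (event.json is a hypothetical file):

$ echo '{"name": "ruan"}' > event.json
$ docker run --rm -v "$PWD/task":/var/task lambci/lambda:python3.7 lambda_function.lambda_handler "$(cat event.json)"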

Check out the Docker Hub page for more info: https://hub.docker.com/r/lambci/lambda/

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on Twitter at @ruanbekker

Integrating Google OAuth With Traefik


I stumbled upon a really cool project: Traefik Forward Auth, which provides Google OAuth based login and authentication for Traefik.

This means that you can secure your Traefik backend services by using Google for authentication to access your backends. Authorizing who can log on is managed on the forward auth proxy.

If you have not worked with Traefik before: Traefik is an amazing, dynamic and modern reverse proxy / load balancer built for microservices.

What are we doing today

In this demonstration we will set up a new Google application, set up the forward-auth proxy, and spin up a service on Docker Swarm that uses Google to authenticate access to our application.

The step-by-step tutorial has been published on my sysadmins blog; read more here.

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on Twitter at @ruanbekker

Running vs Code in Your Browser With Docker


Today we will set up a Visual Studio Code instance running on Docker, so that you can access VSCode via the web browser.

VSCode in Docker

The work directory will be under code and the application will store its data under data. Let's go ahead and create them:

mkdir -p demo/{code,data}
cd demo

Run the vscode container:

$ docker run --rm --name vscode \
  -it -p 8443:8443 -p 8888:8888 \
  -v $(pwd)/data:/data -v $(pwd)/code:/code \
ruanbekker/vscode:python-3.7

The password that you require to log in will be printed in the output:

INFO  code-server v1.1156-vsc1.33.1
INFO  Additional documentation: http://github.com/cdr/code-server
INFO  Initializing {"data-dir":"/data","extensions-dir":"/data/extensions","working-dir":"/code","log-dir":"/root/.cache/code-server/logs/20190914105631217"}
INFO  Starting shared process [1/5]...
INFO  Starting webserver... {"host":"0.0.0.0","port":8443}
INFO
INFO  Password: 4b050c4fa0ef109d53c10d9f
INFO
INFO  Started (click the link below to open):
INFO  https://localhost:8443/
INFO  Connected to shared process
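Should you prefer to run the container in the background (swapping -it for -d in the docker run above), you could still fetch the generated password from the container logs:

$ docker logs vscode 2>&1 | grep Password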

Access VSCode on https://localhost:8443/ and, after you have accepted the self-signed certificate warning, you will be presented with the login page:


After you have logged in, an example of creating a Python file will look like this:


The source code for this docker image can be found at https://github.com/ruanbekker/dockerfiles/tree/master/vscode.

Different versions

Currently I only have a Python image available on Docker Hub, with the requests and flask packages installed, but you can fork the repository and add the upstream image or packages of your choice.

Expire Objects in AWS S3 Automatically After 30 Days

In AWS S3 you can make use of lifecycle policies to manage the lifetime of your objects stored in S3.

In this tutorial, I will show you how to delete objects automatically from S3 after 30 days.

Navigate to your Bucket

Head over to your AWS S3 bucket where you want to delete objects after they have been stored for 30 days:


Lifecycle Policies

Select “Management” and click on “Add lifecycle rule”:


Set a rule name of your choice; you have the option to provide a prefix if you want to delete objects based on a specific prefix. I will leave this blank, as I want to delete objects at the root level of the bucket. Then head to the next section:


In the next step, configure expiration by selecting to expire the current version of the object after 30 days:


Review the configuration:


When you select “Save”, you should be returned to the following section:


Housecleaning on your S3 Bucket

Now, 30 days after objects are created in this S3 bucket, they will be deleted.
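If you prefer to automate this, the same rule can be applied with the awscli; a minimal sketch (the bucket name and rule ID below are placeholders) would be:

$ aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket-name \
    --lifecycle-configuration '{"Rules": [{"ID": "expire-after-30-days", "Status": "Enabled", "Filter": {"Prefix": ""}, "Expiration": {"Days": 30}}]}'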

Reindex Elasticsearch Indices With Logstash


In this tutorial I will show you how to reindex daily indices to a monthly index on Elasticsearch using Logstash.

Use Case

In this scenario we have filebeat indices which have a low document count, and we would like to aggregate the daily indices into a bigger, monthly index. So we will be reindexing from "filebeat-2019.08.*" to "filebeat-monthly-2019.08".

Overview of our Setup

Here we can see all the indices that we would like to read from:

$ curl 10.37.117.130:9200/_cat/indices/filebeat-2019.08.*?v
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   filebeat-2019.08.28 qoKiHUjQT5eNVF_wjLi9fA   5   1         17            0    295.4kb        147.7kb
green  open   filebeat-2019.08.27 8PWngqFdRPKLEnrCCiw6xA   5   1        301            0    900.9kb          424kb
green  open   filebeat-2019.08.29 PiG2ma8zSbSt6sSg7soYPA   5   1         24            0    400.2kb          196kb
green  open   filebeat-2019.08.31 XSWZvqQDR0CugD23y6_iaA   5   1         27            0    451.5kb        222.1kb
green  open   filebeat-2019.08.30 u_Hr9fA5RtOtpabNGUmSpw   5   1         18            0    326.1kb          163kb

I have 3 nodes in my elasticsearch cluster:

$ curl 10.37.117.130:9200/_cat/nodes?v
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.37.117.132           56          56   5    0.47    0.87     1.10 mdi       -      elasticsearch-01
10.37.117.130           73          56   4    0.47    0.87     1.10 mdi       -      elasticsearch-03
10.37.117.199           29          56   4    0.47    0.87     1.10 mdi       *      elasticsearch-02

As Elasticsearch creates 5 primary shards per index by default, I want to override this behavior to create 3 primary shards. I will be using a template, so whenever an index gets created that matches the index pattern "*-monthly-*", the template will apply settings to create 3 primary shards and 1 replica shard:

$ curl -H 'Content-Type: application/json' -XPUT 10.37.117.130:9200/_template/monthly -d '
{"index_patterns": ["*-monthly-*"], "order": -1, "settings": {"number_of_shards": "3", "number_of_replicas": "1"}}
'
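You can verify that the template was stored before kicking off the reindex:

$ curl 10.37.117.130:9200/_template/monthly?pretty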

Logstash Configuration

The Logstash configuration which we will use will read from Elasticsearch, using the index pattern that we want to read from. The output configuration instructs Logstash where to write the data to:

$ cat /tmp/logstash/logstash.conf
input {
  elasticsearch {
    hosts => [ "http://10.37.117.132:9200" ]
    index => "filebeat-2019.08.*"
    size => 500
    scroll => "5m"
    docinfo => true
  }
}

output {
  elasticsearch {
    hosts => ["http://10.37.117.199:9200"]
    index => "filebeat-monthly-2019.08"
    document_id => "%{[@metadata][_id]}"
  }
  stdout {
    codec => "dots"
  }
}

Reindex the Data

I will be using docker to run logstash, and map the configuration to the configuration directory inside the container:

$ sudo docker run --rm -it -v /tmp/logstash:/usr/share/logstash/pipeline docker.elastic.co/logstash/logstash-oss:6.2.4
[2019-09-08T10:57:36,170][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7db57d5f run>"}
[2019-09-08T10:57:36,325][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
...
[2019-09-08T10:57:39,359][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x7db57d5f run>"}

Review that the data was reindexed:

$ curl 10.37.117.130:9200/_cat/indices/*filebeat-*08*?v
health status index                    uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   filebeat-2019.08.28      qoKiHUjQT5eNVF_wjLi9fA   5   1         17            0    295.4kb        147.7kb
green  open   filebeat-2019.08.29      PiG2ma8zSbSt6sSg7soYPA   5   1         24            0    400.2kb          196kb
green  open   filebeat-2019.08.30      u_Hr9fA5RtOtpabNGUmSpw   5   1         18            0    326.1kb          163kb
green  open   filebeat-2019.08.27      8PWngqFdRPKLEnrCCiw6xA   5   1        301            0    900.9kb          424kb
green  open   filebeat-2019.08.31      XSWZvqQDR0CugD23y6_iaA   5   1         27            0    451.5kb        222.1kb
green  open   filebeat-monthly-2019.08 VZD8iDjfTfeyP-SWB9l2Pg   3   1        387            0    577.8kb        274.7kb
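As a quick sanity check, the daily indices add up to 387 documents (17 + 301 + 24 + 27 + 18), which matches the docs.count of the monthly index above. You can also query the count directly:

$ curl '10.37.117.130:9200/_cat/count/filebeat-monthly-2019.08?v'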

Once we are happy with what we are seeing, we can delete the source data:

$ curl -XDELETE "10.37.117.130:9200/filebeat-2019.08.*"
{"acknowledged":true}

Deploy a Monitoring Stack on Docker Swarm With Grafana and Prometheus


In this tutorial we will deploy a monitoring stack to docker swarm, that includes Grafana, Prometheus, Node-Exporter, cAdvisor and Alertmanager.

If you are looking for more information on Prometheus, have a look at my other Prometheus and Monitoring blog posts.

What you will get out of this

Once you deployed the stacks, you will have the following:

  • Access Grafana through Traefik reverse proxy
  • Node-Exporter to expose node level metrics
  • cAdvisor to expose container level metrics
  • Prometheus to scrape the exposed endpoints and ingest the metrics into Prometheus
  • Prometheus for your Timeseries Database
  • Alertmanager for firing alerts on configured rules

The compose file that I will provide will have pre-populated dashboards.

Deploy Traefik

Get the traefik stack sources:

$ git clone https://github.com/bekkerstacks/traefik
$ pushd traefik

Have a look at HTTPS Mode if you want to deploy traefik on HTTPS, as I will use HTTP in this demonstration.

Set your domain and deploy the stack:

$ DOMAIN=localhost PROTOCOL=http bash deploy.sh

Username for Traefik UI: demo
Password for Traefik UI: 
deploying traefik stack in http mode
Creating network public
Creating config proxy_traefik_htpasswd
Creating service proxy_traefik
Traefik UI is available at:
- http://traefik.localhost

Your traefik service should be running:

$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
0wga71zbx1pe        proxy_traefik       replicated          1/1                 traefik:1.7.14      *:80->80/tcp

Switch back to the previous directory:

$ popd

Deploy the Monitoring Stack

Get the sources:

$ git clone https://github.com/bekkerstacks/monitoring-cpang
$ pushd monitoring-cpang

If you want to deploy the stack with no pre-configured dashboards, you would need to use ./docker-compose.yml, but in this case we will deploy the stack with pre-configured dashboards.

Set the domain and deploy the stack:

$ docker stack deploy -c alt_versions/docker-compose_http_with_dashboards.yml mon

Creating network private
Creating config mon_grafana_config_datasource
Creating config mon_grafana_dashboard_prometheus
Creating config mon_grafana_dashboard_docker
Creating config mon_grafana_dashboard_nodes
Creating config mon_grafana_dashboard_blackbox
Creating config mon_alertmanager_config
Creating config mon_prometheus_config
Creating config mon_prometheus_rules
Creating service mon_blackbox-exporter
Creating service mon_alertmanager
Creating service mon_prometheus
Creating service mon_grafana
Creating service mon_cadvisor
Creating service mon_node-exporter

The endpoints are configured as ${service_name}.${DOMAIN}, so you will be able to access Grafana on http://grafana.localhost, as shown in my use-case.

Use docker stack services mon to see if all the tasks have reached their desired replica count, then access Grafana on http://grafana.${DOMAIN}.
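For example, to check that all the services have converged to their desired replica counts:

$ docker stack services mon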

Accessing Grafana

Access Grafana on http://grafana.${DOMAIN} and log on with the user admin and the password admin:


You will be asked to reset the password:


You will then be directed to the ui:


From the top, when you list dashboards, you will see the 3 dashboards that were pre-configured:


When looking at the Swarm Nodes Dashboard:


The Swarm Services Dashboard:


Exploring Metrics in Prometheus

Access Prometheus on http://prometheus.${DOMAIN} and, from the search input, you can start exploring through all the metrics that are available in Prometheus:


If we search for node_load15 and select Graph, we can have a quick look at what the 15 minute load average looks like for the node that the stack is running on:


Having a look at the alerts section:


Resources

For more information and configuration on the stack that we use, have a look at the wiki: - https://github.com/bekkerstacks/monitoring-cpang/wiki

The github repository: - https://github.com/bekkerstacks/monitoring-cpang

Thank You

Let me know what you think. If you liked my content, feel free to check out my content on ruan.dev or follow me on Twitter at @ruanbekker