Guide to creating a Kubernetes Cluster in existing subnets & VPC on AWS with kops

This article is a guide on how to set up a Kubernetes cluster in AWS using kops, plugging it into your own existing VPC and subnets. This method aims to minimise the number of external IPs used.

Export your AWS API keys into environment variables

export AWS_ACCESS_KEY_ID='YOUR_KEY'
export AWS_SECRET_ACCESS_KEY='YOUR_ACCESS_KEY'
export CLUSTER_NAME="my-cluster-name"
export VPC="vpc-xxxxxx"
export K8SSTATE="s3-k8sstate"

Create the cluster. You can change some of these switches to match your requirements, but I would suggest starting with one worker node and one master node and increasing them once you have confirmed the config is good. The more workers and masters you have, the longer a rolling-update takes.

kops create cluster --cloud aws --name $CLUSTER_NAME --state s3://$K8SSTATE --node-count 1 --zones eu-west-1a,eu-west-1b,eu-west-1c --node-size t2.micro --master-size t2.micro --master-zones eu-west-1a,eu-west-1b,eu-west-1c --ssh-public-key ~/.ssh/id_rsa.pub --topology=private --networking=weave --associate-public-ip=false --vpc $VPC

Important note: there must be an ODD number of master zones. If you tell kops to use an even number of zones for the masters, it will complain.

If you want to use additional security groups, don’t add them yet — add them after you have confirmed the cluster is working.

Internal IPs: You must have a VPN connection into your VPC or you will not be able to ssh into the instances. The alternative is to use the bastion functionality by passing the --bastion flag to the create command, then connecting like this:

ssh -i ~/.ssh/id_rsa -o ProxyCommand='ssh -W %h:%p admin@bastion.$CLUSTER_NAME' admin@INTERNAL_MASTER_IP

However, if you use this method, you MUST use public IP addressing on the API load balancer, as you will not be able to run kops validate otherwise.
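As a rough sketch (adjust the values to your own setup; the --bastion flag only applies to private topology), the bastion variant of the create command would look something like this:

# Sketch only: same as the create command above, but with a bastion;
# remember to keep the API load balancer Public so kops validate works.
kops create cluster --cloud aws --name $CLUSTER_NAME --state s3://$K8SSTATE \
  --node-count 1 --zones eu-west-1a,eu-west-1b,eu-west-1c \
  --master-zones eu-west-1a,eu-west-1b,eu-west-1c \
  --node-size t2.micro --master-size t2.micro \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  --topology=private --networking=weave \
  --bastion --vpc $VPC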

Edit the cluster

kops edit cluster $CLUSTER_NAME --state=s3://$K8SSTATE

Make the following changes:

If you have a VPN connection into the VPC, change spec.api.loadBalancer.type to "Internal"; otherwise, leave it as "Public".
Change spec.subnets to match your private subnets. To use existing private subnets, each entry should also include the subnet id and match its CIDR range, e.g.:

subnets:
- cidr: 10.10.10.0/23
  id: subnet-xxxxxxx
  name: eu-west-1a
  type: Private
  zone: eu-west-1a
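For the load balancer change in the first point above, the relevant part of the cluster spec looks like this (only the type value changes):

api:
  loadBalancer:
    type: Internal   # or Public if you have no VPN into the VPC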

The utility subnets are where the bastion hosts will be placed; these should be public subnets, since they are the inbound route into the cluster from the internet.

If you need to change or add specific IAM permissions, add them under spec.additionalPolicies, for example to extend the node IAM policy:

additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": ["dynamodb:*"],
        "Resource": ["*"]
      },
      {
        "Effect": "Allow",
        "Action": ["es:*"],
        "Resource": ["*"]
      }
    ]

Edit the bastion, nodes, and master configs (MASTER_ZONE is the zone where you placed the master; if you are running masters across multiple zones, you'll have to do this for each zone):

kops edit ig master-{MASTER_ZONE} --name=$CLUSTER_NAME --state s3://$K8SSTATE

kops edit ig nodes --name=$CLUSTER_NAME --state s3://$K8SSTATE
kops edit ig bastions --name=$CLUSTER_NAME --state s3://$K8SSTATE

Check and make any updates.

If you want a mixture of instance types (e.g. t2.mediums and r3.larges), you'll need to separate these into new instance groups ($SUBNETS is the subnets where you want the nodes to appear; for example, you can provide a list such as "eu-west-2a,eu-west-2b"):

kops create ig anothernodegroup --name $CLUSTER_NAME --state s3://$K8SSTATE --subnets $SUBNETS
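kops opens the new instance group in an editor; the resulting spec looks roughly like this (the machine type, counts, and subnets here are just example values):

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: anothernodegroup
spec:
  role: Node
  machineType: r3.large   # example instance type
  minSize: 2
  maxSize: 2
  subnets:
  - eu-west-2a
  - eu-west-2b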

You can later delete this with

kops delete ig anothernodegroup --name $CLUSTER_NAME --state s3://$K8SSTATE

If you want to use spot prices, add this under the spec section (x.xx is the price you want to bid):

maxPrice: "x.xx"

Check the instance size and count if you want to change them (I would recommend not changing the node count just yet)

If you want to add tags to the instances (for example for billing), add something like this to the spec section:

cloudLabels:
  Billing: product-team

If you want to run some script(s) at node startup (cloud-init), add them to spec.additionalUserData:

spec:
  additionalUserData:
  - name: myscript.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      echo "Hello World.  The time is now $(date -R)!" | tee /root/output.txt

Apply the update:

kops update cluster $CLUSTER_NAME --state s3://$K8SSTATE --yes

Wait for DNS to propagate and then validate

kops validate cluster --state s3://$K8SSTATE

Once the cluster reports as ready, deploy the Kubernetes dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

Access the dashboard via

https://api.$CLUSTER_NAME/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

If that doesn't work, also try:

https://api.$CLUSTER_NAME/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

(Ignore the certificate error.)

The username is "admin" and the password can be found in your local ~/.kube/config
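For example, assuming kops wrote a basic-auth entry into your kubeconfig, you can pull it out with:

grep password ~/.kube/config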

Add the ExternalDNS add-on to allow you to give friendly names to your externally-exposed services rather than the horrible auto-generated ELB names.

See here: https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md

(You can apply the YAML directly to the cluster via the dashboard. Make sure you change the domain filter to match your domain or subdomain.)

Note that if you use this, you'll need to change the node IAM policy in the cluster config, as the default policy won't allow the ExternalDNS container to modify Route 53 entries. You'll also need to annotate your service (use kubectl annotate service $SERVICE_NAME key=value) with something like:

external-dns.alpha.kubernetes.io/hostname: $SERVICE_NAME.$CLUSTER_NAME

You might also need this annotation to make the ELB internal rather than public; otherwise Kubernetes will complain: "Error creating load balancer (will retry): Failed to ensure load balancer for service namespace/service: could not find any suitable subnets for creating the ELB"

service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
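Put together, a service exposing an internal ELB with a friendly DNS name might look something like this sketch (the service name, selector, and ports are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-service.my-cluster-name   # service name + cluster DNS name
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: my-app          # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080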

(Optional) Add the Cockpit pod to your cluster as described here

http://cockpit-project.org/guide/133/feature-kubernetes.html

It lets you see the topology of your cluster at a glance and also provides some management features. For example, my cluster contains 5 nodes (1 master, 4 workers) and is running 4 services (Kubernetes, external-dns, cockpit, and dashboard). Cockpit creates a replication controller so it knows about changes to the cluster.

(Screenshot: Cockpit showing the cluster topology.)

Add any additional security groups by putting this under the spec section of the node/master/bastion configs, then do a rolling-update (you might need the --force switch). Do this as soon as you can after creating the cluster and verifying that updates to it work.

additionalSecurityGroups:
- sg-xxxxxxxx
- sg-xxxxxxxx
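The update and rolling update themselves are along these lines:

kops update cluster $CLUSTER_NAME --state s3://$K8SSTATE --yes
kops rolling-update cluster $CLUSTER_NAME --state s3://$K8SSTATE --yes --force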

If the cluster breaks after this (i.e. the nodes don't show up on the master), reboot the affected instances (don't terminate them; use the reboot option in the AWS console) and see if that helps. If they still don't show up, something in the attached security groups is conflicting with Kubernetes' access. Remove those groups and do another rolling-update, this time with both the --force and --cloudonly switches to force a "dirty" respin.
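The "dirty" respin mentioned above would then be:

kops rolling-update cluster $CLUSTER_NAME --state s3://$K8SSTATE --yes --force --cloudonly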

If the cluster comes up healthy, you can then change the node counts in the configs and apply the update.

Note that if you change the node count and then apply the update, kops makes the change without a rolling update. For example, if you change the node count from 1 to 3, it simply brings up the 2 additional nodes.

Other things you can look at:

Kompose – which converts a docker-compose configuration into Kubernetes resources
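For example, assuming a docker-compose.yml in the current directory, a typical conversion is:

kompose convert -f docker-compose.yml   # writes out Kubernetes manifests you can then kubectl apply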

Finally, have fun!

Tinkering with Kubernetes and AWS

 

This article just goes through my tinkering with Kubernetes on AWS.

Create a new S3 bucket to store the state of your Kubernetes clusters

aws s3 mb s3://k8sstate --region eu-west-2
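The kops docs also recommend enabling versioning on the state bucket, so you can recover earlier cluster state if needed:

aws s3api put-bucket-versioning --bucket k8sstate \
  --versioning-configuration Status=Enabled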

Verify

aws s3 ls

Create a Route 53 hosted zone. I’m creating k8stest.blenderfox.uk

aws route53 create-hosted-zone --name k8stest.blenderfox.uk \
--caller-reference $(uuidgen)

dig the nameservers for the hosted zone you created

dig NS k8stest.blenderfox.uk

If the domain's DNS is already delegated to the hosted zone, you'll see the nameservers in the output:

;; QUESTION SECTION:
;k8stest.blenderfox.uk.     IN  NS

;; ANSWER SECTION:
k8stest.blenderfox.uk. 172800 IN NS ns-1353.awsdns-41.org.
k8stest.blenderfox.uk. 172800 IN NS ns-1816.awsdns-35.co.uk.
k8stest.blenderfox.uk. 172800 IN NS ns-404.awsdns-50.com.
k8stest.blenderfox.uk. 172800 IN NS ns-644.awsdns-16.net.

 

Export your AWS credentials as environment variables (I've found kops doesn't reliably pick up credentials from the AWS CLI configuration, especially if you have multiple profiles):

export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'

You can also add it to a bash script and source it.
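For example (a hypothetical aws-creds.sh, kept out of version control):

# aws-creds.sh
export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'

Then run "source aws-creds.sh" before using kops.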

Create the cluster using kops. Note that the master zones must have an odd count (1, 3, etc.). Since eu-west-2 only has two zones (a and b), I can only use one master zone here:

kops create cluster --cloud aws --name cluster.k8stest.blenderfox.uk \
--state s3://k8sstate --node-count 3 --zones eu-west-2a,eu-west-2b \
--node-size m4.large --master-size m4.large \
--master-zones eu-west-2a \
--ssh-public-key ~/.ssh/id_rsa.pub \
--master-volume-size 50 \
--node-volume-size 50 \
--topology private

You can also add the --kubernetes-version switch to specifically pick a Kubernetes version to include in the cluster. Recognised versions are shown at

https://github.com/kubernetes/kops/blob/master/channels/stable

TL;DR: the bands are:

  • >=1.4.0 and <1.5.0
  • >=1.5.0 and <1.6.0
  • >=1.6.0 and <1.7.0
  • >=1.7.0

Each with their own Debian image.
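For example, to pin a version in the 1.7 band, add something like this to the create command (the version number is just for illustration):

--kubernetes-version 1.7.10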

 

Assuming the create completed successfully, update the cluster so it pushes the update out to your cloud

kops update cluster cluster.k8stest.blenderfox.uk --yes \
--state s3://k8sstate

While the cluster starts up, all the new records will be set up with placeholder IPs.

(Screenshot: the Route 53 records created with placeholder IPs.)

NOTE: Kubernetes needs an externally resolvable DNS name. Basically, you need to be able to create a hosted zone on a domain you control. You can’t use Kops on a domain you can’t control, even if you hack the resolver config.

The cluster can take a while to come up. Use

kops validate cluster --state s3://k8sstate

to check the cluster state.

When ready, you’ll see something like this:

Using cluster from kubectl context: cluster.k8stest.blenderfox.co.uk

Validating cluster cluster.k8stest.blenderfox.co.uk

INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-eu-west-2a       Master  m4.large        1       1       eu-west-2a
nodes                   Node    m4.large        3       3       eu-west-2a,eu-west-2b

NODE STATUS
NAME                                            ROLE    READY
ip-172-20-35-51.eu-west-2.compute.internal      master  True
ip-172-20-49-10.eu-west-2.compute.internal      node    True
ip-172-20-72-100.eu-west-2.compute.internal     node    True
ip-172-20-91-236.eu-west-2.compute.internal     node    True

Your cluster cluster.k8stest.blenderfox.co.uk is ready

Now you can start interacting with the cluster. The first thing to do is deploy the Kubernetes dashboard:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
 serviceaccount "kubernetes-dashboard" created
 role "kubernetes-dashboard-minimal" created
 rolebinding "kubernetes-dashboard-minimal" created
 deployment "kubernetes-dashboard" created
 service "kubernetes-dashboard" created

Now set up a proxy to the API

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Next, access

http://localhost:8001/ui

to get the dashboard.

Now let's create a job to deploy onto the cluster.
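A minimal job to try might look like this sketch (the name, image, and command are just examples):

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: hello
        image: busybox
        command: ["sh", "-c", "echo Hello from the cluster"]

Apply it with kubectl apply -f hello-job.yaml and check the output with kubectl logs job/hello-job.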

The Linux commands you should NEVER use (Hewlett Packard Enterprise)

Source: https://www.hpe.com/us/en/insights/articles/the-linux-commands-you-should-never-use-1712.html

The classic rm -rf / is there, along with accidentally dd'ing or mkfs'ing the wrong disk (I've done that before), but the lesser-known fork bombs and moving files to /dev/null are in there too. (I often redirect output to /dev/null, but have never moved files into it. That's an interesting way of getting rid of files.)

TMOUT – Auto Logout Linux Shell When There Isn’t Any Activity

Something new I learned today — doing

export TMOUT=120

Will auto logout your current shell/login session after that many seconds.

Very useful if you hook this into the root account's profile, or set it as a default for all users so people can't leave terminals open.
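For example, to make it a default for all users (the path below is the usual convention on most distros):

# /etc/profile.d/tmout.sh -- log out idle login shells after 2 minutes
export TMOUT=120
readonly TMOUT   # stop users unsetting it in their own session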

Source: TMOUT – Auto Logout Linux Shell When There Isn’t Any Activity

How to Install Multiple Linux Distributions on One USB

As someone who has tinkered with multiple distributions, I think this will be a great way to try out several at once.

This tutorial shows you how to install multiple Linux distributions on one USB. This way, you can enjoy more than one live Linux distros on a single USB key.

Source: How to Install Multiple Linux Distributions on One USB

Some Myths About Linux That Cause New Users To Run Away From Linux – LinuxAndUbuntu

An attempt to bust some of the myths that surround Linux. It doesn't cover a lot of them, but it does include some I see a lot in Windows communities, plus the old classic "Linux is CLI only" (facepalm).

Source: Some Myths About Linux That Cause New Users To Run Away From Linux – LinuxAndUbuntu
