This is a scribble post — a WIP/incomplete post — so read with the understanding that it will have gaps or holes in the knowledge.
This article just goes through my tinkering with Kubernetes on AWS.
Create a new S3 bucket to store the state of your Kubernetes clusters
aws s3 mb s3://k8sstate --region eu-west-2
aws s3 ls
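Optionally, you can also enable versioning on the state bucket, which the kops docs recommend so that earlier cluster state can be recovered. A minimal sketch, wrapped in a helper function (the function name is my own, not part of kops or the AWS CLI):

```shell
# Sketch: enable S3 versioning on the kops state bucket so previous
# state can be recovered. enable_state_versioning is a hypothetical
# helper name, not a kops or AWS CLI command.
enable_state_versioning() {
  # "$1" is the bucket name, e.g. k8sstate
  aws s3api put-bucket-versioning --bucket "$1" \
    --versioning-configuration Status=Enabled
}
```

Then call it once after creating the bucket: `enable_state_versioning k8sstate`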
Create a Route 53 hosted zone. I'm creating k8stest.blenderfox.uk:

aws route53 create-hosted-zone --name k8stest.blenderfox.uk \
  --caller-reference $(uuidgen)
Dig the nameservers for the hosted zone you created:
dig NS k8stest.blenderfox.uk
If your internet connection is already set up to resolve the hosted zone's DNS, you'll see the nameservers in the output:
;; QUESTION SECTION:
;k8stest.blenderfox.uk.		IN	NS

;; ANSWER SECTION:
k8stest.blenderfox.uk.	172800	IN	NS	ns-1353.awsdns-41.org.
k8stest.blenderfox.uk.	172800	IN	NS	ns-1816.awsdns-35.co.uk.
k8stest.blenderfox.uk.	172800	IN	NS	ns-404.awsdns-50.com.
k8stest.blenderfox.uk.	172800	IN	NS	ns-644.awsdns-16.net.
If your connection isn't set up to resolve to the AWS DNS (like mine), you'll get this instead:
;; QUESTION SECTION:
;k8stest.blenderfox.uk.		IN	NS

;; AUTHORITY SECTION:
uk.	603	IN	SOA	dns1.nic.uk. hostmaster.nic.uk. 1403374706 7200 900 2419200 603
This means you need to do a bit of DNS hacking to get this to work. The quick and dirty method is to add one of the AWS DNS servers to your /etc/resolv.conf.

Dig using one of the AWS DNS servers and see if that resolves properly:
dig @ns-1816.awsdns-35.co.uk NS k8stest.blenderfox.uk
If it does, then look for the ;; SERVER: line near the end of the output, which shows the IP of the AWS nameserver you queried. Add that as a nameserver entry into /etc/resolv.conf (make sure you're root/sudo'ed up):
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 220.127.116.11
nameserver 127.0.1.1
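As a sketch of that edit done non-destructively, here's the same change on a local copy of the file; 192.0.2.1 is a documentation placeholder IP (RFC 5737), so substitute the AWS nameserver IP you actually found, and apply the edit to /etc/resolv.conf as root:

```shell
# Demonstrated on a local copy of resolv.conf; run the same edit on
# /etc/resolv.conf as root. 192.0.2.1 is a placeholder IP (RFC 5737),
# not a real AWS nameserver.
printf 'nameserver 127.0.1.1\n' > resolv.conf.demo

# Prepend the AWS nameserver so the resolver consults it first
printf 'nameserver 192.0.2.1\n' | cat - resolv.conf.demo > resolv.conf.new
mv resolv.conf.new resolv.conf.demo
```

Prepending matters because the resolver tries nameservers in order.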
Now dig the nameservers again and confirm they are returned correctly:
dig NS k8stest.blenderfox.uk
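If DNS changes are still propagating, the manual re-check can be wrapped in a small retry loop. This is my own sketch (wait_for_ns is not a standard tool); it just polls dig until the zone's NS records appear:

```shell
# Sketch: poll until the zone's NS records resolve locally.
# wait_for_ns is a hypothetical helper, not part of kops or dig.
wait_for_ns() {
  zone="$1"
  attempts="${2:-5}"
  while [ "$attempts" -gt 0 ]; do
    # dig +short prints only the record values; non-empty means success
    if [ -n "$(dig +short NS "$zone")" ]; then
      echo "NS records visible for $zone"
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 5
  done
  echo "NS records still not visible for $zone" >&2
  return 1
}
```

Usage: `wait_for_ns k8stest.blenderfox.uk`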
If that works, we can now continue
First, export your AWS credentials as environment variables (I've found Kubernetes doesn't reliably pick up the credentials from the AWS CLI config, especially if you have multiple profiles):
export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'
You can also add it to a bash script and source it.
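For example (a sketch; the filename aws-creds.sh is my own choice):

```shell
# Sketch: keep the exports in a file and source it before running kops.
# aws-creds.sh is an arbitrary filename; keep it out of version control.
cat > aws-creds.sh <<'EOF'
export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'
EOF

# Source it into the current shell (don't execute it as a subprocess,
# or the variables won't persist in your session)
. ./aws-creds.sh
```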
Create the cluster using kops. Note that the master zone count must be odd (1, 3, etc.); since eu-west-2 only has two zones (a and b), I can only use a single master zone here:
kops create cluster --cloud aws --name cluster.k8stest.blenderfox.uk \
  --state s3://k8sstate --node-count 3 --zones eu-west-2a,eu-west-2b \
  --node-size m4.large --master-size m4.large \
  --master-zones eu-west-2a \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  --master-volume-size 50 \
  --node-volume-size 50
You can also add the --kubernetes-version switch to pick a specific Kubernetes version to include in the cluster. Recognised versions are shown at

TL;DR: the bands are:
- >=1.4.0 and <1.5.0
- >=1.5.0 and <1.6.0
- >=1.6.0 and <1.7.0
Each with their own Debian image.
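As an illustration of those bands, here's a tiny helper (my own, not part of kops) that maps a version string to its band:

```shell
# Illustrative only: map a Kubernetes version string to the kops image
# band listed above. band_for is a hypothetical helper, not a kops command.
band_for() {
  case "$1" in
    1.4.*) echo ">=1.4.0 and <1.5.0" ;;
    1.5.*) echo ">=1.5.0 and <1.6.0" ;;
    1.6.*) echo ">=1.6.0 and <1.7.0" ;;
    *)     echo "no matching band" ;;
  esac
}
```

For example, `band_for 1.5.7` falls in the second band.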
If you get this message:
error doing DNS lookup for NS records for "k8stest.blenderfox.uk": lookup k8stest.blenderfox.uk on 127.0.1.1:53: no such host
It means you haven’t done the resolv.conf hack
Assuming the create completed successfully, update the cluster so it pushes the changes out to your cloud:
kops update cluster cluster.k8stest.blenderfox.uk --yes \
  --state s3://k8sstate
While the cluster starts up, all the new DNS records will be set up with placeholder IPs. Remove your resolv.conf hack now, as it can affect your DNS resolution.
Now you’re at a stage where the cluster is starting up but the API server is failing. Currently trying to figure that part out.