Well, it’s the day after Boxing Day: the day when most people who haven’t taken the interim days off go back to work.
There was definitely a run-down feeling on the train ride into work, and the trains were running a reduced (probably Sunday) service, so I ended up running for the earlier train since my normal one wasn’t there today.
Christmas Day. Jammed up by an accident on the way, but lots (and lots) of food, movies, and Wii-ing (is that even a word?).
It was pouring with rain on the way home. So bad, in fact, that we had to slow down severely on the motorway. That, however, did not stop a 4×4 zooming past us. Christmas Day always brings out the idiots. 😑😒😔
First long run since my week away: 25 minutes running non-stop, and my body screamed at me to stop at 4 km. I busted through and did one more round, taking it to 5.6 km. Strava logged it as 5.9 km, with a moving time of 39:49 and a best pace of 5:24 min/km.
I was using the Spotify running playlist to keep my cadence steady, and it showed.
The first and last kilometres were the warm-up and cool-down walks, hence the skewed time.
This article just goes through my tinkering with Kubernetes on AWS.
Create a new S3 bucket to store the state of your Kubernetes clusters:
aws s3 mb s3://k8sstate --region eu-west-2
aws s3 ls
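As a side note, the kops docs strongly recommend enabling versioning on the state bucket so you can recover older cluster state if something goes wrong; a one-liner for the bucket above:

aws s3api put-bucket-versioning --bucket k8sstate \
  --versioning-configuration Status=Enabled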
Create a Route 53 hosted zone. I’m creating k8stest.blenderfox.uk:
aws route53 create-hosted-zone --name k8stest.blenderfox.uk \
  --caller-reference $(uuidgen)
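As an alternative to the dig lookup below, you can ask Route 53 directly for the delegated nameservers; a sketch, where the zone ID comes from the create-hosted-zone response and the --query path is my assumption about the response shape:

aws route53 get-hosted-zone --id <your-zone-id> \
  --query 'DelegationSet.NameServers'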
Use dig to look up the nameservers for the hosted zone you created:
dig NS k8stest.blenderfox.uk
If the domain has already been delegated to the hosted zone, you’ll see the nameservers in the output:
;; QUESTION SECTION:
;k8stest.blenderfox.uk.        IN  NS

;; ANSWER SECTION:
k8stest.blenderfox.uk.  172800  IN  NS  ns-1353.awsdns-41.org.
k8stest.blenderfox.uk.  172800  IN  NS  ns-1816.awsdns-35.co.uk.
k8stest.blenderfox.uk.  172800  IN  NS  ns-404.awsdns-50.com.
k8stest.blenderfox.uk.  172800  IN  NS  ns-644.awsdns-16.net.
Export your AWS credentials as environment variables (I’ve found Kubernetes doesn’t reliably pick up the credentials from the AWS CLI, especially if you have multiple profiles):
export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'
You can also add these to a bash script and source it.
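For example, a minimal sketch (aws-creds.sh is a hypothetical filename; keep the file out of version control):

# aws-creds.sh
export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'

Then load it before running kops:

source aws-creds.sh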
Create the cluster using kops. Note that the master zones must have an odd count (1, 3, etc.). Since eu-west-2 only has two zones (a and b), I have to use a single master zone here:
kops create cluster --cloud aws --name cluster.k8stest.blenderfox.uk \
  --state s3://k8sstate --node-count 3 --zones eu-west-2a,eu-west-2b \
  --node-size m4.large --master-size m4.large \
  --master-zones eu-west-2a \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  --master-volume-size 50 \
  --node-volume-size 50 \
  --topology private
You can also add the --kubernetes-version switch to pick a specific Kubernetes version to include in the cluster. Recognised versions are grouped into bands in the kops stable channel manifest.
TL;DR: the bands are:
- >=1.4.0 and <1.5.0
- >=1.5.0 and <1.6.0
- >=1.6.0 and <1.7.0
Each band has its own Debian image.
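For example, to pin the cluster to the 1.6 band, the create command above could be rerun with the switch added (1.6.4 here is just an illustrative version, not one the original specifies):

kops create cluster --cloud aws --name cluster.k8stest.blenderfox.uk \
  --state s3://k8sstate --kubernetes-version 1.6.4 \
  --node-count 3 --zones eu-west-2a,eu-west-2b \
  --node-size m4.large --master-size m4.large \
  --master-zones eu-west-2a --ssh-public-key ~/.ssh/id_rsa.pub \
  --master-volume-size 50 --node-volume-size 50 --topology private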
Assuming the create completed successfully, update the cluster so kops pushes the changes out to your cloud:
kops update cluster cluster.k8stest.blenderfox.uk --yes \
  --state s3://k8sstate
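As an aside, the same update command is what pushes out any later spec change too; a sketch of that flow, assuming the same state store (the rolling update is only needed when a change requires replacing running instances):

kops edit cluster cluster.k8stest.blenderfox.uk --state s3://k8sstate
kops update cluster cluster.k8stest.blenderfox.uk --yes --state s3://k8sstate
kops rolling-update cluster cluster.k8stest.blenderfox.uk --yes --state s3://k8sstate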
While the cluster starts up, all the new DNS records will be set up with placeholder IPs.
NOTE: Kubernetes needs an externally resolvable DNS name. Basically, you need to be able to create a hosted zone on a domain you control. You can’t use Kops on a domain you can’t control, even if you hack the resolver config.
The cluster can take a while to come up. Use
kops validate cluster --state s3://k8sstate
to check the cluster state.
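Since that can take several minutes, I find it easier to poll the validation than to re-run it by hand; a trivial sketch, assuming the watch utility is installed:

watch -n 30 kops validate cluster --state s3://k8sstate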
When ready, you’ll see something like this:
Using cluster from kubectl context: cluster.k8stest.blenderfox.co.uk

Validating cluster cluster.k8stest.blenderfox.co.uk

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-eu-west-2a  Master  m4.large     1    1    eu-west-2a
nodes              Node    m4.large     3    3    eu-west-2a,eu-west-2b

NODE STATUS
NAME                                         ROLE    READY
ip-172-20-35-51.eu-west-2.compute.internal   master  True
ip-172-20-49-10.eu-west-2.compute.internal   node    True
ip-172-20-72-100.eu-west-2.compute.internal  node    True
ip-172-20-91-236.eu-west-2.compute.internal  node    True

Your cluster cluster.k8stest.blenderfox.co.uk is ready
Now you can start interacting with the cluster. The first thing is to deploy the Kubernetes dashboard:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
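Before proxying, you can check the dashboard pod has actually come up; this assumes the manifest deploys into kube-system with a k8s-app=kubernetes-dashboard label:

kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard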
Now set up a proxy to the API:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
To get to the dashboard, browse to it through the proxy.
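Assuming the manifest above deployed the service as kubernetes-dashboard in kube-system, the apiserver’s service proxy path should be something like:

http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

(On apiservers of this era, http://127.0.0.1:8001/ui was a shortcut that redirected there.)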
Now let’s create a job to deploy onto the cluster.
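The job itself isn’t shown here, so as an illustrative stand-in, here’s a minimal sketch of a Job applied via a heredoc (the hello-job name and busybox image are my placeholders, not from the original):

kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "Hello from the cluster"]
      restartPolicy: Never
EOF

Once it completes, kubectl logs job/hello-job shows the output, and kubectl delete job hello-job cleans it up.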
rm -rf / is there, along with accidentally mkfs‘ing the wrong disk (I’ve done that before), but the lesser-known fork bombs and moving files to /dev/null are in there too. (I often redirect output to /dev/null, but I’ve never moved files into it. That’s an interesting way of getting rid of files.)
Attempted to start the Week 7 runs (which are a 5 min warm-up, 20+ min run, and 5 min cool-down), and today I fell short, walking at 23 minutes of the 25. :(
Will try again tomorrow.