This article just goes through my tinkering with Kubernetes on AWS.
Create a new S3 bucket to store the state of your Kubernetes clusters:
aws s3 mb s3://k8sstate --region eu-west-2
Verify the bucket was created:
aws s3 ls
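Optionally, enable versioning on the state bucket (the kops docs recommend this so you can roll back to an earlier cluster state if needed):

aws s3api put-bucket-versioning --bucket k8sstate \
  --versioning-configuration Status=Enabled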
Create a Route 53 hosted zone. I'm creating k8stest.blenderfox.uk:
aws route53 create-hosted-zone --name k8stest.blenderfox.uk \
  --caller-reference $(uuidgen)
Use dig to look up the nameservers for the hosted zone you created:

dig NS k8stest.blenderfox.uk
If DNS for the domain is already delegated to the hosted zone, you'll see the nameservers in the output:
;; QUESTION SECTION:
;k8stest.blenderfox.uk.           IN    NS

;; ANSWER SECTION:
k8stest.blenderfox.uk.    172800    IN    NS    ns-1353.awsdns-41.org.
k8stest.blenderfox.uk.    172800    IN    NS    ns-1816.awsdns-35.co.uk.
k8stest.blenderfox.uk.    172800    IN    NS    ns-404.awsdns-50.com.
k8stest.blenderfox.uk.    172800    IN    NS    ns-644.awsdns-16.net.
Export your AWS credentials as environment variables (I've found Kubernetes doesn't reliably pick up the credentials from the AWS CLI configuration, especially if you have multiple profiles):
export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'
You can also put these exports in a bash script and source it.
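For example, assuming you saved the two exports above into a file called aws-creds.sh (the filename is just an illustration):

source aws-creds.sh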
Create the cluster using kops. Note that the number of master zones must be odd (1, 3, etc.); since eu-west-2 only has two zones (a and b), I can only use one master zone here:
kops create cluster --cloud aws --name cluster.k8stest.blenderfox.uk \
  --state s3://k8sstate --node-count 3 --zones eu-west-2a,eu-west-2b \
  --node-size m4.large --master-size m4.large \
  --master-zones eu-west-2a \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  --master-volume-size 50 \
  --node-volume-size 50 \
  --topology private
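As an aside, kops also reads the state store location from the KOPS_STATE_STORE environment variable, so you can export it once instead of passing --state to every command:

export KOPS_STATE_STORE=s3://k8sstate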
You can also add the --kubernetes-version switch to pick a specific Kubernetes version for the cluster (see the example after the list below). Recognised versions are shown at
https://github.com/kubernetes/kops/blob/master/channels/stable
TL;DR: the version bands are:
- >=1.4.0 and <1.5.0
- >=1.5.0 and <1.6.0
- >=1.6.0 and <1.7.0
- >=1.7.0
Each with their own Debian image.
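For example, to pin the cluster to a version in the 1.7 band, add something like the following to the kops create cluster command above (the exact version number here is illustrative):

--kubernetes-version 1.7.2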
Assuming the create completed successfully, update the cluster so kops pushes the changes out to your cloud:
kops update cluster cluster.k8stest.blenderfox.uk --yes \
  --state s3://k8sstate
While the cluster starts up, the new DNS records are created with placeholder IPs.
NOTE: Kubernetes needs an externally resolvable DNS name. Basically, you need to be able to create a hosted zone on a domain you control. You can't use kops on a domain you don't control, even if you hack the resolver config.
The cluster can take a while to come up. Use
kops validate cluster --state s3://k8sstate
to check the cluster state.
When ready, you’ll see something like this:
Using cluster from kubectl context: cluster.k8stest.blenderfox.co.uk

Validating cluster cluster.k8stest.blenderfox.co.uk

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE    MIN    MAX    SUBNETS
master-eu-west-2a  Master  m4.large       1      1      eu-west-2a
nodes              Node    m4.large       3      3      eu-west-2a,eu-west-2b

NODE STATUS
NAME                                         ROLE    READY
ip-172-20-35-51.eu-west-2.compute.internal   master  True
ip-172-20-49-10.eu-west-2.compute.internal   node    True
ip-172-20-72-100.eu-west-2.compute.internal  node    True
ip-172-20-91-236.eu-west-2.compute.internal  node    True

Your cluster cluster.k8stest.blenderfox.co.uk is ready
Now you can start interacting with the cluster. The first thing to do is deploy the Kubernetes dashboard:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
Now set up a proxy to the API:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
Next, access
http://localhost:8001/ui
to get the dashboard.
Now let's create a job to deploy onto the cluster.
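As a minimal sketch of what that can look like (the job name, image, and filename here are placeholders, not from the original walkthrough):

# hello-job.yaml -- illustrative Job manifest; name and image are placeholders
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "Hello from the cluster"]
      restartPolicy: Never

Save it as hello-job.yaml and deploy it with:

kubectl apply -f hello-job.yaml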