Blender Fox


Training

#

Finally did the next run in the 5 to 10K plan. It still hurts when I run for an hour, and I tried a slightly different running route. Just over 8k today. I have no idea why the app registered only 5.7k.

[gallery type="rectangular" size="large" ids="6731,6733,6732"]

Training

#

First run of the 5 to 10K section and it’s a lot harder.

4x 10-minute runs, with only a 1-minute rest in between. 7K measured by the app, 8.8km measured by Fitbit.

[gallery type="rectangular" size="large" ids="6714,6715,6716"]

Guide to creating a Kubernetes Cluster in existing subnets & VPC on AWS with kops

#

This article is a guide on how to set up a Kubernetes cluster in AWS using kops, plugging it into your own subnets and VPC. We attempt to minimise the external IPs used in this method.

Export your AWS API keys into environment variables

[code lang=text]
export AWS_ACCESS_KEY_ID='YOUR_KEY'
export AWS_SECRET_ACCESS_KEY='YOUR_ACCESS_KEY'
export CLUSTER_NAME="my-cluster-name"
export VPC="vpc-xxxxxx"
export K8SSTATE="s3-k8sstate"
[/code]

Create the cluster. (You can change some of these switches to match your requirements. I would suggest using only one worker node and one master node to begin with, and then increasing them once you have confirmed the config is good. The more worker and master nodes you have, the longer a rolling-update will take.)

kops create cluster --cloud aws --name $CLUSTER_NAME --state s3://$K8SSTATE --node-count 1 --zones eu-west-1a,eu-west-1b,eu-west-1c --node-size t2.micro --master-size t2.micro --master-zones eu-west-1a,eu-west-1b,eu-west-1c --ssh-public-key ~/.ssh/id_rsa.pub --topology=private --networking=weave --associate-public-ip=false --vpc $VPC

Important note: There must be an ODD number of master zones. If you tell kops to use an even number of zones for the master, it will complain.

If you want to use additional security groups, don’t add them yet – add them after you have confirmed the cluster is working.

Internal IPs: You must have a VPN connection into your VPC or you will not be able to SSH into the instances. The alternative is to use the bastion functionality via the --bastion flag with the create command, then connecting like this:

ssh -i ~/.ssh/id_rsa -o ProxyCommand='ssh -W %h:%p admin@bastion.$CLUSTER_NAME' admin@INTERNAL_MASTER_IP

However, if you use this method, you MUST use public IP addressing on the API load balancer, as you will not be able to run kops validate otherwise.
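For reference, that setting lives under spec.api in the cluster manifest; here is a minimal sketch of the relevant fragment (assuming the standard kops cluster spec layout):

[code lang=text]
spec:
  api:
    loadBalancer:
      type: Public
[/code]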

Edit the cluster

kops edit cluster $CLUSTER_NAME --state=s3://$K8SSTATE

Make the following changes:

If you have a VPN connection into the VPC, change spec.api.loadBalancer.type to “Internal”; otherwise, leave it as “Public”.

Change spec.subnets to match your private subnets. To use existing private subnets, each entry should also include the id of the subnet and match its CIDR range, e.g. (the IDs and CIDRs below are placeholders):

[code lang=text]
subnets:
- cidr: 172.20.32.0/19
  id: subnet-0a1b2c3d
  name: eu-west-1a
  type: Private
  zone: eu-west-1a
- cidr: 172.20.0.0/22
  id: subnet-4e5f6a7b
  name: utility-eu-west-1a
  type: Utility
  zone: eu-west-1a
[/code]

The utility subnets are where the bastion hosts will be placed, and these should be public subnets, since they will be the inbound route into the cluster from the internet.

If you need to change or add specific IAM permissions, add them under spec.additionalPolicies to extend the node IAM policy, like this:

[code lang=text]
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": ["dynamodb:*"],
        "Resource": ["*"]
      },
      {
        "Effect": "Allow",
        "Action": ["es:*"],
        "Resource": ["*"]
      }
    ]
[/code]

Edit the bastion, node, and master configs ({MASTER_ZONE} is the zone where you placed the master. If you are running a multi-zone master config, you’ll have to do this for each zone):

kops edit ig master-{MASTER_ZONE} --name=$CLUSTER_NAME --state s3://$K8SSTATE

kops edit ig nodes --name=$CLUSTER_NAME --state s3://$K8SSTATE
kops edit ig bastions --name=$CLUSTER_NAME --state s3://$K8SSTATE

Check and make any updates.

If you want a mixture of instance types (e.g. t2.medium and r3.large), you’ll need to separate these using new instance groups ($SUBNETS is the list of subnets where you want the nodes to appear, for example "eu-west-2a,eu-west-2b"):

kops create ig anothernodegroup --state s3://$K8SSTATE --subnets $SUBNETS

You can later delete this with

kops delete ig anothernodegroup --state s3://$K8SSTATE

If you want to use spot prices, add this under the spec section (x.xx is the price you want to bid):

maxPrice: "x.xx"
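For reference, here is a hedged sketch of how these settings sit together in an instance group manifest (the machine type, counts, bid price, and subnets are illustrative, and fields such as image are omitted):

[code lang=text]
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: anothernodegroup
spec:
  role: Node
  machineType: r3.large
  maxPrice: "0.10"
  minSize: 1
  maxSize: 3
  subnets:
  - eu-west-2a
  - eu-west-2b
[/code]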

Check the instance size and count if you want to change them. (I would recommend not changing the node count just yet.)

If you want to add tags to the instances (for example for billing), add something like this to the spec section:

[code lang=text]
cloudLabels:
  Billing: product-team
[/code]

If you want to run some script(s) at node startup (via cloud-init), add them under spec.additionalUserData, for example (the script name and content here are placeholders):

[code lang=text]
spec:
  additionalUserData:
  - name: myscript.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      echo "hello from cloud-init" > /tmp/hello.txt
[/code]

Apply the update:

kops update cluster $CLUSTER_NAME --state s3://$K8SSTATE --yes

Wait for DNS to propagate and then validate

kops validate cluster --state s3://$K8SSTATE

Once the cluster reports ready, apply the Kubernetes dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

Access the dashboard via

https://api.$CLUSTER_NAME/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

If the first doesn’t work, also try:

https://api.$CLUSTER_NAME/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

(Ignore the certificate error.)

The username is “admin” and the password can be found in your local ~/.kube/config.
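A quick way to pull it out (assuming kops stored the basic-auth credentials in that file):

grep password ~/.kube/config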

Add the External DNS update to allow you to give friendly names to your externally exposed services rather than the horrible ELB names.

See here: https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md

(You can apply the YAML directly onto the cluster via the dashboard. Make sure you change the filter to match your domain or subdomain.)

Note that if you use this, you’ll need to change the node IAM policy in the cluster config, as the default IAM policy won’t allow the External DNS container to modify Route 53 entries. You’ll also need to annotate your service (using kubectl annotate) with text such as:

external-dns.alpha.kubernetes.io/hostname: $SERVICE_NAME.$CLUSTER_NAME
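For example, assuming a service called my-service (the name is a placeholder):

kubectl annotate service my-service external-dns.alpha.kubernetes.io/hostname=my-service.$CLUSTER_NAME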

You might also need this annotation to make the ELB internal rather than public; otherwise Kubernetes will complain with “Error creating load balancer (will retry): Failed to ensure load balancer for service namespace/service: could not find any suitable subnets for creating the ELB”:

service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
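Alternatively, both annotations can be set declaratively on the service manifest. A minimal sketch, with placeholder names and ports:

[code lang=text]
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-service.my-cluster-name
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
[/code]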

(Optional) Add the Cockpit pod to your cluster as described here

http://cockpit-project.org/guide/133/feature-kubernetes.html

It allows you to see the topology of your cluster at a glance, and also provides some management features. For example, here’s my cluster. It contains 5 nodes (1 master, 4 workers) and is running 4 services (Kubernetes, external-dns, cockpit, and dashboard). Cockpit creates a replication controller so it knows about the changes.

[Screenshot: Cockpit showing the cluster topology]

Add any additional security groups by adding this under the spec section of the node/master/bastion configs, then do a rolling-update (you might need to use the --force switch). Do this as soon as you can after creating the cluster and verifying that updates work. For example (the group IDs below are placeholders):

[code lang=text]
additionalSecurityGroups:
- sg-xxxxxxxx
- sg-yyyyyyyy
[/code]

If the cluster breaks after this (i.e. the nodes haven’t shown up on the master), reboot the instances (don’t terminate them; use the reboot option from the AWS console) and see if that helps. If they still don’t show up, there’s something wrong with the attached security groups – i.e. they’re conflicting somehow with the Kubernetes access. Remove those groups and then do another rolling-update using both the --force and --cloudonly switches to force a “dirty” respin.

If the cluster comes up cleanly, you can then change the node counts on the configs and apply the update.

Note that if you change the node count and then apply the update, kops makes the change without a rolling-update. For example, if you change the node count from 1 to 3, it simply brings up the 2 additional nodes.
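In practice, that change looks like this (the counts are illustrative):

kops edit ig nodes --name=$CLUSTER_NAME --state s3://$K8SSTATE

(set minSize and maxSize to 3 in the spec, save, then apply)

kops update cluster $CLUSTER_NAME --state s3://$K8SSTATE --yes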

Other things you can look at:

Kompose - which converts a docker-compose configuration into Kubernetes resources

Finally, have fun!

Massive Intel Chip Security Flaw Threatens Computers

#

An Intel flaw that has been sitting hidden for a decade has finally surfaced.

Being on the chip rather than in the OS, it isn’t limited to a single operating system – Linux, Windows and macOS are all mentioned in this article.

www.linuxinsider.com/story/850…

Please keep hands and other body parts away from the doors....

#

Evidently this guy thought he could jump the gate, but something didn’t clear it. ^_^

www.facebook.com/Mrphysica…


Training & C25K Completed

#

Completed ZenLabs C25K and now looking at the 10K trainer. The first run of Week 9 (it continues on from the C25K plan) is a near-hour set of 10-minute runs O_O

[gallery type="rectangular" size="large" ids="6695,6697,6694,6696"]

Training

#

First run of the final week, and I ended up dropping to a walk near the end of the 28 minutes. Disappointed, even though I was slow-jogging and making a pace of about 5:30 min/km on average for the main kilometres in the middle.

[gallery type="rectangular" size="large" ids="6685,6684,6686"]


Post-Christmas Day holidays

#

Well, it’s the day after Boxing Day: the day when the majority of people who haven’t taken the interim days off as holiday go back to work.

There was definitely a run-down feeling on the train ride into work, and the trains were running a reduced timetable (probably a Sunday service), so I ended up running for an earlier train since my normal one wasn’t there today.

Christmas Day

#

Christmas Day. Jammed up by an accident on the way, but lots (and lots) of food, movies and Wii-ing (is that even a word?).

It was pouring with rain on the way home. So bad, in fact, that we had to slow down severely on the motorway. That, however, did not stop a 4x4 zooming past us. Christmas Day always brings out the idiots. 😑😒😔

[gallery type="rectangular" size="medium" ids="6652,6653,6654,6655,6656,6657,6658,6659,6660,6661,6662,6663,6664,6665,6666,6667"]

Training

#

Final run of week 7’s set of runs. Next one is week 8 – two 28-min runs, then the 5k/30-min run.

[gallery type="rectangular" size="large" ids="6647,6646,6648"]

Training

#

Second day of the 25-min run. Didn’t manage to run as far as yesterday, but nearly made the 5k.

[gallery type="rectangular" size="large" ids="6643,6642,6641"]

Training

#

First long run since my week away. 25 minutes running non-stop, and my body screamed at me to stop at 4km. Busted through and did one more round, taking it to 5.6km. Strava logged it as 5.9km, with a moving time of 39:49 and a best km time of 5:24 min/km.

I was using the Spotify running playlist to keep my cadence steady and it showed.

The first and last kilometres were the warm up and cool down walks, hence the skewed time.

[gallery type="rectangular" size="large" ids="6637,6636,6635"]

Tinkering with Kubernetes and AWS

#


This article just goes through my tinkering with Kubernetes on AWS.

Create a new S3 bucket to store the state of your Kubernetes clusters

aws s3 mb s3://k8sstate --region eu-west-2

Verify

aws s3 ls
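It’s also worth enabling versioning on the state bucket (kops recommends this, so you can recover previous versions of your cluster state):

aws s3api put-bucket-versioning --bucket k8sstate \
--versioning-configuration Status=Enabled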

Create a Route 53 hosted zone. I’m creating k8stest.blenderfox.uk

aws route53 create-hosted-zone --name k8stest.blenderfox.uk \
--caller-reference $(uuidgen)

dig the nameservers for the hosted zone you created

dig NS k8stest.blenderfox.uk

If your DNS is already delegated to the hosted zone, you’ll see the nameservers in the output:

;; QUESTION SECTION:
;k8stest.blenderfox.uk.     IN  NS

;; ANSWER SECTION:
k8stest.blenderfox.uk. 172800 IN NS ns-1353.awsdns-41.org.
k8stest.blenderfox.uk. 172800 IN NS ns-1816.awsdns-35.co.uk.
k8stest.blenderfox.uk. 172800 IN NS ns-404.awsdns-50.com.
k8stest.blenderfox.uk. 172800 IN NS ns-644.awsdns-16.net.


Export your AWS credentials as environment variables. (I’ve found Kubernetes doesn’t reliably pick up the credentials from the AWS CLI config, especially if you have multiple profiles.)

export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'

You can also add it to a bash script and source it.
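For example (the filename is arbitrary):

cat > ~/aws-creds.sh <<'EOF'
export AWS_ACCESS_KEY_ID='your key here'
export AWS_SECRET_ACCESS_KEY='your secret access key here'
EOF

source ~/aws-creds.sh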

Create the cluster using kops. Note that the master zones must have an odd count (1, 3, etc.). Since eu-west-2 only has two zones (a and b), I can only use one zone for the master here:

kops create cluster --cloud aws --name cluster.k8stest.blenderfox.uk \
--state s3://k8sstate --node-count 3 --zones eu-west-2a,eu-west-2b \
--node-size m4.large --master-size m4.large \
--master-zones eu-west-2a \
--ssh-public-key ~/.ssh/id_rsa.pub \
--master-volume-size 50 \
--node-volume-size 50 \
--topology private

You can also add the --kubernetes-version switch to specifically pick a Kubernetes version to include in the cluster. Recognised versions are shown at:

https://github.com/kubernetes/kops/blob/master/channels/stable

TL;DR: the recognised versions are grouped into bands, each with its own Debian image.
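For example, appending this to the create command (the version number here is purely illustrative):

--kubernetes-version 1.8.6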


Assuming the create completed successfully, update the cluster so it pushes the update out to your cloud

kops update cluster cluster.k8stest.blenderfox.uk --yes \
--state s3://k8sstate

While the cluster starts up, all the new records will be set up with placeholder IPs.

[Screenshot: Route 53 record sets created with placeholder IPs]

NOTE: Kubernetes needs an externally resolvable DNS name. Basically, you need to be able to create a hosted zone on a domain you control. You can’t use kops on a domain you don’t control, even if you hack the resolver config.

The cluster can take a while to come up. Use

kops validate cluster --state s3://k8sstate

to check the cluster state.

When ready, you’ll see something like this:

Using cluster from kubectl context: cluster.k8stest.blenderfox.co.uk

Validating cluster cluster.k8stest.blenderfox.co.uk

INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-eu-west-2a       Master  m4.large        1       1       eu-west-2a
nodes                   Node    m4.large        3       3       eu-west-2a,eu-west-2b

NODE STATUS
NAME                                            ROLE    READY
ip-172-20-35-51.eu-west-2.compute.internal      master  True
ip-172-20-49-10.eu-west-2.compute.internal      node    True
ip-172-20-72-100.eu-west-2.compute.internal     node    True
ip-172-20-91-236.eu-west-2.compute.internal     node    True

Your cluster cluster.k8stest.blenderfox.co.uk is ready

Now you can start interacting with the cluster. First thing is to deploy the Kubernetes dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml
 serviceaccount "kubernetes-dashboard" created
 role "kubernetes-dashboard-minimal" created
 rolebinding "kubernetes-dashboard-minimal" created
 deployment "kubernetes-dashboard" created
 service "kubernetes-dashboard" created

Now set up a proxy to the API:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Next, access

http://localhost:8001/ui

to get the dashboard.

Now let’s create a job to deploy onto the cluster.
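As a starting point, here’s a minimal sketch of what such a job manifest might look like (the name and image are placeholders):

[code lang=text]
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "Hello from the cluster"]
      restartPolicy: Never
[/code]

Save it as job.yaml and apply it with kubectl apply -f job.yaml.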

The Linux commands you should NEVER use (Hewlett Packard Enterprise)

#

Source: https://www.hpe.com/us/en/insights/articles/the-linux-commands-you-should-never-use-1712.html

The classic rm -rf / is there, along with accidentally dd‘ing or mkfs‘ing the wrong disk (I’ve done that before), but the lesser-known fork bombs and moving files to /dev/null are in there too. (I often redirect output to /dev/null, but I’ve never moved files into it. That’s an interesting way of getting rid of files.)

Training

#

Attempted to start the Week 7 runs (which are 5 min warm-up, 20+ min run, 5 min cooldown) – and today I fell short, dropping to a walk at 23 minutes of the 25 :(

Will try again tomorrow.

Training

#

22-minute non-stop run on a treadmill. Haven’t done this for a while. The app registered 3.2km, my Fitbit registered 4.99km.

[gallery type="rectangular" size="large" ids="6619,6620,6621"]


Training

#

Forgot to log this yesterday.

Training

#

Today’s run. Three runs of 5/8/5 mins

[gallery type="rectangular" size="large" ids="6609,6607,6608"]


Training

#

Forgot to log this from yesterday. This was a 20-minute non-stop run. On a treadmill it’s not so bad.

Just about to do my run for today.

Training

#

Week 5 runs are the first ones which vary across the entire week. Today it was 8 min run / 5 min walk / 8 min run / 5 min cooldown.

Did the run early in the morning, and ironically had Morning from Peer Gynt playing ^_^

Training

#

Week 5 runs - 5 min run, 3 min walk, 5 min run, 3 min walk, 5 min run, 5 min cooldown.

Ran through a local community fayre on the way and found them giving out rides in a horse carriage.

Training

#

Finished Week 4 of the runs. Unlocked a new app skin for getting halfway through the program.

Training

#

Week 4 runs continue. My pace today was worse than the last run. Much worse :(

Training

#

Week 4 runs starting. 3:00/1:30/5:00/2:50 run/walk intervals. A lot harder than the previous runs, but still feels good.

Be careful if you fall asleep during a classical concert

#

Composers have a tendency to make you jump.

Stravinsky’s Firebird has a slow, peaceful opening, then a really sudden BOOM. Evidently, a woman dozed off during this bit and got a fright.

[embed]www.youtube.com/watch

Disney also took this piece and added it to their Fantasia 2000 sequence, and it looked brilliant. (L’oiseau de feu literally means “The Bird of Fire”.)

www.youtube.com/watch