Pixelbook

Spent a big chunk of today preparing for, and attempting, an upgrade of my Pixelbook to GalliumOS.

I imaged it and made a file backup of my home directory, then installed the new OS over my Ubuntu, restored the home directory backup into the fresh installation, and chowned the directory back to my user.

Out of habit, I then imaged the laptop in this state.

I prepared a semi-automated script to reinstall the apps I had on my Ubuntu setup, things like virt-manager, VirtualBox and google-chrome. A rough sketch of the idea follows.
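For illustration only, here’s the kind of script I mean; the package list is just an example and the Chrome .deb link is Google’s standard download URL:

#!/bin/bash
# Rough app re-install sketch (package names assumed to match the Ubuntu repos)
sudo apt-get update
sudo apt-get install -y virt-manager virtualbox

# google-chrome isn't in the Ubuntu repos, so pull the .deb straight from Google
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt-get install -y ./google-chrome-stable_current_amd64.deb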

However, I soon found out that VirtualBox 6.1 seems to crash the mouse driver on reboot: the pointer no longer moves, and GalliumOS doesn’t even see a pointer device when you check the mouse and touchpad settings. I had to revert to the image I took just after the file copy.

There is always the option of installing VirtualBox 6.0 from the Ubuntu repositories rather than the Oracle repositories, which uses a different installation setup. Maybe that will result in a different outcome.

Eventually, I restored my original Ubuntu installation so I could retry tomorrow.

EDIT: Retried the next day and found that sound wasn’t working, even on the live disk. Better find out what the deal is with that…

EDIT2: Found out that my Pixelbook model doesn’t have working sound drivers on GalliumOS, so I will have to wait until that is fixed; for now I’m staying on Ubuntu. In the meantime, I’m going to try compiling a later kernel to see if I can get VirtualBox working better.

Pixelbooks and Ubuntu 20.04

I’ve been using my Pixelbook Eve on Ubuntu Eoan (19.10), and it has started notifying me about the upgrade to 20.04 LTS. Based on my past experience of Ubuntu upgrades and how they always break things, I backed up my files and made a Clonezilla image of the Pixelbook before doing anything else.

Then I ran the upgrade. It completed without any problems, but on the reboot afterwards I got a black screen right after the Ubuntu splash screen.

I suspect it’s because my Pixelbook has some tweaks applied via this GitHub repo, which still uses a 4.x kernel. The last update was in 2019, so it may well be out of date.

Before restoring my old image, I installed GalliumOS, an Ubuntu-based distro aimed specifically at ChromeOS devices, and made a backup image of that install before putting the old image back on.

I might try installing Ubuntu 20.04 from scratch to see whether it has better Pixelbook support than the older versions, and whether I can avoid the hacks entirely. Bear in mind the hacks used the ChromeOS kernel, which meant I couldn’t do some things like use ufw or gufw. GalliumOS should fix that, since I wouldn’t be using the tweaks at all.

However, GalliumOS still has one annoying quirk on my Pixelbook: the jumpiness of the mouse pointer. Touch the touchpad and the pointer jumps to the corresponding part of the screen, as if the touchpad were an absolute map of the screen rather than a touchpad. It’s a quirk you can get used to, but it is still annoying.

Slow Download Speeds on Steam For Linux

I’ve been getting horrendously slow download speeds on Steam for Linux (~500k/s, versus 5-6Mb/s on Windows), and only now found out why. There’s a ticket on GitHub for this:

https://github.com/ValveSoftware/steam-for-linux/issues/3401

In short, the client is very aggressive with its DNS requests, which often gets it throttled by the DNS servers, leading to really slow downloads. Running dnsmasq caches the lookups locally and takes that load off the upstream servers.

Even though the instructions are for Arch, they worked for me (a command sketch follows the list):

  1. Install dnsmasq
  2. Modify /etc/dnsmasq.conf and add the line listen-address=127.0.0.1
  3. Restart the dnsmasq service (systemctl restart dnsmasq.service) or reboot your machine
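On Ubuntu/GalliumOS, the equivalent commands look something like this (a sketch, assuming the stock dnsmasq package and its systemd service):

sudo apt-get install dnsmasq

# Have dnsmasq answer DNS queries on localhost only
echo "listen-address=127.0.0.1" | sudo tee -a /etc/dnsmasq.conf

# Restart so the new setting takes effect
sudo systemctl restart dnsmasq.service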

Enjoy the speed

General Updates

I haven’t been posting here much recently, so here are some updates.

Been slowly trying to get back into running; I’ve been slacking off WAAAAY too much lately. I tried Aaptiv (@aaptiv), a fitness training app where trainers talk you through the workout, but there are a few problems with it:

  1. On stretch/strength or yoga routines, you’re reliant on the trainer telling you what to do; there’s no video guide to show you the correct form, and that’s bad. Other apps like Fitbit Coach have videos so you can copy the coach and make sure your form is right.
  2. On treadmill/running routines, they talk in mph, but treadmills here in the UK work in km/h, which means converting in your head (1.0 mph ≈ 1.6 km/h).

On a separate note, I have bought another attempt at the CKA exam, but this time as the bundle with the Kubernetes Fundamentals training from the Linux Foundation. Let’s see how different that is from Linux Academy’s training…


New Ubuntu Quirks

So, I installed Ubuntu 17 clean on my laptop after the driver issues I’d been having, and immediately found out that gksu was not installed.

Installed that and tried to

gksudo nautilus

That failed, and I found out that Wayland had replaced Xorg as the default. I found an old Xauthority file in my backups and copied it back, which got the gksu popup window to appear again, but I couldn’t click into it to enter the password :(

Then I found this article:

https://www.linuxuprising.com/2018/04/gksu-removed-from-ubuntu-heres.html

It tells me I need to use the admin:/// prefix instead to open files as admin. Guess I’ll give it a go later.
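For future reference, the usage looks something like this (the paths here are just examples):

# Browse a system directory as root in Nautilus
nautilus admin:///etc

# Or open a single file as root in gedit
gedit admin:///etc/default/grub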

Upgrading Ubuntu (fun! ¬_¬)

Spent several hours upgrading my Ubuntu installation from 15 up to the latest 17. The upgrade didn’t fail outright, but I did see a few error messages, and now I have applications failing to start for various reasons, including the settings applet. On top of that, whenever I install or use the nvidia drivers, Ubuntu doesn’t start up properly until I do:


apt-get purge nvidia*

But removing all the nvidia packages causes it to fall back to nouveau, which works for the most part but isn’t exactly good for any Linux gaming.
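As a quick sanity check after the purge, something like this shows which driver the GPU is actually using (standard lsmod/lspci tools, nothing specific to my setup):

# See whether nouveau or the proprietary nvidia module is loaded
lsmod | grep -E 'nouveau|nvidia'

# And which kernel driver is bound to the graphics card
lspci -k | grep -A 3 -i vga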

Looks like it’s going to be a full-reinstall job to make sure everything is clean :(

Tunnelling to Kubernetes Nodes & Pods via a Bastion

A quick note to remind myself (and other people) how to tunnel to a node (or pod) in Kubernetes via the bastion server

rm ~/.ssh/known_hosts #Needed if you keep scaling the bastion up/down

BASTION=bastion.{cluster-domain}
DEST=$1

ssh -o StrictHostKeyChecking=no -o ProxyCommand="ssh -o StrictHostKeyChecking=no -W %h:%p admin@$BASTION" admin@$DEST

Run like this:

bash ./tunnelK8s.sh NODE_IP

Example:

bash ./tunnelK8s.sh 10.10.10.100 #Assuming 10.10.10.100 is the node you want to connect to.

You can extend this to SSH into a pod, assuming the pod has an SSH server running on it.

BASTION=bastion.{cluster-domain}
NODE=$1
NODEPORT=$2
PODUSER=$3

ssh -o ProxyCommand="ssh -W %h:%p admin@$BASTION" admin@$NODE ssh -tt -o StrictHostKeyChecking=no $PODUSER@localhost -p $NODEPORT

So if you have a service listening on port 32000 on node 10.10.10.100 that expects a login user of "poduser", you would do this:

bash ./tunnelPod.sh 10.10.10.100 32000 poduser

If you have to pass a password, you can install sshpass on the node and use that (be aware of the security risk though; this is not an ideal solution):

ssh -o ProxyCommand="ssh -W %h:%p admin@$BASTION" admin@$NODE sshpass -p ${password} ssh -tt -o StrictHostKeyChecking=no $PODUSER@localhost -p $NODEPORT

Caveat though: you will have to make sure that your node security group allows your bastion security group to talk to the nodes on the additional ports. By default, SSH (22) is the only port the bastions can reach the node security groups on.
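If the cluster is on AWS, opening the extra port is a one-liner with the AWS CLI. The security group IDs below are placeholders you would need to substitute:

# Allow the bastion security group to reach the node security group on port 32000
aws ec2 authorize-security-group-ingress \
  --group-id sg-NODEGROUPID \
  --protocol tcp \
  --port 32000 \
  --source-group sg-BASTIONGROUPID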

How to use S3 as an RWM/NFS-like store in Kubernetes

Let’s assume you have an application that runs happily on its own and is stateless. No problem. You deploy it onto Kubernetes and it works fine. You kill the pod and it respins, happily continuing where it left off.

Let’s add three replicas to the group. That is also fine, since it’s stateless.

Now let’s change things so that the application is stateful and needs to store its progress between runs. So you pre-provision an EBS disk, hook it up to the pods, and convert the Deployment to a StatefulSet. Great, it still works fine: all three replicas will pick up where they left off.
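As a rough sketch of that conversion (the storage class and size here are assumptions), the StatefulSet gets a per-replica claim along these lines:

  volumeClaimTemplates:
  - metadata:
      name: app-state                    # hypothetical volume name
    spec:
      accessModes: ["ReadWriteOnce"]     # EBS volumes attach to a single node
      storageClassName: gp2              # assumed AWS EBS storage class
      resources:
        requests:
          storage: 20Gi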

Now, what if we wanted to share the same state between the replicas?

For example, what if these three replicas were frontend boxes for a website? Having three different disks is a bad idea unless you can guarantee they will all have the same content. Even if you can, at some point one or more of the boxes will be behind or ahead of the others, and will end up serving the wrong version of the content.

There are several options for shared storage. NFS is the most logical, but it requires you to pre-provision the disk that will back it, and to either run an NFS server outside the cluster or create an NFS pod within it. You will also likely over-provision the disk (100GB when you only need 20GB, for example).

Another alternative is EFS, Amazon’s managed NFS offering, where you mount the filesystem and only pay for the storage you actually use. However, even when you create a filesystem in a public subnet, its mount target gets a private IP, which is useless if you are not Direct Connected into the VPC.

Another option is S3, but how do you use that short of using “s3 sync” repeatedly?

One answer is through the use of s3fs and sshfs.

We use s3fs to mount the bucket inside a pod (or pods), then expose that mount over SSH so other pods can attach to it via sshfs, giving an NFS-like configuration.

The downside to this setup is that it will be slower than locally mounted disks.

So here’s the yaml for the s3fs pods (change values within {…} where applicable) — details at Docker Hub here: https://hub.docker.com/r/blenderfox/s3fs/

(and yes, I could convert the environment variables into secrets and reference those, and I might do a follow up article for that)

---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: s3fs
  namespace: default
  labels:
    k8s-app: s3fs
  annotations: {}
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: s3fs
  template:
    metadata:
      name: s3fs
      labels:
        k8s-app: s3fs
    spec:
      containers:
      - name: s3fs
        image: blenderfox/s3fs
        env:
        - name: S3_BUCKET
          value: {...}
        - name: S3_REGION
          value: {...}
        - name: AWSACCESSKEYID
          value: {...}
        - name: AWSSECRETACCESSKEY
          value: {...}
        - name: REMOTEKEY
          value: {...}
        - name: BUCKETUSERPASSWORD
          value: {...}
        resources: {}
        imagePullPolicy: Always
        securityContext:
          privileged: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
  name: s3-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: {hostnamehere}
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
  labels:
    name: s3-service
spec:
  ports:
  - protocol: TCP
    name: ssh
    port: 22
    targetPort: 22
  selector:
    k8s-app: s3fs
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster

This will create the s3fs pod (via the Deployment) and the Service.

If you have external DNS enabled, the hostname will be added to Route 53.

SSH into the service and verify you can access the bucket mount:

ssh bucketuser@dns-name ls -l /mnt/bucket/

(This should give you the listing of the bucket and also should have user:group set on the directory as “bucketuser”)

You should also be able to rsync into the bucket like this:

rsync -rvhP /source/path bucketuser@dns-name:/mnt/bucket/

Or mount it via sshfs using a similar method:


sshfs bucketuser@dns-name:/mnt/bucket/ /path/to/local/mountpoint

Edit the connection timeout annotation if needed

Now, if you set up a deployment with three replicas and have all three sshfs-mount from the same service, you essentially have NFS-like shared storage.
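For example, each replica could run something like this at startup to attach the shared bucket (the mount point and options are assumptions, and the container needs FUSE access just like the s3fs pod does):

# Mount the shared bucket from the s3-service over sshfs
sshfs -o StrictHostKeyChecking=no,reconnect bucketuser@s3-service:/mnt/bucket/ /var/www/html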
