How Splitting A Computer Into Multiple Realities Can Protect You From Hackers

Virtualisation, sandboxes, containers. All terms and technologies used for various reasons. Security is not always the main reason, but considering the details in this article, it is a valid one. It is simple enough to set up a container on your machine. LXC (Linux Containers), for example, doesn't carry as much overhead as a VirtualBox or VMware virtual machine and can run almost, if not just, as fast as a native installation (I'm using LXC for my Docker.io build script). Conceptually, if a container is infected with malware, you can drop and rebuild the container, or roll back to a snapshot, far more easily than reimaging your whole machine.
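
The drop-and-rebuild idea can be sketched in a few LXC commands. This is only an outline: the container name "sandbox" is made up, and the little helper prints each command instead of running it unless DO_RUN=1 is set, so you can eyeball them first.

```shell
#!/bin/sh
# Print each LXC command unless DO_RUN=1 is set; "sandbox" is a hypothetical container.
container_cmd() {
    if [ "${DO_RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi
}

container_cmd lxc-snapshot -n sandbox            # snapshot the known-clean state
container_cmd lxc-snapshot -n sandbox -r snap0   # roll back to snapshot snap0 after an infection
container_cmd lxc-destroy -n sandbox             # or drop the container entirely and rebuild it
```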

Right now I run three different environments. The first is my main Ubuntu Studio install, which is not a container but my core OS. The second is my Docker.io build LXC, which I rebuild every time I compile (I now have that tied into Jenkins, so I might put up regular builds somehow). The final one is a VirtualBox virtual machine running Windows 7, so I don't have to dual boot.

How Splitting A Computer Into Multiple Realities Can Protect You From Hackers | WIRED.

Building Docker.io on Ubuntu 32-bit

Interestingly, after upgrading to Ubuntu Utopic Unicorn, the build script I made for Docker.io fails during the Go build. Something in the Utopic minimal install does not agree with the Go build script, so for now you will have to force the LXC container to use Trusty instead:

lxc-create -n Ubuntu -t ubuntu -- --release trusty --auth-key /home/user/.ssh/id_rsa.pub
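
Once the container is built, it's worth confirming it really came up as Trusty rather than Utopic. A rough check, assuming the container name "Ubuntu" from the command above (the helper just compares the reported codename against what we asked for):

```shell
#!/bin/sh
# Tiny helper: does the reported release codename match what we asked lxc-create for?
is_release() {
    [ "$1" = "$2" ]
}

# On the host, with the container running, you would feed it the real value:
#   actual=$(lxc-attach -n Ubuntu -- lsb_release -cs)
actual=trusty   # placeholder; replace with the lxc-attach output above
if is_release "$actual" trusty; then
    echo "container is on trusty"
fi
```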

Google Music


I am getting pretty peeved with Google recently. I have a huge amount of music in my Google Music library; so much, in fact, that I hit Google's track limit for uploads. Now I'm trying to download my purchased music back to my machine, but their MusicManager is winding me up no end. It downloads for a while, then stops, thinking it has finished, with several tracks not downloaded. I restart the download, it goes on a bit more, then stops again.

Google suggested a few things, eventually ending up blaming my ISP. But there isn't much alternative for me. Other than my current ISP, I can only use my corporate connection, which requires a proxy (something Google do not support in MusicManager), or Tor, which also doesn't work properly. They suggested using the Google Music app, but that only works (when it works at all) on a single album at a time.

I even tried using AWS and Google Cloud, but the app ties itself to the MAC address and refuses to identify my machine (which is a virtual machine). I also tried using an LXC container, and that worked for a bit longer, but also died. So now I'm trying a Docker image. Slightly different concept, but let's see if it works.
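
Since MusicManager seems to key off the MAC address, one thing worth trying with the LXC route is pinning the container's MAC in its config, so the "machine" looks identical on every boot. A sketch of the relevant lines (the container name, path, and address are all made up; 00:16:3e is the prefix LXC normally assigns):

```
# /var/lib/lxc/music/config  (hypothetical container name and path)
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:aa:bb:cc   # fixed MAC so the machine identity stays stable
```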

If that doesn’t work, I’m going to try using TAILS.

EDIT: The Docker image didn't work either. So anything with a "true" virtual environment, such as AWS, Google Cloud, and Docker, doesn't seem to work (VirtualBox will probably be in this list too), while anything else (LXC, for example) will work for a while but fail later.

Building Docker.io on 32-bit arch

NOTE: Automated 32-bit-enabled builds are now available. See this page for link details.

EDIT 29th September 2015: This article seems to be quite popular. Note that Docker has progressed a long way since I wrote this, and the changes have pretty much broken the script, due to things being moved around and versions being updated. You can still read this to learn the basics of LXC-based in-container compiling, and if you want to extend it, go right ahead. When I get a chance, I will try to figure out why this build keeps breaking.

Steps to compile and build a docker.io binary on a 32-bit architecture, to work around the fact that the community does not support anything other than 64-bit at present, even though this restriction has been flagged up many times.

A caveat, though. While the binary you compile via these steps will work on a 32-bit architecture, the Docker images you download from the cloud may NOT work, as the majority are built for 64-bit architectures. So if you compile a 32-bit Docker binary, you will have to build your own images. That's not too difficult: you can use LXC or debootstrap for that. Here is a quick tutorial on that.
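
As a sketch of the debootstrap route: the directory and image tag below are made up, and the helper only composes the two commands (which would need root and network access) so they can be inspected before running.

```shell
#!/bin/sh
# Compose the commands for building a 32-bit base image; nothing is executed here.
image_cmds() {
    release=$1; dir=$2; tag=$3
    echo "debootstrap --arch=i386 $release $dir http://archive.ubuntu.com/ubuntu/"
    echo "tar -C $dir -c . | docker import - $tag"
}

# Hypothetical directory and tag:
image_cmds trusty /tmp/trusty32 local/trusty32
```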

I am using an LXC container to do my build as it helps control the packages and reduces the chance of a conflict between two versions of a package (e.g. one "dev" version and one "release" version). On top of that, the LXC container is disposable: I can get rid of it each time I do a build.

I use several scripts: one to build/rebuild my LXC container; one to start up the build-environment container and take it down afterwards; and the actual build script. To make it more automated, I set up my LXC container to allow a passwordless SSH login (see this link). This means I can scp files to and from the container without having to enter my password, which is useful because the container can take a few seconds to start up. It does open up security problems, but as long as the container is only up for the duration of the build, this isn't an issue.
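
One fiddly part is that scp can race the container's boot. A small wait loop along these lines (the IP address in the comment is made up) stops the build script copying files before sshd is up:

```shell
#!/bin/sh
# Poll until the given host answers on port 22, up to a number of tries.
wait_for_ssh() {
    host=$1; tries=${2:-30}
    while [ "$tries" -gt 0 ]; do
        nc -z -w 1 "$host" 22 2>/dev/null && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Typical use in the build script (hypothetical container IP):
#   wait_for_ssh 10.0.3.100 && scp build.sh ubuntu@10.0.3.100:
```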

EDIT: One note. If you have upgraded to Ubuntu’s Utopic Unicorn version, you may end up having to enter your GPG keyring password a few times.

EDIT 2: A recent code change has caused this build script to break. More details, and how to fix it on this post.

As with most things Linux and script-based, there are many ways to do the same thing. This is just one way.


Linux Containers

After much tinkering and cursing, I finally managed to get Linux Containers running. I had originally wanted a Fedora container, but for some unknown reason it would not start. Instead I tried a CentOS 6 container, which started up successfully, so I am using that instead. It has actually worked out well, because I can tinker with the CentOS container, experiment with different configurations, and maybe practise setting it up as a proper (i.e. no GDM) server. That will help if I decide to go for a Red Hat-themed Linux certification.

It still bugs me why the Fedora 20 container won't start, though.

Virtualisation

Wow, you learn something new every day. I've just found out about two variations on virtualisation: Linux Containers (LXC) and Vagrant.

Linux Containers (LXC) is known as OS-level virtualisation, meaning the kernel itself looks after the virtualisation, and there is no need for extra management software along the lines of VMware or VirtualBox. The guest OSes run as containers, similar to chroot jails, and all containers share the kernel and resources of the host system. As such, LXC only supports Linux-based guest OSes; you can't (easily, anyway) run Windows under LXC. Homepage, Wikipedia.
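
An easy way to see the shared-kernel point for yourself: uname -r should report exactly the same kernel release inside a container as on the host. A sketch (the container name is an assumption):

```shell
#!/bin/sh
# Under OS-level virtualisation, host and container report the same kernel release.
host_kernel=$(uname -r)
echo "host kernel: $host_kernel"

# With a container running, compare (needs root; "centos6" is hypothetical):
#   container_kernel=$(lxc-attach -n centos6 -- uname -r)
#   [ "$host_kernel" = "$container_kernel" ] && echo "same kernel, as expected"
```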

Vagrant is a strange one. It sells itself as a way to keep development environments consistent, and I can understand why: if a team of people all have a VM of the same OS but each ends up with different results because they have tinkered with the settings of their VM, Vagrant prevents this by keeping the core image in the cloud. Each time the machine is started up, it checks itself against the cloud version and updates itself if needed, which guarantees consistency. Homepage, Wikipedia.
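
For flavour, a Vagrant environment is described by a small Vagrantfile (which is Ruby) checked into the project, so everyone on the team boots from the same box definition. A minimal sketch; the box name below is just an example:

```ruby
# Minimal Vagrantfile sketch; the box name is an example, not a recommendation.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # everyone gets the same base image
end
```

Running `vagrant up` then fetches the box if needed and boots it, so each team member starts from an identical baseline.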

I haven’t tried both of these tools in great detail yet, but here’s some related links for you to check out:
