Docker.io Builds Page For 32-bit Architectures

I have started posting up my builds of Docker.io. They are unofficial and unsupported by the community, pending official support and a code release for 32-bit architectures.

https://drive.google.com/drive/u/0/#folders/0Bx5ME-up1Usbb2JMdVBvNGFSTUE

I have set up my system to auto-build every week and post to this shared directory. There's a readme in the shared folder.
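For the curious, the weekly trigger can be as simple as a cron job. A minimal sketch, assuming a hypothetical build script and upload helper (both paths are placeholders, not my actual scripts):

    # Crontab entry (add via `crontab -e`): build every Sunday at 03:00
    # and, if the build succeeds, upload the binaries to the shared folder.
    0 3 * * 0 $HOME/bin/docker-build.sh && $HOME/bin/upload-builds.sh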

How Splitting A Computer Into Multiple Realities Can Protect You From Hackers

Virtualisation, sandboxes, containers. All terms and technologies used for various reasons. Security is not always the main reason, but considering the details in this article, it is a valid one. It is simple enough to set up a container on your machine. LXC/Linux Containers, for example, don't have as much overhead as a VirtualBox or VMware virtual machine and can run almost, if not just, as fast as a native installation (I'm using LXC for my Docker.io build script). Conceptually, if you use a container and it is infected with malware, you can drop and rebuild the container, or roll back to a snapshot, much more easily than reimaging your machine.
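To illustrate the drop-and-rebuild idea, here is a minimal sketch using the stock LXC command-line tools (the container name "sandbox" and the template are just examples):

    # Snapshot a stopped container before doing anything risky.
    lxc-stop -n sandbox
    lxc-snapshot -n sandbox            # creates snap0, snap1, ...

    # If the container is compromised, roll back to the snapshot...
    lxc-snapshot -n sandbox -r snap0

    # ...or simply throw it away and rebuild it from a template.
    lxc-destroy -n sandbox
    lxc-create -n sandbox -t ubuntu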

Right now I run three different environments. One is my main Ubuntu Studio install, which is not a container but my core OS. The second is my Docker.io build LXC, which I rebuild every time I compile (and I now have that tied into Jenkins, so I might put up regular builds somehow). The final one is a VirtualBox virtual machine that runs Windows 7 so I don't have to dual boot.

How Splitting A Computer Into Multiple Realities Can Protect You From Hackers | WIRED.

Building Docker.io on 32-bit arch

NOTE: Automated 32-bit-enabled builds are now available. See this page for link details.

EDIT 29th September 2015: This article seems to be quite popular. Note that Docker has progressed a long way since I wrote this, and the script is pretty much broken due to things being moved around and versions being updated. You can still read this to learn the basics of LXC-based in-container compiling, and if you want to extend it, go right ahead. When I get a chance, I will try to figure out why this build keeps breaking.

Steps to compile and build a docker.io binary on a 32-bit architecture, to work around the fact that the community does not support anything other than 64-bit at present, even though this restriction has been flagged up many times.

A caveat, though. While the binary you compile via these steps will work on a 32-bit architecture, the Docker images you download from the cloud may NOT, as the majority are built for 64-bit architectures. So if you compile a Docker binary, you will have to build your own images. That is not too difficult: you can use LXC or debootstrap for it. Here is a quick tutorial on that, and a rough sketch below.
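As a rough idea of the debootstrap route, the following sketch bootstraps a 32-bit Ubuntu root filesystem and imports it as a local Docker base image (the release and the image tag are just examples):

    # Bootstrap a minimal 32-bit (i386) Ubuntu userland into ./rootfs.
    sudo debootstrap --arch=i386 trusty ./rootfs http://archive.ubuntu.com/ubuntu/

    # Pack it up and import it as a local Docker base image.
    sudo tar -C rootfs -c . | docker import - local/trusty-i386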

I am using an LXC container to do my build, as it helps control the packages and reduces the chances of a conflict between two versions of a package (e.g. one "dev" version and one "release" version). The LXC container is also disposable: I can get rid of it each time I do a build.

I utilise several scripts: one to build or rebuild my LXC container; one to start up my build-environment LXC container and take it down afterwards; and the other, the actual build script. To make it more automated, I set up my LXC container to allow a passwordless SSH login (see this link). This means I can scp into the container and copy files to and from it without having to enter my password, which is useful because the container can take a few seconds to start up. It does open security holes, but as long as the container is only up for the duration of the build, this isn't a problem.
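The passwordless login is plain SSH public-key authentication. A minimal sketch, assuming a hypothetical container address of 10.0.3.100 and user "builder" (substitute your own):

    # Generate a key pair with no passphrase, if you don't already have one.
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

    # Install the public key into the container's authorized_keys.
    ssh-copy-id builder@10.0.3.100

    # From now on, scp works without a password prompt.
    scp build.sh builder@10.0.3.100:/home/builder/
    scp builder@10.0.3.100:/home/builder/docker-binary .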

EDIT: One note. If you have upgraded to Ubuntu’s Utopic Unicorn version, you may end up having to enter your GPG keyring password a few times.

EDIT 2: A recent code change has caused this build script to break. More details, and how to fix it on this post.

As with most things Linux and script-based, there are many ways to do the same thing. This is just one of them.


Linux Containers

After much tinkering and cursing, I finally managed to get a Linux container running. I had originally wanted a Fedora container, but for some unknown reason, it would not start. Instead, I tried a CentOS 6 container, and that started up successfully, so I am using that instead. It is actually a good thing, because I can tinker with the CentOS container, experiment with different configurations, and maybe practise setting it up as a proper (i.e. no GDM) server. This will help if I decide to go for a Red Hat-themed Linux certification.

Still bugging me why the Fedora 20 container won’t start, though.

Virtualisation

Wow, you learn something new every day. I've just found out about two variations on virtualisation: Linux Containers (LXC) and Vagrant.

Linux Containers (LXC) is known as OS-level virtualisation, meaning the kernel looks after the virtualisation and there is no need for extra management software along the lines of VMware or VirtualBox. The guest OSes run as containers, similar to chroot jails, and every container shares the kernel and resources of the host you booted from. As such, LXC only supports Linux-based guest OSes; you can't (easily, anyway) run Windows under LXC. Homepage, Wikipedia.
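The basic lifecycle is only a handful of commands. A minimal sketch using the standard LXC tools (the container name is an example, and the ubuntu template assumes an Ubuntu host):

    # Create a container from a distribution template.
    sudo lxc-create -n test1 -t ubuntu

    # Boot it in the background, then get a shell inside it.
    sudo lxc-start -n test1 -d
    sudo lxc-attach -n test1

    # List containers, then shut this one down again.
    sudo lxc-ls --fancy
    sudo lxc-stop -n test1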

Vagrant is a strange one. It sells itself as a way to keep development environments consistent, and I can understand why: if you have a team of people all with a VM of the same OS, but they end up with different results because they have tinkered with the settings of the VM OS, Vagrant prevents this by keeping the core image in the cloud. Each time the machine is started up, it checks itself against the cloud version, updating itself if needed. That guarantees consistency. Homepage, Wikipedia.
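The workflow itself is tiny. A minimal sketch (the box name is the classic 32-bit example box, used purely for illustration; short box names need Vagrant 1.5 or later):

    # Write a Vagrantfile that points at a box hosted in the cloud.
    vagrant init hashicorp/precise32

    # Download the box (first run only), boot the VM, and SSH in.
    vagrant up
    vagrant ssh

    # Throw the VM away; the next `vagrant up` rebuilds it identically.
    vagrant destroy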

I haven’t tried both of these tools in great detail yet, but here’s some related links for you to check out:

Corporate Linux

Virtualization madness

Had my first encounter with Linux, or more specifically a Linux-like environment, in a corporate setting. The IT people were trying to set up an environment on XenServer, and they had set up a storage space to copy a virtual machine image onto. But they kept running out of space. It took me a while to figure out what they were doing (wrong), though.

They were trying to copy onto the PV partition, but XenServer had set up its environment to use LVM, so the PV partition was already allocated to the LVM system and therefore had no free space to copy onto.
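The standard LVM tools make this easy to see. A quick sketch (names and output are illustrative):

    # Show physical volumes: a fully allocated PV shows PFree as 0.
    sudo pvs

    # Show the volume groups and the logical volumes carved out of them.
    sudo vgs
    sudo lvs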

After figuring out which LV was the one they wanted to use, I had problems mounting it, with mount saying I had to specify the filesystem. After trying various switches with mount and specifying a filesystem (only NFS, ext, ext2 and ext3 were supported by XenServer; no vfat, ntfs or btrfs, although admittedly the XenServer version the IT people were using was an older one), I found out that the IT people had created the storage space but not done anything else with it. That would explain why I couldn't mount it: it hadn't been formatted. A simple mkfs.ext3 (remember, ext4 wasn't supported) on the block device in /dev/mapper/ meant I could mount it without specifying a filesystem. scp'ing into the server and copying a file into the path proved it worked.
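Put together, the fix was along these lines (the volume group and LV names are placeholders; the first two commands run on the XenServer host):

    # Format the never-formatted logical volume as ext3 (ext4 unsupported).
    mkfs.ext3 /dev/mapper/VG_storage-vm_images

    # Now mount works without the -t switch.
    mount /dev/mapper/VG_storage-vm_images /mnt

    # From a remote machine, copy the VM image across to prove it works.
    scp vm-image.img root@xenserver:/mnt/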

 
