Building on 32-bit arch

NOTE: Automated 32-bit-enabled builds are now available. See this page for link details.

EDIT 29th September 2015: This article seems to be quite popular. Note that Docker has progressed a long way since I wrote this and it has pretty much broken the script due to things being moved around and versions being updated. You can still read this to learn the basics of LXC-based in-container compiling, and if you want to extend it, go right ahead. When I get a chance, I will try to figure out why this build keeps breaking.

Steps to compile and build a Docker binary on a 32-bit architecture, to work around the fact that the community does not support anything other than 64-bit at present, even though this restriction has been flagged up many times.

A caveat, though. As the binary you compile via these steps will work on a 32-bit architecture, the Docker images you download from the cloud may NOT work, as the majority are meant for 64-bit architectures. So if you compile a Docker binary, you will have to build your own images. Not too difficult — you can use LXC or debootstrap for that. Here is a quick tutorial on that.
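As a sketch of what building your own base image could look like, the following uses debootstrap to create a minimal 32-bit rootfs and feeds it to `docker import`. The suite, architecture, mirror, directory, and image tag here are illustrative choices, and the commands assume debootstrap is installed and you are running as root.

```shell
# Build a minimal 32-bit Ubuntu rootfs (requires root and network access).
# "trusty", "./rootfs-i386" and the mirror URL are example values.
debootstrap --arch i386 trusty ./rootfs-i386 http://archive.ubuntu.com/ubuntu

# Pack the rootfs into a tarball and import it as a Docker image using
# your freshly-built 32-bit docker binary. "ubuntu-i386:trusty" is just
# an example tag.
tar -C ./rootfs-i386 -c . | docker import - ubuntu-i386:trusty
```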

I am using an LXC container to do my build, as it helps control the packages and reduces the chance of a conflict between two versions of a package (e.g. one “dev” version and one “release” version). The LXC container is also disposable: I can get rid of it each time I do a build.

I utilise several scripts: one to do a build/rebuild of my LXC container; one to start up my build-environment LXC container, run the build, and take the container down afterwards; and the other, the actual build script. To make it more automated, I set up my LXC container to allow a passwordless SSH login (see this link). This means I can scp into my container and copy to and from it without having to enter my password. Useful, because the container can take a few seconds to start up. It does open security problems, but as long as the container is only up for the duration of the build, this isn’t a problem.
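For reference, a minimal sketch of preparing a dedicated, passphrase-less key pair for those passwordless logins. The key path here is illustrative, not the one my scripts use; the public half is what gets injected into the container via lxc-create’s `--auth-key` option in Script 1.

```shell
# Generate an RSA key pair with an empty passphrase (-N "").
# "./build_key" is an illustrative path for this demo only.
ssh-keygen -t rsa -N "" -f ./build_key -q

# The .pub file is the one to hand to lxc-create --auth-key.
ls ./build_key ./build_key.pub
```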

EDIT: One note. If you have upgraded to Ubuntu’s Utopic Unicorn version, you may end up having to enter your GPG keyring password a few times.

EDIT 2: A recent code change has caused this build script to break. More details, and how to fix it, are in this post.

As with most things Linux and script-based, there are many ways to do the same thing. This is just one way.

Script 1:

This script does the rebuild of my LXC build environment

First, I take down the container if it already exists. I named my container “Ubuntu” for simplicity.

lxc-stop -n Ubuntu

Next, destroy it.

lxc-destroy -n Ubuntu

Now create a new one using the Ubuntu template. Here, I also inject my SSH public key into the container so I can use passwordless SSH.

IMPORTANT NOTE: If you are NOT running Ubuntu Trusty, you MUST use the “--release” option. If you are running on an x86 architecture and want to compile a 32-bit version, you MUST also use “--arch i386” (otherwise LXC will pull the amd64 packages down instead). There is a problem with the Go build script with Utopic. Hopefully to be fixed at some point in the future.

lxc-create -n Ubuntu -t ubuntu -- --release trusty --arch i386 --auth-key /home/user/.ssh/

Start up the container, and send it to the background

lxc-start -n Ubuntu -d

Wait till LXC reports the IP address of the container, then assign it to a variable for reuse later. We do this by waiting for LXC to report the IP then running ‘ifconfig’ within the container to get the IP as seen by the container. The ‘lxc-info’ command can return two IP addresses — the actual one, and the bridge, and it is not always obvious which one is which.

while [ $(lxc-info -n Ubuntu | grep IP: | sort | uniq | unexpand -a | cut -f3 | wc -l) -ne 1 ]; do
    sleep 1s
done
IP=$(lxc-attach -n Ubuntu -- ifconfig | grep 'inet addr' | head -n 1 | cut -d ':' -f 2 | cut -d ' ' -f 1)

echo Main IP: $IP

The container is now set up, so take it down.

lxc-stop -n Ubuntu

Script 2:

This script is the wrapper script around the build process. It starts up the build container, runs the build in the container, then pulls the resulting output from the container after the build is done, extracting it to the current folder.

First, check if we should rebuild the build environment. I normally do, to guarantee a clean slate each time I run the build.

echo -n "Rebuild Docker build environment (Y/N)? "
read REPLY
case "$REPLY" in
    [Yy])
        echo Rebuilding docker build environment
        ./ #If you want to rebuild the LXC container for each build
        ;;
    *)
        echo Not rebuilding docker build environment
        ;;
esac

Start/restart the build container

lxc-stop -n Ubuntu
lxc-start -n Ubuntu -d

Get the IP address of the container

while [ $(lxc-info -n Ubuntu | grep IP: | sort | uniq | unexpand -a | cut -f3 | wc -l) -ne 1 ]; do
    sleep 1s
done
IP=$(lxc-attach -n Ubuntu -- ifconfig | grep 'inet addr' | head -n 1 | cut -d ':' -f 2 | cut -d ' ' -f 1)
echo Main Container IP: $IP

Now push the compile script to the container. This will fail whilst the container starts up, so I keep retrying

echo Pushing script to IP $IP
scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa /home/user/ ubuntu@$IP:/home/ubuntu
while [ $? -ne 0 ]; do
    sleep 1s
    scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa /home/user/ ubuntu@$IP:/home/ubuntu
done

With the container started, we can invoke the compile script within the container. This does the build and will take a while.

lxc-attach -n Ubuntu -- '/home/ubuntu/'

Now, after the build is done, pull the results from the container

scp -o StrictHostKeyChecking=no -i /home/user/.ssh/id_rsa ubuntu@$IP:/home/ubuntu/*.txz .

Take down the container

lxc-stop -n Ubuntu

Extract the package for use

for a in *.txz; do
    echo Extracting $a
    tar -xvvvf $a && rm $a
done


Script 3:

This script is run inside the container and performs the actual build. It is derived mostly from the Dockerfile that is included in the repository, with some tweaks.

First, we install the basic packages for compiling

cd /home/ubuntu
echo Installing basic dependencies
apt-get update && apt-get install -y aufs-tools automake btrfs-tools build-essential curl dpkg-sig git iptables libapparmor-dev libcap-dev libsqlite3-dev lxc mercurial parallel reprepro ruby1.9.1 ruby1.9.1-dev pkg-config libpcre* --no-install-recommends

Then we pull the Go repository.

hg clone -u release ./p/go
cd ./p/go/src
cd ../../../

We setup variables for the Go environment

export GOPATH=$(pwd)/go
export PATH=$GOPATH/bin:$PATH:$(pwd)/p/go/bin
export AUTO_GOPATH=1

Next, we pull from the lvm2 repository to build a version of devmapper needed for static linking.

git clone
cd lvm2
(git checkout -q v2_02_103 && ./configure --enable-static_link && make device-mapper && make install_device-mapper && echo lvm build OK!) || (echo lvm2 build failed && exit 1)
cd ..

EDIT: See this link for extra code that should go here.

Next, get the docker source

git clone $GOPATH/src/

Now the important bit. We patch the source code to remove the 64-bit arch restriction.

for f in $(grep -rl 'if runtime.GOARCH != "amd64" {' $GOPATH/src/*); do
    echo Patching $f
    sed -i 's/if runtime.GOARCH != "amd64" {/if runtime.GOARCH != "amd64" \&\& runtime.GOARCH != "386" {/g' $f
done
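To make the effect of the patch concrete, here is the same substitution applied to a single sample line. This is a standalone demo, not part of the build; the sample line mirrors the arch guard being patched.

```shell
# The guard that refuses non-amd64 builds:
line='if runtime.GOARCH != "amd64" {'

# After the substitution, 386 is allowed through as well
# (\& is a literal ampersand in a sed replacement):
echo "$line" | sed 's/if runtime.GOARCH != "amd64" {/if runtime.GOARCH != "amd64" \&\& runtime.GOARCH != "386" {/'
# → if runtime.GOARCH != "amd64" && runtime.GOARCH != "386" {
```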

Finally, we build docker. We utilise the Docker build script, which gives a warning as we are not running in a docker environment (we can’t at this time, since we have no usable docker binary)

cd $GOPATH/src/
./hack/ binary
cd ../../../../../

Assuming the build succeeded, we should be able to bundle the binaries (this will be copied off by the script)

cd go/src/
for a in
do
    echo Creating $a.txz
    tar -cJvvvvf $a.txz $a
done
mv *.txz ../../../../../../
cd ../../../../../../

And that’s it. How to build docker on a 32-bit arch.
