As I relied on grive to sync builds between my local machine and Google Drive, where the builds were stored, I found out (at work, ironically, since we use some Google APIs) that Google shut down some of their APIs on 20th April. That killed some of our functionality at work, and it also killed grive, which now fails with some really cryptic messages in the console window. Nonetheless, I found that an alternative, “drive”, works, although it is a hell of a lot slower.
The build script now builds correctly under Utopic. I have also modified the Jenkins job to include the commit version, so you can see which commit was active at the time of the build.
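For anyone wanting to do something similar, a minimal sketch of how a build script can capture the checked-out commit and stamp it into the artifact name. This is not my actual Jenkins configuration; the artifact name and the fallback value are assumptions for illustration.

```shell
# Grab the short commit hash of whatever Jenkins checked out;
# fall back to "unknown" if we are not inside a git repository.
COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)

# Embed the commit in the artifact name so each build is traceable.
ARTIFACT="docker-${COMMIT}.tar.gz"
echo "Building ${ARTIFACT}"
```

Jenkins also exposes the commit as the GIT_COMMIT environment variable in jobs that use its git plugin, which would work just as well.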
An informative article from Linux Journal on Docker.
I have started posting up my builds of Docker.io. They are unofficial, and unsupported by the community, pending official support and code release supporting 32-bit architectures.
I have set up my system to auto-build every week and post to this shared directory. There’s a readme in the shared folder.
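The weekly auto-build boils down to a cron job. A sketch of what such an entry might look like, with hypothetical paths (the script name and sync target are assumptions, not my actual setup):

```shell
# Write a crontab fragment: every Sunday at 03:00, run the build
# script and, if it succeeds, push the output to the shared folder
# with the "drive" tool mentioned above.
cat > /tmp/docker-build.cron <<'EOF'
0 3 * * 0 /home/user/bin/build-docker.sh && drive push docker-builds
EOF

# Install it with:  crontab /tmp/docker-build.cron
```

The five cron fields are minute, hour, day-of-month, month, and day-of-week (0 = Sunday).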
Some changes to the Docker.io code have caused the build script to fail. This was down to the code now using btrfs for a storage driver. It took me a while to figure out how to fix the error message, but the script now works. You have to add this chunk of code anywhere before the main Docker build:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
mv btrfs-progs btrfs  # Needed to include into Docker code
export PATH=$PATH:$(pwd)
cd btrfs
make || (echo "btrfs compile failed" && exit 1)
export C_INCLUDE_PATH=$C_INCLUDE_PATH:$(pwd)  # Might not be needed
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$(pwd)  # Might not be needed
echo PATH: $PATH
cd ..
Virtualisation, sandboxes, containers. All terms and technologies used for various reasons. Security is not always the main reason, but considering the details in this article, it is a valid point. It is simple enough to set up a container on your machine. LXC/Linux Containers, for example, don’t have as much overhead as a VirtualBox or VMware virtual machine and can run almost, if not just, as fast as a native installation (I’m using LXC for my Docker.io build script). More to the point, if you use a container and it is infected with malware, you can drop and rebuild the container, or roll back to a snapshot, much more easily than reimaging your machine.
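The drop-and-rebuild workflow described above can be sketched as a small helper, assuming LXC 1.x and a hypothetical container named "build" (the function name and container name are my own for illustration):

```shell
# Roll a container back to its newest snapshot, or, if it has no
# snapshots, destroy it and rebuild it clean from the ubuntu template.
rollback_or_rebuild() {
    name="$1"
    if lxc-snapshot -n "$name" -L 2>/dev/null | grep -q '^snap'; then
        # Snapshots are named snap0, snap1, ... so the last one sorted
        # is the most recent.
        latest=$(lxc-snapshot -n "$name" -L | awk '{print $1}' | sort | tail -n 1)
        lxc-snapshot -n "$name" -r "$latest"
    else
        # No snapshot to fall back on: drop and rebuild from scratch.
        lxc-destroy -n "$name" -f
        lxc-create -n "$name" -t ubuntu
    fi
}

# Usage:  rollback_or_rebuild build
```

Taking a snapshot before risky work is just `lxc-snapshot -n build`; either way the recovery is far quicker than reimaging a physical machine.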
Right now I run three different environments. One is my main Ubuntu Studio, which is not a container but my core OS. The second is my Docker.io build LXC, which I rebuild every time I compile (I now have that tied into Jenkins, so I might put up regular builds somehow). The final one is a VirtualBox virtual machine running Windows 7 so I don’t have to dual boot.
Interestingly, after upgrading to Ubuntu Utopic Unicorn, the build script I made for Docker.io fails during the Go build. Something in the Utopic minimal install disagrees with the Go build script, so for now you will have to force the LXC container to use Trusty instead:
lxc-create -n Ubuntu -t ubuntu -- --release trusty --auth-key /home/user/.ssh/id_rsa.pub