Using ZFS with LinuxKit and Moby.

After I raised an issue about compiling ZFS, the awesome LinuxKit community designed support for it into their kernel build process.

Here’s a howto on building a simple (SSH) LinuxKit ISO with the ZFS drivers loaded.

Before we start: if you’re not familiar with LinuxKit, or why you’d want to build a LinuxKit OS image, this article may not be for you. For those interested in the what, why and how, the docs give a really good overview:

LinuxKit Architecture Overview.
Configuration Reference.

Step 0. Let’s start at the very beginning.

To kick us off, we’ll take an existing LinuxKit configuration, below. This creates a system with SSH via public-key auth, plus an *insecure* root shell on the local terminal.
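The full YAML isn’t reproduced here, but a minimal sketch of that kind of configuration looks roughly like this (the image tags and the public key are placeholders, not the exact values I used):

```yaml
kernel:
  image: "linuxkit/kernel:<tag>"
  cmdline: "console=tty0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
services:
  # insecure root shell on the local terminal
  - name: getty
    image: "linuxkit/getty:<tag>"
    env:
      - INSECURE=true
  # SSH daemon, authenticating against the key below
  - name: sshd
    image: "linuxkit/sshd:<tag>"
files:
  - path: root/.ssh/authorized_keys
    contents: "<your ssh public key>"
```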

We would then build this into an ISO with the moby command:

moby build -format iso-bios simplessh.yml

Moby goes and packages up our containers, init components and kernel, and gives us a bootable ISO. Awesome.

I booted the ISO in VirtualBox (there is a nicer way of testing images with Docker’s HyperKit using the linuxkit CLI, but VirtualBox is closer to my use case).

Step 1. The ZFS Kernel Modules

The ZFS kernel modules aren’t going to get shipped in the default LinuxKit kernel images (such as the one we’re using above). Instead, they can be compiled and added to your build as a separate image.

Here is one I built earlier; notice the extra container entry under the init: section, coming from my Docker Hub repo (trxuk). You’ll see I’m also using a kernel that was built at the same time, so I’m 100% sure they are compatible.

We have also added a “modprobe” onboot: service; this just runs modprobe zfs to load the ZFS module and its dependencies (SPL, etc.) on boot.
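In the YAML, that service entry would look something like this (the image tag is a placeholder):

```yaml
onboot:
  - name: modprobe
    image: "trxuk/modprobe:<tag>"
    # load the zfs module (and its dependencies) at boot
    command: ["modprobe", "zfs"]
```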

The metadata in the modprobe image automatically maps /lib/modules and /sys into the modprobe container, so it can access the modules it needs to load.

For those interested, you can see that metadata here in the package build:

Also, it’s worth highlighting that this module is still a PR against linuxkit, hence the image is also coming from my own Docker Hub repo at the moment:

1.1 Using my images

As my Docker Hub repos are public, you can go ahead and use the configuration I’ve provided above to use my builds of the kernel, ZFS modules and modprobe package. You’ll end up booting into a system that looks like this:

Huzzah! Success!

The observant amongst you may have noticed that the SSH image is also now coming from my trxuk/sshd Docker Hub repo; this is because I’ve edited it to have the ZFS userspace tools (zpool, zfs) built in, instead of having to run the following on each boot:

1.2 Building your own ZFS image

I built the images above using work recently committed into the linuxkit repo by Rolf Neugebauer (thanks to him!). If you want to build your own, you now easily can:
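The exact commands aren’t shown here, but the rough shape is something like the sketch below. The make target and ORG variable are assumptions on my part; check the Makefile under kernel/ in the linuxkit repo before running this.

```shell
# Sketch only: exact targets and variables may differ from the linuxkit
# repo at the time you read this.
git clone https://github.com/linuxkit/linuxkit.git
cd linuxkit/kernel
# Build the kernel image plus the matching zfs-kmod image, and push
# them to your own Docker Hub organisation.
make ORG=<your-dockerhub-user> push
```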

Once those commands have finished, you’ll have two new Docker Hub repos: one containing the kernel, the other the matching zfs-kmod image to use in the init: section.

There is an issue currently preventing the zfs-kmod image from working with modprobe (depmod appears to run in the build, but the output modules.dep doesn’t end up including zfs). I’ll be opening a PR to resolve this; if you’re building your own module as above, you may want to hold off.

I hope this helps; I’ve only just got to this stage myself (very much by standing on the shoulders of others!), so watch out for deeper ZFS and LinuxKit articles coming soon! 🙂


ZFS on Linux resilver & scrub performance tuning

Improving Scrub and Resilver performance with ZFS on Linux.

I’ve been a longtime user of ZFS, since the internal Sun betas of Solaris Nevada (OpenSolaris).
However, for over a year I’ve been running a single box at home to provide file storage (ZFS) and VMs, and as I work with Linux day to day, I chose to do this on CentOS, using the native ZFS on Linux port.

I had a disk die last week in a two-disk mirror.
Replacement was easy; however, resilvering was way too slow!

After hunting for some performance tuning ideas, I came across this excellent post for Solaris/IllumOS ZFS systems and wanted to translate it for Linux ZFS users.

The post covers the tunable parameter names and why we are changing them, so I won’t repeat (or shamelessly steal) it. What I will do is show that under Linux they can be set just like regular kernel module parameters:

[root@host ~]# ls /etc/modprobe.d/
anaconda.conf  blacklist.conf  blacklist-kvm.conf  dist-alsa.conf  dist.conf  dist-oss.conf  openfwwf.conf  zfs.conf

[root@host ~]# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=2147483648 zfs_top_maxinflight=64 zfs_resilver_min_time_ms=5000 zfs_resilver_delay=0

Here you can see I have raised the in-flight IO limit (zfs_top_maxinflight) from 32 to 64, increased the minimum resilver time slice (zfs_resilver_min_time_ms) from 3 to 5 seconds, and set the resilver delay to zero. Parameters can be checked after a reboot:

cat /sys/module/zfs/parameters/zfs_resilver_min_time_ms
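As an aside, most ZFS module parameters are also writable at runtime through sysfs (assuming your build exposes them as writable), so you can experiment before committing values to zfs.conf:

```shell
# Runtime tweak sketch: takes effect immediately, but is lost on reboot.
echo 0  > /sys/module/zfs/parameters/zfs_resilver_delay
echo 64 > /sys/module/zfs/parameters/zfs_top_maxinflight
# confirm the new value
cat /sys/module/zfs/parameters/zfs_top_maxinflight
```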

Result: After a reboot, my resilver speed increased from ~400KB/s to around 6.5MB/s.

I didn’t tweak any more; it was good enough for me and I had other things to get on with.

One day I’ll revisit these to see what other performance I can get out of it. (I’m aware that on my box, the RAM limitation is keeping ZFS from being ‘blazing fast’ anyway.)

Happy Pools!

[root@host ~]# zpool status
  pool: F43Protected
 state: ONLINE
  scan: resilvered 134G in 2h21m with 0 errors on Tue Jun 24 01:07:12 2014

  pool: F75Volatile
 state: ONLINE
  scan: scrub repaired 0 in 5h41m with 0 errors on Tue Feb 4 03:23:39 2014



OpenSolaris / Solaris Express to Solaris 11 boot Issues

I have had a trusty Solaris box at home now for 5-6 years running a few things:
– ZFS for my files, shared out through SMB for media, and iSCSI for playing with netbooting and VMware shared storage.

– Xen (more recently), running on a Solaris Dom0 hosting a number of CentOS 5 DomUs for other Linux server-based stuff.

– Multicast/Bonjour spoofing and Apple file sharing, making an excellent ‘fake’ Time Machine for backing up my MacBook Pro onto ZFS (works flawlessly, and doesn’t have a single disk prone to failure, unlike the Time Capsules).

Over that time, I’ve either upgraded in place, or overwritten the OS and let the new version of Solaris import the ZFS pool, moving through:

Solaris 10, Solaris SNV_8X (Sun Internal), Solaris SNV_9X (Sun Internal), OpenSolaris (SNV_1XX), Oracle Solaris Express (SNV_151).

And everything was pretty much good 🙂 Until now, that is, when I tried to take the latest update, moving to the newly released Solaris 11.

Lots of things have changed in Solaris 11 compared to the SNV/OpenSolaris/Solaris Express years (I’m not saying there weren’t a lot of changes during that time, just none that negatively affected me, whereas these do):

– Support removed for Linux-branded Solaris Zones.
– Support removed for Solaris 11 acting as a Xen Dom0, or indeed being the base of any form of virtualization solution apart from Solaris Zones and VirtualBox (guessing this is to allow Oracle to push its own virtualization product).
– No check in the ‘pkg update’ procedure as to whether the Xen kernel was in use before the upgrade.

So, cutting to the POINT OF THE POST: I updated, a new boot environment was created, the update was successful, I rebooted, and the boot fails!

You could just boot the previous boot environment, which works, but here is what you’ll need to do to boot the new BE:

1. Open the grub menu.lst at /rpool/boot/grub/menu.lst.
2. Find the last entry in the file (named after the boot environment you’re having issues with).
3. Remove the references to Xen, as below.


Before (Xen entry):

title example-solaris-1
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/example-solaris-1
kernel$ /boot/$ISADIR/xen.gz console=vga dom0_mem=2048M dom0_vcpus_pin=false watchdog=false
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive


After (direct boot):

title example-solaris-1
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/example-solaris-1
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

We have just removed the Xen kernel and its options, and instead told grub to boot the ‘normal’ Solaris kernel. It seems pkg update doesn’t check for this when upgrading.

Now reboot and try the boot environment from the grub menu; it should load fine, and after some information about upgrading the SMF versions, you’ll be ready to log in.

The second issue I found after this is that my SMB shares were not available; it seemed the SMB service was stopped due to dependencies. Starting the following services magically brought my shares back to life:

svcadm enable idmap
svcadm enable smb/client
svcadm enable smb/server
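If the shares still don’t come back, svcs can show why a service is down and what it depends on, using standard Solaris SMF diagnostics:

```shell
svcs -x smb/server   # explain why the service isn't running
svcs -d smb/server   # list its dependencies and their states
```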

Verify with ‘share’:

root@host:~# share
IPC$ smb - Remote IPC
Matt /F43Datapool/Matt smb -
Public /F43Datapool/Public smb -
c$ /var/smb/cvol smb - Default Share

I hope this helps someone. The last thing I have to work out is whether VirtualBox will provide as stable a solution for my Linux VMs as Xen did (as it seems to be the only option I have now, apart from moving back to Linux and losing ZFS/SMF/Crossbow/COMSTAR etc., which I really don’t want to do).

That said, it really annoys me that Oracle have removed such a simple and powerful combination of Xen Dom0 and ZFS from the base Solaris image; it served a perfect need for people who don’t need a full, separate virtualization product, such as testers, home users and small businesses. Why remove Dom0 support but keep DomU support? Anyone know?