The Tridecadal Korean (astralblue) wrote,

Adding a new disk to a ZFS root pool on FreeBSD

Again, this is mostly for my own reference, but if you have the same problem that I had, this article may prove useful to you as well.

OpenSolaris (from which FreeBSD's ZFS codebase came) has this limitation: The “root” pool (i.e. the one that contains a boot filesystem, specified in the bootfs pool property) can only be a simple pool or a mirrored pool, not a raidz or any other fancier type.  The main problem seems to be that the OpenSolaris ZFS boot loader has trouble reading from disks other than the “boot disk”: If anything required for booting (e.g. the kernel) resides on a non-boot disk, the system fails to boot.  Yes, boo.

There is a safeguard against this in the zpool program: If you tell it to add another vdev to a root pool (that is, one with bootfs set) in a non-mirrored setup, it complains: “root pool cannot have multiple vdevs or separate logs”.

The thing is, the FreeBSD ZFS boot loader does not suffer from the same limitation as its OpenSolaris counterpart, and the system can indeed boot from a pool with multiple top-level vdevs, or even from a raidz pool.  The boot loader enumerates all hard drives visible through BIOS, examines metadata on ZFS partition(s) on each drive to figure out which partition belongs to what pool, then mixes and matches them as necessary to reconstruct a complete root pool from which to boot.  In other words, it is safe to add more top-level vdevs to a FreeBSD boot pool.
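Incidentally, the per-partition metadata that the boot loader examines can also be inspected from the running system with zdb.  A sketch, using the partition label from the example below (the exact output format varies between ZFS versions):

```shell
# Print the ZFS vdev labels stored on a partition; these labels carry
# the pool name and GUIDs that the boot loader uses to match each
# partition to its pool.  The device path is illustrative: substitute
# a gpt/ label that actually exists on your system.
zdb -l /dev/gpt/mail0-001
```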

Problem: The safeguard mentioned above is still present in the FreeBSD version of zpool.  As long as a pool has the bootfs property set, you cannot add another top-level vdev to it in a non-mirrored setup.

Well, the workaround is obvious now, isn't it?  It is as simple as temporarily clearing the bootfs property, as shown in the example below, where I create a new GPT partition labeled mail0-002 on the disk da1 then add it to the root pool named mail0:

# gpart create -s GPT da1
da1 created
# gpart bootcode -b /boot/pmbr da1
da1 has bootcode
# gpart add -t freebsd-zfs -l mail0-002 da1
da1p1 added
# zpool add mail0 gpt/mail0-002
cannot add to 'mail0': root pool cannot have multiple vdevs or separate logs
# zpool get bootfs mail0
NAME   PROPERTY  VALUE   SOURCE
mail0  bootfs    mail0   local
# zpool set bootfs="" mail0
# zpool get bootfs mail0
NAME   PROPERTY  VALUE   SOURCE
mail0  bootfs    -       default
# zpool add mail0 gpt/mail0-002
# zpool status mail0
  pool: mail0
 state: ONLINE
 scrub: none requested
config:

	NAME             STATE     READ WRITE CKSUM
	mail0            ONLINE       0     0     0
	  gpt/mail0-001  ONLINE       0     0     0
	  gpt/mail0-002  ONLINE       0     0     0

errors: No known data errors
# zpool set bootfs=mail0 mail0
# zpool get bootfs mail0
NAME   PROPERTY  VALUE   SOURCE
mail0  bootfs    mail0   local

Important: Do not forget to restore the bootfs property as shown in the last two commands (use the original value as returned by a previous zpool get command); the system will otherwise fail to boot.
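To avoid mistyping the value when restoring it, the property can be saved into a shell variable before clearing it.  A sketch, assuming a Bourne-style shell and a zpool recent enough to support the -H and -o flags on zpool get (on older releases, note the value by hand from plain zpool get bootfs output instead):

```shell
# Save the current bootfs value, clear it, add the new vdev, restore it.
# Pool and partition names are taken from the article's example;
# substitute your own.
BOOTFS="$(zpool get -H -o value bootfs mail0)"
zpool set bootfs="" mail0
zpool add mail0 gpt/mail0-002
zpool set bootfs="${BOOTFS}" mail0
```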

Note that, even though you can add more disks to your root pool this way, you may still want to limit the number of devices in your root pool to a minimum.  FreeBSD's ZFS boot loader—I use gptzfsboot—seems to take a long time to examine a disk.  “mail0” shown above has 3 disks, and it already takes about 10-20 seconds for gptzfsboot to finish scanning the 3 disks for ZFS partitions and start loading the next-stage BTX loader (/boot/loader) from them.

Tags: freebsd, zfs
