+++
date = 2024-02-06
title = "How to Create a ZFS Pool on Ubuntu Linux"
description = ""
draft = false
+++

This post details the process I used to create ZFS pools, datasets, and snapshots on Ubuntu Server. I found the following pages very helpful while going through this process:

- [Setup a ZFS storage pool](https://ubuntu.com/tutorials/setup-zfs-storage-pool)
- [Kernel/Reference/ZFS](https://wiki.ubuntu.com/Kernel/Reference/ZFS)
- [ZFS for Dummies](https://blog.victormendonca.com/2020/11/03/zfs-for-dummies/)

# Installation

To start, I installed the ZFS package with the following command:

```sh
sudo apt install zfsutils-linux
```

Once installed, you can check the version to confirm it installed correctly.

```sh
> zfs --version
zfs-2.1.5-1ubuntu6~22.04.2
zfs-kmod-2.1.5-1ubuntu6~22.04.1
```

# ZFS Configuration

Now that ZFS is installed, we can create and configure the pool. You have various options for configuring ZFS pools, each with its own pros and cons. I suggest visiting the links at the top of this post or searching online for the best configuration for your use case.

- Striped VDEVs (Raid0)
- Mirrored VDEVs (Raid1)
- Striped Mirrored VDEVs (Raid10)
- RAIDz (Raid5)
- RAIDz2 (Raid6)
- RAIDz3
- Nested RAIDz (Raid50, Raid60)

I will be using Raid10 in this guide. However, the majority of the steps are the same regardless of your chosen pool configuration.

## Creating the Pool

To start, let's list the disks available to use. You can use the `fdisk` command to see all available disks.

```sh
sudo fdisk -l
```

Or, if you currently have them mounted, you can use the `df` command to view your disks.

```sh
> sudo df -h
Filesystem      Size  Used  Avail Use% Mounted on
...
/dev/sda1       7.3T   28K  6.9T    1% /mnt/red-01
/dev/sdb1       7.3T  144G  6.8T    3% /mnt/red-02
/dev/sdc1       7.3T  5.5T  1.9T   75% /mnt/white-02
/dev/sdd1       9.1T  8.7T  435G   96% /mnt/white-01
/dev/sde1       7.3T   28K  6.9T    1% /mnt/red-03
/dev/sdf1       7.3T   28K  6.9T    1% /mnt/red-04
```

If you're going to use mounted disks, make sure to unmount them before creating the pool.

```sh
sudo umount /dev/sda1
sudo umount /dev/sdb1
```

Now that I've identified the disks I want to use and have them unmounted, let's create the pool. For this example, I will call it `tank`.

```sh
sudo zpool create -f -m /mnt/pool tank mirror /dev/sda /dev/sdb
```

See below for the results of the new ZFS pool named `tank`, with a vdev automatically named `mirror-0`.

```sh
> zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   396K  7.14T    96K  /tank
```

```sh
> zpool status
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0

errors: No known data errors
```

We can also look at the mounted filesystem to see where the pool is mounted, along with some quick stats.

```sh
> df -h
Filesystem      Size  Used  Avail Use% Mounted on
...
tank            7.2T  128K  7.2T    1% /tank
```

## Expanding the Pool

If you want to expand this pool, you will need to add a new VDEV to it. Since I am using 2 disks per VDEV, I will need to add a new 2-disk VDEV to the existing `tank` pool.

```sh
sudo zpool add tank mirror /dev/sdX /dev/sdY
```

If you're adding disks of different sizes, you'll need to use the `-f` flag. Keep in mind that each disk's usable capacity will be limited to that of the smallest disk in the vdev.

```sh
sudo zpool add -f tank mirror /dev/sdX /dev/sdY
```

I added two 8TB hard drives, and this process took around 10 seconds to complete. When viewing the pool again, you can see that it has now doubled in size. We have 14.3 TB of usable space, with the same amount used for mirroring.

```sh
> zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         145G  14.3T   104K  /tank
tank/cloud   145G  14.3T   145G  /tank/cloud
tank/media    96K  14.3T    96K  /tank/media
```
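One caveat worth noting: `/dev/sdX` names are assigned at boot and can change when drives are added or removed. ZFS usually copes, but a common recommendation is to build vdevs from the stable names under `/dev/disk/by-id/` instead. A minimal sketch (the `ata-...` identifiers below are placeholders; yours will contain your drives' model and serial numbers):

```sh
# List stable device identifiers and the /dev/sdX names they point to,
# filtering out the per-partition symlinks
ls -l /dev/disk/by-id/ | grep -v part

# Add a mirrored vdev using the stable names (placeholders shown here)
sudo zpool add tank mirror \
  /dev/disk/by-id/ata-DISK_MODEL_SERIAL1 \
  /dev/disk/by-id/ata-DISK_MODEL_SERIAL2
```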
### Converting Disks

Some disks, such as NTFS-formatted drives, will need to be partitioned and formatted before being added to the pool. Start by identifying the disks you want to format and add to the pool.

```sh
sudo fdisk -l | grep /dev
```

I am going to format my `/dev/sdc` and `/dev/sdd` disks with the `fdisk` command. See below for instructions on how to use `fdisk`. Here's what I did to create disks with a basic Linux partition layout:

- `g` : Create a GPT partition table
- `n` : Create a new partition, hitting Enter to accept all default options
- `t` : Change the partition type to `20` for `Linux filesystem`
- `w` : Write the changes to disk and exit

I repeated this process for both disks.

```sh
> sudo fdisk /dev/sdc

Welcome to fdisk (util-linux 2.37.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap partitions on this disk.

Command (m for help): m

Help:

  GPT
   M   enter protective/hybrid MBR

  Generic
   d   delete a partition
   F   list free unpartitioned space
   l   list known partition types
   n   add a new partition
   p   print the partition table
   t   change a partition type
   v   verify the partition table
   i   print information about a partition

  Misc
   m   print this menu
   x   extra functionality (experts only)

  Script
   I   load disk layout from sfdisk script file
   O   dump disk layout to sfdisk script file

  Save & Exit
   w   write table to disk and exit
   q   quit without saving changes

  Create a new label
   g   create a new empty GPT partition table
   G   create a new empty SGI (IRIX) partition table
   o   create a new empty DOS partition table
   s   create a new empty Sun partition table
```

Once the drives are formatted, we can add them to the pool.

```sh
sudo zpool add tank mirror /dev/sdc /dev/sdd
```

When we list the pool again, we can see that the size is now updated to approximately 22TB. My hard drives total 45.6TB when shown with `fdisk -l`; in a Raid10 configuration, 22TB goes to mirroring and 22TB remains as usable space.

```sh
> zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         145G  21.7T   104K  /tank
tank/cloud   145G  21.7T   145G  /tank/cloud
tank/media    96K  21.7T    96K  /tank/media
```

## Creating Datasets

According to [ZFS Terminology](https://docs.oracle.com/cd/E18752_01/html/819-5461/ftyue.html), a `dataset` can refer to "clones, file systems, snapshots, and volumes." For this guide, I will use the term `dataset` to refer to file systems created under a pool.

Within my `tank` pool, I am going to create some datasets to help organize my files. This gives me dedicated locations to store data rather than simply dumping everything at the `/tank/` root.

```sh
sudo zfs create tank/cloud
sudo zfs create tank/media
```

Once created, you can see these datasets in the output of your pool list:

```sh
> zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         752K  7.14T   104K  /tank
tank/cloud    96K  7.14T    96K  /tank/cloud
tank/media    96K  7.14T    96K  /tank/media
```
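Datasets are also where per-filesystem tuning happens, since properties like compression and quotas are set per dataset rather than per pool. A small sketch (the property values here are illustrative choices, not requirements):

```sh
# Enable lz4 compression on the cloud dataset
sudo zfs set compression=lz4 tank/cloud

# Cap the media dataset at 5TB so it can't fill the pool
sudo zfs set quota=5T tank/media

# Verify the properties took effect
zfs get compression,quota tank/cloud tank/media
```

Child datasets inherit properties from their parent by default, so setting a property on `tank` would apply to every dataset beneath it unless overridden.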
## Creating Snapshots

Next, let's create our first snapshot. We can do this by calling the `snapshot` command and giving it a name. I will include the current date and time in my example.

```sh
sudo zfs snapshot tank@$(date '+%Y-%m-%d_%H-%M')
```

We can list the snapshots in our pool with the following command:

```sh
> zfs list -t snapshot
NAME                    USED  AVAIL  REFER  MOUNTPOINT
tank@2024-02-06_19-41     0B      -   104K  -
```

## Destroying Snapshots

You can always destroy snapshots that are no longer needed:

```sh
sudo zfs destroy tank@2024-02-06_19-41
```

Once deleted, they will no longer appear in the list:

```sh
> zfs list -t snapshot
no datasets available
```

# My Thoughts on ZFS So Far

- I sacrificed 22TB, roughly half of my raw capacity, to mirror my data, but I feel more comfortable knowing I can save my data by quickly replacing a disk if I need to.
- The set-up was surprisingly easy and fast.
- Disk I/O is fast as well. I was worried that data transfer speeds would be slower due to the RAID configuration.
- Media streaming and transcoding have seen no noticeable drop in performance.
- My only real limitation is the number of HDD bays in my server's HDD cage.
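One follow-up I still want to set up is taking the snapshots from earlier on a schedule instead of by hand. A minimal sketch using root's crontab (the nightly schedule is just an illustrative choice; note that `%` must be escaped in crontabs, since cron otherwise treats it as a newline):

```sh
# Open root's crontab for editing
sudo crontab -e

# Then add a line like this to snapshot the tank pool every night at 02:00
0 2 * * * /usr/sbin/zfs snapshot tank@$(date '+\%Y-\%m-\%d_\%H-\%M')
```

Packaged tools such as `zfs-auto-snapshot` and `sanoid` can handle scheduling and retention for you if this ever outgrows a single cron line.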