author    Christian Cleberg <hello@cleberg.net>  2024-02-09 22:14:20 -0600
committer Christian Cleberg <hello@cleberg.net>  2024-02-09 22:14:20 -0600
commit    5990e07309cbfc67b205557e96c21a8fa9087e5f (patch)
tree      141f348e9e1daf8d85a6d48c5f194e231c536fa0 /content
parent    8c7130e2a13faefca56dca8a9278bb1852d31d2b (diff)
add zfs post
Diffstat (limited to 'content')
-rw-r--r--  content/blog/2024-02-06-zfs.md  |  219
1 file changed, 219 insertions(+), 0 deletions(-)
diff --git a/content/blog/2024-02-06-zfs.md b/content/blog/2024-02-06-zfs.md
new file mode 100644
index 0000000..a0c5023
--- /dev/null
+++ b/content/blog/2024-02-06-zfs.md
@@ -0,0 +1,219 @@
++++
+date = 2024-02-06
+title = "How to Create a ZFS Pool on Ubuntu Linux"
+description = "Learn how to create a simple ZFS pool on Ubuntu Linux."
++++
+
+This post details the process I used to create ZFS pools, datasets, and
+snapshots on Ubuntu Server.
+
+I found the following pages very helpful while going through this process:
+
+- [Setup a ZFS storage pool](https://ubuntu.com/tutorials/setup-zfs-storage-pool)
+- [Kernel/Reference/ZFS](https://wiki.ubuntu.com/Kernel/Reference/ZFS)
+- [ZFS for Dummies](https://blog.victormendonca.com/2020/11/03/zfs-for-dummies/)
+
+## Installation
+
+To start, I installed the ZFS package with the following command:
+
+```sh
+sudo apt install zfsutils-linux
+```
+
+Once installed, you can check the version to verify that the installation
+succeeded.
+
+```sh
+> zfs --version
+
+zfs-2.1.5-1ubuntu6~22.04.2
+zfs-kmod-2.1.5-1ubuntu6~22.04.1
+```
+
+## ZFS Configuration
+
+Now that ZFS is installed, we can create and configure the pool.
+
+You have various options for configuring ZFS pools, each with different pros
+and cons.
+
+- Striped VDEVs (Raid0)
+- Mirrored VDEVs (Raid1)
+- Striped Mirrored VDEVs (Raid10)
+- RAIDz (Raid5)
+- RAIDz2 (Raid6)
+- RAIDz3
+- Nested RAIDz (Raid50, Raid60)
+
+I will be using Raid10 in this guide. However, the majority of the steps are the
+same regardless of your chosen pool configuration.
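+
+As a rough sketch, here is how a few of these layouts might be created. The
+device names below are placeholders for your own disks:
+
+```sh
+# Striped pool (Raid0): full capacity, no redundancy
+sudo zpool create tank /dev/sdX /dev/sdY
+
+# Mirrored pool (Raid1): capacity of one disk, survives a disk failure
+sudo zpool create tank mirror /dev/sdX /dev/sdY
+
+# RAIDz (Raid5): capacity of all but one disk, survives a disk failure
+sudo zpool create tank raidz /dev/sdX /dev/sdY /dev/sdZ
+```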
+
+### Creating the Pool
+
+To start, let's list the disks available to use. You can use the `fdisk`
+command to see all available disks.
+
+```sh
+sudo fdisk -l
+```
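+
+Alternatively, `lsblk` prints a compact tree of block devices that can be
+easier to scan than the full `fdisk` output:
+
+```sh
+# show the name, size, type, and mountpoint of each block device
+lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
+```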
+
+Or, if you currently have them mounted, you can use the `df` command to view
+your disks.
+
+```sh
+> sudo df -h
+
+Filesystem Size Used Avail Use% Mounted on
+...
+/dev/sda1 7.3T 28K 6.9T 1% /mnt/red-01
+/dev/sdb1 7.3T 144G 6.8T 3% /mnt/red-02
+/dev/sdc1 7.3T 5.5T 1.9T 75% /mnt/white-02
+/dev/sdd1 9.1T 8.7T 435G 96% /mnt/white-01
+```
+
+If you're going to use mounted disks, make sure to unmount them before
+creating the pool.
+
+```sh
+sudo umount /dev/sda1
+sudo umount /dev/sdb1
+```
+
+Now that I've identified the disks I want to use and have them unmounted,
+let's create the pool. For this example, I will call it `tank`. In the command
+below, `-f` forces creation even if the disks hold an existing filesystem
+signature, and `-m` sets the pool's mountpoint.
+
+```sh
+sudo zpool create -f -m /tank tank mirror /dev/sda /dev/sdb
+```
+
+See below for the results of the new ZFS pool named `tank`, with a vdev
+automatically named `mirror-0`.
+
+```sh
+> zfs list
+
+NAME USED AVAIL REFER MOUNTPOINT
+tank 396K 7.14T 96K /tank
+```
+
+```sh
+> zpool status
+
+ pool: tank
+ state: ONLINE
+config:
+
+ NAME STATE READ WRITE CKSUM
+ tank ONLINE 0 0 0
+ mirror-0 ONLINE 0 0 0
+ sda ONLINE 0 0 0
+ sdb ONLINE 0 0 0
+
+errors: No known data errors
+```
+
+We can also check the mounted filesystems to see where the pool is mounted
+and view some quick stats.
+
+```sh
+> df -h
+
+Filesystem Size Used Avail Use% Mounted on
+...
+tank 7.2T 128K 7.2T 1% /tank
+```
+
+### Expanding the Pool
+
+If you want to expand this pool, you will need to add a new VDEV. Since I am
+using 2 disks per VDEV, I will add another 2-disk mirror VDEV to the existing
+`tank` pool.
+
+```sh
+sudo zpool add tank mirror /dev/sdX /dev/sdY
+```
+
+I added two 8TB hard drives and this process took around 10 seconds to complete.
+
+When viewing the pool again, you can see that the pool has now doubled in
+size. We have 14.3 TB of usable space, with the same amount consumed by
+mirroring.
+
+```sh
+> zfs list
+
+NAME USED AVAIL REFER MOUNTPOINT
+tank 145G 14.3T 104K /tank
+tank/cloud 145G 14.3T 145G /tank/cloud
+tank/media 96K 14.3T 96K /tank/media
+```
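+
+You can also inspect capacity, allocation, and health from the pool's
+perspective with `zpool list`:
+
+```sh
+# summarizes size, allocated space, free space, and health per pool
+zpool list tank
+```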
+
+### Creating Datasets
+
+According to [ZFS
+Terminology](https://docs.oracle.com/cd/E18752_01/html/819-5461/ftyue.html), a
+`dataset` can refer to "clones, file systems, snapshots, and volumes."
+
+For this guide, I will use the term `dataset` to refer to file systems
+created under a pool.
+
+Within my `tank` pool, I am going to create some datasets to help organize my
+files. This will give me dedicated locations to store data rather than simply
+dumping everything at the top-level `/tank/` location.
+
+```sh
+sudo zfs create tank/cloud
+sudo zfs create tank/media
+```
+
+Once created, you can see these datasets in the output of your pool list:
+
+```sh
+> zfs list
+NAME USED AVAIL REFER MOUNTPOINT
+tank 752K 7.14T 104K /tank
+tank/cloud 96K 7.14T 96K /tank/cloud
+tank/media 96K 7.14T 96K /tank/media
+```
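+
+Datasets can also be tuned individually via properties. As a sketch, assuming
+you want transparent compression pool-wide and a size cap on the `cloud`
+dataset, something like this would work:
+
+```sh
+# enable lz4 compression on the pool; child datasets inherit it
+sudo zfs set compression=lz4 tank
+
+# cap the cloud dataset at 1 TB
+sudo zfs set quota=1T tank/cloud
+
+# confirm the resulting property values
+zfs get compression,quota tank/cloud
+```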
+
+### Creating Snapshots
+
+Next, let's create our first snapshot. We can do this by calling the
+`snapshot` command and giving the snapshot a name. I will include the current
+date and time in my example.
+
+```sh
+sudo zfs snapshot tank@$(date '+%Y-%m-%d_%H-%M')
+```
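+
+Note that this only snapshots the top-level `tank` file system. If you want to
+include every dataset underneath it, the `-r` flag takes recursive snapshots
+of all descendants:
+
+```sh
+# snapshot tank plus tank/cloud, tank/media, etc.
+sudo zfs snapshot -r tank@$(date '+%Y-%m-%d_%H-%M')
+```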
+
+We can list the snapshots in our pool with the following command:
+
+```sh
+> zfs list -t snapshot
+NAME USED AVAIL REFER MOUNTPOINT
+tank@2024-02-06_19-41 0B - 104K -
+```
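+
+If you ever need to revert a dataset to the state it was in when a snapshot
+was taken, `zfs rollback` will discard any changes made since that snapshot:
+
+```sh
+# roll tank back to the snapshot taken above
+sudo zfs rollback tank@2024-02-06_19-41
+```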
+
+### Destroy Snapshots
+
+You can always destroy snapshots that are no longer needed:
+
+```sh
+sudo zfs destroy tank@2024-02-06_19-41
+```
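+
+If you're unsure exactly what a `destroy` command will remove, you can preview
+it first with a dry run:
+
+```sh
+# -n performs a dry run and -v prints what would be destroyed
+sudo zfs destroy -nv tank@2024-02-06_19-41
+```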
+
+Once deleted, they will no longer appear in the list:
+
+```sh
+> zfs list -t snapshot
+no datasets available
+```
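+
+Since snapshots are cheap, it may be worth automating them. As a hypothetical
+example (assuming `zfs` lives at `/usr/sbin/zfs` on your system), a root
+crontab entry could take a recursive snapshot every night at midnight; note
+that `%` must be escaped inside crontab entries:
+
+```sh
+# m h dom mon dow command
+0 0 * * * /usr/sbin/zfs snapshot -r tank@$(date '+\%Y-\%m-\%d')
+```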
+
+## My Thoughts on ZFS So Far
+
+- I sacrificed 25TB of raw capacity to mirror my data, but I feel more
+  comfortable knowing I can save my data by quickly replacing a failed disk if
+  I need to.
+- The set-up was surprisingly easy and fast.
+- Disk I/O is fast as well. I was worried that the data transfer speeds would be
+ slower due to the RAID configuration.
+- Media streaming and transcoding have seen no noticeable drop in
+  performance.
+- My only real limitation is the number of HDD bays in my server's HDD cage.