This post details the process I used to create ZFS pools, datasets, and
snapshots on Ubuntu Server.

I found the following pages very helpful while going through this process:

- [Setup a ZFS storage pool](https://ubuntu.com/tutorials/setup-zfs-storage-pool)
- [Kernel/Reference/ZFS](https://wiki.ubuntu.com/Kernel/Reference/ZFS)
- [ZFS for Dummies](https://blog.victormendonca.com/2020/11/03/zfs-for-dummies/)

# Installation

To start, I installed the ZFS package with the following command:

```sh
sudo apt install zfsutils-linux
```

Once installed, you can check the version to see if it installed correctly.

```sh
> zfs --version
zfs-2.1.5-1ubuntu6~22.04.1
zfs-kmod-2.1.5-1ubuntu6~22.04.1
```
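
The version check only proves the userspace tools are present. As an extra
sanity check that isn't part of the original walkthrough, you can also confirm
that the kernel module loaded:

```sh
# No output here means the ZFS kernel module is not loaded.
lsmod | grep zfs
```
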
Now that ZFS is installed, we can create and configure the pool.

You have various options for configuring ZFS pools, which all come with
different pros and cons. I suggest visiting the links at the top of this post
or searching online for the best configuration for your use case.

- Striped VDEVs (Raid0)
- Mirrored VDEVs (Raid1)
- Striped Mirrored VDEVs (Raid10)
- RAIDz (Raid5)
- RAIDz2 (Raid6)
- RAIDz3
- Nested RAIDz (Raid50, Raid60)

I will be using Raid10 in this guide. However, the majority of the steps are
the same regardless of your chosen pool configuration.
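
For a rough sense of how the other layouts translate into commands, here are a
few hypothetical `zpool create` invocations. The device names are placeholders,
not disks from this guide:

```sh
# Striped pool (Raid0): no redundancy, full capacity of all disks.
sudo zpool create tank /dev/sdX /dev/sdY

# RAIDz (Raid5): single parity, needs at least three disks.
sudo zpool create tank raidz /dev/sdX /dev/sdY /dev/sdZ

# RAIDz2 (Raid6): double parity, survives two simultaneous disk failures.
sudo zpool create tank raidz2 /dev/sdW /dev/sdX /dev/sdY /dev/sdZ
```
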
## Creating the Pool

To start, let's list the disks available to use. You can use the `fdisk`
command to see all available disks.

```sh
sudo fdisk -l
```

Or, if you currently have them mounted, you can use the `df` command to view
your disks.

```sh
> sudo df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sdf1       7.3T   28K  6.9T   1% /mnt/red-04
```

If you're going to use mounted disks, make sure to unmount them before
creating the pool.

```sh
sudo umount /dev/sda1
sudo umount /dev/sdb1
```
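
Before creating the pool, it may also be worth confirming that nothing on
those disks is still mounted. This check is my suggestion, not a step from the
original process:

```sh
# The MOUNTPOINT column should be empty for every partition listed.
lsblk -o NAME,SIZE,MOUNTPOINT /dev/sda /dev/sdb
```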

Now that I've identified the disks I want to use and have them unmounted,
let's create the pool. For this example, I will call it `tank`.

```sh
sudo zpool create -f -m /mnt/pool tank mirror /dev/sda /dev/sdb
```
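
One tweak worth considering, although it is not what I did here: build the
pool from `/dev/disk/by-id/` paths instead of `sdX` names, since the latter
can change between reboots. The IDs below are placeholders:

```sh
# List stable device identifiers, then use them in place of /dev/sdX.
ls -l /dev/disk/by-id/
sudo zpool create -f -m /mnt/pool tank mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_1 /dev/disk/by-id/ata-EXAMPLE_DISK_2
```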

See below for the results of the new ZFS pool named `tank`, with a vdev
automatically named `mirror-0`.

```sh
> zfs list
...

> zpool status
...
config:
...
errors: No known data errors
```

We can also look at the mounted filesystem to see where the pool is mounted
and some quick stats.

```sh
> df -h
Filesystem      Size  Used Avail Use% Mounted on
...
tank            7.2T  128K  7.2T   1% /tank
```

## Expanding the Pool

If you want to expand this pool, you will need to add a new VDEV to the pool.
Since I am using 2 disks per VDEV, I will need to add a new 2-disk VDEV to the
existing `tank` pool.

```sh
sudo zpool add tank mirror /dev/sdX /dev/sdY
```

If you're adding disks of different sizes, you'll need to use the `-f` flag.
Keep in mind that the vdev's usable size will be limited by its smallest disk.

```sh
sudo zpool add -f tank mirror /dev/sdX /dev/sdY
```

I added two 8TB hard drives, and this process took around 10 seconds to
complete.
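
To double-check that the new vdev joined the pool, `zpool status` should now
show a second mirror alongside `mirror-0`. This verification step is my
addition; the original process goes straight to `zfs list`:

```sh
# The new two-disk mirror appears as its own vdev, e.g. mirror-1.
zpool status tank
```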

When viewing the pool again, you can see that the pool has now doubled in
size. We have 14.3TB of usable space and the same amount used for mirroring.

```sh
> zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
...
tank/media    96K  14.3T    96K  /tank/media
```

### Converting Disks

Some disks, such as NTFS-formatted drives, will need to be partitioned and
formatted prior to being added to the pool.

Start by identifying the disks you want to format and add to the pool.

```sh
sudo fdisk -l | grep /dev
```

I am going to format my `/dev/sdc` and `/dev/sdd` disks with the `fdisk`
command.

See below for instructions on how to use `fdisk`. Here's what I did to create
basic Linux-formatted disks:

- `g` : Create a GPT partition table
- `n` : Create a new partition, hitting Enter to accept all default options
- `t` : Change the partition type to `20` for `Linux filesystem`
- `w` : Write the changes to disk and exit

I repeated this process for both disks.
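
If you'd rather not repeat the interactive prompts for each disk, the same
partitioning can be scripted with `sfdisk`. This is a sketch of an equivalent,
not what I actually ran, so double-check the target devices first:

```sh
# Create a GPT label and a single Linux-filesystem partition on each disk.
for disk in /dev/sdc /dev/sdd; do
    echo ',,L' | sudo sfdisk --label gpt "$disk"
done
```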

With both disks partitioned, I added them to the pool as another mirrored
vdev:

```sh
sudo zpool add tank mirror /dev/sdc /dev/sdd
```

When we list the pool again, we can see that our size is now updated to
approximately 22TB. This represents my hard drives totalling 45.6TB when shown
with `fdisk -l`, with a Raid10 configuration using 22TB for mirroring and 22TB
of usable space.

```sh
> zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
...
tank/media   145G  21.7T    96K  /tank/media
```

## Creating Datasets

According to [ZFS
Terminology](https://docs.oracle.com/cd/E18752_01/html/819-5461/ftyue.html), a
`dataset` can refer to "clones, file systems, snapshots, and volumes."

For this guide, I will use the `dataset` term to refer to file systems created
under a pool.

Within my `tank` pool, I am going to create some datasets to help organize my
files. This will give me dedicated locations to store data rather than simply
dumping everything at the `/tank/` location.

```sh
sudo zfs create tank/cloud
sudo zfs create tank/media
```
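
Datasets are also the level where per-filesystem tuning happens. As an aside
that this post doesn't cover, properties such as compression and quotas can be
set per dataset; the values below are only examples:

```sh
# Enable lz4 compression and cap the dataset at 1 TiB.
sudo zfs set compression=lz4 tank/cloud
sudo zfs set quota=1T tank/cloud

# Confirm the properties took effect.
zfs get compression,quota tank/cloud
```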

Once created, you can see these datasets in the output of your pool list:

```sh
> zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
...
tank/media    96K  7.14T    96K  /tank/media
```

## Creating Snapshots

Next, let's create our first snapshot. We can do this by calling the
`snapshot` command and giving it a name. I will include the current date and
time in my example.

```sh
sudo zfs snapshot tank@$(date '+%Y-%m-%d_%H-%M')
```
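
A couple of follow-up commands are useful once snapshots exist. The snapshot
name below is hypothetical, matching the date-based naming used above:

```sh
# Snapshots don't appear in a plain `zfs list`; request them explicitly.
zfs list -t snapshot

# Roll the pool back to a snapshot, discarding changes made after it.
sudo zfs rollback tank@2024-02-06_12-00
```
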
# My Thoughts on ZFS So Far

- I sacrificed 25TB to be able to mirror my data, but I feel more comfortable
  knowing I can save my data by quickly replacing a disk if I need to.
- The set-up was surprisingly easy and fast.
- Disk I/O is fast as well. I was worried that the data transfer speeds would
  be slower due to the RAID configuration.
- Media streaming and transcoding have seen no noticeable drop in performance.
- My only real limitation is the number of HDD bays in my server's HDD cage.