author    Christian Cleberg <hello@cleberg.net>  2024-09-01 22:03:26 -0500
committer Christian Cleberg <hello@cleberg.net>  2024-09-01 22:03:26 -0500
commit    a0578880ef14f54647d7cfd96382395ab1e3cddb (patch)
tree      3b48908939708db6580a90d99bf88ff045311e9d /content/blog/2024-02-06-zfs.org
parent    17d0e7fa0f46eae4ef284af4593e33ad24da3bef (diff)
format 2024 blog posts
Diffstat (limited to 'content/blog/2024-02-06-zfs.org')
-rw-r--r--  content/blog/2024-02-06-zfs.org | 122
1 file changed, 54 insertions(+), 68 deletions(-)
diff --git a/content/blog/2024-02-06-zfs.org b/content/blog/2024-02-06-zfs.org
index 410b030..a870a2c 100644
--- a/content/blog/2024-02-06-zfs.org
+++ b/content/blog/2024-02-06-zfs.org
@@ -6,14 +6,11 @@
This post details the process I used to create ZFS pools, datasets, and
snapshots on Ubuntu Server.
-I found the following pages very helpful while going through this
-process:
+I found the following pages very helpful while going through this process:
-- [[https://ubuntu.com/tutorials/setup-zfs-storage-pool][Setup a ZFS
- storage pool]]
+- [[https://ubuntu.com/tutorials/setup-zfs-storage-pool][Setup a ZFS storage pool]]
- [[https://wiki.ubuntu.com/Kernel/Reference/ZFS][Kernel/Reference/ZFS]]
-- [[https://blog.victormendonca.com/2020/11/03/zfs-for-dummies/][ZFS for
- Dummies]]
+- [[https://blog.victormendonca.com/2020/11/03/zfs-for-dummies/][ZFS for Dummies]]
* Installation
@@ -23,8 +20,7 @@ To start, I installed the ZFS package with the following command:
sudo apt install zfsutils-linux
#+end_src
-Once installed, you can check the version to see if it installed
-correctly.
+Once installed, you can check the version to see if it installed correctly.
#+begin_src sh
> zfs --version
@@ -37,9 +33,9 @@ zfs-kmod-2.1.5-1ubuntu6~22.04.1
Now that ZFS is installed, we can create and configure the pool.
-You have various options for configuring ZFS pools that all come
-different pros and cons. I suggest visiting the links at the top of this
-post or searching online for the best configuration for your use-case.
+You have various options for configuring ZFS pools that all come with different
+pros and cons. I suggest visiting the links at the top of this post or searching
+online for the best configuration for your use-case; a brief sketch of two
+alternative layouts follows the list below.
- Striped VDEVs (Raid0)
- Mirrored VDEVs (Raid1)
@@ -49,20 +45,20 @@ post or searching online for the best configuration for your use-case.
- RAIDz3
- Nested RAIDz (Raid50, Raid60)
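+For reference, here is roughly what creating two of the other layouts looks
+like (a sketch only; the device names are placeholders):
+
+#+begin_src sh
+# Striped VDEVs (Raid0): full capacity of all disks, no redundancy
+sudo zpool create tank /dev/sdX /dev/sdY
+
+# RAIDz1: single-parity raid across three or more disks
+sudo zpool create tank raidz /dev/sdX /dev/sdY /dev/sdZ
+#+end_src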
-I will be using Raid10 in this guide. However, the majority of the steps
-are the same regardless of your chosen pool configuration.
+I will be using Raid10 in this guide. However, the majority of the steps are the
+same regardless of your chosen pool configuration.
** Creating the Pool
-To start, let's list the disks available to use. You can use =fdisk=
-command to see all available disks.
+To start, let's list the disks available to use. You can use the =fdisk= command
+to see all available disks.
#+begin_src sh
sudo fdisk -l
#+end_src
-Or, if you currently have them mounted, you can use the =df= command to
-view your disks.
+Or, if you currently have them mounted, you can use the =df= command to view
+your disks.
#+begin_src sh
> sudo df -h
@@ -77,17 +73,16 @@ Filesystem Size Used Avail Use% Mounted on
/dev/sdf1 7.3T 28K 6.9T 1% /mnt/red-04
#+end_src
-If you're going to use mounted disks, make sure to umount them before
-creating the pool.
+If you're going to use mounted disks, make sure to unmount them before creating
+the pool.
#+begin_src sh
sudo umount /dev/sda1
sudo umount /dev/sdb1
#+end_src
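+To double-check that nothing on those disks is still mounted, =lsblk= works well
+(a quick sketch, using the device names from the example above):
+
+#+begin_src sh
+lsblk -o NAME,SIZE,MOUNTPOINT /dev/sda /dev/sdb
+#+end_src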
-Now that I've identified the disks I want to use and have them
-unmounted, let's create the pool. For this example, I will call it
-=tank=.
+Now that I've identified the disks I want to use and have them unmounted, let's
+create the pool. For this example, I will call it =tank=.
#+begin_src sh
sudo zpool create -f -m /mnt/pool tank mirror /dev/sda /dev/sdb
@@ -119,8 +114,8 @@ config:
errors: No known data errors
#+end_src
-We can also look at the mounted filesystem to see where the pool is
-mounted and some quick stats.
+We can also look at the mounted filesystem to see where the pool is mounted and
+some quick stats.
#+begin_src sh
> df -h
@@ -132,28 +127,25 @@ tank 7.2T 128K 7.2T 1% /tank
** Expanding the Pool
-If you want to expand this pool, you will need to add a new VDEV to the
-pool. Since I am using 2 disks per VDEV, I will need to add a new 2-disk
-VDEV to the existing =tank= pool.
+If you want to expand this pool, you will need to add a new VDEV to the pool.
+Since I am using 2 disks per VDEV, I will need to add a new 2-disk VDEV to the
+existing =tank= pool.
#+begin_src sh
sudo zpool add tank mirror /dev/sdX /dev/sdY
#+end_src
-If you're adding disks of different sizes, you'll need to use the =-f=
-flag. Keep in mind that the max size will be limited to the smallest
-disk added.
+If you're adding disks of different sizes, you'll need to use the =-f= flag.
+Keep in mind that the mirror's capacity will be limited to its smallest disk.
#+begin_src sh
sudo zpool add -f tank mirror /dev/sdX /dev/sdY
#+end_src
-I added two 8TB hard drives and this process took around 10 seconds to
-complete.
+I added two 8TB hard drives and this process took around 10 seconds to complete.
-When viewing the pool again, you can see that the pool has now doubled
-in size. We have 14.3 TB useable space and the same space used for
-mirroring.
+When viewing the pool again, you can see that the pool has now doubled in size.
+We have 14.3 TB of usable space and the same space used for mirroring.
#+begin_src sh
> zfs list
@@ -166,8 +158,8 @@ tank/media 96K 14.3T 96K /tank/media
*** Converting Disks
-Some disks, such as NTFS-formatted drives, will need to be partitioned
-and formatted prior to being added to the pool.
+Some disks, such as NTFS-formatted drives, will need to be partitioned and
+formatted prior to being added to the pool.
Start by identifying the disks you want to format and add to the pool.
@@ -178,8 +170,8 @@ sudo fdisk -l | grep /dev
I am going to format my =/dev/sdc= and =/dev/sdd= disks with the =fdisk=
command.
-See below for instructions on how to use =fdisk=. Here's what I did to
-create basic Linux formatted disks:
+See below for instructions on how to use =fdisk=. Here's what I did to create
+basic Linux formatted disks:
- =g= : Create GPT partition table
- =n= : Create a new partition, hit Enter for all default options
@@ -191,13 +183,12 @@ I repeated this process for both disks.
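+If you'd rather not step through =fdisk= interactively (a full session is shown
+below for reference), something like =parted= can script the same result. This
+is a sketch of that alternative, not the commands I actually ran:
+
+#+begin_src sh
+sudo parted --script /dev/sdc mklabel gpt mkpart primary 0% 100%
+sudo parted --script /dev/sdd mklabel gpt mkpart primary 0% 100%
+#+end_src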
#+begin_src sh
> sudo fdisk /dev/sdc
-Welcome to fdisk (util-linux 2.37.2).
-Changes will remain in memory only, until you decide to write them.
-Be careful before using the write command.
+Welcome to fdisk (util-linux 2.37.2). Changes will remain in memory only, until
+you decide to write them. Be careful before using the write command.
-This disk is currently in use - repartitioning is probably a bad idea.
-It's recommended to umount all file systems, and swapoff all swap
-partitions on this disk.
+This disk is currently in use - repartitioning is probably a bad idea. It's
+recommended to umount all file systems, and swapoff all swap partitions on this
+disk.
Command (m for help): m
@@ -243,9 +234,9 @@ sudo zpool add tank mirror /dev/sdc /dev/sdd
#+end_src
When we list the pool again, we can see that our size is now updated to
-approximately 22TB. This represents my hard drives totalling 45.6TB when
-shown with =fdisk -l=, with a Raid10 configuration using 22TB for
-mirroring and 22TB of useable space.
+approximately 22TB. This represents my hard drives totalling 45.6TB when shown
+with =fdisk -l=, with a Raid10 configuration using 22TB for mirroring and 22TB
+of usable space.
#+begin_src sh
> zfs list
@@ -258,17 +249,15 @@ tank/media 145G 21.7T 96K /tank/media
** Creating Datasets
-According to
-[[https://docs.oracle.com/cd/E18752_01/html/819-5461/ftyue.html][ZFS
-Terminology]], a =dataset= can refer to “clones, file systems,
+According to [[https://docs.oracle.com/cd/E18752_01/html/819-5461/ftyue.html][ZFS Terminology]], a =dataset= can refer to “clones, file systems,
snapshots, and volumes.”
-For this guide, I will use the =dataset= term to refer to file systems
-created under a pool.
+For this guide, I will use the =dataset= term to refer to file systems created
+under a pool.
-Within my =tank= pool, I am going to create some datasets to help
-organize my files. This will give me location to store data rather than
-simply dumping everything at the =/tank/= location.
+Within my =tank= pool, I am going to create some datasets to help organize my
+files. This will give me locations to store data rather than simply dumping
+everything at the =/tank/= location.
#+begin_src sh
sudo zfs create tank/cloud
@@ -288,9 +277,9 @@ tank/media 96K 7.14T 96K /tank/media
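+Datasets can also carry their own properties. As a brief sketch (assuming the
+=tank/cloud= dataset created above), you could enable compression or cap the
+dataset's size with a quota:
+
+#+begin_src sh
+sudo zfs set compression=lz4 tank/cloud
+sudo zfs set quota=500G tank/cloud
+#+end_src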
** Creating Snapshots
-Next, let's create our first snapshot. We can do this by calling the
-=snapshot= command and give it an output name. I will be throwing the
-current date and time into my example.
+Next, let's create our first snapshot. We can do this by calling the =snapshot=
+command and giving it an output name. I will be throwing the current date and
+time into my example.
#+begin_src sh
sudo zfs snapshot tank@$(date '+%Y-%m-%d_%H-%M')
@@ -321,13 +310,10 @@ no datasets available
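+Note that snapshotting =tank= alone does not automatically snapshot its child
+datasets. A short sketch using the =-r= flag to snapshot recursively, then
+listing the results:
+
+#+begin_src sh
+sudo zfs snapshot -r tank@$(date '+%Y-%m-%d_%H-%M')
+zfs list -t snapshot
+#+end_src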
* My Thoughts on ZFS So Far
-- I sacrificed 25TB to be able to mirror my data, but I feel more
- comfortable with the potential to save my data by quickly replacing a
- disk if I need to.
+- I sacrificed 25TB to be able to mirror my data, but I feel more comfortable
+ with the potential to save my data by quickly replacing a disk if I need to.
- The set-up was surprisingly easy and fast.
-- Disk I/O is fast as well. I was worried that the data transfer speeds
- would be slower due to the RAID configuration.
-- Media streaming and transcoding has seen no noticeable drop in
- performance.
-- My only limitation really is the number of HDD bays in my server HDD
- cage.
+- Disk I/O is fast as well. I was worried that the data transfer speeds would be
+  slower due to the RAID configuration.
+- Media streaming and transcoding have seen no noticeable drop in performance.
+- My only real limitation is the number of HDD bays in my server's HDD cage.