
ZFS Part 1: Introduction

ZFS is simply awesome. It simplifies storage management, performs well, is fault-tolerant and scalable, and generally is just amazing. I will use this article to demonstrate some of its interesting features. Note that we are only scratching the surface of ZFS here; read the official documentation for much more detail. The terms dataset and filesystem are used interchangeably throughout, as with ZFS they are essentially the same thing.

When I built my sol11lab VM, I created a bunch of 20GB disks along with it:
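(The original disk listing isn’t reproduced here, but on Solaris you can list the available disks non-interactively with format. The device names used in the snippets that follow – c7t2d0 through c7t5d0 – are illustrative placeholders, not the exact names on my system.)

    # echo | format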

Although this will not be the final zpool/ZFS configuration (I’ll presume, again, that this article is being read by advanced system administrators who have at least worked a little with ZFS), I will spend a little time creating and destroying pools, and discussing ZFS send/receive, deduplication, encryption, compression and so on.

First, I’ll create a simple striped RAID0 set using all four free disks:
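As a sketch (substitute your own device names for my placeholders), a plain stripe is created simply by listing the devices with no vdev keyword:

    # zpool create testpool c7t2d0 c7t3d0 c7t4d0 c7t5d0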

As you can see, zpool warns you that this simple set of disks provides no redundancy – the entire striped set (i.e. RAID0) will be lost if even a single pool member fails. Let’s make sure the pool was indeed created as we expected:
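Something along these lines will show the pool layout and capacity (output omitted):

    # zpool status testpool
    # zpool list testpool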

And that a default ZFS dataset has been created and mounted for the pool:
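A quick way to check is zfs list together with df (the dataset and mountpoint names match the pool created above):

    # zfs list testpool
    # df -h /testpool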

Here you can see that the pool has been mounted under /<poolname> (in our case /testpool). Additional ZFS filesystems (or the root of the zpool itself) can have their mountpoints changed at any time via the zfs set mountpoint command. We won’t spend too much time here, as this isn’t our standard configuration, so let’s destroy the pool:
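For illustration (the testpool/data dataset and its mountpoint are hypothetical – they are only here to show the syntax), a mountpoint change and the pool teardown look like this:

    # zfs create testpool/data
    # zfs set mountpoint=/export/testdata testpool/data
    # zpool destroy testpool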

Our final configuration will be a two-way mirror (i.e. RAID1 with 2 disks) with a third disk as a hot spare. However, as we have four disks, let’s create another test configuration first – this time a stripe across two mirrors (i.e. RAID1+0):
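A sketch of the command, again with placeholder device names – each mirror keyword starts a new mirrored vdev, and ZFS stripes across the vdevs:

    # zpool create testpool mirror c7t2d0 c7t3d0 mirror c7t4d0 c7t5d0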

And see if this has had the desired effect:
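Again, zpool status is the tool for the job:

    # zpool status testpool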

As I am using virtual hardware, it isn’t easy to simulate the failure of a disk component – running cfgadm -c unconfigure against one of the disks used in the pool gives an I/O error on the SCSI device. If I had a physical server I’d yank a disk and show you that it works. For now, take it on faith that ZFS is highly fault-tolerant and very easy to recover after a failure (just attach the new device back into the pool once it has been replaced). ZFS is only as fault-tolerant as the RAID level of your pool, though. We can create RAIDZ1, RAIDZ2 or RAIDZ3 pools, which are akin to RAID5 with one, two or three parity disks respectively (i.e. RAIDZ2 is essentially RAID6). We can also add hot-spare disks, log devices, cache devices, etc. and generally tune ZFS to perform in exactly the way we want.
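As an illustrative sketch only (the pool and device names are placeholders, not the configuration built in this article), a RAIDZ2 pool with a hot spare, a separate log device and a cache device could be created like this:

    # zpool create tank raidz2 c7t2d0 c7t3d0 c7t4d0 c7t5d0 spare c7t6d0
    # zpool add tank log c7t7d0
    # zpool add tank cache c7t8d0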

I will destroy testpool to make way for our final destination zpool and filesystem:
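The teardown is a single command – note that zpool destroy does not prompt for confirmation:

    # zpool destroy testpool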

Verify that it’s gone, and all we have is our original rpool:
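Running zpool list (and zpool status with no arguments) should now report only rpool:

    # zpool list
    # zpool status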
