
ZFS Part 5: ZFS Clones and Sending/Receiving ZFS Data

A ZFS clone is a read-write copy of a filesystem, created from a snapshot. The clone continues to reference the snapshot it was created from, but allows us to make changes. We cannot remove that origin snapshot whilst the clone is in use, unless we promote the clone first. These concepts will become clear during the examples.

Let’s create a test dataset:
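Something along these lines will do (assuming a pool named datapool already exists):

    # zfs create datapool/clonefs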

Put some data on the filesystem:
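Any files will suffice; here we assume a few from /etc, copied to the default mountpoint:

    # cp /etc/hosts /etc/nsswitch.conf /etc/vfstab /datapool/clonefs/
    # ls /datapool/clonefs
    hosts           nsswitch.conf   vfstab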

We can now take a snapshot of the filesystem:
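Here the snapshot is simply named after the date:

    # zfs snapshot datapool/clonefs@20130129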

Now, take a clone of this snapshot into a new dataset:
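The syntax is zfs clone <snapshot> <new-dataset>:

    # zfs clone datapool/clonefs@20130129 datapool/cloned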

Here, the clone is being created from the snapshot datapool/clonefs@20130129 into the new dataset datapool/cloned. zfs list shows the new dataset:
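The output will look something like this (the clone's USED and REFER figures are the interesting part; the other sizes are illustrative):

    # zfs list -r datapool
    NAME               USED  AVAIL  REFER  MOUNTPOINT
    datapool           631K  1.95G    31K  /datapool
    datapool/clonefs   306K  1.95G   306K  /datapool/clonefs
    datapool/cloned     19K  1.95G   306K  /datapool/cloned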

Note that only 19KB is used, yet the dataset refers to 306KB held elsewhere. Where does that originate? The origin property of the dataset datapool/cloned will show us:
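Query it with zfs get:

    # zfs get origin datapool/cloned
    NAME             PROPERTY  VALUE                      SOURCE
    datapool/cloned  origin    datapool/clonefs@20130129  -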

There we go. We will be unable to delete the origin snapshot (as it’s still required for the clone to function):
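Attempting it produces an error along these lines (the exact wording varies between releases):

    # zfs destroy datapool/clonefs@20130129
    cannot destroy 'datapool/clonefs@20130129': snapshot has dependent clones
    use '-R' to destroy the following datasets:
    datapool/cloned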

No – we don’t want that! Before we go any further, verify that the dataset is in fact a read-write clone:
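A simple write test is enough (the file name is arbitrary):

    # touch /datapool/cloned/testfile
    # ls /datapool/cloned
    hosts           nsswitch.conf   testfile        vfstab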

The ZFS dataset can be promoted:
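zfs promote takes the clone itself as its argument:

    # zfs promote datapool/cloned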

See that the snapshot is now a snapshot of the CLONED filesystem:
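Listing snapshots confirms it (USED and REFER figures are illustrative):

    # zfs list -t snapshot
    NAME                      USED  AVAIL  REFER  MOUNTPOINT
    datapool/cloned@20130129    1K      -   306K  -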

And that the original filesystem (the one we cloned) is now dependent on this snapshot, and uses it as its origin:
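The same origin query, run against the original filesystem:

    # zfs get origin datapool/clonefs
    NAME              PROPERTY  VALUE                     SOURCE
    datapool/clonefs  origin    datapool/cloned@20130129  -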

Essentially, the parent-child relationship is switched. We can switch it back:
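Promoting the original filesystem reverses the relationship once more:

    # zfs promote datapool/clonefs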

Switch it back again (dizzy yet?) and then you can destroy the dataset that datapool/cloned was created from (i.e. datapool/clonefs):
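With the snapshot once again owned by datapool/cloned, the old parent can go:

    # zfs promote datapool/cloned
    # zfs destroy datapool/clonefs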

As the dependent filesystem has now been removed, the snapshot too can be removed:
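Exactly as with any other snapshot:

    # zfs destroy datapool/cloned@20130129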

And we’re done with clones.

Sending/Receiving ZFS Data

ZFS send/receive is essentially ufsdump/ufsrestore on steroids. zfs send can be used to create “streams” from snapshots, and send those streams to files, other systems, or indeed another dataset with zfs recv.

zfs send/recv, combined with the snapshot functionality, allows us to build our own quite sophisticated backup solutions with relative ease.

Start, as usual, with a test dataset with a few files copied to it:
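The dataset name here (datapool/sendfs) is just an example, and any files will do:

    # zfs create datapool/sendfs
    # cp /etc/hosts /etc/nsswitch.conf /etc/vfstab /datapool/sendfs/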

Create a snapshot of the filesystem:
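As before:

    # zfs snapshot datapool/sendfs@20130129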

zfs send writes a stream of the current snapshot to STDOUT. zfs recv receives a ZFS data stream on STDIN. Thus, we can simply pipe the output of zfs send into zfs recv and create a new filesystem from the stream on the fly:
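For example:

    # zfs send datapool/sendfs@20130129 | zfs recv datapool/receiveme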

We can see that the new filesystem has been created:
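Again, the sizes shown here are illustrative:

    # zfs list -r datapool
    NAME                 USED  AVAIL  REFER  MOUNTPOINT
    datapool             937K  1.95G    32K  /datapool
    datapool/receiveme   306K  1.95G   306K  /datapool/receiveme
    datapool/sendfs      306K  1.95G   306K  /datapool/sendfs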

The snapshot has also been created:
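zfs recv recreates the snapshot on the receiving side:

    # zfs list -t snapshot
    NAME                          USED  AVAIL  REFER  MOUNTPOINT
    datapool/receiveme@20130129      0      -   306K  -
    datapool/sendfs@20130129         0      -   306K  -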

We won’t be needing that, so we can remove it:
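Exactly as with any other snapshot:

    # zfs destroy datapool/receiveme@20130129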

We can verify that datapool/receiveme contains the data we sent:
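A quick listing confirms it:

    # ls /datapool/receiveme
    hosts           nsswitch.conf   vfstab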

By default, zfs send sends a full stream. zfs send -i sends an incremental stream (the changes between two snapshots), which lets you fashion the backup system I’ve been prattling on about – and it is then that the snapshots retained on the destination (receiving) dataset become essential, both as the base for each incremental and for restoring the filesystem to a point in time. A full treatment of such a solution is beyond the scope of this article.
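A sketch of an incremental send, using a hypothetical second snapshot (note that this only works if the matching base snapshot still exists on the receiving dataset – unlike above, where we destroyed it):

    # zfs snapshot datapool/sendfs@20130130
    # zfs send -i datapool/sendfs@20130129 datapool/sendfs@20130130 | zfs recv datapool/receiveme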

As zfs send/recv operate on streams (just like the rest of UNIX), we can do things like:
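For instance, compressing a stream into a file, or sending one over ssh to another host (remotehost and remotepool are, of course, hypothetical):

    # zfs send datapool/sendfs@20130129 | gzip > /var/tmp/sendfs.zfs.gz
    # zfs send datapool/sendfs@20130129 | ssh remotehost zfs recv remotepool/sendfs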

Conclusion

This article has covered the ZFS basics, and a few advanced concepts too. In a later article, I’ll introduce other concepts such as ZFS Delegated administration, setting up NFS/SMB servers using a ZFS backing store, repairing failed zpools (and scrubbing) and much more.
