LDOM live migration basics

One of the best features introduced in Oracle VM Server for SPARC 2.1 is live migration. Live migration lets you move an active LDOM from one physical system to another without any downtime. This helps when you need to carry out maintenance activities such as patching or hardware changes on a physical server, and it also helps with load balancing between two servers.

Live Migration phases

Phase 1 : Pre-checks
In this phase the source machine does pre-checks on the target system to ensure that the migration will succeed.
Phase 2 : Target LDOM creation
Here an LDOM is created on the target machine, which remains in the bound state until the migration is complete.
Phase 3 : Run time state transfer
The run time state of the source LDOM is transferred to the target machine. Any changes made to the source LDOM during the migration are also transferred to the target machine by the LDOM manager. The information is retrieved from the hypervisor on the source machine and transferred to the hypervisor on the target machine.
Phase 4 : Source domain suspension
In this phase the source domain is suspended for a brief period and the remaining state information of the source LDOM is transferred to the target machine.
Phase 5 : Hand-off
In this last step a hand-off occurs between the LDOM manager on the source machine and the LDOM manager on the target machine. The migrated LDOM resumes execution on the target machine and the source LDOM is destroyed completely.

Hardware Requirements

1. CPU
a. Same CPU migration
– Sufficient number of CPUs on the target machine to accommodate the migrating LDOM.
– For systems running Solaris 10, both the target and source systems must have the same processor type.
To check the CPU type and frequency:

# psrinfo -pv
The physical processor has 8 virtual processors (0-7)
  SPARC-T4 (chipid 0, clock 2548 MHz)

– The stick frequency of the source and target system CPUs must also match. Solaris 11 does not have this restriction.

# prtconf -pv | grep stick-frequency
    stick-frequency:  05f4bc08

b. Cross CPU migration
– UltraSPARC T2+ CPU or higher
– Both target and source systems should have Solaris 11 running.
– Set the “cpu-arch” property on the source machine to “generic”. The generic cpu-arch uses only common CPU hardware features, enabling the LDOM to perform a CPU-independent migration to an older or newer CPU type. The default value is “native”, which uses CPU-specific hardware features and can be kept if you don’t need cross CPU migration. To enable the generic attribute:

primary# ldm set-domain cpu-arch=generic ldom01

2. Memory
– Sufficient memory on target machine to accommodate migrating LDOM.
– The target must be able to provide the same number of identically sized memory blocks with the same real addresses (a quick free-memory check is sketched below).
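
To gauge whether the target has enough free memory before starting a migration, you can list the unbound memory on the target machine:

primary# ldm list-devices mem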

3. I/O related
– Domains with direct I/O access (I/O domains) cannot be migrated.
– Each virtual disk back end on the target system must have the same volume name, but the path to the actual back-end device may differ.
For example, when you add a back-end device to a guest LDOM on the source machine as:

primary# ldm add-vdsdev /dev/dsk/c2t6d0s2 vol01@primary-vds0

Here vol01 must exist on the target machine as vol01, but the disk slice c2t6d0s2 can have a different name on the target machine, as long as it refers to the same device, of course.

4. Virtual Services
– All three services, vsw, vds and vcc (the last only when a console group is used), must be present on the target machine; see the check sketched below.
– At least one free port must be available in the vcc port range.
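
To confirm that the required services exist on the target machine, list them there as well (primary is assumed to be the service domain on the target):

primary# ldm list-services primary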

Software requirements

– Oracle VM Server for SPARC version 2.1 or later for same CPU migration; version 2.2 or later for cross CPU migration

Migrating a domain (examples)

primary# ldm migrate ldom01 root@target-system
Target Password:
primary# ldm migrate -p password_file ldom01 root@target-system     (unattended migration)
primary# ldm migrate ldom01 root@target-system:ldom_other    (Migrating and renaming the ldom)

Monitoring migration status

You can check status of the migration from both source and target machines as below:

primary# ldm list ldom-source
NAME         STATE      FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME 
ldom-source  suspended  -n---s         1     1G     0.0%  2h 7m
primary# ldm list ldom-target
NAME         STATE      FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME 
ldom-target  bound      -----t  5000   1     1G

The sixth character of the FLAGS field indicates whether the domain is the source (s) or the target (t) of the migration.

primary# ldm list -o status ldom-target 
NAME
ldom-target
STATUS
    OPERATION    PROGRESS    SOURCE
    migration    34%         source-system

Canceling a Migration in progress

Either sending a KILL signal to the ldm migrate command or running ldm cancel-operation terminates a migration in progress. Both must be executed on the source machine; the target machine cannot control the migration process. When the migration is canceled, the LDOM created on the target machine is destroyed.
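
As a sketch, a migration of ldom01 that is still in progress could be canceled from the source machine like this (the domain name is taken from the earlier examples):

primary# ldm cancel-operation migration ldom01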


How to clone LDOMs using ZFS

The ZFS snapshot and cloning features can be used to clone LDOMs. This comes in very handy when you need to create multiple LDOMs with some software already installed. The steps involved are:

1. Setup the primary domain
2. Create a guest LDOM (base LDOM)
3. Unconfigure, stop and unbind the base LDOM
4. Take a ZFS snapshot of the base LDOM (called the golden image)
5. Clone the golden image to create new LDOMs

Setup the primary domain

Set up the primary domain with the necessary resources and services and reboot the machine for the configuration changes to take effect.
Create default services

primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# ldm add-vds primary-vds0 primary
primary# ldm add-vsw net-dev=nxge0 primary-vsw0 primary
primary# ldm list-services primary 
VDS
    NAME             VOLUME         OPTIONS          DEVICE
    primary-vds0
VCC
    NAME             PORT-RANGE
    primary-vcc0     5000-5100
VSW
    NAME             MAC               NET-DEV   DEVICE     MODE
    primary-vsw0     02:04:4f:fb:9f:0d nxge0     switch@0   prog,promisc

Ensure the ldmd daemon is online and set the CPU and memory resources for the primary domain.

primary# svcs -a | grep ldmd
online 14:23:34 svc:/ldoms/ldmd:default
primary# ldm set-mau 1 primary
primary# ldm set-vcpu 8 primary
primary# ldm start-reconf primary          (delayed reconfiguration)
primary# ldm set-memory 4G primary
primary# ldm add-config new_config
primary# ldm list-config 
factory-default
new_config [current]

Reboot primary domain for new configuration (new_config) to become active.

primary# shutdown -y -g0 -i6

Enable networking between primary and guest domains

primary# ifconfig nxge0 down unplumb
primary# ifconfig vsw0 plumb
primary# ifconfig vsw0 192.168.1.2 netmask + broadcast + up
primary# mv /etc/hostname.nxge0 /etc/hostname.vsw0

Enable virtual network terminal server daemon if not already enabled.

primary# svcadm enable vntsd
primary# svcs vntsd
    STATE          STIME    FMRI
    online         Oct_12   svc:/ldoms/vntsd:default

Setting up the base LDOM

Setup the base LDOM (base_ldom) with 8 VCPU, 2GB Memory, virtual network device vnet1 and zfs volume (base_ldom) as a virtual disk (vdisk1).

primary# ldm add-domain base_ldom
primary# ldm add-vcpu 8 base_ldom
primary# ldm add-memory 2G base_ldom
primary# ldm add-vnet vnet1 primary-vsw0 base_ldom
primary# zfs create -V 5gb ldompool/base_ldomvol
primary# ldm add-vdsdev /dev/zvol/dsk/ldompool/base_ldomvol vol01@primary-vds0
primary# ldm add-vdisk vdisk1 vol01@primary-vds0 base_ldom

Set the boot environment variables

primary# ldm set-var auto-boot?=true base_ldom
primary# ldm set-var boot-device=vdisk1 base_ldom

Install Solaris 10 on the base LDOM using a Solaris 10 ISO image. We will add the ISO image as a virtual disk and then boot the base LDOM from this disk to install Solaris 10.

primary# ldm add-vdsdev options=ro /data/sol_10.iso iso@primary-vds0
primary# ldm add-vdisk sol10_iso iso@primary-vds0 base_ldom

Bind and start the base LDOM. The Solaris 10 ISO should appear in the devalias output at the OK prompt as sol10_iso. Boot from this image to start the installation.

primary# ldm bind base_ldom
primary# ldm start base_ldom
LDom base_ldom started
ok> devalias
sol10_iso                /virtual-devices@100/channel-devices@200/disk@1
vdisk0                   /virtual-devices@100/channel-devices@200/disk@0
vnet1                    /virtual-devices@100/channel-devices@200/network@0
net                      /virtual-devices@100/channel-devices@200/network@0
disk                     /virtual-devices@100/channel-devices@200/disk@0
virtual-console          /virtual-devices/console@1
name                     aliases
ok> boot sol10_iso

Unconfigure, stop and unbind base LDOM

Unconfigure the base LDOM, which automatically halts it. We then stop and unbind the LDOM so that we can take a snapshot of its boot disk volume (base_ldomvol).

base_ldom# sys-unconfig  (the ldom halts after this)
primary-domain# ldm stop base_ldom
primary-domain# ldm unbind base_ldom

Create the golden image

To create the golden image take a snapshot of the base_ldomvol from the base ldom.

primary-domain# zfs snapshot ldompool/base_ldomvol@golden

Clone the golden image to create new LDOM

Clone the base_ldomvol snapshot (the golden image) and use it to create the new LDOM, ldom01, with 4 VCPUs, 4 GB of memory and 1 MAU.

primary-domain# zfs clone ldompool/base_ldomvol@golden ldompool/ldom01_bootvol
primary-domain# ldm add-domain ldom01
primary-domain# ldm set-mau 1 ldom01
primary-domain# ldm set-vcpu 4 ldom01
primary-domain# ldm set-mem 4G ldom01
primary-domain# ldm add-vnet vnet1 primary-vsw0 ldom01
primary-domain# ldm add-vdsdev /dev/zvol/dsk/ldompool/ldom01_bootvol vol02@primary-vds0
primary-domain# ldm add-vdisk vdisk1 vol02@primary-vds0 ldom01
primary-domain# ldm set-variable auto-boot?=false ldom01
primary-domain# ldm bind ldom01
primary-domain# ldm start ldom01

When you boot the new LDOM you will have to configure the hostname, IP address, time zone and other settings, because it is an unconfigured LDOM. A console connection for this is sketched below.
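
To reach that initial configuration screen, connect to the new domain's console; a sketch, assuming ldom01 was assigned console port 5000 (check the CONS column of ldm list):

primary-domain# ldm list ldom01
primary-domain# telnet localhost 5000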

How to install and configure LDOMs

Virtualization has been a necessity for the past several years, as we now have machines with 16 or more cores and memory measured in TBs. A single machine is now capable of accommodating more than 100 VMs at a time. Oracle VM Server for SPARC, formerly known as LDOMs, has played a key role in Oracle's virtualization strategy and is improving with every version. Before we start configuring our first Oracle VM for SPARC, let us understand the types of LDOMs, LDOM services and virtual devices.

Types of logical domains

– Guest : No direct access to the underlying hardware and does not provide virtual devices or services to other LDOMs. Uses virtual devices.
– I/O : Has direct access to the underlying hardware in the server. It can be used in cases like an Oracle DB that wants direct/raw access to the storage devices.
– Service : Provides virtualized devices and services to guest domains.
– Control : A service domain that also runs the LDOMs Manager software to control the configuration of the hypervisor. The LDOM manager is responsible for mapping between physical and virtual devices.

Virtual Services and Devices

– VLDC (virtual logical domain channel) : communication channel between a logical domain and the hypervisor
– VCC (virtual console concentrator) : acts as a virtual console service for the logical domains
– VSW (virtual switch) : provides network access for guest LDOMs to the physical network ports
– VDS (virtual disk service) : provides virtual storage services for guest LDOMs
– VCPU (virtual CPU) : each thread of a T-series CPU acts as a virtual CPU
– MAU (modular arithmetic unit) : each core of a T-series CPU has an MAU for accelerated RSA/DSA cryptography
– Memory : physical memory is mapped into virtual memory and assigned to LDOMs
– VCONS (virtual console) : a port in the guest LDOM that connects to the VCC service in the control domain
– VNET (virtual network) : a network port in the guest LDOM that is connected to the VSW service in the control domain
– VDSDEV (virtual disk service device) : a physical storage device that is virtualized by the VDS service in the control domain
– VDISK (virtual disk) : a virtual disk in the guest domain that is connected to the VDS service in the control/service domain

Installing the OVM software

To install the LDOM software, simply unzip the software zip file and run the install-ldm script; use the -s option if you don't want the configuration assistant to configure the primary and guest LDOMs.

primary # unzip OVM_Server_SPARC_latest.zip
primary # ./install-ldm -s

Creating the default services

Create the essential services like vsw, vcc and vds required to serve the guest LDOMs.

primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# ldm add-vds primary-vds0 primary
primary# ldm add-vsw net-dev=nxge0 primary-vsw0 primary
primary# ldm list-services primary 
VDS
    NAME             VOLUME         OPTIONS          DEVICE
    primary-vds0
VCC
    NAME             PORT-RANGE
    primary-vcc0     5000-5100
VSW
    NAME             MAC               NET-DEV   DEVICE     MODE
    primary-vsw0     02:04:4f:fb:9f:0d nxge0     switch@0   prog,promisc

Initial configuration of the control domain

By default all the VCPUs, memory and MAUs are assigned to the primary domain, which is the default domain created after installing the OVM for SPARC software. The primary (control) domain is used to configure all the guest LDOMs and to provide the necessary virtual services to them, such as vcc, vsw and vds. The Logical Domains Manager is responsible for creating, deleting, modifying and controlling LDOMs, so make sure the ldmd service is running before configuring the primary and guest domains. Use delayed reconfiguration to queue configuration changes to the primary LDOM; the queued changes take effect together after the next reboot, rather than requiring a reboot after every change.

primary# svcs -a | grep ldmd
online 14:23:34 svc:/ldoms/ldmd:default
primary# ldm set-mau 1 primary
primary# ldm set-vcpu 8 primary
primary# ldm start-reconf primary          (delayed reconfiguration)
primary# ldm set-memory 4G primary
primary# ldm add-config new_config
primary# ldm list-config 
factory-default
new_config [current]

Reboot the primary domain for configuration settings to take effect

primary# shutdown -y -g0 -i6

Enable networking between primary and guest domains

By default, communication between the control domain and the guest domains is disabled. To enable it, the virtual switch interface (vsw0) has to be plumbed as the network interface in place of nxge0.

primary# ifconfig nxge0 down unplumb
primary# ifconfig vsw0 plumb
primary# ifconfig vsw0 192.168.1.2 netmask + broadcast + up
primary# mv /etc/hostname.nxge0 /etc/hostname.vsw0

Enable virtual network terminal server daemon

The vntsd daemon is responsible for providing the virtual network terminal services to the guest LDOMs. If this service is not running, enable it with the svcadm command.

primary# svcadm enable vntsd
primary# svcs vntsd
    STATE          STIME    FMRI
    online         Oct_12   svc:/ldoms/vntsd:default

Setting up the Guest Domain

We will assign 8 VCPUs, 2 GB of memory and 1 MAU to our first guest LDOM. A virtual network device, vnet1, will also be created and attached to the virtual switch service primary-vsw0.

primary# ldm add-domain ldom01
primary# ldm add-vcpu 8 ldom01
primary# ldm add-memory 2G ldom01
primary# ldm set-mau 1 ldom01
primary# ldm add-vnet vnet1 primary-vsw0 ldom01

Adding storage to the guest domain

Here we first specify the physical device to be exported through a vdsdev to the guest domain, and then add the virtual disk thus created to the guest domain. Use any one of the three methods below.
1. Adding physical disks

primary# ldm add-vdsdev /dev/dsk/c2t1d0s2 vol1@primary-vds0
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldom01

2. Adding file

primary# mkfile 10g /ldoms/ldom01_boot
primary# ldm add-vdsdev /ldoms/ldom01_boot vol1@primary-vds0
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldom01

3. Adding a volume

primary# zfs create -V 5gb pool/vol01
primary# ldm add-vdsdev /dev/zvol/dsk/pool/vol01 vol1@primary-vds0
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldom01

Setting variables

Setup the boot environment variable for the guest ldom.

primary# ldm set-var auto-boot?=true ldom01
primary# ldm set-var boot-device=vdisk1 ldom01

Setting up the solaris ISO image for installing guest ldom

We could also do a jumpstart installation of the guest domain, but one of the easiest and most widely used methods is to add the ISO image as a virtual disk to the guest LDOM and install from it. You can then access the vdisk sol10_iso at the OK prompt and boot from it.

primary# ldm add-vdsdev options=ro /data/sol_10.iso iso@primary-vds0
primary# ldm add-vdisk sol10_iso iso@primary-vds0 ldom01

Bind and start installing the ldom

primary# ldm bind ldom01
primary# ldm start ldom01
LDom ldom01 started
ok> devalias
ok> boot sol10_iso

Connect the guest domain

Now check the port which is bound with the guest domain and connect the virtual console of the guest domain.

primary:~ # ldm list
NAME    STATE  FLAGS CONS VCPU MEMORY UTIL UPTIME 
primary active -n-cv  SP   8    4G    0.3% 8h 46m 
ldom01  active -n--- 5000  8    2G     48% 1h 52m
primary# telnet localhost 5000
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is ’^]’.
Connecting to console "ldom01" in group "ldom01" .... Press ~? for control options ..

Flag definitions

Now you can see various flags in the “ldm list” command output. The flags represent the current state of the LDOM.

Column 1 : s = starting or stopping, - = placeholder
Column 2 : n = normal, t = transition, - = placeholder
Column 3 : d = delayed reconfiguration, - = placeholder
Column 4 : c = control domain, - = placeholder
Column 5 : v = virtual I/O service domain, - = placeholder
Column 6 : s = source domain in migration, t = target domain in migration, e = error occurred in migration, - = placeholder

Other useful Commands

View current version of Oracle VM server for SPARC software

primary# ldm -V

Long listing of domains

primary# ldm list -l

List selected resources for all LDOMs or for a specific LDOM

# ldm list -o cpu primary
# ldm list -o network,memory ldom01

List the boot variables

# ldm list-variable boot-device ldg1 
boot-device=/virtual-devices@100/channel-devices@200/disk@0:a

List the bindings of all the LDOMs

# ldm list-bindings ldom

List all server resources, bound and unbound.

# ldm list-devices -a
# ldm list-devices mem

How to clone a solaris 11 zone

The first step in cloning any zone is to create a profile and store it as a template. Log in to the non-global zone and use sysconfig to create the configuration template, which will later be used to install and configure our cloned zone, zone02. The System Configuration Tool starts when you run the sysconfig command, and you can configure the hostname, IP address, time zone and so on.

Configuration Template creation

root@geeklab:~# zlogin zone01
root@zone01:~# sysconfig create-profile -o /root/zone02-template.xml

The system configuration tool will guide you through the configuration process.

Set the hostname for the zone as zone02 and mode of network configuration as manual.


On the next screen give the IP address to the NIC card net0 and a netmask.


We will not configure any DNS service so select “Do not configure DNS”.


Select “None” option for alternate name service.


On the next screens set the time zone according to your location.

Now set the root password. If you want any additional user to be created, you can do that on this screen as well. Note that you cannot create a user that already exists in zone01.


Profile creation

Now we will create the profile for zone02. First, halt zone01 from the global zone.

root@geeklab:~# zoneadm -z zone01 halt
root@geeklab:~# zoneadm list -ivc
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone01           installed  /rpool/zone01                  solaris  excl

Export the zone01 configuration which we will use as a profile template for creating our new zone, zone02.

root@geeklab:~# zonecfg -z zone01 export -f zone02-profile

Edit the zone02-profile file and change zonepath to /rpool/zone02 (make sure you have created this file system). Also make sure the double quotes around “-m verbose” are kept; otherwise zonecfg will give an error while creating zone02.

root@geeklab:~# cat zone02-profile
create -b
set brand=solaris
set zonepath=/rpool/zone02
set autoboot=true
set bootargs="-m verbose"
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end

Copy the configuration XML template to a location in the global zone.

root@geeklab:~# cp /rpool/zone01/root/root/zone02-template.xml /var/tmp/

Now create zone02 by cloning zone01. First we use zonecfg with the modified zone02 profile file to configure zone02, and then clone zone01 using the zoneadm command.

root@geeklab:~# zonecfg -z zone02 -f /root/zone02-profile
root@geeklab:~# zoneadm -z zone02 clone -c /var/tmp/zone02-template.xml zone01
/rpool/zone02 must not be group readable.
/rpool/zone02 must not be group executable.
/rpool/zone02 must not be world readable.
/rpool/zone02 must not be world executable.
changing zonepath permissions to 0700.
Progress being logged to /var/log/zones/zoneadm.20131122T124138Z.zone02.clone
Log saved in non-global zone as /rpool/zone02/root/var/log/zones/zoneadm.20131122T124138Z.zone02.clone

Confirm the creation of zone02. You should see new ZFS child datasets created under the rpool/zone02 file system. Also check the zoneadm list output.

root@geeklab:~# zfs list |grep zone02
rpool/zone02                            366K  4.47G    35K  /rpool/zone02
rpool/zone02/rpool                      330K  4.47G    31K  /rpool
rpool/zone02/rpool/ROOT                 310K  4.47G    31K  legacy
rpool/zone02/rpool/ROOT/solaris-0       308K  4.47G   420M  /rpool/zone02/root
rpool/zone02/rpool/ROOT/solaris-0/var    44K  4.47G  23.8M  /rpool/zone02/root/var
rpool/zone02/rpool/VARSHARE               1K  4.47G    39K  /var/share
rpool/zone02/rpool/export                 2K  4.47G    32K  /export
rpool/zone02/rpool/export/home            1K  4.47G    31K  /export/home
root@geeklab:~# zoneadm list -ivc
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone01           installed  /rpool/zone01                  solaris  excl
   - zone02           installed  /rpool/zone02                  solaris  excl

Boot the new zone and log in to its console. Unlike the normal configuration of a Solaris 11 zone through the System Configuration Tool, the OS uses the XML template to configure the zone, so we do not have to provide any input to configure zone02.

root@geeklab:~# zoneadm -z zone02 boot
root@geeklab:~# zlogin -C zone02

Exit out of the console of the zone02 by pressing “~.”.

Login to the zone and verify the network settings and filesystems.

root@zone02:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           192.168.1.35/24
lo0/v6            static   ok           ::1/128
net0/v6           addrconf ok           fe80::8:20ff:febf:cf6e/10
root@zone02:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     37.3M  4.43G    31K  /rpool
rpool/ROOT                37.2M  4.43G    31K  legacy
rpool/ROOT/solaris-0      37.2M  4.43G   453M  /
rpool/ROOT/solaris-0/var   246K  4.43G  23.8M  /var
rpool/VARSHARE              19K  4.43G    39K  /var/share
rpool/export                36K  4.43G    32K  /export
rpool/export/home           18K  4.43G    31K  /export/home

ZFS Part 5: ZFS Clones and Sending/Receiving ZFS Data

A ZFS clone is a writable copy of a filesystem created from a snapshot. It still refers to the snapshot it was created from, but allows us to make changes. We cannot remove the origin snapshot whilst the clone is in use, unless we promote the clone. These concepts will become clear during the examples.

Let’s create a test dataset:
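
A likely form of this step, assuming a pool named datapool already exists (the dataset name is taken from the origin output discussed below):

# zfs create datapool/clonefs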

Put some data on the filesystem:
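
Any sample files will do; a hypothetical example, assuming the default mountpoint /datapool/clonefs:

# cp /etc/services /etc/hosts /datapool/clonefs/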

We can now take a snapshot of the filesystem:
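
Using the snapshot name that appears later in this walkthrough:

# zfs snapshot datapool/clonefs@20130129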

Now, take a clone of this snapshot into a new dataset:
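
With the names used in this example:

# zfs clone datapool/clonefs@20130129 datapool/cloned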

Here, the clone is being created from the snapshot datapool/clonefs@20130129 into the new dataset datapool/cloned. zfs list shows the new dataset:
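
The listing referred to here would come from something like:

# zfs list -r datapool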

See that 19KB is used, but the dataset refers to 306KB somewhere else. Where does that originate? The origin property of the ZFS dataset datapool/cloned will show us:
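
The property can be queried directly:

# zfs get origin datapool/cloned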

There we go. We will be unable to delete the origin snapshot (as it’s still required for the clone to function):
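
Attempting to destroy the origin snapshot is refused; zfs reports that the snapshot has dependent clones and offers the -R option, which would destroy the clone as well:

# zfs destroy datapool/clonefs@20130129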

No – we don’t want that! Before we complete this, verify that the dataset is in-fact a read-write clone:
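
A simple write test, assuming the default mountpoint:

# touch /datapool/cloned/testfile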

The ZFS dataset can be promoted:
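
The promotion itself:

# zfs promote datapool/cloned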

See that the snapshot is now a snapshot of the CLONED filesystem:
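
Listing snapshots should now show it under the cloned dataset:

# zfs list -t snapshot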

And that the original filesystem (the one we cloned) is now dependent on this snapshot, and uses it as its origin:
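
Again via the origin property:

# zfs get origin datapool/clonefs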

Essentially, the parent-child relationship is switched. We can switch it back:
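
Promoting the original dataset reverses the relationship:

# zfs promote datapool/clonefs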

Switch it back again (dizzy yet?) and then you can destroy the dataset that datapool/cloned was created from (i.e. datapool/clonefs):
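
Something along these lines, promoting datapool/cloned once more and then destroying the now-dependent original:

# zfs promote datapool/cloned
# zfs destroy datapool/clonefs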

As the dependent filesystem has now been removed, the snapshot too can be removed:
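
After the final promotion the snapshot belongs to datapool/cloned:

# zfs destroy datapool/cloned@20130129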

And we’re done with clones.

Sending/Receiving ZFS Data

ZFS send/receive is essentially ufsdump/ufsrestore on steroids. zfs send can be used to create “streams” from snapshots, and send those streams to files, other systems, or indeed another dataset with zfs recv.

zfs send/recv, along with the snapshot functionality, allow us to create our own complex backup solutions relatively simply.

Start, as usual, with a test dataset with a few files copied to it:
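
A sketch of the setup; the source dataset name datapool/sendme is an assumption, since only the receiving dataset name appears later in the text:

# zfs create datapool/sendme
# cp /etc/services /etc/hosts /datapool/sendme/    (any sample files, default mountpoint assumed)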

Create a snapshot of the filesystem:
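
Continuing with the assumed names:

# zfs snapshot datapool/sendme@20130129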

zfs send writes a stream of the current snapshot to STDOUT. zfs recv receives a ZFS data stream on STDIN. Thus, we can just pipe the output of zfs send into zfs recv and create a new filesystem from the stream on-the-fly:
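
Using the assumed source dataset and the datapool/receiveme target mentioned below:

# zfs send datapool/sendme@20130129 | zfs recv datapool/receiveme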

We can see that the new filesystem has been created:
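
For example:

# zfs list datapool/receiveme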

The snapshot has also been created:
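
A snapshot listing would show the received snapshot alongside the source one:

# zfs list -t snapshot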

We won’t be needing that, so we can remove it:
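
The snapshot on the receiving side can simply be destroyed (snapshot name follows the assumed naming above):

# zfs destroy datapool/receiveme@20130129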

We can verify that datapool/receiveme contains the data we sent:
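
Assuming the default mountpoint:

# ls /datapool/receiveme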

zfs send will, by default, send a full backup. You can use zfs send -i to send incremental backups and fashion yourself that backup system I’ve been prattling on about – which is when the ZFS snapshots that it creates on the destination (receiving) dataset are required (so that the filesystem can be restored to a point-in-time, for example). Going into this solution within this article is stretching the scope a little.
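
A hedged sketch of such an incremental send, writing the stream between two snapshots of the assumed source dataset to a file (the /backup path is hypothetical):

# zfs snapshot datapool/sendme@20130130
# zfs send -i datapool/sendme@20130129 datapool/sendme@20130130 > /backup/sendme-incr.zfs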

As zfs send/recv operate on streams (just like the rest of UNIX), we can do things like:
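
For instance, compressing the stream into a file or sending it over SSH to another machine (host, pool and path names are hypothetical):

# zfs send datapool/sendme@20130129 | gzip > /backup/sendme.zfs.gz
# zfs send datapool/sendme@20130129 | ssh backuphost zfs recv backuppool/sendme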

Conclusion

This article has covered the ZFS basics, and a few advanced concepts too. In a later article, I’ll introduce other concepts such as ZFS Delegated administration, setting up NFS/SMB servers using a ZFS backing store, repairing failed zpools (and scrubbing) and much more.