Building a Solaris 11 repository without network connection

Solaris 11 has been released and is a fantastic new iteration of Oracle's rock-solid, enterprise operating system. One of the great new features is the repository-based Image Packaging System. IPS not only introduces new cloud-based package installation services, it is also integrated with zones, boot environments and ZFS file systems to provide a safe, easy and fast way to perform system updates.

My customers typically don’t have network access and, in fact, can’t connect to any network until they have “Authority to connect.”  It’s useful, however, to build up a Solaris 11 system with additional software using the new Image Packaging System and locally stored repository. The Solaris 11 documentation describes how to create a locally stored repository with full explanations of what the commands do. I’m simply providing the quick and dirty steps.

The easiest way is to download the ISO image, burn it to a DVD and insert it into your DVD drive. Then, as root:

  • pkg set-publisher -G '*' -g file:///cdrom/sol11repo_full/repo solaris

Now you can install software using the GUI package manager or the pkg commands. If you would like something more permanent (or don't have a DVD drive), however, it takes a little more work.

  • After installing Solaris 11, download (on another system perhaps) the two files that make up the Solaris 11 repository from our download site
  • Sneaker-net the files to your Solaris 11 system
  • Concatenate (cat) the two files together to create one large ISO image of about 6.9 GB (see the sketch after this list)
  • mount -F hsfs sol-11-11-repo-full.iso /mnt
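The concatenation step looks something like this – the part-file names are my assumption, as they depend on how the download site splits the image:

    cat sol-11-11-repo-full.iso-a sol-11-11-repo-full.iso-b > sol-11-11-repo-full.iso
    ls -lh sol-11-11-repo-full.iso    # should come out at roughly 6.9 GB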

You could stop here and set the publisher to point to the /mnt/repo location; however, this mount will not persist across reboots. Copy the repository from the mounted ISO image to a permanent, on-disk location.

  • zfs create -o atime=off -o compression=on rpool/export/repoSolaris11
  • rsync -aP /mnt/repo /export/repoSolaris11
  • pkgrepo -s /export/repoSolaris11/repo refresh
  • pkg set-publisher -G '*' -g /export/repoSolaris11/repo solaris

You now have a locally installed repository for adding additional software packages for Solaris 11.  The documentation also takes you through publishing your repository on the network so that others can access it.
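For reference, serving the repository over HTTP via the pkg/server SMF service looks roughly like this – a sketch only, so check the documentation for the full procedure:

    svccfg -s application/pkg/server setprop pkg/inst_root=/export/repoSolaris11/repo
    svccfg -s application/pkg/server setprop pkg/readonly=true
    svcadm refresh application/pkg/server
    svcadm enable application/pkg/server

Clients can then point at the repository server with pkg set-publisher -G '*' -g http://<repo-server>/ solaris.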


ZFS Part 4: ZFS Snapshots

Snapshots are another piece of awesome functionality built right into ZFS. Essentially, snapshots are a read-only picture of a filesystem at a particular point-in-time. You can use these snapshots to perform incremental backups of filesystems (sending them to remote systems, too), create filesystem clones, create pre-upgrade backups prior to working with new software on a filesystem, and so on.

A snapshot will, initially, only refer to the files on the parent ZFS dataset from which it was created and will not consume any space. It only starts to consume space once the data on the original dataset is changed: the snapshot continues to refer to the old blocks and will not free them, so the snapshot starts consuming space within the pool. The files in the snapshot can be accessed and read via standard UNIX tools (once you know where to look).

The best way to discuss these concepts is via some examples. Let us start by creating a test dataset:
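Something like the following, assuming a pool called datapool (the pool and dataset names used throughout these examples):

    zfs create datapool/snapshotfs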

Copy a few files to the new filesystem:
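Any files will do; I'm assuming copies of /etc/passwd, /etc/shadow and /etc/group, since those are the copies removed later in the walkthrough (the dataset mounts at /datapool/snapshotfs by default):

    cp /etc/passwd /etc/shadow /etc/group /datapool/snapshotfs/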

Verify that the dataset has been created and is using space within the pool:
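For instance:

    zfs list -r datapool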

So we can see that 35KB is being referenced, and used, by the dataset, which is all as expected.

Now, let’s take a snapshot of this dataset. Snapshots are named <datasetname>@<arbitrarystring> – you can pretty much use whichever string you want for arbitrarystring, but it’d make sense to use something meaningful, such as the date or some such.

Create the snapshot:
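Using the date-based naming from the rest of this walkthrough:

    zfs snapshot datapool/snapshotfs@20130129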

Verify that the snapshot has been created:
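Snapshots have their own listing type:

    zfs list -t snapshot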

You can see here that whilst the snapshot refers to 35KB of data, 0 bytes are currently used. This is expected as we are referencing all the data within datapool/snapshotfs, and haven’t yet changed anything.

Let’s delete a file from the filesystem.
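For example, removing the copy of /etc/shadow referred to below:

    rm /datapool/snapshotfs/shadow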

Now, a zfs list -t snapshot shows that the snapshot is consuming 21KB:

This is expected, as the snapshot now has a copy (via copy-on-write) of the data we deleted.

Create another snapshot (note REFER is now 34K as the file copy of /etc/shadow was removed from datapool/snapshotfs) …
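Something along these lines – the -2 suffix is my assumption, as only the first and third snapshot names appear explicitly in the text:

    zfs snapshot datapool/snapshotfs@20130129-2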

and remove another file.
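This time the copy of /etc/passwd:

    rm /datapool/snapshotfs/passwd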

USED will change accordingly:

See that the first snapshot we took HASN’T changed, again due to copy-on-write. Just to labour the point, let’s create another snapshot:
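This one is the snapshot named in the rollback example further down:

    zfs snapshot datapool/snapshotfs@20130129-3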

Again, REFER has gone down to 31.5K (due to the copy of /etc/passwd being removed from datapool/snapshotfs) and USED is 0 because we haven’t done anything else to datapool/snapshotfs since creating the snapshot.

Remove the final file:
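That is, remove the copy of /etc/group and check the snapshot list again:

    rm /datapool/snapshotfs/group
    zfs list -t snapshot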

All as expected. USED is up again. Thus, snapshots contain incremental changes and can be used as the basis of developing incremental backups (once an appropriate full backup has been taken). We can use these to rollback, too. Let’s rollback to datapool/snapshotfs@20130129-3, and get our copy of /etc/group back:
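A sketch of the rollback, followed by a check that the file has reappeared:

    zfs rollback datapool/snapshotfs@20130129-3
    ls -l /datapool/snapshotfs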

Very cool. Now, let’s rollback the dataset to the first snapshot we took:
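As explained in the next paragraph, this needs -r because later snapshots exist:

    zfs rollback -r datapool/snapshotfs@20130129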

Note that we need to specify the -r option to zfs rollback, as we have multiple snapshots taken later than snapshotfs@20130129 that must be removed during the rollback to the original dataset state. Note that the original snapshot we rolled back to is still active, and it will continue to use space as the dataset is changed.

Our files are now restored.

We can access the copies of the files stored on the snapshot, under <dataset_mountpoint>/.zfs/snapshot/<snapshot_name>, for example:
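Using the default mountpoint from these examples:

    ls -l /datapool/snapshotfs/.zfs/snapshot/20130129/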

Let’s destroy the test dataset (remembering -r to zfs destroy so that the snapshot(s) are removed too):
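Which is simply:

    zfs destroy -r datapool/snapshotfs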

We can now use what we’ve learned about ZFS Snapshots to work with ZFS Clones.

ZFS Part 3: Compression & Encryption

Also available to us is ZFS compression. Let's create a test dataset, turning a few options on and off so you can see the syntax:
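A sketch of the sort of thing I mean – the dataset name and the particular options toggled here are my own choices:

    zfs create -o compression=on -o atime=off -o dedup=off datapool/compressfs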

Verify that the dataset was created with all appropriate options:
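For example:

    zfs get compression,atime,dedup,compressratio datapool/compressfs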

Compression ratio is 1.00x, as you’d expect for an empty filesystem. Copy some stuff to it:
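Anything reasonably compressible will do; manual pages are a handy candidate:

    cp -r /usr/share/man /datapool/compressfs/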

Then check the compressratio property of the dataset:
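Again with zfs get:

    zfs get compressratio datapool/compressfs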

So – compression has given us some benefit. It’d be worth weighing up compression/dedup/encryption of ZFS filesystems against the system resources which they consume. Nowadays I’d be pushing FOR turning all this stuff on – servers are cheap and can work hard. Put them to use.

Encryption

Filesystem encryption is another easy-to-implement feature of ZFS. ZFS root pools and other OS components (such as the /var filesystem) cannot be encrypted.

To start, I’ll create a new encrypted dataset. You will be prompted for a passphrase to use when encrypting/decrypting the filesystem. Needless to say – do not forget this passphrase! Create the dataset with the encryption=on option:
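Using the dataset name referred to later in this article:

    zfs create -o encryption=on datapool/encryptfs    # prompts you to enter and confirm a passphrase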

Verify that the operation has succeeded and that the encrypted dataset has been created:
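For example:

    zfs list datapool/encryptfs
    zfs get encryption datapool/encryptfs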

Encrypted ZFS datasets, when created with encryption=on and no other options, use aes-128-ccm as the default encryption algorithm.

You will see that by default, ZFS uses passphrase,prompt as the value for the keysource property:
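Check it with:

    zfs get keysource datapool/encryptfs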

In this configuration, the ZFS filesystem will not be automatically mounted at boot. Observe: after a reboot, the output of zfs mount does not contain an entry for datapool/encryptfs:

So – any encrypted datasets, using passphrase,prompt as the value for the keysource property, require manual mount with zfs mount:
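Loading the wrapping key explicitly (which is where the passphrase prompt appears) and then mounting always works:

    zfs key -l datapool/encryptfs    # prompts for the passphrase
    zfs mount datapool/encryptfs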

The passphrase can be placed in a file, so that it is automatically mounted on boot. Note – this is not secure nor is it recommended. It is best to use keys, and we will configure this shortly. For now, place the passphrase in a read-only file in root’s home directory:
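For instance – the file name is just an example, and obviously the passphrase is too:

    echo "my-secret-passphrase" > /root/encryptfs.passphrase
    chmod 400 /root/encryptfs.passphrase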

Set the keysource property to include the passphrase file:///path/to/key value as below:
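Using the example file from above:

    zfs set keysource=passphrase,file:///root/encryptfs.passphrase datapool/encryptfs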

Now, unmount the filesystem and unload the cached key (otherwise the filesystem will be remounted using the cached key and nothing will change):
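That is:

    zfs unmount datapool/encryptfs
    zfs key -u datapool/encryptfs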

If the filesystem is mounted now, we are not prompted for a passphrase:

A reboot confirms this:

OK – this is all well and good, but storing the passphrase in a file is far from being best practice. A more secure method is to use pktool to create a key, then change the dataset to use this key (or indeed create the ZFS dataset in the first place using this key). Whilst this is better – it’s still only as secure as the security of the key file location. Any compromise of the key file leads to a potential compromise of the ZFS dataset. However, we’re not storing the passphrase clear-text in a file somewhere, which is positive in my book.

The conversion from using a passphrase based key to the pktool generated key is completed as follows. First, generate a key, and store in as secure a location as possible:
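A sketch using pktool – the output location is just an example, so put the key somewhere suitably protected; the 128-bit length matches the default aes-128-ccm dataset created earlier:

    pktool genkey keystore=file outkey=/root/encryptfs.key keytype=aes keylen=128
    chmod 400 /root/encryptfs.key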

Load the existing wrapping key for the dataset, by either mounting the dataset, or using zfs key -l. In our case, we can see that the key is already loaded, and so can ignore the warning:
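In command form:

    zfs key -l datapool/encryptfs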

Change the wrapping key via zfs key -c:
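Pointing the new wrapping key at the raw key file generated above:

    zfs key -c -o keysource=raw,file:///root/encryptfs.key datapool/encryptfs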

To test that the change has been successfully implemented, unmount the filesystem and unload the wrapping key for the dataset:
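As before:

    zfs unmount datapool/encryptfs
    zfs key -u datapool/encryptfs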

Try mounting the dataset, and confirm that the operation is successful, and that no prompts/warnings are displayed:
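Something like:

    zfs mount datapool/encryptfs
    zfs mount | grep encryptfs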

As previously discussed, a ZFS filesystem with encryption=on set uses aes-128-ccm by default. We can change this when we create a new dataset, however. Observe:
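For example – the dataset name here is hypothetical:

    zfs create -o encryption=aes-256-ccm datapool/encryptfs2    # again prompts for a passphrase
    zfs get encryption datapool/encryptfs2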

Our new dataset has been created using aes-256-ccm encryption.

ZFS Part 1: Introduction

ZFS is simply awesome. It simplifies storage management, performs well, is fault-tolerant and scalable and generally is just amazing. I will use this article to demonstrate some of its interesting features. Note that we are only scraping the tip of the ZFS iceberg here; read the official documentation for much more detail. The terms dataset and filesystem are used interchangeably throughout as with ZFS they are essentially the same thing.

When I created my sol11lab VM I created a bunch of 20GB disks along with it:
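One way to list them (format prints the available disks and then exits on the empty input):

    echo | format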

Even though this will not be the final zpool/ZFS configuration (I'll presume, again, that this article is being read by advanced system administrators who have at least worked a little with ZFS), I will spend a little time creating and destroying pools, and discussing ZFS send/receive, deduplication, encryption, compression and so on.

First, I’ll create a simple striped RAID0 set using all four free disks:
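A sketch, with hypothetical device names standing in for my four spare disks:

    zpool create testpool c7t2d0 c7t3d0 c7t4d0 c7t5d0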

As you can see, zpool warns you that this simple set of disks provides no redundancy – the striped set (i.e. RAID0) will be lost if even a single pool member fails. Let's make sure the pool was indeed created how we expected:
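For example:

    zpool status testpool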

And that a default ZFS dataset has been created and mounted for the pool:
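For instance:

    zfs list testpool
    df -h /testpool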

Here you can see that the pool has been mounted under /<poolname> (in our case /testpool). Additional ZFS filesystems (or the root of the zpool itself) can have their mountpoints changed at any time via the zfs set mountpoint command. We won't spend too much time here, as this isn't our standard configuration. So let's destroy the pool:
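Which is a one-liner:

    zpool destroy testpool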

Our final configuration will be a two-way mirror (i.e. RAID1 with two disks) with a third disk as a hot-spare. However, as we have four disks, let's create another test configuration – this time RAID0/1:
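One way to build that with the same hypothetical devices – strictly speaking zpool stripes across the two mirror vdevs, i.e. RAID1+0:

    zpool create testpool mirror c7t2d0 c7t3d0 mirror c7t4d0 c7t5d0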

And see if this has had the desired effect:

As I am using virtual hardware, it isn't too easy to simulate the failure of a disk component – running cfgadm -c unconfigure against one of the disks used in the pool gives an I/O error on the SCSI device. If I had a physical server I'd yank a disk and show you that it works. For now, trust that ZFS is highly fault-tolerant and very easy to recover after a failure (just attach the new device back into the pool once it has been replaced). ZFS is only as fault-tolerant as the RAID level of your pool, though. We can create RAIDZ1, RAIDZ2 and RAIDZ3 pools, which are similar to RAID5 but with one, two or three parity disks respectively (RAIDZ2 is essentially RAID6). We can also add hot-spare disks, logging disks, caching disks, etc. and generally tune ZFS to perform in exactly the way we want.

I will destroy testpool to make way for our final destination zpool and filesystem:

Verify that it’s gone, and all we have is our original rpool:

An Introduction to Solaris 11 Zones

In Solaris 10, the default IP type for zones was shared, which meant that the zone shared the IP stack with the global zone. Within a zone on Solaris 10, an administrator was unable to configure network settings, unless exclusive IP was used, in which case the zone would be bound to a physical NIC in the global zone, and that NIC would only be available for exclusive use by that zone. With Solaris 11, and virtual networking, all zones can be created with an exclusive IP type. A Virtual NIC (VNIC) is created for each zone, over some physical NIC on the global zone. This network virtualisation allows each zone to maintain its own TCP/IP stack, and the zone administrator can change the zone’s network configuration from within the zone itself. A new anet interface type has been introduced within zonecfg to handle this.

Solaris 11 zones are now provisioned using the new Image Packaging System (IPS) and in a default configuration, packages will be installed from the repository configured (http://pkg.oracle.com, for example) in the global zone. It would make sense to have a local repository if you were rolling out large numbers of systems or zones, but for our testing purposes, downloading a couple of hundred megabytes of packages is no big issue.

This article will walk through the creation of a simple Solaris 11 zone, and introduce a method of installing zones without operator intervention using System Profiles.

Filesystem Creation

Zones are tightly integrated with ZFS in Solaris 11, and a new ZFS dataset (or datasets, depending on the configuration) will be created for each new zone provisioned. To aid in administration, I’ll create a new dataset to hold my zones. I have a ZPool available (datapool) and will create the new dataset within this:
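Something like the following – the dataset name matches what appears later when the zone is installed, and the mountpoint is the /zones path used below:

    zfs create -o mountpoint=/zones datapool/zonefs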

I only have 19.4GB available, but as zones are an extremely sparse form of virtualisation technology, it will suffice for our requirements.

Zone Creation

Next, create the zone using zonecfg and set its zonepath to a subdirectory of our new /zones filesystem:
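A minimal sketch of the zonecfg session (the default Solaris 11 template created by the bare create command already includes an exclusive-IP anet interface):

    zonecfg -z testzone-01
    zonecfg:testzone-01> create
    zonecfg:testzone-01> set zonepath=/zones/testzone-01
    zonecfg:testzone-01> verify
    zonecfg:testzone-01> commit
    zonecfg:testzone-01> exit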

You’ll note that the zone has been created and is now in the configured state:
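For example:

    zoneadm list -cv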

Here you can see that the zone is created with exclusive IP. A new VNIC will be created when the zone is booted, so for now you’ll see no additional interfaces when running dladm show-link – just the interfaces currently configured in the global zone:

System Profiles

The sysconfig utility provided with Solaris 11 allows us to unconfigure or reconfigure a Solaris instance. It is essentially a “fancy” version of sys-unconfig provided with earlier Solaris versions. One of the subcommands, however, supplied by sysconfig is create-profile. This allows us to step through all of the screens normally presented at system installation time (whether provisioning physical systems or zones) and will write out an XML file containing all the choices we made – a System Profile. This new XML format replaces the older sysidcfg format used previously when jumpstarting servers or provisioning zones. I can generate a master copy of this file from the global zone, and modify it to provide a template. This template can then be copied for each zone we wish to create, with the appropriate parameters substituted. Once the zone is installed using this profile, and booted, the appropriate system configuration will be performed automatically and the operator will not be prompted for information. This means that it’s easy to script the installation of Solaris 11 zones (as it was with Solaris 10).

Let’s create our system profile:
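Something along these lines – the output file name and location are my own choice:

    sysconfig create-profile -o /root/sc_profile_template.xml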

Work through the screens as if you were installing Solaris for the first time, configuring network, timezone, root password, user, naming services, etc. It will have NO EFFECT on current system configuration, so don’t be scared of it. I configured the parameters that I’ll be changing with obviously bogus values for my configuration – Hostname: NEWHOST, IP Address: 123.123.123.123, Gateway: 123.123.123.1, and DNS Servers: 123.1.2.1 and 123.1.2.3. Once the XML file had been generated and reviewed, I moved it to a secure location:

Then, I created a copy of the configuration for the zone I was about to provision – testzone-01, with the first substitution for hostname being made during the redirection to the new file:
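For example, assuming the template now lives under /root/profiles (my stand-in for that secure location):

    gsed 's/NEWHOST/testzone-01/g' /root/profiles/sc_profile_template.xml > /root/profiles/sc_profile_testzone-01.xml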

I then substituted appropriate values for IP address, gateway and DNS servers. The netmask I specified during sysconfig create-profile is already correct for my network, so just substituting the correct IP address in place of 123.123.123.123 (my templated IP) will work:
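For instance – the real address here is hypothetical:

    gsed -i 's/123\.123\.123\.123/192.168.1.50/' /root/profiles/sc_profile_testzone-01.xml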

Substitute the value for the default gateway, and DNS servers, similarly using gsed:
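In the same vein, again with hypothetical real values:

    gsed -i 's/123\.123\.123\.1/192.168.1.1/' /root/profiles/sc_profile_testzone-01.xml
    gsed -i -e 's/123\.1\.2\.1/192.168.1.10/' -e 's/123\.1\.2\.3/192.168.1.11/' /root/profiles/sc_profile_testzone-01.xml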

The profile is now ready to use.

Zone Installation

Our zone is now ready to be installed using the new profile. Use the new -c <profile_name> option to zoneadm install when installing the zone. Note that the -c option expects an absolute pathname to the profile, or at least that’s what I found to prevent strange errors about being unable to find the file. If we didn’t use a system profile, we’d have to step through the configuration screens on the first boot of the zone and specify all the information by hand.

Install the zone:
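Using the absolute path to the profile from the earlier example:

    zoneadm -z testzone-01 install -c /root/profiles/sc_profile_testzone-01.xml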

You will note that the IPS has been used for the zone installation and a new dataset has been created for this zone at datapool/zonefs/testzone-01.

Let’s verify the zone state:

Note the status change from configured to installed.

Our new VNIC has not yet been created.

Booting and Verifying the Zone

Boot the zone for the first time:
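Which is simply:

    zoneadm -z testzone-01 boot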

And connect to the zone’s console, checking for errors during initial boot, manifest import, and system identification (which should be automated thanks to our system profile):
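One way to do this (zlogin -e changes the escape character, so the disconnect sequence becomes #.):

    zlogin -C -e '#' testzone-01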

Note that I specify a different escape sequence here (#.) from the default, to save confusion and possible disconnection via SSH escape sequences.

During zone boot, you may see the following message displayed on the zone console if you’re deploying your zones on a virtualised global zone (for example: under VMware or VirtualBox):

Disconnect from the zone’s console if no other errors are generated. The “Unable to verify add of static route” message is because you need to enable promiscuous mode on the global zone’s interface (whichever one the VNIC is being created over) for networking within the zone to work when the global zone itself is virtualised.

This can be done from the global zone. To determine the physical network interface that needs to be placed into promiscuous mode, run dladm show-link:

You can also see that a VNIC has now been created for our zone (testzone-01/net0) over net0. Run a snoop on the physical interface in the global zone (in our case, net0 is virtualised at the VMware layer) identified via dladm show-link:
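For example, in another terminal – snoop places net0 into promiscuous mode for as long as it runs:

    snoop -d net0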

Now, log into the zone and verify network connectivity:
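For instance – the addresses are whatever you substituted into the profile, so mine below are hypothetical:

    zlogin testzone-01
    ping 192.168.1.1       # the default gateway
    ping www.oracle.com    # anything resolvable via the configured DNS servers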

Looks good. Our zone was provisioned with the minimum of fuss, and we now have the foundations of a scriptable solution to provision many zones on-the-fly.

Logging into the zone, review the default ZFS configuration and ownership of our network interface:
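For example, from within the zone:

    zfs list
    dladm show-link
    ipadm show-addr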

Of course, other datasets can be created in the global zone and added to a zone using zonecfg (as indeed can new VNICs with dladm create-vnic) but as you can see even a default zone configuration makes good use of the underlying core technologies present in Solaris 11.

Back in the global zone, review that the zone is reported as running:
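For example:

    zoneadm list -v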

And review the various ZFS filesystems that were created in the global zone during the creation of testzone-01 (note that the user we specified to create during sysconfig also has their own ZFS dataset) :
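For instance:

    zfs list -r datapool/zonefs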

It’s worth noting that our zone only consumes 441MB from the above – highlighting how sparse the zones actually are – a complete operating environment in less than 0.5GB.

Conclusion

This article has provided a brief introduction to zones on Solaris 11, how to provision them and some insights on how to automate their provisioning. Zones have been even more tightly integrated with the technologies at the core of Solaris, and enable simple segregation of services, or the ability to delegate administration of an entire operating environment to a different sysadmin. Whilst not covered in this article, zones can be used to virtualise Solaris 10 instances into branded zones and thus consolidate existing infrastructure – you can run your Solaris 10 applications unmodified on Solaris 11 in a Solaris 10 branded zone.