ZFS: promote – for getting rid of snapshots, but not the data
When I built the Kraken ZFS file server, I used
snapshots to copy the data over. Those snapshots
are still hanging around. I’d like to get rid of them. I can do that with
zfs promote. This article documents how I did that.
The background
I have a few snapshots here:
$ zfs list -t snapshot
NAME                         USED  AVAIL  REFER  MOUNTPOINT
storage/Retored@2010.07.27   706G      -  3.11T  -
storage/Retored@2010.07.28   264K      -  2.42T  -
Those snapshots were used with zfs send and zfs receive
to copy the files from the original ZFS system to the new one. Now
that the array is solid and stable, the snapshots are still around but no
longer needed. My goal is to get rid of them.
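For context, the copy would have been something along these lines (the sending pool's name, oldpool, is hypothetical; only the receiving side matches what you see above):

$ zfs send oldpool/data@2010.07.27 | ssh kraken zfs receive storage/Retored
$ zfs send -i @2010.07.27 oldpool/data@2010.07.28 | ssh kraken zfs receive storage/Retored

The -i form sends only the changes between the two snapshots.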
On a side note, I am annoyed by this inconsistency in free space:
$ zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
storage  12.7T  7.28T  5.41T  57%  ONLINE  -
Yet according to df, I have only 3.7T available:
$ df -h | grep stora
storage            3.7T  1.7G  3.7T   0%  /storage
storage/Retored    3.7T   39K  3.7T   0%  /storage/Retored
storage/bacula     8.0T  4.3T  3.7T  54%  /storage/bacula
storage/pgsql      3.7T  5.5G  3.7T   0%  /storage/pgsql
Which is correct? (As it turns out, both are; see the explanation at the end of this article.)
But back to the promotion. From man zfs:
zfs promote clone-filesystem

    Promotes a clone file system to no longer be dependent on its
    "origin" snapshot. This makes it possible to destroy the file
    system that the clone was created from. The clone parent-child
    dependency relationship is reversed, so that the origin file
    system becomes a clone of the specified file system.

    The snapshot that was cloned, and any snapshots previous to this
    snapshot, are now owned by the promoted clone. The space they use
    moves from the origin file system to the promoted clone, so
    enough space must be available to accommodate these snapshots. No
    new space is consumed by this operation, but the space accounting
    is adjusted. The promoted clone must not have any conflicting
    snapshot names of its own. The rename subcommand can be used to
    rename any conflicting snapshots.
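One practical note on that last sentence: had storage/bacula already carried snapshots with the same names, the promote would have failed until they were renamed, for example like this (the -old suffix is just an illustration):

$ zfs rename storage/bacula@2010.07.28 storage/bacula@2010.07.28-old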
The dirty work
If I try to remove this snapshot, I get told that it forms the basis for another filesystem:
$ zfs destroy storage/Retored@2010.07.28
cannot destroy 'storage/Retored@2010.07.28': snapshot has dependent clones
use '-R' to destroy the following datasets:
storage/bacula
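You can confirm the relationship that error message hints at via the origin property (I didn't capture the output at the time, but the command is standard):

$ zfs get origin storage/bacula

which should report storage/Retored@2010.07.28.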
Thus, what I want to try is promoting storage/bacula.
$ sudo zfs promote storage/bacula
Password:
$
That took about 10 seconds. Now look at my snapshots:
$ zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
storage/bacula@2010.07.27   706G      -  3.11T  -
storage/bacula@2010.07.28   218K      -  2.42T  -
As you can see, the snapshots are now storage/bacula@ whereas they were storage/Retored@.
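Per the man page, the parent-child relationship is now reversed as well; checking the origin property on the old dataset should show it hanging off the promoted clone:

$ zfs get origin storage/Retored

Expect the value storage/bacula@2010.07.28.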
Now let’s delete a snapshot:
$ sudo zfs destroy storage/bacula@2010.07.27
Password:
That took about a minute. Now let’s check our space:
$ zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
storage  12.7T  6.31T  6.38T  49%  ONLINE  -
OK, now delete the next snapshot:
$ sudo zfs destroy storage/bacula@2010.07.28
cannot destroy 'storage/bacula@2010.07.28': snapshot has dependent clones
use '-R' to destroy the following datasets:
storage/Retored
That dataset does not contain any data:
$ ls -l /storage/Retored
total 0
So I think I’m safe in just destroying that dataset and then removing that snapshot. But I haven’t tried that yet.
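For the record, the plan would be something like this (untested as of this writing, so treat it as a sketch):

$ sudo zfs destroy storage/Retored
$ sudo zfs destroy storage/bacula@2010.07.28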
About the free space
zpool list shows you the size of all your disks, let's say 5 * 250 GB = ~1.25 TB, including parity space.
zfs list shows you the space available to filesystems/volumes after parity is accounted for. With 2 parity disks, that leaves 3 * 250 GB = ~750 GB. Add in ZFS overhead for redundant metadata and other bookkeeping, and you get what you see.
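Putting that another way: zpool list reports raw device space, parity included, while zfs list reports what filesystems can actually use. Comparing the two on the same pool makes the gap obvious:

$ zpool list storage
$ zfs list storage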
regards,
Johan Hendriks