==> sanoid-2.2.0/README.md <==
Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.

(Real-time demo: rolling back a full-scale cryptomalware infection in seconds!)

More prosaically, you can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file at /etc/sanoid/sanoid.conf. (Sanoid also requires a "defaults" file located at /etc/sanoid/sanoid.defaults.conf, which is not user-editable.) A typical Sanoid system would have a single cron job (see INSTALL.md for more details):
```
* * * * * TZ=UTC /usr/local/bin/sanoid --cron
```
**Note**: Using UTC as the timezone is recommended to prevent problems with daylight saving time.
And its /etc/sanoid/sanoid.conf might look something like this:
```
[data/home]
	use_template = production

[data/images]
	use_template = production
	recursive = yes
	process_children_only = yes

[data/images/win7]
	hourly = 4

#############################
# templates below this line #
#############################

[template_production]
	frequently = 0
	hourly = 36
	daily = 30
	monthly = 3
	yearly = 0
	autosnap = yes
	autoprune = yes
```
Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 dailies, 3 monthlies, and no yearlies for all datasets under data/images (but not data/images itself, since process_children_only is set). The exception is data/images/win7, which follows the same template (since it's a child of data/images) but keeps only 4 hourlies.

For full details on sanoid.conf settings, see the [Wiki page](https://github.com/jimsalterjrs/sanoid/wiki/Sanoid#options).

**Note**: Be aware that if you don't specify some interval options, the defaults (from /etc/sanoid/sanoid.defaults.conf) will be used.
##### Sanoid Command Line Options
+ --cron
This will process your sanoid.conf file, create snapshots, then purge expired ones.
+ --configdir
Specify a location for the config file named sanoid.conf. Defaults to /etc/sanoid
+ --cache-dir
Specify a directory to store the zfs snapshot cache. Defaults to /var/cache/sanoid
+ --run-dir
Specify a directory for temporary files such as lock files. Defaults to /var/run/sanoid
+ --take-snapshots
This will process your sanoid.conf file, create snapshots, but it will NOT purge expired ones. (Note that snapshots taken are atomic in an individual dataset context, not a global context - snapshots of pool/dataset1 and pool/dataset2 will each be internally consistent and atomic, but one may be a few filesystem transactions "newer" than the other.)
+ --prune-snapshots
This will process your sanoid.conf file; it will NOT create snapshots, but it will purge expired ones. (A sketch combining the snapshot-taking and pruning flags follows this option list.)
+ --force-prune
Purges expired snapshots even if a send/recv is in progress
+ --monitor-snapshots
This option is designed to be run by a Nagios monitoring system. It reports on the health of your snapshots.
+ --monitor-health
This option is designed to be run by a Nagios monitoring system. It reports on the health of the zpool your filesystems are on. It only monitors filesystems that are configured in the sanoid.conf file.
+ --monitor-capacity
This option is designed to be run by a Nagios monitoring system. It reports on the capacity of the zpool your filesystems are on. It only monitors pools that are configured in the sanoid.conf file.
+ --force-update
This clears out sanoid's zfs snapshot listing cache. This is normally not needed.
+ --version
This prints the version number, and exits.
+ --quiet
Suppress non-error output.
+ --verbose
This prints additional information during the sanoid run.
+ --debug
This prints out quite a lot of additional information during a sanoid run, and is normally not needed.
+ --readonly
Skip creation/deletion of snapshots (Simulate).
+ --help
Show help message.
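If you prefer separate jobs for taking and pruning snapshots (as the packaged systemd units do), the single --cron job can be split across --take-snapshots and --prune-snapshots. A sketch; the exact schedule here is purely illustrative:

```
* * * * * TZ=UTC /usr/local/bin/sanoid --take-snapshots --quiet
5 * * * * TZ=UTC /usr/local/bin/sanoid --prune-snapshots --quiet
```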
### Sanoid script hooks
There are three script types which can optionally be executed at various stages in the lifecycle of a snapshot:
#### `pre_snapshot_script`
Will be executed before the snapshot(s) of a single dataset are taken. The following environment variables are passed:
| Env vars | Description |
| ----------------- | ----------- |
| `SANOID_SCRIPT` | The type of script being executed, one of `pre`, `post`, or `prune`. Allows for one script to be used for multiple tasks |
| `SANOID_TARGET` | **DEPRECATED** The dataset about to be snapshot (only the first dataset will be provided) |
| `SANOID_TARGETS` | Comma separated list of all datasets to be snapshotted (currently only a single dataset, multiple datasets will be possible later with atomic groups) |
| `SANOID_SNAPNAME` | **DEPRECATED** The name of the snapshot that will be taken (only the first name will be provided, does not include the dataset name) |
| `SANOID_SNAPNAMES` | Comma separated list of all snapshot names that will be taken (does not include the dataset name) |
| `SANOID_TYPES` | Comma separated list of all snapshot types to be taken (yearly, monthly, weekly, daily, hourly, frequently) |
If the script returns a non-zero exit code, the snapshot(s) will not be taken unless `no_inconsistent_snapshot` is false.
#### `post_snapshot_script`
Will be executed when:
- The pre-snapshot script succeeded or
- The pre-snapshot script failed and `force_post_snapshot_script` is true.
| Env vars | Description |
| -------------------- | ----------- |
| `SANOID_SCRIPT` | as above |
| `SANOID_TARGET` | **DEPRECATED** as above |
| `SANOID_TARGETS` | as above |
| `SANOID_SNAPNAME` | **DEPRECATED** as above |
| `SANOID_SNAPNAMES` | as above |
| `SANOID_TYPES` | as above |
| `SANOID_PRE_FAILURE` | This will indicate if the pre-snapshot script failed |
#### `pruning_script`
Will be executed after a snapshot is successfully deleted. The following environment variables will be passed:
| Env vars | Description |
| ----------------- | ----------- |
| `SANOID_SCRIPT` | as above |
| `SANOID_TARGET` | as above |
| `SANOID_SNAPNAME` | as above |
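Because `SANOID_SCRIPT` identifies the stage, one script can be wired into all three hook settings at once. A minimal sketch (the quiesced service and the log message are purely illustrative):

```
#!/bin/sh
# one hook for all stages: quiesce an application around snapshots
case "$SANOID_SCRIPT" in
    pre)   systemctl stop myapp ;;   # quiesce before snapshots of $SANOID_TARGETS
    post)  systemctl start myapp ;;  # resume once the snapshots are taken
    prune) logger "sanoid pruned $SANOID_SNAPNAME on $SANOID_TARGET" ;;
esac
```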
#### example
**sanoid.conf**:
```
...
[sanoid-test-0]
use_template = production
recursive = yes
pre_snapshot_script = /tmp/debug.sh
post_snapshot_script = /tmp/debug.sh
pruning_script = /tmp/debug.sh
...
```
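A minimal `/tmp/debug.sh` for this example might simply dump its environment; any executable that logs the `SANOID_*` variables will do (this one is just a sketch):

```
#!/bin/sh
# print every SANOID_* variable passed in by sanoid
env | grep '^SANOID_'
exit 0
```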
**verbose sanoid output**:
```
...
executing pre_snapshot_script '/tmp/debug.sh' on dataset 'sanoid-test-0'
taking snapshot sanoid-test-0@autosnap_2020-02-12_14:49:33_yearly
taking snapshot sanoid-test-0@autosnap_2020-02-12_14:49:33_monthly
taking snapshot sanoid-test-0@autosnap_2020-02-12_14:49:33_daily
taking snapshot sanoid-test-0@autosnap_2020-02-12_14:49:33_hourly
executing post_snapshot_script '/tmp/debug.sh' on dataset 'sanoid-test-0'
...
```
**pre script env variables**:
```
SANOID_SCRIPT=pre
SANOID_TARGET=sanoid-test-0/b/bb
SANOID_TARGETS=sanoid-test-0/b/bb
SANOID_SNAPNAME=autosnap_2020-02-12_14:49:32_yearly
SANOID_SNAPNAMES=autosnap_2020-02-12_14:49:32_yearly,autosnap_2020-02-12_14:49:32_monthly,autosnap_2020-02-12_14:49:32_daily,autosnap_2020-02-12_14:49:32_hourly
SANOID_TYPES=yearly,monthly,daily,hourly
```
**post script env variables**:
```
SANOID_SCRIPT=post
SANOID_TARGET=sanoid-test-0/b/bb
SANOID_TARGETS=sanoid-test-0/b/bb
SANOID_SNAPNAME=autosnap_2020-02-12_14:49:32_yearly
SANOID_SNAPNAMES=autosnap_2020-02-12_14:49:32_yearly,autosnap_2020-02-12_14:49:32_monthly,autosnap_2020-02-12_14:49:32_daily,autosnap_2020-02-12_14:49:32_hourly
SANOID_TYPES=yearly,monthly,daily,hourly
SANOID_PRE_FAILURE=0
```
----------
# Syncoid
Sanoid also includes a replication tool, syncoid, which facilitates the asynchronous incremental replication of ZFS filesystems. A typical syncoid command might look like this:
```
syncoid data/images/vm backup/images/vm
```
Which would replicate the specified ZFS filesystem (aka dataset) from the data pool to the backup pool on the local system, or
```
syncoid data/images/vm root@remotehost:backup/images/vm
```
Which would push-replicate the specified ZFS filesystem from the local host to remotehost over an SSH tunnel, or
```
syncoid root@remotehost:data/images/vm backup/images/vm
```
Which would pull-replicate the filesystem from the remote host to the local system over an SSH tunnel.
Syncoid supports recursive replication (replication of a dataset and all its child datasets) and uses mbuffer buffering, lzop compression, and pv progress bars if the utilities are available on the systems used.
If ZFS supports resumable send/receive streams on both the source and target, they will be enabled by default.
As of 1.4.18, syncoid also automatically supports and enables resume of interrupted replication when both source and target support this feature.
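For example, a recursive replication of an entire local dataset tree (dataset names as in the examples above):

```
syncoid --recursive data/images backup/images
```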
##### Syncoid Dataset Properties
+ syncoid:sync
Available values:
+ `true` (default if unset)
This dataset will be synchronised to all hosts.
+ `false`
This dataset will not be synchronised to any hosts - it will be skipped. This can be useful for preventing certain datasets from being transferred when recursively handling a tree.
+ `host1,host2,...`
A comma separated list of hosts. This dataset will only be synchronised by hosts listed in the property.
_Note_: this check is performed by the host running `syncoid`, so the local hostname must be present in the list for a push operation, and the remote hostname must be present for a pull.
_Note_: this will also prevent syncoid from handling the dataset if it is given explicitly on the command line.
_Note_: syncing a child of a no-sync dataset will currently result in a critical error.
_Note_: empty properties are handled as if they were unset.
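For example, to exclude one dataset from recursive runs entirely, or to restrict another to specific hosts (dataset and host names are illustrative):

```
zfs set syncoid:sync=false data/images/scratch
zfs set syncoid:sync=host1,host2 data/images/vm
```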
##### Syncoid Command Line Options
+ [source]
This is the source dataset. It can be either local or remote.
+ [destination]
This is the destination dataset. It can be either local or remote.
+ --identifier=
Adds the given identifier to the snapshot and hold name, after the "syncoid_" prefix and before the hostname. This enables reliable replication to multiple targets from the same host. The following characters are allowed: a-z, A-Z, 0-9, `_`, `-`, `:` and `.`.
+ -r --recursive
This will also transfer child datasets.
+ --skip-parent
This will skip the syncing of the parent dataset. Does nothing without the '--recursive' option.
+ --compress
Compression method to use for network transfer. Currently accepted options: gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used.
+ --source-bwlimit
This is the bandwidth limit in bytes (kbytes, mbytes, etc) per second imposed upon the source. This is mainly used if the target does not have mbuffer installed, but bandwidth limits are desired.
+ --target-bwlimit
This is the bandwidth limit in bytes (kbytes, mbytes, etc) per second imposed upon the target. This is mainly used if the source does not have mbuffer installed, but bandwidth limits are desired.
+ --no-command-checks
Does not check the existence of commands before attempting the transfer, providing administrators a way to run the tool with minimal overhead and maximum speed, at the risk of failed replication or other possible edge cases. It assumes all programs are available, and should not be used in most situations. This is not an officially supported run mode.
+ --no-stream
This argument tells syncoid to use -i incrementals, not -I. This updates the target with the newest snapshot from the source, without replicating the intermediate snapshots in between. (If used for an initial synchronization, will do a full replication from newest snapshot and exit immediately, rather than starting with the oldest and then doing an immediate -i to the newest.)
+ --no-sync-snap
This argument tells syncoid to restrict itself to existing snapshots, instead of creating a semi-ephemeral syncoid snapshot at execution time. Especially useful in multi-target (A->B, A->C) replication schemes, where you might otherwise accumulate a large number of foreign syncoid snapshots.
+ --keep-sync-snap
This argument tells syncoid to skip pruning old snapshots created and used by syncoid for replication if '--no-sync-snap' isn't specified.
+ --create-bookmark
This argument tells syncoid to create a zfs bookmark for the newest snapshot after it has been replicated successfully. The bookmark name will be equal to the snapshot name. It only works in combination with the --no-sync-snap option. This can be very useful for irregular replication, where the last matching snapshot on the source has already been deleted but the bookmark remains, so replication is still possible.
+ --use-hold
This argument tells syncoid to add a hold to the newest snapshot on the source and target after replication succeeds, and to remove the hold after the next successful replication. Setting a hold prevents the snapshots from being destroyed. The hold name includes the identifier if set, which allows for separate holds in case of replication to multiple targets.
+ --preserve-recordsize
This argument tells syncoid to set the recordsize on the target, matching the one set on the replication source, before writing any data to it. This only applies to initial sends.
+ --preserve-properties
This argument tells syncoid to get all locally set dataset properties from the source and apply all supported ones on the target before writing any data. It's similar to the '-p' flag for zfs send, but it also works for encrypted datasets in non-raw sends. This only applies to initial sends.
+ --delete-target-snapshots
With this argument, snapshots which are missing on the source will be destroyed on the target. Use this if you only want to handle snapshots on the source.
Note that snapshot deletion is only done after a successful synchronization; if no new snapshots are found, no synchronization takes place and thus no deletion either.
+ --no-clone-rollback
Do not rollback clones on target
+ --no-rollback
Do not rollback anything (clones or snapshots) on target host
+ --exclude=REGEX
The given regular expression will be matched against all datasets which would be synced by this run, and any matching datasets are excluded. This argument can be specified multiple times. (See the combined example after this option list.)
+ --no-resume
This argument tells syncoid to not use resumable zfs send/receive streams.
+ --force-delete
Remove target datasets recursively (WARNING: this will also affect child datasets with matching snapshots/bookmarks), if there are no matching snapshots/bookmarks. Also removes conflicting snapshots if the replication would fail because of a snapshot which has the same name between source and target but different contents.
+ --no-clone-handling
This argument tells syncoid to not recreate clones on the target on initial sync, and do a normal replication instead.
+ --dumpsnaps
This prints a list of snapshots during the run.
+ --no-privilege-elevation
Bypass the root check and assume syncoid has the necessary permissions (for use with ZFS permission delegation).
+ --sshport
Allow sync to/from boxes running SSH on non-standard ports.
+ --sshcipher
Instruct ssh to use a particular cipher set.
+ --sshoption
Passes option to ssh. This argument can be specified multiple times.
+ --sshkey
Use specified identity file as per ssh -i.
+ --insecure-direct-connection=IP:PORT[,IP:PORT,[TIMEOUT,[mbuffer]]]
WARNING: This is an insecure option as the data is not encrypted while being sent over the network. Only use if you trust the complete network path.
Use a direct TCP connection (with socat and busybox nc/mbuffer) for the actual zfs send/recv stream; all control commands are still executed via the ssh connection. The first address pair is used for connecting to the target host from the source host, and the second pair is for listening on the target host. If the latter isn't provided, the former is used for both. This can be used for saturating high-throughput connections (>= 10GbE networks), which isn't easy with the overhead of ssh. It can also be useful for encrypted datasets, to lower the CPU usage needed for replication, but be aware that metadata is NOT ENCRYPTED in this case. The default timeout is 60 seconds and can be overridden by providing it as the third argument. By default, busybox nc is used for the listening TCP socket; if mbuffer is preferred, specify its name as the fourth argument, but be aware that mbuffer listens on all interfaces and uses an optionally provided IP address only for access restriction. (This option can't be used for relaying between two remote hosts.)
+ --quiet
Suppress non-error output.
+ --debug
This prints out quite a lot of additional information during a syncoid run, and is normally not needed.
+ --help
Show help message.
+ --version
Print the version and exit.
+ --monitor-version
This doesn't do anything right now.
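Putting several of the options above together, a pull-replication job might look like this (the hostname, dataset names, and exclude pattern are all illustrative):

```
syncoid --recursive --skip-parent --no-sync-snap \
    --exclude='scratch' root@remotehost:data backup/data
```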
Note that the sync snapshots syncoid creates are not atomic in a global context: sync snapshots of pool/dataset1 and pool/dataset2 will each be internally consistent, but one may be a few filesystem transactions "newer" than the other. (This does not affect the consistency of snapshots already taken in other ways, which syncoid replicates in the overall stream unless --no-stream is specified. So if you want to manually zfs snapshot -R pool@1 before replicating with syncoid, the global atomicity of pool/dataset1@1 and pool/dataset2@1 will still be intact.)
==> sanoid-2.2.0/VERSION <==
2.2.0
==> sanoid-2.2.0/findoid <==
#!/usr/bin/perl
# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.2.0';
use strict;
use warnings;
use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage;
my $zfs = 'zfs';
my %args = ('path' => '');
GetOptions(\%args, "path=s") or pod2usage(2);
if ($args{'path'} eq '') {
if (scalar(@ARGV) < 1) {
warn "file path missing!\n";
pod2usage(2);
exit 127;
} else {
$args{'path'} = $ARGV[0];
}
}
my $dataset = getdataset($args{'path'});
my %versions = getversions($args{'path'}, $dataset);
foreach my $version (sort { $versions{$a}{'mtime'} <=> $versions{$b}{'mtime'} } keys %versions) {
my $disptime = localtime($versions{$version}{'mtime'});
my $dispsize = humansize($versions{$version}{'size'});
print "$disptime\t$dispsize\t$version\n";
}
exit 0;
###################################################################
###################################################################
###################################################################
sub humansize {
my ($rawsize) = @_;
my $humansize;
if ($rawsize > 1024*1024*1024) {
$humansize = sprintf("%.1f",$rawsize/1024/1024/1024) . ' GB';
} elsif ($rawsize > 1024*1024) {
$humansize = sprintf("%.1f",$rawsize/1024/1024) . ' MB';
} elsif ($rawsize > 255) {
$humansize = sprintf("%.1f",$rawsize/1024) . ' KB';
} else {
$humansize = $rawsize . ' Bytes';
}
return $humansize;
}
sub getversions {
my ($path, $dataset) = @_;
my @snaps = findsnaps($dataset, $args{'path'});
my $snappath = '.zfs/snapshot';
my $relpath = $path;
$relpath =~ s/^$dataset\///;
my %versions;
foreach my $snap (@snaps) {
my $filename = "$dataset/$snappath/$snap/$relpath";
my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,$atime,$mtime,$ctime,$blksize,$blocks) = stat($filename);
if (!defined $size) {
next;
}
# only push to the $versions hash if this size and mtime aren't already present (simple dedupe)
my $duplicate = 0;
foreach my $version (keys %versions) {
if ($versions{$version}{'size'} eq $size && $versions{$version}{'mtime'} eq $mtime) {
$duplicate = 1;
}
}
if (! $duplicate) {
$versions{$filename}{'size'} = $size;
$versions{$filename}{'mtime'} = $mtime;
}
}
my $filename = "$dataset/$relpath";
my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,$atime,$mtime,$ctime,$blksize,$blocks) = stat($filename);
if (defined $size) {
$versions{$filename}{'size'} = $size;
$versions{$filename}{'mtime'} = $mtime;
}
return %versions;
}
sub findsnaps {
my ($dataset, $path) = @_;
my $snappath = '.zfs/snapshot';
my $relpath = $path;
$relpath =~ s/^$dataset//;
my @snaps;
opendir (my $dh, "$dataset/$snappath");
while (my $dir=(readdir $dh)) {
if ($dir ne '.' && $dir ne '..') { push @snaps, $dir; }
}
closedir $dh;
return @snaps;
}
sub getdataset {
my ($path) = @_;
open FH, "$zfs list -H -t filesystem -o mountpoint,mounted |";
my @datasets = <FH>;
close FH;
my @matchingdatasets;
foreach my $dataset (@datasets) {
chomp $dataset;
my ($mountpoint, $mounted) = ($dataset =~ m/([^\t]*)\t*(.*)/);
if ($mounted ne "yes") {
next;
}
if ( $path =~ /^$mountpoint/ ) { push @matchingdatasets, $mountpoint; }
}
my $bestmatch = '';
foreach my $dataset (@matchingdatasets) {
if ( length $dataset > length $bestmatch ) { $bestmatch = $dataset; }
}
return $bestmatch;
}
__END__

=head1 NAME

findoid - ZFS file version listing tool

=head1 SYNOPSIS

 findoid [options] FILE

 FILE          local path to file for version listing

Options:

 --path=FILE   alternative to specify file path to list versions for
 --help        Prints this helptext
 --version     Prints the version number
==> sanoid-2.2.0/packages/debian/.gitignore <==
*.debhelper
*.debhelper.log
*.substvars
debhelper-build-stamp
files
sanoid
tmp
==> sanoid-2.2.0/packages/debian/TODO <==
- This package needs to be a 3.0 (quilt) format, not 3.0 (native).
- Fix the changelog
- Move the packaging out to a separate repository, or at a minimum,
a separate branch.
- Provide an extended description in debian/control
- Figure out a plan for sanoid.defaults.conf. It is not supposed to be
edited, so it shouldn't be installed in /etc. At a minimum, install
it under /usr and make a symlink, but preferably patch sanoid to look
there directly.
- Man pages are necessary for all the utilities installed.
- With these, there is probably no need to ship README.md.
- Break out syncoid into a separate package?
==> sanoid-2.2.0/packages/debian/changelog <==
sanoid (2.2.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@azmodude, @deviantintegral, @jimsalterjrs, @alexhaydock, @cbreak-black, @kd8bny, @JavaScriptDude, @veeableful, @rsheasby, @Topslakr, @mavhc, @adam-stamand, @joelishness, @jsoref, @dodexahedron, @phreaker0)
[syncoid] implemented flag for preserving properties without the zfs -p flag (@phreaker0)
[syncoid] implemented target snapshot deletion (@mat813)
[syncoid] support bookmarks which are taken in the same second (@delxg, @phreaker0)
[syncoid] exit with an error if the specified src dataset doesn't exist (@phreaker0)
[syncoid] rollback is now done implicitly instead of explicit (@jimsalterjrs, @phreaker0)
[syncoid] append a rand int to the socket name to prevent collisions with parallel invocations (@Gryd3)
[syncoid] implemented support for ssh_config(5) files (@endreszabo)
[syncoid] snapshot hold/unhold support (@rbike)
[sanoid] handle duplicate key definitions gracefully (@phreaker0)
[syncoid] implemented removal of conflicting snapshots with force-delete option (@phreaker0)
[sanoid] implemented pre pruning script hook (@phreaker0)
[syncoid] implemented direct connection support (bypass ssh) for the actual data transfer (@phreaker0)
-- Jim Salter Tue, 18 Jul 2023 10:04:00 +0200
sanoid (2.1.0) unstable; urgency=medium
[overall] documentation updates, small fixes (@HavardLine, @croadfeldt, @jimsalterjrs, @jim-perkins, @kr4z33, @phreaker0)
[syncoid] do not require user to be specified for syncoid (@aerusso)
[syncoid] implemented option for keeping sync snaps (@phreaker0)
[syncoid] use sudo if necessary for checking pool capabilities regarding resumable send (@phreaker0)
[syncoid] catch another case were the resume state isn't available anymore (@phreaker0)
[syncoid] check for an invalid argument combination (@phreaker0)
[syncoid] fix iszfsbusy check for similar dataset names (@phreaker0)
[syncoid] append timezone offset to the syncoid snapshot name to fix DST collisions (@phreaker0)
[packaging] post install script for debian package to remove old unused snapshot cache file (@phreaker0)
[syncoid] implemented fallback for listing snapshots on solaris (@phreaker0)
[sanoid] remove invalid locks (@phreaker0)
[packaging] removed debian dependency for systemd (@phreaker0)
[sanoid] move sanoid cache and lock files to subdirectories (@lopsided98)
[sanoid] remove 's in monitoring messages (@dlangille)
[findoid] reworked argument parsing and error out if file path is not provided (@phreaker0)
[findoid] also show current file version if available (@phreaker0)
[findoid] handle FileNotFound errors properly (@phreaker0)
[findoid] don't use hardcoded paths (@phreaker0)
[findoid] improve dataset detection by only including mounted datasets (@phreaker0)
[sanoid] pass more information to pre/post/prune scripts and execute them only once per dataset (@tiedotguy, @phreaker0)
[syncoid] implemented option for preserving recordsizes on initial replications (@phreaker0)
[syncoid] fixed send size estimation for latest FreeBSD versions (@phreaker0)
[syncoid] add ability to configure pv (@gdevenyi)
[sanoid] don't use hardcoded paths (@phreaker0)
[syncoid] gracefully handle error when source dataset disappeared (@mschout)
-- Jim Salter Tue, 24 Nov 2020 11:47:00 +0100
sanoid (2.0.3) unstable; urgency=medium
[sanoid] reverted DST handling and improved it as quickfix (@phreaker0)
-- Jim Salter Wed, 02 Oct 2019 17:00:00 +0100
sanoid (2.0.2) unstable; urgency=medium
[overall] documentation updates, new dependencies, small fixes, more warnings (@benyanke, @matveevandrey, @RulerOf, @klemens-u, @johnramsden, @danielewood, @g-a-c, @hartzell, @fryfrog, @phreaker0)
[syncoid] changed and simplified DST handling (@shodanshok)
[syncoid] reset partially resume state automatically (@phreaker0)
[syncoid] handle some zfs errors automatically by parsing the stderr outputs (@phreaker0)
[syncoid] fixed ordering of snapshots with the same creation timestamp (@phreaker0)
[syncoid] don't use hardcoded paths (@phreaker0)
[syncoid] fix for special setup with listsnapshots=on (@phreaker0)
[syncoid] check ssh connection on startup (@phreaker0)
[syncoid] fix edge case with initial send and no-stream option (@phreaker0)
[syncoid] fallback to normal replication if clone recreation fails (@phreaker0)
[packaging] ebuild for gentoo (@thehaven)
[syncoid] support for zfs bookmark creation (@phreaker0)
[syncoid] fixed bookmark edge cases (@phreaker0)
[syncoid] handle invalid dataset paths nicely (@phreaker0)
[syncoid] fixed resume support check to be zpool based (@phreaker0)
[sanoid] added hotspare template (@jimsalterjrs)
[syncoid] support for advanced zfs send/recv options (@clinta, @phreaker0)
[syncoid] option to change mbuffer size (@TerraTech)
[tests] fixes for FreeBSD (@phreaker0)
[sanoid] support for zfs recursion (@jMichaelA, @phreaker0)
[syncoid] fixed bookmark handling for volumes (@ppcontrib)
[sanoid] allow time units for monitoring warn/crit values (@phreaker0)
-- Jim Salter Fri, 20 Sep 2019 23:01:00 +0100
sanoid (2.0.1) unstable; urgency=medium
[sanoid] fixed broken monthly warn/critical monitoring values in default template (@jimsalterjrs)
[sanoid] flag to force pruning while filesystem is in an active zfs send/recv (@shodanshok)
[syncoid] flags to disable rollbacks (@shodanshok)
-- Jim Salter Fri, 14 Dec 2018 16:48:00 +0100
sanoid (2.0.0) unstable; urgency=medium
[overall] documentation updates, small fixes, more warnings (@sparky3387, @ljwobker, @phreaker0)
[syncoid] added force delete flag (@phreaker0)
[sanoid] removed sleeping between snapshot taking (@phreaker0)
[syncoid] added '--no-privilege-elevation' option to bypass root check (@lopsided98)
[sanoid] implemented weekly period (@phreaker0)
[syncoid] implemented support for zfs bookmarks as fallback (@phreaker0)
[sanoid] support for pre, post and prune snapshot scripts (@jouir, @darkbasic, @phreaker0)
[sanoid] ignore snapshots types that are set to 0 (@muff1nman)
[packaging] split snapshot taking/pruning into separate systemd units for debian package (@phreaker0)
[syncoid] replicate clones (@phreaker0)
[syncoid] added compression algorithms: lz4, xz (@spheenik, @phreaker0)
[sanoid] added option to defer pruning based on the available pool capacity (@phreaker0)
[sanoid] implemented frequent snapshots with configurable period (@phreaker0)
[syncoid] prevent a perl warning on systems which don't output estimated send size information (@phreaker0)
[packaging] dependency fixes (@rodgerd, mabushey)
[syncoid] implemented support for excluding children of a specific dataset (@phreaker0)
[sanoid] monitor-health command additionally checks vdev members for io and checksum errors (@phreaker0)
[syncoid] added ability to skip datasets by a custom dataset property 'syncoid:no-sync' (@attie)
[syncoid] don't die on some critical replication errors, but continue with the remaining datasets (@phreaker0)
[syncoid] return a non zero exit code if there was a problem replicating datasets (@phreaker0)
[syncoid] make local source bwlimit work (@phreaker0)
[syncoid] fix 'resume support' detection on FreeBSD (@pit3k)
[sanoid] updated INSTALL with missing dependency
[sanoid] fixed monitor-health command for pools containing cache and log devices (@phreaker0)
[sanoid] quiet flag suppresses all info output (@martinvw)
[sanoid] check for empty lockfile which lead to sanoid failing on start (@jasonblewis)
[sanoid] added dst handling to prevent multiple invalid snapshots on time shift (@phreaker0)
[sanoid] cache improvements, makes sanoid much faster with a huge amount of datasets/snapshots (@phreaker0)
[sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0)
[syncoid] Added support for ZStandard compression.(@danielewood)
[syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0)
[syncoid] correctly parse zfs column output, fixes resumable send with datasets containing spaces (@phreaker0)
[syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0)
[syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0)
[syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0)
[sanoid] use UTC by default in unit template and documentation (@phreaker0)
[syncoid] don't prune snapshots if instructed to not create them either (@phreaker0)
[syncoid] documented compatibility issues with (t)csh shells (@ecoutu)
-- Jim Salter Wed, 04 Dec 2018 18:10:00 -0400
sanoid (1.4.18) unstable; urgency=medium
implemented special character handling and support of ZFS resume/receive tokens by default in syncoid,
thank you @phreaker0!
-- Jim Salter Wed, 25 Apr 2018 16:24:00 -0400
sanoid (1.4.17) unstable; urgency=medium
changed die to warn when unexpectedly unable to remove a snapshot - this
allows sanoid to continue taking/removing other snapshots not affected by
whatever lock prevented the first from being taken or removed
-- Jim Salter Wed, 8 Nov 2017 15:25:00 -0400
sanoid (1.4.16) unstable; urgency=medium
* merged @hrast01's extended fix to support -o option1=val,option2=val passthrough to SSH. merged @JakobR's
* off-by-one fix to stop unnecessary extra snapshots being taken under certain conditions. merged @stardude900's
* update to INSTALL for FreeBSD users re:symlinks. Implemented @LordAro's update to change DIE to WARN when
* encountering a dataset with no snapshots and --no-sync-snap set during recursive replication. Implemented
* @LordAro's update to sanoid.conf to add an ignore template which does not snap, prune, or monitor.
-- Jim Salter Wed, 9 Aug 2017 12:28:49 -0400
==> sanoid-2.2.0/packages/debian/compat <==
10
==> sanoid-2.2.0/packages/debian/control <==
Source: sanoid
Section: utils
Priority: optional
Maintainer: Jim Salter
Build-Depends: debhelper (>= 10)
Standards-Version: 4.1.2
Homepage: https://github.com/jimsalterjrs/sanoid
Vcs-Git: https://github.com/jimsalterjrs/sanoid.git
Vcs-Browser: https://github.com/jimsalterjrs/sanoid
Package: sanoid
Architecture: all
Depends: libcapture-tiny-perl,
libconfig-inifiles-perl,
zfsutils-linux | zfs,
${misc:Depends},
${perl:Depends}
Recommends: gzip,
lzop,
mbuffer,
openssh-client | ssh-client,
pv
Description: Policy-driven snapshot management and replication tools
==> sanoid-2.2.0/packages/debian/copyright <==
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: sanoid
Source: https://github.com/jimsalterjrs/sanoid
Files: *
Copyright: 2017 Jim Salter
License: GPL-3.0+
Files: debian/*
Copyright: 2017 Jim Salter
2017 Richard Laager
License: GPL-3.0+
License: GPL-3.0+
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
.
This package is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
.
On Debian systems, the complete text of the GNU General
Public License version 3 can be found in "/usr/share/common-licenses/GPL-3".
==> sanoid-2.2.0/packages/debian/postinst <==
#!/bin/bash
# remove old cache file
[ -f /var/cache/sanoidsnapshots.txt ] && rm /var/cache/sanoidsnapshots.txt || true
==> sanoid-2.2.0/packages/debian/rules <==
#!/usr/bin/make -f
# See debhelper(7) for more info
# output every command that modifies files on the build system.
#export DH_VERBOSE = 1

%:
	dh $@

DESTDIR = $(CURDIR)/debian/sanoid

override_dh_auto_install:
	install -d $(DESTDIR)/etc/sanoid
	install -m 664 sanoid.defaults.conf $(DESTDIR)/etc/sanoid
	install -d $(DESTDIR)/lib/systemd/system
	install -m 664 debian/sanoid-prune.service debian/sanoid.timer \
		$(DESTDIR)/lib/systemd/system
	install -d $(DESTDIR)/usr/sbin
	install -m 775 \
		findoid sanoid sleepymutex syncoid \
		$(DESTDIR)/usr/sbin
	install -d $(DESTDIR)/usr/share/doc/sanoid
	install -m 664 sanoid.conf \
		$(DESTDIR)/usr/share/doc/sanoid/sanoid.conf.example

override_dh_installinit:
	dh_installinit --noscripts

override_dh_systemd_enable:
	dh_systemd_enable sanoid.timer
	dh_systemd_enable sanoid-prune.service

override_dh_systemd_start:
	dh_systemd_start sanoid.timer
==> sanoid-2.2.0/packages/debian/sanoid-prune.service <==
[Unit]
Description=Cleanup ZFS Pool
Requires=zfs.target
After=zfs.target sanoid.service
ConditionFileNotEmpty=/etc/sanoid/sanoid.conf

[Service]
Environment=TZ=UTC
Type=oneshot
ExecStart=/usr/sbin/sanoid --prune-snapshots --verbose

[Install]
WantedBy=sanoid.service
==> sanoid-2.2.0/packages/debian/sanoid.README.Debian <==
To start, copy the example config file in /usr/share/doc/sanoid to
/etc/sanoid/sanoid.conf.
==> sanoid-2.2.0/packages/debian/sanoid.docs <==
README.md
==> sanoid-2.2.0/packages/debian/sanoid.service <==
[Unit]
Description=Snapshot ZFS Pool
Requires=zfs.target
After=zfs.target
Wants=sanoid-prune.service
Before=sanoid-prune.service
ConditionFileNotEmpty=/etc/sanoid/sanoid.conf

[Service]
Environment=TZ=UTC
Type=oneshot
ExecStart=/usr/sbin/sanoid --take-snapshots --verbose
==> sanoid-2.2.0/packages/debian/sanoid.timer <==
[Unit]
Description=Run Sanoid Every 15 Minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
==> sanoid-2.2.0/packages/debian/source/format <==
3.0 (native)
==> sanoid-2.2.0/packages/gentoo/sys-fs/sanoid/Manifest <==
AUX sanoid.cron 45 BLAKE2B 3f6294bbbf485dc21a565cd2c8da05a42fb21cdaabdf872a21500f1a7338786c60d4a1fd188bbf81ce85f06a376db16998740996f47c049707a5109bdf02c052 SHA512 7676b32f21e517e8c84a097c7934b54097cf2122852098ea756093ece242125da3f6ca756a6fbb82fc348f84b94bfd61639e86e0bfa4bbe7abf94a8a4c551419
DIST sanoid-2.0.2.tar.gz 115797 BLAKE2B d00a038062df3dd8e77d3758c7b80ed6da0bac4931fb6df6adb72eeddb839c63d5129e0a281948a483d02165dad5a8505e1a55dc851360d3b366371038908142 SHA512 73e3d25dbdd58a78ffc4384584304e7230c5f31a660ce6d2a9b9d52a92a3796f1bc25ae865dbc74ce586cbd6169dbb038340f4a28e097e77ab3eb192b15773db
EBUILD sanoid-2.0.2.ebuild 796 BLAKE2B f3d633289d66c60fd26cb7731bc6b63533019f527aaec9ca8e5c0e748542d391153dbb55b17b8c981ca4fa4ae1fc8dc202b5480c13736fca250940b3b5ebb793 SHA512 d0143680c029ffe4ac37d97a979ed51527b4b8dd263d0c57e43a4650bf8a9bb8
EBUILD sanoid-9999.ebuild 776 BLAKE2B 416b8d04a9e5a84bce46d2a6f88eaefe03804944c03bc7f49b7a5b284b844212a6204402db3de3afa5d9c0545125d2631e7231c8cb2a3537bdcb10ea1be46b6a SHA512 98d8a30a13e75d7847ae9d60797d54078465bf75c6c6d9b6fd86075e342c0374
sanoid-2.2.0/packages/gentoo/sys-fs/sanoid/files/ 0000775 0000000 0000000 00000000000 14455537001 0021650 5 ustar 00root root 0000000 0000000 sanoid-2.2.0/packages/gentoo/sys-fs/sanoid/files/sanoid.cron 0000664 0000000 0000000 00000000055 14455537001 0024010 0 ustar 00root root 0000000 0000000 * * * * * root TZ=UTC /usr/bin/sanoid --cron
==> sanoid-2.2.0/packages/gentoo/sys-fs/sanoid/sanoid-2.0.2.ebuild <==
# Copyright 2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
EAPI=7
DESCRIPTION="Policy-driven snapshot management and replication tools for ZFS"
HOMEPAGE="https://github.com/jimsalterjrs/sanoid"
SRC_URI="https://github.com/jimsalterjrs/${PN}/archive/v${PV}.tar.gz -> ${P}.tar.gz"
LICENSE="GPL-3.0"
SLOT="0"
KEYWORDS="~x86 ~amd64"
IUSE=""
DEPEND="app-arch/lzop
dev-perl/Config-IniFiles
dev-perl/Capture-Tiny
sys-apps/pv
sys-block/mbuffer
virtual/perl-Data-Dumper"
RDEPEND="${DEPEND}"
BDEPEND=""
DOCS=( README.md )
src_install() {
	dobin findoid
	dobin sanoid
	dobin sleepymutex
	dobin syncoid

	keepdir /etc/${PN}
	insinto /etc/${PN}
	doins sanoid.conf
	doins sanoid.defaults.conf

	insinto /etc/cron.d
	newins "${FILESDIR}/${PN}.cron" ${PN}
}
==> sanoid-2.2.0/packages/gentoo/sys-fs/sanoid/sanoid-9999.ebuild <==
# Copyright 2019 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2
EAPI=7
EGIT_REPO_URI="https://github.com/jimsalterjrs/${PN}.git"
inherit git-r3
DESCRIPTION="Policy-driven snapshot management and replication tools for ZFS"
HOMEPAGE="https://github.com/jimsalterjrs/sanoid"
LICENSE="GPL-3.0"
SLOT="0"
KEYWORDS="**"
IUSE=""
DEPEND="app-arch/lzop
dev-perl/Config-IniFiles
dev-perl/Capture-Tiny
sys-apps/pv
sys-block/mbuffer
virtual/perl-Data-Dumper"
RDEPEND="${DEPEND}"
BDEPEND=""
DOCS=( README.md )
src_install() {
	dobin findoid
	dobin sanoid
	dobin sleepymutex
	dobin syncoid

	keepdir /etc/${PN}
	insinto /etc/${PN}
	doins sanoid.conf
	doins sanoid.defaults.conf

	insinto /etc/cron.d
	newins "${FILESDIR}/${PN}.cron" ${PN}
}
==> sanoid-2.2.0/packages/rhel/sanoid.spec <==
%global version 2.2.0
%global git_tag v%{version}
# Enable with systemctl "enable sanoid.timer"
%global _with_systemd 1
Name: sanoid
Version: %{version}
Release: 1%{?dist}
BuildArch: noarch
Summary: A policy-driven snapshot management tool for ZFS file systems
Group: Applications/System
License: GPLv3
URL: https://github.com/jimsalterjrs/sanoid
Source0: https://github.com/jimsalterjrs/%{name}/archive/%{git_tag}/%{name}-%{version}.tar.gz
Requires: perl, mbuffer, lzop, pv, perl-Config-IniFiles, perl-Capture-Tiny
%if 0%{?_with_systemd}
Requires: systemd >= 212
BuildRequires: systemd
%endif
%description
Sanoid is a policy-driven snapshot management
tool for ZFS file systems. You can use Sanoid
to create, automatically thin, and monitor snapshots
and pool health from a single eminently
human-readable TOML configuration file.
%prep
%setup -q
%build
echo "Nothing to build"
%install
%{__install} -D -m 0644 sanoid.defaults.conf %{buildroot}/etc/sanoid/sanoid.defaults.conf
%{__install} -d %{buildroot}%{_sbindir}
%{__install} -m 0755 sanoid syncoid findoid sleepymutex %{buildroot}%{_sbindir}
%if 0%{?_with_systemd}
%{__install} -d %{buildroot}%{_unitdir}
%endif
%if 0%{?fedora}
%{__install} -D -m 0644 sanoid.conf %{buildroot}%{_docdir}/%{name}/examples/sanoid.conf
%endif
%if 0%{?rhel}
%{__install} -D -m 0644 sanoid.conf %{buildroot}%{_docdir}/%{name}-%{version}/examples/sanoid.conf
%endif
%if 0%{?_with_systemd}
cat > %{buildroot}%{_unitdir}/%{name}.service <<EOF
...
EOF
cat > %{buildroot}%{_unitdir}/%{name}.timer <<EOF
...
EOF
%endif
%if ! 0%{?_with_systemd}
%if 0%{?rhel}
cat > %{buildroot}%{_docdir}/%{name}-%{version}/examples/sanoid.cron <<EOF
...
EOF
%endif
%endif
%post
%{?_with_systemd:%{_bindir}/systemctl daemon-reload}
%postun
%{?_with_systemd:%{_bindir}/systemctl daemon-reload}
%files
%doc CHANGELIST VERSION README.md FREEBSD.readme
%license LICENSE
%{_sbindir}/sanoid
%{_sbindir}/syncoid
%{_sbindir}/findoid
%{_sbindir}/sleepymutex
%dir %{_sysconfdir}/%{name}
%config %{_sysconfdir}/%{name}/sanoid.defaults.conf
%if 0%{?fedora}
%{_docdir}/%{name}
%endif
%if 0%{?rhel}
%{_docdir}/%{name}-%{version}
%endif
%if 0%{?_with_systemd}
%{_unitdir}/%{name}.service
%{_unitdir}/%{name}.timer
%endif
%changelog
* Tue Jul 18 2023 Christoph Klaffl - 2.2.0
- Bump to 2.2.0
* Tue Nov 24 2020 Christoph Klaffl - 2.1.0
- Bump to 2.1.0
* Wed Oct 02 2019 Christoph Klaffl - 2.0.3
- Bump to 2.0.3
* Wed Sep 25 2019 Christoph Klaffl - 2.0.2
- Bump to 2.0.2
* Tue Dec 04 2018 Christoph Klaffl - 2.0.0
- Bump to 2.0.0
* Sat Apr 28 2018 Dominic Robinson - 1.4.18-1
- Bump to 1.4.18
* Thu Aug 31 2017 Dominic Robinson - 1.4.14-2
- Add systemd timers
* Wed Aug 30 2017 Dominic Robinson - 1.4.14-1
- Version bump
* Wed Jul 12 2017 Thomas M. Lapp - 1.4.13-1
- Version bump
- Include FREEBSD.readme in docs
* Wed Jul 12 2017 Thomas M. Lapp - 1.4.9-1
- Version bump
- Clean up variables and macros
- Compatible with both Fedora and Red Hat
* Sat Feb 13 2016 Thomas M. Lapp - 1.4.4-1
- Initial RPM Package
==> sanoid-2.2.0/sanoid <==
#!/usr/bin/perl
# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.2.0';
my $MINIMUM_DEFAULTS_VERSION = 2;
use strict;
use warnings;
use Config::IniFiles; # read samba-style conf file
use Data::Dumper; # debugging - print contents of hash
use File::Path 'make_path';
use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage; # pod2usage
use Time::Local; # to parse dates in reverse
use Capture::Tiny ':all';
my %args = (
"configdir" => "/etc/sanoid",
"cache-dir" => "/var/cache/sanoid",
"run-dir" => "/var/run/sanoid"
);
GetOptions(\%args, "verbose", "debug", "cron", "readonly", "quiet",
"configdir=s", "cache-dir=s", "run-dir=s",
"monitor-health", "force-update",
"monitor-snapshots", "take-snapshots", "prune-snapshots", "force-prune",
"monitor-capacity"
) or pod2usage(2);
# If only config directory (or nothing) has been specified, default to --cron --verbose
if (keys %args < 4) {
$args{'cron'} = 1;
$args{'verbose'} = 1;
}
# for compatibility reasons, older versions used hardcoded command paths
$ENV{'PATH'} = $ENV{'PATH'} . ":/bin:/sbin";
my $pscmd = 'ps';
my $zfs = 'zfs';
my $zpool = 'zpool';
my $conf_file = "$args{'configdir'}/sanoid.conf";
my $default_conf_file = "$args{'configdir'}/sanoid.defaults.conf";
# parse config file
my %config = init($conf_file,$default_conf_file);
my $cache_dir = $args{'cache-dir'};
my $run_dir = $args{'run-dir'};
make_path($cache_dir);
make_path($run_dir);
# if we call getsnaps(%config,1) it will forcibly update the cache, TTL or no TTL
my $forcecacheupdate = 0;
my $cache = "$cache_dir/snapshots.txt";
my $cacheTTL = 900; # 15 minutes
my %snaps = getsnaps( \%config, $cacheTTL, $forcecacheupdate );
my %pruned;
my %capacitycache;
my %snapsbytype = getsnapsbytype( \%config, \%snaps );
my %snapsbypath = getsnapsbypath( \%config, \%snaps );
# let's make it a little easier to be consistent passing these hashes in the same order to each sub
my @params = ( \%config, \%snaps, \%snapsbytype, \%snapsbypath );
if ($args{'debug'}) { $args{'verbose'}=1; blabber (@params); }
if ($args{'monitor-snapshots'}) { monitor_snapshots(@params); }
if ($args{'monitor-health'}) { monitor_health(@params); }
if ($args{'monitor-capacity'}) { monitor_capacity(@params); }
if ($args{'force-update'}) { my $snaps = getsnaps( \%config, $cacheTTL, 1 ); }
if ($args{'cron'}) {
if ($args{'quiet'}) { $args{'verbose'} = 0; }
take_snapshots (@params);
prune_snapshots (@params);
} else {
if ($args{'take-snapshots'}) { take_snapshots (@params); }
if ($args{'prune-snapshots'}) { prune_snapshots (@params); }
}
exit 0;
####################################################################################
####################################################################################
####################################################################################
sub monitor_health {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %pools;
my @messages;
my $errlevel=0;
foreach my $path (keys %{ $snapsbypath}) {
my @pool = split ('/',$path);
$pools{$pool[0]}=1;
}
foreach my $pool (keys %pools) {
my ($exitcode, $msg) = check_zpool($pool,2);
if ($exitcode > $errlevel) { $errlevel = $exitcode; }
chomp $msg;
push (@messages, $msg);
}
my @warninglevels = ('','*** WARNING *** ','*** CRITICAL *** ');
my $message = $warninglevels[$errlevel] . join (', ',@messages);
print "$message\n";
exit $errlevel;
}
####################################################################################
####################################################################################
####################################################################################
sub monitor_snapshots {
# nagios plugin format: exit 0,1,2,3 for OK, WARN, CRITICAL, or ERROR.
# check_snapshot_date - test ZFS fs creation timestamp for recentness
# accepts arguments: $filesystem, $warn (in seconds elapsed), $crit (in seconds elapsed)
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %datestamp = get_date();
my $errlevel = 0;
my $msg;
my @msgs;
my @paths;
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
if (! $config{$section}{'monitor'}) { next; }
if ($config{$section}{'process_children_only'}) { next; }
my $path = $config{$section}{'path'};
push @paths, $path;
my @types = ('yearly','monthly','weekly','daily','hourly','frequently');
foreach my $type (@types) {
if ($config{$section}{$type} == 0) { next; }
my $smallerperiod = 0;
# we need to set the period length in seconds first
if ($type eq 'frequently') { $smallerperiod = 1; }
elsif ($type eq 'hourly') { $smallerperiod = 60; }
elsif ($type eq 'daily') { $smallerperiod = 60*60; }
elsif ($type eq 'weekly') { $smallerperiod = 60*60*24; }
elsif ($type eq 'monthly') { $smallerperiod = 60*60*24*7; }
elsif ($type eq 'yearly') { $smallerperiod = 60*60*24*31; }
my $typewarn = $type . '_warn';
my $typecrit = $type . '_crit';
my $warn = convertTimePeriod($config{$section}{$typewarn}, $smallerperiod);
my $crit = convertTimePeriod($config{$section}{$typecrit}, $smallerperiod);
my $elapsed = -1;
if (defined $snapsbytype{$path}{$type}{'newest'}) {
$elapsed = $snapsbytype{$path}{$type}{'newest'};
}
my $dispelapsed = displaytime($elapsed);
my $dispwarn = displaytime($warn);
my $dispcrit = displaytime($crit);
if ( $elapsed > $crit || $elapsed == -1) {
if ($crit > 0) {
if (! $config{$section}{'monitor_dont_crit'}) { $errlevel = 2; }
if ($elapsed == -1) {
push @msgs, "CRIT: $path has no $type snapshots at all!";
} else {
push @msgs, "CRIT: $path newest $type snapshot is $dispelapsed old (should be < $dispcrit)";
}
}
} elsif ($elapsed > $warn) {
if ($warn > 0) {
if (! $config{$section}{'monitor_dont_warn'} && ($errlevel < 2) ) { $errlevel = 1; }
push @msgs, "WARN: $path newest $type snapshot is $dispelapsed old (should be < $dispwarn)";
}
} else {
# push @msgs .= "OK: $path newest $type snapshot is $dispelapsed old \n";
}
}
}
my @sorted_msgs = sort { lc($a) cmp lc($b) } @msgs;
my @sorted_paths = sort { lc($a) cmp lc($b) } @paths;
$msg = join (", ", @sorted_msgs);
my $paths = join (", ", @sorted_paths);
if ($msg eq '') { $msg = "OK: all monitored datasets \($paths\) have fresh snapshots"; }
print "$msg\n";
exit $errlevel;
}
####################################################################################
####################################################################################
####################################################################################
sub monitor_capacity {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %pools;
my @messages;
my $errlevel=0;
# build pool list with corresponding capacity limits
foreach my $section (keys %config) {
my @pool = split ('/',$section);
if (scalar @pool == 1 || !defined($pools{$pool[0]}) ) {
my %capacitylimits;
if (!check_capacity_limit($config{$section}{'capacity_warn'})) {
die "ERROR: invalid zpool capacity warning limit!\n";
}
if ($config{$section}{'capacity_warn'} != 0) {
$capacitylimits{'warn'} = $config{$section}{'capacity_warn'};
}
if (!check_capacity_limit($config{$section}{'capacity_crit'})) {
die "ERROR: invalid zpool capacity critical limit!\n";
}
if ($config{$section}{'capacity_crit'} != 0) {
$capacitylimits{'crit'} = $config{$section}{'capacity_crit'};
}
if (%capacitylimits) {
$pools{$pool[0]} = \%capacitylimits;
}
}
}
foreach my $pool (keys %pools) {
my $capacitylimitsref = $pools{$pool};
my ($exitcode, $msg) = check_zpool_capacity($pool,\%$capacitylimitsref);
if ($exitcode > $errlevel) { $errlevel = $exitcode; }
chomp $msg;
push (@messages, $msg);
}
my @warninglevels = ('','*** WARNING *** ','*** CRITICAL *** ');
my $message = $warninglevels[$errlevel] . join (', ',@messages);
print "$message\n";
exit $errlevel;
}
####################################################################################
####################################################################################
####################################################################################
sub prune_snapshots {
if ($args{'verbose'}) { print "INFO: pruning snapshots...\n"; }
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %datestamp = get_date();
my $forcecacheupdate = 0;
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
if (! $config{$section}{'autoprune'}) { next; }
if ($config{$section}{'process_children_only'}) { next; }
my $path = $config{$section}{'path'};
my $period = 0;
if (check_prune_defer($config, $section)) {
if ($args{'verbose'}) { print "INFO: deferring snapshot pruning ($section)...\n"; }
next;
}
foreach my $type (keys %{ $config{$section} }){
unless ($type =~ /ly$/) { next; }
# we need to set the period length in seconds first
if ($type eq 'frequently') { $period = 60 * $config{$section}{'frequent_period'}; }
elsif ($type eq 'hourly') { $period = 60*60; }
elsif ($type eq 'daily') { $period = 60*60*24; }
elsif ($type eq 'weekly') { $period = 60*60*24*7; }
elsif ($type eq 'monthly') { $period = 60*60*24*31; }
elsif ($type eq 'yearly') { $period = 60*60*24*365.25; }
# avoid pissing off use warnings by not executing this block if no matching snaps exist
if (defined $snapsbytype{$path}{$type}{'sorted'}) {
my @sorted = split (/\|/,$snapsbytype{$path}{$type}{'sorted'});
# if we say "daily=30" we really mean "don't keep any dailies more than 30 days old", etc
my $maxage = ( time() - $config{$section}{$type} * $period );
# but if we say "daily=30" we ALSO mean "don't get rid of ANY dailies unless we have more than 30".
my $minsnapsthistype = $config{$section}{$type};
# how many total snaps of this type do we currently have?
my $numsnapsthistype = scalar (@sorted);
my @prunesnaps;
foreach my $snap( @sorted ){
# print "snap $path\@$snap has age $snaps{$path}{$snap}{'ctime'}, maxage is $maxage.\n";
if ( ($snaps{$path}{$snap}{'ctime'} < $maxage) && ($numsnapsthistype > $minsnapsthistype) ) {
my $fullpath = $path . '@' . $snap;
push(@prunesnaps,$fullpath);
# we just got rid of a snap, so we now have one fewer, duh
$numsnapsthistype--;
}
}
if ((scalar @prunesnaps) > 0) {
# print "found some snaps to prune!\n"
if (checklock('sanoid_pruning')) {
writelock('sanoid_pruning');
foreach my $snap( @prunesnaps ){
my $dataset = (split '@', $snap)[0];
my $snapname = (split '@', $snap)[1];
if (! $args{'readonly'} && $config{$dataset}{'pre_pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
if ($args{'verbose'}) { print "executing pre_pruning_script '".$config{$dataset}{'pre_pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pre_pruning_script', $dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
if ($ret != 0) {
# warning was already thrown by runscript function
# skip pruning if pre snapshot script returns non zero exit code
next;
}
}
if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; }
if (!$args{'force-prune'} && iszfsbusy($path)) {
if ($args{'verbose'}) { print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n"; }
} else {
if (! $args{'readonly'}) {
if (system($zfs, "destroy", $snap) == 0) {
$pruned{$snap} = 1;
if ($config{$dataset}{'pruning_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_SNAPNAME'} = $snapname;
$ENV{'SANOID_SCRIPT'} = 'prune';
if ($args{'verbose'}) { print "executing pruning_script '".$config{$dataset}{'pruning_script'}."' on dataset '$dataset'\n"; }
my $ret = runscript('pruning_script',$dataset);
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_SNAPNAME'};
delete $ENV{'SANOID_SCRIPT'};
}
} else {
warn "could not remove $snap : $?";
}
}
}
}
removelock('sanoid_pruning');
removecachedsnapshots(0);
} else {
if ($args{'verbose'}) { print "INFO: deferring snapshot pruning - valid pruning lock held by other sanoid process.\n"; }
}
}
}
}
}
# if there were any deferred cache updates,
# do them now and wait if necessary
removecachedsnapshots(1);
} # end prune_snapshots
####################################################################################
####################################################################################
####################################################################################
sub take_snapshots {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
my %datestamp = get_date();
my $forcecacheupdate = 0;
my %newsnapsgroup;
# get utc timestamp of the current day for DST check
my $daystartUtc = timelocal(0, 0, 0, $datestamp{'mday'}, ($datestamp{'mon'}-1), $datestamp{'year'});
my ($isdst) = (localtime($daystartUtc))[8];
my $dstOffset = 0;
if ($isdst ne $datestamp{'isdst'}) {
# current DST state is different than at the beginning of the day
if ($isdst) {
# DST ended in the current day
$dstOffset = 60*60;
}
}
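# illustrative example: on the autumn changeover day, midnight was still DST
# but "now" is not, so $dstOffset adds back the hour timelocal() would drop.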
if ($args{'verbose'}) { print "INFO: taking snapshots...\n"; }
foreach my $section (keys %config) {
if ($section =~ /^template/) { next; }
if (! $config{$section}{'autosnap'}) { next; }
if ($config{$section}{'process_children_only'}) { next; }
my $path = $config{$section}{'path'};
my @types = ('yearly','monthly','weekly','daily','hourly','frequently');
foreach my $type (@types) {
if ($config{$section}{$type} > 0) {
my $newestage; # in seconds
if (defined $snapsbytype{$path}{$type}{'newest'}) {
$newestage = $snapsbytype{$path}{$type}{'newest'};
} else{
$newestage = 9999999999999999;
}
# for use with localtime: @preferredtime will be most recent preferred snapshot time in ($sec,$min,$hour,$mon-1,$year) format
my @preferredtime;
my $lastpreferred;
# to avoid duplicates with DST
my $handleDst = 0;
if ($type eq 'frequently') {
my $frequentslice = int($datestamp{'min'} / $config{$section}{'frequent_period'});
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$frequentslice * $config{$section}{'frequent_period'};
push @preferredtime,$datestamp{'hour'};
push @preferredtime,$datestamp{'mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60 * $config{$section}{'frequent_period'}; } # preferred time is later this frequent period - so look at last frequent period
} elsif ($type eq 'hourly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'hourly_min'};
push @preferredtime,$datestamp{'hour'};
push @preferredtime,$datestamp{'mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($dstOffset ne 0) {
# timelocal doesn't take DST into account
$lastpreferred += $dstOffset;
# DST ended, avoid duplicates
$handleDst = 1;
}
if ($lastpreferred > time()) { $lastpreferred -= 60*60; } # preferred time is later this hour - so look at last hour's
} elsif ($type eq 'daily') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'daily_min'};
push @preferredtime,$config{$section}{'daily_hour'};
push @preferredtime,$datestamp{'mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
# timelocal doesn't take DST into account
$lastpreferred += $dstOffset;
# check if the planned time has different DST flag than the current
my ($isdst) = (localtime($lastpreferred))[8];
if ($isdst ne $datestamp{'isdst'}) {
if (!$isdst) {
# correct DST difference
$lastpreferred -= 60*60;
}
}
if ($lastpreferred > time()) {
$lastpreferred -= 60*60*24;
if ($dstOffset ne 0) {
# because we are going back one day
# the DST difference has to be accounted
# for in reverse now
$lastpreferred -= 2*$dstOffset;
}
} # preferred time is later today - so look at yesterday's
} elsif ($type eq 'weekly') {
# calculate offset in seconds for the desired weekday
my $offset = 0;
if ($config{$section}{'weekly_wday'} < $datestamp{'wday'}) {
$offset += 7;
}
$offset += $config{$section}{'weekly_wday'} - $datestamp{'wday'};
$offset *= 60*60*24; # full day
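# illustrative example: today is Wednesday (wday 3) and weekly_wday = 1 (Monday):
# offset = (1 - 3 + 7) = 5 days, landing on next Monday; the "> time()" check
# below then steps back one full week to the most recent Monday.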
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'weekly_min'};
push @preferredtime,$config{$section}{'weekly_hour'};
push @preferredtime,$datestamp{'mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
$lastpreferred += $offset;
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*7; } # preferred time is later this week - so look at last week's
} elsif ($type eq 'monthly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'monthly_min'};
push @preferredtime,$config{$section}{'monthly_hour'};
push @preferredtime,$config{$section}{'monthly_mday'};
push @preferredtime,($datestamp{'mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*31; } # preferred time is later this month - so look at last month's
} elsif ($type eq 'yearly') {
push @preferredtime,0; # try to hit 0 seconds
push @preferredtime,$config{$section}{'yearly_min'};
push @preferredtime,$config{$section}{'yearly_hour'};
push @preferredtime,$config{$section}{'yearly_mday'};
push @preferredtime,($config{$section}{'yearly_mon'}-1); # january is month 0
push @preferredtime,$datestamp{'year'};
$lastpreferred = timelocal(@preferredtime);
if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*365.25; } # preferred time is later this year - so look at last year's
} else {
warn "WARN: unknown interval type $type in config!";
next;
}
# reconstruct our human-formatted most recent preferred snapshot time into an epoch time, to compare with the epoch of our most recent snapshot
my $maxage = time()-$lastpreferred;
if ( $newestage > $maxage ) {
# print "we should have had a $type snapshot of $path $maxage seconds ago; most recent is $newestage seconds old.\n";
if (!exists $newsnapsgroup{$path}) {
$newsnapsgroup{$path} = {
'recursive' => $config{$section}{'zfs_recursion'},
'handleDst' => $handleDst,
'datasets' => [$path], # for later atomic grouping, currently only a one element array
'types' => []
};
}
push(@{$newsnapsgroup{$path}{'types'}}, $type);
}
}
}
}
if (%newsnapsgroup) {
while ((my $path, my $snapData) = each(%newsnapsgroup)) {
my $recursiveFlag = $snapData->{recursive};
my $dstHandling = $snapData->{handleDst};
my @datasets = @{$snapData->{datasets}};
my $dataset = $datasets[0];
my @types = @{$snapData->{types}};
# same timestamp for all snapshots types (daily, hourly, ...)
my %datestamp = get_date();
my @snapshots;
foreach my $type (@types) {
my $snapname = "autosnap_$datestamp{'sortable'}_$type";
push(@snapshots, $snapname);
}
my $datasetString = join(",", @datasets);
my $typeString = join(",", @types);
my $snapshotString = join(",", @snapshots);
my $extraMessage = "";
if ($recursiveFlag) {
$extraMessage = " (zfs recursive)";
}
my $presnapshotfailure = 0;
my $ret = 0;
if ($config{$dataset}{'pre_snapshot_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_TARGETS'} = $datasetString;
$ENV{'SANOID_SNAPNAME'} = $snapshots[0];
$ENV{'SANOID_SNAPNAMES'} = $snapshotString;
$ENV{'SANOID_TYPES'} = $typeString;
$ENV{'SANOID_SCRIPT'} = 'pre';
if ($args{'verbose'}) { print "executing pre_snapshot_script '".$config{$dataset}{'pre_snapshot_script'}."' on dataset '$dataset'\n"; }
if (!$args{'readonly'}) {
$ret = runscript('pre_snapshot_script',$dataset);
}
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_TARGETS'};
delete $ENV{'SANOID_SNAPNAME'};
delete $ENV{'SANOID_SNAPNAMES'};
delete $ENV{'SANOID_TYPES'};
delete $ENV{'SANOID_SCRIPT'};
if ($ret != 0) {
# warning was already thrown by runscript function
$config{$dataset}{'no_inconsistent_snapshot'} and next;
$presnapshotfailure = 1;
}
}
foreach my $snap (@snapshots) {
$snap = "$dataset\@$snap";
if ($args{'verbose'}) { print "taking snapshot $snap$extraMessage\n"; }
if (!$args{'readonly'}) {
my $stderr;
my $exit;
($stderr, $exit) = tee_stderr {
if ($recursiveFlag) {
system($zfs, "snapshot", "-r", "$snap");
} else {
system($zfs, "snapshot", "$snap");
}
};
$exit == 0 or do {
if ($dstHandling) {
if ($stderr =~ /already exists/) {
$exit = 0;
$snap =~ s/_([a-z]+)$/dst_$1/g;
if ($args{'verbose'}) { print "taking dst snapshot $snap$extraMessage\n"; }
if ($recursiveFlag) {
system($zfs, "snapshot", "-r", "$snap") == 0
or warn "CRITICAL ERROR: $zfs snapshot -r $snap failed, $?";
} else {
system($zfs, "snapshot", "$snap") == 0
or warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?";
}
}
}
};
$exit == 0 or do {
if ($recursiveFlag) {
warn "CRITICAL ERROR: $zfs snapshot -r $snap failed, $?";
} else {
warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?";
}
};
}
}
if ($config{$dataset}{'post_snapshot_script'}) {
if (!$presnapshotfailure or $config{$dataset}{'force_post_snapshot_script'}) {
$ENV{'SANOID_TARGET'} = $dataset;
$ENV{'SANOID_TARGETS'} = $datasetString;
$ENV{'SANOID_SNAPNAME'} = $snapshots[0];
$ENV{'SANOID_SNAPNAMES'} = $snapshotString;
$ENV{'SANOID_TYPES'} = $typeString;
$ENV{'SANOID_SCRIPT'} = 'post';
$ENV{'SANOID_PRE_FAILURE'} = $presnapshotfailure;
if ($args{'verbose'}) { print "executing post_snapshot_script '".$config{$dataset}{'post_snapshot_script'}."' on dataset '$dataset'\n"; }
if (!$args{'readonly'}) {
runscript('post_snapshot_script',$dataset);
}
delete $ENV{'SANOID_TARGET'};
delete $ENV{'SANOID_TARGETS'};
delete $ENV{'SANOID_SNAPNAME'};
delete $ENV{'SANOID_SNAPNAMES'};
delete $ENV{'SANOID_TYPES'};
delete $ENV{'SANOID_SCRIPT'};
delete $ENV{'SANOID_PRE_FAILURE'};
}
}
}
$forcecacheupdate = 1;
%snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate);
}
}
####################################################################################
####################################################################################
####################################################################################
sub blabber {
my ($config, $snaps, $snapsbytype, $snapsbypath) = @_;
$Data::Dumper::Sortkeys = 1;
print "****** CONFIGS ******\n";
print Dumper(\%config);
#print "****** SNAPSHOTS ******\n";
#print Dumper(\%snaps);
#print "****** SNAPSBYTYPE ******\n";
#print Dumper(\%snapsbytype);
#print "****** SNAPSBYPATH ******\n";
#print Dumper(\%snapsbypath);
print "\n";
foreach my $section (keys %config) {
my $path = $config{$section}{'path'};
print "Filesystem $path has:\n";
print " $snapsbypath{$path}{'numsnaps'} total snapshots ";
if ($snapsbypath{$path}{'numsnaps'} == 0) {
print "(no current snapshots)"
} else {
print "(newest: ";
my $newest = sprintf("%.1f",$snapsbypath{$path}{'newest'} / 60 / 60);
print "$newest hours old)\n";
foreach my $type (keys %{ $snapsbytype{$path} }){
print " $snapsbytype{$path}{$type}{'numsnaps'} $type\n";
print " desired: $config{$section}{$type}\n";
print " newest: ";
my $newest = sprintf("%.1f",($snapsbytype{$path}{$type}{'newest'} / 60 / 60));
print "$newest hours old, named $snapsbytype{$path}{$type}{'newestname'}\n";
}
}
print "\n\n";
}
} # end blabber
####################################################################################
####################################################################################
####################################################################################
sub getsnapsbytype {
my ($config, $snaps) = @_;
my %snapsbytype;
# iterate through each module section - each section is a single ZFS path
foreach my $section (keys %config) {
my $path = $config{$section}{'path'};
my %rawsnaps;
foreach my $name (keys %{ $snaps{$path} }){
my $type = $snaps{$path}{$name}{'type'};
$rawsnaps{$type}{$name} = $snaps{$path}{$name}{'ctime'}
}
# iterate through snapshots of each type, ordered by creation time of each snapshot within that type
foreach my $type (keys %rawsnaps) {
$snapsbytype{$path}{$type}{'numsnaps'} = scalar (keys %{ $rawsnaps{$type} });
my @sortedsnaps;
foreach my $name (
sort { $rawsnaps{$type}{$a} <=> $rawsnaps{$type}{$b} } keys %{ $rawsnaps{$type} }
) {
push @sortedsnaps, $name;
$snapsbytype{$path}{$type}{'newest'} = (time-$snaps{$path}{$name}{'ctime'});
$snapsbytype{$path}{$type}{'newestname'} = $name;
}
$snapsbytype{$path}{$type}{'sorted'} = join ('|',@sortedsnaps);
}
}
return %snapsbytype;
} # end getsnapsbytype
####################################################################################
####################################################################################
####################################################################################
sub getsnapsbypath {
my ($config,$snaps) = @_;
my %snapsbypath;
# iterate through each module section - each section is a single ZFS path
foreach my $section (keys %config) {
my $path = $config{$section}{'path'};
$snapsbypath{$path}{'numsnaps'} = scalar (keys %{ $snaps{$path} });
# iterate through snapshots of each type, ordered by creation time of each snapshot within that type
my %rawsnaps;
foreach my $snapname ( keys %{ $snaps{$path} } ) {
$rawsnaps{$path}{$snapname} = $snaps{$path}{$snapname}{'ctime'};
}
my @sortedsnaps;
foreach my $snapname (
sort { $rawsnaps{$path}{$a} <=> $rawsnaps{$path}{$b} } keys %{ $rawsnaps{$path} }
) {
push @sortedsnaps, $snapname;
$snapsbypath{$path}{'newest'} = (time-$snaps{$path}{$snapname}{'ctime'});
}
my $sortedsnaps = join ('|',@sortedsnaps);
$snapsbypath{$path}{'sorted'} = $sortedsnaps;
}
return %snapsbypath;
} # end getsnapsbypath
####################################################################################
####################################################################################
####################################################################################
sub getsnaps {
my ($config, $cacheTTL, $forcecacheupdate) = @_;
my @rawsnaps;
my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache);
if ( $forcecacheupdate || ! -f $cache || (time() - $mtime) > $cacheTTL ) {
if (checklock('sanoid_cacheupdate')) {
writelock('sanoid_cacheupdate');
if ($args{'verbose'}) {
if ($args{'force-update'}) {
print "INFO: cache forcibly expired - updating from zfs list.\n";
} else {
print "INFO: cache expired - updating from zfs list.\n";
}
}
open FH, "$zfs get -Hrpt snapshot creation |";
@rawsnaps = <FH>;
close FH;
open FH, "> $cache" or die 'Could not write to $cache!\n';
print FH @rawsnaps;
close FH;
removelock('sanoid_cacheupdate');
} else {
if ($args{'verbose'}) { print "INFO: deferring cache update - valid cache update lock held by another sanoid process.\n"; }
open FH, "< $cache";
@rawsnaps = <FH>;
close FH;
}
} else {
# if ($args{'debug'}) { print "DEBUG: cache not expired (" . (time() - $mtime) . " seconds old with TTL of $cacheTTL): pulling snapshot list from cache.\n"; }
open FH, "< $cache";
@rawsnaps = <FH>;
close FH;
}
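# each cached line is raw "zfs get" output; a line might look roughly like this
# (illustrative - fields are tab-separated, with a trailing source column):
#   tank/data@autosnap_2023-07-01_23:59:02_daily	creation	1688248742	-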
foreach my $snap (@rawsnaps) {
my ($fs,$snapname,$snapdate) = ($snap =~ m/(.*)\@(.*ly)\t*creation\t*(\d*)/);
# avoid pissing off use warnings
if (defined $snapname) {
my ($snaptype) = ($snapname =~ m/.*_(\w*ly)/);
if ($snapname =~ /^autosnap/) {
$snaps{$fs}{$snapname}{'ctime'}=$snapdate;
$snaps{$fs}{$snapname}{'type'}=$snaptype;
}
}
}
return %snaps;
}
####################################################################################
####################################################################################
####################################################################################
sub init {
my ($conf_file, $default_conf_file) = @_;
my %config;
unless (-e $default_conf_file ) { die "FATAL: cannot load $default_conf_file - please restore a clean copy, this is not a user-editable file!"; }
unless (-e $conf_file ) { die "FATAL: cannot load $conf_file - please create a valid local config file before running sanoid!"; }
tie my %defaults, 'Config::IniFiles', ( -file => $default_conf_file ) or die "FATAL: cannot load $default_conf_file - please restore a clean copy, this is not a user-editable file!";
tie my %ini, 'Config::IniFiles', ( -file => $conf_file ) or die "FATAL: cannot load $conf_file - please create a valid local config file before running sanoid!";
# we'll use these later to normalize potentially true and false values on any toggle keys
my @toggles = ('autosnap','autoprune','monitor_dont_warn','monitor_dont_crit','monitor','recursive','process_children_only','skip_children','no_inconsistent_snapshot','force_post_snapshot_script');
# recursive is defined as toggle but can also have the special value "zfs", it is kept to be backward compatible
my @istrue=(1,"true","True","TRUE","yes","Yes","YES","on","On","ON");
my @isfalse=(0,"false","False","FALSE","no","No","NO","off","Off","OFF");
# check if default configuration file is up to date
my $defaults_version = 1;
if (defined $defaults{'version'}{'version'}) {
$defaults_version = $defaults{'version'}{'version'};
delete $defaults{'version'};
}
if ($defaults_version < $MINIMUM_DEFAULTS_VERSION) {
die "FATAL: you're using sanoid.defaults.conf v$defaults_version, this version of sanoid requires a minimum sanoid.defaults.conf v$MINIMUM_DEFAULTS_VERSION";
}
foreach my $section (keys %ini) {
# first up - die with honor if unknown parameters are set in any modules or templates by the user.
foreach my $key (keys %{$ini{$section}}) {
if (! defined ($defaults{'template_default'}{$key})) {
die "FATAL ERROR: I don't understand the setting $key you've set in \[$section\] in $conf_file.\n";
}
# in case of duplicate lines we will end up with an array of all values
my $value = $ini{$section}{$key};
if (ref($value) eq 'ARRAY') {
warn "duplicate key '$key' in section '$section', using the value from the first occurence and ignoring the others.\n";
$ini{$section}{$key} = $value->[0];
}
}
if ($section =~ /^template_/) { next; } # don't process templates directly
# only set defaults on sections that haven't already been initialized - this allows us to override values
# for sections directly when they've already been defined recursively, without starting them over from scratch.
if (! defined ($config{$section}{'initialized'})) {
if ($args{'debug'}) { print "DEBUG: initializing \$config\{$section\} with default values from $default_conf_file.\n"; }
# set default values from %defaults, which can then be overridden by template
# and/or local settings within the module.
foreach my $key (keys %{$defaults{'template_default'}}) {
if (! ($key =~ /template|recursive|children_only/)) {
$config{$section}{$key} = $defaults{'template_default'}{$key};
}
}
# override with values from user-defined default template, if any
foreach my $key (keys %{$ini{'template_default'}}) {
if ($key =~ /template|recursive/) {
warn "ignored key '$key' from user-defined default template.\n";
next;
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined default template.\n"; }
$config{$section}{$key} = $ini{'template_default'}{$key};
}
}
# override with values from user-defined templates applied to this module,
# in the order they were specified (ie use_template = default,production,mytemplate)
if (defined $ini{$section}{'use_template'}) {
my @templates = split (' *, *',$ini{$section}{'use_template'});
foreach my $rawtemplate (@templates) {
# strip trailing whitespace
$rawtemplate =~ s/\s+$//g;
my $template = 'template_'.$rawtemplate;
foreach my $key (keys %{$ini{$template}}) {
if ($key =~ /template|recursive/) {
warn "ignored key '$key' from '$rawtemplate' template.\n";
next;
}
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; }
$config{$section}{$key} = $ini{$template}{$key};
}
}
}
# override with any locally set values in the module itself
foreach my $key (keys %{$ini{$section}} ) {
if (! ($key =~ /template|recursive|skip_children/)) {
if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value directly set in module.\n"; }
$config{$section}{$key} = $ini{$section}{$key};
}
}
# make sure that true values are true and false values are false for any toggled values
foreach my $toggle(@toggles) {
foreach my $true (@istrue) {
if (defined $config{$section}{$toggle} && $config{$section}{$toggle} eq $true) { $config{$section}{$toggle} = 1; }
}
foreach my $false (@isfalse) {
if (defined $config{$section}{$toggle} && $config{$section}{$toggle} eq $false) { $config{$section}{$toggle} = 0; }
}
}
# section path is the section name, unless section path has been explicitly defined
if (defined ($ini{$section}{'path'})) {
$config{$section}{'path'} = $ini{$section}{'path'};
} else {
$config{$section}{'path'} = $section;
}
# how 'bout some recursion? =)
if ($config{$section}{'zfs_recursion'} && $config{$section}{'zfs_recursion'} == 1 && $config{$section}{'autosnap'} == 1) {
warn "ignored autosnap configuration for '$section' because it's part of a zfs recursion.\n";
$config{$section}{'autosnap'} = 0;
}
my $recursive = $ini{$section}{'recursive'} && grep( /^$ini{$section}{'recursive'}$/, @istrue );
my $zfsRecursive = $ini{$section}{'recursive'} && $ini{$section}{'recursive'} =~ /zfs/i;
my $skipChildren = $ini{$section}{'skip_children'} && grep( /^$ini{$section}{'skip_children'}$/, @istrue );
my @datasets;
if ($zfsRecursive || $recursive || $skipChildren) {
if ($zfsRecursive) {
$config{$section}{'zfs_recursion'} = 1;
}
@datasets = getchilddatasets($config{$section}{'path'});
DATASETS: foreach my $dataset(@datasets) {
chomp $dataset;
if ($zfsRecursive) {
# don't try to take the snapshot ourself, recursive zfs snapshot will take care of that
$config{$dataset}{'autosnap'} = 0;
foreach my $key (keys %{$config{$section}} ) {
if (! ($key =~ /template|recursive|children_only|autosnap/)) {
if ($args{'debug'}) { print "DEBUG: recursively setting $key from $section to $dataset.\n"; }
$config{$dataset}{$key} = $config{$section}{$key};
}
}
} else {
if ($skipChildren) {
if ($args{'debug'}) { print "DEBUG: ignoring $dataset.\n"; }
delete $config{$dataset};
next DATASETS;
}
foreach my $key (keys %{$config{$section}} ) {
if (! ($key =~ /template|recursive|children_only/)) {
if ($args{'debug'}) { print "DEBUG: recursively setting $key from $section to $dataset.\n"; }
$config{$dataset}{$key} = $config{$section}{$key};
}
}
}
$config{$dataset}{'path'} = $dataset;
$config{$dataset}{'initialized'} = 1;
}
}
}
return %config;
} # end sub init
####################################################################################
####################################################################################
####################################################################################
sub get_date {
my %datestamp;
($datestamp{'sec'},$datestamp{'min'},$datestamp{'hour'},$datestamp{'mday'},$datestamp{'mon'},$datestamp{'year'},$datestamp{'wday'},$datestamp{'yday'},$datestamp{'isdst'}) = localtime(time);
$datestamp{'year'} += 1900;
$datestamp{'unix_time'} = (((((((($datestamp{'year'} - 1971) * 365) + $datestamp{'yday'}) * 24) + $datestamp{'hour'}) * 60) + $datestamp{'min'}) * 60) + $datestamp{'sec'};
$datestamp{'sec'} = sprintf ("%02u", $datestamp{'sec'});
$datestamp{'min'} = sprintf ("%02u", $datestamp{'min'});
$datestamp{'hour'} = sprintf ("%02u", $datestamp{'hour'});
$datestamp{'mday'} = sprintf ("%02u", $datestamp{'mday'});
$datestamp{'mon'} = sprintf ("%02u", ($datestamp{'mon'} + 1));
$datestamp{'noseconds'} = "$datestamp{'year'}-$datestamp{'mon'}-$datestamp{'mday'}_$datestamp{'hour'}:$datestamp{'min'}";
$datestamp{'sortable'} = "$datestamp{'noseconds'}:$datestamp{'sec'}";
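# illustrative example: 'sortable' => '2023-07-01_23:59:02', so a plain
# lexical sort of snapshot names is also a chronological sort.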
return %datestamp;
}
####################################################################################
####################################################################################
####################################################################################
sub displaytime {
# take a time in seconds, return it in human readable form
my ($elapsed) = @_;
my $days = int ($elapsed / 60 / 60 / 24);
$elapsed -= $days * 60 * 60 * 24;
my $hours = int ($elapsed / 60 / 60);
$elapsed -= $hours * 60 * 60;
my $minutes = int ($elapsed / 60);
$elapsed -= $minutes * 60;
my $seconds = int($elapsed);
my $humanreadable;
if ($days) { $humanreadable .= " $days" . 'd'; }
if ($hours || $days) { $humanreadable .= " $hours" . 'h'; }
if ($minutes || $hours || $days) { $humanreadable .= " $minutes" . 'm'; }
$humanreadable .= " $seconds" . 's';
$humanreadable =~ s/^ //;
return $humanreadable;
}
####################################################################################
####################################################################################
####################################################################################
sub check_zpool() {
# check_zfs Nagios plugin for monitoring Sun ZFS zpools
# Copyright (c) 2007
# original Written by Nathan Butcher
# adapted for use within Sanoid framework by Jim Salter (2014)
#
# Released under the GNU Public License
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
# Version: 0.9.2
# Date : 24th July 2007
# This plugin has tested on FreeBSD 7.0-CURRENT and Solaris 10
# With a bit of fondling, it could be expanded to recognize other OSes in
# future (e.g. if FUSE Linux gets off the ground)
# Verbose levels:-
# 1 - Only alert us of zpool health and size stats
# 2 - ...also alert us of failed devices when things go bad
# 3 - ...alert us of the status of all devices regardless of health
#
# Usage: check_zfs <zpool name> <verbose level 1-3>
# Example: check_zfs zeepool 1
# ZPOOL zeepool : ONLINE {Size:3.97G Used:183K Avail:3.97G Cap:0%}
my %ERRORS=('DEPENDENT'=>4,'UNKNOWN'=>3,'OK'=>0,'WARNING'=>1,'CRITICAL'=>2);
my $state="UNKNOWN";
my $msg="FAILURE";
my $pool=shift;
my $verbose=shift;
my $size="";
my $used="";
my $avail="";
my $cap="";
my $health="";
my $dmge="";
my $dedup="";
if ($verbose < 1 || $verbose > 3) {
print "Verbose levels range from 1-3\n";
exit $ERRORS{$state};
}
my $statcommand="$zpool list -o name,size,cap,health,free $pool";
if (! open STAT, "$statcommand|") {
print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
exit $ERRORS{$state};
}
# chuck the header line
my $header = <STAT>;
# find and parse the line with values for the pool
while(<STAT>) {
chomp;
if (/^${pool}\s+/) {
my @row = split (/ +/);
my $name;
($name, $size, $cap, $health, $avail) = @row;
}
}
# Tony: Debugging
# print "Size: $size \t Used: $used \t Avai: $avail \t Cap: $cap \t Health: $health\n";
close(STAT);
## check for valid zpool list response from zpool
if (! $health ) {
$state = "CRITICAL";
$msg = sprintf "ZPOOL {%s} does not exist and/or is not responding!\n", $pool;
print $state, " ", $msg;
exit ($ERRORS{$state});
}
## determine health of zpool and subsequent error status
if ($health eq "ONLINE" ) {
$state = "OK";
} else {
if ($health eq "DEGRADED") {
$state = "WARNING";
} else {
$state = "CRITICAL";
}
}
## get more detail on possible device failure
## flag to detect section of zpool status involving our zpool
my $poolfind=0;
$statcommand="$zpool status $pool";
if (! open STAT, "$statcommand|") {
$state = 'CRITICAL';
print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n");
exit $ERRORS{$state};
}
## go through zfs status output to find zpool fses and devices
while(<STAT>) {
chomp;
if (/^\s${pool}/ && $poolfind==1) {
$poolfind=2;
next;
} elsif ( $poolfind==1 ) {
$poolfind=0;
}
if (/NAME\s+STATE\s+READ\s+WRITE\s+CKSUM/) {
$poolfind=1;
}
if ( /^$/ ) {
$poolfind=0;
}
if ($poolfind == 2) {
## special cases pertaining to full verbose
if (/^\sspares/) {
next unless $verbose == 3;
$dmge=$dmge . "[SPARES]:- ";
next;
}
if (/^\s{5}spare\s/) {
next unless $verbose == 3;
my ($sta) = /spare\s+(\S+)/;
$dmge=$dmge . "[SPARE:${sta}]:- ";
next;
}
if (/^\s{5}replacing\s/) {
next unless $verbose == 3;
my $perc;
my ($sta) = /^\s+\S+\s+(\S+)/;
if (/%/) {
($perc) = /([0-9]+%)/;
} else {
$perc = "working";
}
$dmge=$dmge . "[REPLACING:${sta} (${perc})]:- ";
next;
}
## other cases
my ($dev, $sta, $read, $write, $cksum) = /^\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/;
if (!defined($sta)) {
# cache and logs are special and don't have a status
next;
}
## pool online, not degraded thanks to dead/corrupted disk
if ($state eq "OK" && $sta eq "UNAVAIL") {
$state="WARNING";
## switching to verbose level 2 to explain weirdness
if ($verbose == 1) {
$verbose =2;
}
}
## no display for verbose level 1
next if ($verbose==1);
## don't display working devices for verbose level 2
if ($verbose==2 && ($state eq "OK" || $sta eq "ONLINE" || $sta eq "AVAIL")) {
# check for io/checksum errors
my @vdeverr = ();
if ($read != 0) { push @vdeverr, "read" };
if ($write != 0) { push @vdeverr, "write" };
if ($cksum != 0) { push @vdeverr, "cksum" };
if (scalar @vdeverr) {
$dmge=$dmge . "(" . $dev . ":" . join(", ", @vdeverr) . " errors) ";
if ($state eq "OK") { $state = "WARNING" };
}
next;
}
## show everything else
if (/^\s{3}(\S+)/) {
$dmge=$dmge . "<" . $dev . ":" . $sta . "> ";
} elsif (/^\s{7}(\S+)/) {
$dmge=$dmge . "(" . $dev . ":" . $sta . ") ";
} else {
$dmge=$dmge . $dev . ":" . $sta . " ";
}
}
}
## calling all goats!
$msg = sprintf "ZPOOL %s : %s {Size:%s Free:%s Cap:%s} %s\n", $pool, $health, $size, $avail, $cap, $dmge;
$msg = "$state $msg";
return ($ERRORS{$state},$msg);
} # end check_zpool()
sub check_capacity_limit {
my $value = shift;
if (!defined($value) || $value !~ /^\d+\z/) {
return undef;
}
if ($value < 0 || $value > 100) {
return undef;
}
return 1
}
sub check_zpool_capacity() {
my %ERRORS=('DEPENDENT'=>4,'UNKNOWN'=>3,'OK'=>0,'WARNING'=>1,'CRITICAL'=>2);
my $state="UNKNOWN";
my $msg="FAILURE";
my $pool=shift;
my $capacitylimitsref=shift;
my %capacitylimits=%$capacitylimitsref;
my $statcommand="$zpool list -H -o cap $pool";
if (! open STAT, "$statcommand|") {
print ("$state '$statcommand' command returns no result!\n");
exit $ERRORS{$state};
}
my $line = <STAT>;
close(STAT);
chomp $line;
my @row = split(/ +/, $line);
my $cap=$row[0];
## check for valid capacity value
if ($cap !~ m/^[0-9]{1,3}%$/ ) {
$state = "CRITICAL";
$msg = sprintf "ZPOOL {%s} does not exist and/or is not responding!\n", $pool;
print $state, " ", $msg;
exit ($ERRORS{$state});
}
$state="OK";
# check capacity
my $capn = $cap;
$capn =~ s/\D//g;
if (defined($capacitylimits{"warn"})) {
if ($capn >= $capacitylimits{"warn"}) {
$state = "WARNING";
}
}
if (defined($capacitylimits{"crit"})) {
if ($capn >= $capacitylimits{"crit"}) {
$state = "CRITICAL";
}
}
$msg = sprintf "ZPOOL %s : %s\n", $pool, $cap;
$msg = "$state $msg";
return ($ERRORS{$state},$msg);
} # end check_zpool_capacity()
sub check_prune_defer {
my ($config, $section) = @_;
my $limit = $config{$section}{"prune_defer"};
if (!check_capacity_limit($limit)) {
die "ERROR: invalid prune_defer limit!\n";
}
if ($limit eq 0) {
return 0;
}
my @parts = split /\//, $section, 2;
my $pool = $parts[0];
if (exists $capacitycache{$pool}) {
} else {
$capacitycache{$pool} = get_zpool_capacity($pool);
}
if ($limit < $capacitycache{$pool}) {
return 0;
}
return 1;
}
sub get_zpool_capacity {
my $pool = shift;
my $statcommand="$zpool list -H -o cap $pool";
if (! open STAT, "$statcommand|") {
die "ERROR: '$statcommand' command returns no result!\n";
}
my $line = <STAT>;
close(STAT);
chomp $line;
my @row = split(/ +/, $line);
my $cap=$row[0];
## check for valid capacity value
if ($cap !~ m/^[0-9]{1,3}%$/ ) {
die "ERROR: '$statcommand' command returned invalid capacity value ($cap)!\n";
}
$cap =~ s/\D//g;
return $cap;
}
######################################################################################################
######################################################################################################
######################################################################################################
######################################################################################################
######################################################################################################
sub checklock {
# take argument $lockname.
#
# read $run_dir/$lockname.lock for a pid on first line and a mutex on second line.
#
# check process list to see if the pid from $run_dir/$lockname.lock is still active with
# the original mutex found in $run_dir/$lockname.lock.
#
# return:
# 0 if lock is present and valid for another process
# 1 if no lock is present
# 2 if lock is present, but we own the lock
#
# shorthand - any true return indicates we are clear to lock; a false return indicates
# that somebody else already has the lock and therefore we cannot.
#
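# typical usage pattern, as seen elsewhere in this file (shown here for illustration):
#   if (checklock('sanoid_pruning')) {
#       writelock('sanoid_pruning');
#       # ... do exclusive work ...
#       removelock('sanoid_pruning');
#   }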
my $lockname = shift;
my $lockfile = "$run_dir/$lockname.lock";
if (! -e $lockfile) {
# no lockfile
return 1;
}
# make sure lockfile contains something
if ( -z $lockfile) {
# zero size lockfile, something is wrong
warn "WARN: deleting invalid/empty $lockfile\n";
unlink $lockfile;
return 1
}
# lockfile exists. read pid and mutex from it. see if it's our pid. if not, see if
# there's still a process running with that pid and with the same mutex.
open FH, "< $lockfile" or die "ERROR: unable to open $lockfile";
my @lock = <FH>;
close FH;
# if we didn't get exactly 2 items from the lock file there is a problem
if (scalar(@lock) != 2) {
warn "WARN: deleting invalid $lockfile\n";
unlink $lockfile;
return 1
}
my $lockmutex = pop(@lock);
my $lockpid = pop(@lock);
chomp $lockmutex;
chomp $lockpid;
if ($lockpid == $$) {
# we own the lockfile. no need to check any further.
return 2;
}
open PL, "$pscmd -p $lockpid -o args= |";
my @processlist = <PL>;
close PL;
my $checkmutex = pop(@processlist);
chomp $checkmutex;
if ($checkmutex eq $lockmutex) {
# lock exists, is valid, is not owned by us - return false
return 0;
} else {
# lock is present but not valid - remove and return true
unlink $lockfile;
return 1;
}
}
sub removelock {
# take argument $lockname.
#
# make sure $run_dir/$lockname.lock actually belongs to me (contains my pid and mutex)
# and remove it if it does, die if it doesn't.
my $lockname = shift;
my $lockfile = "$run_dir/$lockname.lock";
if (checklock($lockname) == 2) {
unlink $lockfile;
return;
} elsif (checklock($lockname) == 1) {
die "ERROR: No valid lockfile found - Did a rogue process or user update or delete it?\n";
} else {
die "ERROR: A valid lockfile exists but does not belong to me! I refuse to remove it.\n";
}
}
sub writelock {
# take argument $lockname.
#
# write a lockfile to $run_dir/$lockname.lock with first line
# being my pid and second line being my mutex.
my $lockname = shift;
my $lockfile = "$run_dir/$lockname.lock";
# die honorably rather than overwriting a valid, existing lock
if (! checklock($lockname)) {
die "ERROR: Valid lock already exists - I refuse to overwrite it. Committing seppuku now.\n";
}
my $pid = $$;
open PL, "$pscmd -p $$ -o args= |";
my @processlist = <PL>;
close PL;
my $mutex = pop(@processlist);
chomp $mutex;
open FH, "> $lockfile";
print FH "$pid\n";
print FH "$mutex\n";
close FH;
}
sub iszfsbusy {
# check to see if ZFS filesystem passed in as argument currently has a zfs send or zfs receive process referencing it.
# return true if busy (currently being sent or received), return false if not.
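# illustrative example: a process listing line such as
#   "zfs send -I tank/data@old tank/data@new"
# matches the pattern below for $fs = tank/data.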
my $fs = shift;
# if (args{'debug'}) { print "DEBUG: checking to see if $fs on is already in zfs receive using $pscmd -Ao args= ...\n"; }
open PL, "$pscmd -Ao args= |";
my @processes = <PL>;
close PL;
foreach my $process (@processes) {
# if ($args{'debug'}) { print "DEBUG: checking process $process...\n"; }
if ($process =~ /zfs *(send|receive|recv).*$fs/) {
# there's already a zfs send/receive process for our target filesystem - return true
# if ($args{'debug'}) { print "DEBUG: process $process matches target $fs!\n"; }
return 1;
}
}
# no zfs receive processes for our target filesystem found - return false
return 0;
}
#######################################################################################################################
#######################################################################################################################
#######################################################################################################################
sub getchilddatasets {
# for later, if we make sanoid itself support sudo use
my $fs = shift;
my $mysudocmd = '';
my $getchildrencmd = "$mysudocmd $zfs list -o name -t filesystem,volume -Hr $fs |";
if ($args{'debug'}) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; }
open FH, $getchildrencmd;
my @children = <FH>;
close FH;
# parent dataset is the first element
shift @children;
return @children;
}
#######################################################################################################################
#######################################################################################################################
#######################################################################################################################
sub removecachedsnapshots {
my $wait = shift;
if (not %pruned) {
return;
}
my $unlocked = checklock('sanoid_cacheupdate');
if ($wait != 1 && not $unlocked) {
if ($args{'verbose'}) { print "INFO: deferring cache update (snapshot removal) - valid cache update lock held by another sanoid process.\n"; }
return;
}
# wait until we can get a lock to do our cache changes
while (not $unlocked) {
if ($args{'verbose'}) { print "INFO: waiting for cache update lock held by another sanoid process.\n"; }
sleep(10);
$unlocked = checklock('sanoid_cacheupdate');
}
writelock('sanoid_cacheupdate');
if ($args{'verbose'}) {
print "INFO: removing destroyed snapshots from cache.\n";
}
open FH, "< $cache";
my @rawsnaps = <FH>;
close FH;
open FH, "> $cache" or die 'Could not write to $cache!\n';
foreach my $snapline ( @rawsnaps ) {
my @columns = split("\t", $snapline);
my $snap = $columns[0];
print FH $snapline unless ( exists($pruned{$snap}) );
}
close FH;
removelock('sanoid_cacheupdate');
%snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate);
# clear hash
undef %pruned;
}
#######################################################################################################################
#######################################################################################################################
#######################################################################################################################
sub runscript {
my $key=shift;
my $dataset=shift;
my $timeout=$config{$dataset}{'script_timeout'};
my $ret;
eval {
if ($timeout > 0) {
local $SIG{ALRM} = sub { die "alarm\n" };
alarm $timeout;
}
$ret = system($config{$dataset}{$key});
alarm 0;
};
if ($@) {
if ($@ eq "alarm\n") {
warn "WARN: $key didn't finish in the allowed time!";
} else {
warn "CRITICAL ERROR: $@";
}
return -1;
} else {
if ($ret != 0) {
warn "WARN: $key failed, $?";
}
}
return $ret;
}
#######################################################################################################################
#######################################################################################################################
#######################################################################################################################
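# convertTimePeriod() normalizes a value like "90m" or "2d" into seconds;
# illustrative examples (not from the original source), assuming an hourly
# fallback period of 60*60:
#   convertTimePeriod('90m',  60*60) == 5400    # explicit minutes suffix wins
#   convertTimePeriod('28',   60*60) == 100800  # bare number uses the fallback period
#   convertTimePeriod('oops', 60*60) == 1       # invalid input falls back to 1 second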
sub convertTimePeriod {
my $value=shift;
my $period=shift;
if ($value =~ /^\d+[yY]$/) {
$period = 60*60*24*365.25;
chop $value;
} elsif ($value =~ /^\d+[wW]$/) {
$period = 60*60*24*7;
chop $value;
} elsif ($value =~ /^\d+[dD]$/) {
$period = 60*60*24;
chop $value;
} elsif ($value =~ /^\d+[hH]$/) {
$period = 60*60;
chop $value;
} elsif ($value =~ /^\d+[mM]$/) {
$period = 60;
chop $value;
} elsif ($value =~ /^\d+[sS]$/) {
$period = 1;
chop $value;
} elsif ($value =~ /^\d+$/) {
# no unit given - use the provided fallback period as-is
} else {
# invalid value, return smallest valid value as fallback
# (will trigger a warning message for monitoring for sure)
return 1;
}
return $value * $period;
}
__END__
=head1 NAME
sanoid - ZFS snapshot management and replication tool
=head1 SYNOPSIS
sanoid [options]
Assumes --cron --verbose if no other arguments (other than configdir) are specified
Options:
--configdir=DIR Specify a directory to find config file sanoid.conf
--cache-dir=DIR Specify a directory to store the zfs snapshot cache
--run-dir=DIR Specify a directory for temporary files such as lock files
--cron Creates snapshots and purges expired snapshots
--verbose Prints out additional information during a sanoid run
--readonly Simulates creation/deletion of snapshots
--quiet Suppresses non-error output
--force-update Clears out sanoid's zfs snapshot cache
--monitor-health Reports on zpool "health", in a Nagios compatible format
--monitor-capacity Reports on zpool capacity, in a Nagios compatible format
--monitor-snapshots Reports on snapshot "health", in a Nagios compatible format
--take-snapshots Creates snapshots as specified in sanoid.conf
--prune-snapshots Purges expired snapshots as specified in sanoid.conf
--force-prune Purges expired snapshots even if a send/recv is in progress
--help Prints this helptext
--version Prints the version number
--debug Prints out a lot of additional information during a sanoid run
sanoid-2.2.0/sanoid.conf 0000664 0000000 0000000 00000007073 14455537001 0015127 0 ustar 00root root 0000000 0000000 ######################################
# This is a sample sanoid.conf file. #
# It should go in /etc/sanoid. #
######################################
## name your backup modules with the path to their ZFS dataset - no leading slash.
#[zpoolname/datasetname]
# # pick one or more templates - they're defined (and editable) below. Comma separated, processed in order.
# # in this example, template_demo's daily value overrides template_production's daily value.
# use_template = production,demo
#
# # if you want to, you can override settings in the template directly inside module definitions like this.
# # in this example, we override the template to only keep 12 hourly and 1 monthly snapshot for this dataset.
# hourly = 12
# monthly = 1
#
## you can also handle datasets recursively.
#[zpoolname/parent]
# use_template = production
# recursive = yes
# # if you want sanoid to manage the child datasets but leave this one alone, set process_children_only.
# process_children_only = yes
#
## you can selectively override settings for child datasets which already fall under a recursive definition.
#[zpoolname/parent/child]
# # child datasets already initialized won't be wiped out, so if you use a new template, it will
# # only override the values already set by the parent template, not replace it completely.
# use_template = demo
# you can also handle datasets recursively in an atomic way without the possibility to override settings for child datasets.
[zpoolname/parent2]
use_template = production
recursive = zfs
#############################
# templates below this line #
#############################
# name your templates template_templatename. you can create your own, and use them in your module definitions above.
[template_demo]
daily = 60
[template_production]
frequently = 0
hourly = 36
daily = 30
monthly = 3
yearly = 0
autosnap = yes
autoprune = yes
[template_backup]
autoprune = yes
frequently = 0
hourly = 30
daily = 90
monthly = 12
yearly = 0
### don't take new snapshots - snapshots on backup
### datasets are replicated in from source, not
### generated locally
autosnap = no
### monitor hourlies and dailies, but don't warn or
### crit until they're over 48h old, since replication
### is typically daily only
hourly_warn = 2880
hourly_crit = 3600
daily_warn = 48
daily_crit = 60
[template_hotspare]
autoprune = yes
frequently = 0
hourly = 30
daily = 90
monthly = 3
yearly = 0
### don't take new snapshots - snapshots on backup
### datasets are replicated in from source, not
### generated locally
autosnap = no
### monitor hourlies and dailies, but don't warn or
### crit until they're over 4h old, since replication
### is typically hourly only
hourly_warn = 4h
hourly_crit = 6h
daily_warn = 2d
daily_crit = 4d
[template_scripts]
### information about the snapshot will be supplied as environment variables,
### see the README.md file for details about what is passed when.
### run script before snapshot
pre_snapshot_script = /path/to/script.sh
### run script after snapshot
post_snapshot_script = /path/to/script.sh
### run script before pruning snapshot
pre_pruning_script = /path/to/script.sh
### run script after pruning snapshot
pruning_script = /path/to/script.sh
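### illustrative example of a minimal hook script (the paths above are placeholders);
### sanoid exports variables such as SANOID_SCRIPT, SANOID_SNAPNAME and SANOID_TARGET:
###   #!/bin/sh
###   echo "$SANOID_SCRIPT: $SANOID_SNAPNAME on $SANOID_TARGET" >> /var/log/sanoid-hooks.log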
### don't take an inconsistent snapshot (skip if pre script fails)
#no_inconsistent_snapshot = yes
### run post_snapshot_script when pre_snapshot_script is failing
#force_post_snapshot_script = yes
### limit allowed execution time of scripts before continuing (<= 0: infinite)
script_timeout = 5
[template_ignore]
autoprune = no
autosnap = no
monitor = no
sanoid-2.2.0/sanoid.defaults.conf 0000664 0000000 0000000 00000011115 14455537001 0016725 0 ustar 00root root 0000000 0000000 ###################################################################################
# default template - DO NOT EDIT THIS FILE DIRECTLY. #
# If you wish to override default values, you can create your #
# own [template_default] in /etc/sanoid/sanoid.conf. #
# #
# you have been warned. #
###################################################################################
[version]
version = 2
[template_default]
# these settings don't make sense in a template, but we use the defaults file
# as our list of allowable settings also, so they need to be present here even if
# unset.
path =
recursive =
use_template =
process_children_only =
skip_children =
# See "Sanoid script hooks" in README.md for information about scripts.
pre_snapshot_script =
post_snapshot_script =
pre_pruning_script =
pruning_script =
script_timeout = 5
no_inconsistent_snapshot =
force_post_snapshot_script =
# for snapshots shorter than one hour, the period duration must be defined
# in minutes. Because they are executed within a full hour, the selected
# value should divide 60 minutes without remainder so taken snapshots
# are apart in equal intervals. Values larger than 59 aren't practical
# as only one snapshot will be taken on each full hour in this case.
# examples:
# frequent_period = 15 -> four snapshot each hour 15 minutes apart
# frequent_period = 5 -> twelve snapshots each hour 5 minutes apart
# frequent_period = 45 -> two snapshots each hour with different time gaps
# between them: 45 minutes and 15 minutes in this case
frequent_period = 15
# If any snapshot type is set to 0, we will not take snapshots for it - and will immediately
# prune any of those type snapshots already present.
#
# Otherwise, if autoprune is set, we will prune any snapshots of that type which are older
# than (setting * periodicity) - so if daily = 90, we'll prune any dailies older than 90 days.
autoprune = yes
frequently = 0
hourly = 48
daily = 90
weekly = 0
monthly = 6
yearly = 0
# pruning can be skipped based on the used capacity of the pool
# (0: always prune, 1-100: only prune if used capacity is greater than this value)
prune_defer = 0
# We will automatically take snapshots if autosnap is on, at the desired times configured
# below (or immediately, if we don't have one since the last preferred time for that type).
#
# Note that we will not take snapshots for a given type if that type is set to 0 above,
# regardless of the autosnap setting - for example, if yearly=0 we will not take yearlies
# even if we've defined a preferred time for yearlies and autosnap is on.
autosnap = 1
# hourly - top of the hour
hourly_min = 0
# daily - at 23:59 (most people expect a daily to contain everything done DURING that day)
daily_hour = 23
daily_min = 59
# weekly -at 23:30 each Monday
weekly_wday = 1
weekly_hour = 23
weekly_min = 30
# monthly - immediately at the beginning of the month (ie 00:00 of day 1)
monthly_mday = 1
monthly_hour = 0
monthly_min = 0
# yearly - immediately at the beginning of the year (ie 00:00 on Jan 1)
yearly_mon = 1
yearly_mday = 1
yearly_hour = 0
yearly_min = 0
# monitoring plugin - define warn / crit levels for each snapshot type by age, in units of one period down
# example hourly_warn = 90 means issue WARNING if most recent hourly snapshot is not less than 90 minutes old,
# daily_crit = 36 means issue CRITICAL if most recent daily snapshot is not less than 36 hours old,
# monthly_warn = 5 means issue WARNING if most recent monthly snapshot is not less than 5 weeks old... etc.
# the following time case insensitive suffixes can also be used:
# y = years, w = weeks, d = days, h = hours, m = minutes, s = seconds
#
# monitor_dont_warn = yes will cause the monitoring service to report warnings as text, but with status OK.
# monitor_dont_crit = yes will cause the monitoring service to report criticals as text, but with status OK.
#
# setting any value to 0 will keep the monitoring service from monitoring that snapshot type on that section at all.
monitor = yes
monitor_dont_warn = no
monitor_dont_crit = no
frequently_warn = 0
frequently_crit = 0
hourly_warn = 90m
hourly_crit = 360m
daily_warn = 28h
daily_crit = 32h
weekly_warn = 0
weekly_crit = 0
monthly_warn = 32d
monthly_crit = 40d
yearly_warn = 0
yearly_crit = 0
# default limits for capacity checks (if set to 0, limit will not be checked)
# to override these values, specify them in a root pool section ([tank]\n ...)
capacity_warn = 80
capacity_crit = 95
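# illustrative override in /etc/sanoid/sanoid.conf:
#   [tank]
#       capacity_warn = 70
#       capacity_crit = 90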
sanoid-2.2.0/sleepymutex 0000775 0000000 0000000 00000000476 14455537001 0015315 0 ustar 00root root 0000000 0000000 #!/bin/bash
# this is just a cheap way to trigger mutex-based checks for process activity.
#
# ie ./sleepymutex zfs receive data/lolz if you want a mutex hanging around
# as long as necessary that will show up to any routine that actively does
# something like "ps axo | grep 'zfs receive'" or whatever.
sleep 99999
sanoid-2.2.0/syncoid 0000775 0000000 0000000 00000233554 14455537001 0014406 0 ustar 00root root 0000000 0000000 #!/usr/bin/perl
# this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved
# from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this
# project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE.
$::VERSION = '2.2.0';
use strict;
use warnings;
use Data::Dumper;
use Getopt::Long qw(:config auto_version auto_help);
use Pod::Usage;
use Time::Local;
use Sys::Hostname;
use Capture::Tiny ':all';
my $mbuffer_size = "16M";
my $pvoptions = "-p -t -e -r -b";
# Blank defaults to use ssh client's default
# TODO: Merge into a single "sshflags" option?
my %args = ('sshconfig' => '', 'sshkey' => '', 'sshport' => '', 'sshcipher' => '', 'sshoption' => [], 'target-bwlimit' => '', 'source-bwlimit' => '');
GetOptions(\%args, "no-command-checks", "monitor-version", "compress=s", "dumpsnaps", "recursive|r", "sendoptions=s", "recvoptions=s",
"source-bwlimit=s", "target-bwlimit=s", "sshconfig=s", "sshkey=s", "sshport=i", "sshcipher|c=s", "sshoption|o=s@",
"debug", "quiet", "no-stream", "no-sync-snap", "no-resume", "exclude=s@", "skip-parent", "identifier=s",
"no-clone-handling", "no-privilege-elevation", "force-delete", "no-rollback", "create-bookmark", "use-hold",
"pv-options=s" => \$pvoptions, "keep-sync-snap", "preserve-recordsize", "mbuffer-size=s" => \$mbuffer_size,
"delete-target-snapshots", "insecure-direct-connection=s", "preserve-properties")
or pod2usage(2);
my %compressargs = %{compressargset($args{'compress'} || 'default')}; # Can't be done with GetOptions arg, as default still needs to be set
my @sendoptions = ();
if (length $args{'sendoptions'}) {
@sendoptions = parsespecialoptions($args{'sendoptions'});
if (! defined($sendoptions[0])) {
warn "invalid send options!";
pod2usage(2);
exit 127;
}
if (defined $args{'recursive'}) {
foreach my $option(@sendoptions) {
if ($option->{option} eq 'R') {
warn "invalid argument combination, zfs send -R and --recursive aren't compatible!";
pod2usage(2);
exit 127;
}
}
}
}
my @recvoptions = ();
if (length $args{'recvoptions'}) {
@recvoptions = parsespecialoptions($args{'recvoptions'});
if (! defined($recvoptions[0])) {
warn "invalid receive options!";
pod2usage(2);
exit 127;
}
}
# TODO Expand to accept multiple sources?
if (scalar(@ARGV) != 2) {
print("Source or target not found!\n");
pod2usage(2);
exit 127;
} else {
$args{'source'} = $ARGV[0];
$args{'target'} = $ARGV[1];
}
# Could possibly merge these into an options function
if (length $args{'source-bwlimit'}) {
$args{'source-bwlimit'} = "-R $args{'source-bwlimit'}";
}
if (length $args{'target-bwlimit'}) {
$args{'target-bwlimit'} = "-r $args{'target-bwlimit'}";
}
$args{'streamarg'} = (defined $args{'no-stream'} ? '-i' : '-I');
my $rawsourcefs = $args{'source'};
my $rawtargetfs = $args{'target'};
my $debug = $args{'debug'};
my $quiet = $args{'quiet'};
my $resume = !$args{'no-resume'};
# for compatibility reasons, older versions used hardcoded command paths
$ENV{'PATH'} = $ENV{'PATH'} . ":/bin:/usr/bin:/sbin";
my $zfscmd = 'zfs';
my $zpoolcmd = 'zpool';
my $sshcmd = 'ssh';
my $pscmd = 'ps';
my $pvcmd = 'pv';
my $mbuffercmd = 'mbuffer';
my $socatcmd = 'socat';
my $sudocmd = 'sudo';
my $mbufferoptions = "-q -s 128k -m $mbuffer_size";
# currently using POSIX compatible command to check for program existence because we aren't depending on perl
# being present on remote machines.
my $checkcmd = 'command -v';
if (length $args{'sshcipher'}) {
$args{'sshcipher'} = "-c $args{'sshcipher'}";
}
if (length $args{'sshport'}) {
$args{'sshport'} = "-p $args{'sshport'}";
}
if (length $args{'sshconfig'}) {
$args{'sshconfig'} = "-F $args{'sshconfig'}";
}
if (length $args{'sshkey'}) {
$args{'sshkey'} = "-i $args{'sshkey'}";
}
my $sshoptions = join " ", map { "-o " . $_ } @{$args{'sshoption'}}; # deref required
my $identifier = "";
if (length $args{'identifier'}) {
if ($args{'identifier'} !~ /^[a-zA-Z0-9-_:.]+$/) {
# invalid extra identifier
print("CRITICAL: extra identifier contains invalid chars!\n");
pod2usage(2);
exit 127;
}
$identifier = "$args{'identifier'}_";
}
# figure out if source and/or target are remote.
$sshcmd = "$sshcmd $args{'sshconfig'} $args{'sshcipher'} $sshoptions $args{'sshport'} $args{'sshkey'}";
if ($debug) { print "DEBUG: SSHCMD: $sshcmd\n"; }
my ($sourcehost,$sourcefs,$sourceisroot) = getssh($rawsourcefs);
my ($targethost,$targetfs,$targetisroot) = getssh($rawtargetfs);
my $sourcesudocmd = $sourceisroot ? '' : $sudocmd;
my $targetsudocmd = $targetisroot ? '' : $sudocmd;
if (!defined $sourcehost) { $sourcehost = ''; }
if (!defined $targethost) { $targethost = ''; }
# handle insecure direct connection arguments
my $directconnect = "";
my $directlisten = "";
my $directtimeout = 60;
my $directmbuffer = 0;
if (length $args{'insecure-direct-connection'}) {
if ($sourcehost ne '' && $targethost ne '') {
print("CRITICAL: relaying between remote hosts is not supported with insecure direct connection!\n");
pod2usage(2);
exit 127;
}
my @parts = split(',', $args{'insecure-direct-connection'});
if (scalar @parts > 4) {
print("CRITICAL: invalid insecure-direct-connection argument!\n");
pod2usage(2);
exit 127;
} elsif (scalar @parts >= 2) {
$directconnect = $parts[0];
$directlisten = $parts[1];
} else {
$directconnect = $args{'insecure-direct-connection'};
$directlisten = $args{'insecure-direct-connection'};
}
if (scalar @parts == 3) {
$directtimeout = $parts[2];
}
if (scalar @parts == 4) {
if ($parts[3] eq "mbuffer") {
$directmbuffer = 1;
}
}
}
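# accepted shapes for the --insecure-direct-connection argument, per the parsing
# above (addresses are illustrative):
#   ADDR                           - same endpoint used to connect and to listen
#   CONNECT,LISTEN                 - separate connect and listen endpoints
#   CONNECT,LISTEN,TIMEOUT         - plus a timeout in seconds (default 60)
#   CONNECT,LISTEN,TIMEOUT,mbuffer - the literal word "mbuffer" as the fourth
#                                    field additionally sets the mbuffer mode flag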
# figure out whether compression, mbuffering, pv
# are available on source, target, local machines.
# warn user of anything missing, then continue with sync.
my %avail = checkcommands();
my %snaps;
my $exitcode = 0;
my $replicationCount = 0;
## break here to call replication individually so that we ##
## can loop across children separately, for recursive ##
## replication ##
if (!defined $args{'recursive'}) {
syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef);
} else {
if ($debug) { print "DEBUG: recursive sync of $sourcefs.\n"; }
my @datasets = getchilddatasets($sourcehost, $sourcefs, $sourceisroot);
if (!@datasets) {
warn "CRITICAL ERROR: no datasets found";
@datasets = ();
$exitcode = 2;
}
my @deferred;
foreach my $datasetProperties(@datasets) {
my $dataset = $datasetProperties->{'name'};
my $origin = $datasetProperties->{'origin'};
if ($origin eq "-" || defined $args{'no-clone-handling'}) {
$origin = undef;
} else {
# check if clone source is replicated too
my @values = split(/@/, $origin, 2);
my $srcdataset = $values[0];
my $found = 0;
foreach my $datasetProperties(@datasets) {
if ($datasetProperties->{'name'} eq $srcdataset) {
$found = 1;
last;
}
}
if ($found == 0) {
# clone source is not replicated, do a full replication
$origin = undef;
} else {
# clone source is replicated, defer until all non clones are replicated
push @deferred, $datasetProperties;
next;
}
}
$dataset =~ s/\Q$sourcefs\E//;
chomp $dataset;
my $childsourcefs = $sourcefs . $dataset;
my $childtargetfs = $targetfs . $dataset;
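# illustrative example: with sourcefs 'pool/data' and child 'pool/data/www',
# $dataset becomes '/www', giving childsourcefs 'pool/data/www' and
# childtargetfs "$targetfs/www" - the child tree is mirrored under the target.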
# print "syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs); \n";
syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin);
}
# replicate cloned datasets and if this is the initial run, recreate them on the target
foreach my $datasetProperties(@deferred) {
my $dataset = $datasetProperties->{'name'};
my $origin = $datasetProperties->{'origin'};
$dataset =~ s/\Q$sourcefs\E//;
chomp $dataset;
my $childsourcefs = $sourcefs . $dataset;
my $childtargetfs = $targetfs . $dataset;
syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin);
}
}
# close SSH sockets for master connections as applicable
if ($sourcehost ne '') {
open FH, "$sshcmd $sourcehost -O exit 2>&1 |";
close FH;
}
if ($targethost ne '') {
open FH, "$sshcmd $targethost -O exit 2>&1 |";
close FH;
}
exit $exitcode;
##############################################################################
##############################################################################
##############################################################################
##############################################################################
sub getchilddatasets {
my ($rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $getchildrencmd = "$rhost $mysudocmd $zfscmd list -o name,origin -t filesystem,volume -Hr $fsescaped |";
if ($debug) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; }
if (! open FH, $getchildrencmd) {
die "ERROR: list command failed!\n";
}
my @children;
my $first = 1;
DATASETS: while (<FH>) {
chomp;
if (defined $args{'skip-parent'} && $first eq 1) {
# parent dataset is the first element
$first = 0;
next;
}
my ($dataset, $origin) = /^([^\t]+)\t([^\t]+)/;
if (defined $args{'exclude'}) {
my $excludes = $args{'exclude'};
foreach (@$excludes) {
if ($dataset =~ /$_/) {
if ($debug) { print "DEBUG: excluded $dataset because of $_\n"; }
next DATASETS;
}
}
}
my %properties;
$properties{'name'} = $dataset;
$properties{'origin'} = $origin;
push @children, \%properties;
}
close FH;
return @children;
}
sub syncdataset {
my ($sourcehost, $sourcefs, $targethost, $targetfs, $origin, $skipsnapshot) = @_;
my $stdout;
my $exit;
my $sourcefsescaped = escapeshellparam($sourcefs);
my $targetfsescaped = escapeshellparam($targetfs);
# if no rollbacks are allowed, disable forced receive
my $forcedrecv = "-F";
if (defined $args{'no-rollback'}) {
$forcedrecv = "";
}
if ($debug) { print "DEBUG: syncing source $sourcefs to target $targetfs.\n"; }
my ($sync, $error) = getzfsvalue($sourcehost,$sourcefs,$sourceisroot,'syncoid:sync');
if (!defined $sync) {
# zfs already printed the corresponding error
if ($error =~ /\bdataset does not exist\b/ && $replicationCount > 0) {
if (!$quiet) { print "WARN Skipping dataset (dataset no longer exists): $sourcefs...\n"; }
return 0;
}
else {
# print the error out and set exit code
print "ERROR: $error\n";
if ($exitcode < 2) { $exitcode = 2 }
}
return 0;
}
if ($sync eq 'true' || $sync eq '-' || $sync eq '') {
# empty is handled the same as unset (aka: '-')
# definitely sync this dataset - if a host is called 'true' or '-', then you're special
} elsif ($sync eq 'false') {
if (!$quiet) { print "INFO: Skipping dataset (syncoid:sync=false): $sourcefs...\n"; }
return 0;
} else {
my $hostid = hostname();
my @hosts = split(/,/,$sync);
if (!(grep $hostid eq $_, @hosts)) {
if (!$quiet) { print "INFO: Skipping dataset (syncoid:sync doesn't include $hostid): $sourcefs...\n"; }
return 0;
}
}
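# the syncoid:sync user property can be set on the source to control this
# behavior, e.g. (hypothetical dataset and hostnames):
#   zfs set syncoid:sync=false pool/ds             # never sync this dataset
#   zfs set syncoid:sync=backup1,backup2 pool/ds   # only sync when running on backup1 or backup2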
# make sure target is not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
if ($exitcode < 1) { $exitcode = 1; }
return 0;
}
# does the target filesystem exist yet?
my $targetexists = targetexists($targethost,$targetfs,$targetisroot);
my $receiveextraargs = "";
my $receivetoken;
if ($resume) {
# save state of interrupted receive stream
$receiveextraargs = "-s";
if ($targetexists) {
# check remote dataset for receive resume token (interrupted receive)
$receivetoken = getreceivetoken($targethost,$targetfs,$targetisroot);
if ($debug && defined($receivetoken)) {
print "DEBUG: got receive resume token: $receivetoken: \n";
}
}
}
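# a resume token is exposed by ZFS as the receive_resume_token property on a
# partially received dataset; getreceivetoken() reads it so we can later run
# 'zfs send -t <token>' to pick up the interrupted stream where it left off.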
my $newsyncsnap;
my $matchingsnap;
# skip snapshot checking/creation in case of resumed receive
if (!defined($receivetoken)) {
# build hashes of the snaps on the source and target filesystems.
%snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot);
if ($targetexists) {
my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot);
my %sourcesnaps = %snaps;
%snaps = (%sourcesnaps, %targetsnaps);
}
if (defined $args{'dumpsnaps'}) {
print "merged snapshot list of $targetfs: \n";
dumphash(\%snaps);
print "\n\n\n";
}
if (!defined $args{'no-sync-snap'} && !defined $skipsnapshot) {
# create a new syncoid snapshot on the source filesystem.
$newsyncsnap = newsyncsnap($sourcehost,$sourcefs,$sourceisroot);
if (!$newsyncsnap) {
# we already whined about the error
return 0;
}
} else {
# we don't want sync snapshots created, so use the newest snapshot we can find.
$newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot);
if ($newsyncsnap eq 0) {
warn "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.\n";
if ($exitcode < 1) { $exitcode = 1; }
return 0;
}
}
}
my $newsyncsnapescaped = escapeshellparam($newsyncsnap);
# there is currently (2014-09-01) a bug in ZFS on Linux
# that causes readonly to always show on if it's EVER
# been turned on... even when it's off... unless and
# until the filesystem is zfs umounted and zfs remounted.
# we're going to do the right thing anyway.
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
#my $originaltargetreadonly;
my $sendoptions = getoptionsline(\@sendoptions, ('D','L','P','R','c','e','h','p','v','w'));
my $recvoptions = getoptionsline(\@recvoptions, ('h','o','x','u','v'));
# sync 'em up.
if (! $targetexists) {
# do an initial sync from the oldest source snapshot
# THEN do an -I to the newest
if ($debug) {
if (!defined ($args{'no-stream'}) ) {
print "DEBUG: target $targetfs does not exist. Finding oldest available snapshot on source $sourcefs ...\n";
} else {
print "DEBUG: target $targetfs does not exist, and --no-stream selected. Finding newest available snapshot on source $sourcefs ...\n";
}
}
my $oldestsnap = getoldestsnapshot(\%snaps);
if (! $oldestsnap) {
if (defined ($args{'no-sync-snap'}) ) {
# we already whined about the missing snapshots
return 0;
}
# getoldestsnapshot() returned false, so use new sync snapshot
if ($debug) { print "DEBUG: getoldestsnapshot() returned false, so using $newsyncsnap.\n"; }
$oldestsnap = $newsyncsnap;
}
# if --no-stream is specified, our full needs to be the newest snapshot, not the oldest.
if (defined $args{'no-stream'}) {
if (defined ($args{'no-sync-snap'}) ) {
$oldestsnap = getnewestsnapshot(\%snaps);
} else {
$oldestsnap = $newsyncsnap;
}
}
my $oldestsnapescaped = escapeshellparam($oldestsnap);
if (defined $args{'preserve-properties'}) {
my %properties = getlocalzfsvalues($sourcehost,$sourcefs,$sourceisroot);
foreach my $key (keys %properties) {
my $value = $properties{$key};
if ($debug) { print "DEBUG: will set $key to $value ...\n"; }
$recvoptions .= " -o $key=$value";
}
} elsif (defined $args{'preserve-recordsize'}) {
my $type = getzfsvalue($sourcehost,$sourcefs,$sourceisroot,'type');
if ($type eq "filesystem") {
my $recordsize = getzfsvalue($sourcehost,$sourcefs,$sourceisroot,'recordsize');
$recvoptions .= "-o recordsize=$recordsize";
}
}
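# e.g. --preserve-recordsize on a filesystem with recordsize=1M results in
# 'zfs receive -o recordsize=1M ...', so the target keeps the source's
# recordsize instead of inheriting the target pool's default.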
my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions $sourcefsescaped\@$oldestsnapescaped";
my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped";
my $pvsize;
if (defined $origin) {
my $originescaped = escapeshellparam($origin);
$sendcmd = "$sourcesudocmd $zfscmd send $sendoptions -i $originescaped $sourcefsescaped\@$oldestsnapescaped";
my $streamargBackup = $args{'streamarg'};
$args{'streamarg'} = "-i";
$pvsize = getsendsize($sourcehost,$origin,"$sourcefs\@$oldestsnap",$sourceisroot);
$args{'streamarg'} = $streamargBackup;
} else {
$pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap",0,$sourceisroot);
}
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = 'UNKNOWN'; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) {
if (defined $origin) {
print "INFO: Clone is recreated on target $targetfs based on $origin\n";
}
if (!defined ($args{'no-stream'}) ) {
print "INFO: Sending oldest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n";
} else {
print "INFO: --no-stream selected; sending newest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n";
}
}
if ($debug) { print "DEBUG: $synccmd\n"; }
# make sure target is (still) not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
if ($exitcode < 1) { $exitcode = 1; }
return 0;
}
system($synccmd) == 0 or do {
if (defined $origin) {
print "INFO: clone creation failed, trying ordinary replication as fallback\n";
syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef, 1);
return 0;
}
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
};
# now do an -I to the new sync snapshot, assuming there were any snapshots
# other than the new sync snapshot to begin with, of course - and that we
# aren't invoked with --no-stream, in which case a full of the newest snap
# available was all we needed to do
if (!defined ($args{'no-stream'}) && ($oldestsnap ne $newsyncsnap) ) {
# get current readonly status of target, then set it to on during sync
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
# $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly');
# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');
$sendcmd = "$sourcesudocmd $zfscmd send $sendoptions $args{'streamarg'} $sourcefsescaped\@$oldestsnapescaped $sourcefsescaped\@$newsyncsnapescaped";
$pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
$disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
$synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
# make sure target is (still) not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
if ($exitcode < 1) { $exitcode = 1; }
return 0;
}
if (!$quiet) { print "INFO: Updating new target filesystem with incremental $sourcefs\@$oldestsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
if ($oldestsnap ne $newsyncsnap) {
my $ret = system($synccmd);
if ($ret != 0) {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 1) { $exitcode = 1; }
return 0;
}
} else {
if (!$quiet) { print "INFO: no incremental sync needed; $oldestsnap is already the newest available snapshot.\n"; }
}
# restore original readonly value to target after sync complete
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
}
} else {
# resume an interrupted receive if there is a valid resume token;
# because this will only resume the receive up to the next
# snapshot, do a normal sync after that
if (defined($receivetoken)) {
$sendoptions = getoptionsline(\@sendoptions, ('P','e','v','w'));
my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions -t $receivetoken";
my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped 2>&1";
my $pvsize = getsendsize($sourcehost,"","",$sourceisroot,$receivetoken);
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) { print "Resuming interrupted zfs send/receive from $sourcefs to $targetfs (~ $disp_pvsize remaining):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
if ($pvsize == 0) {
# we need to capture the stderr of zfs send; this renders pv useless, but in
# this case it doesn't matter, because we don't know the estimated send size
# anyway (probably because the initial snapshot used for the resumed send no
# longer exists)
($stdout, $exit) = tee_stderr {
system("$synccmd")
};
} else {
($stdout, $exit) = tee_stdout {
system("$synccmd")
};
}
$exit == 0 or do {
if (
$stdout =~ /\Qused in the initial send no longer exists\E/ ||
$stdout =~ /incremental source [0-9xa-f]+ no longer exists/
) {
if (!$quiet) { print "WARN: resetting partially receive state because the snapshot source no longer exists\n"; }
resetreceivestate($targethost,$targetfs,$targetisroot);
# do a normal sync cycle
return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, $origin);
} else {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
};
# a resumed transfer will only be done to the next snapshot,
# so do a normal sync cycle
return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef);
}
# find most recent matching snapshot and do an -I
# to the new snapshot
# get current readonly status of target, then set it to on during sync
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
# $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly');
# setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on');
my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used');
my $bookmark = 0;
my $bookmarkcreation = 0;
$matchingsnap = getmatchingsnapshot($sourcefs, $targetfs, \%snaps);
if (! $matchingsnap) {
# no matching snapshots, check for bookmarks as fallback
my %bookmarks = getbookmarks($sourcehost,$sourcefs,$sourceisroot);
# check for matching guid of source bookmark and target snapshot (oldest first)
foreach my $snap ( sort { $snaps{'target'}{$b}{'creation'}<=>$snaps{'target'}{$a}{'creation'} } keys %{ $snaps{'target'} }) {
my $guid = $snaps{'target'}{$snap}{'guid'};
if (defined $bookmarks{$guid}) {
# found a match
$bookmark = $bookmarks{$guid}{'name'};
$bookmarkcreation = $bookmarks{$guid}{'creation'};
$matchingsnap = $snap;
last;
}
}
if (! $bookmark) {
if ($args{'force-delete'}) {
if (!$quiet) { print "Removing $targetfs because no matching snapshots were found\n"; }
my $rcommand = '';
my $mysudocmd = '';
my $targetfsescaped = escapeshellparam($targetfs);
if ($targethost ne '') { $rcommand = "$sshcmd $targethost"; }
if (!$targetisroot) { $mysudocmd = $sudocmd; }
my $prunecmd = "$mysudocmd $zfscmd destroy -r $targetfsescaped; ";
if ($targethost ne '') {
$prunecmd = escapeshellparam($prunecmd);
}
my $ret = system("$rcommand $prunecmd");
if ($ret != 0) {
warn "WARNING: $rcommand $prunecmd failed: $?";
} else {
# redo sync and skip snapshot creation (already taken)
return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef, 1);
}
}
# if we got this far, we failed to find a matching snapshot/bookmark.
if ($exitcode < 2) { $exitcode = 2; }
print "\n";
print "CRITICAL ERROR: Target $targetfs exists but has no snapshots matching with $sourcefs!\n";
print " Replication to target would require destroying existing\n";
print " target. Cowardly refusing to destroy your existing target.\n\n";
# experience tells me we need a mollyguard for people who try to
# zfs create targetpool/targetsnap ; syncoid sourcepool/sourcesnap targetpool/targetsnap ...
if ( $targetsize < (64*1024*1024) ) {
print " NOTE: Target $targetfs dataset is < 64MB used - did you mistakenly run\n";
print " \`zfs create $args{'target'}\` on the target? ZFS initial\n";
print " replication must be to a NON EXISTENT DATASET, which will\n";
print " then be CREATED BY the initial replication process.\n\n";
}
# return false now in case more child datasets need replication.
return 0;
}
}
# make sure target is (still) not currently in receive.
if (iszfsbusy($targethost,$targetfs,$targetisroot)) {
warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n";
if ($exitcode < 1) { $exitcode = 1; }
return 0;
}
if ($matchingsnap eq $newsyncsnap) {
# barf some text but don't touch the filesystem
if (!$quiet) { print "INFO: no snapshots on source newer than $newsyncsnap on target. Nothing to do, not syncing.\n"; }
return 0;
} else {
my $matchingsnapescaped = escapeshellparam($matchingsnap);
my $nextsnapshot = 0;
if ($bookmark) {
my $bookmarkescaped = escapeshellparam($bookmark);
if (!defined $args{'no-stream'}) {
# if intermediate snapshots are needed, we need to find the next-oldest snapshot,
# do a replication to it, and then replicate as usual from oldest to newest,
# because bookmark sends don't support intermediates directly
foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
if ($snaps{'source'}{$snap}{'creation'} >= $bookmarkcreation) {
$nextsnapshot = $snap;
last;
}
}
}
# bookmark stream size can't be determined
my $pvsize = 0;
my $disp_pvsize = "UNKNOWN";
$sendoptions = getoptionsline(\@sendoptions, ('L','c','e','w'));
if ($nextsnapshot) {
my $nextsnapshotescaped = escapeshellparam($nextsnapshot);
my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions -i $sourcefsescaped#$bookmarkescaped $sourcefsescaped\@$nextsnapshotescaped";
my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped 2>&1";
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) { print "Sending incremental $sourcefs#$bookmarkescaped ... $nextsnapshot (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
($stdout, $exit) = tee_stdout {
system("$synccmd")
};
$exit == 0 or do {
if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) {
if (!$quiet) { print "WARN: resetting partially receive state\n"; }
resetreceivestate($targethost,$targetfs,$targetisroot);
system("$synccmd") == 0 or do {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
} else {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
};
$matchingsnap = $nextsnapshot;
$matchingsnapescaped = escapeshellparam($matchingsnap);
} else {
my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions -i $sourcefsescaped#$bookmarkescaped $sourcefsescaped\@$newsyncsnapescaped";
my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped 2>&1";
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) { print "Sending incremental $sourcefs#$bookmarkescaped ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
($stdout, $exit) = tee_stdout {
system("$synccmd")
};
$exit == 0 or do {
if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) {
if (!$quiet) { print "WARN: resetting partially receive state\n"; }
resetreceivestate($targethost,$targetfs,$targetisroot);
system("$synccmd") == 0 or do {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
} else {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
};
}
}
# do a normal replication if bookmarks aren't used or if previous
# bookmark replication was only done to the next oldest snapshot
if (!$bookmark || $nextsnapshot) {
if ($matchingsnap eq $newsyncsnap) {
# edge case: bookmark replication used the latest snapshot
return 0;
}
$sendoptions = getoptionsline(\@sendoptions, ('D','L','P','R','c','e','h','p','v','w'));
my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions $args{'streamarg'} $sourcefsescaped\@$matchingsnapescaped $sourcefsescaped\@$newsyncsnapescaped";
my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped 2>&1";
my $pvsize = getsendsize($sourcehost,"$sourcefs\@$matchingsnap","$sourcefs\@$newsyncsnap",$sourceisroot);
my $disp_pvsize = readablebytes($pvsize);
if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; }
my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot);
if (!$quiet) { print "Sending incremental $sourcefs\@$matchingsnap ... $newsyncsnap (~ $disp_pvsize):\n"; }
if ($debug) { print "DEBUG: $synccmd\n"; }
($stdout, $exit) = tee_stdout {
system("$synccmd")
};
$exit == 0 or do {
# FreeBSD reports "dataset is busy" instead of "contains partially-complete state"
if (!$resume && ($stdout =~ /\Qcontains partially-complete state\E/ || $stdout =~ /\Qdataset is busy\E/)) {
if (!$quiet) { print "WARN: resetting partially receive state\n"; }
resetreceivestate($targethost,$targetfs,$targetisroot);
system("$synccmd") == 0 or do {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
} elsif ($args{'force-delete'} && $stdout =~ /\Qdestination already exists\E/) {
(my $existing) = $stdout =~ m/^cannot restore to ([^:]*): destination already exists$/g;
if ($existing eq "") {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
if (!$quiet) { print "WARN: removing existing destination: $existing\n"; }
my $rcommand = '';
my $mysudocmd = '';
my $existingescaped = escapeshellparam($existing);
if ($targethost ne '') { $rcommand = "$sshcmd $targethost"; }
if (!$targetisroot) { $mysudocmd = $sudocmd; }
my $prunecmd = "$mysudocmd $zfscmd destroy $existingescaped; ";
if ($targethost ne '') {
$prunecmd = escapeshellparam($prunecmd);
}
my $ret = system("$rcommand $prunecmd");
if ($ret != 0) {
warn "CRITICAL ERROR: $rcommand $prunecmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
} else {
# redo sync and skip snapshot creation (already taken)
return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef, 1);
}
} else {
warn "CRITICAL ERROR: $synccmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
};
}
# restore original readonly value to target after sync complete
# dyking this functionality out for the time being due to buggy mount/unmount behavior
# with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly.
#setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly);
}
}
$replicationCount++;
# if "--use-hold" parameter is used set hold on newsync snapshot and remove hold on matching snapshot both on source and target
# hold name: "syncoid" + identifier + hostname -> in case of replication to multiple targets separate holds can be set for each target by assinging different identifiers to each target. Only if all targets have been replicated all syncoid holds are removed from the matching snapshot and it can be removed
if (defined $args{'use-hold'}) {
my $holdcmd;
my $holdreleasecmd;
my $hostid = hostname();
my $matchingsnapescaped = escapeshellparam($matchingsnap);
my $holdname = "syncoid\_$identifier$hostid";
if ($sourcehost ne '') {
$holdcmd = "$sshcmd $sourcehost " . escapeshellparam("$sourcesudocmd $zfscmd hold $holdname $sourcefsescaped\@$newsyncsnapescaped");
$holdreleasecmd = "$sshcmd $sourcehost " . escapeshellparam("$sourcesudocmd $zfscmd release $holdname $sourcefsescaped\@$matchingsnapescaped");
} else {
$holdcmd = "$sourcesudocmd $zfscmd hold $holdname $sourcefsescaped\@$newsyncsnapescaped";
$holdreleasecmd = "$sourcesudocmd $zfscmd release $holdname $sourcefsescaped\@$matchingsnapescaped";
}
if ($debug) { print "DEBUG: Set new hold on source: $holdcmd\n"; }
system($holdcmd) == 0 or warn "WARNING: $holdcmd failed: $?";
# Do hold release only if matchingsnap exists
if ($matchingsnap) {
if ($debug) { print "DEBUG: Release old hold on source: $holdreleasecmd\n"; }
system($holdreleasecmd) == 0 or warn "WARNING: $holdreleasecmd failed: $?";
}
if ($targethost ne '') {
$holdcmd = "$sshcmd $targethost " . escapeshellparam("$targetsudocmd $zfscmd hold $holdname $targetfsescaped\@$newsyncsnapescaped");
$holdreleasecmd = "$sshcmd $targethost " . escapeshellparam("$targetsudocmd $zfscmd release $holdname $targetfsescaped\@$matchingsnapescaped");
} else {
$holdcmd = "$targetsudocmd $zfscmd hold $holdname $targetfsescaped\@$newsyncsnapescaped"; $holdreleasecmd = "$targetsudocmd $zfscmd release $holdname $targetfsescaped\@$matchingsnapescaped";
}
if ($debug) { print "DEBUG: Set new hold on target: $holdcmd\n"; }
system($holdcmd) == 0 or warn "WARNING: $holdcmd failed: $?";
# Do hold release only if matchingsnap exists
if ($matchingsnap) {
if ($debug) { print "DEBUG: Release old hold on target: $holdreleasecmd\n"; }
system($holdreleasecmd) == 0 or warn "WARNING: $holdreleasecmd failed: $?";
}
}
if (defined $args{'no-sync-snap'}) {
if (defined $args{'create-bookmark'}) {
my $bookmarkcmd;
if ($sourcehost ne '') {
$bookmarkcmd = "$sshcmd $sourcehost " . escapeshellparam("$sourcesudocmd $zfscmd bookmark $sourcefsescaped\@$newsyncsnapescaped $sourcefsescaped\#$newsyncsnapescaped");
} else {
$bookmarkcmd = "$sourcesudocmd $zfscmd bookmark $sourcefsescaped\@$newsyncsnapescaped $sourcefsescaped\#$newsyncsnapescaped";
}
if ($debug) { print "DEBUG: $bookmarkcmd\n"; }
system($bookmarkcmd) == 0 or do {
# fallback: assume naming conflict and try again with guid based suffix
my $guid = $snaps{'source'}{$newsyncsnap}{'guid'};
$guid = substr($guid, 0, 6);
if (!$quiet) { print "INFO: bookmark creation failed, retrying with guid based suffix ($guid)...\n"; }
if ($sourcehost ne '') {
$bookmarkcmd = "$sshcmd $sourcehost " . escapeshellparam("$sourcesudocmd $zfscmd bookmark $sourcefsescaped\@$newsyncsnapescaped $sourcefsescaped\#$newsyncsnapescaped$guid");
} else {
$bookmarkcmd = "$sourcesudocmd $zfscmd bookmark $sourcefsescaped\@$newsyncsnapescaped $sourcefsescaped\#$newsyncsnapescaped$guid";
}
if ($debug) { print "DEBUG: $bookmarkcmd\n"; }
system($bookmarkcmd) == 0 or do {
warn "CRITICAL ERROR: $bookmarkcmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
}
};
}
} else {
if (!defined $args{'keep-sync-snap'}) {
# prune obsolete sync snaps on source and target (only if this run created ones).
pruneoldsyncsnaps($sourcehost,$sourcefs,$newsyncsnap,$sourceisroot,keys %{ $snaps{'source'}});
pruneoldsyncsnaps($targethost,$targetfs,$newsyncsnap,$targetisroot,keys %{ $snaps{'target'}});
}
}
if (defined $args{'delete-target-snapshots'}) {
# Find the snapshots that exist on the target, filter with
# those that exist on the source. Remaining are the snapshots
# that are only on the target. Then sort by creation date, as
# to remove the oldest snapshots first.
my @to_delete = sort { $snaps{'target'}{$a}{'creation'}<=>$snaps{'target'}{$b}{'creation'} } grep {!exists $snaps{'source'}{$_}} keys %{ $snaps{'target'} };
while (@to_delete) {
# Create batch of snapshots to remove
my $snaps = join ',', splice(@to_delete, 0, 50);
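# 'zfs destroy' accepts a comma-separated list of snapshots of one dataset,
# e.g. 'zfs destroy pool/ds@snap1,snap2,snap3', so we can delete up to 50
# snapshots per invocation instead of one at a time.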
my $command;
if ($targethost ne '') {
$command = "$sshcmd $targethost " . escapeshellparam("$targetsudocmd $zfscmd destroy $targetfsescaped\@$snaps");
} else {
$command = "$targetsudocmd $zfscmd destroy $targetfsescaped\@$snaps";
}
if ($debug) { print "$command\n"; }
my ($stdout, $stderr, $result) = capture { system $command; };
if ($result != 0 && !$quiet) {
warn "$command failed: $stderr";
}
}
}
} # end syncdataset()
sub compressargset {
my ($value) = @_;
my $DEFAULT_COMPRESSION = 'lzo';
my %COMPRESS_ARGS = (
'none' => {
rawcmd => '',
args => '',
decomrawcmd => '',
decomargs => '',
},
'gzip' => {
rawcmd => 'gzip',
args => '-3',
decomrawcmd => 'zcat',
decomargs => '',
},
'pigz-fast' => {
rawcmd => 'pigz',
args => '-3',
decomrawcmd => 'pigz',
decomargs => '-dc',
},
'pigz-slow' => {
rawcmd => 'pigz',
args => '-9',
decomrawcmd => 'pigz',
decomargs => '-dc',
},
'zstd-fast' => {
rawcmd => 'zstd',
args => '-3',
decomrawcmd => 'zstd',
decomargs => '-dc',
},
'zstd-slow' => {
rawcmd => 'zstd',
args => '-19',
decomrawcmd => 'zstd',
decomargs => '-dc',
},
'xz' => {
rawcmd => 'xz',
args => '',
decomrawcmd => 'xz',
decomargs => '-d',
},
'lzo' => {
rawcmd => 'lzop',
args => '',
decomrawcmd => 'lzop',
decomargs => '-dfc',
},
'lz4' => {
rawcmd => 'lz4',
args => '',
decomrawcmd => 'lz4',
decomargs => '-dc',
},
);
if ($value eq 'default') {
$value = $DEFAULT_COMPRESSION;
} elsif (!(grep $value eq $_, ('gzip', 'pigz-fast', 'pigz-slow', 'zstd-fast', 'zstd-slow', 'lz4', 'xz', 'lzo', 'default', 'none'))) {
warn "Unrecognised compression value $value, defaulting to $DEFAULT_COMPRESSION";
$value = $DEFAULT_COMPRESSION;
}
my %comargs = %{$COMPRESS_ARGS{$value}}; # copy
$comargs{'compress'} = $value;
$comargs{'cmd'} = "$comargs{'rawcmd'} $comargs{'args'}";
$comargs{'decomcmd'} = "$comargs{'decomrawcmd'} $comargs{'decomargs'}";
return \%comargs;
}
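# usage sketch (per the mapping table above): compressargset('zstd-fast')
# returns a hashref whose 'cmd' is 'zstd -3' and whose 'decomcmd' is
# 'zstd -dc'; buildsynccmd() later splices these into the send/receive pipeline.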
sub checkcommands {
# make sure compression, mbuffer, and pv are available on
# source, target, and local hosts as appropriate.
my %avail;
my $sourcessh;
my $targetssh;
# if --nocommandchecks then assume everything's available and return
if ($args{'nocommandchecks'}) {
if ($debug) { print "DEBUG: not checking for command availability due to --nocommandchecks switch.\n"; }
$avail{'compress'} = 1;
$avail{'localpv'} = 1;
$avail{'localmbuffer'} = 1;
$avail{'sourcembuffer'} = 1;
$avail{'targetmbuffer'} = 1;
$avail{'sourceresume'} = 1;
$avail{'targetresume'} = 1;
return %avail;
}
if ($sourcehost ne '') { $sourcessh = "$sshcmd $sourcehost"; } else { $sourcessh = ''; }
if ($targethost ne '') { $targetssh = "$sshcmd $targethost"; } else { $targetssh = ''; }
# if raw compress command is null, we must have specified no compression. otherwise,
# make sure that compression is available everywhere we need it
if ($compressargs{'compress'} eq 'none') {
if ($debug) { print "DEBUG: compression forced off from command line arguments.\n"; }
} else {
if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on source...\n"; }
$avail{'sourcecompress'} = `$sourcessh $checkcmd $compressargs{'rawcmd'} 2>/dev/null`;
if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on target...\n"; }
$avail{'targetcompress'} = `$targetssh $checkcmd $compressargs{'rawcmd'} 2>/dev/null`;
if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on local machine...\n"; }
$avail{'localcompress'} = `$checkcmd $compressargs{'rawcmd'} 2>/dev/null`;
}
my ($s,$t);
if ($sourcehost eq '') {
$s = '[local machine]'
} else {
$s = $sourcehost;
$s =~ s/^\S*\@//;
$s = "ssh:$s";
}
if ($targethost eq '') {
$t = '[local machine]'
} else {
$t = $targethost;
$t =~ s/^\S*\@//;
$t = "ssh:$t";
}
if (!defined $avail{'sourcecompress'}) { $avail{'sourcecompress'} = ''; }
if (!defined $avail{'targetcompress'}) { $avail{'targetcompress'} = ''; }
if (!defined $avail{'localcompress'}) { $avail{'localcompress'} = ''; }
if (!defined $avail{'sourcembuffer'}) { $avail{'sourcembuffer'} = ''; }
if (!defined $avail{'targetmbuffer'}) { $avail{'targetmbuffer'} = ''; }
if ($avail{'sourcecompress'} eq '') {
if ($compressargs{'rawcmd'} ne '') {
print "WARN: $compressargs{'rawcmd'} not available on source $s- sync will continue without compression.\n";
}
$avail{'compress'} = 0;
}
if ($avail{'targetcompress'} eq '') {
if ($compressargs{'rawcmd'} ne '') {
print "WARN: $compressargs{'rawcmd'} not available on target $t - sync will continue without compression.\n";
}
$avail{'compress'} = 0;
}
if ($avail{'targetcompress'} ne '' && $avail{'sourcecompress'} ne '') {
# compression available - unless source and target are both remote, which we'll check
# for in the next block and respond to accordingly.
$avail{'compress'} = 1;
}
# corner case - if source AND target are BOTH remote, we have to check for local compress too
if ($sourcehost ne '' && $targethost ne '' && $avail{'localcompress'} eq '') {
if ($compressargs{'rawcmd'} ne '') {
print "WARN: $compressargs{'rawcmd'} not available on local machine - sync will continue without compression.\n";
}
$avail{'compress'} = 0;
}
if (length $args{'insecure-direct-connection'}) {
if ($debug) { print "DEBUG: checking availability of $socatcmd on source...\n"; }
my $socatAvailable = `$sourcessh $checkcmd $socatcmd 2>/dev/null`;
if ($socatAvailable eq '') {
die "CRIT: $socatcmd is needed on source for insecure direct connection!\n";
}
if (!$directmbuffer) {
if ($debug) { print "DEBUG: checking availability of busybox (for nc) on target...\n"; }
my $busyboxAvailable = `$targetssh $checkcmd busybox 2>/dev/null`;
if ($busyboxAvailable eq '') {
die "CRIT: busybox is needed on target for insecure direct connection!\n";
}
}
}
if ($debug) { print "DEBUG: checking availability of $mbuffercmd on source...\n"; }
$avail{'sourcembuffer'} = `$sourcessh $checkcmd $mbuffercmd 2>/dev/null`;
if ($avail{'sourcembuffer'} eq '') {
if (!$quiet) { print "WARN: $mbuffercmd not available on source $s - sync will continue without source buffering.\n"; }
$avail{'sourcembuffer'} = 0;
} else {
$avail{'sourcembuffer'} = 1;
}
if ($debug) { print "DEBUG: checking availability of $mbuffercmd on target...\n"; }
$avail{'targetmbuffer'} = `$targetssh $checkcmd $mbuffercmd 2>/dev/null`;
if ($avail{'targetmbuffer'} eq '') {
if ($directmbuffer) {
die "CRIT: $mbuffercmd is needed on target for insecure direct connection!\n";
}
if (!$quiet) { print "WARN: $mbuffercmd not available on target $t - sync will continue without target buffering.\n"; }
$avail{'targetmbuffer'} = 0;
} else {
$avail{'targetmbuffer'} = 1;
}
# if we're doing remote source AND remote target, check for local mbuffer as well
if ($sourcehost ne '' && $targethost ne '') {
if ($debug) { print "DEBUG: checking availability of $mbuffercmd on local machine...\n"; }
$avail{'localmbuffer'} = `$checkcmd $mbuffercmd 2>/dev/null`;
if ($avail{'localmbuffer'} eq '') {
$avail{'localmbuffer'} = 0;
if (!$quiet) { print "WARN: $mbuffercmd not available on local machine - sync will continue without local buffering.\n"; }
}
}
if ($debug) { print "DEBUG: checking availability of $pvcmd on local machine...\n"; }
$avail{'localpv'} = `$checkcmd $pvcmd 2>/dev/null`;
if ($avail{'localpv'} eq '') {
if (!$quiet) { print "WARN: $pvcmd not available on local machine - sync will continue without progress bar.\n"; }
$avail{'localpv'} = 0;
} else {
$avail{'localpv'} = 1;
}
# check for ZFS resume feature support
if ($resume) {
my @parts = split ('/', $sourcefs);
my $srcpool = $parts[0];
@parts = split ('/', $targetfs);
my $dstpool = $parts[0];
$srcpool = escapeshellparam($srcpool);
$dstpool = escapeshellparam($dstpool);
if ($sourcehost ne '') {
# double escaping needed
$srcpool = escapeshellparam($srcpool);
}
if ($targethost ne '') {
# double escaping needed
$dstpool = escapeshellparam($dstpool);
}
my $resumechkcmd = "$zpoolcmd get -o value -H feature\@extensible_dataset";
if ($debug) { print "DEBUG: checking availability of zfs resume feature on source...\n"; }
$avail{'sourceresume'} = system("$sourcessh $sourcesudocmd $resumechkcmd $srcpool 2>/dev/null | grep '\\(active\\|enabled\\)' >/dev/null 2>&1");
$avail{'sourceresume'} = $avail{'sourceresume'} == 0 ? 1 : 0;
if ($debug) { print "DEBUG: checking availability of zfs resume feature on target...\n"; }
$avail{'targetresume'} = system("$targetssh $targetsudocmd $resumechkcmd $dstpool 2>/dev/null | grep '\\(active\\|enabled\\)' >/dev/null 2>&1");
$avail{'targetresume'} = $avail{'targetresume'} == 0 ? 1 : 0;
if ($avail{'sourceresume'} == 0 || $avail{'targetresume'} == 0) {
# disable resume
$resume = '';
my @hosts = ();
if ($avail{'sourceresume'} == 0) {
push @hosts, 'source';
}
if ($avail{'targetresume'} == 0) {
push @hosts, 'target';
}
my $affected = join(" and ", @hosts);
print "WARN: ZFS resume feature not available on $affected machine - sync will continue without resume support.\n";
}
} else {
$avail{'sourceresume'} = 0;
$avail{'targetresume'} = 0;
}
return %avail;
}
sub iszfsbusy {
my ($rhost,$fs,$isroot) = @_;
if ($rhost ne '') { $rhost = "$sshcmd $rhost"; }
if ($debug) { print "DEBUG: checking to see if $fs on $rhost is already in zfs receive using $rhost $pscmd -Ao args= ...\n"; }
open PL, "$rhost $pscmd -Ao args= |";
my @processes = <PL>;
close PL;
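# a matching process line looks like e.g. 'zfs receive -s -F pool/backup/data';
# [^\/]* allows option flags (but no other dataset path) between the verb and
# the target name, and \Z anchors the target name at the end of the line.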
foreach my $process (@processes) {
# if ($debug) { print "DEBUG: checking process $process...\n"; }
if ($process =~ /zfs *(receive|recv)[^\/]*\Q$fs\E\Z/) {
# there's already a zfs receive process for our target filesystem - return true
if ($debug) { print "DEBUG: process $process matches target $fs!\n"; }
return 1;
}
}
# no zfs receive processes for our target filesystem found - return false
return 0;
}
sub setzfsvalue {
my ($rhost,$fs,$isroot,$property,$value) = @_;
my $fsescaped = escapeshellparam($fs);
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
if ($debug) { print "DEBUG: setting $property to $value on $fs...\n"; }
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($debug) { print "$rhost $mysudocmd $zfscmd set $property=$value $fsescaped\n"; }
system("$rhost $mysudocmd $zfscmd set $property=$value $fsescaped") == 0
or warn "WARNING: $rhost $mysudocmd $zfscmd set $property=$value $fsescaped died: $?, proceeding anyway.\n";
return;
}
sub getzfsvalue {
my ($rhost,$fs,$isroot,$property) = @_;
my $fsescaped = escapeshellparam($fs);
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
if ($debug) { print "DEBUG: getting current value of $property on $fs...\n"; }
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($debug) { print "$rhost $mysudocmd $zfscmd get -H $property $fsescaped\n"; }
my ($value, $error, $exit) = capture {
system("$rhost $mysudocmd $zfscmd get -H $property $fsescaped");
};
my @values = split(/\t/,$value);
$value = $values[2];
my $wantarray = wantarray || 0;
# If we are in scalar context and there is an error, print it out.
# Otherwise we assume the caller will deal with it.
if (!$wantarray and $error) {
print "ERROR getzfsvalue $fs $property: $error\n";
}
return $wantarray ? ($value, $error) : $value;
}
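# usage sketch: in scalar context, my $type = getzfsvalue(...) prints any error
# itself; in list context, my ($val, $err) = getzfsvalue(...) leaves error
# handling to the caller (as syncdataset() does for 'syncoid:sync').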
sub getlocalzfsvalues {
my ($rhost,$fs,$isroot) = @_;
my $fsescaped = escapeshellparam($fs);
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
if ($debug) { print "DEBUG: getting locally set values of properties on $fs...\n"; }
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($debug) { print "$rhost $mysudocmd $zfscmd get all -s local -H $fsescaped\n"; }
my ($values, $error, $exit) = capture {
system("$rhost $mysudocmd $zfscmd get all -s local -H $fsescaped");
};
my %properties=();
if ($exit != 0) {
warn "WARNING: getlocalzfsvalues failed for $fs: $error";
if ($exitcode < 1) { $exitcode = 1; }
return %properties;
}
my @blacklist = (
"available", "compressratio", "createtxg", "creation", "clones",
"defer_destroy", "encryptionroot", "filesystem_count", "keystatus", "guid",
"logicalreferenced", "logicalused", "mounted", "objsetid", "origin",
"receive_resume_token", "redact_snaps", "referenced", "refcompressratio", "snapshot_count",
"type", "used", "usedbychildren", "usedbydataset", "usedbyrefreservation",
"usedbysnapshots", "userrefs", "snapshots_changed", "volblocksize", "written",
"version", "volsize", "casesensitivity", "normalization", "utf8only"
);
my %blacklisthash = map {$_ => 1} @blacklist;
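# each line of 'zfs get all -s local -H' is tab-separated as
# <name>\t<property>\t<value>\t<source>, e.g. "pool/ds\tcompression\tlz4\tlocal",
# so $parts[1] is the property name and $parts[2] its value.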
foreach (split(/\n/,$values)) {
my @parts = split(/\t/, $_);
if (exists $blacklisthash{$parts[1]}) {
next;
}
$properties{$parts[1]} = $parts[2];
}
return %properties;
}
sub readablebytes {
my $bytes = shift;
my $disp;
if ($bytes > 1024*1024*1024) {
$disp = sprintf("%.1f",$bytes/1024/1024/1024) . ' GB';
} elsif ($bytes > 1024*1024) {
$disp = sprintf("%.1f",$bytes/1024/1024) . ' MB';
} else {
$disp = sprintf("%d",$bytes/1024) . ' KB';
}
return $disp;
}
sub getoldestsnapshot {
my $snaps = shift;
foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) {
# return on first snap found - it's the oldest
return $snap;
}
# must not have had any snapshots on source - luckily, we already made one, amirite?
if (defined ($args{'no-sync-snap'}) ) {
# well, actually we set --no-sync-snap, so no we *didn't* already make one. Whoops.
warn "CRIT: --no-sync-snap is set, and getoldestsnapshot() could not find any snapshots on source!\n";
}
return 0;
}
sub getnewestsnapshot {
my $snaps = shift;
foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
# return on first snap found - it's the newest
if (!$quiet) { print "NEWEST SNAPSHOT: $snap\n"; }
return $snap;
}
# must not have had any snapshots on source - looks like we'd better create one!
if (defined ($args{'no-sync-snap'}) ) {
if (!defined ($args{'recursive'}) ) {
# well, actually we set --no-sync-snap and we're not recursive, so no we *can't* make one. Whoops.
die "CRIT: --no-sync-snap is set, and getnewestsnapshot() could not find any snapshots on source!\n";
}
# fixme: we need to output WHAT the current dataset IS if we encounter this WARN condition.
# we also probably need an argument to mute this WARN, for people who deliberately exclude
# datasets from recursive replication this way.
warn "WARN: --no-sync-snap is set, and getnewestsnapshot() could not find any snapshots on source for current dataset. Continuing.\n";
if ($exitcode < 2) { $exitcode = 2; }
}
return 0;
}
sub buildsynccmd {
my ($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot) = @_;
# here's where it gets fun: figuring out when to compress and decompress.
# to make this work for all possible combinations, you may have to decompress
# AND recompress across the pipe viewer. FUN.
my $synccmd;
if ($sourcehost eq '' && $targethost eq '') {
# both sides local. don't compress. do mbuffer, once, on the source side.
# $synccmd = "$sendcmd | $mbuffercmd | $pvcmd | $recvcmd";
$synccmd = "$sendcmd |";
# avoid confusion - accept either source-bwlimit or target-bwlimit as the bandwidth limiting option here
my $bwlimit = '';
if (length $args{'source-bwlimit'}) {
$bwlimit = $args{'source-bwlimit'};
} elsif (length $args{'target-bwlimit'}) {
$bwlimit = $args{'target-bwlimit'};
}
if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $bwlimit $mbufferoptions |"; }
if ($avail{'localpv'} && !$quiet) { $synccmd .= " $pvcmd $pvoptions -s $pvsize |"; }
$synccmd .= " $recvcmd";
} elsif ($sourcehost eq '') {
# local source, remote target.
#$synccmd = "$sendcmd | $pvcmd | $compressargs{'cmd'} | $mbuffercmd | $sshcmd $targethost '$compressargs{'decomcmd'} | $mbuffercmd | $recvcmd'";
$synccmd = "$sendcmd |";
if ($avail{'localpv'} && !$quiet) { $synccmd .= " $pvcmd $pvoptions -s $pvsize |"; }
if ($avail{'compress'}) { $synccmd .= " $compressargs{'cmd'} |"; }
if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $args{'source-bwlimit'} $mbufferoptions |"; }
if (length $directconnect) {
$synccmd .= " $socatcmd - TCP:" . $directconnect . ",retry=$directtimeout,interval=1 |";
}
$synccmd .= " $sshcmd $targethost ";
my $remotecmd = "";
if ($directmbuffer) {
$remotecmd .= " $mbuffercmd $args{'target-bwlimit'} -W $directtimeout -I " . $directlisten . " $mbufferoptions |";
} elsif (length $directlisten) {
$remotecmd .= " busybox nc -l " . $directlisten . " -w $directtimeout |";
}
if ($avail{'targetmbuffer'} && !$directmbuffer) { $remotecmd .= " $mbuffercmd $args{'target-bwlimit'} $mbufferoptions |"; }
if ($avail{'compress'}) { $remotecmd .= " $compressargs{'decomcmd'} |"; }
$remotecmd .= " $recvcmd";
$synccmd .= escapeshellparam($remotecmd);
} elsif ($targethost eq '') {
# remote source, local target.
#$synccmd = "$sshcmd $sourcehost '$sendcmd | $compressargs{'cmd'} | $mbuffercmd' | $compressargs{'decomcmd'} | $mbuffercmd | $pvcmd | $recvcmd";
my $remotecmd = $sendcmd;
if ($avail{'compress'}) { $remotecmd .= " | $compressargs{'cmd'}"; }
if ($avail{'sourcembuffer'}) { $remotecmd .= " | $mbuffercmd $args{'source-bwlimit'} $mbufferoptions"; }
if (length $directconnect) {
$remotecmd .= " | $socatcmd - TCP:" . $directconnect . ",retry=$directtimeout,interval=1";
}
$synccmd = "$sshcmd $sourcehost " . escapeshellparam($remotecmd);
$synccmd .= " | ";
if ($directmbuffer) {
$synccmd .= "$mbuffercmd $args{'target-bwlimit'} -W $directtimeout -I " . $directlisten . " $mbufferoptions | ";
} elsif (length $directlisten) {
$synccmd .= " busybox nc -l " . $directlisten . " -w $directtimeout | ";
}
if ($avail{'targetmbuffer'} && !$directmbuffer) { $synccmd .= "$mbuffercmd $args{'target-bwlimit'} $mbufferoptions | "; }
if ($avail{'compress'}) { $synccmd .= "$compressargs{'decomcmd'} | "; }
if ($avail{'localpv'} && !$quiet) { $synccmd .= "$pvcmd $pvoptions -s $pvsize | "; }
$synccmd .= "$recvcmd";
} else {
#remote source, remote target... weird, but whatever, I'm not here to judge you.
#$synccmd = "$sshcmd $sourcehost '$sendcmd | $compressargs{'cmd'} | $mbuffercmd' | $compressargs{'decomcmd'} | $pvcmd | $compressargs{'cmd'} | $mbuffercmd | $sshcmd $targethost '$compressargs{'decomcmd'} | $mbuffercmd | $recvcmd'";
my $remotecmd = $sendcmd;
if ($avail{'compress'}) { $remotecmd .= " | $compressargs{'cmd'}"; }
if ($avail{'sourcembuffer'}) { $remotecmd .= " | $mbuffercmd $args{'source-bwlimit'} $mbufferoptions"; }
$synccmd = "$sshcmd $sourcehost " . escapeshellparam($remotecmd);
$synccmd .= " | ";
if ($avail{'compress'}) { $synccmd .= "$compressargs{'decomcmd'} | "; }
if ($avail{'localpv'} && !$quiet) { $synccmd .= "$pvcmd $pvoptions -s $pvsize | "; }
if ($avail{'compress'}) { $synccmd .= "$compressargs{'cmd'} | "; }
if ($avail{'localmbuffer'}) { $synccmd .= "$mbuffercmd $mbufferoptions | "; }
$synccmd .= "$sshcmd $targethost ";
$remotecmd = "";
if ($avail{'targetmbuffer'}) { $remotecmd .= " $mbuffercmd $args{'target-bwlimit'} $mbufferoptions |"; }
if ($avail{'compress'}) { $remotecmd .= " $compressargs{'decomcmd'} |"; }
$remotecmd .= " $recvcmd";
$synccmd .= escapeshellparam($remotecmd);
}
return $synccmd;
}
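# example of a resulting pipeline (local source, local target, pv available;
# dataset names illustrative, each stage only included when available):
#   zfs send pool/src@snap | mbuffer <opts> | pv -s <size> | zfs receive -s -F pool/dst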
sub pruneoldsyncsnaps {
my ($rhost,$fs,$newsyncsnap,$isroot,@snaps) = @_;
my $fsescaped = escapeshellparam($fs);
if ($rhost ne '') { $rhost = "$sshcmd $rhost"; }
my $hostid = hostname();
my $mysudocmd;
if ($isroot) { $mysudocmd=''; } else { $mysudocmd = $sudocmd; }
my @prunesnaps;
# only prune snaps beginning with syncoid and our own hostname
foreach my $snap(@snaps) {
if ($snap =~ /^syncoid_\Q$identifier$hostid\E/) {
# no matter what, we categorically refuse to
# prune the new sync snap we created for this run
if ($snap ne $newsyncsnap) {
push (@prunesnaps,$snap);
}
}
}
# concatenate pruning commands to ten per line, to cut down
# auth times for any remote hosts that must be operated via SSH
my $counter;
my $maxsnapspercmd = 10;
my $prunecmd;
foreach my $snap(@prunesnaps) {
$counter ++;
$prunecmd .= "$mysudocmd $zfscmd destroy $fsescaped\@$snap; ";
if ($counter >= $maxsnapspercmd) {
$prunecmd =~ s/\; $//;
if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; }
if ($debug) { print "DEBUG: $rhost $prunecmd\n"; }
if ($rhost ne '') {
$prunecmd = escapeshellparam($prunecmd);
}
system("$rhost $prunecmd") == 0
or warn "WARNING: $rhost $prunecmd failed: $?";
$prunecmd = '';
$counter = 0;
}
}
# if we still have some prune commands stacked up after finishing
# the loop, commit 'em now
if ($counter) {
$prunecmd =~ s/\; $//;
if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; }
if ($debug) { print "DEBUG: $rhost $prunecmd\n"; }
if ($rhost ne '') {
$prunecmd = escapeshellparam($prunecmd);
}
system("$rhost $prunecmd") == 0
or warn "WARNING: $rhost $prunecmd failed: $?";
}
return;
}
sub getmatchingsnapshot {
my ($sourcefs, $targetfs, $snaps) = @_;
foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) {
if (defined $snaps{'target'}{$snap}) {
if ($snaps{'source'}{$snap}{'guid'} == $snaps{'target'}{$snap}{'guid'}) {
return $snap;
}
}
}
return 0;
}
sub newsyncsnap {
my ($rhost,$fs,$isroot) = @_;
my $fsescaped = escapeshellparam($fs);
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
my $hostid = hostname();
my %date = getdate();
my $snapname = "syncoid\_$identifier$hostid\_$date{'stamp'}";
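# e.g. on host 'nas' with no --identifier this yields something like
# 'syncoid_nas_2023-01-01:00:00:00-GMT00:00' (the exact stamp format comes
# from getdate(), defined elsewhere in this script).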
my $snapcmd = "$rhost $mysudocmd $zfscmd snapshot $fsescaped\@$snapname\n";
if ($debug) { print "DEBUG: creating sync snapshot using \"$snapcmd\"...\n"; }
system($snapcmd) == 0 or do {
warn "CRITICAL ERROR: $snapcmd failed: $?";
if ($exitcode < 2) { $exitcode = 2; }
return 0;
};
return $snapname;
}
sub targetexists {
my ($rhost,$fs,$isroot) = @_;
my $fsescaped = escapeshellparam($fs);
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
my $checktargetcmd = "$rhost $mysudocmd $zfscmd get -H name $fsescaped";
if ($debug) { print "DEBUG: checking to see if target filesystem exists using \"$checktargetcmd 2>&1 |\"...\n"; }
open FH, "$checktargetcmd 2>&1 |";
my $targetexists = <FH>;
close FH;
my $exit = $?;
$targetexists = ( $targetexists =~ /^\Q$fs\E/ && $exit == 0 );
return $targetexists;
}
sub getssh {
my $fs = shift;
my $rhost = "";
my $isroot;
my $socket;
my $remoteuser = "";
# if we got passed something with an @ in it, we assume it's an ssh connection, eg root@myotherbox
if ($fs =~ /\@/) {
$rhost = $fs;
$fs =~ s/^[^\@:]*\@[^\@:]*://;
$rhost =~ s/:\Q$fs\E$//;
$remoteuser = $rhost;
$remoteuser =~ s/\@.*$//;
# do not require a username to be specified
$rhost =~ s/^@//;
} elsif ($fs =~ m{^[^/]*:}) {
# if we got passed something with an : in it, BEFORE any forward slash
# (i.e., not in a dataset name) it MAY be an ssh connection
# but we need to check if there is a pool with that name
my $pool = $fs;
$pool =~ s%/.*$%%;
my ($pools, $error, $exit) = capture {
system("$zfscmd list -d0 -H -oname");
};
$rhost = $fs;
if ($exit != 0) {
warn "Unable to enumerate pools (is zfs available?)";
} else {
foreach (split(/\n/,$pools)) {
if ($_ eq $pool) {
# there's a pool with this name.
$rhost = "";
last;
}
}
}
if ($rhost ne "") {
# there's no pool that might conflict with this
$rhost =~ s/:.*$//;
$fs =~ s/\Q$rhost\E://;
}
}
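# summary of accepted source/target forms (names are illustrative):
#   pool/ds            -> local dataset ($rhost stays '')
#   host:pool/ds       -> remote host 'host', unless a local pool with that
#                         literal name exists (ZFS permits ':' in pool names)
#   user@host:pool/ds  -> remote as 'user@host'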
if ($rhost ne "") {
if ($remoteuser eq 'root' || $args{'no-privilege-elevation'}) { $isroot = 1; } else { $isroot = 0; }
# now we need to establish a persistent master SSH connection
$socket = "/tmp/syncoid-$rhost-" . time() . "-" . int(rand(10000));
open FH, "$sshcmd -M -S $socket -o ControlPersist=1m $args{'sshport'} $rhost exit |";
close FH;
system("$sshcmd -S $socket $rhost echo -n") == 0 or do {
my $code = $? >> 8;
warn "CRITICAL ERROR: ssh connection echo test failed for $rhost with exit code $code";
exit(2);
};
$rhost = "-S $socket $rhost";
} else {
my $localuid = $<;
if ($localuid == 0 || $args{'no-privilege-elevation'}) { $isroot = 1; } else { $isroot = 0; }
}
# if ($isroot) { print "this user is root.\n"; } else { print "this user is not root.\n"; }
return ($rhost,$fs,$isroot);
}
sub dumphash() {
my $hash = shift;
$Data::Dumper::Sortkeys = 1;
print Dumper($hash);
}
sub getsnaps() {
my ($type,$rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
my $rhostOriginal = $rhost;
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t snapshot guid,creation $fsescaped";
if ($debug) {
$getsnapcmd = "$getsnapcmd |";
print "DEBUG: getting list of snapshots on $fs using $getsnapcmd...\n";
} else {
$getsnapcmd = "$getsnapcmd 2>/dev/null |";
}
open FH, $getsnapcmd;
my @rawsnaps = <FH>;
close FH or do {
# fallback (solaris for example doesn't support the -t option)
return getsnapsfallback($type,$rhostOriginal,$fs,$isroot,%snaps);
};
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
# as though each were an entirely separate get command.
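# example raw lines (tab-separated, values illustrative):
#   pool/ds@syncoid_nas_...	guid	12345678901234567	-
#   pool/ds@syncoid_nas_...	creation	1600000000	-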
my %creationtimes=();
foreach my $line (@rawsnaps) {
# only import snap guids from the specified filesystem
if ($line =~ /\Q$fs\E\@.*guid/) {
chomp $line;
my $guid = $line;
$guid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tguid.*$/$1/;
$snaps{$type}{$snap}{'guid'}=$guid;
}
}
foreach my $line (@rawsnaps) {
# only import snap creations from the specified filesystem
if ($line =~ /\Q$fs\E\@.*creation/) {
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tcreation.*$/$1/;
# the creation timestamp only has one-second resolution, but multiple
# snapshots within the same second are quite possible. The list command's
# output is ordered, so we append a three-digit running number to the
# creation timestamp to keep snapshots sharing the same creation
# timestamp ordered correctly
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
$creationsuffix = sprintf("%s%03d", $creation, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$snaps{$type}{$snap}{'creation'}=$creationsuffix;
}
}
return %snaps;
}
sub getsnapsfallback() {
# fallback (solaris for example doesn't support the -t option)
my ($type,$rhost,$fs,$isroot,%snaps) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 type,guid,creation $fsescaped |";
warn "snapshot listing failed, trying fallback command";
if ($debug) { print "DEBUG: FALLBACK, getting list of snapshots on $fs using $getsnapcmd...\n"; }
open FH, $getsnapcmd;
my @rawsnaps = <FH>;
close FH or die "CRITICAL ERROR: snapshots couldn't be listed for $fs (exit code $?)";
my %creationtimes=();
my $state = 0;
foreach my $line (@rawsnaps) {
if ($state < 0) {
$state++;
next;
}
if ($state eq 0) {
if ($line !~ /\Q$fs\E\@.*type\s*snapshot/) {
# skip non snapshot type object
$state = -2;
next;
}
} elsif ($state eq 1) {
if ($line !~ /\Q$fs\E\@.*guid/) {
die "CRITICAL ERROR: snapshots couldn't be listed for $fs (guid parser error)";
}
chomp $line;
my $guid = $line;
$guid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tguid.*$/$1/;
$snaps{$type}{$snap}{'guid'}=$guid;
} elsif ($state eq 2) {
if ($line !~ /\Q$fs\E\@.*creation/) {
die "CRITICAL ERROR: snapshots couldn't be listed for $fs (creation parser error)";
}
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $snap = $line;
$snap =~ s/^.*\@(.*)\tcreation.*$/$1/;
# the creation timestamp only has one-second resolution, but multiple
# snapshots within the same second are quite possible. The list command's
# output is ordered, so we append a three-digit running number to the
# creation timestamp to keep snapshots sharing the same creation
# timestamp ordered correctly
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
$creationsuffix = sprintf("%s%03d", $creation, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$snaps{$type}{$snap}{'creation'}=$creationsuffix;
$state = -1;
}
$state++;
}
return %snaps;
}
sub getbookmarks() {
my ($rhost,$fs,$isroot,%bookmarks) = @_;
my $mysudocmd;
my $fsescaped = escapeshellparam($fs);
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
my $error = 0;
my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark guid,creation $fsescaped 2>&1 |";
if ($debug) { print "DEBUG: getting list of bookmarks on $fs using $getbookmarkcmd...\n"; }
open FH, $getbookmarkcmd;
my @rawbookmarks = <FH>;
close FH or $error = 1;
if ($error == 1) {
if ($rawbookmarks[0] =~ /invalid type/ or $rawbookmarks[0] =~ /operation not applicable to datasets of this type/) {
# no support for zfs bookmarks, return empty hash
return %bookmarks;
}
die "CRITICAL ERROR: bookmarks couldn't be listed for $fs (exit code $?)";
}
# this is a little obnoxious. get guid,creation returns guid,creation on two separate lines
# as though each were an entirely separate get command.
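# each bookmark therefore yields two tab-separated lines (name, property, value, source), e.g.:
#   pool/fs#somebookmark	guid	<numeric guid>	-
#   pool/fs#somebookmark	creation	<unix timestamp>	-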
my $lastguid;
my %creationtimes=();
foreach my $line (@rawbookmarks) {
# only import bookmark guids, creation from the specified filesystem
if ($line =~ /\Q$fs\E\#.*guid/) {
chomp $line;
$lastguid = $line;
$lastguid =~ s/^.*\tguid\t*(\d*).*/$1/;
my $bookmark = $line;
$bookmark =~ s/^.*\#(.*)\tguid.*$/$1/;
$bookmarks{$lastguid}{'name'}=$bookmark;
} elsif ($line =~ /\Q$fs\E\#.*creation/) {
chomp $line;
my $creation = $line;
$creation =~ s/^.*\tcreation\t*(\d*).*/$1/;
my $bookmark = $line;
$bookmark =~ s/^.*\#(.*)\tcreation.*$/$1/;
# the creation timestamp is only accurate to one second, and multiple
# bookmarks within the same second are possible. The list command
# output is ordered, so we append a three-digit running number to the
# creation timestamp to keep bookmarks sharing the same creation
# second in the correct order
my $counter = 0;
my $creationsuffix;
while ($counter < 999) {
$creationsuffix = sprintf("%s%03d", $creation, $counter);
if (!defined $creationtimes{$creationsuffix}) {
$creationtimes{$creationsuffix} = 1;
last;
}
$counter += 1;
}
$bookmarks{$lastguid}{'creation'}=$creationsuffix;
}
}
return %bookmarks;
}
sub getsendsize {
my ($sourcehost,$snap1,$snap2,$isroot,$receivetoken) = @_;
my $snap1escaped = escapeshellparam($snap1);
my $snap2escaped = escapeshellparam($snap2);
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
my $sourcessh;
if ($sourcehost ne '') {
$sourcessh = "$sshcmd $sourcehost";
$snap1escaped = escapeshellparam($snap1escaped);
$snap2escaped = escapeshellparam($snap2escaped);
} else {
$sourcessh = '';
}
my $snaps;
if ($snap2) {
# if we got a $snap2 argument, we want an incremental send estimate from $snap1 to $snap2.
$snaps = "$args{'streamarg'} $snap1escaped $snap2escaped";
} else {
# if we didn't get a $snap2 arg, we want a full send estimate for $snap1.
$snaps = "$snap1escaped";
}
# in case of a resumed receive, get the remaining
# size based on the resume token
if (defined($receivetoken)) {
$snaps = "-t $receivetoken";
}
my $sendoptions;
if (defined($receivetoken)) {
$sendoptions = getoptionsline(\@sendoptions, ('e'));
} else {
$sendoptions = getoptionsline(\@sendoptions, ('D','L','R','c','e','h','p','w'));
}
my $getsendsizecmd = "$sourcessh $mysudocmd $zfscmd send $sendoptions -nvP $snaps";
if ($debug) { print "DEBUG: getting estimated transfer size from source $sourcehost using \"$getsendsizecmd 2>&1 |\"...\n"; }
open FH, "$getsendsizecmd 2>&1 |";
my @rawsize = <FH>;
close FH;
my $exit = $?;
# process sendsize: the last line of the multi-line output holds the
# size of the proposed transfer in bytes, but we need to strip the
# human-readable extras from it
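# the last line looks like "size<TAB>1234567" for a plain "-nvP" estimate;
# for a resumed receive the byte count is the last tab-separated field instead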
my $sendsize = pop(@rawsize);
# the output format is different in case of
# a resumed receive
if (defined($receivetoken)) {
$sendsize =~ s/.*\t([0-9]+)$/$1/;
} else {
$sendsize =~ s/^size\t*//;
}
chomp $sendsize;
# check for valid value
if ($sendsize !~ /^\d+$/) {
$sendsize = '';
}
# to avoid confusion with a zero-size pv, give sendsize a minimum
# value of 4K; if the estimate is empty or the command failed, use 0
if ($debug) { print "DEBUG: sendsize = $sendsize\n"; }
if ($sendsize eq '' || $exit != 0) {
$sendsize = '0';
} elsif ($sendsize < 4096) {
$sendsize = 4096;
}
return $sendsize;
}
sub getdate {
my @time = localtime(time);
# get timezone info
my $offset = timegm(@time) - timelocal(@time);
my $sign = ''; # + is not allowed in a snapshot name
if ($offset < 0) {
$sign = '-';
$offset = abs($offset);
}
my $hours = int($offset / 3600);
my $minutes = int($offset / 60) - $hours * 60;
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = @time;
$year += 1900;
my %date;
$date{'unix'} = (((((((($year - 1971) * 365) + $yday) * 24) + $hour) * 60) + $min) * 60) + $sec;
$date{'year'} = $year;
$date{'sec'} = sprintf ("%02u", $sec);
$date{'min'} = sprintf ("%02u", $min);
$date{'hour'} = sprintf ("%02u", $hour);
$date{'mday'} = sprintf ("%02u", $mday);
$date{'mon'} = sprintf ("%02u", ($mon + 1));
$date{'tzoffset'} = sprintf ("GMT%s%02d:%02u", $sign, $hours, $minutes);
$date{'stamp'} = "$date{'year'}-$date{'mon'}-$date{'mday'}:$date{'hour'}:$date{'min'}:$date{'sec'}-$date{'tzoffset'}";
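# illustrative stamp: 2017-06-15:09:30:00-GMT02:00 (a positive UTC offset carries no sign, a negative one is prefixed with "-")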
return %date;
}
sub escapeshellparam {
my ($par) = @_;
# avoid use of uninitialized string in regex
if (length($par)) {
# "escape" all single quotes
$par =~ s/'/'"'"'/g;
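# e.g. it's -> it'"'"'s, which the shell rejoins into a literal single quote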
} else {
# avoid use of uninitialized string in concatenation below
$par = '';
}
# single-quote entire string
return "'$par'";
}
sub getreceivetoken() {
my ($rhost,$fs,$isroot) = @_;
my $token = getzfsvalue($rhost,$fs,$isroot,"receive_resume_token");
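# an interrupted resumable receive is exposed via the receive_resume_token property; "-" or an empty value means there is none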
if (defined $token && $token ne '-' && $token ne '') {
return $token;
}
if ($debug) {
print "DEBUG: no receive token found \n";
}
return
}
sub parsespecialoptions {
my ($line) = @_;
my @options = ();
my @values = split(/ /, $line);
my $optionValue = 0;
my $lastOption;
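# e.g. "ux recordsize o compression=lz4" parses into -u, -x recordsize, -o compression=lz4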
foreach my $value (@values) {
if ($optionValue ne 0) {
my %item = (
"option" => $lastOption,
"line" => "-$lastOption $value",
);
push @options, \%item;
$optionValue = 0;
next;
}
for my $char (split //, $value) {
if ($optionValue ne 0) {
return undef;
}
if ($char eq 'o' || $char eq 'x') {
$lastOption = $char;
$optionValue = 1;
} else {
my %item = (
"option" => $char,
"line" => "-$char",
);
push @options, \%item;
}
}
}
return @options;
}
sub getoptionsline {
my ($options_ref, @allowed) = @_;
my $line = '';
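# rebuild the option string from the parsed list, keeping only options named in @allowed (if any were given)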
foreach my $value (@{ $options_ref }) {
if (@allowed) {
if (!grep( /^$$value{'option'}$/, @allowed) ) {
next;
}
}
$line = "$line$$value{'line'} ";
}
return $line;
}
sub resetreceivestate {
my ($rhost,$fs,$isroot) = @_;
my $fsescaped = escapeshellparam($fs);
if ($rhost ne '') {
$rhost = "$sshcmd $rhost";
# double escaping needed
$fsescaped = escapeshellparam($fsescaped);
}
if ($debug) { print "DEBUG: reset partial receive state of $fs...\n"; }
my $mysudocmd;
if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; }
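# "zfs receive -A" aborts an interrupted resumable receive and discards its partial state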
my $resetcmd = "$rhost $mysudocmd $zfscmd receive -A $fsescaped";
if ($debug) { print "$resetcmd\n"; }
system("$resetcmd") == 0
or die "CRITICAL ERROR: $resetcmd failed: $?";
}
__END__
=head1 NAME
syncoid - ZFS snapshot replication tool
=head1 SYNOPSIS
syncoid [options]... SOURCE TARGET
or syncoid [options]... SOURCE [[USER]@]HOST:TARGET
or syncoid [options]... [[USER]@]HOST:SOURCE TARGET
or syncoid [options]... [[USER]@]HOST:SOURCE [[USER]@]HOST:TARGET
SOURCE Source ZFS dataset. Can be either local or remote
TARGET Target ZFS dataset. Can be either local or remote
Options:
--compress=FORMAT Compresses data during transfer. Currently accepted options are gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none
--identifier=EXTRA Extra identifier which is included in the snapshot name. Can be used for replicating to multiple targets.
--recursive|r Also transfers child datasets
--skip-parent Skips syncing of the parent dataset. Does nothing without the '--recursive' option.
--source-bwlimit=VALUE Bandwidth limit in bytes/kbytes/etc per second on the source transfer
--target-bwlimit=VALUE Bandwidth limit in bytes/kbytes/etc per second on the target transfer
--mbuffer-size=VALUE Specify the mbuffer size (default: 16M), please refer to mbuffer(1) manual page.
--pv-options=OPTIONS Configure how pv displays the progress bar, default '-p -t -e -r -b'
--no-stream Replicates using newest snapshot instead of intermediates
--no-sync-snap Does not create new snapshot, only transfers existing
--keep-sync-snap Don't destroy created sync snapshots
--create-bookmark Creates a zfs bookmark for the newest snapshot on the source after replication succeeds (only works with --no-sync-snap)
--use-hold Adds a hold to the newest snapshot on the source and target after replication succeeds and removes the hold after the next successful replication. The hold name includes the identifier if set. This allows for separate holds in case of multiple targets
--preserve-recordsize Preserves the recordsize on initial sends to the target
--preserve-properties Preserves locally set dataset properties, similar to the zfs send -p flag, but also works for encrypted datasets in non-raw sends
--no-rollback Does not roll back snapshots on the target (this probably requires a readonly target)
--delete-target-snapshots With this option, snapshots which are missing on the source will be destroyed on the target. Use this if you want to manage snapshots on the source only.
--exclude=REGEX Exclude specific datasets which match the given regular expression. Can be specified multiple times
--sendoptions=OPTIONS Use advanced options for zfs send (the arguments are filtered as needed), e.g. syncoid --sendoptions="Lc e" sets zfs send -L -c -e ...
--recvoptions=OPTIONS Use advanced options for zfs receive (the arguments are filtered as needed), e.g. syncoid --recvoptions="ux recordsize o compression=lz4" sets zfs receive -u -x recordsize -o compression=lz4 ...
--sshconfig=FILE Specifies an ssh_config(5) file to be used
--sshkey=FILE Specifies a ssh key to use to connect
--sshport=PORT Connects to remote on a particular port
--sshcipher|c=CIPHER Passes CIPHER to ssh to use a particular cipher set
--sshoption|o=OPTION Passes OPTION to ssh for remote usage. Can be specified multiple times
--insecure-direct-connection=IP:PORT[,IP:PORT] WARNING: DATA IS NOT ENCRYPTED. First address pair is for connecting to the target and the second for listening at the target
--help Prints this helptext
--version Prints the version number
--debug Prints out a lot of additional information during a syncoid run
--monitor-version Currently does nothing
--quiet Suppresses non-error output
--dumpsnaps Dumps a list of snapshots during the run
--no-command-checks Do not check command existence before attempting transfer. Not recommended
--no-resume Don't use the ZFS resume feature if available
--no-clone-handling Don't try to recreate clones on target
--no-privilege-elevation Bypass the root check, for use with ZFS permission delegation
--force-delete Remove target datasets recursively if there are no matching snapshots/bookmarks (also overwrites snapshots with conflicting names)
sanoid-2.2.0/tests/1_one_year/run.sh
#!/bin/bash
set -x
# this test will take hourly, daily and monthly snapshots
# for the whole year of 2017 in the timezone Europe/Vienna
# sanoid is run hourly and no snapshots are pruned
. ../common/lib.sh
POOL_NAME="sanoid-test-1"
POOL_TARGET="" # root
RESULT="/tmp/sanoid_test_result"
RESULT_CHECKSUM="92f2c7afba94b59e8a6f6681705f0aa3f1c61e4aededaa38281e0b7653856935"
# UTC timestamp of start and end
START="1483225200"
END="1514761199"
# prepare
setup
checkEnvironment
disableTimeSync
# set timezone
ln -sf /usr/share/zoneinfo/Europe/Vienna /etc/localtime
timestamp=$START
mkdir -p "${POOL_TARGET}"
truncate -s 5120M "${POOL_TARGET}"/zpool.img
zpool create -f "${POOL_NAME}" "${POOL_TARGET}"/zpool.img
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
while [ $timestamp -le $END ]; do
setdate $timestamp; date; "${SANOID}" --cron --verbose
timestamp=$((timestamp+3600))
done
saveSnapshotList "${POOL_NAME}" "${RESULT}"
# hourly daily monthly
verifySnapshotList "${RESULT}" 8760 365 12 "${RESULT_CHECKSUM}"
sanoid-2.2.0/tests/1_one_year/sanoid.conf
[sanoid-test-1]
use_template = production
[template_production]
hourly = 36
daily = 30
monthly = 3
yearly = 0
autosnap = yes
autoprune = no
sanoid-2.2.0/tests/2_dst_handling/run.sh
#!/bin/bash
set -x
# this test will check the behaviour around a date where DST ends
# with hourly, daily and monthly snapshots checked at a 15-minute interval
# Daylight saving time 2017 in Europe/Vienna began at 02:00 on Sunday, 26 March
# and ended at 03:00 on Sunday, 29 October. All times are in
# Central European Time.
. ../common/lib.sh
POOL_NAME="sanoid-test-2"
POOL_TARGET="" # root
RESULT="/tmp/sanoid_test_result"
RESULT_CHECKSUM="846372ef238f2182b382c77a73ecddf99aa82f28cc9995bcc95592cc78305463"
# UTC timestamp of start and end
START="1509141600"
END="1509400800"
# prepare
setup
checkEnvironment
disableTimeSync
# set timezone
ln -sf /usr/share/zoneinfo/Europe/Vienna /etc/localtime
timestamp=$START
mkdir -p "${POOL_TARGET}"
truncate -s 512M "${POOL_TARGET}"/zpool2.img
zpool create -f "${POOL_NAME}" "${POOL_TARGET}"/zpool2.img
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
while [ $timestamp -le $END ]; do
setdate $timestamp; date; "${SANOID}" --cron --verbose
timestamp=$((timestamp+900))
done
saveSnapshotList "${POOL_NAME}" "${RESULT}"
# hourly daily monthly
verifySnapshotList "${RESULT}" 73 3 1 "${RESULT_CHECKSUM}"
# one more hour because of DST
sanoid-2.2.0/tests/2_dst_handling/sanoid.conf
[sanoid-test-2]
use_template = production
[template_production]
hourly = 36
daily = 30
monthly = 3
yearly = 0
autosnap = yes
autoprune = no
sanoid-2.2.0/tests/common/lib.sh
#!/bin/bash
unamestr="$(uname)"
function setup {
export LANG=C
export LANGUAGE=C
export LC_ALL=C
export SANOID="../../sanoid"
# make sure that there is no cache file
rm -f /var/cache/sanoidsnapshots.txt
# install needed sanoid configuration files
[ -f sanoid.conf ] && cp sanoid.conf /etc/sanoid/sanoid.conf
cp ../../sanoid.defaults.conf /etc/sanoid/sanoid.defaults.conf
}
function checkEnvironment {
ASK=1
which systemd-detect-virt > /dev/null
if [ $? -eq 0 ]; then
systemd-detect-virt --vm > /dev/null
if [ $? -eq 0 ]; then
# we are in a vm
ASK=0
fi
fi
if [ $ASK -eq 1 ]; then
set +x
echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
echo "you should be running this test in a"
echo "dedicated vm, as it will mess with your system!"
echo "Are you sure you want to continue? (y)"
echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
set -x
read -n 1 c
if [ "$c" != "y" ]; then
exit 1
fi
fi
}
function disableTimeSync {
# disable ntp sync
which timedatectl > /dev/null
if [ $? -eq 0 ]; then
timedatectl set-ntp 0
fi
}
function saveSnapshotList {
POOL_NAME="$1"
RESULT="$2"
zfs list -t snapshot -o name -Hr "${POOL_NAME}" | sort > "${RESULT}"
# clear the seconds for comparing
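# e.g. ...@autosnap_2017-06-15_09:30:42_hourly becomes ...@autosnap_2017-06-15_09:30:00_hourly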
if [ "$unamestr" == 'FreeBSD' ]; then
sed -i '' 's/\(autosnap_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]:[0-9][0-9]:\)[0-9][0-9]/\100/g' "${RESULT}"
else
sed -i 's/\(autosnap_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]:[0-9][0-9]:\)[0-9][0-9]/\100/g' "${RESULT}"
fi
}
function verifySnapshotList {
RESULT="$1"
HOURLY_COUNT=$2
DAILY_COUNT=$3
MONTHLY_COUNT=$4
CHECKSUM="$5"
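# count each snapshot type in the saved list and compare with the expected
# values; the checksum over the whole list catches naming differences the
# counts alone would miss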
failed=0
message=""
hourly_count=$(grep -c "autosnap_.*_hourly" < "${RESULT}")
daily_count=$(grep -c "autosnap_.*_daily" < "${RESULT}")
monthly_count=$(grep -c "autosnap_.*_monthly" < "${RESULT}")
if [ "${hourly_count}" -ne "${HOURLY_COUNT}" ]; then
failed=1
message="${message}hourly snapshot count is wrong: ${hourly_count}\n"
fi
if [ "${daily_count}" -ne "${DAILY_COUNT}" ]; then
failed=1
message="${message}daily snapshot count is wrong: ${daily_count}\n"
fi
if [ "${monthly_count}" -ne "${MONTHLY_COUNT}" ]; then
failed=1
message="${message}monthly snapshot count is wrong: ${monthly_count}\n"
fi
checksum=$(shasum -a 256 "${RESULT}" | cut -d' ' -f1)
if [ "${checksum}" != "${CHECKSUM}" ]; then
failed=1
message="${message}result checksum mismatch\n"
fi
if [ "${failed}" -eq 0 ]; then
exit 0
fi
echo "TEST FAILED:" >&2
echo -n -e "${message}" >&2
exit 1
}
function setdate {
TIMESTAMP="$1"
if [ "$unamestr" == 'FreeBSD' ]; then
date -u -f '%s' "${TIMESTAMP}"
else
date --utc --set "@${TIMESTAMP}"
fi
}
sanoid-2.2.0/tests/run-tests.sh
#!/bin/bash
# runs all the available tests
for test in */; do
if [ ! -x "${test}/run.sh" ]; then
continue
fi
testName="${test%/}"
LOGFILE=/tmp/sanoid_test_run_"${testName}".log
pushd . > /dev/null
echo -n "Running test ${testName} ... "
cd "${test}"
echo -n y | bash run.sh > "${LOGFILE}" 2>&1
if [ $? -eq 0 ]; then
echo "[PASS]"
else
echo "[FAILED] (see ${LOGFILE})"
fi
popd > /dev/null
done
sanoid-2.2.0/tests/syncoid/1_bookmark_replication_intermediate/run.sh
#!/bin/bash
# test replication with fallback to bookmarks and all intermediate snapshots
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-1.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-1"
TARGET_CHECKSUM="a23564d5bb8a2babc3ac8936fd82825ad9fff9c82d4924f5924398106bbda9f0 -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@snap1
zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1
# initial replication
../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy last common snapshot on source
zfs destroy "${POOL_NAME}"/src@snap1
# create intermediate snapshots
# sleep is needed so creation time can be used for proper sorting
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap2
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap3
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap4
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap5
# replicate, which should fall back to bookmarks
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}")
checksum=$(echo "${output}" | grep -v syncoid_ | shasum -a 256)
if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
exit 1
fi
exit 0
sanoid-2.2.0/tests/syncoid/2_bookmark_replication_no_intermediate/run.sh
#!/bin/bash
# test replication with fallback to bookmarks without intermediate snapshots
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-2.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-2"
TARGET_CHECKSUM="2460d4d4417793d2c7a5c72cbea4a8a584c0064bf48d8b6daa8ba55076cba66d -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@snap1
zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1
# initial replication
../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy last common snapshot on source
zfs destroy "${POOL_NAME}"/src@snap1
# create intermediate snapshots
# sleep is needed so creation time can be used for proper sorting
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap2
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap3
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap4
sleep 1
zfs snapshot "${POOL_NAME}"/src@snap5
# replicate, which should fall back to bookmarks
../../../syncoid --no-stream --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}")
checksum=$(echo "${output}" | shasum -a 256)
if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
exit 1
fi
exit 0
sanoid-2.2.0/tests/syncoid/3_force_delete/run.sh
#!/bin/bash
# test replication with deletion of target if no matches are found
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-3.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-3"
TARGET_CHECKSUM="0409a2ac216e69971270817189cef7caa91f6306fad9eab1033955b7e7c6bd4c -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs create "${POOL_NAME}"/src/1
zfs create "${POOL_NAME}"/src/2
zfs create "${POOL_NAME}"/src/3
# initial replication
../../../syncoid -r --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy all snapshots of src/2 so no common snapshot with the target remains
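# ("@%" is the ZFS range shorthand matching every snapshot of the dataset)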
zfs destroy "${POOL_NAME}"/src/2@%
zfs snapshot "${POOL_NAME}"/src/2@test
sleep 1
../../../syncoid -r --force-delete --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}" | sed 's/@syncoid_.*$'/@syncoid_/)
checksum=$(echo "${output}" | shasum -a 256)
if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
exit 1
fi
exit 0
sanoid-2.2.0/tests/syncoid/4_bookmark_replication_edge_case/run.sh
#!/bin/bash
# test replication edge cases with bookmarks
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-4.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-4"
TARGET_CHECKSUM="ad383b157b01635ddcf13612ac55577ad9c8dcf3fbfc9eb91792e27ec8db739b -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@snap1
zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1
# initial replication
../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# destroy last common snapshot on source
zfs destroy "${POOL_NAME}"/src@snap1
zfs snapshot "${POOL_NAME}"/src@snap2
# replicate, which should fall back to bookmarks and stop because the target is already on the latest snapshot
../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}")
checksum=$(echo "${output}" | grep -v syncoid_ | shasum -a 256)
if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then
exit 1
fi
exit 0
sanoid-2.2.0/tests/syncoid/5_reset_resume_state/run.sh
#!/bin/bash
# test no resume replication with a target containing a partially received replication stream
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-5.zpool"
MOUNT_TARGET="/tmp/syncoid-test-5.mount"
POOL_SIZE="1000M"
POOL_NAME="syncoid-test-5"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create -o mountpoint="${MOUNT_TARGET}" "${POOL_NAME}"/src
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
dd if=/dev/urandom of="${MOUNT_TARGET}"/big_file bs=1M count=200
../../../syncoid --debug --compress=none --source-bwlimit=2m "${POOL_NAME}"/src "${POOL_NAME}"/dst &
syncoid_pid=$!
sleep 5
function getcpid() {
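# walk the process tree: print each child PID of "$1", then recurse into it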
cpids=$(pgrep -P "$1"|xargs)
for cpid in $cpids;
do
echo "$cpid"
getcpid "$cpid"
done
}
kill $(getcpid $$) || true
wait
sleep 1
../../../syncoid --debug --compress=none --no-resume "${POOL_NAME}"/src "${POOL_NAME}"/dst | grep "reset partial receive state of syncoid"
sleep 1
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
exit $?
sanoid-2.2.0/tests/syncoid/6_reset_resume_state2/run.sh
#!/bin/bash
# test resumable replication where the original snapshot doesn't exist anymore
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-6.zpool"
MOUNT_TARGET="/tmp/syncoid-test-6.mount"
POOL_SIZE="1000M"
POOL_NAME="syncoid-test-6"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create -o mountpoint="${MOUNT_TARGET}" "${POOL_NAME}"/src
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
dd if=/dev/urandom of="${MOUNT_TARGET}"/big_file bs=1M count=200
zfs snapshot "${POOL_NAME}"/src@big
../../../syncoid --debug --no-sync-snap --compress=none --source-bwlimit=2m "${POOL_NAME}"/src "${POOL_NAME}"/dst &
syncoid_pid=$!
sleep 5
function getcpid() {
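# walk the process tree: print each child PID of "$1", then recurse into it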
cpids=$(pgrep -P "$1"|xargs)
for cpid in $cpids;
do
echo "$cpid"
getcpid "$cpid"
done
}
kill $(getcpid $$) || true
wait
sleep 1
zfs destroy "${POOL_NAME}"/src@big
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst # | grep "reset partial receive state of syncoid"
sleep 1
../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
exit $?
sanoid-2.2.0/tests/syncoid/7_preserve_recordsize/run.sh
#!/bin/bash
# test preserving the recordsize from the src filesystem to the target one
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-7.zpool"
MOUNT_TARGET="/tmp/syncoid-test-7.mount"
POOL_SIZE="1000M"
POOL_NAME="syncoid-test-7"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs create -V 100M -o volblocksize=4k "${POOL_NAME}"/src/zvol4
zfs create -V 100M -o volblocksize=16k "${POOL_NAME}"/src/zvol16
zfs create -V 100M -o volblocksize=64k "${POOL_NAME}"/src/zvol64
zfs create -o recordsize=16k "${POOL_NAME}"/src/16
zfs create -o recordsize=32k "${POOL_NAME}"/src/32
zfs create -o recordsize=128k "${POOL_NAME}"/src/128
../../../syncoid --preserve-recordsize --recursive --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
zfs get recordsize -t filesystem -r "${POOL_NAME}"/dst
zfs get volblocksize -t volume -r "${POOL_NAME}"/dst
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/16)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/32)" != "32K" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/128)" != "128K" ]; then
exit 1
fi
sanoid-2.2.0/tests/syncoid/8_force_delete_snapshot/run.sh
#!/bin/bash
# test replication with deletion of conflicting snapshot on target
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-8.zpool"
POOL_SIZE="200M"
POOL_NAME="syncoid-test-8"
TARGET_CHECKSUM="ee439200c9fa54fc33ce301ef64d4240a6c5587766bfeb651c5cf358e11ec89d -"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create "${POOL_NAME}"/src
zfs snapshot "${POOL_NAME}"/src@duplicate
# initial replication
../../../syncoid -r --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
# recreate snapshot with the same name on src
zfs destroy "${POOL_NAME}"/src@duplicate
zfs snapshot "${POOL_NAME}"/src@duplicate
sleep 1
../../../syncoid -r --force-delete --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1
# verify
output1=$(zfs list -t snapshot -r -H -o guid,name "${POOL_NAME}"/src | sed 's/@syncoid_.*$'/@syncoid_/)
checksum1=$(echo "${output1}" | shasum -a 256)
output2=$(zfs list -t snapshot -r -H -o guid,name "${POOL_NAME}"/dst | sed 's/@syncoid_.*$'/@syncoid_/ | sed 's/dst/src/')
checksum2=$(echo "${output2}" | shasum -a 256)
if [ "${checksum1}" != "${checksum2}" ]; then
exit 1
fi
exit 0
sanoid-2.2.0/tests/syncoid/9_preserve_properties/run.sh
#!/bin/bash
# test preserving locally set properties from the src dataset to the target one
set -x
set -e
. ../../common/lib.sh
POOL_IMAGE="/tmp/syncoid-test-9.zpool"
MOUNT_TARGET="/tmp/syncoid-test-9.mount"
POOL_SIZE="1000M"
POOL_NAME="syncoid-test-9"
truncate -s "${POOL_SIZE}" "${POOL_IMAGE}"
zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}"
function cleanUp {
zpool export "${POOL_NAME}"
}
# export pool in any case
trap cleanUp EXIT
zfs create -o recordsize=16k -o xattr=on -o mountpoint=none -o primarycache=none "${POOL_NAME}"/src
zfs create -V 100M -o volblocksize=8k "${POOL_NAME}"/src/zvol8
zfs create -V 100M -o volblocksize=16k -o primarycache=all "${POOL_NAME}"/src/zvol16
zfs create -V 100M -o volblocksize=64k "${POOL_NAME}"/src/zvol64
zfs create -o recordsize=16k -o primarycache=none "${POOL_NAME}"/src/16
zfs create -o recordsize=32k -o acltype=posixacl "${POOL_NAME}"/src/32
../../../syncoid --preserve-properties --recursive --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get mountpoint -H -o value -t filesystem "${POOL_NAME}"/dst)" != "none" ]; then
exit 1
fi
if [ "$(zfs get xattr -H -o value -t filesystem "${POOL_NAME}"/dst)" != "on" ]; then
exit 1
fi
if [ "$(zfs get primarycache -H -o value -t filesystem "${POOL_NAME}"/dst)" != "none" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/16)" != "16K" ]; then
exit 1
fi
if [ "$(zfs get primarycache -H -o value -t filesystem "${POOL_NAME}"/dst/16)" != "none" ]; then
exit 1
fi
if [ "$(zfs get recordsize -H -o value -t filesystem "${POOL_NAME}"/dst/32)" != "32K" ]; then
exit 1
fi
if [ "$(zfs get acltype -H -o value -t filesystem "${POOL_NAME}"/dst/32)" != "posix" ]; then
exit 1
fi
sanoid-2.2.0/tests/syncoid/run-tests.sh
#!/bin/bash
# runs all the available tests
for test in */; do
if [ ! -x "${test}/run.sh" ]; then
continue
fi
testName="${test%/}"
LOGFILE=/tmp/syncoid_test_run_"${testName}".log
pushd . > /dev/null
echo -n "Running test ${testName} ... "
cd "${test}"
echo | bash run.sh > "${LOGFILE}" 2>&1
if [ $? -eq 0 ]; then
echo "[PASS]"
else
echo "[FAILED] (see ${LOGFILE})"
fi
popd > /dev/null
done