pax_global_header00006660000000000000000000000064135571622060014521gustar00rootroot0000000000000052 comment=45d089852339521012b2ebe1b61ea078d69e22d8 sanoid-2.0.3/000077500000000000000000000000001355716220600130005ustar00rootroot00000000000000sanoid-2.0.3/CHANGELIST000066400000000000000000000351331355716220600143510ustar00rootroot000000000000002.0.3 [sanoid] reverted DST handling and improved it as quickfix (@phreaker0) 2.0.2 [overall] documentation updates, new dependencies, small fixes, more warnings (@benyanke, @matveevandrey, @RulerOf, @klemens-u, @johnramsden, @danielewood, @g-a-c, @hartzell, @fryfrog, @phreaker0) [sanoid] changed and simplified DST handling (@shodanshok) [syncoid] reset partially resume state automatically (@phreaker0) [syncoid] handle some zfs erros automatically by parsing the stderr outputs (@phreaker0) [syncoid] fixed ordering of snapshots with the same creation timestamp (@phreaker0) [syncoid] don't use hardcoded paths (@phreaker0) [syncoid] fix for special setup with listsnapshots=on (@phreaker0) [syncoid] check ssh connection on startup (@phreaker0) [syncoid] fix edge case with initial send and no-stream option (@phreaker0) [syncoid] fallback to normal replication if clone recreation fails (@phreaker0) [packaging] ebuild for gentoo (@thehaven) [syncoid] support for zfs bookmark creation (@phreaker0) [syncoid] fixed bookmark edge cases (@phreaker0) [syncoid] handle invalid dataset paths nicely (@phreaker0) [syncoid] fixed resume support check to be zpool based (@phreaker0) [sanoid] added hotspare template (@jimsalterjrs) [syncoid] support for advanced zfs send/recv options (@clinta, @phreaker0) [syncoid] option to change mbuffer size (@TerraTech) [tests] fixes for FreeBSD (@phreaker0) [sanoid] support for zfs recursion (@jMichaelA, @phreaker0) [syncoid] fixed bookmark handling for volumens (@ppcontrib) [sanoid] allow time units for monitoring warn/crit values (@phreaker0) 2.0.1 [sanoid] fixed broken monthly warn/critical monitoring values in default template (@jimsalterjrs) [sanoid] flag to force pruning while filesystem is in an active zfs send/recv (@shodanshok) [syncoid] flags to disable rollbacks (@shodanshok) 2.0.0 [overall] documentation updates, small fixes, more warnings (@sparky3387, @ljwobker, @phreaker0) [syncoid] added force delete flag (@phreaker0) [sanoid] removed sleeping between snapshot taking (@phreaker0) [syncoid] added '--no-privilege-elevation' option to bypass root check (@lopsided98) [sanoid] implemented weekly period (@phreaker0) [syncoid] implemented support for zfs bookmarks as fallback (@phreaker0) [sanoid] support for pre, post and prune snapshot scripts (@jouir, @darkbasic, @phreaker0) [sanoid] ignore snapshots types that are set to 0 (@muff1nman) [packaging] split snapshot taking/pruning into separate systemd units for debian package (@phreaker0) [syncoid] replicate clones (@phreaker0) [syncoid] added compression algorithms: lz4, xz (@spheenik, @phreaker0) [sanoid] added option to defer pruning based on the available pool capacity (@phreaker0) [sanoid] implemented frequent snapshots with configurable period (@phreaker0) [syncoid] prevent a perl warning on systems which doesn't output estimated send size information (@phreaker0) [packaging] dependency fixes (@rodgerd, mabushey) [syncoid] implemented support for excluding children of a specific dataset (@phreaker0) [sanoid] monitor-health command additionally checks vdev members for io and checksum errors (@phreaker0) [syncoid] added ability to skip datasets by a custom dataset property 'syncoid:no-sync' (@attie) [syncoid] don't die on some critical replication errors, but continue with the remaining datasets (@phreaker0) [syncoid] return a non zero exit code if there was a problem replicating datasets (@phreaker0) [syncoid] make local source bwlimit work (@phreaker0) [syncoid] fix 'resume support' detection on FreeBSD (@pit3k) [sanoid] updated INSTALL with missing dependency [sanoid] fixed monitor-health command for pools containing cache and log devices (@phreaker0) [sanoid] quiet flag suppresses all info output (@martinvw) [sanoid] check for empty lockfile which lead to sanoid failing on start (@jasonblewis) [sanoid] added dst handling to prevent multiple invalid snapshots on time shift (@phreaker0) [sanoid] cache improvements, makes sanoid much faster with a huge amount of datasets/snapshots (@phreaker0) [sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0) [syncoid] Added support for ZStandard compression.(@danielewood) [syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0) [syncoid] correctly parse zfs column output, fixes resumeable send with datasets containing spaces (@phreaker0) [syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0) [syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0) [syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0) [sanoid] use UTC by default in unit template and documentation (@phreaker0) [syncoid] don't prune snapshots if instructed to not create them either (@phreaker0) [syncoid] documented compatibility issues with (t)csh shells (@ecoutu) 1.4.18 implemented special character handling and support of ZFS resume/receive tokens by default in syncoid, thank you @phreaker0! 1.4.17 changed die to warn when unexpectedly unable to remove a snapshot - this allows sanoid to continue taking/removing other snapshots not affected by whatever lock prevented the first from being taken or removed 1.4.16 merged @hrast01's extended fix to support -o option1=val,option2=val passthrough to SSH. merged @JakobR's off-by-one fix to stop unnecessary extra snapshots being taken under certain conditions. merged @stardude900's update to INSTALL for FreeBSD users re:symlinks. Implemented @LordAro's update to change DIE to WARN when encountering a dataset with no snapshots and --no-sync-snap set during recursive replication. Implemented @LordAro's update to sanoid.conf to add an ignore template which does not snap, prune, or monitor. 1.4.15 merged @hrast01's -o option to pass ssh CLI options through. Currently only supports a single -o=option argument - in the near future, need to add some simple parsing to expand -o=option1,option2 on the CLI to -o option1 -o option2 as passed to SSH. 1.4.14 fixed significant regression in syncoid - now pulls creation AND guid on each snap; sorts by creation and matches by guid. regression reported in #112 by @da-me, thank you! 1.4.13 Syncoid will now continue trying to replicate other child datasets after one dataset fails replication when called recursively. Eg syncoid -r source/parent target/parent when source/parent/child1 has been deleted and replaced with an imposter will no longer prevent source/parent/child2 from successfully replicating to target/parent/child2. This could still use some cleanup TBH; syncoid SHOULD exit 3 if any of these errors happen (to assist detection of errors in scripting) but now would exit 0. 1.4.12 Sanoid now strips trailing whitespace in template definitions in sanoid.conf, per Github #61 1.4.11 enhanced Syncoid to use zfs `guid` property rather than `creation` property to ensure snapshots on source and target actually match. This immediately prevents conflicts due to timezone differences on source and target, and also paves the way in the future for Syncoid to find matching snapshots even after `zfs rename` on source or target. Thank you Github user @mailinglists35 for the idea! 1.4.10 added --compress=pigz-fast and --compress=pigz-slow. On a Xeon E3-1231v3, pigz-fast is equivalent compression to --compress=gzip but with compressed throughput of 75.2 MiB/s instead of 18.1 MiB/s. pigz-slow is around 5% better compression than compress=gzip with roughly equivalent compressed throughput. Note that pigz-fast produces a whopping 20+% better compression on the test data (a linux boot drive) than lzop does, while still being fast enough to saturate or nearly saturate a real-world gigabit LAN link. The down side: pigz chews through 100% util of all available system threads, if not bottlenecked by the network link speed. Default compression remains lzop for SSH transport, with compression automatically set to none if there's no transport (ie syncoid replication from dataset to dataset on the local machine only). 1.4.9 added -c option to manually specify the SSH cipher used. Must use a cipher supported by both source and target! Thanks Tamas Papp. 1.4.8 added --no-stream argument to syncoid: allows use of -i incrementals (do not replicate a full snapshot stream, only a direct incremental update from oldest to most recent snapshot) instead of the normal -I incrementals which include all intermediate snapshots. added --no-sync-snap, which has syncoid replicate using only the newest PRE-EXISTING snapshot on source, instead of default behavior in which syncoid creates a new, ephemeral syncoid snapshot. 1.4.7a (syncoid only) added standard invocation output when called without source or target as per @rriley and @fajarnugraha suggestions 1.4.7 reverted Perl shebangs to #!/usr/bin/perl - sorry FreeBSD folks, shebanged to /usr/bin/env perl bare calls to syncoid or sanoid (without explicit calls to Perl) don't work on EITHER of our systems. I'm not OK with that, this is going to be an OS localization issue that can either be addressed with BSD-specific packaging, or you can individually address it by editing the shebangs on your own systems OR by doing a one-time ln -s /usr/local/bin/perl to /usr/bin/perl, which will fix the issue for this particular script AND all other Perl scripts developed on non-BSD systems. also temporarily dyked out the set readonly functionality in syncoid - it was causing more problems than it prevented, and using the -F argument with receive prevents incautious writes (including just cd'ing into mounted datasets, if atimes are on) from interrupting syncoid runs anyway. 1.4.6c merged @gusson's pull request to add -sshport argument 1.4.6b updated default cipherlist for syncoid to chacha20-poly1305@openssh.com,arcfour - arcfour isn't supported on newer SSH (in Ubuntu Xenial and FreeBSD), chacha20 isn't supported on some older SSH versions (Ubuntu Precise< I think?) 1.4.6a due to bug in ZFS on Linux which frequently causes errors to return from `zfs set readonly`, changed ==0 or die in setzfsvalue() to ==0 or [complain] - it's not worth causing replication to fail while this ZFS on Linux bug exists. 1.4.6 added a mollyguard to syncoid to help newbies who try to zfs create a new target dataset before doing an initial replication, instead of letting the replication itself create the target. added "==0 or die" to all system() calls in sanoid and syncoid that didn't already have them. 1.4.5 altered shebang to '#!/usr/bin/env perl' for enhanced FreeBSD compatibility 1.4.4 merged pull requests from jjlawren for OmniOS compatibility, added --configdir=/path/to/configs CLI option to sanoid at jjlawrens' request presumably for same 1.4.3 added SSH persistence to syncoid - using socket speeds up SSH overhead 300%! =) one extra commit to get rid of the "Exit request sent." SSH noise at the end. 1.4.2 removed -r flag for zfs destroy of pruned snapshots in sanoid, which unintentionally caused same-name child snapshots to be deleted - thank you Lenz Weber! 1.4.1 updated check_zpool() in sanoid to parse zpool list properly both pre- and post- ZoL v0.6.4 1.4.0 added findoid tool - find and list all versions of a given file in all available ZFS snapshots. use: findoid /path/to/file 1.3.1 whoops - prevent process_children_only from getting set from blank value in defaults 1.3.0 changed monitor_children_only to process_children_only. which keeps sanoid from messing around with empty parent datasets at all. also more thoroughly documented features in default config files. 1.2.0 added monitor_children_only parameter to sanoid.conf for use with recursive definitions - in cases where container dataset is kept empty 1.1.0 woooo - working recursive definitions in Sanoid! Also intelligent config errors in Sanoid; will die with errors if unknown config value is set. 1.0.20 greatly cleaned up config parsing in sanoid, got rid of 'hardcoded defaults' in favor of /etc/sanoid/sanoid.defaults.conf 1.0.19 working recursive sync (sync specified dataset and all child datasets, ie pool/ds, pool/ds/1, pool, ds/1/a, pool/ds/2 ...) with --recursive or -r in syncoid! 1.0.18 updated syncoid to break sync out of main routine and into syncdataset(). this will allow doing recursive sync, in next update :) 1.0.17 updated syncoid to use sudo when necessary if it isn't already root - working user needs NOPASSWD for /sbin/zfs in /etc/sudoers updated syncoid to throw errors on unknown arguments 1.0.16 updated sanoid to use VASTLY improved CLI argument parsing, throw errors on unknown arguments, etc 1.0.15 updated syncoid to accept compression engine flags - --compress=lzo|gzip|none 1.0.14 updated syncoid to reduce output when fetching snapshot list - thank you github user @0xFate. 1.0.13 removed monitor_version again - sorry for the feature instability, forgot I removed it in the first place because I didn't like pulling in so many dependencies for such a trivial feature 1.0.12 patched default sanoid.conf provided to set yearly_mon and yearly_mday correctly 1.0.11 patched bug in yearly snapshots - thank you @stevenolen for the bug report! 1.0.10 added --monitor-version to check installed version against current version in trunk on GitHub 1.0.9 added CR/LF after --monitor-health message output to make CLI usage cleaner looking 1.0.8 added --version checking in sanoid and syncoid 1.0.7 got rid of unnecessary sudo commands in sanoid's --monitor-health 1.0.6 added 'or die' to make sure /var/cache/sanoidsnapshots.txt is writeable 1.0.5 fixed another ps BSD-ism bug in sanoid 1.0.4 updated ps usage in sanoid to comply with BSDisms 1.0.3 nerfed references to $debug (not implemented yet) in sanoid 1.0.2 implemented iszfsbusy check to the thinning routine in sanoid 1.0.1 ported slightly modified iszfsbusy sub from syncoid to sanoid (to keep from thinning snapshots during replications) 1.0.0 initial commit to Github sanoid-2.0.3/FREEBSD.readme000066400000000000000000000022061355716220600152310ustar00rootroot00000000000000FreeBSD users will need to change the Perl shebangs at the top of the executables from #!/usr/bin/perl to #!/usr/local/bin/perl in most cases. Sorry folks, but if I set this with #!/usr/bin/env perl as suggested, then nothing works properly from a typical cron environment on EITHER operating system, Linux or BSD. I'm mostly using Linux systems, so I get to set the shebang for my use and give you folks a FREEBSD readme rather than the other way around. =) If you don't want to have to change the shebangs, your other option is to drop a symlink on your system: root@bsd:~# ln -s /usr/local/bin/perl /usr/bin/perl After putting this symlink in place, ANY perl script shebanged for Linux will work on your system too. Syncoid assumes a bourne style shell on remote hosts. Using (t)csh (the default for root under FreeBSD) has some known issues: * If mbuffer is present, syncoid will fail with an "Ambiguous output redirect." error. So if you: root@bsd:~# ln -s /usr/local/bin/mbuffer /usr/bin/mbuffer make sure the remote user is using an sh compatible shell. To change to a compatible shell, use the chsh command: root@bsd:~# chsh -s /bin/sh sanoid-2.0.3/INSTALL.md000066400000000000000000000130521355716220600144310ustar00rootroot00000000000000# Installation **Sanoid** and **Syncoid** are complementary but separate pieces of software. To install and configure them, follow the guide below for your operating system. Everything in `code blocks` should be copy-pasteable. If your OS isn't listed, a set of general instructions is at the end of the list and you can perform the process manually. - [Installation](#installation) - [Debian/Ubuntu](#debianubuntu) - [CentOS](#centos) - [FreeBSD](#freebsd) - [Other OSes](#other-oses) - [Configuration](#configuration) - [Sanoid](#sanoid) ## Debian/Ubuntu Install prerequisite software: ```bash apt install debhelper libcapture-tiny-perl libconfig-inifiles-perl pv lzop mbuffer ``` Clone this repo, build the debian package and install it (alternatively you can skip the package and do it manually like described below for CentOS): ```bash # Download the repo as root to avoid changing permissions later sudo git clone https://github.com/jimsalterjrs/sanoid.git cd sanoid ln -s packages/debian . dpkg-buildpackage -uc -us apt install ../sanoid_*_all.deb ``` Enable sanoid timer: ```bash # enable and start the sanoid timer sudo systemctl enable sanoid.timer sudo systemctl start sanoid.timer ``` ## CentOS Install prerequisite software: ```bash # Install and enable epel if we don't already have it, and git too sudo yum install -y epel-release git # Install the packages that Sanoid depends on: sudo yum install -y perl-Config-IniFiles perl-Data-Dumper perl-Capture-Tiny lzop mbuffer mhash pv ``` Clone this repo, then put the executables and config files into the appropriate directories: ```bash # Download the repo as root to avoid changing permissions later sudo git clone https://github.com/jimsalterjrs/sanoid.git cd sanoid # Install the executables sudo cp sanoid syncoid findoid sleepymutex /usr/local/sbin # Create the config directory sudo mkdir /etc/sanoid # Install default config sudo cp sanoid.defaults.conf /etc/sanoid # Create a blank config file sudo touch /etc/sanoid/sanoid.conf # Place the sample config in the conf directory for reference sudo cp sanoid.conf /etc/sanoid/sanoid.example.conf ``` Create a systemd service: ```bash cat << "EOF" | sudo tee /etc/systemd/system/sanoid.service [Unit] Description=Snapshot ZFS Pool Requires=zfs.target After=zfs.target Wants=sanoid-prune.service Before=sanoid-prune.service ConditionFileNotEmpty=/etc/sanoid/sanoid.conf [Service] Environment=TZ=UTC Type=oneshot ExecStart=/usr/local/sbin/sanoid --take-snapshots --verbose EOF cat << "EOF" | sudo tee /etc/systemd/system/sanoid-prune.service [Unit] Description=Cleanup ZFS Pool Requires=zfs.target After=zfs.target sanoid.service ConditionFileNotEmpty=/etc/sanoid/sanoid.conf [Service] Environment=TZ=UTC Type=oneshot ExecStart=/usr/local/sbin/sanoid --prune-snapshots --verbose [Install] WantedBy=sanoid.service EOF ``` And a systemd timer that will execute **Sanoid** once per quarter hour (Decrease the interval as suitable for configuration): ```bash cat << "EOF" | sudo tee /etc/systemd/system/sanoid.timer [Unit] Description=Run Sanoid Every 15 Minutes Requires=sanoid.service [Timer] OnCalendar=*:0/15 Persistent=true [Install] WantedBy=timers.target EOF ``` Reload systemd and start our timer: ```bash # Tell systemd about our new service definitions sudo systemctl daemon-reload # Enable sanoid-prune.service to allow it to be triggered by sanoid.service sudo systemctl enable sanoid-prune.service # Enable and start the Sanoid timer sudo systemctl enable sanoid.timer sudo systemctl start sanoid.timer ``` Now, proceed to configure [**Sanoid**](#configuration) ## FreeBSD Install prerequisite software: ```bash pkg install p5-Config-Inifiles p5-Capture-Tiny pv mbuffer lzop ``` **Additional notes:** * FreeBSD may place pv and lzop in somewhere other than /usr/bin — syncoid currently does not check path. * Simplest path workaround is symlinks, eg `ln -s /usr/local/bin/lzop /usr/bin/lzop` or similar, as appropriate to create links in **/usr/bin** to wherever the utilities actually are on your system. * See note about mbuffer and other things in FREEBSD.readme ## Other OSes **Sanoid** depends on the Perl module Config::IniFiles and will not operate without it. Config::IniFiles may be installed from CPAN, though the project strongly recommends using your distribution's repositories instead. **Syncoid** depends on ssh, pv, gzip, lzop, and mbuffer. It can run with reduced functionality in the absence of any or all of the above. SSH is only required for remote synchronization. On newer FreeBSD and Ubuntu Xenial chacha20-poly1305@openssh.com, on other distributions arcfour crypto is the default for SSH transport since v1.4.6. Syncoid runs will fail if one of them is not available on either end of the transport. ### General outline for installation 1. Install prerequisites: Perl module Config::IniFiles, ssh, pv, gzip, lzop, and mbuffer 2. Download the **Sanoid** repo 3. Create the config directory `/etc/sanoid` and put `sanoid.defaults.conf` in there, and create `sanoid.conf` in it too 4. Create a cron job or a systemd timer that runs `sanoid --cron` once per minute # Configuration **Sanoid** won't do anything useful unless you tell it how to handle your ZFS datasets in `/etc/sanoid/sanoid.conf`. **Syncoid** is a command line utility that doesn't require any configuration, with all of its switches set at runtime. ## Sanoid Take a look at the files `sanoid.defaults.conf` and` sanoid.conf.example` for all possible configuration options. Also have a look at the README.md sanoid-2.0.3/LICENSE000066400000000000000000001044611355716220600140130ustar00rootroot00000000000000GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. {one line to give the program's name and a brief idea of what it does.} Copyright (C) {year} {name of author} This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: {project} Copyright (C) {year} {fullname} This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read . sanoid-2.0.3/README.md000066400000000000000000000256161355716220600142710ustar00rootroot00000000000000

sanoid logo

Sanoid is a policy-driven snapshot management tool for ZFS filesystems. When combined with the Linux KVM hypervisor, you can use it to make your systems functionally immortal.

sanoid rollback demo
(Real time demo: rolling back a full-scale cryptomalware infection in seconds!)

More prosaically, you can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML config file at /etc/sanoid/sanoid.conf. (Sanoid also requires a "defaults" file located at /etc/sanoid/sanoid.defaults.conf, which is not user-editable.) A typical Sanoid system would have a single cron job: ``` * * * * * TZ=UTC /usr/local/bin/sanoid --cron ``` `Note`: Using UTC as timezone is recommend to prevent problems with daylight saving times And its /etc/sanoid/sanoid.conf might look something like this: ``` [data/home] use_template = production [data/images] use_template = production recursive = yes process_children_only = yes [data/images/win7] hourly = 4 ############################# # templates below this line # ############################# [template_production] frequently = 0 hourly = 36 daily = 30 monthly = 3 yearly = 0 autosnap = yes autoprune = yes ``` Which would be enough to tell sanoid to take and keep 36 hourly snapshots, 30 dailies, 3 monthlies, and no yearlies for all datasets under data/images (but not data/images itself, since process_children_only is set). Except in the case of data/images/win7, which follows the same template (since it's a child of data/images) but only keeps 4 hourlies for whatever reason. ##### Sanoid Command Line Options + --cron This will process your sanoid.conf file, create snapshots, then purge expired ones. + --configdir Specify a location for the config file named sanoid.conf. Defaults to /etc/sanoid + --take-snapshots This will process your sanoid.conf file, create snapshots, but it will NOT purge expired ones. (Note that snapshots taken are atomic in an individual dataset context, not a global context - snapshots of pool/dataset1 and pool/dataset2 will each be internally consistent and atomic, but one may be a few filesystem transactions "newer" than the other.) + --prune-snapshots This will process your sanoid.conf file, it will NOT create snapshots, but it will purge expired ones. + --force-prune Purges expired snapshots even if a send/recv is in progress + --monitor-snapshots This option is designed to be run by a Nagios monitoring system. It reports on the health of your snapshots. + --monitor-health This option is designed to be run by a Nagios monitoring system. It reports on the health of the zpool your filesystems are on. It only monitors filesystems that are configured in the sanoid.conf file. + --monitor-capacity This option is designed to be run by a Nagios monitoring system. It reports on the capacity of the zpool your filesystems are on. It only monitors pools that are configured in the sanoid.conf file. + --force-update This clears out sanoid's zfs snapshot listing cache. This is normally not needed. + --version This prints the version number, and exits. + --quiet Supress non-error output. + --verbose This prints additional information during the sanoid run. + --debug This prints out quite alot of additional information during a sanoid run, and is normally not needed. + --readonly Skip creation/deletion of snapshots (Simulate). + --help Show help message. ---------- # Syncoid Sanoid also includes a replication tool, syncoid, which facilitates the asynchronous incremental replication of ZFS filesystems. A typical syncoid command might look like this: ``` syncoid data/images/vm backup/images/vm ``` Which would replicate the specified ZFS filesystem (aka dataset) from the data pool to the backup pool on the local system, or ``` syncoid data/images/vm root@remotehost:backup/images/vm ``` Which would push-replicate the specified ZFS filesystem from the local host to remotehost over an SSH tunnel, or ``` syncoid root@remotehost:data/images/vm backup/images/vm ``` Which would pull-replicate the filesystem from the remote host to the local system over an SSH tunnel. Syncoid supports recursive replication (replication of a dataset and all its child datasets) and uses mbuffer buffering, lzop compression, and pv progress bars if the utilities are available on the systems used. If ZFS supports resumeable send/receive streams on both the source and target those will be enabled as default. As of 1.4.18, syncoid also automatically supports and enables resume of interrupted replication when both source and target support this feature. ##### Syncoid Dataset Properties + syncoid:sync Available values: + `true` (default if unset) This dataset will be synchronised to all hosts. + `false` This dataset will not be synchronised to any hosts - it will be skipped. This can be useful for preventing certain datasets from being transferred when recursively handling a tree. + `host1,host2,...` A comma separated list of hosts. This dataset will only be synchronised by hosts listed in the property. _Note_: this check is performed by the host running `syncoid`, thus the local hostname must be present for inclusion during a push operation // the remote hostname must be present for a pull. _Note_: this will also prevent syncoid from handling the dataset if given explicitly on the command line. _Note_: syncing a child of a no-sync dataset will currently result in a critical error. _Note_: empty properties will be handled as if they were unset. ##### Syncoid Command Line Options + [source] This is the source dataset. It can be either local or remote. + [destination] This is the destination dataset. It can be either local or remote. + --identifier= Adds the given identifier to the snapshot name after "syncoid_" prefix and before the hostname. This enables the use case of reliable replication to multiple targets from the same host. The following chars are allowed: a-z, A-Z, 0-9, _, -, : and . . + -r --recursive This will also transfer child datasets. + --skip-parent This will skip the syncing of the parent dataset. Does nothing without '--recursive' option. + --compress Currently accepted options: gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none. If the selected compression method is unavailable on the source and destination, no compression will be used. + --source-bwlimit This is the bandwidth limit in bytes (kbytes, mbytes, etc) per second imposed upon the source. This is mainly used if the target does not have mbuffer installed, but bandwidth limits are desired. + --target-bw-limit This is the bandwidth limit in bytes (kbytes, mbytesm etc) per second imposed upon the target. This is mainly used if the source does not have mbuffer installed, but bandwidth limits are desired. + --no-command-checks Does not check the existence of commands before attempting the transfer, providing administrators a way to run the tool with minimal overhead and maximum speed, at risk of potentially failed replication, or other possible edge cases. It assumes all programs are available, and should not be used in most situations. This is an not an officially supported run mode. + --no-stream This argument tells syncoid to use -i incrementals, not -I. This updates the target with the newest snapshot from the source, without replicating the intermediate snapshots in between. (If used for an initial synchronization, will do a full replication from newest snapshot and exit immediately, rather than starting with the oldest and then doing an immediate -i to the newest.) + --no-sync-snap This argument tells syncoid to restrict itself to existing snapshots, instead of creating a semi-ephemeral syncoid snapshot at execution time. Especially useful in multi-target (A->B, A->C) replication schemes, where you might otherwise accumulate a large number of foreign syncoid snapshots. + --create-bookmark This argument tells syncoid to create a zfs bookmark for the newest snapshot after it got replicated successfully. The bookmark name will be equal to the snapshot name. Only works in combination with the --no-sync-snap option. This can be very useful for irregular replication where the last matching snapshot on the source was already deleted but the bookmark remains so a replication is still possible. + --no-clone-rollback Do not rollback clones on target + --no-rollback Do not rollback anything (clones or snapshots) on target host + --exclude=REGEX The given regular expression will be matched against all datasets which would be synced by this run and excludes them. This argument can be specified multiple times. + --no-resume This argument tells syncoid to not use resumeable zfs send/receive streams. + --force-delete Remove target datasets recursively (WARNING: this will also affect child datasets with matching snapshots/bookmarks), if there are no matching snapshots/bookmarks. + --no-clone-handling This argument tells syncoid to not recreate clones on the targe on initial sync and doing a normal replication instead. + --dumpsnaps This prints a list of snapshots during the run. + --no-privilege-elevation Bypass the root check and assume syncoid has the necessary permissions (for use with ZFS permission delegation). + --sshport Allow sync to/from boxes running SSH on non-standard ports. + --sshcipher Instruct ssh to use a particular cipher set. + --sshoption Passes option to ssh. This argument can be specified multiple times. + --sshkey Use specified identity file as per ssh -i. + --quiet Supress non-error output. + --debug This prints out quite alot of additional information during a sanoid run, and is normally not needed. + --help Show help message. + --version Print the version and exit. + --monitor-version This doesn't do anything right now. Note that the sync snapshots syncoid creates are not atomic in a global context: sync snapshots of pool/dataset1 and pool/dataset2 will each be internally consistent, but one may be a few filesystem transactions "newer" than the other. (This does not affect the consistency of snapshots already taken in other ways, which syncoid replicates in the overall stream unless --no-stream is specified. So if you want to manually zfs snapshot -R pool@1 before replicating with syncoid, the global atomicity of pool/dataset1@1 and pool/dataset2@1 will still be intact.) sanoid-2.0.3/VERSION000066400000000000000000000000061355716220600140440ustar00rootroot000000000000002.0.3 sanoid-2.0.3/findoid000077500000000000000000000111661355716220600143470ustar00rootroot00000000000000#!/usr/bin/perl # this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved # from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this # project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE. use strict; use warnings; my $zfs = '/sbin/zfs'; my %args = getargs(@ARGV); my $progversion = '1.4.7'; if ($args{'version'}) { print "$progversion\n"; exit 0; } my $dataset = getdataset($args{'path'}); my %versions = getversions($args{'path'}, $dataset); foreach my $version (sort { $versions{$a}{'mtime'} <=> $versions{$b}{'mtime'} } keys %versions) { my $disptime = localtime($versions{$version}{'mtime'}); my $dispsize = humansize($versions{$version}{'size'}); print "$disptime\t$dispsize\t$version\n"; } exit 0; ################################################################### ################################################################### ################################################################### sub humansize { my ($rawsize) = @_; my $humansize; if ($rawsize > 1024*1024*1024) { $humansize = sprintf("%.1f",$rawsize/1024/1024/1024) . ' GB'; } elsif ($rawsize > 1024*1024) { $humansize = sprintf("%.1f",$rawsize/1024/1024) . ' MB'; } elsif ($rawsize > 255) { $humansize = sprintf("%.1f",$rawsize/1024) . ' KB'; } else { $humansize = $rawsize . ' Bytes'; } return $humansize; } sub getversions { my ($path, $dataset) = @_; my @snaps = findsnaps($dataset, $args{'path'}); my $snappath = '.zfs/snapshot'; my $relpath = $path; $relpath =~ s/^$dataset\///; my %versions; foreach my $snap (@snaps) { my $filename = "$dataset/$snappath/$snap/$relpath"; my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,$atime,$mtime,$ctime,$blksize,$blocks) = stat($filename); # only push to the $versions hash if this size and mtime aren't already present (simple dedupe) my $duplicate = 0; foreach my $version (keys %versions) { if ($versions{$version}{'size'} eq $size && $versions{$version}{'mtime'} eq $mtime) { $duplicate = 1; } } if (! $duplicate) { $versions{$filename}{'size'} = $size; $versions{$filename}{'mtime'} = $mtime; } } return %versions; } sub findsnaps { my ($dataset, $path) = @_; my $snappath = '.zfs/snapshot'; my $relpath = $path; $relpath =~ s/^$dataset//; my @snaps; opendir (my $dh, "$dataset/$snappath"); while (my $dir=(readdir $dh)) { if ($dir ne '.' && $dir ne '..') { push @snaps, $dir; } } closedir $dh; return @snaps; } sub getdataset { my ($path) = @_; open FH, "$zfs list -Ho mountpoint |"; my @datasets = ; close FH; my @matchingdatasets; foreach my $dataset (@datasets) { chomp $dataset; if ( $path =~ /^$dataset/ ) { push @matchingdatasets, $dataset; } } my $bestmatch = ''; foreach my $dataset (@matchingdatasets) { if ( length $dataset > length $bestmatch ) { $bestmatch = $dataset; } } return $bestmatch; } sub getargs { my @args = @_; my %args; my %novaluearg; my %validarg; push my @validargs, ('debug','version'); foreach my $item (@validargs) { $validarg{$item} = 1; } push my @novalueargs, ('debug','version'); foreach my $item (@novalueargs) { $novaluearg{$item} = 1; } while (my $rawarg = shift(@args)) { my $arg = $rawarg; my $argvalue; if ($rawarg =~ /=/) { # user specified the value for a CLI argument with = # instead of with blank space. separate appropriately. $argvalue = $arg; $arg =~ s/=.*$//; $argvalue =~ s/^.*=//; } if ($rawarg =~ /^--/) { # doubledash arg $arg =~ s/^--//; if (! $validarg{$arg}) { die "ERROR: don't understand argument $rawarg.\n"; } if ($novaluearg{$arg}) { $args{$arg} = 1; } else { # if this CLI arg takes a user-specified value and # we don't already have it, then the user must have # specified with a space, so pull in the next value # from the array as this value rather than as the # next argument. if ($argvalue eq '') { $argvalue = shift(@args); } $args{$arg} = $argvalue; } } elsif ($arg =~ /^-/) { # singledash arg $arg =~ s/^-//; if (! $validarg{$arg}) { die "ERROR: don't understand argument $rawarg.\n"; } if ($novaluearg{$arg}) { $args{$arg} = 1; } else { # if this CLI arg takes a user-specified value and # we don't already have it, then the user must have # specified with a space, so pull in the next value # from the array as this value rather than as the # next argument. if ($argvalue eq '') { $argvalue = shift(@args); } $args{$arg} = $argvalue; } } else { # bare arg $args{'path'} = $arg; } } return %args; } sanoid-2.0.3/packages/000077500000000000000000000000001355716220600145565ustar00rootroot00000000000000sanoid-2.0.3/packages/debian/000077500000000000000000000000001355716220600160005ustar00rootroot00000000000000sanoid-2.0.3/packages/debian/.gitignore000066400000000000000000000001171355716220600177670ustar00rootroot00000000000000*.debhelper *.debhelper.log *.substvars debhelper-build-stamp files sanoid tmp sanoid-2.0.3/packages/debian/TODO000066400000000000000000000011701355716220600164670ustar00rootroot00000000000000- This package needs to be a 3.0 (quilt) format, not 3.0 (native). - Fix the changelog - Move the packaging out to a separate repository, or at a minimum, a separate branch. - Provide an extended description in debian/control - Figure out a plan for sanoid.defaults.conf. It is not supposed to be edited, so it shouldn't be installed in /etc. At a minimum, install it under /usr and make a symlink, but preferably patch sanoid to look there directly. - Man pages are necessary for all the utilities installed. - With these, there is probably no need to ship README.md. - Break out syncoid into a separate package? sanoid-2.0.3/packages/debian/changelog000066400000000000000000000152741355716220600176630ustar00rootroot00000000000000sanoid (2.0.3) unstable; urgency=medium [sanoid] reverted DST handling and improved it as quickfix (@phreaker0) -- Jim Salter Wed, 02 Oct 2019 17:00:00 +0100 sanoid (2.0.2) unstable; urgency=medium [overall] documentation updates, new dependencies, small fixes, more warnings (@benyanke, @matveevandrey, @RulerOf, @klemens-u, @johnramsden, @danielewood, @g-a-c, @hartzell, @fryfrog, @phreaker0) [syncoid] changed and simplified DST handling (@shodanshok) [syncoid] reset partially resume state automatically (@phreaker0) [syncoid] handle some zfs erros automatically by parsing the stderr outputs (@phreaker0) [syncoid] fixed ordering of snapshots with the same creation timestamp (@phreaker0) [syncoid] don't use hardcoded paths (@phreaker0) [syncoid] fix for special setup with listsnapshots=on (@phreaker0) [syncoid] check ssh connection on startup (@phreaker0) [syncoid] fix edge case with initial send and no-stream option (@phreaker0) [syncoid] fallback to normal replication if clone recreation fails (@phreaker0) [packaging] ebuild for gentoo (@thehaven) [syncoid] support for zfs bookmark creation (@phreaker0) [syncoid] fixed bookmark edge cases (@phreaker0) [syncoid] handle invalid dataset paths nicely (@phreaker0) [syncoid] fixed resume support check to be zpool based (@phreaker0) [sanoid] added hotspare template (@jimsalterjrs) [syncoid] support for advanced zfs send/recv options (@clinta, @phreaker0) [syncoid] option to change mbuffer size (@TerraTech) [tests] fixes for FreeBSD (@phreaker0) [sanoid] support for zfs recursion (@jMichaelA, @phreaker0) [syncoid] fixed bookmark handling for volumens (@ppcontrib) [sanoid] allow time units for monitoring warn/crit values (@phreaker0) -- Jim Salter Fri, 20 Sep 2019 23:01:00 +0100 sanoid (2.0.1) unstable; urgency=medium [sanoid] fixed broken monthly warn/critical monitoring values in default template (@jimsalterjrs) [sanoid] flag to force pruning while filesystem is in an active zfs send/recv (@shodanshok) [syncoid] flags to disable rollbacks (@shodanshok) -- Jim Salter Fri, 14 Dec 2018 16:48:00 +0100 sanoid (2.0.0) unstable; urgency=medium [overall] documentation updates, small fixes, more warnings (@sparky3387, @ljwobker, @phreaker0) [syncoid] added force delete flag (@phreaker0) [sanoid] removed sleeping between snapshot taking (@phreaker0) [syncoid] added '--no-privilege-elevation' option to bypass root check (@lopsided98) [sanoid] implemented weekly period (@phreaker0) [syncoid] implemented support for zfs bookmarks as fallback (@phreaker0) [sanoid] support for pre, post and prune snapshot scripts (@jouir, @darkbasic, @phreaker0) [sanoid] ignore snapshots types that are set to 0 (@muff1nman) [packaging] split snapshot taking/pruning into separate systemd units for debian package (@phreaker0) [syncoid] replicate clones (@phreaker0) [syncoid] added compression algorithms: lz4, xz (@spheenik, @phreaker0) [sanoid] added option to defer pruning based on the available pool capacity (@phreaker0) [sanoid] implemented frequent snapshots with configurable period (@phreaker0) [syncoid] prevent a perl warning on systems which doesn't output estimated send size information (@phreaker0) [packaging] dependency fixes (@rodgerd, mabushey) [syncoid] implemented support for excluding children of a specific dataset (@phreaker0) [sanoid] monitor-health command additionally checks vdev members for io and checksum errors (@phreaker0) [syncoid] added ability to skip datasets by a custom dataset property 'syncoid:no-sync' (@attie) [syncoid] don't die on some critical replication errors, but continue with the remaining datasets (@phreaker0) [syncoid] return a non zero exit code if there was a problem replicating datasets (@phreaker0) [syncoid] make local source bwlimit work (@phreaker0) [syncoid] fix 'resume support' detection on FreeBSD (@pit3k) [sanoid] updated INSTALL with missing dependency [sanoid] fixed monitor-health command for pools containing cache and log devices (@phreaker0) [sanoid] quiet flag suppresses all info output (@martinvw) [sanoid] check for empty lockfile which lead to sanoid failing on start (@jasonblewis) [sanoid] added dst handling to prevent multiple invalid snapshots on time shift (@phreaker0) [sanoid] cache improvements, makes sanoid much faster with a huge amount of datasets/snapshots (@phreaker0) [sanoid] implemented monitor-capacity flag for checking zpool capacity limits (@phreaker0) [syncoid] Added support for ZStandard compression.(@danielewood) [syncoid] implemented support for excluding datasets from replication with regular expressions (@phreaker0) [syncoid] correctly parse zfs column output, fixes resumeable send with datasets containing spaces (@phreaker0) [syncoid] added option for using extra identification in the snapshot name for replication to multiple targets (@phreaker0) [syncoid] added option for skipping the parent dataset in recursive replication (@phreaker0) [syncoid] typos (@UnlawfulMonad, @jsavikko, @phreaker0) [sanoid] use UTC by default in unit template and documentation (@phreaker0) [syncoid] don't prune snapshots if instructed to not create them either (@phreaker0) [syncoid] documented compatibility issues with (t)csh shells (@ecoutu) -- Jim Salter Wed, 04 Dec 2018 18:10:00 -0400 sanoid (1.4.18) unstable; urgency=medium implemented special character handling and support of ZFS resume/receive tokens by default in syncoid, thank you @phreaker0! -- Jim Salter Wed, 25 Apr 2018 16:24:00 -0400 sanoid (1.4.17) unstable; urgency=medium changed die to warn when unexpectedly unable to remove a snapshot - this allows sanoid to continue taking/removing other snapshots not affected by whatever lock prevented the first from being taken or removed -- Jim Salter Wed, 8 Nov 2017 15:25:00 -0400 sanoid (1.4.16) unstable; urgency=medium * merged @hrast01's extended fix to support -o option1=val,option2=val passthrough to SSH. merged @JakobR's * off-by-one fix to stop unnecessary extra snapshots being taken under certain conditions. merged @stardude900's * update to INSTALL for FreeBSD users re:symlinks. Implemented @LordAro's update to change DIE to WARN when * encountering a dataset with no snapshots and --no-sync-snap set during recursive replication. Implemented * @LordAro's update to sanoid.conf to add an ignore template which does not snap, prune, or monitor. -- Jim Salter Wed, 9 Aug 2017 12:28:49 -0400 sanoid-2.0.3/packages/debian/compat000066400000000000000000000000031355716220600171770ustar00rootroot0000000000000010 sanoid-2.0.3/packages/debian/control000066400000000000000000000012531355716220600174040ustar00rootroot00000000000000Source: sanoid Section: utils Priority: optional Maintainer: Jim Salter Build-Depends: debhelper (>= 10) Standards-Version: 4.1.2 Homepage: https://github.com/jimsalterjrs/sanoid Vcs-Git: https://github.com/jimsalterjrs/sanoid.git Vcs-Browser: https://github.com/jimsalterjrs/sanoid Package: sanoid Architecture: all Depends: libcapture-tiny-perl, libconfig-inifiles-perl, systemd, zfsutils-linux | zfs, ${misc:Depends}, ${perl:Depends} Recommends: gzip, lzop, mbuffer, openssh-client | ssh-client, pv Description: Policy-driven snapshot management and replication tools sanoid-2.0.3/packages/debian/copyright000066400000000000000000000021561355716220600177370ustar00rootroot00000000000000Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/ Upstream-Name: sanoid Source: https://github.com/jimsalterjrs/sanoid Files: * Copyright: 2017 Jim Salter License: GPL-3.0+ Files: debian/* Copyright: 2017 Jim Salter 2017 Richard Laager License: GPL-3.0+ License: GPL-3.0+ This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. . This package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. . You should have received a copy of the GNU General Public License along with this program. If not, see . . On Debian systems, the complete text of the GNU General Public License version 3 can be found in "/usr/share/common-licenses/GPL-3". sanoid-2.0.3/packages/debian/rules000077500000000000000000000016051355716220600170620ustar00rootroot00000000000000#!/usr/bin/make -f # See debhelper(7) for more info # output every command that modifies files on the build system. #export DH_VERBOSE = 1 %: dh $@ DESTDIR = $(CURDIR)/debian/sanoid override_dh_auto_install: install -d $(DESTDIR)/etc/sanoid install -m 664 sanoid.defaults.conf $(DESTDIR)/etc/sanoid install -d $(DESTDIR)/lib/systemd/system install -m 664 debian/sanoid-prune.service debian/sanoid.timer \ $(DESTDIR)/lib/systemd/system install -d $(DESTDIR)/usr/sbin install -m 775 \ findoid sanoid sleepymutex syncoid \ $(DESTDIR)/usr/sbin install -d $(DESTDIR)/usr/share/doc/sanoid install -m 664 sanoid.conf \ $(DESTDIR)/usr/share/doc/sanoid/sanoid.conf.example override_dh_installinit: dh_installinit --noscripts override_dh_systemd_enable: dh_systemd_enable sanoid.timer dh_systemd_enable sanoid-prune.service override_dh_systemd_start: dh_systemd_start sanoid.timer sanoid-2.0.3/packages/debian/sanoid-prune.service000066400000000000000000000004131355716220600217640ustar00rootroot00000000000000[Unit] Description=Cleanup ZFS Pool Requires=zfs.target After=zfs.target sanoid.service ConditionFileNotEmpty=/etc/sanoid/sanoid.conf [Service] Environment=TZ=UTC Type=oneshot ExecStart=/usr/sbin/sanoid --prune-snapshots --verbose [Install] WantedBy=sanoid.service sanoid-2.0.3/packages/debian/sanoid.README.Debian000066400000000000000000000001341355716220600213130ustar00rootroot00000000000000To start, copy the example config file in /usr/share/doc/sanoid to /etc/sanoid/sanoid.conf. sanoid-2.0.3/packages/debian/sanoid.docs000066400000000000000000000000121355716220600201200ustar00rootroot00000000000000README.md sanoid-2.0.3/packages/debian/sanoid.service000066400000000000000000000004201355716220600206330ustar00rootroot00000000000000[Unit] Description=Snapshot ZFS Pool Requires=zfs.target After=zfs.target Wants=sanoid-prune.service Before=sanoid-prune.service ConditionFileNotEmpty=/etc/sanoid/sanoid.conf [Service] Environment=TZ=UTC Type=oneshot ExecStart=/usr/sbin/sanoid --take-snapshots --verbose sanoid-2.0.3/packages/debian/sanoid.timer000066400000000000000000000001741355716220600203210ustar00rootroot00000000000000[Unit] Description=Run Sanoid Every 15 Minutes [Timer] OnCalendar=*:0/15 Persistent=true [Install] WantedBy=timers.target sanoid-2.0.3/packages/debian/source/000077500000000000000000000000001355716220600173005ustar00rootroot00000000000000sanoid-2.0.3/packages/debian/source/format000066400000000000000000000000151355716220600205070ustar00rootroot000000000000003.0 (native) sanoid-2.0.3/packages/gentoo/000077500000000000000000000000001355716220600160515ustar00rootroot00000000000000sanoid-2.0.3/packages/gentoo/sys-fs/000077500000000000000000000000001355716220600172755ustar00rootroot00000000000000sanoid-2.0.3/packages/gentoo/sys-fs/sanoid/000077500000000000000000000000001355716220600205525ustar00rootroot00000000000000sanoid-2.0.3/packages/gentoo/sys-fs/sanoid/Manifest000066400000000000000000000020641355716220600222450ustar00rootroot00000000000000AUX sanoid.cron 45 BLAKE2B 3f6294bbbf485dc21a565cd2c8da05a42fb21cdaabdf872a21500f1a7338786c60d4a1fd188bbf81ce85f06a376db16998740996f47c049707a5109bdf02c052 SHA512 7676b32f21e517e8c84a097c7934b54097cf2122852098ea756093ece242125da3f6ca756a6fbb82fc348f84b94bfd61639e86e0bfa4bbe7abf94a8a4c551419 DIST sanoid-2.0.2.tar.gz 115797 BLAKE2B d00a038062df3dd8e77d3758c7b80ed6da0bac4931fb6df6adb72eeddb839c63d5129e0a281948a483d02165dad5a8505e1a55dc851360d3b366371038908142 SHA512 73e3d25dbdd58a78ffc4384584304e7230c5f31a660ce6d2a9b9d52a92a3796f1bc25ae865dbc74ce586cbd6169dbb038340f4a28e097e77ab3eb192b15773db EBUILD sanoid-2.0.2.ebuild 796 BLAKE2B f3d633289d66c60fd26cb7731bc6b63533019f527aaec9ca8e5c0e748542d391153dbb55b17b8c981ca4fa4ae1fc8dc202b5480c13736fca250940b3b5ebb793 SHA512 d0143680c029ffe4ac37d97a979ed51527b4b8dd263d0c57e43a4650bf8a9bb8 EBUILD sanoid-9999.ebuild 776 BLAKE2B 416b8d04a9e5a84bce46d2a6f88eaefe03804944c03bc7f49b7a5b284b844212a6204402db3de3afa5d9c0545125d2631e7231c8cb2a3537bdcb10ea1be46b6a SHA512 98d8a30a13e75d7847ae9d60797d54078465bf75c6c6d9b6fd86075e342c0374 sanoid-2.0.3/packages/gentoo/sys-fs/sanoid/files/000077500000000000000000000000001355716220600216545ustar00rootroot00000000000000sanoid-2.0.3/packages/gentoo/sys-fs/sanoid/files/sanoid.cron000066400000000000000000000000551355716220600240140ustar00rootroot00000000000000* * * * * root TZ=UTC /usr/bin/sanoid --cron sanoid-2.0.3/packages/gentoo/sys-fs/sanoid/sanoid-2.0.2.ebuild000066400000000000000000000014341355716220600236540ustar00rootroot00000000000000# Copyright 2019 Gentoo Authors # Distributed under the terms of the GNU General Public License v2 EAPI=7 DESCRIPTION="Policy-driven snapshot management and replication tools for ZFS" HOMEPAGE="https://github.com/jimsalterjrs/sanoid" SRC_URI="https://github.com/jimsalterjrs/${PN}/archive/v${PV}.tar.gz -> ${P}.tar.gz" LICENSE="GPL-3.0" SLOT="0" KEYWORDS="~x86 ~amd64" IUSE="" DEPEND="app-arch/lzop dev-perl/Config-IniFiles dev-perl/Capture-Tiny sys-apps/pv sys-block/mbuffer virtual/perl-Data-Dumper" RDEPEND="${DEPEND}" BDEPEND="" DOCS=( README.md ) src_install() { dobin findoid dobin sanoid dobin sleepymutex dobin syncoid keepdir /etc/${PN} insinto /etc/${PN} doins sanoid.conf doins sanoid.defaults.conf insinto /etc/cron.d newins "${FILESDIR}/${PN}.cron" ${PN} } sanoid-2.0.3/packages/gentoo/sys-fs/sanoid/sanoid-9999.ebuild000066400000000000000000000014101355716220600236320ustar00rootroot00000000000000# Copyright 2019 Gentoo Authors # Distributed under the terms of the GNU General Public License v2 EAPI=7 EGIT_REPO_URI="https://github.com/jimsalterjrs/${PN}.git" inherit git-r3 DESCRIPTION="Policy-driven snapshot management and replication tools for ZFS" HOMEPAGE="https://github.com/jimsalterjrs/sanoid" LICENSE="GPL-3.0" SLOT="0" KEYWORDS="**" IUSE="" DEPEND="app-arch/lzop dev-perl/Config-IniFiles dev-perl/Capture-Tiny sys-apps/pv sys-block/mbuffer virtual/perl-Data-Dumper" RDEPEND="${DEPEND}" BDEPEND="" DOCS=( README.md ) src_install() { dobin findoid dobin sanoid dobin sleepymutex dobin syncoid keepdir /etc/${PN} insinto /etc/${PN} doins sanoid.conf doins sanoid.defaults.conf insinto /etc/cron.d newins "${FILESDIR}/${PN}.cron" ${PN} } sanoid-2.0.3/packages/rhel/000077500000000000000000000000001355716220600155105ustar00rootroot00000000000000sanoid-2.0.3/packages/rhel/sanoid-1.4.18.tar.gz000066400000000000000000001311651355716220600206520ustar00rootroot00000000000000[ms6pr)Iu+VwVl֞VvYfWWg\<< H\ݭC2@PȊy[(3Ç|nOgGG}vx4;br:dU7sbyx<>HfG`i:,M8>>zt8K2|$p>e[@L8m?}yz\/E^uiF([DQŵ2ʓ +۔eQժH__Jf'֪.V:jQN&ɕ1l8PSc•zR.+t5}0,\7_@p:uUY\5~ c?y&/Tz]`H<*+VY:E^h]MdnmƪsEi*KbKa›eTkJ0ZJbsb ^T^oZ늻|"[Og;V3hzTM_G}wUF˪hKz9n'c*ovZ=i:oFEꢄ"sQUS@ DT:‘B[4~4v6emq˫*2WZwL5VW95,l{䢨Ӫry}ggo.<‚`t%0*V7^@{1Fy1yI##aMeiҕ.3Gta{t6R3J"/*<ט/,MTIQ! >ɬE56stʹUh ڥzzqʛX=m we80|iTSO]G8[J鼽}YGuԂo:;Pȃғ$+/ۍ2jLO{h&KIR3:Xy-,p (|ig:[tZM N^:Kza" &:T5FYX'벰o( w޷¾T؞NU-Roʳ583M~VW/_} kjuz@ 7 {8 UUQYB/?cߢ-ƕ))瞚7H$LQ+1.\XCEm }CA5CcY To꫇3 ( 3TʇԪh?@ 7hzZSi0lh`Mcw:IhbMG8^|Kڬ1xCAtk׍2KpV0&xm?9c "htv0̮`@Z1 t0A)ċ{ ? ّR6O %ux~VBJa,6puXtkj'3m9DYFr.˂d}mhZOesޟ #U59J* 7Bp>":q4w$%P6+eL,").N/J` P`7'K}}5NV H aBǵ̑̋nu QV:ܴ< L:]$qrmQ`יX F@;I~][>4BdT?Pw0uD/‰<? q (u*bSOH(I-Guy8V6Z#Fd}BjZwFvrx#nRv"R]:yqP DW4\d ]*B`'0XQQ1|[6GS bfnCCŭhOFZr.4t#7ҿ~s6::Z`ᤷ260ouÚ*5 6* k aǀHyU ׅ }ZGB2𤪀L'i3fQEKb]Fxk]A#<L[M&|6"qF 2V?qvίgƃ6}7l N \{| 81}ygMxCR;k1Wᰇa(ƤE95\r0a>8kSn(m5OG 901G]:R0&q+N P`S$XiyȰԫ7Ro)\ &,W#t.{Ϊ!8 __*$$aGO d,~g& Ɠ#J.qLnF&[T 0e㚈%i yԛ6q#kوpkbR͐pyjVGORhW8b ';"6R׈gCĐM:jrV*6%m`X ˷Bڿ5]J({NZsqTTڶDMtTfv0=zS1?r0&9Rᛱt.uhCA jz#ɣް]#IfRYIvq~?so=(G%uL腐: hS%)$+?B򀾑4iM)L1ĵѻ{?O2 Ěw\EF [`Z yT"'g0t˯K<ь$\q.9otJpDEY_a?f`Vg_-<%VN_@.XbƢo`!ګON/p#Ӧ!d!l)=:,Aj)ZAhka/1S?{<.&'+vHO;i>NwAY\F?mLvY2KV:@@=pGJJ=,ReZ"AB O)sb$.:ĊYIz:~8>tKn‚L%3CZdV協8aDbJ|ȉ3, '*(-=%w@X;i RƔIF% RR {«/ߚJo?eB Y6A r᫠]Ut8vF8j(N$EFkۃrN*v>V&nh_=~Epĕ k"gQcAdgyn |0z#r :*@Yf0#슠o]QS*'K=M` :'kPƿ:=^-,"@5&'n8(vݣ|}t [{DAPKue2s_'Q`_=Ѩ[~^h-\5jA(PW;^!g&ܷwϠVŁ=/}HI v-H= !J5@s&IYb`v/rK nyk4[?^MJnDxs*CqRG1_.g`G'mY0;g~fiX*R}E?-; |QтQ` k۹L륽.%D\x̬*[ZBBz i"[ ^}{}]ԇ]I\Uo&P 1OH%pk6_Z^v闃g8 KhDp;U $~**ot e@&FhњMWڼ;QЙ]}3!BoAyQ}p_c\!Đcn b[Z먚F KZyE@4X2v^G[\PM[V,U*ܶ%\ecZF{zX`缱o2Tཏ&A"/E`7`&n>cyZ퍌69>LK$\%}3ؘaw?7ݾwt0{پdk=!`o^ٺ~*X} vw`u nբ&64 Bx),B*Z ymS)iTq%\z |9/!LK\G>$|kCRslx+K4_o\0O$/[|%qW\:fyepgmh8ˡ&d\d$$P{ymmd0?<俿Ha S! /;Z7T?#D"- 5{z1gOƯ߳ՏO_?gnAdݭˡ*)Ps eyKXAE Q+'["B]o,^c]E`Sq6vԹBWo.F LUύy \Wh $w2-6w-q{LxZҮAPcmz=0y[o-2N!]aU螝Cvcrm𵧈iNԦYߒ!|իD)XHI5BK񦔗)k E']khܶ}ϕd0, ౄ_b +5>&:5kf#QIQP땉Rsݞ.\aZհKnJvq?Awhs[ꆄU=*yn,֞Alaj՝!<N=}}zIn-{/B!CBd/}Wۑ˅eznt{=3sWt#`|6r^q?>_w^8<{sz^ @y#Rg_;G tz `~Z% tO~RWEZ <?,4+#3#RII(e; g.y[RsnEk:|9VC^VfAęNV V\`\%㰈jՒ#3I\>M4"I bי& RU60S//t(F3߱4}97Jxj}{6U*r]&tSOMm] n?bzL>V[ʂ8Itk@XKǙV:ӑ΂F$v) g3"C~`%F(53#Oju>@HbMl~ DK1}ys;]™?./7.u;ZR,*5S)l7aylε) DNr;L\;oRjO^qV.g4B,\yeD}nN7P ~'܍JWIF*;'Fr[wIH/wJ}-ɵT]&yrw>i]o]D(yP#G%79`hD[ov?}CG+*2^$G 1X拽FjǒZfutvdgnZ]׫W&f0ICͤ+/!VWNuT@qh}AoG1,VB?hӾS@lArTa!*69v$Ąvez9&|%/50 {w]_n`oDG']/9{|z7: ]QvU$o^C KDS Z>ѫ{;1JzoK.r@]<0DA#;h{=ڨRM?qG]]i#Rl& ld$$'$ %g8Ѕv*0R QU@A]dQ8 ?c-(b 3 D`xM =,>E {߻'*(D/,1? fHEtĪc`bטY#Hz! >h@e"Ώɀ*P;5M 4Ÿ,zS Nd2D(%/#@3&AYPq<41 CQnG>z"a!%6q|"GN brS T O#A %۬#= xFZJxCb^ba4uJJ$&eHi '!.H-G;ft=#AGS2 ʄcs M-P 6J@h #G@'J"NĻ{\8+)˽CM@D|} J0nL_P@u C1{J=GSpJ5qLAysR@ld҂G/3`ǷcVB!i]ɎAiGtlaVy%C%S[j k%5Ԝ8UĪ iƴP*9s0T]T <&B_ T=N0;1MO+ 1֏0W"}{c5"0$l-{XM!\E U;" _$c:{X&:XvLl{=R)l;>ZsDlx?ҭKk6cA0^bTN59q7]GSp{rd˼9bB86'}W#K BD8&fQ@0*|yrc1ܺ{rAVYTdB y('HEiq'Y&P|<&"wlv!: NG=03 IIH%@b(Y6CKh,yynD41k#l&TsWd=[";Gסu*3QPB&FhH U*+DNv$`#\8eC14NUvUIp% ll}NɑrVB`kFz5e2_a>|'bIeT8FdI"ȟgE1^= UpeCWU('(cPHnXy`=gfܭlm9P=+Ȭl>!+^"ԟC6%E*I\ yqq4WF_c-lIxe}mZDXӳ =eB`$4 0" PL˳p⠈k+j h/Ms"#(X9%3..ERbj?ƌ&xtDB>uZ43-`^qɂë n^16s:z ^& [dt0Bp1?2Q&liIv= ƘZ䳼h6 #hwLEɊXEEUQ&1zt xFT!l >XYfR&L2!#h2`2@#:ꤜwzhgQU(N-hygE Մz_"yr8:@JBFxz,.&A1`,4KUCv~;?+Hv¥ y>X\"Uӡ jx!nү&Ӄ|?e>XhA*,g6SٵQ"jc6 IeCo\DAQl"ǝN6RV\H#g`TΘUbP|'u`V6n${0kD,3XDa^~yJy; 5%FUbXCUF.J1kD$etkWa [1cMdDJJ T:Y̹N}Xdm)J|]uHYe Iw$a@k:ڔSvGHyX"$i<EuA xE3ٓkfY ȪdN Ly,\Z0nt@ J=hѴIGѦlI<ymvbPV)CU aA>ڇ2KV=BlNC9&|";:2̐C\TeLv`L :Ԃ"H{f2𲊰RpyGYʹ%)#MlVau$/;gBG] I2j0P q F{L숴j!Uj(0鳬7+Fj" #gbDԁ?=+e摪P5K`㔕(Х "-Kj(ed4 jJ;dE^TnޑD:sxp"s؜)wi(PQn(3ԋ3I:BͫԈ+4O6.|ޕa+(Ӊ1nȠwz,n^ԉ{GMx^D9NF?޹Cre} %sC5si̡2N6 Q}Nbb {oNOoX3d<&K8X hAz-)dC.oegm7bHf5>BLJZHs[\2Ţ5gq8ao~i,[LX.έUăYi4 =Y ; |k(]['ކQ<$DkU9A|pERY_B uRĪW]ثw=GYKW -1Nrd]!N@OFdlI"!쁉]zL3 C Ii^TTǞ|XѾ.D!؝6S񰅞iDQisN <25TfA@"O:6o٠$d&M PPK׆2:GLfksHbx8E#k֔)ܐPrޡ5wC -5[SAz:EcUö m y3i߫2ܥqn&f;]5^T˱5yJhPq̌Ϻw QHjSxʹEƖ{H/Xl3f'ia&1mVT<=4]OX4P_uy2FUzq{!͂ZBP"p'pSx5&E\pn6Bg.?)?Ȳ`/\7+pxw'M3PlN5~"'^>W^[fR1Cʺy3]4 4%LhO.LR׷M:vC "R{ ͧstI0= E]5#F׵*u etosfֈaCy'Q$@mD)r $B)ڳ rK.h3-faQT,lgy:Cޤ/C,?̡-Oy2\giye[T6a'pag}MҤgވBqpA&35e@IYPB0iNZ;QQ0ʖ_/9dy'+2T.ar9<)ճ6sOXX(5dGBMJ:`}Ah6b*t.0nT YW,;FXȷE63*e؆QVcY@d DJ&`,B%aAV OĪS1)Id "3~FdP6(fj * jЍ9bbkcN#tUy$ck:xf:pP ux+cC~hP! 1,6#AHb[SiIڂV&굸DGk&1OiqQq.3H<>`:C+8XZ:c"ǪܩDC_ȁeajʧ}"+c= hzao?\ !X6HЪ^ eeffߵI7%4 h+KopX3(]E' $>݌ٕfK)6XfBj:$4G޴d>UJ^w%]C2Rj>q >0D"Yr;׌W a& /2KMq B.,ҹtC|0ӛ9`4/DSjDɩ,kZUUT1e M mZn5β+'l +lx$;i"q)AȷeŽOc"FbvQvRy.tRǴJ@Ki8 aaJ!TDceG [ {4g#c{OM8IJfOZjʹkQMo ,#)ѥ 1pI{l1{.'y.7 P&F6! tp|jf#xóz7Ӓ;Eʬ>mM,81 ݵ!Qe.i1ҌP+# ZySD'JV`WV': ̥)j.sP{i:c 4Vux Tdj(ܞ[f1a15lz8u1:-Ιp "sqJ ½t/bK£§!+4=PXHT Dbp^MD;t7/CLė&R= Id8۴s Kbm%9XD{imk5=Arn`&Ȕ>eu)joS}4*1>(MԦ: CNHXfJ47q+CǑ4xB%M^&NAMY+uYwh V`]=R k]%ѴE#ؘ;ѻ 'K9aȩk0HQfa$Hk/t{L6i IvC҉ZH͒Ld?CI.kUˠBkFd VKJ%izP%Tx6|09^ `Əd a]/aQJWJ6تyld:{=Û6K^_]R/o9'ܓ,\BwRH;J49ļ:1/&9T-0[rDIO8x儮a|'?'[dݎd4_$/d7fMMinIS2%Q=]tIaI> >e'-S0/ i։Vw?Rj߮U/q|0 qdAjh{Dmf=4L;WHي,[B:.tud-!Lc+=7}U<ši_tjjYOg#5(hbm|"N{:$ $zll;`4A^qqLfYdt]='ݷtB%r[2F߻c>OY_B?VQt%O|H&&0ņݴcu*ۅ@uZm~r%4g@C?cÞz\J@Shj*h鮙\/% b 3J~RT)MDВ5kW! =PQ9X i9̎Ωm)Q*RulTGz.(Y #<읩>jah^=u)Qf"G@PS E%FA O,1sFɘG\^OP( 1.d cCJD6RZ.X]3{IF%R؉E9~'0RA=`V=[%Hz[hat*9CsN\zhk>̐4LGE"gNؠ%=Lh[ԈybggRpfTb&t}c0Ă+ܠu!aDWVQV5QbzںߝR{RL)#:T #;C.QbSx9);YnQesfjNHdUk60WX_P %$g6 qW~z>eJU|OGz*KPe"'3%΍|h-N~2lmh`+&̥iA;6 ^;mF8dW[//,|p ح7^\ߏ^l :o zvЅ>^llXޚ#&\i5^PP[fs[M|9xjK}pF/_ O`k >Bicw{a4zfrkx^L1v~@x!_a%&-я01=IM?Wq[q6vw~ pv(l &M p6_uaI9?wG@ X+n-PS* N bwCu- `VegA>$ %B"88nL/[H>FwJu*vtrza!^jK*k4kSu,@Ǿj'py|K}}8* '"͞AË/fzY äu!Ao$,OSCaY"fP͞s<]-\^sy)H>1ZPbHKJ\V}a>bvK4|=zl7lr:VyPL7TmQ%y _HtbLI(86ʒMnv)4Hqa;9+-ʣʮ ,۪ ԗ*?Ⱛ"xx$, bp5aYl=w,y\d,CbÕ`F' &{t6$"L;Iͥ)=FFax+6rxY|eKS[q$gʸҢ(Î)o#';b Y㢳ԓtOe9)3Asׇ[/eUMH3z?o']Dœ*`S|m8!uQtz1DE# ?9ւyZޏqSNt{L,5i;jCG}>PvU_oR&4!-q}YI2y%.ЗA4ծKF BHZ#{>_(9M莹Ul֚d {; Kҟ7_mݯ:"|>O§ÇKZhq|Ug13kex{W^\?1H<= kYkO:k5pWx$aK"P$%.y_=D77^SOkf{I(*  ^kiчKyr3K D܆VwDx>p͠}c;˸2{~%a 37<|0Șr FMSJqlNGqJe B7B໣^ 9g@4Ѷn:[\M7$% (ߦ,F>-;!W,FQcTw2 bu'1`S2092lUHcˏ?N>^HDNzQM2f`5dKJ#QO:NZS)Ѽ#? ߇;7ArBOO.OA߇의]rr奏(V'iMzmT΢/~r!ݎV[OBM[ ad[h/|&#-sZVUWG\[=i=vbjlrp$_F0y4?ZL8G#t  x;ۏtM&'n01ebߓ^Zt\ :D|fׇUwׁIac. \OCB>VdU<s?eB8AʴۆstlĬ#9dle~1k¨$xY XzI~ں*yN8u hh77e\r} M(hBΌ~Ɇ 0JXݨgE rMrMZ/nw/8d=6oI_j6 jAiD o\Z-c  мCIg vJD=+ 0Pgd)iIT9NJc=e*xϟJG=XFL\BxɁ$ o,>hɀ!jYƗ<0 =O K@'Wfج_a ӓ_Z1iB]So.Wd JnD.рИ ԃ!}xYe+$>BN>?R؃#wROQ'`KZiE5AM|ca<9Ȇ?X@뫬EA֒,V=2F[Xk9<:7 B O$dbG ),Vw?C$2Sh8)&jrK8ҴR9콵-HZ^E܉cbt.6Y2=H%$f}JHRRl:%L4'l,uyciPT+p`5Dq}  UR̀)t~hC@{Rg#NXۤ! FsV>/D)~aK"@̠1zq*'[O5 w7e)A%FlT tFo)GijB}*~%&! XKH\+= #y~04_'4@(w}[8.  (( 3%ut>06qSA_ L<wBr I=) "S6[= ?:es}cV,bA4?+.|[2اUx9 6cՠ} 85"^w)-pOOGq>^\„|8@NkeZZ;{M{iwHvh>ѲE YT} /gɥZFíNw2$+p흃CpD C$l' *HW>zBWZa{l>-Oi4/g;e)Fulb(@Jy05rzyoA98d#yrc5Ic,b.Z?Ӿ9{29zQi1?>;>91ˋKWgZo8SM1no&r?xxCw73S*5Uу>uEHWy,hsԥa^*N)?^2Z $u[ <+4顢d|׈D᜷$ҊEO LK]֙64"0C'o}~~6Fy8^yOi:M0}ygE_6=8GGգOQJ0a/ɱ.v={W2 Xt~cWK7w<'c;X@pݬ0$AŊGm1 KŅﭝQ!0wĞDss7+,),4 Z= VˏRP׉ ׉7m]6S/ q2"q_!j~)x?MBju5k E~_"^qd(@'?$Bq$!?Fj6{=jg~-#NN8zHPa%p[˕#Rlߔo{c H3L;9HZνv=)åZmEfi^I*( 4}zc?]T{tA<1'${Aɉ_<ïYRBRox3 f,EѝV=5Xܱݛ[\}>Pd>Fi7y=}^{|ȴX:j0ye7ѭ[V$^bP|J. 0@j9ɬ9IFbUmRI?㷾qa@ eHJkmֱ,`ClP뺓 ~>`z^ 5 1I5+b:$}1o?h-,=0SG?"Rvq/AݕŅ)d ۬yHϞ]FYG3yLoϬgjAg7k/:%\__yi R{neY')zPI.5If@ ew*ӼMst1gH^l-c'#~҅q(}V.al(6/ZKK-,cYaJ\4sR9rVsJZ8eG 퍭-L…`_UAj/yb*>/>?'y?4 S4caq^~l1oj~7*X\5ܭFШES'pVW[nv0Vqz=GG 8_~#?i=:~4ݢ%qbu"`na ).]aHRoƶӃt`_z3}D vY|N@l_C[YZ\Sr@~\i/ךri7?3q$|hb 1<޹㿺v|vRK%iIΩ[O[eO5hP >BB'=_/faaAzGD?͵c9i*U)h{)!"381<%+l7N3oI֭J[ABoNLC`0f]oV0% ,?7miPtYyNQg&YѼS!8G#r5&[g1Hm괣pL+)tVG ШMG&:O{gE@䱛#r.(ݭ:4qQ!p9 \rza~ꌊ-2'0 *d~*R|87I_.%` RMMx6 H)êkCoK" ; ⵢa@;@}Z7yk=l'Kk BnSHrIa3|rۥ#5n%ITt:1a %&suzm":/L֜L,_XkߝCI}wXZnMlIE΂E_Hh$41 6)&q^PJ|cB%x(ܯsIӨ~" E'2K5 S6=Jܟ䜟Qڹic޼&HΔ!W OcJ;.P(YL􋶢nx T&p+1wrmY^74ܸA}H(C+7xEtyh? k1S|^I||n_[Y]3ʇmܪWV)ŧT ŭ||!yMM6AXYٜaVItÌQ%Z`)Gp8JM8q4t'jg(>gK Yy}E)fK/%eݕy|Ywy N;qoU(sqEXSklPͬ=15ϳ edREM{ԚX  G Mjkͪ󵰐 >u^k}؄N,,QO usj^_Tւ:7JJ'dG([b{vj $cL\2Z$-y%QTdN3s\;6@klUf Fd"5#Fαv RUzi.5l3bNz% m<<|ih/Z*oƏnq-jx\kݬB@JPnNc>%H<8f@bxaVa8y {%"j٨H{OpnOQ%)OKRu[> X[ZEYeL⯽%r=$l|˵ӚKd?C&D DPyU$쒷zBߜ Rwj8׶@tլ:mڷ} S[)iCԳu@{N؜w|LiO:F=` zkES>}_Z=S*&: 0>0dM6 s4GŷxYt Fdz]C蠉\xGeԴܰD}7􎽐"n!$îZS:3"{wR -67_3K8$ߚ{>߄l ]vGȑ46.8M7ǒOsܼG_M4mmꈑQG}ˌpU˥VpH`v]`+W1Ɉb2k26"ṫ-u0`zь̞ED)L~-zzg{IORQ]Jj9>ɥ ߯xmVljt~j:f~HH%`ÿ\hE0Gc5dluz:VAh!×t>FBݦNgqTm v -.ߟXze[6#&mw`~N >ph%|_ wucuFc9оdM DX? S@TS44*k^|dJVw^,|KvnO>y0Aϼ'fQ߽2\*Sಪ$MS[eE @Le\RT y 0{{ ;DcoYkyۃ35Ίmsy>u@)oÒ݆8x%r8V17i_h\fny9Mx=cb{Ka+=|^~Y)SeJksªXAj+8zmO tVcwʖ7G|6Q{aj{5|I<_=} )Q)(WyJ[ypSO\?ʍ:ISbpe'Cx_3bܽ0٩Sp<¯޴km7t@ÝZf  m :fdEqc&y'Ũҝ"޺Ur[hGu[ 7y{3]ĝatQ:4ȒmD$n`'e',-QI/F<[ߺmW݈N2)Zi%T16Mwr*jbe#5n gͳdFI>ϖwr7΁SO_R T`Ŝ[ 0N`;vGB\lo :2%;{nWx5CK|- Hvuuڵ凯aQvQcz@mG!ˈEI?;cd|CgO?jArRuQZCtZ\ wN.OXKtinܴzc!`Sz_Uxr.54wq pfQqU n/ r~rPhNU>AQ/]#^Kq]ce.$xM8s(`ڠԜCi!Z ?}SeJ#)ZlR<;XB%/r=x]ZK \߱,/Ea4xq7!suˆzEK A^S Hg3R8z\?ݫh`Z(!+fEpq"AVT5NP_j1 * +)z>_KEZ~ W#ԨWaαq@pAt-h PGt= L2̀%xQ]\:JN"kwe} 5Xq;O{Ҁ0,S M Aܝ2hXz`CI:+U'j|TekYȬRY YM;CQhRoxtAVͱ u4P\NS0_!=' 'JB3-Z k\DVb=$12|9 Oc5!cpfpBlru&e&/LU|FɪU:9?ge Kwc6vwo`+R~cv`g}av9)ty{[?|q^D;n@/_[Ysm'8`t>.+B]T¼*/}[ U:SiV Etf.gO\BxC5KS֔_sJ[?= ~Im2Հ5u™0_GZf`먥y]ĥ۠7, o@U1 Q-&~z﯒s (U)G,QDpRrMd_BOI p]+E%!(Lm*H29fWEYQuܾfdY(> Dl;,5O莳M[0`.XPI d4PO ?9$;eա4ج&f+>}2ߘֱ.֝$|E;#ׯu*{RoЏ9pu8} ^M]|%f5'(~tЩʲ#dWz1t\G?q['. ߧnK'f~úSQ˯|Fbxa_ qһg>~< k]qfڦ+YJy3ijP@KI -)ZbT8먟 Qio2K'ЧPomy ]4wƂ2: NlT1ٍUgqoq  YUk:[zkw]Y|]إ+&LF\LN$[5iq /f)j%Utl/[z {I~Q Btzil!Ls:w1d1;tͩTL΀)!FrG `3 8\75;Q5sv7oqơ' w7ST&tI=O{=\GU`m>tKbI)KQGc28m8AE Ǯ[w˦լlVg(LFyz2eiFdԬfP\$9'sv?;5 -vج_swv5?;]>^k\a,$&]l^ߓAv>0& E^ K#4;j Lzq7Y] &ɣyGKX Dd=Mpt. Y7?޼ ]K{e1_Bǃ'm]x5pfDu?gZ ND.d9l.>Uԏv0!bC(bK6syէ;# /kGm-kU=i Holo.5P`{m/fp~jG΁l5pg)H;S"ǩ]TLfցļot1S_dS@\KqO,/bF%lRB鹉HL4HB~ʯkK:*y"rc dԤ<_1  { +"ѫ>^-@йqkru6廻̹c]yqRSnه%!%W+&Ng撯\ +gΞޢ7=: 6e&~IɃmV`Jd"\/RPi2yF>_9y֋GZbl&ŰlUe pNI2&GUih B2D3HOqK~*v-z(<je-SȺꑡjU&}x@@T0a1jCx Og7jD1q:ȏ\h:FZa(FE^E8VLS<:/(?J.eA5<Pf?Nw\]>[R5>hoMyz#z co"&KϜˣ,.Ojr" hZl -yW5EG%\9,62rp@nQl?bD;*ygȕjmőI QS+/ۢq5XoSjhjp(Sb)Јm<mYA_’c/ _-5i[|5=v>_nf;NP۔{S+ U𺟸H%%J<B|wy!/g^>N}躰k'M~cz+ZLyou}.ou3?񘕶;1z6!Qn<{5De{>h9F}x%LkrOȀ'}M@43/쑼{Z8wz(*Z,HDic.AN@ՓȻ·5`y^lm9$EA֋GhiR=&"HGs&АU|2@3;D:QMVO'm[: djJUp>[azq/3#d\.n{!ehnߌGc`sԑ$3GqƮsIlQr6¢a4!AD*iO3fu #x-~IZ~G(2`+=M+KWF<\Y" d=Xonml*̯=ߜםw˽_/Ky ҅o˚f դqmX=_~z5)o tYiG@s_;}aN{P cU(|B4@ND&lDp\u[I.V.eaEM^Hn,7<)Oο%fgVcҐ|cvvVIbs8& $&8ɇw*38a' B6 !{m&c\v}48WÆ"4e|U#ܤu`Q 3_|Fٹz|[kwN)JrtE+f5@z"1gތ,xW_F//h_W粎'wCC@aF7+3$PZ֚٧3 yUPcjN~iY*$ zG 茭MAtpJt%xܸRt &(&u7?,LQ a&Bb(hXcaM.ibvCִƄ{L\8kdJYdgoo??׍~Pc)=]j ,kvR=!cد8%G%VrLV[wj-f<{0 B,>cJK-o]||pI㼹@,>/Hoom,̘Ghsy!ϚT&W# wv6[߽F{HP91(o\(i}j8Ar$.[-[43al\,xuH9s&qwFZ^@'*ݿܱ#`?RUiΗڃ쫕=.OqYXn݂V/c;xfѰ!!_G{vB }*I-B_p6;3\Sk%+i]@\-U@.&hwr%?,G+DN3@H-3]CU_=LukJE50:F[.0NMxfR4sd  ; ybaJ1v'Q'xlB?i,B:NTI%AE'A6v"[c`X|Uե\SrksgC ABXc/4AepI*ʯ6IM|?gU0.; = Ԟ_}."’8vka G(dv0e{/}S4wR0B5k' 9d"ͥjދsNI-bD]Evq"!"Mvlh4Wَs%lJ[z+"ש>[ml{Ekhp_2h`ȶrY >IK@l?l~~fom9܈o:#̛R1X'&1CMG NѳQݯ U}92H̸Y<6>p0*LGFWA^hn7%=.B;1'pzӌ~oi‘hv1$~pA l̤6~3 xpi)%CRJ[b%~GП5$JS&8Uxt!sD"zs/v}=^(SJ)134 ((Õw^NJh@ZrbK>fD'q(g>R~{ }1}>/=z{ˏ,?zᏥGZs`(fw|D˙?XݗO2RMh<54`__!~2نœٓ9%eKG5\6$ZL{]Ŗ`3ehqB^hirÑ\ !F-CMi$ {@4^v~U% zjB<^ 3BSy<@O 6-xlzt㋒dr^>U/ܭ'6P)eÕ#5Y2R]%qݒkIitQ.̫  ogRi /غCW.TIK.Ψ_$=c~Ҥڨx. EBF:l8 h~ -PoYT*.W99fq1; >/ƪ~J 4YVNBFBJ&2nRcBɹ* ?C;J+h^H\7|+8|΁ZNi|`kGMcbʿڽ{w ei&>=$6";ްna{#q-?S@icQD\:CBqH,z\g▪O%fB [pTRBfD1\go!Jvϧ\N˳s\Fv%$>-ϋ;@Y;!ƥ'3  /ᖂͧ$o_IX7+Yv)kwlmn-q򧶙>+>ki2Lج/H=-2O}J3O\ERz'I (}ᆌ \'_qPXk9LqP6*pTMYN)\>qP]gf'Mɀ$.Zy,}׸M*X r[$ɇ$nqȇc2KiehUr_e_++lG4|!y4r3rsl ,@[ bCu1EKÈIkJg70=!wݢ Yxs|:QÜbcƨ(bXHkbcC\HFrѺOAt[:70IEَ5rȅ, ;Xܯ@؜No7ۅ_]nl;Je\RYnSUV@AXǥ趪ZN `ap€ۂy-ܺ{+3u*}*( %')/8O]\\Y\0YjD~zZ_\߰CpX7'A-qw; We~ fk \#շcJIB6 tG(;„v|ƘiBI?'DH`V<(=Ҵ1(f)Z9). 86}ƕuk)2e-%X0 |8&.lLslwb=͓Ek ԧTvq,玅͐r_5Xt1nXJF#-HvӰڑnު-U`EJ"yr"'==HoѢǸRXoI/;{ִNp?'8>1gw>Uf$r1%θgj ?,B&oT]÷]d-GA7͢w>sA]F& :|gt0=Z7ìv.ZU@cnzql2٭f5_8#ID}^z%~: O9(3yqE$ZHp}GٚK@n1%yk]zdæEwXtuճ1;st'tA 6E$WnJ4uSڔ 9+pi%;"ײ""'/m7-9|-pK+ Ya ;s`#OtN3SNCq ~O@xzwK4Y|xR©H.+Q˅Ac `te=уj/opN>iu|( I7I7%HwviO tiyq<$\[ϣ_oMwiCGG=km.֏!k |a/ևs~ּ-_PKr8p&ל>'?P67H"b÷`#ݕŅhAc5r~]|I jWJCa(1ke*ke~6, QFE<\Mߍ @Hܥ 5Õu &\~ak`{w`*W[m{om=;l#Av$퍭 =G`Rv"ǭIb)?O!e+Tz4}Mʄwݨ·Ë)e']sx:1so,/.=2ܕigAoʽ0 foM )ؓW ]hTx"~ 6ߝ/1G]O>LE $.1rd^{^ mv1{^y ڍ~_Sc<уiGK7f;Sղ'#9J2''z(5_c(+H:`-lVAY-/פ/kbM~ +8R2%لZ@/ܨ dZ¼?d99{Z Z@ԣŘ/@ 9<|{|OY yr߻R3gڋڼvǙ.ZYX8??o < '^ :h6̥ڨޣ|l@Gd^srt0)fyJ!tXLZʺ^v<]a(ʊ(kfd"0]}gJm[Y^/q6DzYR nuWV(I7G=~ppS {P3@S秦KAb`=v7wW̫dd,9X Z`~ּ&yaZn !bˌ7 ~G]OQG?G,{i?[PHLYKҮC#5z {CB jͪZ8!"E:t؂8#xcsxT &zT>tIB.-?l~(FVr}:ŷGq-J_>)v5s|d%^28[犆S~5Wyy (FtpBH+X/fΛg{^pk) OEw_ QoLlk:07n ߠWSh|I+}(k>jEoXx$?m+y ??+I1:FganV=j72O`A^.2.YRmG rp^K~ZiGc;qJ4%GrS $ B>s܅$ܡӄ3E˥K5ySi #C59*;\2ep?sve-2Li3G5֘ו H{vFф~ T]|GJiY˲!gnlzEc%Na>N씑pO+ #,ơyY ]}ۄ*~dSUxԂ#<5jsv_7[ F\3G„+ݮ~׿Ỷm-E0NcHӛ-jw'B$;'c.&pz4с;Fe{ku]~KOGi{ovM{u75s]Y#d 5JôP7*# &~}3).Rn)t*THQnc'5>l /Ӗ_RΔW5ظ rf$h}ڙMߕ; 6@mMy-8;{xUR&WU?IN$8\frmdu_(vZa*$Fx$ly *㯅y)ɵAvCa,"d=迡5 $Wz%2,~ 햜#yf&>4=,L$1f_FL )2$'\6ķ. vyion ?_qDEԼκDTrk`TAаg4WRkұ'ܚ;nX7_-LQu:!3x ƶcD(F8U"TʆpMߏ)v  '<*'xkwB2 gI<)h0U}KDT% .GN. 0пi|b*\|?9Lg'T!K},L0W+&ö-YȃB8q3R^"45;|#m[ZzVt+XrT/ -2mc:OJ<W5:_>IEt^B`^| s߉wl{rF҃Y-e՛$MG:XeȭF|^&eڌ taշw˻PI.͢VDY2Z5}V7OcTܵP}*Z[K_NvxFEi^9?Zs%en5`b-:.6uݶ Z1 啯uzS!ՔIVtn{9HbFlE5M%7)^E_Ex*ZJ摲jVBf:+Bc^f .Iyz"pMP T,Rce>X <:rOBEF,2ԙ&&&i0!B"p(D}A>#K\.ՐX3dt^稁E2W*O%x[{*n9X.[[Xx9L DYQ5I5W+a=)'_甠:g9m祺+=4At͵oв_u.ժb_q??xתkߚ׹6krmZ+__Jx&Ի([&f_cQ+ttfAgD McQT4Kw1ןw s5EU}S*RfAs=%¿ƺ ,y"FuszT4k<AJ!r?4,(*ceTHw7?quS#RH1#uGtipHKo%2,xX>uu84X=o#j,WW{t>e#d@ TUq=IBmkc[uxl逼(<ۋ1!qvg"c'sѡ*_je@T*)H݈V9rqCm|GZd;PߐJLnii:x"a'Sp*AVUJyJ20'$Ď̙4CN{_Xk%X_*pRVLdgW՟)l,W]faoxtLK?&ӂ*>9?5 v4 K*XA-2Y"20)b(HxPUt*Y9NЅDT<_52h/4Hׅ3N0)}j'&W%N>{FZ)C7][' MZ* of=_gj|>o{O!jop* & Ps@ N>;3g\FAbS~<* "7wP{xIAؿQ%HXH}~aT3'N[_<|jo@{`4!qGyz QT6]H>O$ns>̟EAJ嘭{գR_razK83`8ϜGiq.n8|xrD0 DE4AISEm>ltwC~mk=PU1)\uVm@Xh6 5aG9}]6S$Zp/1I/*zp/Ep&צ½)p~݌u4Qy}H(ٟ'kH!\:}y} Ư0ל&γ7U Gn=VS k2e'tţq=U-rZGCC'|2Ĕn\{QϵC;8BkLbq;w1pL @=i?%Bb_li?T.v`>}Օ9(M9?.OYf {6n68#z>B/bsas׈Y5YLD+NafnE7;9>9x8+,hGEn'ȊRfz^)X''vܧA8&ȒT3k=IRP$s7ZxqOǧeꮑuZ׎|b^keQѾ(˟=b? ViGwWUZHp(Ο0 07C:^ jɬ)Rh7QtNc-H؆S&E3n?,̬\aIvI ?0k+9M.vk2Ob-#d㚤xNљIHtQ"!ff䱇 Q72rz);{ Y'RZAW!p70[~.OL,R O?ޤ!<;Z,{? 6g3G|Lm_/03Ls)|.]AM\|q^Ӿ_>^{R~(x0{TDOepG[oNR8dh$.Z}% )/\TķLo, f}{.3X]q(RLz&,TS2?7{m~<Ͳa*nR舲ϨHi |c,M7L/+bLX]lBVVJRB>h}3`.,SQkV19 +`ɔED_F>/2ͪTU. 66 ;F'_RR6yA-ys[ Ҿ y{xjb\PE7 ,l]ˊHBK\V4:M}|*PO5OEBOU6|7Y X[gk1mt-/8tE[8/6M1FϙzLʥ8s呙/5xTP7|.TO]Q\ ]p 䓹mQYG3UtwHD \w:r-4}'ݓ?QAA?{@lVWM4f- [Lo]tVY[e oE~ʙ\dIƯ)v|*6˵,k׳SaB 4;ijT( ʔ(ͣ%5;yu:kGzzdžřJo&m''(lS6R; !AinUk;s.=W}8* u2Bi!'Tb2UO防 ̾-DJͣĠmVsXjBgbShC(R{;)U}-um,ddB( WOY1qb4UzpU3ǡ!W 0'CEPl32@sLЪd VQAGH%볾RH-1%]fQ)m?V͸]/ì;)AcT 2Mj ; ݿnfOj/k>0e/9˜es}|K1Gz=LҮUZ о^gES9b{%e;ɚr7 pKt;l o(yO̎Bwo5p}Æ5sN #Ꮩ}jβ pf/NH*mtJEp`y. p2&<~eAvUWΊ`Ew0^ wAǦ0p\pvv*6VbFQfB2ݔK@R94b}o=`/M /sPF0I0 ִc{pkll(2MfwK9íM'GuyOW{^C~ ?"P2Krk\Λwâ ffCN͡mT0Ζ )8M&wSnSt !73 qXTZqM# } wmW7ⴸZU)/+0cnatqb(k5Y5E4<p|Ts=RUo\={T| dǎ_)zç,pḰp|Wżi<_P1Ӆ/~?gEEmE %ӲR]#r;}p֬aHQ8"sHsl%>p(9gJXWxF႐͖7/UTp4KR{ibeke=R{S @C {T"L[@ ]s-|SMy-#!Kӆ&G*3ȁ\٢ =[sleeKF++SX8?֩i#8a0if޿iU],[<5WLpZIw(>Wg̊rYpt<A!&y:/ ʷ5i;'ER6ͪ &9^[na]n\,?SMP8KM{WyK` fBda6zGqzH+MEcQ:dGǻϋ~+# E"rKUe%gr/(cఠo6'}sBBhXR/^ʔF_9k!n'#<}1Ua76逿ka鱹r|HX9VM:9Gsٍ/ ]$1<84#X[[\4 'ˇţk=|0 [C¯+w#`r%N1|^<5yý]OyuጦHzLW齙MK]jxoȯ[+J0+>ЎK~@A #g$ms/ xE eI6Fdb~a[8)^ך@qЂ-+Bo"UI'Nyl$S5sB6V{<d[bhUjEubh"}ǷKxnkgݻ(ZVKfgHq-2JY>ʲmuAYSWv_ol#$f?>ՍnsǍtN38Mz؃gc$/v$&(UL}kw_s,aSF F8 9R-L^ilr}S7M ̈0a0_3MĆ}խcQ)v1tiĘgaS!|KBkwul }Z(9tP4*K &m6(Q>Y:SUpS#=:/jX'G0at@N]T-dmow+'K1dG3tdҾG?u6^lWLdnclmwh_U:'O{\҆yD >\CS!Yci9z m)0ʗs~`?!\=Fbp@^CjDfSW2 $_`q6j鐴Ev _'" Ym%eA>YtT6UR«P[-A|KrΠ{)!m@khq!DTH\O8ץMs|n>7s|n>7s|n>7s|n>7s|n>7W|?,sanoid-2.0.3/packages/rhel/sanoid.spec000066400000000000000000000066631355716220600176540ustar00rootroot00000000000000%global version 2.0.3 %global git_tag v%{version} # Enable with systemctl "enable sanoid.timer" %global _with_systemd 1 Name: sanoid Version: %{version} Release: 1%{?dist} BuildArch: noarch Summary: A policy-driven snapshot management tool for ZFS file systems Group: Applications/System License: GPLv3 URL: https://github.com/jimsalterjrs/sanoid Source0: https://github.com/jimsalterjrs/%{name}/archive/%{git_tag}/%{name}-%{version}.tar.gz Requires: perl, mbuffer, lzop, pv, perl-Config-IniFiles, perl-Capture-Tiny %if 0%{?_with_systemd} Requires: systemd >= 212 BuildRequires: systemd %endif %description Sanoid is a policy-driven snapshot management tool for ZFS file systems. You can use Sanoid to create, automatically thin, and monitor snapshots and pool health from a single eminently human-readable TOML configuration file. %prep %setup -q %build echo "Nothing to build" %install %{__install} -D -m 0644 sanoid.defaults.conf %{buildroot}/etc/sanoid/sanoid.defaults.conf %{__install} -d %{buildroot}%{_sbindir} %{__install} -m 0755 sanoid syncoid findoid sleepymutex %{buildroot}%{_sbindir} %if 0%{?_with_systemd} %{__install} -d %{buildroot}%{_unitdir} %endif %if 0%{?fedora} %{__install} -D -m 0644 sanoid.conf %{buildroot}%{_docdir}/%{name}/examples/sanoid.conf %endif %if 0%{?rhel} %{__install} -D -m 0644 sanoid.conf %{buildroot}%{_docdir}/%{name}-%{version}/examples/sanoid.conf %endif %if 0%{?_with_systemd} cat > %{buildroot}%{_unitdir}/%{name}.service < %{buildroot}%{_unitdir}/%{name}.timer < %{buildroot}%{_docdir}/%{name}-%{version}/examples/sanoid.cron %endif %endif %post %{?_with_systemd:%{_bindir}/systemctl daemon-reload} %postun %{?_with_systemd:%{_bindir}/systemctl daemon-reload} %files %doc CHANGELIST VERSION README.md FREEBSD.readme %license LICENSE %{_sbindir}/sanoid %{_sbindir}/syncoid %{_sbindir}/findoid %{_sbindir}/sleepymutex %dir %{_sysconfdir}/%{name} %config %{_sysconfdir}/%{name}/sanoid.defaults.conf %if 0%{?fedora} %{_docdir}/%{name} %endif %if 0%{?rhel} %{_docdir}/%{name}-%{version} %endif %if 0%{?_with_systemd} %{_unitdir}/%{name}.service %{_unitdir}/%{name}.timer %endif %changelog * Wed Oct 02 2019 Christoph Klaffl - 2.0.3 - Bump to 2.0.3 * Wed Sep 25 2019 Christoph Klaffl - 2.0.2 - Bump to 2.0.2 * Wed Dec 04 2018 Christoph Klaffl - 2.0.0 - Bump to 2.0.0 * Sat Apr 28 2018 Dominic Robinson - 1.4.18-1 - Bump to 1.4.18 * Thu Aug 31 2017 Dominic Robinson - 1.4.14-2 - Add systemd timers * Wed Aug 30 2017 Dominic Robinson - 1.4.14-1 - Version bump * Wed Jul 12 2017 Thomas M. Lapp - 1.4.13-1 - Version bump - Include FREEBSD.readme in docs * Wed Jul 12 2017 Thomas M. Lapp - 1.4.9-1 - Version bump - Clean up variables and macros - Compatible with both Fedora and Red Hat * Sat Feb 13 2016 Thomas M. Lapp - 1.4.4-1 - Initial RPM Package sanoid-2.0.3/packages/rhel/sources000066400000000000000000000000671355716220600171210ustar00rootroot00000000000000cf0ec23c310d2f9416ebabe48f5edb73 sanoid-1.4.18.tar.gz sanoid-2.0.3/sanoid000077500000000000000000001551321355716220600142120ustar00rootroot00000000000000#!/usr/bin/perl # this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved # from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this # project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE. $::VERSION = '2.0.3'; my $MINIMUM_DEFAULTS_VERSION = 2; use strict; use warnings; use Config::IniFiles; # read samba-style conf file use Data::Dumper; # debugging - print contents of hash use File::Path; # for rmtree command in use_prune use Getopt::Long qw(:config auto_version auto_help); use Pod::Usage; # pod2usage use Time::Local; # to parse dates in reverse use Capture::Tiny ':all'; my %args = ("configdir" => "/etc/sanoid"); GetOptions(\%args, "verbose", "debug", "cron", "readonly", "quiet", "monitor-health", "force-update", "configdir=s", "monitor-snapshots", "take-snapshots", "prune-snapshots", "force-prune", "monitor-capacity" ) or pod2usage(2); # If only config directory (or nothing) has been specified, default to --cron --verbose if (keys %args < 2) { $args{'cron'} = 1; $args{'verbose'} = 1; } my $pscmd = '/bin/ps'; my $zfs = '/sbin/zfs'; my $zpool = '/sbin/zpool'; my $conf_file = "$args{'configdir'}/sanoid.conf"; my $default_conf_file = "$args{'configdir'}/sanoid.defaults.conf"; # parse config file my %config = init($conf_file,$default_conf_file); # if we call getsnaps(%config,1) it will forcibly update the cache, TTL or no TTL my $forcecacheupdate = 0; my $cache = '/var/cache/sanoidsnapshots.txt'; my $cacheTTL = 900; # 15 minutes my %snaps = getsnaps( \%config, $cacheTTL, $forcecacheupdate ); my %pruned; my %capacitycache; my %snapsbytype = getsnapsbytype( \%config, \%snaps ); my %snapsbypath = getsnapsbypath( \%config, \%snaps ); # let's make it a little easier to be consistent passing these hashes in the same order to each sub my @params = ( \%config, \%snaps, \%snapsbytype, \%snapsbypath ); if ($args{'debug'}) { $args{'verbose'}=1; blabber (@params); } if ($args{'monitor-snapshots'}) { monitor_snapshots(@params); } if ($args{'monitor-health'}) { monitor_health(@params); } if ($args{'monitor-capacity'}) { monitor_capacity(@params); } if ($args{'force-update'}) { my $snaps = getsnaps( \%config, $cacheTTL, 1 ); } if ($args{'cron'}) { if ($args{'quiet'}) { $args{'verbose'} = 0; } take_snapshots (@params); prune_snapshots (@params); } else { if ($args{'take-snapshots'}) { take_snapshots (@params); } if ($args{'prune-snapshots'}) { prune_snapshots (@params); } } exit 0; #################################################################################### #################################################################################### #################################################################################### sub monitor_health { my ($config, $snaps, $snapsbytype, $snapsbypath) = @_; my %pools; my @messages; my $errlevel=0; foreach my $path (keys %{ $snapsbypath}) { my @pool = split ('/',$path); $pools{$pool[0]}=1; } foreach my $pool (keys %pools) { my ($exitcode, $msg) = check_zpool($pool,2); if ($exitcode > $errlevel) { $errlevel = $exitcode; } chomp $msg; push (@messages, $msg); } my @warninglevels = ('','*** WARNING *** ','*** CRITICAL *** '); my $message = $warninglevels[$errlevel] . join (', ',@messages); print "$message\n"; exit $errlevel; } #################################################################################### #################################################################################### #################################################################################### sub monitor_snapshots { # nagios plugin format: exit 0,1,2,3 for OK, WARN, CRITICAL, or ERROR. # check_snapshot_date - test ZFS fs creation timestamp for recentness # accepts arguments: $filesystem, $warn (in seconds elapsed), $crit (in seconds elapsed) my ($config, $snaps, $snapsbytype, $snapsbypath) = @_; my %datestamp = get_date(); my $errorlevel = 0; my $msg; my @msgs; my @paths; foreach my $section (keys %config) { if ($section =~ /^template/) { next; } if (! $config{$section}{'monitor'}) { next; } if ($config{$section}{'process_children_only'}) { next; } my $path = $config{$section}{'path'}; push @paths, $path; my @types = ('yearly','monthly','weekly','daily','hourly','frequently'); foreach my $type (@types) { if ($config{$section}{$type} == 0) { next; } my $smallerperiod = 0; # we need to set the period length in seconds first if ($type eq 'frequently') { $smallerperiod = 1; } elsif ($type eq 'hourly') { $smallerperiod = 60; } elsif ($type eq 'daily') { $smallerperiod = 60*60; } elsif ($type eq 'weekly') { $smallerperiod = 60*60*24; } elsif ($type eq 'monthly') { $smallerperiod = 60*60*24*7; } elsif ($type eq 'yearly') { $smallerperiod = 60*60*24*31; } my $typewarn = $type . '_warn'; my $typecrit = $type . '_crit'; my $warn = convertTimePeriod($config{$section}{$typewarn}, $smallerperiod); my $crit = convertTimePeriod($config{$section}{$typecrit}, $smallerperiod); my $elapsed = -1; if (defined $snapsbytype{$path}{$type}{'newest'}) { $elapsed = $snapsbytype{$path}{$type}{'newest'}; } my $dispelapsed = displaytime($elapsed); my $dispwarn = displaytime($warn); my $dispcrit = displaytime($crit); if ( $elapsed > $crit || $elapsed == -1) { if ($crit > 0) { if (! $config{$section}{'monitor_dont_crit'}) { $errorlevel = 2; } if ($elapsed == -1) { push @msgs, "CRIT: $path has no $type snapshots at all!"; } else { push @msgs, "CRIT: $path\'s newest $type snapshot is $dispelapsed old (should be < $dispcrit)"; } } } elsif ($elapsed > $warn) { if ($warn > 0) { if (! $config{$section}{'monitor_dont_warn'} && ($errorlevel < 2) ) { $errorlevel = 1; } push @msgs, "WARN: $path\'s newest $type snapshot is $dispelapsed old (should be < $dispwarn)"; } } else { # push @msgs .= "OK: $path\'s newest $type snapshot is $dispelapsed old \n"; } } } my @sorted_msgs = sort { lc($a) cmp lc($b) } @msgs; my @sorted_paths = sort { lc($a) cmp lc($b) } @paths; $msg = join (", ", @sorted_msgs); my $paths = join (", ", @sorted_paths); if ($msg eq '') { $msg = "OK: all monitored datasets \($paths\) have fresh snapshots"; } print "$msg\n"; exit $errorlevel; } #################################################################################### #################################################################################### #################################################################################### sub monitor_capacity { my ($config, $snaps, $snapsbytype, $snapsbypath) = @_; my %pools; my @messages; my $errlevel=0; # build pool list with corresponding capacity limits foreach my $section (keys %config) { my @pool = split ('/',$section); if (scalar @pool == 1 || !defined($pools{$pool[0]}) ) { my %capacitylimits; if (!check_capacity_limit($config{$section}{'capacity_warn'})) { die "ERROR: invalid zpool capacity warning limit!\n"; } if ($config{$section}{'capacity_warn'} != 0) { $capacitylimits{'warn'} = $config{$section}{'capacity_warn'}; } if (!check_capacity_limit($config{$section}{'capacity_crit'})) { die "ERROR: invalid zpool capacity critical limit!\n"; } if ($config{$section}{'capacity_crit'} != 0) { $capacitylimits{'crit'} = $config{$section}{'capacity_crit'}; } if (%capacitylimits) { $pools{$pool[0]} = \%capacitylimits; } } } foreach my $pool (keys %pools) { my $capacitylimitsref = $pools{$pool}; my ($exitcode, $msg) = check_zpool_capacity($pool,\%$capacitylimitsref); if ($exitcode > $errlevel) { $errlevel = $exitcode; } chomp $msg; push (@messages, $msg); } my @warninglevels = ('','*** WARNING *** ','*** CRITICAL *** '); my $message = $warninglevels[$errlevel] . join (', ',@messages); print "$message\n"; exit $errlevel; } #################################################################################### #################################################################################### #################################################################################### sub prune_snapshots { if ($args{'verbose'}) { print "INFO: pruning snapshots...\n"; } my ($config, $snaps, $snapsbytype, $snapsbypath) = @_; my %datestamp = get_date(); my $forcecacheupdate = 0; foreach my $section (keys %config) { if ($section =~ /^template/) { next; } if (! $config{$section}{'autoprune'}) { next; } if ($config{$section}{'process_children_only'}) { next; } my $path = $config{$section}{'path'}; my $period = 0; if (check_prune_defer($config, $section)) { if ($args{'verbose'}) { print "INFO: deferring snapshot pruning ($section)...\n"; } next; } foreach my $type (keys %{ $config{$section} }){ unless ($type =~ /ly$/) { next; } # we need to set the period length in seconds first if ($type eq 'frequently') { $period = 60 * $config{$section}{'frequent_period'}; } elsif ($type eq 'hourly') { $period = 60*60; } elsif ($type eq 'daily') { $period = 60*60*24; } elsif ($type eq 'weekly') { $period = 60*60*24*7; } elsif ($type eq 'monthly') { $period = 60*60*24*31; } elsif ($type eq 'yearly') { $period = 60*60*24*365.25; } # avoid pissing off use warnings by not executing this block if no matching snaps exist if (defined $snapsbytype{$path}{$type}{'sorted'}) { my @sorted = split (/\|/,$snapsbytype{$path}{$type}{'sorted'}); # if we say "daily=30" we really mean "don't keep any dailies more than 30 days old", etc my $maxage = ( time() - $config{$section}{$type} * $period ); # but if we say "daily=30" we ALSO mean "don't get rid of ANY dailies unless we have more than 30". my $minsnapsthistype = $config{$section}{$type}; # how many total snaps of this type do we currently have? my $numsnapsthistype = scalar (@sorted); my @prunesnaps; foreach my $snap( @sorted ){ # print "snap $path\@$snap has age $snaps{$path}{$snap}{'ctime'}, maxage is $maxage.\n"; if ( ($snaps{$path}{$snap}{'ctime'} < $maxage) && ($numsnapsthistype > $minsnapsthistype) ) { my $fullpath = $path . '@' . $snap; push(@prunesnaps,$fullpath); # we just got rid of a snap, so we now have one fewer, duh $numsnapsthistype--; } } if ((scalar @prunesnaps) > 0) { # print "found some snaps to prune!\n" if (checklock('sanoid_pruning')) { writelock('sanoid_pruning'); foreach my $snap( @prunesnaps ){ if ($args{'verbose'}) { print "INFO: pruning $snap ... \n"; } if (!$args{'force-prune'} && iszfsbusy($path)) { if ($args{'verbose'}) { print "INFO: deferring pruning of $snap - $path is currently in zfs send or receive.\n"; } } else { if (! $args{'readonly'}) { if (system($zfs, "destroy", $snap) == 0) { $pruned{$snap} = 1; my $dataset = (split '@', $snap)[0]; my $snapname = (split '@', $snap)[1]; if ($config{$dataset}{'pruning_script'}) { $ENV{'SANOID_TARGET'} = $dataset; $ENV{'SANOID_SNAPNAME'} = $snapname; if ($args{'verbose'}) { print "executing pruning_script '".$config{$dataset}{'pruning_script'}."' on dataset '$dataset'\n"; } my $ret = runscript('pruning_script',$dataset); delete $ENV{'SANOID_TARGET'}; delete $ENV{'SANOID_SNAPNAME'}; } } else { warn "could not remove $snap : $?"; } } } } removelock('sanoid_pruning'); removecachedsnapshots(0); } else { if ($args{'verbose'}) { print "INFO: deferring snapshot pruning - valid pruning lock held by other sanoid process.\n"; } } } } } } # if there were any deferred cache updates, # do them now and wait if necessary removecachedsnapshots(1); } # end prune_snapshots #################################################################################### #################################################################################### #################################################################################### sub take_snapshots { my ($config, $snaps, $snapsbytype, $snapsbypath) = @_; my %datestamp = get_date(); my $forcecacheupdate = 0; my @newsnaps; # get utc timestamp of the current day for DST check my $daystartUtc = timelocal(0, 0, 0, $datestamp{'mday'}, ($datestamp{'mon'}-1), $datestamp{'year'}); my ($isdst) = (localtime($daystartUtc))[8]; my $dstOffset = 0; if ($isdst ne $datestamp{'isdst'}) { # current dst is different then at the beginning og the day if ($isdst) { # DST ended in the current day $dstOffset = 60*60; } } if ($args{'verbose'}) { print "INFO: taking snapshots...\n"; } foreach my $section (keys %config) { if ($section =~ /^template/) { next; } if (! $config{$section}{'autosnap'}) { next; } if ($config{$section}{'process_children_only'}) { next; } my $path = $config{$section}{'path'}; my @types = ('yearly','monthly','weekly','daily','hourly','frequently'); foreach my $type (@types) { if ($config{$section}{$type} > 0) { my $newestage; # in seconds if (defined $snapsbytype{$path}{$type}{'newest'}) { $newestage = $snapsbytype{$path}{$type}{'newest'}; } else{ $newestage = 9999999999999999; } # for use with localtime: @preferredtime will be most recent preferred snapshot time in ($sec,$min,$hour,$mon-1,$year) format my @preferredtime; my $lastpreferred; # to avoid duplicates with DST my $handleDst = 0; if ($type eq 'frequently') { my $frequentslice = int($datestamp{'min'} / $config{$section}{'frequent_period'}); push @preferredtime,0; # try to hit 0 seconds push @preferredtime,$frequentslice * $config{$section}{'frequent_period'}; push @preferredtime,$datestamp{'hour'}; push @preferredtime,$datestamp{'mday'}; push @preferredtime,($datestamp{'mon'}-1); # january is month 0 push @preferredtime,$datestamp{'year'}; $lastpreferred = timelocal(@preferredtime); if ($lastpreferred > time()) { $lastpreferred -= 60 * $config{$section}{'frequent_period'}; } # preferred time is later this frequent period - so look at last frequent period } elsif ($type eq 'hourly') { push @preferredtime,0; # try to hit 0 seconds push @preferredtime,$config{$section}{'hourly_min'}; push @preferredtime,$datestamp{'hour'}; push @preferredtime,$datestamp{'mday'}; push @preferredtime,($datestamp{'mon'}-1); # january is month 0 push @preferredtime,$datestamp{'year'}; $lastpreferred = timelocal(@preferredtime); if ($dstOffset ne 0) { # timelocal doesn't take DST into account $lastpreferred += $dstOffset; # DST ended, avoid duplicates $handleDst = 1; } if ($lastpreferred > time()) { $lastpreferred -= 60*60; } # preferred time is later this hour - so look at last hour's } elsif ($type eq 'daily') { push @preferredtime,0; # try to hit 0 seconds push @preferredtime,$config{$section}{'daily_min'}; push @preferredtime,$config{$section}{'daily_hour'}; push @preferredtime,$datestamp{'mday'}; push @preferredtime,($datestamp{'mon'}-1); # january is month 0 push @preferredtime,$datestamp{'year'}; $lastpreferred = timelocal(@preferredtime); # timelocal doesn't take DST into account $lastpreferred += $dstOffset; # check if the planned time has different DST flag than the current my ($isdst) = (localtime($lastpreferred))[8]; if ($isdst ne $datestamp{'isdst'}) { if (!$isdst) { # correct DST difference $lastpreferred -= 60*60; } } if ($lastpreferred > time()) { $lastpreferred -= 60*60*24; if ($dstOffset ne 0) { # because we are going back one day # the DST difference has to be accounted # for in reverse now $lastpreferred -= 2*$dstOffset; } } # preferred time is later today - so look at yesterday's } elsif ($type eq 'weekly') { # calculate offset in seconds for the desired weekday my $offset = 0; if ($config{$section}{'weekly_wday'} < $datestamp{'wday'}) { $offset += 7; } $offset += $config{$section}{'weekly_wday'} - $datestamp{'wday'}; $offset *= 60*60*24; # full day push @preferredtime,0; # try to hit 0 seconds push @preferredtime,$config{$section}{'weekly_min'}; push @preferredtime,$config{$section}{'weekly_hour'}; push @preferredtime,$datestamp{'mday'}; push @preferredtime,($datestamp{'mon'}-1); # january is month 0 push @preferredtime,$datestamp{'year'}; $lastpreferred = timelocal(@preferredtime); $lastpreferred += $offset; if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*7; } # preferred time is later this week - so look at last week's } elsif ($type eq 'monthly') { push @preferredtime,0; # try to hit 0 seconds push @preferredtime,$config{$section}{'monthly_min'}; push @preferredtime,$config{$section}{'monthly_hour'}; push @preferredtime,$config{$section}{'monthly_mday'}; push @preferredtime,($datestamp{'mon'}-1); # january is month 0 push @preferredtime,$datestamp{'year'}; $lastpreferred = timelocal(@preferredtime); if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*31; } # preferred time is later this month - so look at last month's } elsif ($type eq 'yearly') { push @preferredtime,0; # try to hit 0 seconds push @preferredtime,$config{$section}{'yearly_min'}; push @preferredtime,$config{$section}{'yearly_hour'}; push @preferredtime,$config{$section}{'yearly_mday'}; push @preferredtime,($config{$section}{'yearly_mon'}-1); # january is month 0 push @preferredtime,$datestamp{'year'}; $lastpreferred = timelocal(@preferredtime); if ($lastpreferred > time()) { $lastpreferred -= 60*60*24*31*365.25; } # preferred time is later this year - so look at last year } else { warn "WARN: unknown interval type $type in config!"; next; } # reconstruct our human-formatted most recent preferred snapshot time into an epoch time, to compare with the epoch of our most recent snapshot my $maxage = time()-$lastpreferred; if ( $newestage > $maxage ) { # update to most current possible datestamp %datestamp = get_date(); # print "we should have had a $type snapshot of $path $maxage seconds ago; most recent is $newestage seconds old.\n"; my $flags = ""; # use zfs (atomic) recursion if specified in config if ($config{$section}{'zfs_recursion'}) { $flags .= "r"; } if ($handleDst) { $flags .= "d"; } if ($flags ne "") { push(@newsnaps, "$path\@autosnap_$datestamp{'sortable'}_$type\@$flags"); } else { push(@newsnaps, "$path\@autosnap_$datestamp{'sortable'}_$type"); } } } } } if ( (scalar(@newsnaps)) > 0) { foreach my $snap ( @newsnaps ) { my $extraMessage = ""; my @split = split '@', $snap, -1; my $recursiveFlag = 0; my $dstHandling = 0; if (scalar(@split) == 3) { my $flags = $split[2]; if (index($flags, "r") != -1) { $recursiveFlag = 1; $extraMessage = " (zfs recursive)"; chop $snap; } if (index($flags, "d") != -1) { $dstHandling = 1; chop $snap; } chop $snap; } my $dataset = $split[0]; my $snapname = $split[1]; my $presnapshotfailure = 0; my $ret = 0; if ($config{$dataset}{'pre_snapshot_script'}) { $ENV{'SANOID_TARGET'} = $dataset; $ENV{'SANOID_SNAPNAME'} = $snapname; if ($args{'verbose'}) { print "executing pre_snapshot_script '".$config{$dataset}{'pre_snapshot_script'}."' on dataset '$dataset'\n"; } if (!$args{'readonly'}) { $ret = runscript('pre_snapshot_script',$dataset); } delete $ENV{'SANOID_TARGET'}; delete $ENV{'SANOID_SNAPNAME'}; if ($ret != 0) { # warning was already thrown by runscript function $config{$dataset}{'no_inconsistent_snapshot'} and next; $presnapshotfailure = 1; } } if ($args{'verbose'}) { print "taking snapshot $snap$extraMessage\n"; } if (!$args{'readonly'}) { my $stderr; my $exit; ($stderr, $exit) = tee_stderr { if ($recursiveFlag) { system($zfs, "snapshot", "-r", "$snap"); } else { system($zfs, "snapshot", "$snap"); } }; $exit == 0 or do { if ($dstHandling) { if ($stderr =~ /already exists/) { $exit = 0; $snap =~ s/_([a-z]+)$/dst_$1/g; if ($args{'verbose'}) { print "taking dst snapshot $snap$extraMessage\n"; } if ($recursiveFlag) { system($zfs, "snapshot", "-r", "$snap") == 0 or warn "CRITICAL ERROR: $zfs snapshot -r $snap failed, $?"; } else { system($zfs, "snapshot", "$snap") == 0 or warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?"; } } } }; $exit == 0 or do { if ($recursiveFlag) { warn "CRITICAL ERROR: $zfs snapshot -r $snap failed, $?"; } else { warn "CRITICAL ERROR: $zfs snapshot $snap failed, $?"; } }; } if ($config{$dataset}{'post_snapshot_script'}) { if (!$presnapshotfailure or $config{$dataset}{'force_post_snapshot_script'}) { $ENV{'SANOID_TARGET'} = $dataset; $ENV{'SANOID_SNAPNAME'} = $snapname; if ($args{'verbose'}) { print "executing post_snapshot_script '".$config{$dataset}{'post_snapshot_script'}."' on dataset '$dataset'\n"; } if (!$args{'readonly'}) { runscript('post_snapshot_script',$dataset); } delete $ENV{'SANOID_TARGET'}; delete $ENV{'SANOID_SNAPNAME'}; } } } $forcecacheupdate = 1; %snaps = getsnaps(%config,$cacheTTL,$forcecacheupdate); } } #################################################################################### #################################################################################### #################################################################################### sub blabber { my ($config, $snaps, $snapsbytype, $snapsbypath) = @_; $Data::Dumper::Sortkeys = 1; print "****** CONFIGS ******\n"; print Dumper(\%config); #print "****** SNAPSHOTS ******\n"; #print Dumper(\%snaps); #print "****** SNAPSBYTYPE ******\n"; #print Dumper(\%snapsbytype); #print "****** SNAPSBYPATH ******\n"; #print Dumper(\%snapsbypath); print "\n"; foreach my $section (keys %config) { my $path = $config{$section}{'path'}; print "Filesystem $path has:\n"; print " $snapsbypath{$path}{'numsnaps'} total snapshots "; if ($snapsbypath{$path}{'numsnaps'} == 0) { print "(no current snapshots)" } else { print "(newest: "; my $newest = sprintf("%.1f",$snapsbypath{$path}{'newest'} / 60 / 60); print "$newest hours old)\n"; foreach my $type (keys %{ $snapsbytype{$path} }){ print " $snapsbytype{$path}{$type}{'numsnaps'} $type\n"; print " desired: $config{$section}{$type}\n"; print " newest: "; my $newest = sprintf("%.1f",($snapsbytype{$path}{$type}{'newest'} / 60 / 60)); print "$newest hours old, named $snapsbytype{$path}{$type}{'newestname'}\n"; } } print "\n\n"; } } # end blabber #################################################################################### #################################################################################### #################################################################################### sub getsnapsbytype { my ($config, $snaps) = @_; my %snapsbytype; # iterate through each module section - each section is a single ZFS path foreach my $section (keys %config) { my $path = $config{$section}{'path'}; my %rawsnaps; foreach my $name (keys %{ $snaps{$path} }){ my $type = $snaps{$path}{$name}{'type'}; $rawsnaps{$type}{$name} = $snaps{$path}{$name}{'ctime'} } # iterate through snapshots of each type, ordered by creation time of each snapshot within that type foreach my $type (keys %rawsnaps) { $snapsbytype{$path}{$type}{'numsnaps'} = scalar (keys %{ $rawsnaps{$type} }); my @sortedsnaps; foreach my $name ( sort { $rawsnaps{$type}{$a} <=> $rawsnaps{$type}{$b} } keys %{ $rawsnaps{$type} } ) { push @sortedsnaps, $name; $snapsbytype{$path}{$type}{'newest'} = (time-$snaps{$path}{$name}{'ctime'}); $snapsbytype{$path}{$type}{'newestname'} = $name; } $snapsbytype{$path}{$type}{'sorted'} = join ('|',@sortedsnaps); } } return %snapsbytype; } # end getsnapsbytype #################################################################################### #################################################################################### #################################################################################### sub getsnapsbypath { my ($config,$snaps) = @_; my %snapsbypath; # iterate through each module section - each section is a single ZFS path foreach my $section (keys %config) { my $path = $config{$section}{'path'}; $snapsbypath{$path}{'numsnaps'} = scalar (keys %{ $snaps{$path} }); # iterate through snapshots of each type, ordered by creation time of each snapshot within that type my %rawsnaps; foreach my $snapname ( keys %{ $snaps{$path} } ) { $rawsnaps{$path}{$snapname} = $snaps{$path}{$snapname}{'ctime'}; } my @sortedsnaps; foreach my $snapname ( sort { $rawsnaps{$path}{$a} <=> $rawsnaps{$path}{$b} } keys %{ $rawsnaps{$path} } ) { push @sortedsnaps, $snapname; $snapsbypath{$path}{'newest'} = (time-$snaps{$path}{$snapname}{'ctime'}); } my $sortedsnaps = join ('|',@sortedsnaps); $snapsbypath{$path}{'sorted'} = $sortedsnaps; } return %snapsbypath; } # end getsnapsbypath #################################################################################### #################################################################################### #################################################################################### sub getsnaps { my ($config, $cacheTTL, $forcecacheupdate) = @_; my @rawsnaps; my ($dev, $ino, $mode, $nlink, $uid, $gid, $rdev, $size, $atime, $mtime, $ctime, $blksize, $blocks) = stat($cache); if ( $forcecacheupdate || ! -f $cache || (time() - $mtime) > $cacheTTL ) { if (checklock('sanoid_cacheupdate')) { writelock('sanoid_cacheupdate'); if ($args{'verbose'}) { if ($args{'force-update'}) { print "INFO: cache forcibly expired - updating from zfs list.\n"; } else { print "INFO: cache expired - updating from zfs list.\n"; } } open FH, "$zfs get -Hrpt snapshot creation |"; @rawsnaps = ; close FH; open FH, "> $cache" or die 'Could not write to $cache!\n'; print FH @rawsnaps; close FH; removelock('sanoid_cacheupdate'); } else { if ($args{'verbose'}) { print "INFO: deferring cache update - valid cache update lock held by another sanoid process.\n"; } open FH, "< $cache"; @rawsnaps = ; close FH; } } else { # if ($args{'debug'}) { print "DEBUG: cache not expired (" . (time() - $mtime) . " seconds old with TTL of $cacheTTL): pulling snapshot list from cache.\n"; } open FH, "< $cache"; @rawsnaps = ; close FH; } foreach my $snap (@rawsnaps) { my ($fs,$snapname,$snapdate) = ($snap =~ m/(.*)\@(.*ly)\t*creation\t*(\d*)/); # avoid pissing off use warnings if (defined $snapname) { my ($snaptype) = ($snapname =~ m/.*_(\w*ly)/); if ($snapname =~ /^autosnap/) { $snaps{$fs}{$snapname}{'ctime'}=$snapdate; $snaps{$fs}{$snapname}{'type'}=$snaptype; } } } return %snaps; } #################################################################################### #################################################################################### #################################################################################### sub init { my ($conf_file, $default_conf_file) = @_; my %config; unless (-e $default_conf_file ) { die "FATAL: cannot load $default_conf_file - please restore a clean copy, this is not a user-editable file!"; } unless (-e $conf_file ) { die "FATAL: cannot load $conf_file - please create a valid local config file before running sanoid!"; } tie my %defaults, 'Config::IniFiles', ( -file => $default_conf_file ) or die "FATAL: cannot load $default_conf_file - please restore a clean copy, this is not a user-editable file!"; tie my %ini, 'Config::IniFiles', ( -file => $conf_file ) or die "FATAL: cannot load $conf_file - please create a valid local config file before running sanoid!"; # we'll use these later to normalize potentially true and false values on any toggle keys my @toggles = ('autosnap','autoprune','monitor_dont_warn','monitor_dont_crit','monitor','recursive','process_children_only','skip_children','no_inconsistent_snapshot','force_post_snapshot_script'); # recursive is defined as toggle but can also have the special value "zfs", it is kept to be backward compatible my @istrue=(1,"true","True","TRUE","yes","Yes","YES","on","On","ON"); my @isfalse=(0,"false","False","FALSE","no","No","NO","off","Off","OFF"); # check if default configuration file is up to date my $defaults_version = 1; if (defined $defaults{'version'}{'version'}) { $defaults_version = $defaults{'version'}{'version'}; delete $defaults{'version'}; } if ($defaults_version < $MINIMUM_DEFAULTS_VERSION) { die "FATAL: you're using sanoid.defaults.conf v$defaults_version, this version of sanoid requires a minimum sanoid.defaults.conf v$MINIMUM_DEFAULTS_VERSION"; } foreach my $section (keys %ini) { # first up - die with honor if unknown parameters are set in any modules or templates by the user. foreach my $key (keys %{$ini{$section}}) { if (! defined ($defaults{'template_default'}{$key})) { die "FATAL ERROR: I don't understand the setting $key you've set in \[$section\] in $conf_file.\n"; } } if ($section =~ /^template_/) { next; } # don't process templates directly # only set defaults on sections that haven't already been initialized - this allows us to override values # for sections directly when they've already been defined recursively, without starting them over from scratch. if (! defined ($config{$section}{'initialized'})) { if ($args{'debug'}) { print "DEBUG: initializing \$config\{$section\} with default values from $default_conf_file.\n"; } # set default values from %defaults, which can then be overriden by template # and/or local settings within the module. foreach my $key (keys %{$defaults{'template_default'}}) { if (! ($key =~ /template|recursive|children_only/)) { $config{$section}{$key} = $defaults{'template_default'}{$key}; } } # override with values from user-defined default template, if any foreach my $key (keys %{$ini{'template_default'}}) { if ($key =~ /template|recursive/) { warn "ignored key '$key' from user-defined default template.\n"; next; } if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined default template.\n"; } $config{$section}{$key} = $ini{'template_default'}{$key}; } } # override with values from user-defined templates applied to this module, # in the order they were specified (ie use_template = default,production,mytemplate) if (defined $ini{$section}{'use_template'}) { my @templates = split (' *, *',$ini{$section}{'use_template'}); foreach my $rawtemplate (@templates) { # strip trailing whitespace $rawtemplate =~ s/\s+$//g; my $template = 'template_'.$rawtemplate; foreach my $key (keys %{$ini{$template}}) { if ($key =~ /template|recursive/) { warn "ignored key '$key' from '$rawtemplate' template.\n"; next; } if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value from user-defined template $template.\n"; } $config{$section}{$key} = $ini{$template}{$key}; } } } # override with any locally set values in the module itself foreach my $key (keys %{$ini{$section}} ) { if (! ($key =~ /template|recursive|skip_children/)) { if ($args{'debug'}) { print "DEBUG: overriding $key on $section with value directly set in module.\n"; } $config{$section}{$key} = $ini{$section}{$key}; } } # make sure that true values are true and false values are false for any toggled values foreach my $toggle(@toggles) { foreach my $true (@istrue) { if (defined $config{$section}{$toggle} && $config{$section}{$toggle} eq $true) { $config{$section}{$toggle} = 1; } } foreach my $false (@isfalse) { if (defined $config{$section}{$toggle} && $config{$section}{$toggle} eq $false) { $config{$section}{$toggle} = 0; } } } # section path is the section name, unless section path has been explicitly defined if (defined ($ini{$section}{'path'})) { $config{$section}{'path'} = $ini{$section}{'path'}; } else { $config{$section}{'path'} = $section; } # how 'bout some recursion? =) if ($config{$section}{'zfs_recursion'} && $config{$section}{'zfs_recursion'} == 1 && $config{$section}{'autosnap'} == 1) { warn "ignored autosnap configuration for '$section' because it's part of a zfs recursion.\n"; $config{$section}{'autosnap'} = 0; } my $recursive = $ini{$section}{'recursive'} && grep( /^$ini{$section}{'recursive'}$/, @istrue ); my $zfsRecursive = $ini{$section}{'recursive'} && $ini{$section}{'recursive'} =~ /zfs/i; my $skipChildren = $ini{$section}{'skip_children'} && grep( /^$ini{$section}{'skip_children'}$/, @istrue ); my @datasets; if ($zfsRecursive || $recursive || $skipChildren) { if ($zfsRecursive) { $config{$section}{'zfs_recursion'} = 1; } @datasets = getchilddatasets($config{$section}{'path'}); DATASETS: foreach my $dataset(@datasets) { chomp $dataset; if ($zfsRecursive) { # don't try to take the snapshot ourself, recursive zfs snapshot will take care of that $config{$dataset}{'autosnap'} = 0; foreach my $key (keys %{$config{$section}} ) { if (! ($key =~ /template|recursive|children_only|autosnap/)) { if ($args{'debug'}) { print "DEBUG: recursively setting $key from $section to $dataset.\n"; } $config{$dataset}{$key} = $config{$section}{$key}; } } } else { if ($skipChildren) { if ($args{'debug'}) { print "DEBUG: ignoring $dataset.\n"; } delete $config{$dataset}; next DATASETS; } foreach my $key (keys %{$config{$section}} ) { if (! ($key =~ /template|recursive|children_only/)) { if ($args{'debug'}) { print "DEBUG: recursively setting $key from $section to $dataset.\n"; } $config{$dataset}{$key} = $config{$section}{$key}; } } } $config{$dataset}{'path'} = $dataset; $config{$dataset}{'initialized'} = 1; } } } return %config; } # end sub init #################################################################################### #################################################################################### #################################################################################### sub get_date { my %datestamp; ($datestamp{'sec'},$datestamp{'min'},$datestamp{'hour'},$datestamp{'mday'},$datestamp{'mon'},$datestamp{'year'},$datestamp{'wday'},$datestamp{'yday'},$datestamp{'isdst'}) = localtime(time); $datestamp{'year'} += 1900; $datestamp{'unix_time'} = (((((((($datestamp{'year'} - 1971) * 365) + $datestamp{'yday'}) * 24) + $datestamp{'hour'}) * 60) + $datestamp{'min'}) * 60) + $datestamp{'sec'}; $datestamp{'sec'} = sprintf ("%02u", $datestamp{'sec'}); $datestamp{'min'} = sprintf ("%02u", $datestamp{'min'}); $datestamp{'hour'} = sprintf ("%02u", $datestamp{'hour'}); $datestamp{'mday'} = sprintf ("%02u", $datestamp{'mday'}); $datestamp{'mon'} = sprintf ("%02u", ($datestamp{'mon'} + 1)); $datestamp{'noseconds'} = "$datestamp{'year'}-$datestamp{'mon'}-$datestamp{'mday'}_$datestamp{'hour'}:$datestamp{'min'}"; $datestamp{'sortable'} = "$datestamp{'noseconds'}:$datestamp{'sec'}"; return %datestamp; } #################################################################################### #################################################################################### #################################################################################### sub displaytime { # take a time in seconds, return it in human readable form my ($elapsed) = @_; my $days = int ($elapsed / 60 / 60 / 24); $elapsed -= $days * 60 * 60 * 24; my $hours = int ($elapsed / 60 / 60); $elapsed -= $hours * 60 * 60; my $minutes = int ($elapsed / 60); $elapsed -= $minutes * 60; my $seconds = int($elapsed); my $humanreadable; if ($days) { $humanreadable .= " $days" . 'd'; } if ($hours || $days) { $humanreadable .= " $hours" . 'h'; } if ($minutes || $hours || $days) { $humanreadable .= " $minutes" . 'm'; } $humanreadable .= " $seconds" . 's'; $humanreadable =~ s/^ //; return $humanreadable; } #################################################################################### #################################################################################### #################################################################################### sub check_zpool() { # check_zfs Nagios plugin for monitoring Sun ZFS zpools # Copyright (c) 2007 # original Written by Nathan Butcher # adapted for use within Sanoid framework by Jim Salter (2014) # # Released under the GNU Public License # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA # Version: 0.9.2 # Date : 24th July 2007 # This plugin has tested on FreeBSD 7.0-CURRENT and Solaris 10 # With a bit of fondling, it could be expanded to recognize other OSes in # future (e.g. if FUSE Linux gets off the ground) # Verbose levels:- # 1 - Only alert us of zpool health and size stats # 2 - ...also alert us of failed devices when things go bad # 3 - ...alert us of the status of all devices regardless of health # # Usage: check_zfs # Example: check_zfs zeepool 1 # ZPOOL zeedata : ONLINE {Size:3.97G Used:183K Avail:3.97G Cap:0%} my %ERRORS=('DEPENDENT'=>4,'UNKNOWN'=>3,'OK'=>0,'WARNING'=>1,'CRITICAL'=>2); my $state="UNKNOWN"; my $msg="FAILURE"; my $pool=shift; my $verbose=shift; my $size=""; my $used=""; my $avail=""; my $cap=""; my $health=""; my $dmge=""; my $dedup=""; if ($verbose < 1 || $verbose > 3) { print "Verbose levels range from 1-3\n"; exit $ERRORS{$state}; } my $statcommand="$zpool list -o name,size,cap,health,free $pool"; if (! open STAT, "$statcommand|") { print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n"); exit $ERRORS{$state}; } # chuck the header line my $header = ; # find and parse the line with values for the pool while() { chomp; if (/^${pool}\s+/) { my @row = split (/ +/); my $name; ($name, $size, $cap, $health, $avail) = @row; } } # Tony: Debuging # print "Size: $size \t Used: $used \t Avai: $avail \t Cap: $cap \t Health: $health\n"; close(STAT); ## check for valid zpool list response from zpool if (! $health ) { $state = "CRITICAL"; $msg = sprintf "ZPOOL {%s} does not exist and/or is not responding!\n", $pool; print $state, " ", $msg; exit ($ERRORS{$state}); } ## determine health of zpool and subsequent error status if ($health eq "ONLINE" ) { $state = "OK"; } else { if ($health eq "DEGRADED") { $state = "WARNING"; } else { $state = "CRITICAL"; } } ## get more detail on possible device failure ## flag to detect section of zpool status involving our zpool my $poolfind=0; $statcommand="$zpool status $pool"; if (! open STAT, "$statcommand|") { $state = 'CRITICAL'; print ("$state '$statcommand' command returns no result! NOTE: This plugin needs OS support for ZFS, and execution with root privileges.\n"); exit $ERRORS{$state}; } ## go through zfs status output to find zpool fses and devices while() { chomp; if (/^\s${pool}/ && $poolfind==1) { $poolfind=2; next; } elsif ( $poolfind==1 ) { $poolfind=0; } if (/NAME\s+STATE\s+READ\s+WRITE\s+CKSUM/) { $poolfind=1; } if ( /^$/ ) { $poolfind=0; } if ($poolfind == 2) { ## special cases pertaining to full verbose if (/^\sspares/) { next unless $verbose == 3; $dmge=$dmge . "[SPARES]:- "; next; } if (/^\s{5}spare\s/) { next unless $verbose == 3; my ($sta) = /spare\s+(\S+)/; $dmge=$dmge . "[SPARE:${sta}]:- "; next; } if (/^\s{5}replacing\s/) { next unless $verbose == 3; my $perc; my ($sta) = /^\s+\S+\s+(\S+)/; if (/%/) { ($perc) = /([0-9]+%)/; } else { $perc = "working"; } $dmge=$dmge . "[REPLACING:${sta} (${perc})]:- "; next; } ## other cases my ($dev, $sta, $read, $write, $cksum) = /^\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/; if (!defined($sta)) { # cache and logs are special and don't have a status next; } ## pool online, not degraded thanks to dead/corrupted disk if ($state eq "OK" && $sta eq "UNAVAIL") { $state="WARNING"; ## switching to verbose level 2 to explain weirdness if ($verbose == 1) { $verbose =2; } } ## no display for verbose level 1 next if ($verbose==1); ## don't display working devices for verbose level 2 if ($verbose==2 && ($state eq "OK" || $sta eq "ONLINE" || $sta eq "AVAIL" || $sta eq "INUSE")) { # check for io/checksum errors my @vdeverr = (); if ($read != 0) { push @vdeverr, "read" }; if ($write != 0) { push @vdeverr, "write" }; if ($cksum != 0) { push @vdeverr, "cksum" }; if (scalar @vdeverr) { $dmge=$dmge . "(" . $dev . ":" . join(", ", @vdeverr) . " errors) "; if ($state eq "OK") { $state = "WARNING" }; } next; } ## show everything else if (/^\s{3}(\S+)/) { $dmge=$dmge . "<" . $dev . ":" . $sta . "> "; } elsif (/^\s{7}(\S+)/) { $dmge=$dmge . "(" . $dev . ":" . $sta . ") "; } else { $dmge=$dmge . $dev . ":" . $sta . " "; } } } ## calling all goats! $msg = sprintf "ZPOOL %s : %s {Size:%s Free:%s Cap:%s} %s\n", $pool, $health, $size, $avail, $cap, $dmge; $msg = "$state $msg"; return ($ERRORS{$state},$msg); } # end check_zpool() sub check_capacity_limit { my $value = shift; if (!defined($value) || $value !~ /^\d+\z/) { return undef; } if ($value < 0 || $value > 100) { return undef; } return 1 } sub check_zpool_capacity() { my %ERRORS=('DEPENDENT'=>4,'UNKNOWN'=>3,'OK'=>0,'WARNING'=>1,'CRITICAL'=>2); my $state="UNKNOWN"; my $msg="FAILURE"; my $pool=shift; my $capacitylimitsref=shift; my %capacitylimits=%$capacitylimitsref; my $statcommand="$zpool list -H -o cap $pool"; if (! open STAT, "$statcommand|") { print ("$state '$statcommand' command returns no result!\n"); exit $ERRORS{$state}; } my $line = ; close(STAT); chomp $line; my @row = split(/ +/, $line); my $cap=$row[0]; ## check for valid capacity value if ($cap !~ m/^[0-9]{1,3}%$/ ) { $state = "CRITICAL"; $msg = sprintf "ZPOOL {%s} does not exist and/or is not responding!\n", $pool; print $state, " ", $msg; exit ($ERRORS{$state}); } $state="OK"; # check capacity my $capn = $cap; $capn =~ s/\D//g; if (defined($capacitylimits{"warn"})) { if ($capn >= $capacitylimits{"warn"}) { $state = "WARNING"; } } if (defined($capacitylimits{"crit"})) { if ($capn >= $capacitylimits{"crit"}) { $state = "CRITICAL"; } } $msg = sprintf "ZPOOL %s : %s\n", $pool, $cap; $msg = "$state $msg"; return ($ERRORS{$state},$msg); } # end check_zpool_capacity() sub check_prune_defer { my ($config, $section) = @_; my $limit = $config{$section}{"prune_defer"}; if (!check_capacity_limit($limit)) { die "ERROR: invalid prune_defer limit!\n"; } if ($limit eq 0) { return 0; } my @parts = split /\//, $section, 2; my $pool = $parts[0]; if (exists $capacitycache{$pool}) { } else { $capacitycache{$pool} = get_zpool_capacity($pool); } if ($limit < $capacitycache{$pool}) { return 0; } return 1; } sub get_zpool_capacity { my $pool = shift; my $statcommand="$zpool list -H -o cap $pool"; if (! open STAT, "$statcommand|") { die "ERROR: '$statcommand' command returns no result!\n"; } my $line = ; close(STAT); chomp $line; my @row = split(/ +/, $line); my $cap=$row[0]; ## check for valid capacity value if ($cap !~ m/^[0-9]{1,3}%$/ ) { die "ERROR: '$statcommand' command returned invalid capacity value ($cap)!\n"; } $cap =~ s/\D//g; return $cap; } ###################################################################################################### ###################################################################################################### ###################################################################################################### ###################################################################################################### ###################################################################################################### sub checklock { # take argument $lockname. # # read /var/run/$lockname.lock for a pid on first line and a mutex on second line. # # check process list to see if the pid from /var/run/$lockname.lock is still active with # the original mutex found in /var/run/$lockname.lock. # # return: # 0 if lock is present and valid for another process # 1 if no lock is present # 2 if lock is present, but we own the lock # # shorthand - any true return indicates we are clear to lock; a false return indicates # that somebody else already has the lock and therefore we cannot. # my $lockname = shift; my $lockfile = "/var/run/$lockname.lock"; if (! -e $lockfile) { # no lockfile return 1; } # make sure lockfile contains something if ( -z $lockfile) { # zero size lockfile, something is wrong die "ERROR: something is wrong! $lockfile is empty\n"; } # lockfile exists. read pid and mutex from it. see if it's our pid. if not, see if # there's still a process running with that pid and with the same mutex. open FH, "< $lockfile" or die "ERROR: unable to open $lockfile"; my @lock = ; close FH; # if we didn't get exactly 2 items from the lock file there is a problem if (scalar(@lock) != 2) { die "ERROR: $lockfile is invalid.\n" } my $lockmutex = pop(@lock); my $lockpid = pop(@lock); chomp $lockmutex; chomp $lockpid; if ($lockpid == $$) { # we own the lockfile. no need to check any further. return 2; } open PL, "$pscmd -p $lockpid -o args= |"; my @processlist = ; close PL; my $checkmutex = pop(@processlist); chomp $checkmutex; if ($checkmutex eq $lockmutex) { # lock exists, is valid, is not owned by us - return false return 0; } else { # lock is present but not valid - remove and return true unlink $lockfile; return 1; } } sub removelock { # take argument $lockname. # # make sure /var/run/$lockname.lock actually belongs to me (contains my pid and mutex) # and remove it if it does, die if it doesn't. my $lockname = shift; my $lockfile = "/var/run/$lockname.lock"; if (checklock($lockname) == 2) { unlink $lockfile; return; } elsif (checklock($lockname) == 1) { die "ERROR: No valid lockfile found - Did a rogue process or user update or delete it?\n"; } else { die "ERROR: A valid lockfile exists but does not belong to me! I refuse to remove it.\n"; } } sub writelock { # take argument $lockname. # # write a lockfile to /var/run/$lockname.lock with first line # being my pid and second line being my mutex. my $lockname = shift; my $lockfile = "/var/run/$lockname.lock"; # die honorably rather than overwriting a valid, existing lock if (! checklock($lockname)) { die "ERROR: Valid lock already exists - I refuse to overwrite it. Committing seppuku now.\n"; } my $pid = $$; open PL, "$pscmd -p $$ -o args= |"; my @processlist = ; close PL; my $mutex = pop(@processlist); chomp $mutex; open FH, "> $lockfile"; print FH "$pid\n"; print FH "$mutex\n"; close FH; } sub iszfsbusy { # check to see if ZFS filesystem passed in as argument currently has a zfs send or zfs receive process referencing it. # return true if busy (currently being sent or received), return false if not. my $fs = shift; # if (args{'debug'}) { print "DEBUG: checking to see if $fs on is already in zfs receive using $pscmd -Ao args= ...\n"; } open PL, "$pscmd -Ao args= |"; my @processes = ; close PL; foreach my $process (@processes) { # if ($args{'debug'}) { print "DEBUG: checking process $process...\n"; } if ($process =~ /zfs *(send|receive|recv).*$fs/) { # there's already a zfs send/receive process for our target filesystem - return true # if ($args{'debug'}) { print "DEBUG: process $process matches target $fs!\n"; } return 1; } } # no zfs receive processes for our target filesystem found - return false return 0; } #######################################################################################################################3 #######################################################################################################################3 #######################################################################################################################3 sub getchilddatasets { # for later, if we make sanoid itself support sudo use my $fs = shift; my $mysudocmd = ''; my $getchildrencmd = "$mysudocmd $zfs list -o name -t filesystem,volume -Hr $fs |"; if ($args{'debug'}) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; } open FH, $getchildrencmd; my @children = ; close FH; # parent dataset is the first element shift @children; return @children; } #######################################################################################################################3 #######################################################################################################################3 #######################################################################################################################3 sub removecachedsnapshots { my $wait = shift; if (not %pruned) { return; } my $unlocked = checklock('sanoid_cacheupdate'); if ($wait != 1 && not $unlocked) { if ($args{'verbose'}) { print "INFO: deferring cache update (snapshot removal) - valid cache update lock held by another sanoid process.\n"; } return; } # wait until we can get a lock to do our cache changes while (not $unlocked) { if ($args{'verbose'}) { print "INFO: waiting for cache update lock held by another sanoid process.\n"; } sleep(10); $unlocked = checklock('sanoid_cacheupdate'); } writelock('sanoid_cacheupdate'); if ($args{'verbose'}) { print "INFO: removing destroyed snapshots from cache.\n"; } open FH, "< $cache"; my @rawsnaps = ; close FH; open FH, "> $cache" or die 'Could not write to $cache!\n'; foreach my $snapline ( @rawsnaps ) { my @columns = split("\t", $snapline); my $snap = $columns[0]; print FH $snapline unless ( exists($pruned{$snap}) ); } close FH; removelock('sanoid_cacheupdate'); %snaps = getsnaps(\%config,$cacheTTL,$forcecacheupdate); # clear hash undef %pruned; } #######################################################################################################################3 #######################################################################################################################3 #######################################################################################################################3 sub runscript { my $key=shift; my $dataset=shift; my $timeout=$config{$dataset}{'script_timeout'}; my $ret; eval { if ($timeout gt 0) { local $SIG{ALRM} = sub { die "alarm\n" }; alarm $timeout; } $ret = system($config{$dataset}{$key}); alarm 0; }; if ($@) { if ($@ eq "alarm\n") { warn "WARN: $key didn't finish in the allowed time!"; } else { warn "CRITICAL ERROR: $@"; } return -1; } else { if ($ret != 0) { warn "WARN: $key failed, $?"; } } return $ret; } #######################################################################################################################3 #######################################################################################################################3 #######################################################################################################################3 sub convertTimePeriod { my $value=shift; my $period=shift; if ($value =~ /^\d+[yY]$/) { $period = 60*60*24*31*365; chop $value; } elsif ($value =~ /^\d+[wW]$/) { $period = 60*60*24*7; chop $value; } elsif ($value =~ /^\d+[dD]$/) { $period = 60*60*24; chop $value; } elsif ($value =~ /^\d+[hH]$/) { $period = 60*60; chop $value; } elsif ($value =~ /^\d+[mM]$/) { $period = 60; chop $value; } elsif ($value =~ /^\d+[sS]$/) { $period = 1; chop $value; } elsif ($value =~ /^\d+$/) { # no unit, provided fallback period is used } else { # invalid value, return smallest valid value as fallback # (will trigger a warning message for monitoring for sure) return 1; } return $value * $period; } __END__ =head1 NAME sanoid - ZFS snapshot management and replication tool =head1 SYNOPSIS sanoid [options] Assumes --cron --verbose if no other arguments (other than configdir) are specified Options: --configdir=DIR Specify a directory to find config file sanoid.conf --cron Creates snapshots and purges expired snapshots --verbose Prints out additional information during a sanoid run --readonly Simulates creation/deletion of snapshots --quiet Suppresses non-error output --force-update Clears out sanoid's zfs snapshot cache --monitor-health Reports on zpool "health", in a Nagios compatible format --monitor-capacity Reports on zpool capacity, in a Nagios compatible format --monitor-snapshots Reports on snapshot "health", in a Nagios compatible format --take-snapshots Creates snapshots as specified in sanoid.conf --prune-snapshots Purges expired snapshots as specified in sanoid.conf --force-prune Purges expired snapshots even if a send/recv is in progress --help Prints this helptext --version Prints the version number --debug Prints out a lot of additional information during a sanoid run sanoid-2.0.3/sanoid.conf000066400000000000000000000064231355716220600151310ustar00rootroot00000000000000###################################### # This is a sample sanoid.conf file. # # It should go in /etc/sanoid. # ###################################### # name your backup modules with the path to their ZFS dataset - no leading slash. [zpoolname/datasetname] # pick one or more templates - they're defined (and editable) below. Comma separated, processed in order. # in this example, template_demo's daily value overrides template_production's daily value. use_template = production,demo # if you want to, you can override settings in the template directly inside module definitions like this. # in this example, we override the template to only keep 12 hourly and 1 monthly snapshot for this dataset. hourly = 12 monthly = 1 # you can also handle datasets recursively. [zpoolname/parent] use_template = production recursive = yes # if you want sanoid to manage the child datasets but leave this one alone, set process_children_only. process_children_only = yes # you can selectively override settings for child datasets which already fall under a recursive definition. [zpoolname/parent/child] # child datasets already initialized won't be wiped out, so if you use a new template, it will # only override the values already set by the parent template, not replace it completely. use_template = demo ############################# # templates below this line # ############################# # name your templates template_templatename. you can create your own, and use them in your module definitions above. [template_demo] daily = 60 [template_production] frequently = 0 hourly = 36 daily = 30 monthly = 3 yearly = 0 autosnap = yes autoprune = yes [template_backup] autoprune = yes frequently = 0 hourly = 30 daily = 90 monthly = 12 yearly = 0 ### don't take new snapshots - snapshots on backup ### datasets are replicated in from source, not ### generated locally autosnap = no ### monitor hourlies and dailies, but don't warn or ### crit until they're over 48h old, since replication ### is typically daily only hourly_warn = 2880 hourly_crit = 3600 daily_warn = 48 daily_crit = 60 [template_hotspare] autoprune = yes frequently = 0 hourly = 30 daily = 90 monthly = 3 yearly = 0 ### don't take new snapshots - snapshots on backup ### datasets are replicated in from source, not ### generated locally autosnap = no ### monitor hourlies and dailies, but don't warn or ### crit until they're over 4h old, since replication ### is typically hourly only hourly_warn = 4h hourly_crit = 6h daily_warn = 2d daily_crit = 4d [template_scripts] ### dataset and snapshot name will be supplied as environment variables ### for all pre/post/prune scripts ($SANOID_TARGET, $SANOID_SNAPNAME) ### run script before snapshot pre_snapshot_script = /path/to/script.sh ### run script after snapshot post_snapshot_script = /path/to/script.sh ### run script after pruning snapshot pruning_script = /path/to/script.sh ### don't take an inconsistent snapshot (skip if pre script fails) #no_inconsistent_snapshot = yes ### run post_snapshot_script when pre_snapshot_script is failing #force_post_snapshot_script = yes ### limit allowed execution time of scripts before continuing (<= 0: infinite) script_timeout = 5 [template_ignore] autoprune = no autosnap = no monitor = no sanoid-2.0.3/sanoid.defaults.conf000066400000000000000000000106211355716220600167320ustar00rootroot00000000000000################################################################################### # default template - DO NOT EDIT THIS FILE DIRECTLY. # # If you wish to override default values, you can create your # # own [template_default] in /etc/sanoid/sanoid.conf. # # # # you have been warned. # ################################################################################### [version] version = 2 [template_default] # these settings don't make sense in a template, but we use the defaults file # as our list of allowable settings also, so they need to be present here even if # unset. path = recursive = use_template = process_children_only = skip_children = pre_snapshot_script = post_snapshot_script = pruning_script = script_timeout = 5 no_inconsistent_snapshot = force_post_snapshot_script = # for snapshots shorter than one hour, the period duration must be defined # in minutes. Because they are executed within a full hour, the selected # value should divide 60 minutes without remainder so taken snapshots # are apart in equal intervals. Values larger than 59 aren't practical # as only one snapshot will be taken on each full hour in this case. # examples: # frequent_period = 15 -> four snapshot each hour 15 minutes apart # frequent_period = 5 -> twelve snapshots each hour 5 minutes apart # frequent_period = 45 -> two snapshots each hour with different time gaps # between them: 45 minutes and 15 minutes in this case frequent_period = 15 # If any snapshot type is set to 0, we will not take snapshots for it - and will immediately # prune any of those type snapshots already present. # # Otherwise, if autoprune is set, we will prune any snapshots of that type which are older # than (setting * periodicity) - so if daily = 90, we'll prune any dailies older than 90 days. autoprune = yes frequently = 0 hourly = 48 daily = 90 weekly = 0 monthly = 6 yearly = 0 # pruning can be skipped based on the used capacity of the pool # (0: always prune, 1-100: only prune if used capacity is greater than this value) prune_defer = 0 # We will automatically take snapshots if autosnap is on, at the desired times configured # below (or immediately, if we don't have one since the last preferred time for that type). # # Note that we will not take snapshots for a given type if that type is set to 0 above, # regardless of the autosnap setting - for example, if yearly=0 we will not take yearlies # even if we've defined a preferred time for yearlies and autosnap is on. autosnap = 1 # hourly - top of the hour hourly_min = 0 # daily - at 23:59 (most people expect a daily to contain everything done DURING that day) daily_hour = 23 daily_min = 59 # weekly -at 23:30 each Monday weekly_wday = 1 weekly_hour = 23 weekly_min = 30 # monthly - immediately at the beginning of the month (ie 00:00 of day 1) monthly_mday = 1 monthly_hour = 0 monthly_min = 0 # yearly - immediately at the beginning of the year (ie 00:00 on Jan 1) yearly_mon = 1 yearly_mday = 1 yearly_hour = 0 yearly_min = 0 # monitoring plugin - define warn / crit levels for each snapshot type by age, in units of one period down # example hourly_warn = 90 means issue WARNING if most recent hourly snapshot is not less than 90 minutes old, # daily_crit = 36 means issue CRITICAL if most recent daily snapshot is not less than 36 hours old, # monthly_warn = 5 means issue WARNING if most recent monthly snapshot is not less than 5 weeks old... etc. # the following time case insensitive suffixes can also be used: # y = years, w = weeks, d = days, h = hours, m = minutes, s = seconds # # monitor_dont_warn = yes will cause the monitoring service to report warnings as text, but with status OK. # monitor_dont_crit = yes will cause the monitoring service to report criticals as text, but with status OK. # # setting any value to 0 will keep the monitoring service from monitoring that snapshot type on that section at all. monitor = yes monitor_dont_warn = no monitor_dont_crit = no frequently_warn = 0 frequently_crit = 0 hourly_warn = 90m hourly_crit = 360m daily_warn = 28h daily_crit = 32h weekly_warn = 0 weekly_crit = 0 monthly_warn = 32d monthly_crit = 40d yearly_warn = 0 yearly_crit = 0 # default limits for capacity checks (if set to 0, limit will not be checked) capacity_warn = 80 capacity_crit = 95 sanoid-2.0.3/sleepymutex000077500000000000000000000004761355716220600153210ustar00rootroot00000000000000#!/bin/bash # this is just a cheap way to trigger mutex-based checks for process activity. # # ie ./sleepymutex zfs receive data/lolz if you want a mutex hanging around # as long as necessary that will show up to any routine that actively does # something like "ps axo | grep 'zfs receive'" or whatever. sleep 99999 sanoid-2.0.3/syncoid000077500000000000000000001756371355716220600144210ustar00rootroot00000000000000#!/usr/bin/perl # this software is licensed for use under the Free Software Foundation's GPL v3.0 license, as retrieved # from http://www.gnu.org/licenses/gpl-3.0.html on 2014-11-17. A copy should also be available in this # project's Git repository at https://github.com/jimsalterjrs/sanoid/blob/master/LICENSE. $::VERSION = '2.0.3'; use strict; use warnings; use Data::Dumper; use Getopt::Long qw(:config auto_version auto_help); use Pod::Usage; use Time::Local; use Sys::Hostname; use Capture::Tiny ':all'; my $mbuffer_size = "16M"; # Blank defaults to use ssh client's default # TODO: Merge into a single "sshflags" option? my %args = ('sshkey' => '', 'sshport' => '', 'sshcipher' => '', 'sshoption' => [], 'target-bwlimit' => '', 'source-bwlimit' => ''); GetOptions(\%args, "no-command-checks", "monitor-version", "compress=s", "dumpsnaps", "recursive|r", "sendoptions=s", "recvoptions=s", "source-bwlimit=s", "target-bwlimit=s", "sshkey=s", "sshport=i", "sshcipher|c=s", "sshoption|o=s@", "debug", "quiet", "no-stream", "no-sync-snap", "no-resume", "exclude=s@", "skip-parent", "identifier=s", "no-clone-handling", "no-privilege-elevation", "force-delete", "no-clone-rollback", "no-rollback", "create-bookmark", "mbuffer-size=s" => \$mbuffer_size) or pod2usage(2); my %compressargs = %{compressargset($args{'compress'} || 'default')}; # Can't be done with GetOptions arg, as default still needs to be set my @sendoptions = (); if (length $args{'sendoptions'}) { @sendoptions = parsespecialoptions($args{'sendoptions'}); if (! defined($sendoptions[0])) { warn "invalid send options!"; pod2usage(2); exit 127; } } my @recvoptions = (); if (length $args{'recvoptions'}) { @recvoptions = parsespecialoptions($args{'recvoptions'}); if (! defined($recvoptions[0])) { warn "invalid receive options!"; pod2usage(2); exit 127; } } # TODO Expand to accept multiple sources? if (scalar(@ARGV) != 2) { print("Source or target not found!\n"); pod2usage(2); exit 127; } else { $args{'source'} = $ARGV[0]; $args{'target'} = $ARGV[1]; } # Could possibly merge these into an options function if (length $args{'source-bwlimit'}) { $args{'source-bwlimit'} = "-R $args{'source-bwlimit'}"; } if (length $args{'target-bwlimit'}) { $args{'target-bwlimit'} = "-r $args{'target-bwlimit'}"; } $args{'streamarg'} = (defined $args{'no-stream'} ? '-i' : '-I'); my $rawsourcefs = $args{'source'}; my $rawtargetfs = $args{'target'}; my $debug = $args{'debug'}; my $quiet = $args{'quiet'}; my $resume = !$args{'no-resume'}; # for compatibility reasons, older versions used hardcoded command paths $ENV{'PATH'} = $ENV{'PATH'} . ":/bin:/usr/bin:/sbin"; my $zfscmd = 'zfs'; my $zpoolcmd = 'zpool'; my $sshcmd = 'ssh'; my $pscmd = 'ps'; my $pvcmd = 'pv'; my $mbuffercmd = 'mbuffer'; my $sudocmd = 'sudo'; my $mbufferoptions = "-q -s 128k -m $mbuffer_size 2>/dev/null"; # currently using POSIX compatible command to check for program existence because we aren't depending on perl # being present on remote machines. my $checkcmd = 'command -v'; if (length $args{'sshcipher'}) { $args{'sshcipher'} = "-c $args{'sshcipher'}"; } if (length $args{'sshport'}) { $args{'sshport'} = "-p $args{'sshport'}"; } if (length $args{'sshkey'}) { $args{'sshkey'} = "-i $args{'sshkey'}"; } my $sshoptions = join " ", map { "-o " . $_ } @{$args{'sshoption'}}; # deref required my $identifier = ""; if (length $args{'identifier'}) { if ($args{'identifier'} !~ /^[a-zA-Z0-9-_:.]+$/) { # invalid extra identifier print("CRITICAL: extra identifier contains invalid chars!\n"); pod2usage(2); exit 127; } $identifier = "$args{'identifier'}_"; } # figure out if source and/or target are remote. $sshcmd = "$sshcmd $args{'sshcipher'} $sshoptions $args{'sshport'} $args{'sshkey'}"; if ($debug) { print "DEBUG: SSHCMD: $sshcmd\n"; } my ($sourcehost,$sourcefs,$sourceisroot) = getssh($rawsourcefs); my ($targethost,$targetfs,$targetisroot) = getssh($rawtargetfs); my $sourcesudocmd = $sourceisroot ? '' : $sudocmd; my $targetsudocmd = $targetisroot ? '' : $sudocmd; # figure out whether compression, mbuffering, pv # are available on source, target, local machines. # warn user of anything missing, then continue with sync. my %avail = checkcommands(); my %snaps; my $exitcode = 0; ## break here to call replication individually so that we ## ## can loop across children separately, for recursive ## ## replication ## if (!defined $args{'recursive'}) { syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef); } else { if ($debug) { print "DEBUG: recursive sync of $sourcefs.\n"; } my @datasets = getchilddatasets($sourcehost, $sourcefs, $sourceisroot); if (!@datasets) { warn "CRITICAL ERROR: no datasets found"; @datasets = (); $exitcode = 2; } my @deferred; foreach my $datasetProperties(@datasets) { my $dataset = $datasetProperties->{'name'}; my $origin = $datasetProperties->{'origin'}; if ($origin eq "-" || defined $args{'no-clone-handling'}) { $origin = undef; } else { # check if clone source is replicated too my @values = split(/@/, $origin, 2); my $srcdataset = $values[0]; my $found = 0; foreach my $datasetProperties(@datasets) { if ($datasetProperties->{'name'} eq $srcdataset) { $found = 1; last; } } if ($found == 0) { # clone source is not replicated, do a full replication $origin = undef; } else { # clone source is replicated, defer until all non clones are replicated push @deferred, $datasetProperties; next; } } $dataset =~ s/\Q$sourcefs\E//; chomp $dataset; my $childsourcefs = $sourcefs . $dataset; my $childtargetfs = $targetfs . $dataset; # print "syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs); \n"; syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin); } # replicate cloned datasets and if this is the initial run, recreate them on the target foreach my $datasetProperties(@deferred) { my $dataset = $datasetProperties->{'name'}; my $origin = $datasetProperties->{'origin'}; $dataset =~ s/\Q$sourcefs\E//; chomp $dataset; my $childsourcefs = $sourcefs . $dataset; my $childtargetfs = $targetfs . $dataset; syncdataset($sourcehost, $childsourcefs, $targethost, $childtargetfs, $origin); } } # close SSH sockets for master connections as applicable if ($sourcehost ne '') { open FH, "$sshcmd $sourcehost -O exit 2>&1 |"; close FH; } if ($targethost ne '') { open FH, "$sshcmd $targethost -O exit 2>&1 |"; close FH; } exit $exitcode; ############################################################################## ############################################################################## ############################################################################## ############################################################################## sub getchilddatasets { my ($rhost,$fs,$isroot,%snaps) = @_; my $mysudocmd; my $fsescaped = escapeshellparam($fs); if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } if ($rhost ne '') { $rhost = "$sshcmd $rhost"; # double escaping needed $fsescaped = escapeshellparam($fsescaped); } my $getchildrencmd = "$rhost $mysudocmd $zfscmd list -o name,origin -t filesystem,volume -Hr $fsescaped |"; if ($debug) { print "DEBUG: getting list of child datasets on $fs using $getchildrencmd...\n"; } if (! open FH, $getchildrencmd) { die "ERROR: list command failed!\n"; } my @children; my $first = 1; DATASETS: while() { chomp; if (defined $args{'skip-parent'} && $first eq 1) { # parent dataset is the first element $first = 0; next; } my ($dataset, $origin) = /^([^\t]+)\t([^\t]+)/; if (defined $args{'exclude'}) { my $excludes = $args{'exclude'}; foreach (@$excludes) { if ($dataset =~ /$_/) { if ($debug) { print "DEBUG: excluded $dataset because of $_\n"; } next DATASETS; } } } my %properties; $properties{'name'} = $dataset; $properties{'origin'} = $origin; push @children, \%properties; } close FH; return @children; } sub syncdataset { my ($sourcehost, $sourcefs, $targethost, $targetfs, $origin, $skipsnapshot) = @_; my $stdout; my $exit; my $sourcefsescaped = escapeshellparam($sourcefs); my $targetfsescaped = escapeshellparam($targetfs); # if no rollbacks are allowed, disable forced receive my $forcedrecv = "-F"; if (defined $args{'no-rollback'}) { $forcedrecv = ""; } if ($debug) { print "DEBUG: syncing source $sourcefs to target $targetfs.\n"; } my $sync = getzfsvalue($sourcehost,$sourcefs,$sourceisroot,'syncoid:sync'); if (!defined $sync) { # zfs already printed the corresponding error if ($exitcode < 2) { $exitcode = 2; } return 0; } if ($sync eq 'true' || $sync eq '-' || $sync eq '') { # empty is handled the same as unset (aka: '-') # definitely sync this dataset - if a host is called 'true' or '-', then you're special } elsif ($sync eq 'false') { if (!$quiet) { print "INFO: Skipping dataset (syncoid:sync=false): $sourcefs...\n"; } return 0; } else { my $hostid = hostname(); my @hosts = split(/,/,$sync); if (!(grep $hostid eq $_, @hosts)) { if (!$quiet) { print "INFO: Skipping dataset (syncoid:sync doesn't include $hostid): $sourcefs...\n"; } return 0; } } # make sure target is not currently in receive. if (iszfsbusy($targethost,$targetfs,$targetisroot)) { warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n"; if ($exitcode < 1) { $exitcode = 1; } return 0; } # does the target filesystem exist yet? my $targetexists = targetexists($targethost,$targetfs,$targetisroot); my $receiveextraargs = ""; my $receivetoken; if ($resume) { # save state of interrupted receive stream $receiveextraargs = "-s"; if ($targetexists) { # check remote dataset for receive resume token (interrupted receive) $receivetoken = getreceivetoken($targethost,$targetfs,$targetisroot); if ($debug && defined($receivetoken)) { print "DEBUG: got receive resume token: $receivetoken: \n"; } } } my $newsyncsnap; # skip snapshot checking/creation in case of resumed receive if (!defined($receivetoken)) { # build hashes of the snaps on the source and target filesystems. %snaps = getsnaps('source',$sourcehost,$sourcefs,$sourceisroot); if ($targetexists) { my %targetsnaps = getsnaps('target',$targethost,$targetfs,$targetisroot); my %sourcesnaps = %snaps; %snaps = (%sourcesnaps, %targetsnaps); } if (defined $args{'dumpsnaps'}) { print "merged snapshot list of $targetfs: \n"; dumphash(\%snaps); print "\n\n\n"; } if (!defined $args{'no-sync-snap'} && !defined $skipsnapshot) { # create a new syncoid snapshot on the source filesystem. $newsyncsnap = newsyncsnap($sourcehost,$sourcefs,$sourceisroot); if (!$newsyncsnap) { # we already whined about the error return 0; } } else { # we don't want sync snapshots created, so use the newest snapshot we can find. $newsyncsnap = getnewestsnapshot($sourcehost,$sourcefs,$sourceisroot); if ($newsyncsnap eq 0) { warn "CRITICAL: no snapshots exist on source $sourcefs, and you asked for --no-sync-snap.\n"; if ($exitcode < 1) { $exitcode = 1; } return 0; } } } my $newsyncsnapescaped = escapeshellparam($newsyncsnap); # there is currently (2014-09-01) a bug in ZFS on Linux # that causes readonly to always show on if it's EVER # been turned on... even when it's off... unless and # until the filesystem is zfs umounted and zfs remounted. # we're going to do the right thing anyway. # dyking this functionality out for the time being due to buggy mount/unmount behavior # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly. #my $originaltargetreadonly; my $sendoptions = getoptionsline(\@sendoptions, ('D','L','P','R','c','e','h','p','v','w')); my $recvoptions = getoptionsline(\@recvoptions, ('h','o','x','u','v')); # sync 'em up. if (! $targetexists) { # do an initial sync from the oldest source snapshot # THEN do an -I to the newest if ($debug) { if (!defined ($args{'no-stream'}) ) { print "DEBUG: target $targetfs does not exist. Finding oldest available snapshot on source $sourcefs ...\n"; } else { print "DEBUG: target $targetfs does not exist, and --no-stream selected. Finding newest available snapshot on source $sourcefs ...\n"; } } my $oldestsnap = getoldestsnapshot(\%snaps); if (! $oldestsnap) { if (defined ($args{'no-sync-snap'}) ) { # we already whined about the missing snapshots return 0; } # getoldestsnapshot() returned false, so use new sync snapshot if ($debug) { print "DEBUG: getoldestsnapshot() returned false, so using $newsyncsnap.\n"; } $oldestsnap = $newsyncsnap; } # if --no-stream is specified, our full needs to be the newest snapshot, not the oldest. if (defined $args{'no-stream'}) { if (defined ($args{'no-sync-snap'}) ) { $oldestsnap = getnewestsnapshot(\%snaps); } else { $oldestsnap = $newsyncsnap; } } my $oldestsnapescaped = escapeshellparam($oldestsnap); my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions $sourcefsescaped\@$oldestsnapescaped"; my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped"; my $pvsize; if (defined $origin) { my $originescaped = escapeshellparam($origin); $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions -i $originescaped $sourcefsescaped\@$oldestsnapescaped"; my $streamargBackup = $args{'streamarg'}; $args{'streamarg'} = "-i"; $pvsize = getsendsize($sourcehost,$origin,"$sourcefs\@$oldestsnap",$sourceisroot); $args{'streamarg'} = $streamargBackup; } else { $pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap",0,$sourceisroot); } my $disp_pvsize = readablebytes($pvsize); if ($pvsize == 0) { $disp_pvsize = 'UNKNOWN'; } my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot); if (!$quiet) { if (defined $origin) { print "INFO: Clone is recreated on target $targetfs based on $origin\n"; } if (!defined ($args{'no-stream'}) ) { print "INFO: Sending oldest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n"; } else { print "INFO: --no-stream selected; sending newest full snapshot $sourcefs\@$oldestsnap (~ $disp_pvsize) to new target filesystem:\n"; } } if ($debug) { print "DEBUG: $synccmd\n"; } # make sure target is (still) not currently in receive. if (iszfsbusy($targethost,$targetfs,$targetisroot)) { warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n"; if ($exitcode < 1) { $exitcode = 1; } return 0; } system($synccmd) == 0 or do { if (defined $origin) { print "INFO: clone creation failed, trying ordinary replication as fallback\n"; syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef, 1); return 0; } warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; }; # now do an -I to the new sync snapshot, assuming there were any snapshots # other than the new sync snapshot to begin with, of course - and that we # aren't invoked with --no-stream, in which case a full of the newest snap # available was all we needed to do if (!defined ($args{'no-stream'}) && ($oldestsnap ne $newsyncsnap) ) { # get current readonly status of target, then set it to on during sync # dyking this functionality out for the time being due to buggy mount/unmount behavior # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly. # $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly'); # setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on'); $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions $args{'streamarg'} $sourcefsescaped\@$oldestsnapescaped $sourcefsescaped\@$newsyncsnapescaped"; $pvsize = getsendsize($sourcehost,"$sourcefs\@$oldestsnap","$sourcefs\@$newsyncsnap",$sourceisroot); $disp_pvsize = readablebytes($pvsize); if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; } $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot); # make sure target is (still) not currently in receive. if (iszfsbusy($targethost,$targetfs,$targetisroot)) { warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n"; if ($exitcode < 1) { $exitcode = 1; } return 0; } if (!$quiet) { print "INFO: Updating new target filesystem with incremental $sourcefs\@$oldestsnap ... $newsyncsnap (~ $disp_pvsize):\n"; } if ($debug) { print "DEBUG: $synccmd\n"; } if ($oldestsnap ne $newsyncsnap) { my $ret = system($synccmd); if ($ret != 0) { warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 1) { $exitcode = 1; } return 0; } } else { if (!$quiet) { print "INFO: no incremental sync needed; $oldestsnap is already the newest available snapshot.\n"; } } # restore original readonly value to target after sync complete # dyking this functionality out for the time being due to buggy mount/unmount behavior # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly. # setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly); } } else { # resume interrupted receive if there is a valid resume $token # and because this will ony resume the receive to the next # snapshot, do a normal sync after that if (defined($receivetoken)) { $sendoptions = getoptionsline(\@sendoptions, ('P','e','v','w')); my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions -t $receivetoken"; my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped 2>&1"; my $pvsize = getsendsize($sourcehost,"","",$sourceisroot,$receivetoken); my $disp_pvsize = readablebytes($pvsize); if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; } my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot); if (!$quiet) { print "Resuming interrupted zfs send/receive from $sourcefs to $targetfs (~ $disp_pvsize remaining):\n"; } if ($debug) { print "DEBUG: $synccmd\n"; } if ($pvsize == 0) { # we need to capture the error of zfs send, this will render pv useless but in this case # it doesn't matter because we don't know the estimated send size (probably because # the initial snapshot used for resumed send doesn't exist anymore) ($stdout, $exit) = tee_stderr { system("$synccmd") }; } else { ($stdout, $exit) = tee_stdout { system("$synccmd") }; } $exit == 0 or do { if ($stdout =~ /\Qused in the initial send no longer exists\E/) { if (!$quiet) { print "WARN: resetting partially receive state because the snapshot source no longer exists\n"; } resetreceivestate($targethost,$targetfs,$targetisroot); # do an normal sync cycle return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, $origin); } else { warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; } }; # a resumed transfer will only be done to the next snapshot, # so do an normal sync cycle return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef); } # find most recent matching snapshot and do an -I # to the new snapshot # get current readonly status of target, then set it to on during sync # dyking this functionality out for the time being due to buggy mount/unmount behavior # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly. # $originaltargetreadonly = getzfsvalue($targethost,$targetfs,$targetisroot,'readonly'); # setzfsvalue($targethost,$targetfs,$targetisroot,'readonly','on'); my $targetsize = getzfsvalue($targethost,$targetfs,$targetisroot,'-p used'); my $bookmark = 0; my $bookmarkcreation = 0; my $matchingsnap = getmatchingsnapshot($sourcefs, $targetfs, \%snaps); if (! $matchingsnap) { # no matching snapshots, check for bookmarks as fallback my %bookmarks = getbookmarks($sourcehost,$sourcefs,$sourceisroot); # check for matching guid of source bookmark and target snapshot (oldest first) foreach my $snap ( sort { $snaps{'target'}{$b}{'creation'}<=>$snaps{'target'}{$a}{'creation'} } keys %{ $snaps{'target'} }) { my $guid = $snaps{'target'}{$snap}{'guid'}; if (defined $bookmarks{$guid}) { # found a match $bookmark = $bookmarks{$guid}{'name'}; $bookmarkcreation = $bookmarks{$guid}{'creation'}; $matchingsnap = $snap; last; } } if (! $bookmark) { if ($args{'force-delete'}) { if (!$quiet) { print "Removing $targetfs because no matching snapshots were found\n"; } my $rcommand = ''; my $mysudocmd = ''; my $targetfsescaped = escapeshellparam($targetfs); if ($targethost ne '') { $rcommand = "$sshcmd $targethost"; } if (!$targetisroot) { $mysudocmd = $sudocmd; } my $prunecmd = "$mysudocmd $zfscmd destroy -r $targetfsescaped; "; if ($targethost ne '') { $prunecmd = escapeshellparam($prunecmd); } my $ret = system("$rcommand $prunecmd"); if ($ret != 0) { warn "WARNING: $rcommand $prunecmd failed: $?"; } else { # redo sync and skip snapshot creation (already taken) return syncdataset($sourcehost, $sourcefs, $targethost, $targetfs, undef, 1); } } # if we got this far, we failed to find a matching snapshot/bookmark. if ($exitcode < 2) { $exitcode = 2; } print "\n"; print "CRITICAL ERROR: Target $targetfs exists but has no snapshots matching with $sourcefs!\n"; print " Replication to target would require destroying existing\n"; print " target. Cowardly refusing to destroy your existing target.\n\n"; # experience tells me we need a mollyguard for people who try to # zfs create targetpool/targetsnap ; syncoid sourcepool/sourcesnap targetpool/targetsnap ... if ( $targetsize < (64*1024*1024) ) { print " NOTE: Target $targetfs dataset is < 64MB used - did you mistakenly run\n"; print " \`zfs create $args{'target'}\` on the target? ZFS initial\n"; print " replication must be to a NON EXISTENT DATASET, which will\n"; print " then be CREATED BY the initial replication process.\n\n"; } # return false now in case more child datasets need replication. return 0; } } # make sure target is (still) not currently in receive. if (iszfsbusy($targethost,$targetfs,$targetisroot)) { warn "Cannot sync now: $targetfs is already target of a zfs receive process.\n"; if ($exitcode < 1) { $exitcode = 1; } return 0; } if ($matchingsnap eq $newsyncsnap) { # barf some text but don't touch the filesystem if (!$quiet) { print "INFO: no snapshots on source newer than $newsyncsnap on target. Nothing to do, not syncing.\n"; } return 0; } else { my $matchingsnapescaped = escapeshellparam($matchingsnap); # rollback target to matchingsnap if (!defined $args{'no-rollback'}) { my $rollbacktype = "-R"; if (defined $args{'no-clone-rollback'}) { $rollbacktype = "-r"; } if ($debug) { print "DEBUG: rolling back target to $targetfs\@$matchingsnap...\n"; } if ($targethost ne '') { if ($debug) { print "$sshcmd $targethost $targetsudocmd $zfscmd rollback $rollbacktype $targetfsescaped\@$matchingsnapescaped\n"; } system ("$sshcmd $targethost " . escapeshellparam("$targetsudocmd $zfscmd rollback $rollbacktype $targetfsescaped\@$matchingsnapescaped")); } else { if ($debug) { print "$targetsudocmd $zfscmd rollback $rollbacktype $targetfsescaped\@$matchingsnapescaped\n"; } system ("$targetsudocmd $zfscmd rollback $rollbacktype $targetfsescaped\@$matchingsnapescaped"); } } my $nextsnapshot = 0; if ($bookmark) { my $bookmarkescaped = escapeshellparam($bookmark); if (!defined $args{'no-stream'}) { # if intermediate snapshots are needed we need to find the next oldest snapshot, # do an replication to it and replicate as always from oldest to newest # because bookmark sends doesn't support intermediates directly foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) { if ($snaps{'source'}{$snap}{'creation'} >= $bookmarkcreation) { $nextsnapshot = $snap; last; } } } # bookmark stream size can't be determined my $pvsize = 0; my $disp_pvsize = "UNKNOWN"; $sendoptions = getoptionsline(\@sendoptions, ('L','c','e','w')); if ($nextsnapshot) { my $nextsnapshotescaped = escapeshellparam($nextsnapshot); my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions -i $sourcefsescaped#$bookmarkescaped $sourcefsescaped\@$nextsnapshotescaped"; my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped 2>&1"; my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot); if (!$quiet) { print "Sending incremental $sourcefs#$bookmarkescaped ... $nextsnapshot (~ $disp_pvsize):\n"; } if ($debug) { print "DEBUG: $synccmd\n"; } ($stdout, $exit) = tee_stdout { system("$synccmd") }; $exit == 0 or do { if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) { if (!$quiet) { print "WARN: resetting partially receive state\n"; } resetreceivestate($targethost,$targetfs,$targetisroot); system("$synccmd") == 0 or do { warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; } } else { warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; } }; $matchingsnap = $nextsnapshot; $matchingsnapescaped = escapeshellparam($matchingsnap); } else { my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions -i $sourcefsescaped#$bookmarkescaped $sourcefsescaped\@$newsyncsnapescaped"; my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped 2>&1"; my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot); if (!$quiet) { print "Sending incremental $sourcefs#$bookmarkescaped ... $newsyncsnap (~ $disp_pvsize):\n"; } if ($debug) { print "DEBUG: $synccmd\n"; } ($stdout, $exit) = tee_stdout { system("$synccmd") }; $exit == 0 or do { if (!$resume && $stdout =~ /\Qcontains partially-complete state\E/) { if (!$quiet) { print "WARN: resetting partially receive state\n"; } resetreceivestate($targethost,$targetfs,$targetisroot); system("$synccmd") == 0 or do { warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; } } else { warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; } }; } } # do a normal replication if bookmarks aren't used or if previous # bookmark replication was only done to the next oldest snapshot if (!$bookmark || $nextsnapshot) { if ($matchingsnap eq $newsyncsnap) { # edge case: bookmark replication used the latest snapshot return 0; } $sendoptions = getoptionsline(\@sendoptions, ('D','L','P','R','c','e','h','p','v','w')); my $sendcmd = "$sourcesudocmd $zfscmd send $sendoptions $args{'streamarg'} $sourcefsescaped\@$matchingsnapescaped $sourcefsescaped\@$newsyncsnapescaped"; my $recvcmd = "$targetsudocmd $zfscmd receive $recvoptions $receiveextraargs $forcedrecv $targetfsescaped 2>&1"; my $pvsize = getsendsize($sourcehost,"$sourcefs\@$matchingsnap","$sourcefs\@$newsyncsnap",$sourceisroot); my $disp_pvsize = readablebytes($pvsize); if ($pvsize == 0) { $disp_pvsize = "UNKNOWN"; } my $synccmd = buildsynccmd($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot); if (!$quiet) { print "Sending incremental $sourcefs\@$matchingsnap ... $newsyncsnap (~ $disp_pvsize):\n"; } if ($debug) { print "DEBUG: $synccmd\n"; } ($stdout, $exit) = tee_stdout { system("$synccmd") }; $exit == 0 or do { # FreeBSD reports "dataset is busy" instead of "contains partially-complete state" if (!$resume && ($stdout =~ /\Qcontains partially-complete state\E/ || $stdout =~ /\Qdataset is busy\E/)) { if (!$quiet) { print "WARN: resetting partially receive state\n"; } resetreceivestate($targethost,$targetfs,$targetisroot); system("$synccmd") == 0 or do { warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; } } else { warn "CRITICAL ERROR: $synccmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; } }; } # restore original readonly value to target after sync complete # dyking this functionality out for the time being due to buggy mount/unmount behavior # with ZFS on Linux (possibly OpenZFS in general) when setting/unsetting readonly. #setzfsvalue($targethost,$targetfs,$targetisroot,'readonly',$originaltargetreadonly); } } if (defined $args{'no-sync-snap'}) { if (defined $args{'create-bookmark'}) { my $bookmarkcmd; if ($sourcehost ne '') { $bookmarkcmd = "$sshcmd $sourcehost " . escapeshellparam("$sourcesudocmd $zfscmd bookmark $sourcefsescaped\@$newsyncsnapescaped $sourcefsescaped\#$newsyncsnapescaped"); } else { $bookmarkcmd = "$sourcesudocmd $zfscmd bookmark $sourcefsescaped\@$newsyncsnapescaped $sourcefsescaped\#$newsyncsnapescaped"; } if ($debug) { print "DEBUG: $bookmarkcmd\n"; } system($bookmarkcmd) == 0 or do { # fallback: assume nameing conflict and try again with guid based suffix my $guid = $snaps{'source'}{$newsyncsnap}{'guid'}; $guid = substr($guid, 0, 6); if (!$quiet) { print "INFO: bookmark creation failed, retrying with guid based suffix ($guid)...\n"; } if ($sourcehost ne '') { $bookmarkcmd = "$sshcmd $sourcehost " . escapeshellparam("$sourcesudocmd $zfscmd bookmark $sourcefsescaped\@$newsyncsnapescaped $sourcefsescaped\#$newsyncsnapescaped$guid"); } else { $bookmarkcmd = "$sourcesudocmd $zfscmd bookmark $sourcefsescaped\@$newsyncsnapescaped $sourcefsescaped\#$newsyncsnapescaped$guid"; } if ($debug) { print "DEBUG: $bookmarkcmd\n"; } system($bookmarkcmd) == 0 or do { warn "CRITICAL ERROR: $bookmarkcmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; } }; } } else { # prune obsolete sync snaps on source and target (only if this run created ones). pruneoldsyncsnaps($sourcehost,$sourcefs,$newsyncsnap,$sourceisroot,keys %{ $snaps{'source'}}); pruneoldsyncsnaps($targethost,$targetfs,$newsyncsnap,$targetisroot,keys %{ $snaps{'target'}}); } } # end syncdataset() sub compressargset { my ($value) = @_; my $DEFAULT_COMPRESSION = 'lzo'; my %COMPRESS_ARGS = ( 'none' => { rawcmd => '', args => '', decomrawcmd => '', decomargs => '', }, 'gzip' => { rawcmd => 'gzip', args => '-3', decomrawcmd => 'zcat', decomargs => '', }, 'pigz-fast' => { rawcmd => 'pigz', args => '-3', decomrawcmd => 'pigz', decomargs => '-dc', }, 'pigz-slow' => { rawcmd => 'pigz', args => '-9', decomrawcmd => 'pigz', decomargs => '-dc', }, 'zstd-fast' => { rawcmd => 'zstd', args => '-3', decomrawcmd => 'zstd', decomargs => '-dc', }, 'zstd-slow' => { rawcmd => 'zstd', args => '-19', decomrawcmd => 'zstd', decomargs => '-dc', }, 'xz' => { rawcmd => 'xz', args => '', decomrawcmd => 'xz', decomargs => '-d', }, 'lzo' => { rawcmd => 'lzop', args => '', decomrawcmd => 'lzop', decomargs => '-dfc', }, 'lz4' => { rawcmd => 'lz4', args => '', decomrawcmd => 'lz4', decomargs => '-dc', }, ); if ($value eq 'default') { $value = $DEFAULT_COMPRESSION; } elsif (!(grep $value eq $_, ('gzip', 'pigz-fast', 'pigz-slow', 'zstd-fast', 'zstd-slow', 'lz4', 'xz', 'lzo', 'default', 'none'))) { warn "Unrecognised compression value $value, defaulting to $DEFAULT_COMPRESSION"; $value = $DEFAULT_COMPRESSION; } my %comargs = %{$COMPRESS_ARGS{$value}}; # copy $comargs{'compress'} = $value; $comargs{'cmd'} = "$comargs{'rawcmd'} $comargs{'args'}"; $comargs{'decomcmd'} = "$comargs{'decomrawcmd'} $comargs{'decomargs'}"; return \%comargs; } sub checkcommands { # make sure compression, mbuffer, and pv are available on # source, target, and local hosts as appropriate. my %avail; my $sourcessh; my $targetssh; # if --nocommandchecks then assume everything's available and return if ($args{'nocommandchecks'}) { if ($debug) { print "DEBUG: not checking for command availability due to --nocommandchecks switch.\n"; } $avail{'compress'} = 1; $avail{'localpv'} = 1; $avail{'localmbuffer'} = 1; $avail{'sourcembuffer'} = 1; $avail{'targetmbuffer'} = 1; $avail{'sourceresume'} = 1; $avail{'targetresume'} = 1; return %avail; } if (!defined $sourcehost) { $sourcehost = ''; } if (!defined $targethost) { $targethost = ''; } if ($sourcehost ne '') { $sourcessh = "$sshcmd $sourcehost"; } else { $sourcessh = ''; } if ($targethost ne '') { $targetssh = "$sshcmd $targethost"; } else { $targetssh = ''; } # if raw compress command is null, we must have specified no compression. otherwise, # make sure that compression is available everywhere we need it if ($compressargs{'compress'} eq 'none') { if ($debug) { print "DEBUG: compression forced off from command line arguments.\n"; } } else { if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on source...\n"; } $avail{'sourcecompress'} = `$sourcessh $checkcmd $compressargs{'rawcmd'} 2>/dev/null`; if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on target...\n"; } $avail{'targetcompress'} = `$targetssh $checkcmd $compressargs{'rawcmd'} 2>/dev/null`; if ($debug) { print "DEBUG: checking availability of $compressargs{'rawcmd'} on local machine...\n"; } $avail{'localcompress'} = `$checkcmd $compressargs{'rawcmd'} 2>/dev/null`; } my ($s,$t); if ($sourcehost eq '') { $s = '[local machine]' } else { $s = $sourcehost; $s =~ s/^\S*\@//; $s = "ssh:$s"; } if ($targethost eq '') { $t = '[local machine]' } else { $t = $targethost; $t =~ s/^\S*\@//; $t = "ssh:$t"; } if (!defined $avail{'sourcecompress'}) { $avail{'sourcecompress'} = ''; } if (!defined $avail{'targetcompress'}) { $avail{'targetcompress'} = ''; } if (!defined $avail{'localcompress'}) { $avail{'localcompress'} = ''; } if (!defined $avail{'sourcembuffer'}) { $avail{'sourcembuffer'} = ''; } if (!defined $avail{'targetmbuffer'}) { $avail{'targetmbuffer'} = ''; } if ($avail{'sourcecompress'} eq '') { if ($compressargs{'rawcmd'} ne '') { print "WARN: $compressargs{'rawcmd'} not available on source $s- sync will continue without compression.\n"; } $avail{'compress'} = 0; } if ($avail{'targetcompress'} eq '') { if ($compressargs{'rawcmd'} ne '') { print "WARN: $compressargs{'rawcmd'} not available on target $t - sync will continue without compression.\n"; } $avail{'compress'} = 0; } if ($avail{'targetcompress'} ne '' && $avail{'sourcecompress'} ne '') { # compression available - unless source and target are both remote, which we'll check # for in the next block and respond to accordingly. $avail{'compress'} = 1; } # corner case - if source AND target are BOTH remote, we have to check for local compress too if ($sourcehost ne '' && $targethost ne '' && $avail{'localcompress'} eq '') { if ($compressargs{'rawcmd'} ne '') { print "WARN: $compressargs{'rawcmd'} not available on local machine - sync will continue without compression.\n"; } $avail{'compress'} = 0; } if ($debug) { print "DEBUG: checking availability of $mbuffercmd on source...\n"; } $avail{'sourcembuffer'} = `$sourcessh $checkcmd $mbuffercmd 2>/dev/null`; if ($avail{'sourcembuffer'} eq '') { if (!$quiet) { print "WARN: $mbuffercmd not available on source $s - sync will continue without source buffering.\n"; } $avail{'sourcembuffer'} = 0; } else { $avail{'sourcembuffer'} = 1; } if ($debug) { print "DEBUG: checking availability of $mbuffercmd on target...\n"; } $avail{'targetmbuffer'} = `$targetssh $checkcmd $mbuffercmd 2>/dev/null`; if ($avail{'targetmbuffer'} eq '') { if (!$quiet) { print "WARN: $mbuffercmd not available on target $t - sync will continue without target buffering.\n"; } $avail{'targetmbuffer'} = 0; } else { $avail{'targetmbuffer'} = 1; } # if we're doing remote source AND remote target, check for local mbuffer as well if ($sourcehost ne '' && $targethost ne '') { if ($debug) { print "DEBUG: checking availability of $mbuffercmd on local machine...\n"; } $avail{'localmbuffer'} = `$checkcmd $mbuffercmd 2>/dev/null`; if ($avail{'localmbuffer'} eq '') { $avail{'localmbuffer'} = 0; if (!$quiet) { print "WARN: $mbuffercmd not available on local machine - sync will continue without local buffering.\n"; } } } if ($debug) { print "DEBUG: checking availability of $pvcmd on local machine...\n"; } $avail{'localpv'} = `$checkcmd $pvcmd 2>/dev/null`; if ($avail{'localpv'} eq '') { if (!$quiet) { print "WARN: $pvcmd not available on local machine - sync will continue without progress bar.\n"; } $avail{'localpv'} = 0; } else { $avail{'localpv'} = 1; } # check for ZFS resume feature support if ($resume) { my @parts = split ('/', $sourcefs); my $srcpool = $parts[0]; @parts = split ('/', $targetfs); my $dstpool = $parts[0]; $srcpool = escapeshellparam($srcpool); $dstpool = escapeshellparam($dstpool); if ($sourcehost ne '') { # double escaping needed $srcpool = escapeshellparam($srcpool); } if ($targethost ne '') { # double escaping needed $dstpool = escapeshellparam($dstpool); } my $resumechkcmd = "$zpoolcmd get -o value -H feature\@extensible_dataset"; if ($debug) { print "DEBUG: checking availability of zfs resume feature on source...\n"; } $avail{'sourceresume'} = system("$sourcessh $resumechkcmd $srcpool 2>/dev/null | grep '\\(active\\|enabled\\)' >/dev/null 2>&1"); $avail{'sourceresume'} = $avail{'sourceresume'} == 0 ? 1 : 0; if ($debug) { print "DEBUG: checking availability of zfs resume feature on target...\n"; } $avail{'targetresume'} = system("$targetssh $resumechkcmd $dstpool 2>/dev/null | grep '\\(active\\|enabled\\)' >/dev/null 2>&1"); $avail{'targetresume'} = $avail{'targetresume'} == 0 ? 1 : 0; if ($avail{'sourceresume'} == 0 || $avail{'targetresume'} == 0) { # disable resume $resume = ''; my @hosts = (); if ($avail{'sourceresume'} == 0) { push @hosts, 'source'; } if ($avail{'targetresume'} == 0) { push @hosts, 'target'; } my $affected = join(" and ", @hosts); print "WARN: ZFS resume feature not available on $affected machine - sync will continue without resume support.\n"; } } else { $avail{'sourceresume'} = 0; $avail{'targetresume'} = 0; } return %avail; } sub iszfsbusy { my ($rhost,$fs,$isroot) = @_; if ($rhost ne '') { $rhost = "$sshcmd $rhost"; } if ($debug) { print "DEBUG: checking to see if $fs on $rhost is already in zfs receive using $rhost $pscmd -Ao args= ...\n"; } open PL, "$rhost $pscmd -Ao args= |"; my @processes = ; close PL; foreach my $process (@processes) { # if ($debug) { print "DEBUG: checking process $process...\n"; } if ($process =~ /zfs *(receive|recv).*\Q$fs\E/) { # there's already a zfs receive process for our target filesystem - return true if ($debug) { print "DEBUG: process $process matches target $fs!\n"; } return 1; } } # no zfs receive processes for our target filesystem found - return false return 0; } sub setzfsvalue { my ($rhost,$fs,$isroot,$property,$value) = @_; my $fsescaped = escapeshellparam($fs); if ($rhost ne '') { $rhost = "$sshcmd $rhost"; # double escaping needed $fsescaped = escapeshellparam($fsescaped); } if ($debug) { print "DEBUG: setting $property to $value on $fs...\n"; } my $mysudocmd; if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } if ($debug) { print "$rhost $mysudocmd $zfscmd set $property=$value $fsescaped\n"; } system("$rhost $mysudocmd $zfscmd set $property=$value $fsescaped") == 0 or warn "WARNING: $rhost $mysudocmd $zfscmd set $property=$value $fsescaped died: $?, proceeding anyway.\n"; return; } sub getzfsvalue { my ($rhost,$fs,$isroot,$property) = @_; my $fsescaped = escapeshellparam($fs); if ($rhost ne '') { $rhost = "$sshcmd $rhost"; # double escaping needed $fsescaped = escapeshellparam($fsescaped); } if ($debug) { print "DEBUG: getting current value of $property on $fs...\n"; } my $mysudocmd; if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } if ($debug) { print "$rhost $mysudocmd $zfscmd get -H $property $fsescaped\n"; } open FH, "$rhost $mysudocmd $zfscmd get -H $property $fsescaped |"; my $value = ; close FH; if (!defined $value) { return undef; } my @values = split(/\t/,$value); $value = $values[2]; return $value; } sub readablebytes { my $bytes = shift; my $disp; if ($bytes > 1024*1024*1024) { $disp = sprintf("%.1f",$bytes/1024/1024/1024) . ' GB'; } elsif ($bytes > 1024*1024) { $disp = sprintf("%.1f",$bytes/1024/1024) . ' MB'; } else { $disp = sprintf("%d",$bytes/1024) . ' KB'; } return $disp; } sub getoldestsnapshot { my $snaps = shift; foreach my $snap ( sort { $snaps{'source'}{$a}{'creation'}<=>$snaps{'source'}{$b}{'creation'} } keys %{ $snaps{'source'} }) { # return on first snap found - it's the oldest return $snap; } # must not have had any snapshots on source - luckily, we already made one, amirite? if (defined ($args{'no-sync-snap'}) ) { # well, actually we set --no-sync-snap, so no we *didn't* already make one. Whoops. warn "CRIT: --no-sync-snap is set, and getoldestsnapshot() could not find any snapshots on source!\n"; } return 0; } sub getnewestsnapshot { my $snaps = shift; foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) { # return on first snap found - it's the newest if (!$quiet) { print "NEWEST SNAPSHOT: $snap\n"; } return $snap; } # must not have had any snapshots on source - looks like we'd better create one! if (defined ($args{'no-sync-snap'}) ) { if (!defined ($args{'recursive'}) ) { # well, actually we set --no-sync-snap and we're not recursive, so no we *can't* make one. Whoops. die "CRIT: --no-sync-snap is set, and getnewestsnapshot() could not find any snapshots on source!\n"; } # fixme: we need to output WHAT the current dataset IS if we encounter this WARN condition. # we also probably need an argument to mute this WARN, for people who deliberately exclude # datasets from recursive replication this way. warn "WARN: --no-sync-snap is set, and getnewestsnapshot() could not find any snapshots on source for current dataset. Continuing.\n"; if ($exitcode < 2) { $exitcode = 2; } } return 0; } sub buildsynccmd { my ($sendcmd,$recvcmd,$pvsize,$sourceisroot,$targetisroot) = @_; # here's where it gets fun: figuring out when to compress and decompress. # to make this work for all possible combinations, you may have to decompress # AND recompress across the pipe viewer. FUN. my $synccmd; if ($sourcehost eq '' && $targethost eq '') { # both sides local. don't compress. do mbuffer, once, on the source side. # $synccmd = "$sendcmd | $mbuffercmd | $pvcmd | $recvcmd"; $synccmd = "$sendcmd |"; # avoid confusion - accept either source-bwlimit or target-bwlimit as the bandwidth limiting option here my $bwlimit = ''; if (length $args{'source-bwlimit'}) { $bwlimit = $args{'source-bwlimit'}; } elsif (length $args{'target-bwlimit'}) { $bwlimit = $args{'target-bwlimit'}; } if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $bwlimit $mbufferoptions |"; } if ($avail{'localpv'} && !$quiet) { $synccmd .= " $pvcmd -s $pvsize |"; } $synccmd .= " $recvcmd"; } elsif ($sourcehost eq '') { # local source, remote target. #$synccmd = "$sendcmd | $pvcmd | $compressargs{'cmd'} | $mbuffercmd | $sshcmd $targethost '$compressargs{'decomcmd'} | $mbuffercmd | $recvcmd'"; $synccmd = "$sendcmd |"; if ($avail{'localpv'} && !$quiet) { $synccmd .= " $pvcmd -s $pvsize |"; } if ($avail{'compress'}) { $synccmd .= " $compressargs{'cmd'} |"; } if ($avail{'sourcembuffer'}) { $synccmd .= " $mbuffercmd $args{'source-bwlimit'} $mbufferoptions |"; } $synccmd .= " $sshcmd $targethost "; my $remotecmd = ""; if ($avail{'targetmbuffer'}) { $remotecmd .= " $mbuffercmd $args{'target-bwlimit'} $mbufferoptions |"; } if ($avail{'compress'}) { $remotecmd .= " $compressargs{'decomcmd'} |"; } $remotecmd .= " $recvcmd"; $synccmd .= escapeshellparam($remotecmd); } elsif ($targethost eq '') { # remote source, local target. #$synccmd = "$sshcmd $sourcehost '$sendcmd | $compressargs{'cmd'} | $mbuffercmd' | $compressargs{'decomcmd'} | $mbuffercmd | $pvcmd | $recvcmd"; my $remotecmd = $sendcmd; if ($avail{'compress'}) { $remotecmd .= " | $compressargs{'cmd'}"; } if ($avail{'sourcembuffer'}) { $remotecmd .= " | $mbuffercmd $args{'source-bwlimit'} $mbufferoptions"; } $synccmd = "$sshcmd $sourcehost " . escapeshellparam($remotecmd); $synccmd .= " | "; if ($avail{'targetmbuffer'}) { $synccmd .= "$mbuffercmd $args{'target-bwlimit'} $mbufferoptions | "; } if ($avail{'compress'}) { $synccmd .= "$compressargs{'decomcmd'} | "; } if ($avail{'localpv'} && !$quiet) { $synccmd .= "$pvcmd -s $pvsize | "; } $synccmd .= "$recvcmd"; } else { #remote source, remote target... weird, but whatever, I'm not here to judge you. #$synccmd = "$sshcmd $sourcehost '$sendcmd | $compressargs{'cmd'} | $mbuffercmd' | $compressargs{'decomcmd'} | $pvcmd | $compressargs{'cmd'} | $mbuffercmd | $sshcmd $targethost '$compressargs{'decomcmd'} | $mbuffercmd | $recvcmd'"; my $remotecmd = $sendcmd; if ($avail{'compress'}) { $remotecmd .= " | $compressargs{'cmd'}"; } if ($avail{'sourcembuffer'}) { $remotecmd .= " | $mbuffercmd $args{'source-bwlimit'} $mbufferoptions"; } $synccmd = "$sshcmd $sourcehost " . escapeshellparam($remotecmd); $synccmd .= " | "; if ($avail{'compress'}) { $synccmd .= "$compressargs{'decomcmd'} | "; } if ($avail{'localpv'} && !$quiet) { $synccmd .= "$pvcmd -s $pvsize | "; } if ($avail{'compress'}) { $synccmd .= "$compressargs{'cmd'} | "; } if ($avail{'localmbuffer'}) { $synccmd .= "$mbuffercmd $mbufferoptions | "; } $synccmd .= "$sshcmd $targethost "; $remotecmd = ""; if ($avail{'targetmbuffer'}) { $remotecmd .= " $mbuffercmd $args{'target-bwlimit'} $mbufferoptions |"; } if ($avail{'compress'}) { $remotecmd .= " $compressargs{'decomcmd'} |"; } $remotecmd .= " $recvcmd"; $synccmd .= escapeshellparam($remotecmd); } return $synccmd; } sub pruneoldsyncsnaps { my ($rhost,$fs,$newsyncsnap,$isroot,@snaps) = @_; my $fsescaped = escapeshellparam($fs); if ($rhost ne '') { $rhost = "$sshcmd $rhost"; } my $hostid = hostname(); my $mysudocmd; if ($isroot) { $mysudocmd=''; } else { $mysudocmd = $sudocmd; } my @prunesnaps; # only prune snaps beginning with syncoid and our own hostname foreach my $snap(@snaps) { if ($snap =~ /^syncoid_\Q$identifier$hostid\E/) { # no matter what, we categorically refuse to # prune the new sync snap we created for this run if ($snap ne $newsyncsnap) { push (@prunesnaps,$snap); } } } # concatenate pruning commands to ten per line, to cut down # auth times for any remote hosts that must be operated via SSH my $counter; my $maxsnapspercmd = 10; my $prunecmd; foreach my $snap(@prunesnaps) { $counter ++; $prunecmd .= "$mysudocmd $zfscmd destroy $fsescaped\@$snap; "; if ($counter > $maxsnapspercmd) { $prunecmd =~ s/\; $//; if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; } if ($debug) { print "DEBUG: $rhost $prunecmd\n"; } if ($rhost ne '') { $prunecmd = escapeshellparam($prunecmd); } system("$rhost $prunecmd") == 0 or warn "WARNING: $rhost $prunecmd failed: $?"; $prunecmd = ''; $counter = 0; } } # if we still have some prune commands stacked up after finishing # the loop, commit 'em now if ($counter) { $prunecmd =~ s/\; $//; if ($debug) { print "DEBUG: pruning up to $maxsnapspercmd obsolete sync snapshots...\n"; } if ($debug) { print "DEBUG: $rhost $prunecmd\n"; } if ($rhost ne '') { $prunecmd = escapeshellparam($prunecmd); } system("$rhost $prunecmd") == 0 or warn "WARNING: $rhost $prunecmd failed: $?"; } return; } sub getmatchingsnapshot { my ($sourcefs, $targetfs, $snaps) = @_; foreach my $snap ( sort { $snaps{'source'}{$b}{'creation'}<=>$snaps{'source'}{$a}{'creation'} } keys %{ $snaps{'source'} }) { if (defined $snaps{'target'}{$snap}) { if ($snaps{'source'}{$snap}{'guid'} == $snaps{'target'}{$snap}{'guid'}) { return $snap; } } } return 0; } sub newsyncsnap { my ($rhost,$fs,$isroot) = @_; my $fsescaped = escapeshellparam($fs); if ($rhost ne '') { $rhost = "$sshcmd $rhost"; # double escaping needed $fsescaped = escapeshellparam($fsescaped); } my $mysudocmd; if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } my $hostid = hostname(); my %date = getdate(); my $snapname = "syncoid\_$identifier$hostid\_$date{'stamp'}"; my $snapcmd = "$rhost $mysudocmd $zfscmd snapshot $fsescaped\@$snapname\n"; if ($debug) { print "DEBUG: creating sync snapshot using \"$snapcmd\"...\n"; } system($snapcmd) == 0 or do { warn "CRITICAL ERROR: $snapcmd failed: $?"; if ($exitcode < 2) { $exitcode = 2; } return 0; }; return $snapname; } sub targetexists { my ($rhost,$fs,$isroot) = @_; my $fsescaped = escapeshellparam($fs); if ($rhost ne '') { $rhost = "$sshcmd $rhost"; # double escaping needed $fsescaped = escapeshellparam($fsescaped); } my $mysudocmd; if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } my $checktargetcmd = "$rhost $mysudocmd $zfscmd get -H name $fsescaped"; if ($debug) { print "DEBUG: checking to see if target filesystem exists using \"$checktargetcmd 2>&1 |\"...\n"; } open FH, "$checktargetcmd 2>&1 |"; my $targetexists = ; close FH; my $exit = $?; $targetexists = ( $targetexists =~ /^\Q$fs\E/ && $exit == 0 ); return $targetexists; } sub getssh { my $fs = shift; my $rhost; my $isroot; my $socket; # if we got passed something with an @ in it, we assume it's an ssh connection, eg root@myotherbox if ($fs =~ /\@/) { $rhost = $fs; $fs =~ s/^\S*\@\S*://; $rhost =~ s/:\Q$fs\E$//; my $remoteuser = $rhost; $remoteuser =~ s/\@.*$//; if ($remoteuser eq 'root' || $args{'no-privilege-elevation'}) { $isroot = 1; } else { $isroot = 0; } # now we need to establish a persistent master SSH connection $socket = "/tmp/syncoid-$remoteuser-$rhost-" . time(); open FH, "$sshcmd -M -S $socket -o ControlPersist=1m $args{'sshport'} $rhost exit |"; close FH; system("$sshcmd -S $socket $rhost echo -n") == 0 or do { my $code = $? >> 8; warn "CRITICAL ERROR: ssh connection echo test failed for $rhost with exit code $code"; exit(2); }; $rhost = "-S $socket $rhost"; } else { my $localuid = $<; if ($localuid == 0 || $args{'no-privilege-elevation'}) { $isroot = 1; } else { $isroot = 0; } } # if ($isroot) { print "this user is root.\n"; } else { print "this user is not root.\n"; } return ($rhost,$fs,$isroot); } sub dumphash() { my $hash = shift; $Data::Dumper::Sortkeys = 1; print Dumper($hash); } sub getsnaps() { my ($type,$rhost,$fs,$isroot,%snaps) = @_; my $mysudocmd; my $fsescaped = escapeshellparam($fs); if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } if ($rhost ne '') { $rhost = "$sshcmd $rhost"; # double escaping needed $fsescaped = escapeshellparam($fsescaped); } my $getsnapcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t snapshot guid,creation $fsescaped |"; if ($debug) { print "DEBUG: getting list of snapshots on $fs using $getsnapcmd...\n"; } open FH, $getsnapcmd; my @rawsnaps = ; close FH or die "CRITICAL ERROR: snapshots couldn't be listed for $fs (exit code $?)"; # this is a little obnoxious. get guid,creation returns guid,creation on two separate lines # as though each were an entirely separate get command. my %creationtimes=(); foreach my $line (@rawsnaps) { # only import snap guids from the specified filesystem if ($line =~ /\Q$fs\E\@.*guid/) { chomp $line; my $guid = $line; $guid =~ s/^.*\tguid\t*(\d*).*/$1/; my $snap = $line; $snap =~ s/^.*\@(.*)\tguid.*$/$1/; $snaps{$type}{$snap}{'guid'}=$guid; } } foreach my $line (@rawsnaps) { # only import snap creations from the specified filesystem if ($line =~ /\Q$fs\E\@.*creation/) { chomp $line; my $creation = $line; $creation =~ s/^.*\tcreation\t*(\d*).*/$1/; my $snap = $line; $snap =~ s/^.*\@(.*)\tcreation.*$/$1/; # the accuracy of the creation timestamp is only for a second, but # snapshots in the same second are highly likely. The list command # has an ordered output so we append another three digit running number # to the creation timestamp and make sure those are ordered correctly # for snapshot with the same creation timestamp my $counter = 0; my $creationsuffix; while ($counter < 999) { $creationsuffix = sprintf("%s%03d", $creation, $counter); if (!defined $creationtimes{$creationsuffix}) { $creationtimes{$creationsuffix} = 1; last; } $counter += 1; } $snaps{$type}{$snap}{'creation'}=$creationsuffix; } } return %snaps; } sub getbookmarks() { my ($rhost,$fs,$isroot,%bookmarks) = @_; my $mysudocmd; my $fsescaped = escapeshellparam($fs); if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } if ($rhost ne '') { $rhost = "$sshcmd $rhost"; # double escaping needed $fsescaped = escapeshellparam($fsescaped); } my $error = 0; my $getbookmarkcmd = "$rhost $mysudocmd $zfscmd get -Hpd 1 -t bookmark guid,creation $fsescaped 2>&1 |"; if ($debug) { print "DEBUG: getting list of bookmarks on $fs using $getbookmarkcmd...\n"; } open FH, $getbookmarkcmd; my @rawbookmarks = ; close FH or $error = 1; if ($error == 1) { if ($rawbookmarks[0] =~ /invalid type/ or $rawbookmarks[0] =~ /operation not applicable to datasets of this type/) { # no support for zfs bookmarks, return empty hash return %bookmarks; } die "CRITICAL ERROR: bookmarks couldn't be listed for $fs (exit code $?)"; } # this is a little obnoxious. get guid,creation returns guid,creation on two separate lines # as though each were an entirely separate get command. my $lastguid; foreach my $line (@rawbookmarks) { # only import bookmark guids, creation from the specified filesystem if ($line =~ /\Q$fs\E\#.*guid/) { chomp $line; $lastguid = $line; $lastguid =~ s/^.*\tguid\t*(\d*).*/$1/; my $bookmark = $line; $bookmark =~ s/^.*\#(.*)\tguid.*$/$1/; $bookmarks{$lastguid}{'name'}=$bookmark; } elsif ($line =~ /\Q$fs\E\#.*creation/) { chomp $line; my $creation = $line; $creation =~ s/^.*\tcreation\t*(\d*).*/$1/; my $bookmark = $line; $bookmark =~ s/^.*\#(.*)\tcreation.*$/$1/; $bookmarks{$lastguid}{'creation'}=$creation; } } return %bookmarks; } sub getsendsize { my ($sourcehost,$snap1,$snap2,$isroot,$receivetoken) = @_; my $snap1escaped = escapeshellparam($snap1); my $snap2escaped = escapeshellparam($snap2); my $mysudocmd; if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } my $sourcessh; if ($sourcehost ne '') { $sourcessh = "$sshcmd $sourcehost"; $snap1escaped = escapeshellparam($snap1escaped); $snap2escaped = escapeshellparam($snap2escaped); } else { $sourcessh = ''; } my $snaps; if ($snap2) { # if we got a $snap2 argument, we want an incremental send estimate from $snap1 to $snap2. $snaps = "$args{'streamarg'} $snap1escaped $snap2escaped"; } else { # if we didn't get a $snap2 arg, we want a full send estimate for $snap1. $snaps = "$snap1escaped"; } # in case of a resumed receive, get the remaining # size based on the resume token if (defined($receivetoken)) { $snaps = "-t $receivetoken"; } my $sendoptions; if (defined($receivetoken)) { $sendoptions = getoptionsline(\@sendoptions, ('e')); } else { $sendoptions = getoptionsline(\@sendoptions, ('D','L','R','c','e','h','p','v','w')); } my $getsendsizecmd = "$sourcessh $mysudocmd $zfscmd send $sendoptions -nP $snaps"; if ($debug) { print "DEBUG: getting estimated transfer size from source $sourcehost using \"$getsendsizecmd 2>&1 |\"...\n"; } open FH, "$getsendsizecmd 2>&1 |"; my @rawsize = ; close FH; my $exit = $?; # process sendsize: last line of multi-line output is # size of proposed xfer in bytes, but we need to remove # human-readable crap from it my $sendsize = pop(@rawsize); # the output format is different in case of # a resumed receive if (defined($receivetoken)) { $sendsize =~ s/.*\t([0-9]+)$/$1/; } else { $sendsize =~ s/^size\t*//; } chomp $sendsize; # check for valid value if ($sendsize !~ /^\d+$/) { $sendsize = ''; } # to avoid confusion with a zero size pv, give sendsize # a minimum 4K value - or if empty, make sure it reads UNKNOWN if ($debug) { print "DEBUG: sendsize = $sendsize\n"; } if ($sendsize eq '' || $exit != 0) { $sendsize = '0'; } elsif ($sendsize < 4096) { $sendsize = 4096; } return $sendsize; } sub getdate { my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time); $year += 1900; my %date; $date{'unix'} = (((((((($year - 1971) * 365) + $yday) * 24) + $hour) * 60) + $min) * 60) + $sec; $date{'year'} = $year; $date{'sec'} = sprintf ("%02u", $sec); $date{'min'} = sprintf ("%02u", $min); $date{'hour'} = sprintf ("%02u", $hour); $date{'mday'} = sprintf ("%02u", $mday); $date{'mon'} = sprintf ("%02u", ($mon + 1)); $date{'stamp'} = "$date{'year'}-$date{'mon'}-$date{'mday'}:$date{'hour'}:$date{'min'}:$date{'sec'}"; return %date; } sub escapeshellparam { my ($par) = @_; # avoid use of uninitialized string in regex if (length($par)) { # "escape" all single quotes $par =~ s/'/'"'"'/g; } else { # avoid use of uninitialized string in concatenation below $par = ''; } # single-quote entire string return "'$par'"; } sub getreceivetoken() { my ($rhost,$fs,$isroot) = @_; my $token = getzfsvalue($rhost,$fs,$isroot,"receive_resume_token"); if (defined $token && $token ne '-' && $token ne '') { return $token; } if ($debug) { print "DEBUG: no receive token found \n"; } return } sub parsespecialoptions { my ($line) = @_; my @options = (); my @values = split(/ /, $line); my $optionValue = 0; my $lastOption; foreach my $value (@values) { if ($optionValue ne 0) { my %item = ( "option" => $lastOption, "line" => "-$lastOption $value", ); push @options, \%item; $optionValue = 0; next; } for my $char (split //, $value) { if ($optionValue ne 0) { return undef; } if ($char eq 'o' || $char eq 'x') { $lastOption = $char; $optionValue = 1; } else { my %item = ( "option" => $char, "line" => "-$char", ); push @options, \%item; } } } return @options; } sub getoptionsline { my ($options_ref, @allowed) = @_; my $line = ''; foreach my $value (@{ $options_ref }) { if (@allowed) { if (!grep( /^$$value{'option'}$/, @allowed) ) { next; } } $line = "$line$$value{'line'} "; } return $line; } sub resetreceivestate { my ($rhost,$fs,$isroot) = @_; my $fsescaped = escapeshellparam($fs); if ($rhost ne '') { $rhost = "$sshcmd $rhost"; # double escaping needed $fsescaped = escapeshellparam($fsescaped); } if ($debug) { print "DEBUG: reset partial receive state of $fs...\n"; } my $mysudocmd; if ($isroot) { $mysudocmd = ''; } else { $mysudocmd = $sudocmd; } my $resetcmd = "$rhost $mysudocmd $zfscmd receive -A $fsescaped"; if ($debug) { print "$resetcmd\n"; } system("$resetcmd") == 0 or die "CRITICAL ERROR: $resetcmd failed: $?"; } __END__ =head1 NAME syncoid - ZFS snapshot replication tool =head1 SYNOPSIS syncoid [options]... SOURCE TARGET or syncoid [options]... SOURCE USER@HOST:TARGET or syncoid [options]... USER@HOST:SOURCE TARGET or syncoid [options]... USER@HOST:SOURCE USER@HOST:TARGET SOURCE Source ZFS dataset. Can be either local or remote TARGET Target ZFS dataset. Can be either local or remote Options: --compress=FORMAT Compresses data during transfer. Currently accepted options are gzip, pigz-fast, pigz-slow, zstd-fast, zstd-slow, lz4, xz, lzo (default) & none --identifier=EXTRA Extra identifier which is included in the snapshot name. Can be used for replicating to multiple targets. --recursive|r Also transfers child datasets --skip-parent Skips syncing of the parent dataset. Does nothing without '--recursive' option. --source-bwlimit= Bandwidth limit in bytes/kbytes/etc per second on the source transfer --target-bwlimit= Bandwidth limit in bytes/kbytes/etc per second on the target transfer --mbuffer-size=VALUE Specify the mbuffer size (default: 16M), please refer to mbuffer(1) manual page. --no-stream Replicates using newest snapshot instead of intermediates --no-sync-snap Does not create new snapshot, only transfers existing --create-bookmark Creates a zfs bookmark for the newest snapshot on the source after replication succeeds (only works with --no-sync-snap) --no-clone-rollback Does not rollback clones on target --no-rollback Does not rollback clones or snapshots on target (it probably requires a readonly target) --exclude=REGEX Exclude specific datasets which match the given regular expression. Can be specified multiple times --sendoptions=OPTIONS Use advanced options for zfs send (the arguments are filterd as needed), e.g. syncoid --sendoptions="Lc e" sets zfs send -L -c -e ... --recvoptions=OPTIONS Use advanced options for zfs receive (the arguments are filterd as needed), e.g. syncoid --recvoptions="ux recordsize o compression=lz4" sets zfs receive -u -x recordsize -o compression=lz4 ... --sshkey=FILE Specifies a ssh key to use to connect --sshport=PORT Connects to remote on a particular port --sshcipher|c=CIPHER Passes CIPHER to ssh to use a particular cipher set --sshoption|o=OPTION Passes OPTION to ssh for remote usage. Can be specified multiple times --help Prints this helptext --version Prints the version number --debug Prints out a lot of additional information during a syncoid run --monitor-version Currently does nothing --quiet Suppresses non-error output --dumpsnaps Dumps a list of snapshots during the run --no-command-checks Do not check command existence before attempting transfer. Not recommended --no-resume Don't use the ZFS resume feature if available --no-clone-handling Don't try to recreate clones on target --no-privilege-elevation Bypass the root check, for use with ZFS permission delegation --force-delete Remove target datasets recursively, if there are no matching snapshots/bookmarks sanoid-2.0.3/tests/000077500000000000000000000000001355716220600141425ustar00rootroot00000000000000sanoid-2.0.3/tests/1_one_year/000077500000000000000000000000001355716220600161635ustar00rootroot00000000000000sanoid-2.0.3/tests/1_one_year/run.sh000077500000000000000000000020661355716220600173320ustar00rootroot00000000000000#!/bin/bash set -x # this test will take hourly, daily and monthly snapshots # for the whole year of 2017 in the timezone Europe/Vienna # sanoid is run hourly and no snapshots are pruned . ../common/lib.sh POOL_NAME="sanoid-test-1" POOL_TARGET="" # root RESULT="/tmp/sanoid_test_result" RESULT_CHECKSUM="92f2c7afba94b59e8a6f6681705f0aa3f1c61e4aededaa38281e0b7653856935" # UTC timestamp of start and end START="1483225200" END="1514761199" # prepare setup checkEnvironment disableTimeSync # set timezone ln -sf /usr/share/zoneinfo/Europe/Vienna /etc/localtime timestamp=$START mkdir -p "${POOL_TARGET}" truncate -s 5120M "${POOL_TARGET}"/zpool.img zpool create -f "${POOL_NAME}" "${POOL_TARGET}"/zpool.img function cleanUp { zpool export "${POOL_NAME}" } # export pool in any case trap cleanUp EXIT while [ $timestamp -le $END ]; do setdate $timestamp; date; "${SANOID}" --cron --verbose timestamp=$((timestamp+3600)) done saveSnapshotList "${POOL_NAME}" "${RESULT}" # hourly daily monthly verifySnapshotList "${RESULT}" 8760 365 12 "${RESULT_CHECKSUM}" sanoid-2.0.3/tests/1_one_year/sanoid.conf000066400000000000000000000002241355716220600203050ustar00rootroot00000000000000[sanoid-test-1] use_template = production [template_production] hourly = 36 daily = 30 monthly = 3 yearly = 0 autosnap = yes autoprune = no sanoid-2.0.3/tests/2_dst_handling/000077500000000000000000000000001355716220600170215ustar00rootroot00000000000000sanoid-2.0.3/tests/2_dst_handling/run.sh000077500000000000000000000023361355716220600201700ustar00rootroot00000000000000#!/bin/bash set -x # this test will check the behaviour arround a date where DST ends # with hourly, daily and monthly snapshots checked in a 15 minute interval # Daylight saving time 2017 in Europe/Vienna began at 02:00 on Sunday, 26 March # and ended at 03:00 on Sunday, 29 October. All times are in # Central European Time. . ../common/lib.sh POOL_NAME="sanoid-test-2" POOL_TARGET="" # root RESULT="/tmp/sanoid_test_result" RESULT_CHECKSUM="846372ef238f2182b382c77a73ecddf99aa82f28cc9995bcc95592cc78305463" # UTC timestamp of start and end START="1509141600" END="1509400800" # prepare setup checkEnvironment disableTimeSync # set timezone ln -sf /usr/share/zoneinfo/Europe/Vienna /etc/localtime timestamp=$START mkdir -p "${POOL_TARGET}" truncate -s 512M "${POOL_TARGET}"/zpool2.img zpool create -f "${POOL_NAME}" "${POOL_TARGET}"/zpool2.img function cleanUp { zpool export "${POOL_NAME}" } # export pool in any case trap cleanUp EXIT while [ $timestamp -le $END ]; do setdate $timestamp; date; "${SANOID}" --cron --verbose timestamp=$((timestamp+900)) done saveSnapshotList "${POOL_NAME}" "${RESULT}" # hourly daily monthly verifySnapshotList "${RESULT}" 73 3 1 "${RESULT_CHECKSUM}" # one more hour because of DST sanoid-2.0.3/tests/2_dst_handling/sanoid.conf000066400000000000000000000002241355716220600211430ustar00rootroot00000000000000[sanoid-test-2] use_template = production [template_production] hourly = 36 daily = 30 monthly = 3 yearly = 0 autosnap = yes autoprune = no sanoid-2.0.3/tests/common/000077500000000000000000000000001355716220600154325ustar00rootroot00000000000000sanoid-2.0.3/tests/common/lib.sh000066400000000000000000000060171355716220600165400ustar00rootroot00000000000000#!/bin/bash unamestr="$(uname)" function setup { export LANG=C export LANGUAGE=C export LC_ALL=C export SANOID="../../sanoid" # make sure that there is no cache file rm -f /var/cache/sanoidsnapshots.txt # install needed sanoid configuration files [ -f sanoid.conf ] && cp sanoid.conf /etc/sanoid/sanoid.conf cp ../../sanoid.defaults.conf /etc/sanoid/sanoid.defaults.conf } function checkEnvironment { ASK=1 which systemd-detect-virt > /dev/null if [ $? -eq 0 ]; then systemd-detect-virt --vm > /dev/null if [ $? -eq 0 ]; then # we are in a vm ASK=0 fi fi if [ $ASK -eq 1 ]; then set +x echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" echo "you should be running this test in a" echo "dedicated vm, as it will mess with your system!" echo "Are you sure you wan't to continue? (y)" echo "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" set -x read -n 1 c if [ "$c" != "y" ]; then exit 1 fi fi } function disableTimeSync { # disable ntp sync which timedatectl > /dev/null if [ $? -eq 0 ]; then timedatectl set-ntp 0 fi } function saveSnapshotList { POOL_NAME="$1" RESULT="$2" zfs list -t snapshot -o name -Hr "${POOL_NAME}" | sort > "${RESULT}" # clear the seconds for comparing if [ "$unamestr" == 'FreeBSD' ]; then sed -i '' 's/\(autosnap_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]:[0-9][0-9]:\)[0-9][0-9]/\100/g' "${RESULT}" else sed -i 's/\(autosnap_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]_[0-9][0-9]:[0-9][0-9]:\)[0-9][0-9]/\100/g' "${RESULT}" fi } function verifySnapshotList { RESULT="$1" HOURLY_COUNT=$2 DAILY_COUNT=$3 MONTHLY_COUNT=$4 CHECKSUM="$5" failed=0 message="" hourly_count=$(grep -c "autosnap_.*_hourly" < "${RESULT}") daily_count=$(grep -c "autosnap_.*_daily" < "${RESULT}") monthly_count=$(grep -c "autosnap_.*_monthly" < "${RESULT}") if [ "${hourly_count}" -ne "${HOURLY_COUNT}" ]; then failed=1 message="${message}hourly snapshot count is wrong: ${hourly_count}\n" fi if [ "${daily_count}" -ne "${DAILY_COUNT}" ]; then failed=1 message="${message}daily snapshot count is wrong: ${daily_count}\n" fi if [ "${monthly_count}" -ne "${MONTHLY_COUNT}" ]; then failed=1 message="${message}monthly snapshot count is wrong: ${monthly_count}\n" fi checksum=$(shasum -a 256 "${RESULT}" | cut -d' ' -f1) if [ "${checksum}" != "${CHECKSUM}" ]; then failed=1 message="${message}result checksum mismatch\n" fi if [ "${failed}" -eq 0 ]; then exit 0 fi echo "TEST FAILED:" >&2 echo -n -e "${message}" >&2 exit 1 } function setdate { TIMESTAMP="$1" if [ "$unamestr" == 'FreeBSD' ]; then date -u -f '%s' "${TIMESTAMP}" else date --utc --set "@${TIMESTAMP}" fi } sanoid-2.0.3/tests/run-tests.sh000077500000000000000000000007321355716220600164470ustar00rootroot00000000000000#!/bin/bash # run's all the available tests for test in */; do if [ ! -x "${test}/run.sh" ]; then continue fi testName="${test%/}" LOGFILE=/tmp/sanoid_test_run_"${testName}".log pushd . > /dev/null echo -n "Running test ${testName} ... " cd "${test}" echo -n y | bash run.sh > "${LOGFILE}" 2>&1 if [ $? -eq 0 ]; then echo "[PASS]" else echo "[FAILED] (see ${LOGFILE})" fi popd > /dev/null done sanoid-2.0.3/tests/syncoid/000077500000000000000000000000001355716220600156125ustar00rootroot00000000000000sanoid-2.0.3/tests/syncoid/1_bookmark_replication_intermediate/000077500000000000000000000000001355716220600247625ustar00rootroot00000000000000sanoid-2.0.3/tests/syncoid/1_bookmark_replication_intermediate/run.sh000077500000000000000000000026461355716220600261350ustar00rootroot00000000000000#!/bin/bash # test replication with fallback to bookmarks and all intermediate snapshots set -x set -e . ../../common/lib.sh POOL_IMAGE="/tmp/syncoid-test-1.zpool" POOL_SIZE="200M" POOL_NAME="syncoid-test-1" TARGET_CHECKSUM="a23564d5bb8a2babc3ac8936fd82825ad9fff9c82d4924f5924398106bbda9f0 -" truncate -s "${POOL_SIZE}" "${POOL_IMAGE}" zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}" function cleanUp { zpool export "${POOL_NAME}" } # export pool in any case trap cleanUp EXIT zfs create "${POOL_NAME}"/src zfs snapshot "${POOL_NAME}"/src@snap1 zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1 # initial replication ../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst # destroy last common snapshot on source zfs destroy "${POOL_NAME}"/src@snap1 # create intermediate snapshots # sleep is needed so creation time can be used for proper sorting sleep 1 zfs snapshot "${POOL_NAME}"/src@snap2 sleep 1 zfs snapshot "${POOL_NAME}"/src@snap3 sleep 1 zfs snapshot "${POOL_NAME}"/src@snap4 sleep 1 zfs snapshot "${POOL_NAME}"/src@snap5 # replicate which should fallback to bookmarks ../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1 # verify output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}") checksum=$(echo "${output}" | grep -v syncoid_ | shasum -a 256) if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then exit 1 fi exit 0 sanoid-2.0.3/tests/syncoid/2_bookmark_replication_no_intermediate/000077500000000000000000000000001355716220600254575ustar00rootroot00000000000000sanoid-2.0.3/tests/syncoid/2_bookmark_replication_no_intermediate/run.sh000077500000000000000000000026561355716220600266330ustar00rootroot00000000000000#!/bin/bash # test replication with fallback to bookmarks and all intermediate snapshots set -x set -e . ../../common/lib.sh POOL_IMAGE="/tmp/syncoid-test-2.zpool" POOL_SIZE="200M" POOL_NAME="syncoid-test-2" TARGET_CHECKSUM="2460d4d4417793d2c7a5c72cbea4a8a584c0064bf48d8b6daa8ba55076cba66d -" truncate -s "${POOL_SIZE}" "${POOL_IMAGE}" zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}" function cleanUp { zpool export "${POOL_NAME}" } # export pool in any case trap cleanUp EXIT zfs create "${POOL_NAME}"/src zfs snapshot "${POOL_NAME}"/src@snap1 zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1 # initial replication ../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst # destroy last common snapshot on source zfs destroy "${POOL_NAME}"/src@snap1 # create intermediate snapshots # sleep is needed so creation time can be used for proper sorting sleep 1 zfs snapshot "${POOL_NAME}"/src@snap2 sleep 1 zfs snapshot "${POOL_NAME}"/src@snap3 sleep 1 zfs snapshot "${POOL_NAME}"/src@snap4 sleep 1 zfs snapshot "${POOL_NAME}"/src@snap5 # replicate which should fallback to bookmarks ../../../syncoid --no-stream --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1 # verify output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}") checksum=$(echo "${output}" | shasum -a 256) if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then exit 1 fi exit 0 sanoid-2.0.3/tests/syncoid/3_force_delete/000077500000000000000000000000001355716220600204545ustar00rootroot00000000000000sanoid-2.0.3/tests/syncoid/3_force_delete/run.sh000077500000000000000000000022171355716220600216210ustar00rootroot00000000000000#!/bin/bash # test replication with deletion of target if no matches are found set -x set -e . ../../common/lib.sh POOL_IMAGE="/tmp/syncoid-test-3.zpool" POOL_SIZE="200M" POOL_NAME="syncoid-test-3" TARGET_CHECKSUM="0409a2ac216e69971270817189cef7caa91f6306fad9eab1033955b7e7c6bd4c -" truncate -s "${POOL_SIZE}" "${POOL_IMAGE}" zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}" function cleanUp { zpool export "${POOL_NAME}" } # export pool in any case trap cleanUp EXIT zfs create "${POOL_NAME}"/src zfs create "${POOL_NAME}"/src/1 zfs create "${POOL_NAME}"/src/2 zfs create "${POOL_NAME}"/src/3 # initial replication ../../../syncoid -r --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst # destroy last common snapshot on source zfs destroy "${POOL_NAME}"/src/2@% zfs snapshot "${POOL_NAME}"/src/2@test sleep 1 ../../../syncoid -r --force-delete --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1 # verify output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}" | sed 's/@syncoid_.*$'/@syncoid_/) checksum=$(echo "${output}" | shasum -a 256) if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then exit 1 fi exit 0 sanoid-2.0.3/tests/syncoid/4_bookmark_replication_edge_case/000077500000000000000000000000001355716220600242125ustar00rootroot00000000000000sanoid-2.0.3/tests/syncoid/4_bookmark_replication_edge_case/run.sh000077500000000000000000000023251355716220600253570ustar00rootroot00000000000000#!/bin/bash # test replication edge cases with bookmarks set -x set -e . ../../common/lib.sh POOL_IMAGE="/tmp/syncoid-test-4.zpool" POOL_SIZE="200M" POOL_NAME="syncoid-test-4" TARGET_CHECKSUM="ad383b157b01635ddcf13612ac55577ad9c8dcf3fbfc9eb91792e27ec8db739b -" truncate -s "${POOL_SIZE}" "${POOL_IMAGE}" zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}" function cleanUp { zpool export "${POOL_NAME}" } # export pool in any case trap cleanUp EXIT zfs create "${POOL_NAME}"/src zfs snapshot "${POOL_NAME}"/src@snap1 zfs bookmark "${POOL_NAME}"/src@snap1 "${POOL_NAME}"/src#snap1 # initial replication ../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst # destroy last common snapshot on source zfs destroy "${POOL_NAME}"/src@snap1 zfs snapshot "${POOL_NAME}"/src@snap2 # replicate which should fallback to bookmarks and stop because it's already on the latest snapshot ../../../syncoid --no-sync-snap --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst || exit 1 # verify output=$(zfs list -t snapshot -r -H -o name "${POOL_NAME}") checksum=$(echo "${output}" | grep -v syncoid_ | shasum -a 256) if [ "${checksum}" != "${TARGET_CHECKSUM}" ]; then exit 1 fi exit 0 sanoid-2.0.3/tests/syncoid/5_reset_resume_state/000077500000000000000000000000001355716220600217405ustar00rootroot00000000000000sanoid-2.0.3/tests/syncoid/5_reset_resume_state/run.sh000077500000000000000000000022731355716220600231070ustar00rootroot00000000000000#!/bin/bash # test no resume replication with a target containing a partially received replication stream set -x set -e . ../../common/lib.sh POOL_IMAGE="/tmp/syncoid-test-5.zpool" MOUNT_TARGET="/tmp/syncoid-test-5.mount" POOL_SIZE="1000M" POOL_NAME="syncoid-test-5" truncate -s "${POOL_SIZE}" "${POOL_IMAGE}" zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}" function cleanUp { zpool export "${POOL_NAME}" } # export pool in any case trap cleanUp EXIT zfs create -o mountpoint="${MOUNT_TARGET}" "${POOL_NAME}"/src ../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst dd if=/dev/urandom of="${MOUNT_TARGET}"/big_file bs=1M count=200 ../../../syncoid --debug --compress=none --source-bwlimit=2m "${POOL_NAME}"/src "${POOL_NAME}"/dst & syncoid_pid=$! sleep 5 function getcpid() { cpids=$(pgrep -P "$1"|xargs) for cpid in $cpids; do echo "$cpid" getcpid "$cpid" done } kill $(getcpid $$) || true wait sleep 1 ../../../syncoid --debug --compress=none --no-resume "${POOL_NAME}"/src "${POOL_NAME}"/dst | grep "reset partial receive state of syncoid" ../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst exit $? sanoid-2.0.3/tests/syncoid/6_reset_resume_state2/000077500000000000000000000000001355716220600220235ustar00rootroot00000000000000sanoid-2.0.3/tests/syncoid/6_reset_resume_state2/run.sh000077500000000000000000000023701355716220600231700ustar00rootroot00000000000000#!/bin/bash # test resumable replication where the original snapshot doesn't exist anymore set -x set -e . ../../common/lib.sh POOL_IMAGE="/tmp/syncoid-test-6.zpool" MOUNT_TARGET="/tmp/syncoid-test-6.mount" POOL_SIZE="1000M" POOL_NAME="syncoid-test-6" truncate -s "${POOL_SIZE}" "${POOL_IMAGE}" zpool create -m none -f "${POOL_NAME}" "${POOL_IMAGE}" function cleanUp { zpool export "${POOL_NAME}" } # export pool in any case trap cleanUp EXIT zfs create -o mountpoint="${MOUNT_TARGET}" "${POOL_NAME}"/src ../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst dd if=/dev/urandom of="${MOUNT_TARGET}"/big_file bs=1M count=200 zfs snapshot "${POOL_NAME}"/src@big ../../../syncoid --debug --no-sync-snap --compress=none --source-bwlimit=2m "${POOL_NAME}"/src "${POOL_NAME}"/dst & syncoid_pid=$! sleep 5 function getcpid() { cpids=$(pgrep -P "$1"|xargs) for cpid in $cpids; do echo "$cpid" getcpid "$cpid" done } kill $(getcpid $$) || true wait sleep 1 zfs destroy "${POOL_NAME}"/src@big ../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst # | grep "reset partial receive state of syncoid" ../../../syncoid --debug --compress=none "${POOL_NAME}"/src "${POOL_NAME}"/dst exit $? sanoid-2.0.3/tests/syncoid/run-tests.sh000077500000000000000000000007261355716220600201220ustar00rootroot00000000000000#!/bin/bash # run's all the available tests for test in */; do if [ ! -x "${test}/run.sh" ]; then continue fi testName="${test%/}" LOGFILE=/tmp/syncoid_test_run_"${testName}".log pushd . > /dev/null echo -n "Running test ${testName} ... " cd "${test}" echo | bash run.sh > "${LOGFILE}" 2>&1 if [ $? -eq 0 ]; then echo "[PASS]" else echo "[FAILED] (see ${LOGFILE})" fi popd > /dev/null done